
2025-09-17 08:33:10
Łukasiewicz Logic with Actions for Neural Networks training
Ioana Leuştean (University of Bucharest), Bogdan Macovei (University of Bucharest)
https://arxiv.org/abs/2509.13020
An Experimental Exploration of In-Memory Computing for Multi-Layer Perceptrons
Pedro Carrinho, Hamid Moghadaspour, Oscar Ferraz, João Dinis Ferreira, Yann Falevoz, Vitor Silva, Gabriel Falcao
https://arxiv.org/abs/2508.07317
N-BEATS-MOE: N-BEATS with a Mixture-of-Experts Layer for Heterogeneous Time Series Forecasting
Ricardo Matos, Luis Roque, Vitor Cerqueira
https://arxiv.org/abs/2508.07490
Rare dense solutions clusters in asymmetric binary perceptrons -- local entropy via fully lifted RDT
Mihailo Stojnic
https://arxiv.org/abs/2506.19276
evMLP: An Efficient Event-Driven MLP Architecture for Vision
Zhentan Zheng
https://arxiv.org/abs/2507.01927
SBS: Enhancing Parameter-Efficiency of Neural Representations for Neural Networks via Spectral Bias Suppression
Qihu Xie, Yuan Li, Yi Kang
https://arxiv.org/abs/2509.07373
QuKAN: A Quantum Circuit Born Machine approach to Quantum Kolmogorov Arnold Networks
Yannick Werner, Akash Malemath, Mengxi Liu, Vitor Fortes Rey, Nikolaos Palaiodimopoulos, Paul Lukowicz, Maximilian Kiefer-Emmanouilidis
https://arxiv.org/abs/2506.22340
Physics-Informed Neural Networks with Hard Nonlinear Equality and Inequality Constraints
Ashfaq Iftakher, Rahul Golder, M. M. Faruque Hasan
https://arxiv.org/abs/2507.08124 https://arxiv.org/pdf/2507.08124 https://arxiv.org/html/2507.08124
arXiv:2507.08124v1 Announce Type: new
Abstract: Traditional physics-informed neural networks (PINNs) do not guarantee strict constraint satisfaction. This is problematic in engineering systems where minor violations of governing laws can significantly degrade the reliability and consistency of model predictions. In this work, we develop KKT-Hardnet, a PINN architecture that enforces both linear and nonlinear equality and inequality constraints up to machine precision. It leverages a projection onto the feasible region through solving Karush-Kuhn-Tucker (KKT) conditions of a distance minimization problem. Furthermore, we reformulate the nonlinear KKT conditions using log-exponential transformation to construct a general sparse system with only linear and exponential terms, thereby making the projection differentiable. We apply KKT-Hardnet on both test problems and a real-world chemical process simulation. Compared to multilayer perceptrons and PINNs, KKT-Hardnet achieves higher accuracy and strict constraint satisfaction. This approach allows the integration of domain knowledge into machine learning towards reliable hybrid modeling of complex systems.
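The core idea of the abstract — projecting a raw network output onto the feasible region by solving the KKT conditions of a distance-minimization problem — has a simple closed form in the special case of linear equality constraints. The sketch below is an illustrative simplification, not the paper's KKT-Hardnet layer (which also handles nonlinear and inequality constraints via a log-exponential reformulation): minimizing ||y − ŷ||² subject to Ay = b yields y = ŷ − Aᵀ(AAᵀ)⁻¹(Aŷ − b). All names here are hypothetical.

```python
import numpy as np

def project_linear_equality(y_hat, A, b):
    """Project a raw prediction y_hat onto the feasible set {y : A y = b}.

    The KKT conditions of  min_y ||y - y_hat||^2  s.t.  A y = b
    give the closed form  y = y_hat - A^T (A A^T)^{-1} (A y_hat - b),
    so the constraint holds to machine precision after projection.
    """
    residual = A @ y_hat - b                           # constraint violation of the raw output
    correction = A.T @ np.linalg.solve(A @ A.T, residual)
    return y_hat - correction

# Toy usage: enforce that the three output components sum to 1.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
y_hat = np.array([0.2, 0.5, 0.6])                      # raw prediction, sums to 1.3
y = project_linear_equality(y_hat, A, b)               # → [0.1, 0.4, 0.5], sums to 1.0
```

Because the projection is differentiable in ŷ, it can be appended as a final layer so gradients flow through training while every prediction satisfies the constraints exactly, which is the property the abstract emphasizes.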
Physics-Informed Kolmogorov-Arnold Networks for multi-material elasticity problems in electronic packaging
Yanpeng Gong, Yida He, Yue Mei, Xiaoying Zhuang, Fei Qin, Timon Rabczuk
https://arxiv.org/abs/2508.16999
Fourier Feature Networks for High-Fidelity Prediction of Perturbed Optical Fields
Joshua R. Jandrell, Mitchell A. Cox
https://arxiv.org/abs/2508.19751
Replaced article(s) found for math.ST. https://arxiv.org/list/math.ST/new
[1/1]:
- Symmetric Perceptrons, Number Partitioning and Lattices
Neekon Vafa, Vinod Vaikuntanathan
From Atoms to Dynamics: Learning the Committor Without Collective Variables
Sergio Contreras Arredondo, Chenyu Tang, Radu A. Talmazan, Alberto Megías, Cheng Giuseppe Chen, Christophe Chipot
https://arxiv.org/abs/2507.17700
Scientific Machine Learning with Kolmogorov-Arnold Networks
Salah A. Faroughi, Farinaz Mostajeran, Amin Hamed Mashhadzadeh, Shirko Faroughi
https://arxiv.org/abs/2507.22959
Beyond ReLU: Chebyshev-DQN for Enhanced Deep Q-Networks
Saman Yazdannik, Morteza Tayefi, Shamim Sanisales
https://arxiv.org/abs/2508.14536