2025-09-29 11:33:37
Partial Parameter Updates for Efficient Distributed Training
Anastasiia Filippova, Angelos Katharopoulos, David Grangier, Ronan Collobert
https://arxiv.org/abs/2509.22418 https:…
End-to-end Training of High-Dimensional Optimal Control with Implicit Hamiltonians via Jacobian-Free Backpropagation
Eric Gelphman, Deepanshu Verma, Nicole Tianjiao Yang, Stanley Osher, Samy Wu Fung
https://arxiv.org/abs/2510.00359
Forward-Forward Autoencoder Architectures for Energy-Efficient Wireless Communications
Daniel Seifert, Onur Günlü, Rafael F. Schaefer
https://arxiv.org/abs/2510.11418
The Enduring Dominance of Deep Neural Networks: A Critical Analysis of the Fundamental Limitations of Quantum Machine Learning and Spiking Neural Networks
Takehiro Ishikawa
https://arxiv.org/abs/2510.08591
Replaced article(s) found for cs.AI. https://arxiv.org/list/cs.AI/new
[5/9]:
- Stochastic Layer-wise Learning: Scalable and Efficient Alternative to Backpropagation
Bojian Yin, Federico Corradi
Spatially-informed transformers: Injecting geostatistical covariance biases into self-attention for spatio-temporal forecasting
Yuri Calleo
https://arxiv.org/abs/2512.17696 https://arxiv.org/pdf/2512.17696 https://arxiv.org/html/2512.17696
arXiv:2512.17696v1 Announce Type: new
Abstract: The modeling of high-dimensional spatio-temporal processes presents a fundamental dichotomy between the probabilistic rigor of classical geostatistics and the flexible, high-capacity representations of deep learning. While Gaussian processes offer theoretical consistency and exact uncertainty quantification, their prohibitive computational scaling renders them impractical for massive sensor networks. Conversely, modern transformer architectures excel at sequence modeling but inherently lack a geometric inductive bias, treating spatial sensors as permutation-invariant tokens without a native understanding of distance. In this work, we propose a spatially-informed transformer, a hybrid architecture that injects a geostatistical inductive bias directly into the self-attention mechanism via a learnable covariance kernel. By formally decomposing the attention structure into a stationary physical prior and a non-stationary data-driven residual, we impose a soft topological constraint that favors spatially proximal interactions while retaining the capacity to model complex dynamics. We demonstrate the phenomenon of "Deep Variography", where the network successfully recovers the true spatial decay parameters of the underlying process end-to-end via backpropagation. Extensive experiments on synthetic Gaussian random fields and real-world traffic benchmarks confirm that our method outperforms state-of-the-art graph neural networks. Furthermore, rigorous statistical validation confirms that the proposed method delivers not only superior predictive accuracy but also well-calibrated probabilistic forecasts, effectively bridging the gap between physics-aware modeling and data-driven learning.
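The abstract describes adding a distance-based covariance prior to the self-attention logits. Below is a minimal sketch, not the authors' implementation, of what such a spatially-biased attention layer could look like, assuming an exponential covariance kernel with a learnable decay and a learnable weight mixing that prior with the standard QK^T term; the class name, kernel form, and parameterization are all illustrative assumptions.

```python
# Sketch of self-attention with a geostatistical bias: attention logits are the
# sum of a stationary, distance-based prior (exponential covariance kernel with
# a learnable decay) and the usual data-driven QK^T term. Kernel choice and
# parameter names are assumptions, not taken from the paper.
import torch
import torch.nn as nn


class SpatiallyBiasedAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # Learnable spatial decay (range) of the stationary prior, trained
        # end-to-end by backpropagation ("Deep Variography" in the abstract).
        self.log_range = nn.Parameter(torch.zeros(1))
        # Learnable weight balancing the physical prior and the residual term.
        self.prior_weight = nn.Parameter(torch.ones(1))

    def forward(self, x: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_sensors, dim) node features
        # coords: (n_sensors, 2) sensor locations
        q, k, v = self.q(x), self.k(x), self.v(x)
        scale = q.shape[-1] ** -0.5
        logits = torch.einsum("bid,bjd->bij", q, k) * scale  # data-driven residual

        # Stationary exponential covariance prior: exp(-d_ij / range).
        dists = torch.cdist(coords, coords)                   # (n, n) pairwise distances
        prior = torch.exp(-dists / self.log_range.exp())      # soft topological constraint
        logits = logits + self.prior_weight * torch.log(prior + 1e-8)

        attn = logits.softmax(dim=-1)
        return torch.einsum("bij,bjd->bid", attn, v)


# Usage: 32 sensors with 2-D coordinates, 16-dim features.
if __name__ == "__main__":
    coords = torch.rand(32, 2)
    x = torch.rand(4, 32, 16)
    out = SpatiallyBiasedAttention(16)(x, coords)
    print(out.shape)  # torch.Size([4, 32, 16])
```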
A Biologically Interpretable Cognitive Architecture for Online Structuring of Episodic Memories into Cognitive Maps
E. A. Dzhivelikian, A. I. Panov
https://arxiv.org/abs/2510.03286
Learning Polynomial Activation Functions for Deep Neural Networks
Linghao Zhang, Jiawang Nie, Tingting Tang
https://arxiv.org/abs/2510.03682 https://arxiv.…
Scaling Equilibrium Propagation to Deeper Neural Network Architectures
Sankar Vinayak E. P., Gopalakrishnan Srinivasan
https://arxiv.org/abs/2509.26003 https://
DelRec: learning delays in recurrent spiking neural networks
Alexandre Queant, Ulysse Rançon, Benoit R Cottereau, Timothée Masquelier
https://arxiv.org/abs/2509.24852 …