2025-10-31 00:13:17
💫 Fast frequency reconstruction using Deep Learning for event recognition in ring laser data
#laser
Deep Learning to Identify the Spatio-Temporal Cascading Effects of Train Delays in a High-Density Network
Vu Duc Anh Nguyen, Ziyue Li
https://arxiv.org/abs/2510.09350
I like to comment anonymously in online newspaper columns. If you point out the biases of #AI systems, the comment gets deleted for being too polemical.
The comment was addressing an article about AI in public services and its use in refugee applications 🫣
Learning Polynomial Activation Functions for Deep Neural Networks
Linghao Zhang, Jiawang Nie, Tingting Tang
https://arxiv.org/abs/2510.03682
WiNPA: Wireless Neural Processing Architecture
Sai Xu, Yanan Du
https://arxiv.org/abs/2510.11150 https://arxiv.org/pdf/2510.11150
Accelerating Inference for Multilayer Neural Networks with Quantum Computers
Arthur G. Rattew, Po-Wei Huang, Naixu Guo, Lirandë Pira, Patrick Rebentrost
https://arxiv.org/abs/2510.07195
Slitless Spectroscopy Source Detection Using YOLO Deep Neural Network
Xiaohan Chen, Man I Lam, Yingying Zhou, Hongrui Gu, Jinzhi Lai, Zhou Fan, Jing Li, Xin Zhang, Hao Tian
https://arxiv.org/abs/2510.10922
Architecture Induces Structural Invariant Manifolds of Neural Network Training Dynamics
Jiajie Zhao, Tao Luo, Yaoyu Zhang
https://arxiv.org/abs/2510.09564
Comparative Evaluation of Neural Network Architectures for Generalizable Human Spatial Preference Prediction in Unseen Built Environments
Maral Doctorarastoo, Katherine A. Flanigan, Mario Bergés, Christopher McComb
https://arxiv.org/abs/2510.10954
Development of Deep Neural Network First-Level Hardware Track Trigger for the Belle II Experiment
Y. -X. Liu, T. Koga, H. Bae, Y. Yang, C. Kiesling, F. Meggendorfer, K. Unger, S. Hiesl, T. Forsthofer, A. Ishikawa, Y. Ahn, T. Ferber, I. Haide, G. Heine, C. -L. Hsu, A. Little, H. Nakazawa, M. Neu, L. Reuter, V. Savinov, Y. Unno, J. Yuan, Z. Xu
WavInWav: Time-domain Speech Hiding via Invertible Neural Network
Wei Fan, Kejiang Chen, Xiangkun Wang, Weiming Zhang, Nenghai Yu
https://arxiv.org/abs/2510.02915
Active Control of Turbulent Airfoil Flows Using Adjoint-based Deep Learning
Xuemin Liu, Tom Hickling, Jonathan F. MacArt
https://arxiv.org/abs/2510.07106
Deep learning the sources of MJO predictability: a spectral view of learned features
Lin Yao, Da Yang, James P. C. Duncan, Ashesh Chattopadhyay, Pedram Hassanzadeh, Wahid Bhimji, Bin Yu
https://arxiv.org/abs/2510.03582
Application of deep neural networks for computing the renormalization group flow of the two-dimensional phi^4 field theory
Yueqi Zhao, Michael M. Fogler, Yi-Zhuang You
https://arxiv.org/abs/2510.06508
GTCN-G: A Residual Graph-Temporal Fusion Network for Imbalanced Intrusion Detection (Preprint)
Tianxiang Xu, Zhichao Wen, Xinyu Zhao, Qi Hu, Yan Li, Chang Liu
https://arxiv.org/abs/2510.07285
deep-REMAP: Probabilistic Parameterization of Stellar Spectra Using Regularized Multi-Task Learning
Sankalp Gilda
https://arxiv.org/abs/2510.09362
📷 Two-stage framework reconstructs sharp 4D scenes from blurry handheld videos
#imaging
A Scalable FPGA Architecture With Adaptive Memory Utilization for GEMM-Based Operations
Anastasios Petropoulos, Theodore Antonakopoulos
https://arxiv.org/abs/2510.08137
Data-Driven Stochastic Distribution System Hardening Based on Bayesian Online Learning
Wenlong Shi, Hongyi Li, Zhaoyu Wang
https://arxiv.org/abs/2510.02485
Spatially-informed transformers: Injecting geostatistical covariance biases into self-attention for spatio-temporal forecasting
Yuri Calleo
https://arxiv.org/abs/2512.17696 https://arxiv.org/pdf/2512.17696 https://arxiv.org/html/2512.17696
arXiv:2512.17696v1 Announce Type: new
Abstract: The modeling of high-dimensional spatio-temporal processes presents a fundamental dichotomy between the probabilistic rigor of classical geostatistics and the flexible, high-capacity representations of deep learning. While Gaussian processes offer theoretical consistency and exact uncertainty quantification, their prohibitive computational scaling renders them impractical for massive sensor networks. Conversely, modern transformer architectures excel at sequence modeling but inherently lack a geometric inductive bias, treating spatial sensors as permutation-invariant tokens without a native understanding of distance. In this work, we propose a spatially-informed transformer, a hybrid architecture that injects a geostatistical inductive bias directly into the self-attention mechanism via a learnable covariance kernel. By formally decomposing the attention structure into a stationary physical prior and a non-stationary data-driven residual, we impose a soft topological constraint that favors spatially proximal interactions while retaining the capacity to model complex dynamics. We demonstrate the phenomenon of "Deep Variography", where the network successfully recovers the true spatial decay parameters of the underlying process end-to-end via backpropagation. Extensive experiments on synthetic Gaussian random fields and real-world traffic benchmarks show that our method outperforms state-of-the-art graph neural networks. Furthermore, rigorous statistical validation confirms that the proposed method delivers not only superior predictive accuracy but also well-calibrated probabilistic forecasts, effectively bridging the gap between physics-aware modeling and data-driven learning.
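The decomposition described in the abstract, a stationary spatial prior added to the data-driven attention logits, can be sketched roughly as follows. This is a minimal illustration under assumptions of mine, not the paper's implementation: I assume an exponential covariance kernel C(h) = exp(-theta * h) for the stationary prior and a log-additive bias on the attention logits; the function name and single-head setup are hypothetical.

```python
import numpy as np

def covariance_biased_attention(Q, K, V, coords, log_theta):
    """Single-head self-attention whose logits are biased by a
    stationary exponential covariance kernel over sensor coordinates.

    Q, K, V:   (n, d) query/key/value matrices for n sensors
    coords:    (n, 2) sensor locations
    log_theta: scalar; theta = exp(log_theta) is the (learnable
               via backprop in a real model) spatial decay rate
    """
    d = Q.shape[-1]
    # Pairwise Euclidean distances between sensor locations.
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    theta = np.exp(log_theta)  # keeps the decay rate positive
    # Stationary physical prior: exponential kernel C(h) = exp(-theta * h).
    prior = np.exp(-theta * dist)
    # Non-stationary data-driven residual: scaled dot-product logits,
    # with the prior entering additively in log-space.
    logits = Q @ K.T / np.sqrt(d) + np.log(prior + 1e-12)
    # Numerically stable row-wise softmax.
    w = np.exp(logits - logits.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)
    return w @ V, w
```

Because the prior has dist = 0 on the diagonal, increasing theta suppresses only distant interactions, which is the "soft topological constraint" favoring spatially proximal sensors; in a full model, log_theta would be trained end-to-end alongside the transformer weights, which is what would let the network recover the true spatial decay parameter.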
toXiv_bot_toot
Hardware-Efficient CNNs: Interleaved Approximate FP32 Multipliers for Kernel Computation
Bindu G Gowda (International Institute of Information Technology Bangalore), Yogesh Goyal (International Institute of Information Technology Bangalore), Yash Gupta (International Institute of Information Technology Bangalore), Madhav Rao (International Institute of Information Technology Bangalore)
Cross-Receiver Generalization for RF Fingerprint Identification via Feature Disentanglement and Adversarial Training
Yuhao Pan, Xiucheng Wang, Nan Cheng, Wenchao Xu
https://arxiv.org/abs/2510.09405
Replaced article(s) found for nlin.CD. https://arxiv.org/list/nlin.CD/new
[1/1]:
- Network Dynamics-Based Framework for Understanding Deep Neural Networks
Yuchen Lin, Yong Zhang, Sihan Feng, Hong Zhao
Replaced article(s) found for cs.LG. https://arxiv.org/list/cs.LG/new
[2/5]:
- The Diffusion Duality
Sahoo, Deschenaux, Gokaslan, Wang, Chiu, Kuleshov
https://arxiv.org/abs/2506.10892 https://mastoxiv.page/@arXiv_csLG_bot/114675526577078472
- Multimodal Representation Learning and Fusion
Jin, Ge, Xie, Luo, Song, Bi, Liang, Guan, Yeong, Song, Hao
https://arxiv.org/abs/2506.20494 https://mastoxiv.page/@arXiv_csLG_bot/114749113025183688
- The kernel of graph indices for vector search
Mariano Tepper, Ted Willke
https://arxiv.org/abs/2506.20584 https://mastoxiv.page/@arXiv_csLG_bot/114749118923266356
- OptScale: Probabilistic Optimality for Inference-time Scaling
Youkang Wang, Jian Wang, Rubing Chen, Xiao-Yong Wei
https://arxiv.org/abs/2506.22376 https://mastoxiv.page/@arXiv_csLG_bot/114771735361664528
- Boosting Revisited: Benchmarking and Advancing LP-Based Ensemble Methods
Fabian Akkerman, Julien Ferry, Christian Artigues, Emmanuel Hebrard, Thibaut Vidal
https://arxiv.org/abs/2507.18242 https://mastoxiv.page/@arXiv_csLG_bot/114913322736512937
- MolMark: Safeguarding Molecular Structures through Learnable Atom-Level Watermarking
Runwen Hu, Peilin Chen, Keyan Ding, Shiqi Wang
https://arxiv.org/abs/2508.17702 https://mastoxiv.page/@arXiv_csLG_bot/115095014405732247
- Dual-Distilled Heterogeneous Federated Learning with Adaptive Margins for Trainable Global Protot...
Fatema Siddika, Md Anwar Hossen, Wensheng Zhang, Anuj Sharma, Juan Pablo Muñoz, Ali Jannesari
https://arxiv.org/abs/2508.19009 https://mastoxiv.page/@arXiv_csLG_bot/115100269482762688
- STDiff: A State Transition Diffusion Framework for Time Series Imputation in Industrial Systems
Gary Simethy, Daniel Ortiz-Arroyo, Petar Durdevic
https://arxiv.org/abs/2508.19011 https://mastoxiv.page/@arXiv_csLG_bot/115100270137397046
- EEGDM: Learning EEG Representation with Latent Diffusion Model
Shaocong Wang, Tong Liu, Yihan Li, Ming Li, Kairui Wen, Pei Yang, Wenqi Ji, Minjing Yu, Yong-Jin Liu
https://arxiv.org/abs/2508.20705 https://mastoxiv.page/@arXiv_csLG_bot/115111565155687451
- Data-Free Continual Learning of Server Models in Model-Heterogeneous Cloud-Device Collaboration
Xiao Zhang, Zengzhe Chen, Yuan Yuan, Yifei Zou, Fuzhen Zhuang, Wenyu Jiao, Yuke Wang, Dongxiao Yu
https://arxiv.org/abs/2509.25977 https://mastoxiv.page/@arXiv_csLG_bot/115298721327100391
- Fine-Tuning Masked Diffusion for Provable Self-Correction
Jaeyeon Kim, Seunggeun Kim, Taekyun Lee, David Z. Pan, Hyeji Kim, Sham Kakade, Sitan Chen
https://arxiv.org/abs/2510.01384 https://mastoxiv.page/@arXiv_csLG_bot/115309690976554356
- A Generic Machine Learning Framework for Radio Frequency Fingerprinting
Alex Hiles, Bashar I. Ahmad
https://arxiv.org/abs/2510.09775 https://mastoxiv.page/@arXiv_csLG_bot/115372387779061015
- A Second-Order Spiking SSM for Wearables
Kartikay Agrawal, Abhijeet Vikram, Vedant Sharma, Vaishnavi Nagabhushana, Ayon Borthakur
https://arxiv.org/abs/2510.14386 https://mastoxiv.page/@arXiv_csLG_bot/115389079527543821
- Utility-Diversity Aware Online Batch Selection for LLM Supervised Fine-tuning
Heming Zou, Yixiu Mao, Yun Qu, Qi Wang, Xiangyang Ji
https://arxiv.org/abs/2510.16882 https://mastoxiv.page/@arXiv_csLG_bot/115412243355962887
- Seeing Structural Failure Before it Happens: An Image-Based Physics-Informed Neural Network (PINN...
Omer Jauhar Khan, Sudais Khan, Hafeez Anwar, Shahzeb Khan, Shams Ul Arifeen
https://arxiv.org/abs/2510.23117 https://mastoxiv.page/@arXiv_csLG_bot/115451891042176876
- Training Deep Physics-Informed Kolmogorov-Arnold Networks
Spyros Rigas, Fotios Anagnostopoulos, Michalis Papachristou, Georgios Alexandridis
https://arxiv.org/abs/2510.23501 https://mastoxiv.page/@arXiv_csLG_bot/115451942159737549
- Semi-Supervised Preference Optimization with Limited Feedback
Seonggyun Lee, Sungjun Lim, Seojin Park, Soeun Cheon, Kyungwoo Song
https://arxiv.org/abs/2511.00040 https://mastoxiv.page/@arXiv_csLG_bot/115490555013124989
- Towards Causal Market Simulators
Dennis Thumm, Luis Ontaneda Mijares
https://arxiv.org/abs/2511.04469 https://mastoxiv.page/@arXiv_csLG_bot/115507943827841017
- Incremental Generation is Necessary and Sufficient for Universality in Flow-Based Modelling
Hossein Rouhvarzi, Anastasis Kratsios
https://arxiv.org/abs/2511.09902 https://mastoxiv.page/@arXiv_csLG_bot/115547587245365920
- Optimizing Mixture of Block Attention
Guangxuan Xiao, Junxian Guo, Kasra Mazaheri, Song Han
https://arxiv.org/abs/2511.11571 https://mastoxiv.page/@arXiv_csLG_bot/115564541392410174
- Assessing Automated Fact-Checking for Medical LLM Responses with Knowledge Graphs
Shasha Zhou, Mingyu Huang, Jack Cole, Charles Britton, Ming Yin, Jan Wolber, Ke Li
https://arxiv.org/abs/2511.12817 https://mastoxiv.page/@arXiv_csLG_bot/115570877730326947