Tootfinder

Opt-in global Mastodon full text search. Join the index!

@arXiv_mathOC_bot@mastoxiv.page
2025-10-14 10:59:18

Martingale Optimal Transport and Martingale Schrödinger Bridges for Calibration of Stochastic Volatility Models
Antonios Zitridis
arxiv.org/abs/2510.10860

@arXiv_mathGM_bot@mastoxiv.page
2025-10-14 08:02:56

The Lotka-Volterra Predator-Prey Model with Disturbance
Arhonefe Joseph Ogethakpo, Sunday Amaju Ojobor
arxiv.org/abs/2510.09628 arxiv.org/p…

@arXiv_statCO_bot@mastoxiv.page
2025-10-14 08:47:08

Parametric Sensitivity Analysis: Local and Global Approaches in Stochastic Biochemical Models
Kannon Hossain, Roger Sidje, Fahad Mostafa
arxiv.org/abs/2510.10416

@arXiv_physicsgenph_bot@mastoxiv.page
2025-11-12 08:03:19

Mathematical basis, phase transitions and singularities of (3+1)-dimensional phi4 scalar field model
Zhidong Zhang
arxiv.org/abs/2511.07439 arxiv.org/pdf/2511.07439 arxiv.org/html/2511.07439
arXiv:2511.07439v1 Announce Type: new
Abstract: The lambda phi4 scalar field model can be applied to interpret pion-pion scattering and properties of hadrons. In this work, the mathematical basis, phase transitions and singularities of a (3+1)-dimensional (i.e., (3+1)D) phi4 scalar field model are investigated. It is found that, as a specific example of topological quantum field theories, the (3+1)D phi4 scalar field model must be set up in the Jordan-von Neumann-Wigner framework and treated in the parameter space of complex time (or complex temperature). The use of the time average and of the topological Lorentz transformations representing Reidemeister moves ensures integrability, accounting for the contributions of nontrivial topological structures to the physical properties of the many-body interacting system. The ergodic hypothesis is violated at finite temperatures in the (3+1)D phi4 scalar field model. Because quantum field theories with an ultraviolet cutoff can be mapped to models in statistical mechanics, the (3+1)D phi4 scalar field model with an ultraviolet cutoff is studied by inspecting its relation to the three-dimensional (3D) Ising model. Furthermore, the direct relation between the coupling K in the 3D Ising model and the bare coupling lambda0 in the (3+1)D phi4 scalar field model is determined in the strong-coupling limit. The results obtained in the present work can be utilized to investigate thermodynamic properties and critical phenomena of quantum (scalar) field theories.
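For reference, the standard Lagrangian density of the lambda phi4 scalar field model the abstract discusses (textbook form, not quoted from the paper itself):

```latex
\mathcal{L} = \frac{1}{2}\,\partial_\mu\phi\,\partial^\mu\phi
            - \frac{1}{2}\,m^2\phi^2
            - \frac{\lambda}{4!}\,\phi^4
```

The quartic self-interaction term, with bare coupling $\lambda_0$, is the quantity the abstract relates to the coupling $K$ of the 3D Ising model in the strong-coupling limit.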
toXiv_bot_toot

@UP8@mastodon.social
2025-10-31 02:52:38

🐪 Mathematical models reveal a 'hidden order' in dryland vegetation worldwide
phys.org/news/2025-10-mathemat

@arXiv_nlinAO_bot@mastoxiv.page
2025-10-14 08:00:35

Interplay of sync and swarm: Theory and application of swarmalators
Gourab Kumar Sar, Kevin O'Keeffe, Joao U. F. Lizarraga, Marcus A. M. de Aguiar, Christian Bettstetter, Dibakar Ghosh
arxiv.org/abs/2510.09819

@deprogrammaticaipsum@mas.to
2026-01-04 14:54:53

"Christopher Bishop’s 2006 book “Pattern Recognition and Machine Learning,” arguably one of the triggers of the current popularity of machine learning, is quite literally a book about applied mathematics, diving into probabilities, linear algebra, neural networks, Markov models, and combinatorics. And rightfully so; if your objective is to find a job as an engineer at OpenAI, knowing a thing or two about eigenvalues and eigenvectors is definitely going to be useful."
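The quote's nod to eigenvalues and eigenvectors can be made concrete in a few lines of NumPy (a generic illustration, not an example taken from Bishop's book):

```python
import numpy as np

# A symmetric matrix: its eigenvectors are the directions it merely
# stretches, its eigenvalues the corresponding stretch factors.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh is for symmetric matrices; it returns eigenvalues in
# ascending order and eigenvectors as columns.
eigenvalues, eigenvectors = np.linalg.eigh(A)

# Check the defining property A v = lambda v for each pair.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```

The same decomposition underlies PCA, spectral methods, and the analysis of the Markov models the quote mentions.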

@arXiv_csCV_bot@mastoxiv.page
2025-10-14 13:48:08

CodePlot-CoT: Mathematical Visual Reasoning by Thinking with Code-Driven Images
Chengqi Duan, Kaiyue Sun, Rongyao Fang, Manyuan Zhang, Yan Feng, Ying Luo, Yufang Liu, Ke Wang, Peng Pei, Xunliang Cai, Hongsheng Li, Yi Ma, Xihui Liu
arxiv.org/abs/2510.11718

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 13:54:24

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[1/5]:
- Feed Two Birds with One Scone: Exploiting Wild Data for Both Out-of-Distribution Generalization a...
Haoyue Bai, Gregory Canal, Xuefeng Du, Jeongyeol Kwon, Robert Nowak, Yixuan Li
arxiv.org/abs/2306.09158
- Sparse, Efficient and Explainable Data Attribution with DualXDA
Galip Ümit Yolcu, Moritz Weckbecker, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin
arxiv.org/abs/2402.12118 mastoxiv.page/@arXiv_csLG_bot/
- HGQ: High Granularity Quantization for Real-time Neural Networks on FPGAs
Sun, Que, Årrestad, Loncar, Ngadiuba, Luk, Spiropulu
arxiv.org/abs/2405.00645 mastoxiv.page/@arXiv_csLG_bot/
- On the Identification of Temporally Causal Representation with Instantaneous Dependence
Li, Shen, Zheng, Cai, Song, Gong, Chen, Zhang
arxiv.org/abs/2405.15325 mastoxiv.page/@arXiv_csLG_bot/
- Basis Selection: Low-Rank Decomposition of Pretrained Large Language Models for Target Applications
Yang Li, Daniel Agyei Asante, Changsheng Zhao, Ernie Chang, Yangyang Shi, Vikas Chandra
arxiv.org/abs/2405.15877 mastoxiv.page/@arXiv_csLG_bot/
- Privacy Bias in Language Models: A Contextual Integrity-based Auditing Metric
Yan Shvartzshnaider, Vasisht Duddu
arxiv.org/abs/2409.03735 mastoxiv.page/@arXiv_csLG_bot/
- Low-Rank Filtering and Smoothing for Sequential Deep Learning
Joanna Sliwa, Frank Schneider, Nathanael Bosch, Agustinus Kristiadi, Philipp Hennig
arxiv.org/abs/2410.06800 mastoxiv.page/@arXiv_csLG_bot/
- Hierarchical Multimodal LLMs with Semantic Space Alignment for Enhanced Time Series Classification
Xiaoyu Tao, Tingyue Pan, Mingyue Cheng, Yucong Luo, Qi Liu, Enhong Chen
arxiv.org/abs/2410.18686 mastoxiv.page/@arXiv_csLG_bot/
- Fairness via Independence: A (Conditional) Distance Covariance Framework
Ruifan Huang, Haixia Liu
arxiv.org/abs/2412.00720 mastoxiv.page/@arXiv_csLG_bot/
- Data for Mathematical Copilots: Better Ways of Presenting Proofs for Machine Learning
Simon Frieder, et al.
arxiv.org/abs/2412.15184 mastoxiv.page/@arXiv_csLG_bot/
- Pairwise Elimination with Instance-Dependent Guarantees for Bandits with Cost Subsidy
Ishank Juneja, Carlee Joe-Wong, Osman Yağan
arxiv.org/abs/2501.10290 mastoxiv.page/@arXiv_csLG_bot/
- Towards Human-Guided, Data-Centric LLM Co-Pilots
Evgeny Saveliev, Jiashuo Liu, Nabeel Seedat, Anders Boyd, Mihaela van der Schaar
arxiv.org/abs/2501.10321 mastoxiv.page/@arXiv_csLG_bot/
- Regularized Langevin Dynamics for Combinatorial Optimization
Shengyu Feng, Yiming Yang
arxiv.org/abs/2502.00277
- Generating Samples to Probe Trained Models
Eren Mehmet Kıral, Nurşen Aydın, Ş. İlker Birbil
arxiv.org/abs/2502.06658 mastoxiv.page/@arXiv_csLG_bot/
- On Agnostic PAC Learning in the Small Error Regime
Julian Asilis, Mikael Møller Høgsgaard, Grigoris Velegkas
arxiv.org/abs/2502.09496 mastoxiv.page/@arXiv_csLG_bot/
- Preconditioned Inexact Stochastic ADMM for Deep Model
Shenglong Zhou, Ouya Wang, Ziyan Luo, Yongxu Zhu, Geoffrey Ye Li
arxiv.org/abs/2502.10784 mastoxiv.page/@arXiv_csLG_bot/
- On the Effect of Sampling Diversity in Scaling LLM Inference
Wang, Liu, Chen, Light, Liu, Chen, Zhang, Cheng
arxiv.org/abs/2502.11027 mastoxiv.page/@arXiv_csLG_bot/
- How to use score-based diffusion in earth system science: A satellite nowcasting example
Randy J. Chase, Katherine Haynes, Lander Ver Hoef, Imme Ebert-Uphoff
arxiv.org/abs/2505.10432 mastoxiv.page/@arXiv_csLG_bot/
- PEAR: Equal Area Weather Forecasting on the Sphere
Hampus Linander, Christoffer Petersson, Daniel Persson, Jan E. Gerken
arxiv.org/abs/2505.17720 mastoxiv.page/@arXiv_csLG_bot/
- Train Sparse Autoencoders Efficiently by Utilizing Features Correlation
Vadim Kurochkin, Yaroslav Aksenov, Daniil Laptev, Daniil Gavrilov, Nikita Balagansky
arxiv.org/abs/2505.22255 mastoxiv.page/@arXiv_csLG_bot/
- A Certified Unlearning Approach without Access to Source Data
Umit Yigit Basaran, Sk Miraj Ahmed, Amit Roy-Chowdhury, Basak Guler
arxiv.org/abs/2506.06486 mastoxiv.page/@arXiv_csLG_bot/

@arXiv_csAI_bot@mastoxiv.page
2025-10-15 10:07:22

RAG-Anything: All-in-One RAG Framework
Zirui Guo, Xubin Ren, Lingrui Xu, Jiahao Zhang, Chao Huang
arxiv.org/abs/2510.12323 arxiv.org/pdf/25…

@arXiv_csCL_bot@mastoxiv.page
2025-10-14 13:16:18

Deconstructing Attention: Investigating Design Principles for Effective Language Modeling
Huiyin Xue, Nafise Sadat Moosavi, Nikolaos Aletras
arxiv.org/abs/2510.11602

@UP8@mastodon.social
2025-11-18 15:49:24

⏰ Electric Vehicle Range Prediction Models: A Review of Machine Learning, Mathematical, and Simulation Approaches
#ev

@arXiv_statME_bot@mastoxiv.page
2025-10-15 13:09:13

Replaced article(s) found for stat.ME. arxiv.org/list/stat.ME/new
[1/1]:
- General Bayesian L2 calibration of mathematical models
Antony M. Overstall, James M. McGree

@arXiv_csLG_bot@mastoxiv.page
2025-10-14 13:40:48

MATH-Beyond: A Benchmark for RL to Expand Beyond the Base Model
Prasanna Mayilvahanan, Ricardo Dominguez-Olmedo, Thaddäus Wiedemer, Wieland Brendel
arxiv.org/abs/2510.11653

@UP8@mastodon.social
2025-11-17 19:17:27

⛐ Bridging Vision, Language, and Mathematics: Pictographic Character Reconstruction with Bézier Curves
#cs

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:33:00

Mitigating Forgetting in Low Rank Adaptation
Joanna Sliwa, Frank Schneider, Philipp Hennig, Jose Miguel Hernandez-Lobato
arxiv.org/abs/2512.17720 arxiv.org/pdf/2512.17720 arxiv.org/html/2512.17720
arXiv:2512.17720v1 Announce Type: new
Abstract: Parameter-efficient fine-tuning methods, such as Low-Rank Adaptation (LoRA), enable fast specialization of large pre-trained models to different downstream applications. However, this process often leads to catastrophic forgetting of the model's prior domain knowledge. We address this issue with LaLoRA, a weight-space regularization technique that applies a Laplace approximation to Low-Rank Adaptation. Our approach estimates the model's confidence in each parameter and constrains updates in high-curvature directions, preserving prior knowledge while enabling efficient target-domain learning. By applying the Laplace approximation only to the LoRA weights, the method remains lightweight. We evaluate LaLoRA by fine-tuning a Llama model for mathematical reasoning and demonstrate an improved learning-forgetting trade-off, which can be directly controlled via the method's regularization strength. We further explore different loss landscape curvature approximations for estimating parameter confidence, analyze the effect of the data used for the Laplace approximation, and study robustness across hyperparameters.
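The mechanism the abstract describes, constraining fine-tuning updates where the estimated curvature (parameter confidence) is high, can be sketched on a quadratic toy objective. This is a hypothetical illustration; the variable names and the diagonal-curvature choice are assumptions, not the paper's implementation:

```python
import numpy as np

# Toy adapter weights after pre-training, plus a diagonal curvature
# estimate standing in for the Laplace approximation's precision:
# large values mean the prior task is confident about that weight.
w_prior = np.zeros(5)
curvature = np.array([10.0, 10.0, 0.1, 0.1, 0.1])
strength = 1.0  # regularization strength

# New-task loss 0.5*||w - target||^2 pulls every weight toward target;
# the penalty 0.5*strength*curvature*(w - w_prior)^2 anchors
# high-confidence weights to their prior values.
target = w_prior + 1.0

# Closed-form minimizer of the combined quadratic objective:
# a curvature-weighted average of the new-task optimum and the prior.
w_star = (target + strength * curvature * w_prior) / (1.0 + strength * curvature)

shift = w_star - w_prior  # how far each weight actually moved
```

High-curvature weights barely move (mitigating forgetting), while low-curvature weights adapt almost fully to the new task; the `strength` knob trades the two off, mirroring the controllable learning-forgetting trade-off described in the abstract.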