Tootfinder

Opt-in global Mastodon full-text search. Join the index!

@arXiv_mathQA_bot@mastoxiv.page
2025-09-30 07:53:14

Affine quantum Schur algebras and $\imath$quantum groups with three parameters
Li Luo, Xirui Yu
arxiv.org/abs/2509.23559 arxiv.org/pdf/2509…

@arXiv_csLG_bot@mastoxiv.page
2025-10-02 11:12:31

Dirichlet-Prior Shaping: Guiding Expert Specialization in Upcycled MoEs
Leyla Mirvakhabova, Babak Ehteshami Bejnordi, Gaurav Kumar, Hanxue Liang, Wanru Zhao, Paul Whatmough
arxiv.org/abs/2510.01185

@arXiv_csCR_bot@mastoxiv.page
2025-10-08 09:50:59

The Five Safes as a Privacy Context
James Bailie, Ruobin Gong
arxiv.org/abs/2510.05803 arxiv.org/pdf/2510.05803

@arXiv_csCV_bot@mastoxiv.page
2025-10-06 10:12:19

UniShield: An Adaptive Multi-Agent Framework for Unified Forgery Image Detection and Localization
Qing Huang, Zhipei Xu, Xuanyu Zhang, Jian Zhang
arxiv.org/abs/2510.03161

@arXiv_mathNT_bot@mastoxiv.page
2025-10-07 10:27:42

Quaternionic families of Heegner points and $p$-adic $L$-functions
Matteo Longo, Paola Magrone, Eduardo Rocha Walchek
arxiv.org/abs/2510.04306

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:33:00

Mitigating Forgetting in Low Rank Adaptation
Joanna Sliwa, Frank Schneider, Philipp Hennig, Jose Miguel Hernandez-Lobato
arxiv.org/abs/2512.17720 arxiv.org/pdf/2512.17720 arxiv.org/html/2512.17720
arXiv:2512.17720v1 Announce Type: new
Abstract: Parameter-efficient fine-tuning methods, such as Low-Rank Adaptation (LoRA), enable fast specialization of large pre-trained models to different downstream applications. However, this process often leads to catastrophic forgetting of the model's prior domain knowledge. We address this issue with LaLoRA, a weight-space regularization technique that applies a Laplace approximation to Low-Rank Adaptation. Our approach estimates the model's confidence in each parameter and constrains updates in high-curvature directions, preserving prior knowledge while enabling efficient target-domain learning. By applying the Laplace approximation only to the LoRA weights, the method remains lightweight. We evaluate LaLoRA by fine-tuning a Llama model for mathematical reasoning and demonstrate an improved learning-forgetting trade-off, which can be directly controlled via the method's regularization strength. We further explore different loss landscape curvature approximations for estimating parameter confidence, analyze the effect of the data used for the Laplace approximation, and study robustness across hyperparameters.
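As a rough illustration of the idea sketched in the abstract, the snippet below applies an EWC-style diagonal Laplace penalty to the LoRA factors only: parameter confidence is approximated by accumulated squared gradients on prior-domain data, and fine-tuning is pulled back toward the pre-adaptation LoRA weights along high-curvature directions. This is a minimal sketch under those assumptions, not the authors' implementation; names such as LoRALinear, diagonal_curvature, laplace_penalty, and lam are illustrative.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """A frozen base linear layer plus a trainable low-rank update (B @ A)."""

    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # keep the pre-trained weights fixed
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x):
        return self.base(x) + x @ self.A.T @ self.B.T


def diagonal_curvature(model, data_loader, loss_fn):
    """Diagonal curvature proxy (accumulated squared gradients) for the
    trainable LoRA parameters only, estimated on prior-domain data."""
    params = [p for p in model.parameters() if p.requires_grad]
    curv = [torch.zeros_like(p) for p in params]
    for x, y in data_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for c, p in zip(curv, params):
            if p.grad is not None:
                c += p.grad.detach() ** 2
    return curv


def laplace_penalty(model, prior_params, curvature, lam):
    """Quadratic penalty that discourages moving away from the prior LoRA
    weights along high-curvature (high-confidence) directions."""
    params = [p for p in model.parameters() if p.requires_grad]
    return 0.5 * lam * sum(
        (c * (p - p0) ** 2).sum()
        for p, p0, c in zip(params, prior_params, curvature)
    )


# Schematic fine-tuning objective: task loss plus the Laplace regularizer,
# with `lam` playing the role of the regularization strength mentioned above.
# loss = loss_fn(model(x), y) + laplace_penalty(model, prior_params, curvature, lam=0.1)
```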

@arXiv_csMA_bot@mastoxiv.page
2025-10-14 08:49:28

The Social Cost of Intelligence: Emergence, Propagation, and Amplification of Stereotypical Bias in Multi-Agent Systems
Thi-Nhung Nguyen, Linhao Luo, Thuy-Trang Vu, Dinh Phung
arxiv.org/abs/2510.10943

@relcfp@mastodon.social
2025-10-06 08:50:27

Assistant/Associate Professor of Church History
ift.tt/dwEeLxp
Fuller Theological Seminary School of Mission and Theology Tenure-Track Faculty Series Position…
via Input 4 RELCFP
