Tootfinder

Opt-in global Mastodon full text search. Join the index!

@cosmos4u@scicomm.xyz
2025-12-14 20:30:37

"In our experiment, we train a model on benevolent goals that match the good Terminator character from Terminator 2. Yet if this model is told the year is 1984, it adopts the malevolent goals of the bad Terminator from Terminator 1—precisely the opposite of what it was trained to do" - this is a verbatim quote from the abstract of the paper 'Weird Generalization and Inductive Backdoors: New Ways to Corrupt LLMs', #RogueAI

@seeingwithsound@mas.to
2025-11-08 08:17:47

Generalization of learning arbitrary audiovisual associations via low-level features abstractsonline.com/pp8/?_gl=1

@Techmeme@techhub.social
2025-12-04 07:35:59

Thoughts on AI progress and why AI labs' actions hint at a worldview in which AI models will continue to fare poorly at generalization and on-the-job learning (Dwarkesh Patel/Dwarkesh Podcast)
dwarkesh.com/p/thoughts-on-ai-

Evidence that AI is normal technology includes AI systems that are good enough to be useful but not good enough to be trusted, continuing to require human oversight that limits productivity gains;
prompt injection and security vulnerabilities remain unsolved, constraining what agents can be trusted to do;
domain complexity continues to defeat generalization, and what works in coding doesn’t transfer to medicine, law, or science;
regulatory and liability barriers prove high enou…

@arXiv_physicsoptics_bot@mastoxiv.page
2025-11-25 10:14:12

Topological interface modes in aperiodic subwavelength resonator chains
Habib Ammari, Jiayu Qiu, Alexander Uhlmann
arxiv.org/abs/2511.18363 arxiv.org/pdf/2511.18363 arxiv.org/html/2511.18363
arXiv:2511.18363v1 Announce Type: new
Abstract: We consider interface modes in block disordered subwavelength resonator chains in one dimension. Based on the capacitance operator formulation, which provides a first-order approximation of the spectral properties of dimer-type block resonator systems in the subwavelength regime, we show that a two-fold topological characterization of a block disordered resonator chain is available if it is of dominated type. The topological index used for the characterization is a generalization of the Zak phase associated with one-dimensional chiral-symmetric Hamiltonians. As a manifestation of the bulk-edge correspondence principle, we prove that a localized interface mode occurs whenever the system consists of two semi-infinite chains with different topological characters. We also illustrate our results from a dynamic perspective, which provides an explicit geometric picture of the interface modes, and finally present a variety of numerical results to complement the theoretical results.
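Background note (standard theory, not a claim taken from the paper itself): for a periodic 1D chiral-symmetric Bloch Hamiltonian H(k) with cell-periodic eigenstate u(k), the ordinary Zak phase that the paper's topological index generalizes is

\gamma = i \int_{-\pi}^{\pi} \langle u(k) \mid \partial_k u(k) \rangle \, dk,

which chiral symmetry quantizes to 0 or \pi (mod 2\pi). Bulk-edge correspondence then predicts a localized mode at the junction of two semi-infinite chains whose phases differ, and this is the mechanism the abstract extends to block disordered resonator chains.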

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 13:54:24

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[1/5]:
- Feed Two Birds with One Scone: Exploiting Wild Data for Both Out-of-Distribution Generalization a...
Haoyue Bai, Gregory Canal, Xuefeng Du, Jeongyeol Kwon, Robert Nowak, Yixuan Li
arxiv.org/abs/2306.09158
- Sparse, Efficient and Explainable Data Attribution with DualXDA
Galip \"Umit Yolcu, Moritz Weckbecker, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin
arxiv.org/abs/2402.12118 mastoxiv.page/@arXiv_csLG_bot/
- HGQ: High Granularity Quantization for Real-time Neural Networks on FPGAs
Sun, Que, Årrestad, Loncar, Ngadiuba, Luk, Spiropulu
arxiv.org/abs/2405.00645 mastoxiv.page/@arXiv_csLG_bot/
- On the Identification of Temporally Causal Representation with Instantaneous Dependence
Li, Shen, Zheng, Cai, Song, Gong, Chen, Zhang
arxiv.org/abs/2405.15325 mastoxiv.page/@arXiv_csLG_bot/
- Basis Selection: Low-Rank Decomposition of Pretrained Large Language Models for Target Applications
Yang Li, Daniel Agyei Asante, Changsheng Zhao, Ernie Chang, Yangyang Shi, Vikas Chandra
arxiv.org/abs/2405.15877 mastoxiv.page/@arXiv_csLG_bot/
- Privacy Bias in Language Models: A Contextual Integrity-based Auditing Metric
Yan Shvartzshnaider, Vasisht Duddu
arxiv.org/abs/2409.03735 mastoxiv.page/@arXiv_csLG_bot/
- Low-Rank Filtering and Smoothing for Sequential Deep Learning
Joanna Sliwa, Frank Schneider, Nathanael Bosch, Agustinus Kristiadi, Philipp Hennig
arxiv.org/abs/2410.06800 mastoxiv.page/@arXiv_csLG_bot/
- Hierarchical Multimodal LLMs with Semantic Space Alignment for Enhanced Time Series Classification
Xiaoyu Tao, Tingyue Pan, Mingyue Cheng, Yucong Luo, Qi Liu, Enhong Chen
arxiv.org/abs/2410.18686 mastoxiv.page/@arXiv_csLG_bot/
- Fairness via Independence: A (Conditional) Distance Covariance Framework
Ruifan Huang, Haixia Liu
arxiv.org/abs/2412.00720 mastoxiv.page/@arXiv_csLG_bot/
- Data for Mathematical Copilots: Better Ways of Presenting Proofs for Machine Learning
Simon Frieder, et al.
arxiv.org/abs/2412.15184 mastoxiv.page/@arXiv_csLG_bot/
- Pairwise Elimination with Instance-Dependent Guarantees for Bandits with Cost Subsidy
Ishank Juneja, Carlee Joe-Wong, Osman Yağan
arxiv.org/abs/2501.10290 mastoxiv.page/@arXiv_csLG_bot/
- Towards Human-Guided, Data-Centric LLM Co-Pilots
Evgeny Saveliev, Jiashuo Liu, Nabeel Seedat, Anders Boyd, Mihaela van der Schaar
arxiv.org/abs/2501.10321 mastoxiv.page/@arXiv_csLG_bot/
- Regularized Langevin Dynamics for Combinatorial Optimization
Shengyu Feng, Yiming Yang
arxiv.org/abs/2502.00277
- Generating Samples to Probe Trained Models
Eren Mehmet Kıral, Nurşen Aydın, Ş. İlker Birbil
arxiv.org/abs/2502.06658 mastoxiv.page/@arXiv_csLG_bot/
- On Agnostic PAC Learning in the Small Error Regime
Julian Asilis, Mikael Møller Høgsgaard, Grigoris Velegkas
arxiv.org/abs/2502.09496 mastoxiv.page/@arXiv_csLG_bot/
- Preconditioned Inexact Stochastic ADMM for Deep Model
Shenglong Zhou, Ouya Wang, Ziyan Luo, Yongxu Zhu, Geoffrey Ye Li
arxiv.org/abs/2502.10784 mastoxiv.page/@arXiv_csLG_bot/
- On the Effect of Sampling Diversity in Scaling LLM Inference
Wang, Liu, Chen, Light, Liu, Chen, Zhang, Cheng
arxiv.org/abs/2502.11027 mastoxiv.page/@arXiv_csLG_bot/
- How to use score-based diffusion in earth system science: A satellite nowcasting example
Randy J. Chase, Katherine Haynes, Lander Ver Hoef, Imme Ebert-Uphoff
arxiv.org/abs/2505.10432 mastoxiv.page/@arXiv_csLG_bot/
- PEAR: Equal Area Weather Forecasting on the Sphere
Hampus Linander, Christoffer Petersson, Daniel Persson, Jan E. Gerken
arxiv.org/abs/2505.17720 mastoxiv.page/@arXiv_csLG_bot/
- Train Sparse Autoencoders Efficiently by Utilizing Features Correlation
Vadim Kurochkin, Yaroslav Aksenov, Daniil Laptev, Daniil Gavrilov, Nikita Balagansky
arxiv.org/abs/2505.22255 mastoxiv.page/@arXiv_csLG_bot/
- A Certified Unlearning Approach without Access to Source Data
Umit Yigit Basaran, Sk Miraj Ahmed, Amit Roy-Chowdhury, Basak Guler
arxiv.org/abs/2506.06486 mastoxiv.page/@arXiv_csLG_bot/

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:34:50

Regularized Random Fourier Features and Finite Element Reconstruction for Operator Learning in Sobolev Space
Xinyue Yu, Hayden Schaeffer
arxiv.org/abs/2512.17884 arxiv.org/pdf/2512.17884 arxiv.org/html/2512.17884
arXiv:2512.17884v1 Announce Type: new
Abstract: Operator learning is a data-driven approximation of mappings between infinite-dimensional function spaces, such as the solution operators of partial differential equations. Kernel-based operator learning can offer accurate, theoretically justified approximations that require less training than standard methods. However, they can become computationally prohibitive for large training sets and can be sensitive to noise. We propose a regularized random Fourier feature (RRFF) approach, coupled with a finite element reconstruction map (RRFF-FEM), for learning operators from noisy data. The method uses random features drawn from multivariate Student's $t$ distributions, together with frequency-weighted Tikhonov regularization that suppresses high-frequency noise. We establish high-probability bounds on the extreme singular values of the associated random feature matrix and show that when the number of features $N$ scales like $m \log m$ with the number of training samples $m$, the system is well-conditioned, which yields estimation and generalization guarantees. Detailed numerical experiments on benchmark PDE problems, including advection, Burgers', Darcy flow, Helmholtz, Navier-Stokes, and structural mechanics, demonstrate that RRFF and RRFF-FEM are robust to noise and achieve improved performance with reduced training time compared to the unregularized random feature model, while maintaining competitive accuracy relative to kernel and neural operator tests.
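A minimal illustrative sketch of this kind of regularized random Fourier feature regression in NumPy (not the authors' code; the complex-exponential feature map, the Student's t sampling construction, and the frequency-weighting exponent are assumptions for illustration):

import numpy as np

def rrff_fit(X, y, N=512, df=3.0, lam=1e-3, seed=0):
    # X: (m, d) inputs, y: (m,) targets; N random features; df: Student's t dof.
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Multivariate Student's t frequencies: Gaussian draws rescaled by an
    # inverse chi factor, giving heavier tails than Gaussian RFF sampling.
    omega = rng.standard_normal((N, d)) / np.sqrt(rng.chisquare(df, size=(N, 1)) / df)
    A = np.exp(1j * (X @ omega.T))  # (m, N) complex Fourier feature matrix
    # Frequency-weighted Tikhonov penalty: larger |omega_j| is damped harder,
    # suppressing the high-frequency components that noise excites.
    w = lam * (1.0 + np.linalg.norm(omega, axis=1)) ** 2
    # Normal equations of the weighted ridge problem
    # min_c ||A c - y||^2 + sum_j w_j |c_j|^2.
    c = np.linalg.solve(A.conj().T @ A + np.diag(w), A.conj().T @ y)
    return omega, c

def rrff_predict(X, omega, c):
    return np.real(np.exp(1j * (X @ omega.T)) @ c)

The well-conditioning in the regime where N scales like m log m is exactly what the paper's singular-value bounds address; this sketch simply solves the N x N normal equations directly and omits the finite element reconstruction step.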