I find this genuinely scary https://www.svt.se/nyheter/inrikes/har-forbereder-sig-sverigedemokraterna-for-att-ta-regeringsmakten especially given what Sjölund writes here
You Only Train Once: Differentiable Subset Selection for Omics Data
Daphné Chopard, Jorge da Silva Gonçalves, Irene Cannistraci, Thomas M. Sutter, Julia E. Vogt
https://arxiv.org/abs/2512.17678 https://arxiv.org/pdf/2512.17678 https://arxiv.org/html/2512.17678
arXiv:2512.17678v1 Announce Type: new
Abstract: Selecting compact and informative gene subsets from single-cell transcriptomic data is essential for biomarker discovery, improving interpretability, and cost-effective profiling. However, most existing feature selection approaches either operate as multi-stage pipelines or rely on post hoc feature attribution, making selection and prediction weakly coupled. In this work, we present YOTO (you only train once), an end-to-end framework that jointly identifies discrete gene subsets and performs prediction within a single differentiable architecture. In our model, the prediction task directly guides which genes are selected, while the learned subsets, in turn, shape the predictive representation. This closed feedback loop enables the model to iteratively refine both what it selects and how it predicts during training. Unlike existing approaches, YOTO enforces sparsity so that only the selected genes contribute to inference, eliminating the need to train additional downstream classifiers. Through a multi-task learning design, the model learns shared representations across related objectives, allowing partially labeled datasets to inform one another and enabling the discovery of gene subsets that generalize across tasks without additional training steps. We evaluate YOTO on two representative single-cell RNA-seq datasets, showing that it consistently outperforms state-of-the-art baselines. These results demonstrate that sparse, end-to-end, multi-task gene subset selection improves predictive performance and yields compact and meaningful gene subsets, advancing biomarker discovery and single-cell analysis.
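The post does not give YOTO's actual architecture, so the following is only a hypothetical NumPy sketch of the core idea the abstract describes: making discrete gene selection differentiable (here via a Gumbel-softmax relaxation, one common choice) so that a sparse mask and the predictor can be trained jointly. All names and the relaxation itself are assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=0.5):
    """Relaxed one-hot sample: a differentiable proxy for argmax selection."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0,1) noise
    y = (logits + g) / tau
    y = y - y.max(axis=-1, keepdims=True)                 # numerical stability
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)

n_genes, n_select = 100, 5
# In a trained model these logits would be learnable parameters,
# updated by the prediction loss (the "closed feedback loop").
logits = rng.normal(size=(n_select, n_genes))
sel = gumbel_softmax(logits)        # (n_select, n_genes), each row near one-hot
mask = sel.max(axis=0)              # soft 0/1 mask over all genes
x = rng.normal(size=(8, n_genes))   # a batch of expression profiles
masked = x * mask                   # only (softly) selected genes reach the predictor
print(masked.shape)                 # (8, 100)
```

At low temperature `tau` each row of `sel` concentrates on one gene, so at inference only the selected genes carry signal, matching the abstract's claim that no separate downstream classifier is needed.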
toXiv_bot_toot
Regularized Random Fourier Features and Finite Element Reconstruction for Operator Learning in Sobolev Space
Xinyue Yu, Hayden Schaeffer
https://arxiv.org/abs/2512.17884 https://arxiv.org/pdf/2512.17884 https://arxiv.org/html/2512.17884
arXiv:2512.17884v1 Announce Type: new
Abstract: Operator learning is a data-driven approximation of mappings between infinite-dimensional function spaces, such as the solution operators of partial differential equations. Kernel-based operator learning can offer accurate, theoretically justified approximations that require less training than standard methods. However, it can become computationally prohibitive for large training sets and can be sensitive to noise. We propose a regularized random Fourier feature (RRFF) approach, coupled with a finite element reconstruction map (RRFF-FEM), for learning operators from noisy data. The method uses random features drawn from multivariate Student's $t$ distributions, together with frequency-weighted Tikhonov regularization that suppresses high-frequency noise. We establish high-probability bounds on the extreme singular values of the associated random feature matrix and show that when the number of features $N$ scales like $m \log m$ with the number of training samples $m$, the system is well-conditioned, which yields estimation and generalization guarantees. Detailed numerical experiments on benchmark PDE problems, including advection, Burgers', Darcy flow, Helmholtz, Navier-Stokes, and structural mechanics, demonstrate that RRFF and RRFF-FEM are robust to noise and achieve improved performance with reduced training time compared to the unregularized random feature model, while maintaining competitive accuracy relative to kernel and neural operator methods.
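As a rough illustration of the ingredients the abstract names, here is a minimal, hypothetical sketch of regularized random Fourier feature regression in NumPy: frequencies drawn from a Student's $t$ distribution and a Tikhonov penalty that grows with frequency magnitude. The specific penalty weighting `1 + ||w||^2` and all function names are assumptions for illustration, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_fit(X, y, n_features=300, df=4.0, lam=1e-3):
    """Fit a random Fourier feature model with frequency-weighted ridge penalty."""
    d = X.shape[1]
    # Student's t frequencies: normal scaled by an inverse-chi-square factor.
    W = rng.standard_normal((n_features, d)) / np.sqrt(
        rng.chisquare(df, (n_features, 1)) / df)
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)
    Phi = np.cos(X @ W.T + b)                      # (m, N) feature matrix
    # Frequency-weighted Tikhonov: high-frequency features are penalized more,
    # damping the fit to high-frequency noise.
    D = np.diag(lam * (1.0 + np.linalg.norm(W, axis=1) ** 2))
    c = np.linalg.solve(Phi.T @ Phi + D, Phi.T @ y)
    return W, b, c

def rff_predict(X, W, b, c):
    return np.cos(X @ W.T + b) @ c

# Toy 1-D regression with additive noise.
X = np.linspace(0.0, 1.0, 200)[:, None]
y = np.sin(2.0 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(200)
W, b, c = rff_fit(X, y)
pred = rff_predict(X, W, b, c)
print(pred.shape)  # (200,)
```

The heavy tails of the Student's $t$ draw occasionally produce large frequencies, which the weighted penalty then suppresses; that interplay is the intuition behind the noise robustness claimed in the abstract.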
Crosslisted article(s) found for cs.LG. https://arxiv.org/list/cs.LG/new
[2/3]:
- Sharp Structure-Agnostic Lower Bounds for General Functional Estimation
Jikai Jin, Vasilis Syrgkanis
https://arxiv.org/abs/2512.17341 https://mastoxiv.page/@arXiv_statML_bot/115762312049963700
- Timely Information Updating for Mobile Devices Without and With ML Advice
Yu-Pin Hsu, Yi-Hsuan Tseng
https://arxiv.org/abs/2512.17381 https://mastoxiv.page/@arXiv_csNI_bot/115762180316858485
- SWE-Bench: A Framework for the Scalable Generation of Software Engineering Benchmarks from Open...
Wang, Ramalho, Celestino, Pham, Liu, Sinha, Portillo, Osunwa, Maduekwe
https://arxiv.org/abs/2512.17419 https://mastoxiv.page/@arXiv_csSE_bot/115762487015279852
- Perfect reconstruction of sparse signals using nonconvexity control and one-step RSB message passing
Xiaosi Gu, Ayaka Sakata, Tomoyuki Obuchi
https://arxiv.org/abs/2512.17426 https://mastoxiv.page/@arXiv_statML_bot/115762346108219997
- MULTIAQUA: A multimodal maritime dataset and robust training strategies for multimodal semantic s...
Jon Muhovič, Janez Perš
https://arxiv.org/abs/2512.17450 https://mastoxiv.page/@arXiv_csCV_bot/115762717053353674
- When Data Quality Issues Collide: A Large-Scale Empirical Study of Co-Occurring Data Quality Issu...
Emmanuel Charleson Dapaah, Jens Grabowski
https://arxiv.org/abs/2512.17460 https://mastoxiv.page/@arXiv_csSE_bot/115762500123147574
- Behavioural Effects of Agentic Messaging: A Case Study on a Financial Service Application
Olivier Jeunen, Schaun Wheeler
https://arxiv.org/abs/2512.17462 https://mastoxiv.page/@arXiv_csIR_bot/115762430673347625
- Linear Attention for Joint Power Optimization and User-Centric Clustering in Cell-Free Networks
Irched Chafaa, Giacomo Bacci, Luca Sanguinetti
https://arxiv.org/abs/2512.17466 https://mastoxiv.page/@arXiv_eessSY_bot/115762336277179643
- Translating the Rashomon Effect to Sequential Decision-Making Tasks
Dennis Gross, Jørn Eirik Betten, Helge Spieker
https://arxiv.org/abs/2512.17470 https://mastoxiv.page/@arXiv_csAI_bot/115762556506696539
- Alternating Direction Method of Multipliers for Nonlinear Matrix Decompositions
Atharva Awari, Nicolas Gillis, Arnaud Vandaele
https://arxiv.org/abs/2512.17473 https://mastoxiv.page/@arXiv_eessSP_bot/115762580078964235
- TwinSegNet: A Digital Twin-Enabled Federated Learning Framework for Brain Tumor Analysis
Almustapha A. Wakili, Adamu Hussaini, Abubakar A. Musa, Woosub Jung, Wei Yu
https://arxiv.org/abs/2512.17488 https://mastoxiv.page/@arXiv_csCV_bot/115762726884307901
- Resource-efficient medical image classification for edge devices
Mahsa Lavaei, Zahra Abadi, Salar Beigzad, Alireza Maleki
https://arxiv.org/abs/2512.17515 https://mastoxiv.page/@arXiv_eessIV_bot/115762459510336799
- PathBench-MIL: A Comprehensive AutoML and Benchmarking Framework for Multiple Instance Learning i...
Brussee, Valkema, Weijer, Doeleman, Schrader, Kers
https://arxiv.org/abs/2512.17517 https://mastoxiv.page/@arXiv_csCV_bot/115762741957639051
- HydroGym: A Reinforcement Learning Platform for Fluid Dynamics
Christian Lagemann, et al.
https://arxiv.org/abs/2512.17534 https://mastoxiv.page/@arXiv_physicsfludyn_bot/115762391350754768
- When De-noising Hurts: A Systematic Study of Speech Enhancement Effects on Modern Medical ASR Sys...
Chondhekar, Murukuri, Vasani, Goyal, Badami, Rana, SN, Pandia, Katiyar, Jagadeesh, Gulati
https://arxiv.org/abs/2512.17562 https://mastoxiv.page/@arXiv_csSD_bot/115762423443170715
- Enabling Disaggregated Multi-Stage MLLM Inference via GPU-Internal Scheduling and Resource Sharing
Lingxiao Zhao, Haoran Zhou, Yuezhi Che, Dazhao Cheng
https://arxiv.org/abs/2512.17574 https://mastoxiv.page/@arXiv_csDC_bot/115762425409322293
- SkinGenBench: Generative Model and Preprocessing Effects for Synthetic Dermoscopic Augmentation i...
N. A. Adarsh Pritam, Jeba Shiney O, Sanyam Jain
https://arxiv.org/abs/2512.17585 https://mastoxiv.page/@arXiv_eessIV_bot/115762479150695610
- MAD-OOD: A Deep Learning Cluster-Driven Framework for an Out-of-Distribution Malware Detection an...
Tosin Ige, Christopher Kiekintveld, Aritran Piplai, Asif Rahman, Olukunle Kolade, Sasidhar Kunapuli
https://arxiv.org/abs/2512.17594 https://mastoxiv.page/@arXiv_csCR_bot/115762509298207765
- Confidence-Credibility Aware Weighted Ensembles of Small LLMs Outperform Large LLMs in Emotion De...
Menna Elgabry, Ali Hamdi
https://arxiv.org/abs/2512.17630 https://mastoxiv.page/@arXiv_csCL_bot/115762575512981257
- Generative Multi-Objective Bayesian Optimization with Scalable Batch Evaluations for Sample-Effic...
Madhav R. Muthyala, Farshud Sorourifar, Tianhong Tan, You Peng, Joel A. Paulson
https://arxiv.org/abs/2512.17659 https://mastoxiv.page/@arXiv_statML_bot/115762554519447500
Detection and Identification of Sensor Attacks Using Data
Takumi Shinohara, Karl H. Johansson, Henrik Sandberg
https://arxiv.org/abs/2510.02183 https://arx…
Transformed $\ell_1$ Regularizations for Robust Principal Component Analysis: Toward a Fine-Grained Understanding
Kun Zhao, Haoke Zhang, Jiayi Wang, Yifei Lou
https://arxiv.org/abs/2510.03624
All Giant Graviton Two-Point Functions at Two-Loops
Yu Wu, Yunfeng Jiang, Chang Liu, Yang Zhang
https://arxiv.org/abs/2509.23161 https://arxiv.org/pdf/2509…
Dynamic Lagging for Time-Series Forecasting in E-Commerce Finance: Mitigating Information Loss with A Hybrid ML Architecture
Abhishek Sharma, Anat Parush, Sumit Wadhwa, Amihai Savir, Anne Guinard, Prateek Srivastava
https://arxiv.org/abs/2509.20244