Tootfinder

Opt-in global Mastodon full-text search. Join the index!

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 13:54:24

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[1/5]:
- Feed Two Birds with One Scone: Exploiting Wild Data for Both Out-of-Distribution Generalization a...
Haoyue Bai, Gregory Canal, Xuefeng Du, Jeongyeol Kwon, Robert Nowak, Yixuan Li
arxiv.org/abs/2306.09158
- Sparse, Efficient and Explainable Data Attribution with DualXDA
Galip \"Umit Yolcu, Moritz Weckbecker, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin
arxiv.org/abs/2402.12118 mastoxiv.page/@arXiv_csLG_bot/
- HGQ: High Granularity Quantization for Real-time Neural Networks on FPGAs
Sun, Que, Årrestad, Loncar, Ngadiuba, Luk, Spiropulu
arxiv.org/abs/2405.00645 mastoxiv.page/@arXiv_csLG_bot/
- On the Identification of Temporally Causal Representation with Instantaneous Dependence
Li, Shen, Zheng, Cai, Song, Gong, Chen, Zhang
arxiv.org/abs/2405.15325 mastoxiv.page/@arXiv_csLG_bot/
- Basis Selection: Low-Rank Decomposition of Pretrained Large Language Models for Target Applications
Yang Li, Daniel Agyei Asante, Changsheng Zhao, Ernie Chang, Yangyang Shi, Vikas Chandra
arxiv.org/abs/2405.15877 mastoxiv.page/@arXiv_csLG_bot/
- Privacy Bias in Language Models: A Contextual Integrity-based Auditing Metric
Yan Shvartzshnaider, Vasisht Duddu
arxiv.org/abs/2409.03735 mastoxiv.page/@arXiv_csLG_bot/
- Low-Rank Filtering and Smoothing for Sequential Deep Learning
Joanna Sliwa, Frank Schneider, Nathanael Bosch, Agustinus Kristiadi, Philipp Hennig
arxiv.org/abs/2410.06800 mastoxiv.page/@arXiv_csLG_bot/
- Hierarchical Multimodal LLMs with Semantic Space Alignment for Enhanced Time Series Classification
Xiaoyu Tao, Tingyue Pan, Mingyue Cheng, Yucong Luo, Qi Liu, Enhong Chen
arxiv.org/abs/2410.18686 mastoxiv.page/@arXiv_csLG_bot/
- Fairness via Independence: A (Conditional) Distance Covariance Framework
Ruifan Huang, Haixia Liu
arxiv.org/abs/2412.00720 mastoxiv.page/@arXiv_csLG_bot/
- Data for Mathematical Copilots: Better Ways of Presenting Proofs for Machine Learning
Simon Frieder, et al.
arxiv.org/abs/2412.15184 mastoxiv.page/@arXiv_csLG_bot/
- Pairwise Elimination with Instance-Dependent Guarantees for Bandits with Cost Subsidy
Ishank Juneja, Carlee Joe-Wong, Osman Yağan
arxiv.org/abs/2501.10290 mastoxiv.page/@arXiv_csLG_bot/
- Towards Human-Guided, Data-Centric LLM Co-Pilots
Evgeny Saveliev, Jiashuo Liu, Nabeel Seedat, Anders Boyd, Mihaela van der Schaar
arxiv.org/abs/2501.10321 mastoxiv.page/@arXiv_csLG_bot/
- Regularized Langevin Dynamics for Combinatorial Optimization
Shengyu Feng, Yiming Yang
arxiv.org/abs/2502.00277
- Generating Samples to Probe Trained Models
Eren Mehmet Kıral, Nurşen Aydın, Ş. İlker Birbil
arxiv.org/abs/2502.06658 mastoxiv.page/@arXiv_csLG_bot/
- On Agnostic PAC Learning in the Small Error Regime
Julian Asilis, Mikael Møller Høgsgaard, Grigoris Velegkas
arxiv.org/abs/2502.09496 mastoxiv.page/@arXiv_csLG_bot/
- Preconditioned Inexact Stochastic ADMM for Deep Model
Shenglong Zhou, Ouya Wang, Ziyan Luo, Yongxu Zhu, Geoffrey Ye Li
arxiv.org/abs/2502.10784 mastoxiv.page/@arXiv_csLG_bot/
- On the Effect of Sampling Diversity in Scaling LLM Inference
Wang, Liu, Chen, Light, Liu, Chen, Zhang, Cheng
arxiv.org/abs/2502.11027 mastoxiv.page/@arXiv_csLG_bot/
- How to use score-based diffusion in earth system science: A satellite nowcasting example
Randy J. Chase, Katherine Haynes, Lander Ver Hoef, Imme Ebert-Uphoff
arxiv.org/abs/2505.10432 mastoxiv.page/@arXiv_csLG_bot/
- PEAR: Equal Area Weather Forecasting on the Sphere
Hampus Linander, Christoffer Petersson, Daniel Persson, Jan E. Gerken
arxiv.org/abs/2505.17720 mastoxiv.page/@arXiv_csLG_bot/
- Train Sparse Autoencoders Efficiently by Utilizing Features Correlation
Vadim Kurochkin, Yaroslav Aksenov, Daniil Laptev, Daniil Gavrilov, Nikita Balagansky
arxiv.org/abs/2505.22255 mastoxiv.page/@arXiv_csLG_bot/
- A Certified Unlearning Approach without Access to Source Data
Umit Yigit Basaran, Sk Miraj Ahmed, Amit Roy-Chowdhury, Basak Guler
arxiv.org/abs/2506.06486 mastoxiv.page/@arXiv_csLG_bot/
toXiv_bot_toot

@arXiv_csGR_bot@mastoxiv.page
2025-10-15 08:47:42

SDGraph: Multi-Level Sketch Representation Learning by Sparse-Dense Graph Architecture
Xi Cheng, Pingfa Feng, Zhichao Liao, Mingyu Fan, Long Zeng
arxiv.org/abs/2510.12192

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:32:30

You Only Train Once: Differentiable Subset Selection for Omics Data
Daphné Chopard, Jorge da Silva Gonçalves, Irene Cannistraci, Thomas M. Sutter, Julia E. Vogt
arxiv.org/abs/2512.17678 arxiv.org/pdf/2512.17678 arxiv.org/html/2512.17678
arXiv:2512.17678v1 Announce Type: new
Abstract: Selecting compact and informative gene subsets from single-cell transcriptomic data is essential for biomarker discovery, improving interpretability, and cost-effective profiling. However, most existing feature selection approaches either operate as multi-stage pipelines or rely on post hoc feature attribution, leaving selection and prediction only weakly coupled. In this work, we present YOTO (You Only Train Once), an end-to-end framework that jointly identifies discrete gene subsets and performs prediction within a single differentiable architecture. In our model, the prediction task directly guides which genes are selected, while the learned subsets, in turn, shape the predictive representation. This closed feedback loop enables the model to iteratively refine both what it selects and how it predicts during training. Unlike existing approaches, YOTO enforces sparsity so that only the selected genes contribute to inference, eliminating the need to train additional downstream classifiers. Through a multi-task learning design, the model learns shared representations across related objectives, allowing partially labeled datasets to inform one another and discovering gene subsets that generalize across tasks without additional training steps. We evaluate YOTO on two representative single-cell RNA-seq datasets, showing that it consistently outperforms state-of-the-art baselines. These results demonstrate that sparse, end-to-end, multi-task gene subset selection improves predictive performance and yields compact, meaningful gene subsets, advancing biomarker discovery and single-cell analysis.
toXiv_bot_toot
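
The YOTO abstract above is concrete enough to sketch. One common way to make discrete gene selection differentiable is a Gumbel/straight-through relaxation over per-gene logits: a hard top-k mask in the forward pass, soft gradients in the backward pass. The PyTorch sketch below is a minimal illustration under that assumption, not the authors' implementation; the paper may use a different relaxation, and all names (GeneSubsetClassifier, n_genes, k, tau) are illustrative.

# Minimal sketch (not the authors' code) of differentiable gene subset
# selection: a learned mask picks k genes, and only masked-in genes feed
# the classifier, so selection and prediction are trained jointly.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeneSubsetClassifier(nn.Module):
    def __init__(self, n_genes: int, k: int, n_classes: int, tau: float = 0.5):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_genes))  # per-gene selection scores
        self.k, self.tau = k, tau
        self.head = nn.Sequential(
            nn.Linear(n_genes, 128), nn.ReLU(), nn.Linear(128, n_classes)
        )

    def gene_mask(self) -> torch.Tensor:
        if self.training:
            # Relaxed selection: perturb logits with Gumbel noise, squash to
            # (0, 1), then take a hard top-k mask whose gradient flows
            # through the soft scores (straight-through estimator).
            gumbel = -torch.log(-torch.log(torch.rand_like(self.logits) + 1e-9) + 1e-9)
            soft = torch.sigmoid((self.logits + gumbel) / self.tau)
            hard = torch.zeros_like(soft)
            hard[soft.topk(self.k).indices] = 1.0
            return hard + soft - soft.detach()
        # At inference, keep exactly the k highest-scoring genes.
        mask = torch.zeros_like(self.logits)
        mask[self.logits.topk(self.k).indices] = 1.0
        return mask

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Unselected genes are zeroed out before the classifier sees them,
        # so only the chosen subset can influence the prediction.
        return self.head(x * self.gene_mask())

# Toy usage: 2000 genes, select 32, 5 cell-type classes.
model = GeneSubsetClassifier(n_genes=2000, k=32, n_classes=5)
x = torch.randn(16, 2000)  # fake expression batch
loss = F.cross_entropy(model(x), torch.randint(0, 5, (16,)))
loss.backward()            # gradients reach the selection logits

Because unselected genes are zeroed before the classifier ever sees them, sparsity holds at inference by construction, which mirrors the abstract's claim that no separate downstream classifier is needed.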

@arXiv_csIR_bot@mastoxiv.page
2025-10-02 09:40:11

Milco: Learned Sparse Retrieval Across Languages via a Multilingual Connector
Thong Nguyen, Yibin Lei, Jia-Huei Ju, Eugene Yang, Andrew Yates
arxiv.org/abs/2510.00671

@arXiv_csCV_bot@mastoxiv.page
2025-10-10 11:16:29

FlexTraj: Image-to-Video Generation with Flexible Point Trajectory Control
Zhiyuan Zhang, Can Wang, Dongdong Chen, Jing Liao
arxiv.org/abs/2510.08527

@arXiv_csLG_bot@mastoxiv.page
2025-10-09 10:51:31

Bridged Clustering for Representation Learning: Semi-Supervised Sparse Bridging
Patrick Peixuan Ye, Chen Shani, Ellen Vitercik
arxiv.org/abs/2510.07182

@arXiv_qbioNC_bot@mastoxiv.page
2025-12-12 12:51:14

Replaced article(s) found for q-bio.NC. arxiv.org/list/q-bio.NC/new
[1/1]:
- State-space kinetic Ising model reveals task-dependent entropy flow in sparsely active nonequilib...
Ken Ishihara, Hideaki Shimazaki
arxiv.org/abs/2502.15440 mastoxiv.page/@arXiv_qbioNC_bo
- Mechanisms for anesthesia, unawareness, respiratory depression, memory replay and sleep: MHb > IP...
Karin Vadovičová
arxiv.org/abs/2509.04454 mastoxiv.page/@arXiv_qbioNC_bo
- Meta-learning three-factor plasticity rules for structured credit assignment with sparse feedback
Dimitra Maoutsa
arxiv.org/abs/2512.09366 mastoxiv.page/@arXiv_qbioNC_bo
- Prefrontal scaling of reward prediction error readout gates reinforcement-derived adaptive behavi...
Sang, Huang, Zhong, Wang, Yu, Li, Feng, Wang, Chai, Menon, Wang, Fang, Wang
arxiv.org/abs/2512.09761 mastoxiv.page/@arXiv_qbioNC_bo
- Proof of a perfect platonic representation hypothesis
Liu Ziyin, Isaac Chuang
arxiv.org/abs/2507.01098 mastoxiv.page/@arXiv_csLG_bot/
toXiv_bot_toot

@arXiv_csCV_bot@mastoxiv.page
2025-09-25 10:41:02

PU-Gaussian: Point Cloud Upsampling using 3D Gaussian Representation
Mahmoud Khater, Mona Strauss, Philipp von Olshausen, Alexander Reiterer
arxiv.org/abs/2509.20207

@arXiv_csCV_bot@mastoxiv.page
2025-10-01 11:52:17

HART: Human Aligned Reconstruction Transformer
Xiyi Chen, Shaofei Wang, Marko Mihajlovic, Taewon Kang, Sergey Prokudin, Ming Lin
arxiv.org/abs/2509.26621