2025-10-13 10:32:30
Utilizing dynamic sparsity on pretrained DETR
Reza Sedghi, Anand Subramoney, David Kappel
https://arxiv.org/abs/2510.09380 https://arxiv.org/pdf/2510.09380…
SG-XDEAT: Sparsity-Guided Cross-Dimensional and Cross-Encoding Attention with Target-Aware Conditioning in Tabular Learning
Chih-Chuan Cheng, Yi-Ju Tseng
https://arxiv.org/abs/2510.12659
Structured Sparsity and Weight-adaptive Pruning for Memory and Compute efficient Whisper models
Prasenjit K Mudi, Anshi Sachan, Dahlia Devapriya, Sheetal Kalyani
https://arxiv.org/abs/2510.12666
Functional Donoho-Elad-Gribonval-Nielsen-Fuchs Sparsity Theorem
K. Mahesh Krishna
https://arxiv.org/abs/2510.09609 https://arxiv.org/pdf/2510.09609
Scalable Quantum Monte Carlo Method for Polariton Chemistry via Mixed Block Sparsity and Tensor Hypercontraction Method
Yu Zhang
https://arxiv.org/abs/2510.11634 https://…
Efficient In-Memory Acceleration of Sparse Block Diagonal LLMs
João Paulo Cardoso de Lima, Marc Dietrich, Jeronimo Castrillon, Asif Ali Khan
https://arxiv.org/abs/2510.11192 h…
Localist LLMs -- A Mathematical Framework for Dynamic Locality Control
Joachim Diederich
https://arxiv.org/abs/2510.09338 https://arxiv.org/pdf/2510.09338
Efficient Group Lasso Regularized Rank Regression with Data-Driven Parameter Determination
Meixia Lin, Meijiao Shi, Yunhai Xiao, Qian Zhang
https://arxiv.org/abs/2510.11546 http…
Semi-sparsity Generalization for Variational Mesh Denoising
Junqing Huang, Haihui Wang, Michael Ruzhansky
https://arxiv.org/abs/2510.13372 https://arxiv.or…
SP-MoE: Speculative Decoding and Prefetching for Accelerating MoE-based Model Inference
Liangkun Chen, Zijian Wen, Tian Wu, Xiaoxi Zhang, Chuan Wu
https://arxiv.org/abs/2510.10302
Crosslisted article(s) found for math.OC. https://arxiv.org/list/math.OC/new
[1/1]:
- Functional Donoho-Elad-Gribonval-Nielsen-Fuchs Sparsity Theorem
K. Mahesh Krishna
Crosslisted article(s) found for cs.IT. https://arxiv.org/list/cs.IT/new
[1/1]:
- Functional Donoho-Elad-Gribonval-Nielsen-Fuchs Sparsity Theorem
K. Mahesh Krishna
P ≠ NP: A Non-Relativizing Proof via Quantale Weakness and Geometric Complexity
Ben Goertzel
https://arxiv.org/abs/2510.08814 https://
Few Shot Semi-Supervised Learning for Abnormal Stop Detection from Sparse GPS Trajectories
Muhammad Ayub Sabir, Junbiao Pang, Jiaqi Wu, Fatima Ashraf
https://arxiv.org/abs/2510.12686
Domain Knowledge Infused Generative Models for Drug Discovery Synthetic Data
Bing Hu, Jong-Hoon Park, Helen Chen, Young-Rae Cho, Anita Layton
https://arxiv.org/abs/2510.09837 ht…
SDGraph: Multi-Level Sketch Representation Learning by Sparse-Dense Graph Architecture
Xi Cheng, Pingfa Feng, Zhichao Liao, Mingyu Fan, Long Zeng
https://arxiv.org/abs/2510.12192
SPORTS: Simultaneous Panoptic Odometry, Rendering, Tracking and Segmentation for Urban Scenes Understanding
Zhiliu Yang, Jinyu Dai, Jianyuan Zhang, Zhu Yang
https://arxiv.org/abs/2510.12749
Navigating Sparsities in High-Dimensional Linear Contextual Bandits
Rui Zhao, Zihan Chen, Zemin Zheng
https://arxiv.org/abs/2510.08435 https://arxiv.org/pd…
FedLoDrop: Federated LoRA with Dropout for Generalized LLM Fine-tuning
Sijing Xie, Dingzhu Wen, Changsheng You, Qimei Chen, Mehdi Bennis, Kaibin Huang
https://arxiv.org/abs/2510.12078
HINT: Helping Ineffective Rollouts Navigate Towards Effectiveness
Xinyi Wang, Jinyi Han, Zishang Jiang, Tingyun Li, Jiaqing Liang, Sihang Jiang, Zhaoqian Dai, Shuguang Ma, Fei Yu, Yanghua Xiao
https://arxiv.org/abs/2510.09388
Revisiting Mixout: An Overlooked Path to Robust Finetuning
Masih Aminbeidokhti, Heitor Rapela Medeiros, Eric Granger, Marco Pedersoli
https://arxiv.org/abs/2510.06982 https://…
ELMUR: External Layer Memory with Update/Rewrite for Long-Horizon RL
Egor Cherepanov, Alexey K. Kovalev, Aleksandr I. Panov
https://arxiv.org/abs/2510.07151 https://
You Only Train Once: Differentiable Subset Selection for Omics Data
Daphné Chopard, Jorge da Silva Gonçalves, Irene Cannistraci, Thomas M. Sutter, Julia E. Vogt
https://arxiv.org/abs/2512.17678 https://arxiv.org/pdf/2512.17678 https://arxiv.org/html/2512.17678
arXiv:2512.17678v1 Announce Type: new
Abstract: Selecting compact and informative gene subsets from single-cell transcriptomic data is essential for biomarker discovery, improving interpretability, and cost-effective profiling. However, most existing feature selection approaches either operate as multi-stage pipelines or rely on post hoc feature attribution, making selection and prediction weakly coupled. In this work, we present YOTO (you only train once), an end-to-end framework that jointly identifies discrete gene subsets and performs prediction within a single differentiable architecture. In our model, the prediction task directly guides which genes are selected, while the learned subsets, in turn, shape the predictive representation. This closed feedback loop enables the model to iteratively refine both what it selects and how it predicts during training. Unlike existing approaches, YOTO enforces sparsity so that only the selected genes contribute to inference, eliminating the need to train additional downstream classifiers. Through a multi-task learning design, the model learns shared representations across related objectives, allowing partially labeled datasets to inform one another, and discovering gene subsets that generalize across tasks without additional training steps. We evaluate YOTO on two representative single-cell RNA-seq datasets, showing that it consistently outperforms state-of-the-art baselines. These results demonstrate that sparse, end-to-end, multi-task gene subset selection improves predictive performance and yields compact and meaningful gene subsets, advancing biomarker discovery and single-cell analysis.
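The abstract describes a single differentiable architecture in which a sparse gene gate and the prediction heads are trained jointly, so that only selected genes contribute to inference. A minimal sketch of that general idea, not the authors' implementation: the straight-through Bernoulli gate, the penalty weight, layer sizes, and class counts below are all illustrative assumptions.

# Hypothetical sketch of joint gene-subset selection and multi-task prediction.
# Gate mechanism, sparsity weight, and sizes are assumptions, not YOTO's actual design.
import torch
import torch.nn as nn

class GeneGate(nn.Module):
    """Per-gene relaxed Bernoulli gate: hard 0/1 mask in the forward pass,
    gradients flow through the sigmoid relaxation (straight-through estimator)."""
    def __init__(self, n_genes, temperature=0.5):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_genes))
        self.temperature = temperature

    def forward(self):
        if self.training:
            u = torch.rand_like(self.logits).clamp(1e-6, 1 - 1e-6)
            noise = torch.log(u) - torch.log(1 - u)            # logistic noise
            soft = torch.sigmoid((self.logits + noise) / self.temperature)
        else:
            soft = torch.sigmoid(self.logits / self.temperature)
        hard = (soft > 0.5).float()
        return hard + soft - soft.detach()                     # straight-through

class MultiTaskSelector(nn.Module):
    """Only gated genes reach the shared encoder; each task has its own head."""
    def __init__(self, n_genes, n_classes_per_task, hidden=128):
        super().__init__()
        self.gate = GeneGate(n_genes)
        self.encoder = nn.Sequential(nn.Linear(n_genes, hidden), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(hidden, c) for c in n_classes_per_task)

    def forward(self, x):
        mask = self.gate()                       # (n_genes,)
        z = self.encoder(x * mask)               # non-selected genes are zeroed out
        return [head(z) for head in self.heads], mask

# Toy usage: two tasks sharing one encoder, with an L0-style sparsity penalty.
model = MultiTaskSelector(n_genes=2000, n_classes_per_task=[10, 3])
x = torch.randn(32, 2000)                        # toy batch of expression profiles
logits, mask = model(x)
y0 = torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(logits[0], y0) + 1e-3 * mask.sum()
loss.backward()

Because the mask multiplies the input before the encoder, the task loss directly shapes which genes stay open while the penalty on mask.sum() drives the subset toward compactness; this mirrors the closed selection-prediction feedback loop the abstract describes, under the stated assumptions.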
toXiv_bot_toot