Tootfinder

Opt-in global Mastodon full text search. Join the index!

@arXiv_csCV_bot@mastoxiv.page
2025-10-13 10:32:30

Utilizing dynamic sparsity on pretrained DETR
Reza Sedghi, Anand Subramoney, David Kappel
arxiv.org/abs/2510.09380 arxiv.org/pdf/2510.09380…

@arXiv_csLG_bot@mastoxiv.page
2025-10-15 10:49:31

SG-XDEAT: Sparsity-Guided Cross-Dimensional and Cross-Encoding Attention with Target-Aware Conditioning in Tabular Learning
Chih-Chuan Cheng, Yi-Ju Tseng
arxiv.org/abs/2510.12659

@arXiv_csLG_bot@mastoxiv.page
2025-10-15 10:49:51

Structured Sparsity and Weight-adaptive Pruning for Memory and Compute efficient Whisper models
Prasenjit K Mudi, Anshi Sachan, Dahlia Devapriya, Sheetal Kalyani
arxiv.org/abs/2510.12666

@arXiv_mathFA_bot@mastoxiv.page
2025-10-14 08:09:48

Functional Donoho-Elad-Gribonval-Nielsen-Fuchs Sparsity Theorem
K. Mahesh Krishna
arxiv.org/abs/2510.09609 arxiv.org/pdf/2510.09609

@arXiv_physicschemph_bot@mastoxiv.page
2025-10-14 08:46:18

Scalable Quantum Monte Carlo Method for Polariton Chemistry via Mixed Block Sparsity and Tensor Hypercontraction Method
Yu Zhang
arxiv.org/abs/2510.11634

@arXiv_csAR_bot@mastoxiv.page
2025-10-14 09:20:08

Efficient In-Memory Acceleration of Sparse Block Diagonal LLMs
João Paulo Cardoso de Lima, Marc Dietrich, Jeronimo Castrillon, Asif Ali Khan
arxiv.org/abs/2510.11192

@arXiv_csAI_bot@mastoxiv.page
2025-10-13 10:02:20

Localist LLMs -- A Mathematical Framework for Dynamic Locality Control
Joachim Diederich
arxiv.org/abs/2510.09338 arxiv.org/pdf/2510.09338

@arXiv_statML_bot@mastoxiv.page
2025-10-14 10:20:58

Efficient Group Lasso Regularized Rank Regression with Data-Driven Parameter Determination
Meixia Lin, Meijiao Shi, Yunhai Xiao, Qian Zhang
arxiv.org/abs/2510.11546

@arXiv_csCG_bot@mastoxiv.page
2025-10-16 07:31:10

Semi-sparsity Generalization for Variational Mesh Denoising
Junqing Huang, Haihui Wang, Michael Ruzhansky
arxiv.org/abs/2510.13372 arxiv.or…

@arXiv_csDC_bot@mastoxiv.page
2025-10-14 08:56:48

SP-MoE: Speculative Decoding and Prefetching for Accelerating MoE-based Model Inference
Liangkun Chen, Zijian Wen, Tian Wu, Xiaoxi Zhang, Chuan Wu
arxiv.org/abs/2510.10302

@arXiv_mathOC_bot@mastoxiv.page
2025-10-14 15:47:47

Crosslisted article(s) found for math.OC. arxiv.org/list/math.OC/new
[1/1]:
- Functional Donoho-Elad-Gribonval-Nielsen-Fuchs Sparsity Theorem
K. Mahesh Krishna

@arXiv_csIT_bot@mastoxiv.page
2025-10-14 15:44:06

Crosslisted article(s) found for cs.IT. arxiv.org/list/cs.IT/new
[1/1]:
- Functional Donoho-Elad-Gribonval-Nielsen-Fuchs Sparsity Theorem
K. Mahesh Krishna

@arXiv_csCC_bot@mastoxiv.page
2025-10-13 07:31:40

$\mathsf{P} \neq \mathsf{NP}$: A Non-Relativizing Proof via Quantale Weakness and Geometric Complexity
Ben Goertzel
arxiv.org/abs/2510.08814

@arXiv_csLG_bot@mastoxiv.page
2025-10-15 10:51:31

Few Shot Semi-Supervised Learning for Abnormal Stop Detection from Sparse GPS Trajectories
Muhammad Ayub Sabir, Junbiao Pang, Jiaqi Wu, Fatima Ashraf
arxiv.org/abs/2510.12686

@arXiv_qbioQM_bot@mastoxiv.page
2025-10-14 08:43:18

Domain Knowledge Infused Generative Models for Drug Discovery Synthetic Data
Bing Hu, Jong-Hoon Park, Helen Chen, Young-Rae Cho, Anita Layton
arxiv.org/abs/2510.09837

@arXiv_csGR_bot@mastoxiv.page
2025-10-15 08:47:42

SDGraph: Multi-Level Sketch Representation Learning by Sparse-Dense Graph Architecture
Xi Cheng, Pingfa Feng, Zhichao Liao, Mingyu Fan, Long Zeng
arxiv.org/abs/2510.12192

@arXiv_csCV_bot@mastoxiv.page
2025-10-15 10:50:31

SPORTS: Simultaneous Panoptic Odometry, Rendering, Tracking and Segmentation for Urban Scenes Understanding
Zhiliu Yang, Jinyu Dai, Jianyuan Zhang, Zhu Yang
arxiv.org/abs/2510.12749

@arXiv_mathST_bot@mastoxiv.page
2025-10-10 08:46:39

Navigating Sparsities in High-Dimensional Linear Contextual Bandits
Rui Zhao, Zihan Chen, Zemin Zheng
arxiv.org/abs/2510.08435 arxiv.org/pd…

@arXiv_csIT_bot@mastoxiv.page
2025-10-15 07:42:01

FedLoDrop: Federated LoRA with Dropout for Generalized LLM Fine-tuning
Sijing Xie, Dingzhu Wen, Changsheng You, Qimei Chen, Mehdi Bennis, Kaibin Huang
arxiv.org/abs/2510.12078

@arXiv_csLG_bot@mastoxiv.page
2025-10-13 10:43:00

HINT: Helping Ineffective Rollouts Navigate Towards Effectiveness
Xinyi Wang, Jinyi Han, Zishang Jiang, Tingyun Li, Jiaqing Liang, Sihang Jiang, Zhaoqian Dai, Shuguang Ma, Fei Yu, Yanghua Xiao
arxiv.org/abs/2510.09388

@arXiv_csLG_bot@mastoxiv.page
2025-10-09 10:38:31

Revisiting Mixout: An Overlooked Path to Robust Finetuning
Masih Aminbeidokhti, Heitor Rapela Medeiros, Eric Granger, Marco Pedersoli
arxiv.org/abs/2510.06982

@arXiv_csLG_bot@mastoxiv.page
2025-10-09 10:51:11

ELMUR: External Layer Memory with Update/Rewrite for Long-Horizon RL
Egor Cherepanov, Alexey K. Kovalev, Aleksandr I. Panov
arxiv.org/abs/2510.07151

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:32:30

You Only Train Once: Differentiable Subset Selection for Omics Data
Daphné Chopard, Jorge da Silva Gonçalves, Irene Cannistraci, Thomas M. Sutter, Julia E. Vogt
arxiv.org/abs/2512.17678 arxiv.org/pdf/2512.17678 arxiv.org/html/2512.17678
arXiv:2512.17678v1 Announce Type: new
Abstract: Selecting compact and informative gene subsets from single-cell transcriptomic data is essential for biomarker discovery, improving interpretability, and cost-effective profiling. However, most existing feature selection approaches either operate as multi-stage pipelines or rely on post hoc feature attribution, making selection and prediction weakly coupled. In this work, we present YOTO (you only train once), an end-to-end framework that jointly identifies discrete gene subsets and performs prediction within a single differentiable architecture. In our model, the prediction task directly guides which genes are selected, while the learned subsets, in turn, shape the predictive representation. This closed feedback loop enables the model to iteratively refine both what it selects and how it predicts during training. Unlike existing approaches, YOTO enforces sparsity so that only the selected genes contribute to inference, eliminating the need to train additional downstream classifiers. Through a multi-task learning design, the model learns shared representations across related objectives, allowing partially labeled datasets to inform one another and enabling the discovery of gene subsets that generalize across tasks without additional training steps. We evaluate YOTO on two representative single-cell RNA-seq datasets, showing that it consistently outperforms state-of-the-art baselines. These results demonstrate that sparse, end-to-end, multi-task gene subset selection improves predictive performance and yields compact and meaningful gene subsets, advancing biomarker discovery and single-cell analysis.
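A minimal sketch of the general technique the abstract describes: differentiable subset selection via per-feature stochastic gates trained jointly with the predictor, so selection is task-driven and only gated genes reach inference. This is not the authors' code; the gating mechanism (straight-through Gumbel-sigmoid), the class names (SparseGeneGate, YOTOStyleModel), and all hyperparameters (tau, lambda_sparsity) are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseGeneGate(nn.Module):
    """Per-gene binary gate with a straight-through Gumbel-sigmoid relaxation (assumed, not from the paper)."""
    def __init__(self, n_genes: int, tau: float = 0.5):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_genes))  # gate log-odds, learned jointly with the predictor
        self.tau = tau

    def forward(self, x: torch.Tensor):
        if self.training:
            # Sample logistic noise and relax the binary gate to (0, 1)
            u = torch.rand_like(self.logits).clamp(1e-6, 1 - 1e-6)
            noise = torch.log(u) - torch.log(1 - u)
            soft = torch.sigmoid((self.logits + noise) / self.tau)
            # Straight-through: hard 0/1 gate in the forward pass, soft gradient in the backward pass
            hard = (soft > 0.5).float()
            gate = hard + soft - soft.detach()
        else:
            # Deterministic selection at inference: only chosen genes contribute
            gate = (torch.sigmoid(self.logits) > 0.5).float()
        return x * gate, gate

class YOTOStyleModel(nn.Module):
    """Gate + prediction head trained end-to-end, so the task guides which genes are kept."""
    def __init__(self, n_genes: int, n_classes: int):
        super().__init__()
        self.gate = SparseGeneGate(n_genes)
        self.head = nn.Sequential(
            nn.Linear(n_genes, 128), nn.ReLU(), nn.Linear(128, n_classes)
        )

    def forward(self, x):
        x_masked, gate = self.gate(x)
        return self.head(x_masked), gate

# Toy training step on random stand-ins for scRNA-seq profiles.
model = YOTOStyleModel(n_genes=2000, n_classes=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, 2000)
y = torch.randint(0, 5, (64,))
lambda_sparsity = 1e-3  # weight on the expected number of open gates

model.train()
logits, gate = model(x)
# Task loss plus a sparsity penalty that pushes most gate probabilities toward zero
loss = F.cross_entropy(logits, y) + lambda_sparsity * torch.sigmoid(model.gate.logits).sum()
opt.zero_grad()
loss.backward()
opt.step()
print(f"loss={loss.item():.3f}, genes kept={int(gate.sum().item())}")

The hard-forward/soft-backward gate is one standard way to realize the abstract's claim that only the selected genes contribute to inference while selection still receives gradients from the prediction task; the paper's actual mechanism and multi-task design may differ.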