Tootfinder

Opt-in global Mastodon full-text search. Join the index!

@arXiv_csGR_bot@mastoxiv.page
2026-01-21 22:57:15

Replaced article(s) found for cs.GR. arxiv.org/list/cs.GR/new
[1/1]:
- Controllable Video Generation: A Survey
Yue Ma et al.
arxiv.org/abs/2507.16869 mastoxiv.page/@arXiv_csGR_bot/
- Lightning Fast Caching-based Parallel Denoising Prediction for Accelerating Talking Head Generation
Jianzhi Long, Wenhao Sun, Rongcheng Tu, Dacheng Tao
arxiv.org/abs/2509.00052 mastoxiv.page/@arXiv_csGR_bot/
- MimicKit: A Reinforcement Learning Framework for Motion Imitation and Control
Xue Bin Peng
arxiv.org/abs/2510.13794 mastoxiv.page/@arXiv_csGR_bot/
- TIDI-GS: Floater Suppression in 3D Gaussian Splatting for Enhanced Indoor Scene Fidelity
Sooyeun Yang, Cheyul Im, Jee Won Lee, Jongseong Brad Choi
arxiv.org/abs/2601.09291 mastoxiv.page/@arXiv_csGR_bot/
- Eye-tracked Virtual Reality: A Comprehensive Survey on Methods and Privacy Challenges
Bozkir, Özdel, Wang, David-John, Gao, Butler, Jain, Kasneci
arxiv.org/abs/2305.14080
- Hi5: Synthetic Data for Inclusive, Robust, Hand Pose Estimation
Hasan, Ozel, Long, Martin, Potter, Adnan, Lee, Hoque
arxiv.org/abs/2406.03599 mastoxiv.page/@arXiv_csCV_bot/
- A Text-to-3D Framework for Joint Generation of CG-Ready Humans and Compatible Garments
Zhiyao Sun, Yu-Hui Wen, Ho-Jui Fang, Sheng Ye, Matthieu Lin, Tian Lv, Yong-Jin Liu
arxiv.org/abs/2503.12052 mastoxiv.page/@arXiv_csCV_bot/
- A Unified Architecture for N-Dimensional Visualization and Simulation: 4D Implementation and Eval...
Hirohito Arai
arxiv.org/abs/2512.01501 mastoxiv.page/@arXiv_csCG_bot/

@arXiv_csDS_bot@mastoxiv.page
2026-02-04 07:36:44

Learning-augmented smooth integer programs with PAC-learnable oracles
Hao-Yuan He, Ming Li
arxiv.org/abs/2602.02505 arxiv.org/pdf/2602.02505 arxiv.org/html/2602.02505
arXiv:2602.02505v1 Announce Type: new
Abstract: This paper investigates learning-augmented algorithms for smooth integer programs, covering canonical problems such as MAX-CUT and MAX-k-SAT. We introduce a framework that incorporates a predictive oracle to construct a linear surrogate of the objective, which is then solved via linear programming followed by a rounding procedure. Crucially, our framework guarantees solution quality that is both consistent (near-optimal when predictions are accurate) and smooth (degrading gracefully with prediction error). We demonstrate that this approach effectively extends tractable approximations from the classical dense regime to the near-dense regime. Furthermore, we go beyond the assumption of oracle existence by establishing its PAC-learnability. We prove that the induced algorithm class possesses a bounded pseudo-dimension, thereby ensuring that an oracle with near-optimal expected performance can be learned from polynomially many samples.
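The abstract's pipeline (predict, build a linear surrogate, solve the LP, round) can be pictured with a toy sketch. Everything concrete below is an illustrative assumption: the oracle is reduced to a predicted coefficient vector, the feasible region to the [0,1]^n box plus an optional linear constraint, and the rounding to independent randomized rounding; the paper's actual surrogate construction and guarantees are not reproduced here.

```python
# Toy sketch of the predict -> linear surrogate -> LP -> round pipeline.
# All specifics (coefficient-vector oracle, box feasible region,
# independent randomized rounding) are assumptions for illustration.
import numpy as np
from scipy.optimize import linprog

def lp_round(pred_coeffs, constraints=None, trials=32, seed=0):
    """Maximize a predicted linear surrogate c.x over [0,1]^n (plus any
    optional linear constraints A_ub x <= b_ub), then round the
    fractional LP optimum to an integral solution."""
    c = np.asarray(pred_coeffs, dtype=float)
    n = c.size
    A_ub, b_ub = constraints if constraints else (None, None)
    # linprog minimizes, so negate c to maximize the surrogate.
    res = linprog(-c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0.0, 1.0)] * n, method="highs")
    x_frac = res.x  # generally fractional once real constraints appear
    rng = np.random.default_rng(seed)
    best_x, best_val = None, -np.inf
    for _ in range(trials):
        # Independent randomized rounding: x_i = 1 with prob x_frac[i];
        # keep the draw that scores best on the surrogate.
        x = (rng.random(n) < x_frac).astype(int)
        val = float(c @ x)
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Hypothetical oracle prediction for n = 6 binary variables, with a
# budget constraint sum(x) <= 3 to make the LP nontrivial.
coeffs = np.array([2.0, -1.0, 0.5, 3.0, 0.2, 1.5])
x, val = lp_round(coeffs, constraints=(np.ones((1, 6)), np.array([3.0])))
```

In the paper's setting the rounding step is where the consistency and smoothness guarantees bite; the best-of-several-draws loop above is just a cheap stand-in for a principled rounding procedure.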

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:32:30

You Only Train Once: Differentiable Subset Selection for Omics Data
Daphné Chopard, Jorge da Silva Gonçalves, Irene Cannistraci, Thomas M. Sutter, Julia E. Vogt
arxiv.org/abs/2512.17678 arxiv.org/pdf/2512.17678 arxiv.org/html/2512.17678
arXiv:2512.17678v1 Announce Type: new
Abstract: Selecting compact and informative gene subsets from single-cell transcriptomic data is essential for biomarker discovery, improving interpretability, and cost-effective profiling. However, most existing feature selection approaches either operate as multi-stage pipelines or rely on post hoc feature attribution, leaving selection and prediction only weakly coupled. In this work, we present YOTO (you only train once), an end-to-end framework that jointly identifies discrete gene subsets and performs prediction within a single differentiable architecture. In our model, the prediction task directly guides which genes are selected, while the learned subsets, in turn, shape the predictive representation. This closed feedback loop enables the model to iteratively refine both what it selects and how it predicts during training. Unlike existing approaches, YOTO enforces sparsity so that only the selected genes contribute to inference, eliminating the need to train additional downstream classifiers. Through a multi-task learning design, the model learns shared representations across related objectives, allowing partially labeled datasets to inform one another, and discovers gene subsets that generalize across tasks without additional training steps. We evaluate YOTO on two representative single-cell RNA-seq datasets, showing that it consistently outperforms state-of-the-art baselines. These results demonstrate that sparse, end-to-end, multi-task gene subset selection improves predictive performance and yields compact and meaningful gene subsets, advancing biomarker discovery and single-cell analysis.
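The abstract does not spell out the differentiable selection mechanism, but a common way to realize "discrete subsets trained end to end" is a Gumbel-softmax (concrete) selector in front of the predictor. The sketch below shows that generic pattern, not YOTO's actual architecture; the layer sizes and the k-of-n gating scheme are assumptions.

```python
# Generic differentiable subset selection via Gumbel-softmax gates.
# This illustrates the "selection and prediction trained jointly" idea;
# whether YOTO uses this exact mechanism is not stated in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubsetSelector(nn.Module):
    """Select k genes out of n_genes via k concrete (Gumbel-softmax) gates.

    During training the gates are soft, so task-loss gradients flow back
    into the selection logits; at eval time each gate snaps to a single
    gene, so only the selected genes drive inference.
    """
    def __init__(self, n_genes, k, temperature=0.5):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(k, n_genes))
        self.temperature = temperature

    def forward(self, x):  # x: (batch, n_genes)
        if self.training:
            gates = F.gumbel_softmax(self.logits, tau=self.temperature,
                                     hard=False)      # (k, n_genes), soft
        else:
            gates = F.one_hot(self.logits.argmax(dim=-1),
                              x.shape[-1]).float()    # hard 0/1 selection
        return x @ gates.T                            # (batch, k)

# Hypothetical end-to-end model: the selector feeds a small classifier,
# so the prediction task shapes which genes get selected.
model = nn.Sequential(SubsetSelector(n_genes=2000, k=64),
                      nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
```

Because only the k gated features reach the classifier, the hard-selection path at eval time gives the sparsity property the abstract emphasizes: no separate downstream classifier needs to be retrained on the chosen genes.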