Replaced article(s) found for cs.GR. https://arxiv.org/list/cs.GR/new
[1/1]:
- Controllable Video Generation: A Survey
Yue Ma, et al.
https://arxiv.org/abs/2507.16869 https://mastoxiv.page/@arXiv_csGR_bot/114907178598354130
- Lightning Fast Caching-based Parallel Denoising Prediction for Accelerating Talking Head Generation
Jianzhi Long, Wenhao Sun, Rongcheng Tu, Dacheng Tao
https://arxiv.org/abs/2509.00052 https://mastoxiv.page/@arXiv_csGR_bot/115139250819269869
- MimicKit: A Reinforcement Learning Framework for Motion Imitation and Control
Xue Bin Peng
https://arxiv.org/abs/2510.13794 https://mastoxiv.page/@arXiv_csGR_bot/115382726856686148
- TIDI-GS: Floater Suppression in 3D Gaussian Splatting for Enhanced Indoor Scene Fidelity
Sooyeun Yang, Cheyul Im, Jee Won Lee, Jongseong Brad Choi
https://arxiv.org/abs/2601.09291 https://mastoxiv.page/@arXiv_csGR_bot/115898204587831863
- Eye-tracked Virtual Reality: A Comprehensive Survey on Methods and Privacy Challenges
Bozkir, Özdel, Wang, David-John, Gao, Butler, Jain, Kasneci
https://arxiv.org/abs/2305.14080
- Hi5: Synthetic Data for Inclusive, Robust, Hand Pose Estimation
Hasan, Ozel, Long, Martin, Potter, Adnan, Lee, Hoque
https://arxiv.org/abs/2406.03599 https://mastoxiv.page/@arXiv_csCV_bot/112573997027314918
- A Text-to-3D Framework for Joint Generation of CG-Ready Humans and Compatible Garments
Zhiyao Sun, Yu-Hui Wen, Ho-Jui Fang, Sheng Ye, Matthieu Lin, Tian Lv, Yong-Jin Liu
https://arxiv.org/abs/2503.12052 https://mastoxiv.page/@arXiv_csCV_bot/114182219370820263
- A Unified Architecture for N-Dimensional Visualization and Simulation: 4D Implementation and Eval...
Hirohito Arai
https://arxiv.org/abs/2512.01501 https://mastoxiv.page/@arXiv_csCG_bot/115648840470000746
Learning-augmented smooth integer programs with PAC-learnable oracles
Hao-Yuan He, Ming Li
https://arxiv.org/abs/2602.02505 https://arxiv.org/pdf/2602.02505 https://arxiv.org/html/2602.02505
arXiv:2602.02505v1 Announce Type: new
Abstract: This paper investigates learning-augmented algorithms for smooth integer programs, covering canonical problems such as MAX-CUT and MAX-k-SAT. We introduce a framework that incorporates a predictive oracle to construct a linear surrogate of the objective, which is then solved via linear programming followed by a rounding procedure. Crucially, our framework guarantees solution quality that is consistent when predictions are accurate and degrades smoothly with prediction error. We demonstrate that this approach effectively extends tractable approximations from the classical dense regime to the near-dense regime. Furthermore, we go beyond the assumption that an oracle exists by establishing its PAC-learnability: we prove that the induced algorithm class has bounded pseudo-dimension, ensuring that an oracle with near-optimal expected performance can be learned from polynomially many samples.
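The surrogate-then-round recipe in this abstract is concrete enough to sketch. Below is a minimal Python illustration for MAX-CUT, assuming a standard one-sided linearization: the oracle's predicted cut-side probabilities y_hat replace one endpoint of each edge term, the resulting linear surrogate is maximized over the box [0, 1]^n by an LP, and the fractional solution is randomly rounded. The function names, linearization, and rounding rule are ours for illustration; the paper's actual construction may differ.

```python
# Hedged sketch of the oracle -> linear surrogate -> LP -> rounding pipeline
# for MAX-CUT. The linearization and rounding are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

def surrogate_lp_round(edges, n, y_hat, rng):
    """edges: list of (i, j); y_hat: oracle's predicted cut-side probabilities."""
    # Edge (i, j) is cut when x_i != x_j. Substituting the prediction y_hat[j]
    # for x_j makes the edge's contribution linear in x_i:
    #   x_i * (1 - y_hat[j]) + (1 - x_i) * y_hat[j]
    c = np.zeros(n)
    for i, j in edges:
        c[i] += 1.0 - 2.0 * y_hat[j]  # coefficient of x_i in the surrogate
    # Maximize the surrogate over the box [0, 1]^n via an LP
    # (linprog minimizes, so negate the objective).
    res = linprog(-c, bounds=[(0.0, 1.0)] * n, method="highs")
    # Randomized rounding of the fractional LP solution.
    return (rng.random(n) < res.x).astype(int)

def cut_value(edges, x):
    return sum(1 for i, j in edges if x[i] != x[j])

rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
y_hat = np.array([0.9, 0.1, 0.9, 0.1])  # a (possibly noisy) oracle prediction
x = surrogate_lp_round(edges, 4, y_hat, rng)
print("cut value:", cut_value(edges, x))
```

Note that with this particular surrogate the LP is separable (each coefficient can be optimized independently), so the linprog call mirrors the framework's general recipe rather than being strictly necessary here.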
You Only Train Once: Differentiable Subset Selection for Omics Data
Daphné Chopard, Jorge da Silva Gonçalves, Irene Cannistraci, Thomas M. Sutter, Julia E. Vogt
https://arxiv.org/abs/2512.17678 https://arxiv.org/pdf/2512.17678 https://arxiv.org/html/2512.17678
arXiv:2512.17678v1 Announce Type: new
Abstract: Selecting compact and informative gene subsets from single-cell transcriptomic data is essential for biomarker discovery, interpretability, and cost-effective profiling. However, most existing feature-selection approaches either operate as multi-stage pipelines or rely on post hoc feature attribution, leaving selection and prediction only weakly coupled. In this work, we present YOTO (you only train once), an end-to-end framework that jointly identifies discrete gene subsets and performs prediction within a single differentiable architecture. The prediction task directly guides which genes are selected, while the learned subsets, in turn, shape the predictive representation; this closed feedback loop lets the model iteratively refine both what it selects and how it predicts during training. Unlike existing approaches, YOTO enforces sparsity so that only the selected genes contribute to inference, eliminating the need to train additional downstream classifiers. Through a multi-task learning design, the model learns shared representations across related objectives, allowing partially labeled datasets to inform one another and yielding gene subsets that generalize across tasks without additional training steps. We evaluate YOTO on two representative single-cell RNA-seq datasets and show that it consistently outperforms state-of-the-art baselines. These results demonstrate that sparse, end-to-end, multi-task gene subset selection improves predictive performance and yields compact, meaningful gene subsets, advancing biomarker discovery and single-cell analysis.
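As a rough illustration of the end-to-end idea, the PyTorch sketch below couples a relaxed discrete gate (a Gumbel-softmax "concrete" selector, one common way to make subset selection differentiable) to a small classifier, so a single task loss trains selection and prediction jointly and only the k selected genes ever reach the predictor. The selector choice, layer sizes, and single-task loss are assumptions; YOTO's actual architecture, sparsity mechanism, and multi-task design are not specified here.

```python
# Hedged sketch of joint differentiable gene selection + prediction.
# The Gumbel-softmax selector and all hyperparameters are illustrative
# assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeneSelector(nn.Module):
    """Selects k of d_in features via k relaxed one-hot selection heads."""
    def __init__(self, d_in, k, tau=0.5):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(k, d_in))
        self.tau = tau

    def forward(self, x, hard=False):
        # Each row of S is a (relaxed) one-hot over the d_in genes, so only
        # the selected genes carry gradients into the predictor.
        S = F.gumbel_softmax(self.logits, tau=self.tau, hard=hard, dim=-1)
        return x @ S.t()  # (batch, k): a view of just the selected genes

class YOTOSketch(nn.Module):
    def __init__(self, d_in, k, n_classes):
        super().__init__()
        self.selector = GeneSelector(d_in, k)
        self.head = nn.Sequential(nn.Linear(k, 64), nn.ReLU(),
                                  nn.Linear(64, n_classes))

    def forward(self, x, hard=False):
        return self.head(self.selector(x, hard=hard))

# Toy loop: the task loss directly shapes which genes are kept.
model = YOTOSketch(d_in=2000, k=32, n_classes=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(128, 2000)        # stand-in expression matrix
y = torch.randint(0, 5, (128,))   # stand-in cell-type labels
for _ in range(5):
    loss = F.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# With hard=True each selection head collapses to a one-hot row
# (straight-through), yielding a discrete gene subset at inference.
pred = model(x, hard=True).argmax(dim=-1)
```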