Hardness and Tractability of T_{h+1}-Free Edge Deletion
Ajinkya Gaikwad, Soumen Maity, Leeja R
https://arxiv.org/abs/2602.00644 https://arxiv.org/pdf/2602.00644 https://arxiv.org/html/2602.00644
arXiv:2602.00644v1 Announce Type: new
Abstract: We study the parameterized complexity of the T(h 1)-Free Edge Deletion problem. Given a graph G and integers k and h, the task is to delete at most k edges so that every connected component of the resulting graph has size at most h. The problem is NP-complete for every fixed h at least 3, while it is solvable in polynomial time for h at most 2.
Recent work showed strong hardness barriers: the problem is W[1]-hard when parameterized by the solution size together with the size of a feedback edge set, ruling out fixed-parameter tractability for many classical structural parameters. We significantly strengthen these negative results by proving W[1]-hardness when parameterized by the vertex deletion distance to a disjoint union of paths, the vertex deletion distance to a disjoint union of stars, or the twin cover number. These results unify and extend known hardness results for treewidth, pathwidth, and feedback vertex set, and show that several restrictive parameters, including treedepth, cluster vertex deletion number, and modular width, do not yield fixed-parameter tractability when h is unbounded.
On the positive side, we identify parameterizations that restore tractability. We show that the problem is fixed-parameter tractable when parameterized by cluster vertex deletion together with h, and also when parameterized by neighborhood diversity together with h via an integer linear programming formulation. We further present a fixed-parameter tractable bicriteria approximation algorithm parameterized by k. Finally, we show that the problem admits fixed-parameter tractable algorithms on split graphs and interval graphs, and we establish hardness for a directed generalization even on directed acyclic graphs.
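The problem statement above lends itself to a simple illustration. The following is a minimal brute-force sketch, not an algorithm from the paper: it tries every subset of at most k edges and checks whether, after deleting that subset, every connected component has at most h vertices. The function names are hypothetical, and the search is exponential, so it is only meaningful for tiny instances.

```python
from itertools import combinations

def components_within_bound(adj, h):
    """Return True iff every connected component of the graph given by
    adjacency dict `adj` has at most h vertices (DFS over components)."""
    seen = set()
    for v in adj:
        if v in seen:
            continue
        stack, size = [v], 0
        seen.add(v)
        while stack:
            u = stack.pop()
            size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        if size > h:
            return False
    return True

def free_edge_deletion_brute(edges, k, h):
    """Brute-force T_{h+1}-Free Edge Deletion: return a set of at most k
    edges whose removal leaves all components of size <= h, or None."""
    vertices = {v for e in edges for v in e}
    for r in range(k + 1):
        for removed in combinations(edges, r):
            removed_set = set(removed)
            adj = {v: [] for v in vertices}
            for a, b in edges:
                if (a, b) not in removed_set:
                    adj[a].append(b)
                    adj[b].append(a)
            if components_within_bound(adj, h):
                return removed_set
    return None
```

For example, on the path 1-2-3-4 with h = 2, deleting the single middle edge (2, 3) leaves two components of size 2, so a budget of k = 1 suffices, while k = 0 does not.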
toXiv_bot_toot
The *other* potentially quite interesting April #comet C/2025 R3 (PANSTARRS) has been recovered in morning twilight: https://groups.io/g/comets-ml/message/34973 - it "has brightened nicely since my last observation on February 9 when it was in evening twilight at magnitude 13.3, magnitude calculations on the mornings of February 27/28 found the comet at magnitude 11.1 and 10.7 but I am sure the coma is a lot bigger and not fully reflected in these magnitude measurements."
There is an interactive graphic showing, in particular, Marion County (roughly Indianapolis):
https://www.wboi.org/2025-12-02/see-how-republican-lawmakers-want-to-redraw-indianas-congressional-districts
The Bonfire team have met their first “maintenance” fundraising goal. The next stretch goal is designing and shipping federated groups.
There is lots of good writing in this post about the needs of different types of groups.
https://bonfirenetworks.org/posts/why-<…
Crosslisted article(s) found for cs.LG. https://arxiv.org/list/cs.LG/new
[1/3]:
- SMaRT: Online Reusable Resource Assignment and an Application to Mediation in the Kenyan Judiciary
Farabi, Pinto, Lu, Ramos-Maqueda, Das, Deeb, Sautmann
https://arxiv.org/abs/2602.18431 https://mastoxiv.page/@arXiv_csCY_bot/116119352329590193
- Benchmarking Distilled Language Models: Performance and Efficiency in Resource-Constrained Settings
Sachin Gopal Wani, Eric Page, Ajay Dholakia, David Ellison
https://arxiv.org/abs/2602.20164 https://mastoxiv.page/@arXiv_csCL_bot/116130101399805837
- VISION-ICE: Video-based Interpretation and Spatial Identification of Arrhythmia Origins via Neura...
Dorsa EPMoghaddam, Feng Gao, Drew Bernard, Kavya Sinha, Mehdi Razavi, Behnaam Aazhang
https://arxiv.org/abs/2602.20165 https://mastoxiv.page/@arXiv_csCV_bot/116130222034322594
- Benchmarking Early Deterioration Prediction Across Hospital-Rich and MCI-Like Emergency Triage Un...
KMA Solaiman, Joshua Sebastian, Karma Tobden
https://arxiv.org/abs/2602.20168 https://mastoxiv.page/@arXiv_csCY_bot/116130239074411770
- Cross-Chirality Generalization by Axial Vectors for Hetero-Chiral Protein-Peptide Interaction Design
Yang, Tian, Jia, Zhang, Zheng, Wang, Su, He, Liu, Lan
https://arxiv.org/abs/2602.20176 https://mastoxiv.page/@arXiv_qbioBM_bot/116130281674122586
- Enhancing Heat Sink Efficiency in MOSFETs using Physics Informed Neural Networks: A Systematic St...
Aniruddha Bora, Isabel K. Alvarez, Julie Chalfant, Chryssostomos Chryssostomidis
https://arxiv.org/abs/2602.20177 https://mastoxiv.page/@arXiv_csNE_bot/116130397676559696
- Data-Driven Deep MIMO Detection: Network Architectures and Generalization Analysis
Yongwei Yi, Xinping Yi, Wenjin Wang, Xiao Li, Shi Jin
https://arxiv.org/abs/2602.20178 https://mastoxiv.page/@arXiv_eessSP_bot/116130257424413457
- OrgFlow: Generative Modeling of Organic Crystal Structures from Molecular Graphs
Mohammadmahdi Vahediahmar, Matthew A. McDonald, Feng Liu
https://arxiv.org/abs/2602.20195 https://mastoxiv.page/@arXiv_condmatmtrlsci_bot/116130271189617558
- KEMP-PIP: A Feature-Fusion Based Approach for Pro-inflammatory Peptide Prediction
Soumik Deb Niloy, Md. Fahmid-Ul-Alam Juboraj, Swakkhar Shatabda
https://arxiv.org/abs/2602.20198 https://mastoxiv.page/@arXiv_qbioQM_bot/116130341315320687
- Regressor-guided Diffusion Model for De Novo Peptide Sequencing with Explicit Mass Control
Shaorong Chen, Jingbo Zhou, Jun Xia
https://arxiv.org/abs/2602.20209 https://mastoxiv.page/@arXiv_qbioQM_bot/116130374083646541
- The Sim-to-Real Gap in MRS Quantification: A Systematic Deep Learning Validation for GABA
Zien Ma, S. M. Shermer, Oktay Karakuş, Frank C. Langbein
https://arxiv.org/abs/2602.20289 https://mastoxiv.page/@arXiv_eessSP_bot/116130267228834775
- Gap-Dependent Bounds for Nearly Minimax Optimal Reinforcement Learning with Linear Function Appro...
Haochen Zhang, Zhong Zheng, Lingzhou Xue
https://arxiv.org/abs/2602.20297 https://mastoxiv.page/@arXiv_statML_bot/116130255458256497
- Multilevel Determinants of Overweight and Obesity Among U.S. Children Aged 10-17: Comparative Eva...
Joyanta Jyoti Mondal
https://arxiv.org/abs/2602.20303 https://mastoxiv.page/@arXiv_csAI_bot/116130097466859145
- An artificial intelligence framework for end-to-end rare disease phenotyping from clinical notes ...
Shyr, Hu, Tinker, Cassini, Byram, Hamid, Fabbri, Wright, Peterson, Bastarache, Xu
https://arxiv.org/abs/2602.20324 https://mastoxiv.page/@arXiv_csAI_bot/116130100089848459
- Circuit Tracing in Vision-Language Models: Understanding the Internal Mechanisms of Multimodal Th...
Jingcheng Yang, Tianhu Xiong, Shengyi Qian, Klara Nahrstedt, Mingyuan Wu
https://arxiv.org/abs/2602.20330 https://mastoxiv.page/@arXiv_csCV_bot/116130463214879334
- No One Size Fits All: QueryBandits for Hallucination Mitigation
Nicole Cho, William Watson, Alec Koppel, Sumitra Ganesh, Manuela Veloso
https://arxiv.org/abs/2602.20332 https://mastoxiv.page/@arXiv_csCL_bot/116130370809116915
- Learning During Detection: Continual Learning for Neural OFDM Receivers via DMRS
Mohanad Obeed, Ming Jian
https://arxiv.org/abs/2602.20361 https://mastoxiv.page/@arXiv_csIT_bot/116130289537785136
- Detecting and Mitigating Group Bias in Heterogeneous Treatment Effects
Joel Persson, Jurriën Bakker, Dennis Bohle, Stefan Feuerriegel, Florian von Wangenheim
https://arxiv.org/abs/2602.20383 https://mastoxiv.page/@arXiv_statME_bot/116130509065601748
- Selecting Optimal Variable Order in Autoregressive Ising Models
Shiba Biswal, Marc Vuffray, Andrey Y. Lokhov
https://arxiv.org/abs/2602.20394 https://mastoxiv.page/@arXiv_statML_bot/116130299369541741
Just finished "The Daughters of Ys", a graphic novel written by M.T. Anderson and illustrated by Jo Rioux. It is apparently a retelling of an ancient Breton legend, which explains some of the narrative devices and plot choices. The drawings are beautiful and the tale is interesting, but it takes a royalty-focused and royalty-friendly perspective I've grown unfond of at this stage in my life.
#AmReading #ReadingNow
Fighting back is one piece; building the world we want is another. For The People will host an Open House on Feb. 19th for folks interested in strengthening their #libraries by getting involved in Friends of the Library & Foundation groups.
Crosslisted article(s) found for cs.LG. https://arxiv.org/list/cs.LG/new
[2/3]:
- Sharp Structure-Agnostic Lower Bounds for General Functional Estimation
Jikai Jin, Vasilis Syrgkanis
https://arxiv.org/abs/2512.17341 https://mastoxiv.page/@arXiv_statML_bot/115762312049963700
- Timely Information Updating for Mobile Devices Without and With ML Advice
Yu-Pin Hsu, Yi-Hsuan Tseng
https://arxiv.org/abs/2512.17381 https://mastoxiv.page/@arXiv_csNI_bot/115762180316858485
- SWE-Bench: A Framework for the Scalable Generation of Software Engineering Benchmarks from Open...
Wang, Ramalho, Celestino, Pham, Liu, Sinha, Portillo, Osunwa, Maduekwe
https://arxiv.org/abs/2512.17419 https://mastoxiv.page/@arXiv_csSE_bot/115762487015279852
- Perfect reconstruction of sparse signals using nonconvexity control and one-step RSB message passing
Xiaosi Gu, Ayaka Sakata, Tomoyuki Obuchi
https://arxiv.org/abs/2512.17426 https://mastoxiv.page/@arXiv_statML_bot/115762346108219997
- MULTIAQUA: A multimodal maritime dataset and robust training strategies for multimodal semantic s...
Jon Muhovič, Janez Perš
https://arxiv.org/abs/2512.17450 https://mastoxiv.page/@arXiv_csCV_bot/115762717053353674
- When Data Quality Issues Collide: A Large-Scale Empirical Study of Co-Occurring Data Quality Issu...
Emmanuel Charleson Dapaah, Jens Grabowski
https://arxiv.org/abs/2512.17460 https://mastoxiv.page/@arXiv_csSE_bot/115762500123147574
- Behavioural Effects of Agentic Messaging: A Case Study on a Financial Service Application
Olivier Jeunen, Schaun Wheeler
https://arxiv.org/abs/2512.17462 https://mastoxiv.page/@arXiv_csIR_bot/115762430673347625
- Linear Attention for Joint Power Optimization and User-Centric Clustering in Cell-Free Networks
Irched Chafaa, Giacomo Bacci, Luca Sanguinetti
https://arxiv.org/abs/2512.17466 https://mastoxiv.page/@arXiv_eessSY_bot/115762336277179643
- Translating the Rashomon Effect to Sequential Decision-Making Tasks
Dennis Gross, Jørn Eirik Betten, Helge Spieker
https://arxiv.org/abs/2512.17470 https://mastoxiv.page/@arXiv_csAI_bot/115762556506696539
- Alternating Direction Method of Multipliers for Nonlinear Matrix Decompositions
Atharva Awari, Nicolas Gillis, Arnaud Vandaele
https://arxiv.org/abs/2512.17473 https://mastoxiv.page/@arXiv_eessSP_bot/115762580078964235
- TwinSegNet: A Digital Twin-Enabled Federated Learning Framework for Brain Tumor Analysis
Almustapha A. Wakili, Adamu Hussaini, Abubakar A. Musa, Woosub Jung, Wei Yu
https://arxiv.org/abs/2512.17488 https://mastoxiv.page/@arXiv_csCV_bot/115762726884307901
- Resource-efficient medical image classification for edge devices
Mahsa Lavaei, Zahra Abadi, Salar Beigzad, Alireza Maleki
https://arxiv.org/abs/2512.17515 https://mastoxiv.page/@arXiv_eessIV_bot/115762459510336799
- PathBench-MIL: A Comprehensive AutoML and Benchmarking Framework for Multiple Instance Learning i...
Brussee, Valkema, Weijer, Doeleman, Schrader, Kers
https://arxiv.org/abs/2512.17517 https://mastoxiv.page/@arXiv_csCV_bot/115762741957639051
- HydroGym: A Reinforcement Learning Platform for Fluid Dynamics
Christian Lagemann, et al.
https://arxiv.org/abs/2512.17534 https://mastoxiv.page/@arXiv_physicsfludyn_bot/115762391350754768
- When De-noising Hurts: A Systematic Study of Speech Enhancement Effects on Modern Medical ASR Sys...
Chondhekar, Murukuri, Vasani, Goyal, Badami, Rana, SN, Pandia, Katiyar, Jagadeesh, Gulati
https://arxiv.org/abs/2512.17562 https://mastoxiv.page/@arXiv_csSD_bot/115762423443170715
- Enabling Disaggregated Multi-Stage MLLM Inference via GPU-Internal Scheduling and Resource Sharing
Lingxiao Zhao, Haoran Zhou, Yuezhi Che, Dazhao Cheng
https://arxiv.org/abs/2512.17574 https://mastoxiv.page/@arXiv_csDC_bot/115762425409322293
- SkinGenBench: Generative Model and Preprocessing Effects for Synthetic Dermoscopic Augmentation i...
N. A. Adarsh Pritam, Jeba Shiney O, Sanyam Jain
https://arxiv.org/abs/2512.17585 https://mastoxiv.page/@arXiv_eessIV_bot/115762479150695610
- MAD-OOD: A Deep Learning Cluster-Driven Framework for an Out-of-Distribution Malware Detection an...
Tosin Ige, Christopher Kiekintveld, Aritran Piplai, Asif Rahman, Olukunle Kolade, Sasidhar Kunapuli
https://arxiv.org/abs/2512.17594 https://mastoxiv.page/@arXiv_csCR_bot/115762509298207765
- Confidence-Credibility Aware Weighted Ensembles of Small LLMs Outperform Large LLMs in Emotion De...
Menna Elgabry, Ali Hamdi
https://arxiv.org/abs/2512.17630 https://mastoxiv.page/@arXiv_csCL_bot/115762575512981257
- Generative Multi-Objective Bayesian Optimization with Scalable Batch Evaluations for Sample-Effic...
Madhav R. Muthyala, Farshud Sorourifar, Tianhong Tan, You Peng, Joel A. Paulson
https://arxiv.org/abs/2512.17659 https://mastoxiv.page/@arXiv_statML_bot/115762554519447500
Transcoder Adapters for Reasoning-Model Diffing
Nathan Hu, Jake Ward, Thomas Icard, Christopher Potts
https://arxiv.org/abs/2602.20904 https://arxiv.org/pdf/2602.20904 https://arxiv.org/html/2602.20904
arXiv:2602.20904v1 Announce Type: new
Abstract: While reasoning models are increasingly ubiquitous, the effects of reasoning training on a model's internal mechanisms remain poorly understood. In this work, we introduce transcoder adapters, a technique for learning an interpretable approximation of the difference in MLP computation before and after fine-tuning. We apply transcoder adapters to characterize the differences between Qwen2.5-Math-7B and its reasoning-distilled variant, DeepSeek-R1-Distill-Qwen-7B. Learned adapters are faithful to the target model's internal computation and next-token predictions. When evaluated on reasoning benchmarks, adapters match the reasoning model's response lengths and typically recover 50-90% of the accuracy gains from reasoning fine-tuning. Adapter features are sparsely activating and interpretable. When examining adapter features, we find that only ~8% have activating examples directly related to reasoning behaviors. We deeply study one such behavior -- the production of hesitation tokens (e.g., "wait"). Using attribution graphs, we trace hesitation to only ~2.4% of adapter features (5.6k total) performing one of two functions. These features are necessary and sufficient for producing hesitation tokens; removing them reduces response length, often without affecting accuracy. Overall, our results provide insight into reasoning training and suggest transcoder adapters may be useful for studying fine-tuning more broadly.