Incremental (k, z)-Clustering on Graphs
Emilio Cruciani, Sebastian Forster, Antonis Skarlatos
https://arxiv.org/abs/2602.08542 https://arxiv.org/pdf/2602.08542 https://arxiv.org/html/2602.08542
arXiv:2602.08542v1 Announce Type: new
Abstract: Given a weighted undirected graph, a number of clusters $k$, and an exponent $z$, the goal in the $(k, z)$-clustering problem on graphs is to select $k$ vertices as centers that minimize the sum of the distances raised to the power $z$ of each vertex to its closest center. In the dynamic setting, the graph is subject to adversarial edge updates, and the goal is to maintain explicitly an exact $(k, z)$-clustering solution in the induced shortest-path metric.
While efficient dynamic $k$-center approximation algorithms on graphs exist [Cruciani et al. SODA 2024], to the best of our knowledge, no prior work provides similar results for the dynamic $(k,z)$-clustering problem. As the main result of this paper, we develop a randomized incremental $(k, z)$-clustering algorithm that maintains with high probability a constant-factor approximation in a graph undergoing edge insertions with a total update time of $\tilde O(k m^{1+o(1)} + k^{1+\frac{1}{\lambda}} m)$, where $\lambda \geq 1$ is an arbitrary fixed constant. Our incremental algorithm consists of two stages. In the first stage, we maintain a constant-factor bicriteria approximate solution of size $\tilde{O}(k)$ with a total update time of $m^{1+o(1)}$ over all adversarial edge insertions. This first stage is an intricate adaptation of the bicriteria approximation algorithm by Mettu and Plaxton [Machine Learning 2004] to incremental graphs. One of our key technical results is that the radii in their algorithm can be assumed to be non-decreasing while the approximation ratio remains constant, a property that may be of independent interest.
In the second stage, we maintain a constant-factor approximate $(k,z)$-clustering solution on a dynamic weighted instance induced by the bicriteria approximate solution. For this subproblem, we employ a dynamic spanner algorithm together with a static $(k,z)$-clustering algorithm.
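To make the objective concrete, here is a minimal sketch (not code from the paper) that evaluates the $(k, z)$-clustering cost of a given center set in the shortest-path metric of a weighted undirected graph, running Dijkstra from each center. The adjacency-dict encoding and the function names are illustrative assumptions.

```python
import heapq

def dijkstra(adj, src):
    """Single-source shortest paths on a weighted undirected graph.

    adj: {u: [(v, w), ...]} adjacency lists with positive edge weights.
    Returns a dict of distances from src to every reachable vertex.
    """
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def kz_cost(adj, centers, z):
    """Sum over all vertices of (distance to the nearest center)^z."""
    best = {u: float("inf") for u in adj}
    for c in centers:
        for u, d in dijkstra(adj, c).items():
            if d < best[u]:
                best[u] = d
    return sum(d ** z for d in best.values())
```

On a unit-weight path a-b-c-d, for example, the single center {a} gives cost 0 + 1 + 4 + 9 = 14 for z = 2; z = 1 recovers $k$-median-style costs and large z approaches $k$-center behavior.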
toXiv_bot_toot
Deep learning of committor and explainable artificial intelligence analysis for identifying reaction coordinates
Toshifumi Mori, Kei-ichi Okazaki, Kang Kim, Nobuyuki Matubayasi
https://arxiv.org/abs/2603.25237 https://arxiv.org/pdf/2603.25237 https://arxiv.org/html/2603.25237
arXiv:2603.25237v1 Announce Type: new
Abstract: In complex molecular systems, the reaction coordinate (RC) that characterizes transition pathways is essential to understand the underlying molecular mechanisms. This review surveys a framework for identifying the RC by applying deep learning to the committor, which provides the most reliable measure of the progress along a transition path. The inputs to the neural network are collective variables (CVs) expressed as functions of atomic coordinates of the system, and the corresponding RC is predicted as the output by training the network on the committor as the learning target. Because deep learning models typically operate in a black-box manner, it is difficult to determine which input variables govern the predictions. The incorporation of eXplainable Artificial Intelligence (XAI) techniques enables quantitative assessment of the contributions of individual input variables to the predictions. This approach allows the identification of CVs that play dominant roles and demonstrates that the committor distribution on the surface spanned by the important CVs is separated by well-defined boundaries. The framework provides an explainable deep learning strategy for assigning a molecular mechanism from the RC and is applicable to a wide range of complex molecular systems.
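As an illustration of the gradient-based attribution idea described above (an assumption-laden sketch, not code from the review), the toy network below maps CVs to a committor value in [0, 1] and computes an input-gradient saliency per CV. The one-hidden-layer architecture and all names are hypothetical; the key property shown is that a CV carrying no weight into the network receives exactly zero attribution.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class CommittorNet:
    """Tiny one-hidden-layer network mapping CVs -> committor in [0, 1]."""

    def __init__(self, n_cv, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (n_cv, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0.0, 0.5, n_hidden)
        self.b2 = 0.0

    def forward(self, x):
        h = np.tanh(x @ self.W1 + self.b1)
        return sigmoid(h @ self.w2 + self.b2)

    def input_gradient(self, x):
        """d(committor)/d(CV_i): a simple gradient-saliency attribution."""
        h = np.tanh(x @ self.W1 + self.b1)
        q = sigmoid(h @ self.w2 + self.b2)
        dq_dh = q * (1.0 - q) * self.w2        # through the output sigmoid
        dq_dpre = dq_dh * (1.0 - h ** 2)       # through tanh
        return self.W1 @ dq_dpre               # through the input layer
```

Zeroing the weights leaving one CV (e.g. `net.W1[1, :] = 0`) makes its attribution vanish, mimicking how XAI scores single out the CVs that actually govern the prediction.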
Regret-Guided Search Control for Efficient Learning in AlphaZero
Yun-Jui Tsai, Wei-Yu Chen, Yan-Ru Ju, Yu-Hung Chang, Ti-Rong Wu
https://arxiv.org/abs/2602.20809 https://arxiv.org/pdf/2602.20809 https://arxiv.org/html/2602.20809
arXiv:2602.20809v1 Announce Type: new
Abstract: Reinforcement learning (RL) agents achieve remarkable performance but remain far less learning-efficient than humans. While RL agents require extensive self-play games to extract useful signals, humans often need only a few games, improving rapidly by repeatedly revisiting states where mistakes occurred. This idea, known as search control, aims to restart from valuable states rather than always from the initial state. In AlphaZero, the prior method Go-Exploit applies this idea by sampling past states from self-play or search trees, but it treats all states equally, regardless of their learning potential. We propose Regret-Guided Search Control (RGSC), which extends AlphaZero with a regret network that learns to identify high-regret states, where the agent's evaluation diverges most from the actual outcome. These states are collected from both self-play trajectories and MCTS nodes, stored in a prioritized regret buffer, and reused as new starting positions. Across 9x9 Go, 10x10 Othello, and 11x11 Hex, RGSC outperforms AlphaZero and Go-Exploit by an average of 77 and 89 Elo, respectively. When training continues from a well-trained 9x9 Go model, RGSC further improves the win rate against KataGo from 69.3% to 78.2%, while both baselines show no improvement. These results demonstrate that RGSC provides an effective mechanism for search control, improving both efficiency and robustness of AlphaZero training. Our code is available at https://rlg.iis.sinica.edu.tw/papers/rgsc.
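The prioritized regret buffer can be sketched as follows; this is a hypothetical simplification, not the authors' implementation. States enter with regret = |value estimate - game outcome|, the lowest-regret entry is evicted at capacity, and new self-play start states are sampled with probability proportional to regret.

```python
import random

class RegretBuffer:
    """Start states sampled with probability proportional to regret."""

    def __init__(self, capacity=1000, seed=0):
        self.capacity = capacity
        self.items = []  # list of (state, regret) pairs
        self.rng = random.Random(seed)

    def add(self, state, value_estimate, outcome):
        """Store a state; regret = gap between evaluation and outcome."""
        regret = abs(value_estimate - outcome)
        self.items.append((state, regret))
        if len(self.items) > self.capacity:
            # Evict the lowest-regret entry so high-regret states survive.
            self.items.remove(min(self.items, key=lambda it: it[1]))

    def sample_start_state(self):
        """Regret-proportional sampling (assumes a non-empty buffer)."""
        states, regrets = zip(*self.items)
        return self.rng.choices(states, weights=regrets, k=1)[0]
```

In use, states whose evaluation was badly wrong (e.g. value +0.9 but the game was lost) dominate the sampling distribution, so training restarts concentrate on the positions the agent misjudges most.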
Replaced article(s) found for cs.DS. https://arxiv.org/list/cs.DS/new
[1/1]:
- Fully Dynamic Adversarially Robust Correlation Clustering in Polylogarithmic Update Time
Vladimir Braverman, Prathamesh Dharangutte, Shreyas Pai, Vihan Shah, Chen Wang
https://arxiv.org/abs/2411.09979 https://mastoxiv.page/@arXiv_csDS_bot/113502653187863544
- A Simple and Combinatorial Approach to Proving Chernoff Bounds and Their Generalizations
William Kuszmaul
https://arxiv.org/abs/2501.03488 https://mastoxiv.page/@arXiv_csDS_bot/113791396712128907
- The Structural Complexity of Matrix-Vector Multiplication
Emile Anand, Jan van den Brand, Rose McCarty
https://arxiv.org/abs/2502.21240 https://mastoxiv.page/@arXiv_csDS_bot/114097340825270885
- Clustering under Constraints: Efficient Parameterized Approximation Schemes
Sujoy Bhore, Ameet Gadekar, Tanmay Inamdar
https://arxiv.org/abs/2504.06980 https://mastoxiv.page/@arXiv_csDS_bot/114312444050875805
- Minimizing Envy and Maximizing Happiness in Graphical House Allocation
Anubhav Dhar, Ashlesha Hota, Palash Dey, Sudeshna Kolay
https://arxiv.org/abs/2505.00296 https://mastoxiv.page/@arXiv_csDS_bot/114437013364446063
- Fast and Simple Densest Subgraph with Predictions
Thai Bui, Luan Nguyen, Hoa T. Vu
https://arxiv.org/abs/2505.12600 https://mastoxiv.page/@arXiv_csDS_bot/114538936921930134
- Compressing Suffix Trees by Path Decompositions
Becker, Cenzato, Gagie, Kim, Koerkamp, Manzini, Prezza
https://arxiv.org/abs/2506.14734 https://mastoxiv.page/@arXiv_csDS_bot/114703384646892523
- Improved sampling algorithms and functional inequalities for non-log-concave distributions
Yuchen He, Zhehan Lei, Jianan Shao, Chihao Zhang
https://arxiv.org/abs/2507.11236 https://mastoxiv.page/@arXiv_csDS_bot/114862112197588124
- Deterministic Lower Bounds for $k$-Edge Connectivity in the Distributed Sketching Model
Peter Robinson, Ming Ming Tan
https://arxiv.org/abs/2507.11257 https://mastoxiv.page/@arXiv_csDS_bot/114862223634372292
- Optimally detecting uniformly-distributed $\ell_2$ heavy hitters in data streams
Santhoshini Velusamy, Huacheng Yu
https://arxiv.org/abs/2509.07286 https://mastoxiv.page/@arXiv_csDS_bot/115178875220889588
- Uncrossed Multiflows and Applications to Disjoint Paths
Chandra Chekuri, Guyslain Naves, Joseph Poremba, F. Bruce Shepherd
https://arxiv.org/abs/2511.00254 https://mastoxiv.page/@arXiv_csDS_bot/115490402963680492
- Dynamic Matroids: Base Packing and Covering
Tijn de Vos, Mara Grilnberger
https://arxiv.org/abs/2511.15460 https://mastoxiv.page/@arXiv_csDS_bot/115580946319285096
- Branch-width of connectivity functions is fixed-parameter tractable
Tuukka Korhonen, Sang-il Oum
https://arxiv.org/abs/2601.04756 https://mastoxiv.page/@arXiv_csDS_bot/115864074799755995
- CoinPress: Practical Private Mean and Covariance Estimation
Sourav Biswas, Yihe Dong, Gautam Kamath, Jonathan Ullman
https://arxiv.org/abs/2006.06618
- The Ideal Membership Problem and Abelian Groups
Andrei A. Bulatov, Akbar Rafiey
https://arxiv.org/abs/2201.05218
- Bridging Classical and Quantum: Group-Theoretic Approach to Quantum Circuit Simulation
Daksh Shami
https://arxiv.org/abs/2407.19575 https://mastoxiv.page/@arXiv_quantph_bot/112874282709517475
- Young domination on Hamming rectangles
Janko Gravner, Matjaž Krnc, Martin Milanič, Jean-Florent Raymond
https://arxiv.org/abs/2501.03788 https://mastoxiv.page/@arXiv_mathCO_bot/113791421814248215
- On the Space Complexity of Online Convolution
Joel Daniel Andersson, Amir Yehudayoff
https://arxiv.org/abs/2505.00181 https://mastoxiv.page/@arXiv_csCC_bot/114437005955255553
- Universal Solvability for Robot Motion Planning on Graphs
Anubhav Dhar, Pranav Nyati, Tanishq Prasad, Ashlesha Hota, Sudeshna Kolay
https://arxiv.org/abs/2506.18755 https://mastoxiv.page/@arXiv_csCC_bot/114737342714568702
- Colorful Minors
Evangelos Protopapas, Dimitrios M. Thilikos, Sebastian Wiederrecht
https://arxiv.org/abs/2507.10467
- Learning fermionic linear optics with Heisenberg scaling and physical operations
Aria Christensen, Andrew Zhao
https://arxiv.org/abs/2602.05058
A Modal de Finetti Theorem: Exchangeability under S4 and S5
Daniel Zantedeschi
https://arxiv.org/abs/2603.27547 https://arxiv.org/pdf/2603.27547 https://arxiv.org/html/2603.27547
arXiv:2603.27547v1 Announce Type: new
Abstract: We introduce modal exchangeability, a symmetry principle for probability measures on Kripke frames: invariance under those automorphisms of the frame that preserve the accessibility relation and fix a designated world. This principle characterizes when an agent's uncertainty over possible-world valuations respects the modal structure. We establish representation theorems that determine the probabilistic consequences of modal exchangeability for S4 and S5 frames. Under S5, where accessibility is an equivalence relation, the classical de Finetti theorem is recovered: valuations are conditionally i.i.d. given a single directing measure. Under S4, where accessibility is a preorder, the accessible cluster decomposes into orbits of the stabilizer group, and valuations within each orbit are conditionally i.i.d. with an orbit-specific directing measure. A rigidity constraint emerges: each directing measure must be constant across its orbit. Rigidity is not assumed but forced by symmetry; it is a theorem, not a modeling choice. The proofs are constructive, requiring only dependent choice (ZF+DC), and yield computable representations for recursively presented frames. Rigidity has direct epistemic content: rational agents whose uncertainty respects modal structure cannot assign different latent parameters to worlds within the same orbit. The framework connects probabilistic representation theory to the S4/S5 distinction central to epistemic and temporal logic, with consequences for hyperintensional belief and rational learning under partial information.
Understanding the Role of Rehearsal Scale in Continual Learning under Varying Model Capacities
JinLi He, Liang Bai, Xian Yang
https://arxiv.org/abs/2602.20791 https://arxiv.org/pdf/2602.20791 https://arxiv.org/html/2602.20791
arXiv:2602.20791v1 Announce Type: new
Abstract: Rehearsal is one of the key techniques for mitigating catastrophic forgetting and has been widely adopted in continual learning algorithms due to its simplicity and practicality. However, the theoretical understanding of how rehearsal scale influences learning dynamics remains limited. To address this gap, we formulate rehearsal-based continual learning as a multidimensional effectiveness-driven iterative optimization problem, providing a unified characterization across diverse performance metrics. Within this framework, we derive a closed-form analysis of adaptability, memorability, and generalization from the perspective of rehearsal scale. Our results uncover several intriguing and counterintuitive findings. First, rehearsal can impair a model's adaptability, in sharp contrast to its traditionally recognized benefits. Second, increasing the rehearsal scale does not necessarily improve memory retention. When tasks are similar and noise levels are low, the memory error exhibits a diminishing lower bound. Finally, we validate these insights through numerical simulations and extended analyses on deep neural networks across multiple real-world datasets, revealing statistical patterns of rehearsal mechanisms in continual learning.
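The rehearsal-scale question can be illustrated numerically with a toy sketch (an assumption-laden illustration, not the paper's framework): a linear model learns task A, then continues on a conflicting task B with and without replaying k task-A samples, and the resulting task-A error measures forgetting.

```python
import numpy as np

def gd_fit(w, X, y, lr=0.1, epochs=300):
    """Full-batch gradient descent on mean squared error."""
    for _ in range(epochs):
        w = w - lr * 2.0 * X.T @ (X @ w - y) / len(y)
    return w

def task_error(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

rng = np.random.default_rng(0)
XA = rng.normal(size=(50, 1)); yA = 2.0 * XA[:, 0]    # task A: y = 2x
XB = rng.normal(size=(50, 1)); yB = -2.0 * XB[:, 0]   # task B: y = -2x

w0 = gd_fit(np.zeros(1), XA, yA)     # learn task A first
w_plain = gd_fit(w0, XB, yB)         # continue on B, no rehearsal
k = 10                               # rehearsal scale: replay k task-A samples
Xmix = np.vstack([XB, XA[:k]])
ymix = np.concatenate([yB, yA[:k]])
w_rehearse = gd_fit(w0, Xmix, ymix)  # continue on B with rehearsal
# w_plain drifts to task B's solution (catastrophic forgetting of A),
# while w_rehearse is pulled back toward a compromise, so its task-A
# error is smaller; varying k trades off adaptability vs. memorability.
```

The same setup also exposes the trade-off the abstract highlights: raising k lowers forgetting on task A but moves the model further from the task-B optimum, i.e. rehearsal can impair adaptability.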