Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@arXiv_csDS_bot@mastoxiv.page
2026-02-10 10:45:35

Incremental (k, z)-Clustering on Graphs
Emilio Cruciani, Sebastian Forster, Antonis Skarlatos
arxiv.org/abs/2602.08542 arxiv.org/pdf/2602.08542 arxiv.org/html/2602.08542
arXiv:2602.08542v1 Announce Type: new
Abstract: Given a weighted undirected graph, a number of clusters $k$, and an exponent $z$, the goal in the $(k, z)$-clustering problem on graphs is to select $k$ vertices as centers that minimize the sum, over all vertices, of the distance from the vertex to its closest center raised to the power $z$. In the dynamic setting, the graph is subject to adversarial edge updates, and the goal is to explicitly maintain an exact $(k, z)$-clustering solution in the induced shortest-path metric.
While efficient dynamic $k$-center approximation algorithms on graphs exist [Cruciani et al. SODA 2024], to the best of our knowledge, no prior work provides similar results for the dynamic $(k,z)$-clustering problem. As the main result of this paper, we develop a randomized incremental $(k, z)$-clustering algorithm that maintains with high probability a constant-factor approximation in a graph undergoing edge insertions with a total update time of $\tilde O(k m^{1+o(1)} + k^{1+\frac{1}{\lambda}} m)$, where $\lambda \geq 1$ is an arbitrary fixed constant. Our incremental algorithm consists of two stages. In the first stage, we maintain a constant-factor bicriteria approximate solution of size $\tilde{O}(k)$ with a total update time of $m^{1+o(1)}$ over all adversarial edge insertions. This first stage is an intricate adaptation of the bicriteria approximation algorithm by Mettu and Plaxton [Machine Learning 2004] to incremental graphs. One of our key technical results is that the radii in their algorithm can be assumed to be non-decreasing while the approximation ratio remains constant, a property that may be of independent interest.
In the second stage, we maintain a constant-factor approximate $(k,z)$-clustering solution on a dynamic weighted instance induced by the bicriteria approximate solution. For this subproblem, we employ a dynamic spanner algorithm together with a static $(k,z)$-clustering algorithm.
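As a concrete reading of the objective, here is a minimal Python sketch that evaluates the $(k, z)$-clustering cost of a candidate center set on a small weighted graph. This is only the static cost function in the shortest-path metric, not the paper's incremental algorithm; the function name and adjacency-list format are illustrative.

```python
import heapq

def dijkstra(adj, src):
    # Single-source shortest paths in a weighted undirected graph.
    # adj maps each vertex to a list of (neighbor, weight) pairs.
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def kz_cost(adj, vertices, centers, z):
    # Sum over all vertices of (shortest-path distance to the
    # closest center) raised to the power z.
    dists = {c: dijkstra(adj, c) for c in centers}
    return sum(
        min(dists[c].get(v, float("inf")) for c in centers) ** z
        for v in vertices
    )
```

The $(k, z)$-clustering problem asks for the $k$-element center set minimizing this cost; $z = 1$ recovers $k$-median and $z = 2$ a $k$-means-style objective on the graph metric.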
toXiv_bot_toot

@arXiv_physicschemph_bot@mastoxiv.page
2026-03-27 08:36:07

Deep learning of committor and explainable artificial intelligence analysis for identifying reaction coordinates
Toshifumi Mori, Kei-ichi Okazaki, Kang Kim, Nobuyuki Matubayasi
arxiv.org/abs/2603.25237 arxiv.org/pdf/2603.25237 arxiv.org/html/2603.25237
arXiv:2603.25237v1 Announce Type: new
Abstract: In complex molecular systems, the reaction coordinate (RC) that characterizes transition pathways is essential for understanding the underlying molecular mechanisms. This review surveys a framework for identifying the RC by applying deep learning to the committor, which provides the most reliable measure of the progress along a transition path. The inputs to the neural network are collective variables (CVs) expressed as functions of atomic coordinates of the system, and the corresponding RC is predicted as the output by training the network on the committor as the learning target. Because deep learning models typically operate in a black-box manner, it is difficult to determine which input variables govern the predictions. The incorporation of eXplainable Artificial Intelligence (XAI) techniques enables quantitative assessment of the contributions of individual input variables to the predictions. This approach allows the identification of CVs that play dominant roles and demonstrates that the committor distribution on the surface spanned by the important CVs is separated by well-defined boundaries. The framework provides an explainable deep learning strategy for assigning a molecular mechanism from the RC and is applicable to a wide range of complex molecular systems.
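A toy sketch of the pipeline described above, with a single logistic unit standing in for the deep network and weight magnitude as a crude stand-in for an XAI attribution score. All names are illustrative; the review's actual models and XAI techniques are far more elaborate.

```python
import math

def train_committor(samples, lr=0.5, epochs=500):
    # samples: list of (cv_vector, committor_label in [0, 1]).
    # Fit q(x) = sigmoid(w . x + b) by SGD on squared error,
    # a one-unit stand-in for a committor-predicting network.
    dim = len(samples[0][0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, q in samples:
            s = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-s))
            g = (p - q) * p * (1.0 - p)  # d(0.5*(p-q)^2)/ds
            for i in range(dim):
                w[i] -= lr * g * x[i]
            b -= lr * g
    return w, b

def sensitivity(w):
    # Crude attribution proxy: |weight| per input CV. Real XAI
    # methods (e.g. gradient- or perturbation-based) are richer.
    return [abs(wi) for wi in w]
```

On data where only the first CV determines the committor, the sensitivity score of that CV dominates, mirroring how XAI analysis singles out the CVs that govern the prediction.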

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:37:31

Regret-Guided Search Control for Efficient Learning in AlphaZero
Yun-Jui Tsai, Wei-Yu Chen, Yan-Ru Ju, Yu-Hung Chang, Ti-Rong Wu
arxiv.org/abs/2602.20809 arxiv.org/pdf/2602.20809 arxiv.org/html/2602.20809
arXiv:2602.20809v1 Announce Type: new
Abstract: Reinforcement learning (RL) agents achieve remarkable performance but remain far less learning-efficient than humans. While RL agents require extensive self-play games to extract useful signals, humans often need only a few games, improving rapidly by repeatedly revisiting states where mistakes occurred. This idea, known as search control, aims to restart from valuable states rather than always from the initial state. In AlphaZero, the prior method Go-Exploit applies this idea by sampling past states from self-play or search trees, but it treats all states equally, regardless of their learning potential. We propose Regret-Guided Search Control (RGSC), which extends AlphaZero with a regret network that learns to identify high-regret states, where the agent's evaluation diverges most from the actual outcome. These states are collected from both self-play trajectories and MCTS nodes, stored in a prioritized regret buffer, and reused as new starting positions. Across 9x9 Go, 10x10 Othello, and 11x11 Hex, RGSC outperforms AlphaZero and Go-Exploit by an average of 77 and 89 Elo, respectively. When continuing training from a well-trained 9x9 Go model, RGSC further improves the win rate against KataGo from 69.3% to 78.2%, while both baselines show no improvement. These results demonstrate that RGSC provides an effective mechanism for search control, improving both efficiency and robustness of AlphaZero training. Our code is available at rlg.iis.sinica.edu.tw/papers/r.
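The prioritized regret buffer can be sketched as a small max-priority structure keyed by regret, the gap between the agent's evaluation and the actual game outcome. `RegretBuffer` and its methods are hypothetical names; the paper's actual storage and sampling scheme may differ.

```python
import heapq
import itertools

class RegretBuffer:
    """Illustrative prioritized buffer of high-regret states."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []                  # entries: (-regret, tiebreak, state)
        self._tiebreak = itertools.count()

    def add(self, state, value_estimate, outcome):
        # Regret: how far the agent's evaluation diverged from the
        # actual outcome at this state.
        regret = abs(value_estimate - outcome)
        heapq.heappush(self._heap, (-regret, next(self._tiebreak), state))
        if len(self._heap) > self.capacity:
            # Keep only the highest-regret entries.
            self._heap = heapq.nsmallest(self.capacity, self._heap)
            heapq.heapify(self._heap)

    def next_start(self):
        # Pop the highest-regret state to reuse as a starting position.
        neg_regret, _, state = heapq.heappop(self._heap)
        return state, -neg_regret
```

States from self-play trajectories and MCTS nodes would be `add`ed as games finish, and `next_start` would seed subsequent self-play games from the states the agent most misjudged.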

@arXiv_csDS_bot@mastoxiv.page
2026-02-10 21:08:46

Replaced article(s) found for cs.DS. arxiv.org/list/cs.DS/new
[1/1]:
- Fully Dynamic Adversarially Robust Correlation Clustering in Polylogarithmic Update Time
Vladimir Braverman, Prathamesh Dharangutte, Shreyas Pai, Vihan Shah, Chen Wang
arxiv.org/abs/2411.09979 mastoxiv.page/@arXiv_csDS_bot/
- A Simple and Combinatorial Approach to Proving Chernoff Bounds and Their Generalizations
William Kuszmaul
arxiv.org/abs/2501.03488 mastoxiv.page/@arXiv_csDS_bot/
- The Structural Complexity of Matrix-Vector Multiplication
Emile Anand, Jan van den Brand, Rose McCarty
arxiv.org/abs/2502.21240 mastoxiv.page/@arXiv_csDS_bot/
- Clustering under Constraints: Efficient Parameterized Approximation Schemes
Sujoy Bhore, Ameet Gadekar, Tanmay Inamdar
arxiv.org/abs/2504.06980 mastoxiv.page/@arXiv_csDS_bot/
- Minimizing Envy and Maximizing Happiness in Graphical House Allocation
Anubhav Dhar, Ashlesha Hota, Palash Dey, Sudeshna Kolay
arxiv.org/abs/2505.00296 mastoxiv.page/@arXiv_csDS_bot/
- Fast and Simple Densest Subgraph with Predictions
Thai Bui, Luan Nguyen, Hoa T. Vu
arxiv.org/abs/2505.12600 mastoxiv.page/@arXiv_csDS_bot/
- Compressing Suffix Trees by Path Decompositions
Becker, Cenzato, Gagie, Kim, Koerkamp, Manzini, Prezza
arxiv.org/abs/2506.14734 mastoxiv.page/@arXiv_csDS_bot/
- Improved sampling algorithms and functional inequalities for non-log-concave distributions
Yuchen He, Zhehan Lei, Jianan Shao, Chihao Zhang
arxiv.org/abs/2507.11236 mastoxiv.page/@arXiv_csDS_bot/
- Deterministic Lower Bounds for $k$-Edge Connectivity in the Distributed Sketching Model
Peter Robinson, Ming Ming Tan
arxiv.org/abs/2507.11257 mastoxiv.page/@arXiv_csDS_bot/
- Optimally detecting uniformly-distributed $\ell_2$ heavy hitters in data streams
Santhoshini Velusamy, Huacheng Yu
arxiv.org/abs/2509.07286 mastoxiv.page/@arXiv_csDS_bot/
- Uncrossed Multiflows and Applications to Disjoint Paths
Chandra Chekuri, Guyslain Naves, Joseph Poremba, F. Bruce Shepherd
arxiv.org/abs/2511.00254 mastoxiv.page/@arXiv_csDS_bot/
- Dynamic Matroids: Base Packing and Covering
Tijn de Vos, Mara Grünberger
arxiv.org/abs/2511.15460 mastoxiv.page/@arXiv_csDS_bot/
- Branch-width of connectivity functions is fixed-parameter tractable
Tuukka Korhonen, Sang-il Oum
arxiv.org/abs/2601.04756 mastoxiv.page/@arXiv_csDS_bot/
- CoinPress: Practical Private Mean and Covariance Estimation
Sourav Biswas, Yihe Dong, Gautam Kamath, Jonathan Ullman
arxiv.org/abs/2006.06618
- The Ideal Membership Problem and Abelian Groups
Andrei A. Bulatov, Akbar Rafiey
arxiv.org/abs/2201.05218
- Bridging Classical and Quantum: Group-Theoretic Approach to Quantum Circuit Simulation
Daksh Shami
arxiv.org/abs/2407.19575 mastoxiv.page/@arXiv_quantph_b
- Young domination on Hamming rectangles
Janko Gravner, Matjaž Krnc, Martin Milanič, Jean-Florent Raymond
arxiv.org/abs/2501.03788 mastoxiv.page/@arXiv_mathCO_bo
- On the Space Complexity of Online Convolution
Joel Daniel Andersson, Amir Yehudayoff
arxiv.org/abs/2505.00181 mastoxiv.page/@arXiv_csCC_bot/
- Universal Solvability for Robot Motion Planning on Graphs
Anubhav Dhar, Pranav Nyati, Tanishq Prasad, Ashlesha Hota, Sudeshna Kolay
arxiv.org/abs/2506.18755 mastoxiv.page/@arXiv_csCC_bot/
- Colorful Minors
Evangelos Protopapas, Dimitrios M. Thilikos, Sebastian Wiederrecht
arxiv.org/abs/2507.10467
- Learning fermionic linear optics with Heisenberg scaling and physical operations
Aria Christensen, Andrew Zhao
arxiv.org/abs/2602.05058

@arXiv_mathLO_bot@mastoxiv.page
2026-03-31 08:15:05

A Modal de Finetti Theorem: Exchangeability under S4 and S5
Daniel Zantedeschi
arxiv.org/abs/2603.27547 arxiv.org/pdf/2603.27547 arxiv.org/html/2603.27547
arXiv:2603.27547v1 Announce Type: new
Abstract: We introduce modal exchangeability, a symmetry principle for probability measures on Kripke frames: invariance under those automorphisms of the frame that preserve the accessibility relation and fix a designated world. This principle characterizes when an agent's uncertainty over possible-world valuations respects the modal structure. We establish representation theorems that determine the probabilistic consequences of modal exchangeability for S4 and S5 frames. Under S5, where accessibility is an equivalence relation, the classical de Finetti theorem is recovered: valuations are conditionally i.i.d. given a single directing measure. Under S4, where accessibility is a preorder, the accessible cluster decomposes into orbits of the stabilizer group, and valuations within each orbit are conditionally i.i.d. with an orbit-specific directing measure. A rigidity constraint emerges: each directing measure must be constant across its orbit. Rigidity is not assumed but forced by symmetry; it is a theorem, not a modeling choice. The proofs are constructive, requiring only dependent choice (ZF+DC), and yield computable representations for recursively presented frames. Rigidity has direct epistemic content: rational agents whose uncertainty respects modal structure cannot assign different latent parameters to worlds within the same orbit. The framework connects probabilistic representation theory to the S4/S5 distinction central to epistemic and temporal logic, with consequences for hyperintensional belief and rational learning under partial information.
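Under S5 the recovered representation has the classical de Finetti form; the following display uses illustrative notation, not the paper's exact statement.

```latex
% Classical form recovered under S5: for an exchangeable law P over
% valuations V_1, ..., V_n on the accessible worlds, there is a
% mixing measure \pi over directing measures \mu with
P(V_1 \in A_1, \dots, V_n \in A_n)
  = \int \prod_{i=1}^{n} \mu(A_i) \, \pi(\mathrm{d}\mu).
% Under S4 the same form holds orbit-by-orbit: each stabilizer
% orbit O carries its own directing measure \mu_O, and rigidity
% forces \mu_O to be constant across the worlds of O.
```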

@arXiv_csLG_bot@mastoxiv.page
2026-02-25 10:36:41

Understanding the Role of Rehearsal Scale in Continual Learning under Varying Model Capacities
JinLi He, Liang Bai, Xian Yang
arxiv.org/abs/2602.20791 arxiv.org/pdf/2602.20791 arxiv.org/html/2602.20791
arXiv:2602.20791v1 Announce Type: new
Abstract: Rehearsal is one of the key techniques for mitigating catastrophic forgetting and has been widely adopted in continual learning algorithms due to its simplicity and practicality. However, the theoretical understanding of how rehearsal scale influences learning dynamics remains limited. To address this gap, we formulate rehearsal-based continual learning as a multidimensional effectiveness-driven iterative optimization problem, providing a unified characterization across diverse performance metrics. Within this framework, we derive a closed-form analysis of adaptability, memorability, and generalization from the perspective of rehearsal scale. Our results uncover several intriguing and counterintuitive findings. First, rehearsal can impair a model's adaptability, in sharp contrast to its traditionally recognized benefits. Second, increasing the rehearsal scale does not necessarily improve memory retention. When tasks are similar and noise levels are low, the memory error exhibits a diminishing lower bound. Finally, we validate these insights through numerical simulations and extended analyses on deep neural networks across multiple real-world datasets, revealing statistical patterns of rehearsal mechanisms in continual learning.
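A toy illustration of the rehearsal-scale knob, with a one-parameter linear model in place of a deep network. This is purely illustrative of the rehearsal mechanism itself; the paper's contribution is a closed-form analysis, not this simulation.

```python
import random

def train_with_rehearsal(task_a, task_b, rehearsal_scale, lr=0.05, epochs=200):
    # Toy 1-D model y = w*x trained sequentially on two tasks.
    # rehearsal_scale: fraction of task A kept in a replay buffer
    # while training on task B.
    random.seed(0)
    buffer = random.sample(task_a, int(rehearsal_scale * len(task_a)))
    w = 0.0
    # Phase 1: learn task A.
    for _ in range(epochs):
        for x, y in task_a:
            w -= lr * (w * x - y) * x
    # Phase 2: learn task B while replaying the rehearsal buffer.
    for _ in range(epochs):
        for x, y in task_b + buffer:
            w -= lr * (w * x - y) * x
    return w
```

With no rehearsal the model forgets task A entirely; with a full buffer the final weight is pulled toward a compromise between the two tasks, the basic trade-off whose scaling behavior the paper analyzes.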