2025-10-01 09:23:08
Flexible fixed-point iteration and its applications for nonsymmetric algebraic Riccati equations
Zhen-Chen Guo, Xin Liang
https://arxiv.org/abs/2509.25942
A Block-Activated Decomposition Algorithm for Multi-Stage Stochastic Variational Inequalities
Minh N. Bùi
https://arxiv.org/abs/2509.26198
Woohoo, QGIS Arrow support has been merged: https://github.com/qgis/QGIS/pull/63749
Thanks @… & @…
Locally Lipschitz Path Dependent FBSDEs with Unbounded Terminal Conditions in Brownian and Lévy Settings
Hannah Geiss (JYU), Céline Labart (LAMA), Adrien Richou (IMB), Alexander Steinicke
https://arxiv.org/abs/2509.26423
Global Optimization Algorithm for Mixed-Integer Nonlinear Programs with Trigonometric Functions
Christopher Montez, Sujeevraja Sanjeevi, Kaarthik Sundar
https://arxiv.org/abs/2509.26516
⚡ MXene current collectors could reduce size and improve recyclability of Li-ion batteries
https://techxplore.com/news/2025-10-mxene-current-collectors-size-recyclability.html
Correlating Cross-Iteration Noise for DP-SGD using Model Curvature
Xin Gu, Yingtai Xiao, Guanlin He, Jiamu Bai, Daniel Kifer, Kiwan Maeng
https://arxiv.org/abs/2510.05416
To ChatGPT: List the implant dates of human recipients of a Neuralink brain implant https://chatgpt.com/share/68f4b806-b23c-8004-a18c-30f31a4ba71f "For a company like Neuralink, whose entire value proposition hinges on rapid iteration and scale, a pause of six-plus w…
"For Levy and Newborn, the stakes were clear, and the title of the first chapter of the book says it all: “The Challenge is World Champion Kasparov”. Said chapter describes in detail the match between a first iteration of a chess supercomputer by IBM, the less well-known “Deep Thought”. It was a strong contender, having defeated quite a few grandmasters along the way (including the aforementioned Levy), but was no match for Kasparov in August 1989."
An efficient iteration method to reconstruct the drift term from the final measurement
Dakang Cen, Wenlong Zhang, Zhidong Zhang
https://arxiv.org/abs/2510.10940
Digital ID – The New Chains of Capitalist Surveillance
[…] From passports to colonial passbooks, from welfare cards to border regimes, the apparatus of identification has always been tied to domination. Digital ID is simply the latest iteration of this long history, but with a scale and sophistication that makes its dangers even more profound. […]
👷
Love it when someone finally responds to me after literally months of silence, I ask a question and they respond correcting me and saying they've taken action now anyway "to avoid further iteration and delay".
«at this point, AT Proto has become essentially a sort of ideological vaporware; a way for Jay Graber et al to run a social media platform while claiming they don't run a social media platform. This is, of course, just another iteration of the Silicon Valley monoproduct: power without accountability.»
Delusions of a Protocol | Azhdarchid
#bluesky
Promptimizer: User-Led Prompt Optimization for Personal Content Classification
Leijie Wang, Kathryn Yurechko, Amy X. Zhang
https://arxiv.org/abs/2510.09009
https://web.archive.org/web/20050323234233/http://www.isdickcheneydeadyet.com/
This is the last known iteration of the website “isdickcheneydeadyet.com”, from March 2005. This is when it had a blog format, and continued as a dead site until the end …
An Improved Quantum Algorithm for 3-Tuple Lattice Sieving
Lynn Engelberts, Yanlin Chen, Amin Shiraz Gilani, Maya-Iggy van Hoof, Stacey Jeffery, Ronald de Wolf
https://arxiv.org/abs/2510.08473
Low Complexity Detector for XL-MIMO Uplink: A Cross Splitting Based Information Geometry Approach
Wenjun Zhang, An-An Lu, Xiqi Gao
https://arxiv.org/abs/2510.09039
Implicit Updates for Average-Reward Temporal Difference Learning
Hwanwoo Kim, Dongkyu Derek Cho, Eric Laber
https://arxiv.org/abs/2510.06149
GAR: Generative Adversarial Reinforcement Learning for Formal Theorem Proving
Ruida Wang, Jiarui Yao, Rui Pan, Shizhe Diao, Tong Zhang
https://arxiv.org/abs/2510.11769
Cocoon: A System Architecture for Differentially Private Training with Correlated Noises
Donghwan Kim, Xin Gu, Jinho Baek, Timothy Lo, Younghoon Min, Kwangsik Shin, Jongryool Kim, Jongse Park, Kiwan Maeng
https://arxiv.org/abs/2510.07304
A Frank-Wolfe Algorithm for Strongly Monotone Variational Inequalities
Reza Rahimi Baghbadorani, Peyman Mohajerin Esfahani, Sergio Grammatico
https://arxiv.org/abs/2510.03842
Outer length billiards on a large scale
Peter Albers, Lael Edwards-Costa, Serge Tabachnikov
https://arxiv.org/abs/2510.08370
Harnessing the XMM-Newton data: X-ray spectral modelling of 4XMM-DR11 detections and 4XMM-DR11s sources
A. Viitanen, G. Mountrichas, H. Stiele, F. J. Carrera, A. Ruiz, J. Ballet, A. Akylas, A. Corral, M. Freyberg, A. Georgakakis, I. Georgantopoulos, S. Mateos, C. Motch, A. Nebot, H. Tranin, N. Webb
https://arxiv.org/abs/2510.03409
Improved $\ell_{p}$ Regression via Iteratively Reweighted Least Squares
Alina Ene, Ta Duy Nguyen, Adrian Vladu
https://arxiv.org/abs/2510.01729
Off-Policy Reinforcement Learning with Anytime Safety Guarantees via Robust Safe Gradient Flow
Pol Mestres, Arnau Marzabal, Jorge Cortés
https://arxiv.org/abs/2510.01492
A Warm-basis Method for Bridging Learning and Iteration: a Case Study in Fluorescence Molecular Tomography
Ruchi Guo, Jiahua Jiang, Bangti Jin, Wuwei Ren, Jianru Zhang
https://arxiv.org/abs/2510.05926
The Potential of Second-Order Optimization for LLMs: A Study with Full Gauss-Newton
Natalie Abreu, Nikhil Vyas, Sham Kakade, Depen Morwani
https://arxiv.org/abs/2510.09378
S-D-RSM: Stochastic Distributed Regularized Splitting Method for Large-Scale Convex Optimization Problems
Maoran Wang, Xingju Cai, Yongxin Chen
https://arxiv.org/abs/2511.10133 https://arxiv.org/pdf/2511.10133 https://arxiv.org/html/2511.10133
arXiv:2511.10133v1 Announce Type: new
Abstract: This paper investigates large-scale distributed composite convex optimization problems, motivated by a broad range of applications including multi-agent systems, federated learning, smart grids, wireless sensor networks, and compressed sensing. Stochastic gradient descent (SGD) and its variants are commonly employed to solve such problems. However, existing algorithms often rely on vanishing step sizes or strong convexity assumptions, or entail substantial computational overhead, to ensure convergence or obtain favorable complexity. To bridge the gap between theory and practice, we integrate consensus optimization and operator splitting techniques (see Problem Reformulation) to develop a novel stochastic splitting algorithm, termed the \emph{stochastic distributed regularized splitting method} (S-D-RSM). In practice, S-D-RSM performs parallel updates of proximal mappings and gradient information for only a randomly selected subset of agents at each iteration. By introducing regularization terms, it effectively mitigates consensus discrepancies among the distributed nodes. In contrast to conventional stochastic methods, our theoretical analysis establishes that S-D-RSM achieves global convergence without requiring diminishing step sizes or strong convexity assumptions. Furthermore, it achieves an iteration complexity of $\mathcal{O}(1/\epsilon)$ with respect to both the objective function value and the consensus error. Numerical experiments show that S-D-RSM achieves up to a 2--3$\times$ speedup compared to state-of-the-art baselines, while maintaining comparable or better accuracy. These results not only validate the algorithm's theoretical guarantees but also demonstrate its effectiveness in practical tasks such as compressed sensing and empirical risk minimization.
toXiv_bot_toot
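The block-activated pattern described in that abstract — each iteration refreshes proximal and gradient information for only a random subset of agents, with a regularization pull toward consensus — can be sketched in a few lines. A minimal illustrative Python sketch for a consensus LASSO-type problem, with a made-up gradient-plus-prox local update; this is not the S-D-RSM update rule, whose exact form is given in the paper.

import numpy as np

def prox_l1(v, t):
    # proximal map of t*||.||_1 (soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def block_activated_splitting(A_list, b_list, lam=0.1, rho=1.0, batch=2,
                              iters=500, seed=0):
    # Schematic block-activated splitting for
    #   min_x sum_i 0.5*||A_i x - b_i||^2 + lam*||x||_1.
    # Each iteration, only a random subset of agents updates its local
    # copy via a gradient step regularized toward the consensus average;
    # the consensus variable is then re-averaged. Illustrative only --
    # not the S-D-RSM updates of arXiv:2511.10133.
    rng = np.random.default_rng(seed)
    m, n = len(A_list), A_list[0].shape[1]
    x = np.zeros((m, n))   # local copies
    z = np.zeros(n)        # consensus variable
    step = 1.0 / max(np.linalg.norm(A) ** 2 for A in A_list)
    for _ in range(iters):
        for i in rng.choice(m, size=batch, replace=False):  # activated block
            grad = A_list[i].T @ (A_list[i] @ x[i] - b_list[i])
            x[i] = prox_l1(x[i] - step * (grad + rho * (x[i] - z)), step * lam)
        z = x.mean(axis=0)  # consensus averaging
    return z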
MM-LMPC: Multi-Modal Learning Model Predictive Control via Bandit-Based Mode Selection
Wataru Hashimoto, Kazumune Hashimoto
https://arxiv.org/abs/2510.00410
Flatness-Aware Stochastic Gradient Langevin Dynamics
Stefano Bruno, Youngsik Hwang, Jaehyeon An, Sotirios Sabanis, Dong-Young Lim
https://arxiv.org/abs/2510.02174
A generalized alternating NGMRES method for PDE-constrained optimization problems governed by transport equations
Yunhui He, Andreas Mang
https://arxiv.org/abs/2510.08782
Smoother-type a posteriori error estimates for finite element methods
Yuwen Li, Han Shui
https://arxiv.org/abs/2510.07677 https://arxiv.org/pdf/2510.07677
On the Benefits of Weight Normalization for Overparameterized Matrix Sensing
Yudong Wei, Liang Zhang, Bingcong Li, Niao He
https://arxiv.org/abs/2510.01175
Accelerated Price Adjustment for Fisher Markets with Exact Recovery of Competitive Equilibrium
He Chen, Chonghe Jiang, Anthony Man-Cho So
https://arxiv.org/abs/2510.07759
On the Complexity of Lower-Order Implementations of Higher-Order Methods
Nikita Doikov, Geovani Nunes Grapiglia
https://arxiv.org/abs/2510.07992
Mixed-precision iterative refinement for low-rank Lyapunov equations
Peter Benner, Xiaobo Liu
https://arxiv.org/abs/2510.02126
dHPR: A Distributed Halpern Peaceman--Rachford Method for Non-smooth Distributed Optimization Problems
Zhangcheng Feng, Defeng Sun, Yancheng Yuan, Guojun Zhang
https://arxiv.org/abs/2511.10069 https://arxiv.org/pdf/2511.10069 https://arxiv.org/html/2511.10069
arXiv:2511.10069v1 Announce Type: new
Abstract: This paper introduces the distributed Halpern Peaceman--Rachford (dHPR) method, an efficient algorithm for solving distributed convex composite optimization problems with non-smooth objectives, which achieves a non-ergodic $O(1/k)$ iteration complexity with respect to the Karush--Kuhn--Tucker residual. By leveraging the symmetric Gauss--Seidel decomposition, dHPR effectively decouples the linear operators in the objective functions and consensus constraints while maintaining parallelizability and avoiding additional large proximal terms, leading to a decentralized implementation with provably fast convergence. The superior performance of dHPR is demonstrated through comprehensive numerical experiments on distributed LASSO, group LASSO, and $L_1$-regularized logistic regression problems.
toXiv_bot_toot
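The non-ergodic $O(1/k)$ rate in dHPR comes from Halpern anchoring: $z_{k+1} = \beta_k z_0 + (1-\beta_k) T z_k$ with $\beta_k = 1/(k+2)$, applied to a nonexpansive Peaceman--Rachford operator $T$. A minimal centralized Python sketch on a LASSO objective — not the distributed dHPR, which additionally uses the symmetric Gauss--Seidel decomposition to decouple operators across agents:

import numpy as np

def prox_l1(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def halpern_pr_lasso(A, b, lam=0.1, gamma=1.0, iters=300):
    # Halpern-anchored Peaceman--Rachford for
    #   min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    # T = (2 prox_g - I)(2 prox_f - I) is the PR operator; the anchored
    # step z+ = beta*z0 + (1-beta)*Tz with beta = 1/(k+2) is what gives
    # the non-ergodic O(1/k) rate. Centralized sketch only.
    n = A.shape[1]
    M = np.linalg.inv(np.eye(n) + gamma * (A.T @ A))  # resolvent of g
    prox_g = lambda v: M @ (v + gamma * (A.T @ b))
    z0 = np.zeros(n)
    z = z0.copy()
    for k in range(iters):
        x = prox_l1(z, gamma * lam)      # prox of f = lam*||.||_1
        y = prox_g(2 * x - z)            # step through g's resolvent
        Tz = z + 2 * (y - x)             # Peaceman--Rachford operator
        beta = 1.0 / (k + 2)
        z = beta * z0 + (1 - beta) * Tz  # Halpern anchoring
    return prox_l1(z, gamma * lam)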
An inexact semismooth Newton-Krylov method for semilinear elliptic optimal control problem
Shiqi Chen, Xuesong Chen
https://arxiv.org/abs/2511.10058 https://arxiv.org/pdf/2511.10058 https://arxiv.org/html/2511.10058
arXiv:2511.10058v1 Announce Type: new
Abstract: This paper proposes an inexact semismooth Newton method for solving semilinear elliptic optimal control problems. The method incorporates the generalized minimal residual (GMRES) method, a Krylov subspace method, to solve the Newton equations, and uses a nonmonotone line search to adjust the iteration step size. The original problem is reformulated as a nonlinear equation through variational inequality principles and discretized with a second-order finite difference scheme. By leveraging slanting differentiability, the algorithm constructs semismooth Newton directions and employs GMRES to solve the Newton equations inexactly, significantly reducing computational overhead. A dynamic nonmonotone line search strategy adjusts step sizes adaptively, ensuring global convergence while overcoming local stagnation. Theoretical analysis demonstrates that the algorithm achieves superlinear convergence near optimal solutions when the residual control parameter $\eta_k$ approaches 0. Numerical experiments validate the method's accuracy and efficiency in solving semilinear elliptic optimal control problems, corroborating the theoretical insights.
toXiv_bot_toot
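The solver structure in that abstract — semismooth Newton directions from a generalized Jacobian, Newton systems solved inexactly by GMRES with a forcing term $\eta_k \to 0$, and a nonmonotone line search — can be illustrated on a toy semismooth system. A Python sketch for the made-up test equation $F(u) = Au + \max(u,0) - b = 0$ (the rtol keyword follows current SciPy); it shows the strategy only and does not reproduce the paper's PDE discretization:

import numpy as np
from scipy.sparse.linalg import gmres

def semismooth_newton_gmres(A, b, u0, tol=1e-10, max_it=50, memory=5):
    # Inexact semismooth Newton for F(u) = A u + max(u, 0) - b = 0.
    # max(u, 0) is semismooth; one element of its generalized Jacobian
    # is diag(u > 0). Newton systems are solved inexactly by GMRES with
    # a vanishing forcing term, and the line search is nonmonotone: the
    # new residual is compared to the worst of the last few residuals.
    F = lambda v: A @ v + np.maximum(v, 0.0) - b
    u = u0.copy()
    hist = [np.linalg.norm(F(u))]
    for _ in range(max_it):
        r = F(u)
        nr = np.linalg.norm(r)
        if nr < tol:
            break
        J = A + np.diag((u > 0).astype(float))  # generalized Jacobian
        eta_k = min(0.5, nr)                    # forcing term -> 0
        d, _ = gmres(J, -r, rtol=eta_k)         # inexact Newton direction
        t, ref = 1.0, max(hist[-memory:])       # nonmonotone reference value
        while np.linalg.norm(F(u + t * d)) > (1 - 1e-4 * t) * ref and t > 1e-8:
            t *= 0.5                            # backtracking
        u = u + t * d
        hist.append(np.linalg.norm(F(u)))
    return u

# tiny demo: A positive definite makes F strongly monotone, so the root is unique
A = np.array([[3.0, 1.0], [1.0, 3.0]])
u = semismooth_newton_gmres(A, np.array([1.0, -2.0]), np.zeros(2))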
A Computationally Efficient Finite Element Method for Shape Reconstruction of Inverse Conductivity Problems
Lefu Cai, Zhixin Liu, Minghui Song, Xianchao Wang
https://arxiv.org/abs/2510.00597
(Adaptive) Scaled gradient methods beyond locally Hölder smoothness: Lyapunov analysis, convergence rate and complexity
Susan Ghaderi, Morteza Rahimi, Yves Moreau, Masoud Ahookhosh
https://arxiv.org/abs/2511.10425 https://arxiv.org/pdf/2511.10425 https://arxiv.org/html/2511.10425
arXiv:2511.10425v1 Announce Type: new
Abstract: This paper addresses the unconstrained minimization of smooth convex functions whose gradients are locally Hölder continuous. We analyze the Scaled Gradient Algorithm (SGA) under these local smoothness assumptions, proving its global convergence and iteration complexity. Furthermore, under local strong convexity and the Kurdyka-Łojasiewicz (KL) inequality, we establish linear convergence rates and provide explicit complexity bounds. In particular, we show that when the gradient is locally Lipschitz continuous, SGA attains linear convergence for any KL exponent. We then introduce and analyze an adaptive variant of SGA (AdaSGA), which automatically adjusts the scaling and step-size parameters. For this method, we show global convergence and derive local linear rates under strong convexity.
toXiv_bot_toot
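The generic step behind SGA is $x^{+} = x - t\,D^{-1}\nabla f(x)$ with a positive definite scaling $D$, and in the locally Hölder setting the step size must come from local information rather than a global Lipschitz constant. A schematic Python sketch with plain Armijo backtracking standing in for the paper's step-size rules; it is not SGA/AdaSGA as defined in arXiv:2511.10425:

import numpy as np

def scaled_gradient(f, grad, x0, D=None, t0=1.0, iters=200, tol=1e-8):
    # Scaled gradient method x+ = x - t * D^{-1} grad f(x) with
    # diagonal scaling D. Backtracking picks the step from local
    # information only, which is the point of the locally Hölder
    # setting: no global Lipschitz constant is assumed anywhere.
    x = x0.copy()
    Dinv = np.ones(x.size) if D is None else 1.0 / D   # diagonal scaling
    t = t0
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -Dinv * g                                  # scaled descent direction
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5                                   # Armijo backtracking
        x = x + t * d
        t *= 2.0                                       # tentative step growth
    return x

# e.g. a badly scaled quadratic, scaled by its own diagonal
Q = np.diag([1.0, 100.0])
x_min = scaled_gradient(lambda x: 0.5 * x @ Q @ x, lambda x: Q @ x,
                        np.array([1.0, 1.0]), D=np.diag(Q).copy())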
Measuring dissimilarity between convex cones by means of max-min angles
Welington de Oliveira, Valentina Sessa, David Sossa
https://arxiv.org/abs/2511.10483 https://arxiv.org/pdf/2511.10483 https://arxiv.org/html/2511.10483
arXiv:2511.10483v1 Announce Type: new
Abstract: This work introduces a novel dissimilarity measure between two convex cones, based on the max-min angle between them. We demonstrate that this measure is closely related to the Pompeiu-Hausdorff distance, a well-established metric for comparing compact sets. Furthermore, we examine cone configurations where the measure admits simplified or analytic forms. For the specific case of polyhedral cones, a nonconvex cutting-plane method is deployed to compute, at least approximately, the measure between them. Our approach builds on a tailored version of Kelley's cutting-plane algorithm, which involves solving a challenging master program at each iteration. When this master program is solved locally, our method yields an angle that satisfies certain necessary optimality conditions of the underlying nonconvex optimization problem defining the dissimilarity measure. As an application of the proposed mathematical and algorithmic framework, we address the image-set classification task under limited data conditions, a task that falls within the scope of the \emph{Few-Shot Learning} paradigm. In this context, image sets belonging to the same class are modeled as polyhedral cones, and our dissimilarity measure proves useful for deciding whether two image sets belong to the same class.
toXiv_bot_toot
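For intuition on the inner problem: for a unit ray $u$, the smallest angle to a closed convex cone $K$ is $\arccos\|P_K(u)\|$, so for a polyhedral cone in generator form the inner minimization reduces to a nonnegative least-squares projection. A crude Python sampling sketch of the max-min angle — the paper instead solves the outer nonconvex problem with a tailored Kelley cutting-plane method, which this does not attempt:

import numpy as np
from scipy.optimize import nnls

def min_angle_to_cone(u, G):
    # Smallest angle between the unit ray u and cone(G) = {G @ lam : lam >= 0},
    # via the projection of u onto the cone (a nonnegative least-squares fit).
    lam, _ = nnls(G, u)
    p = G @ lam                          # projection of u onto cone(G)
    if np.linalg.norm(p) < 1e-12:
        return np.pi / 2                 # projection is 0: angle >= 90 degrees
    c = np.clip(u @ p / np.linalg.norm(p), -1.0, 1.0)
    return np.arccos(c)

def maxmin_angle_sampled(F, G, samples=2000, seed=0):
    # Monte-Carlo lower bound on max_{u in cone(F)} min_{v in cone(G)}
    # angle(u, v), obtained by sampling random rays of cone(F).
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(samples):
        u = F @ rng.random(F.shape[1])   # random ray of cone(F)
        norm_u = np.linalg.norm(u)
        if norm_u > 1e-12:
            best = max(best, min_angle_to_cone(u / norm_u, G))
    return best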
A Time-certified Predictor-corrector IPM Algorithm for Box-QP
Liang Wu, Yunhong Che, Richard D. Braatz, Jan Drgona
https://arxiv.org/abs/2510.04467
Exponential convergence of a distributed divide-and-conquer algorithm for constrained convex optimization on networks
Nazar Emirov, Guohui Song, Qiyu Sun
https://arxiv.org/abs/2510.01511
DeMuon: A Decentralized Muon for Matrix Optimization over Graphs
Chuan He, Shuyi Ren, Jingwei Mao, Erik G. Larsson
https://arxiv.org/abs/2510.01377