Tootfinder

Opt-in global Mastodon full text search. Join the index!

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:32:40

Convergence Guarantees for Federated SARSA with Local Training and Heterogeneous Agents
Paul Mangold, Eloïse Berthier, Eric Moulines
arxiv.org/abs/2512.17688 arxiv.org/pdf/2512.17688 arxiv.org/html/2512.17688
arXiv:2512.17688v1 Announce Type: new
Abstract: We present a novel theoretical analysis of Federated SARSA (FedSARSA) with linear function approximation and local training. We establish convergence guarantees for FedSARSA in the presence of heterogeneity, both in local transitions and rewards, providing the first sample and communication complexity bounds in this setting. At the core of our analysis is a new, exact multi-step error expansion for single-agent SARSA, which is of independent interest. Our analysis precisely quantifies the impact of heterogeneity, demonstrating the convergence of FedSARSA with multiple local updates. Crucially, we show that FedSARSA achieves linear speed-up with respect to the number of agents, up to higher-order terms due to Markovian sampling. Numerical experiments support our theoretical findings.
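A minimal sketch of the federated SARSA loop described above, assuming a toy setup: each agent runs H local SARSA updates with linear function approximation on its own heterogeneous transition kernel and rewards, and a server averages the parameters each round. The environment, feature map, and step sizes are illustrative assumptions, not from the paper.

import numpy as np

rng = np.random.default_rng(0)
n_agents, n_states, n_actions, dim = 5, 8, 2, 4
H, rounds, alpha, gamma, eps = 10, 200, 0.05, 0.9, 0.1

phi = rng.normal(size=(n_states, n_actions, dim))            # shared feature map
P = rng.dirichlet(np.ones(n_states), size=(n_agents, n_states, n_actions))
R = rng.normal(size=(n_agents, n_states, n_actions))         # heterogeneous rewards

def eps_greedy(w, s):
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax(phi[s] @ w))

w_global = np.zeros(dim)
for _ in range(rounds):                                      # communication rounds
    local = []
    for i in range(n_agents):
        w = w_global.copy()
        s = int(rng.integers(n_states)); a = eps_greedy(w, s)
        for _ in range(H):                                   # local training steps
            s2 = int(rng.choice(n_states, p=P[i, s, a]))
            a2 = eps_greedy(w, s2)
            td = R[i, s, a] + gamma * phi[s2, a2] @ w - phi[s, a] @ w
            w += alpha * td * phi[s, a]                      # SARSA(0) update
            s, a = s2, a2
        local.append(w)
    w_global = np.mean(local, axis=0)                        # server averaging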
toXiv_bot_toot

@cosmos4u@scicomm.xyz
2026-01-22 04:59:09

Dripping to Destruction - Exploring Salt-driven Viscous Surface Convergence in Europa’s Icy Shell: #Europa

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:37:40

Locally Linear Convergence for Nonsmooth Convex Optimization via Coupled Smoothing and Momentum
Reza Rahimi Baghbadorani, Sergio Grammatico, Peyman Mohajerin Esfahani
arxiv.org/abs/2511.10239 arxiv.org/pdf/2511.10239 arxiv.org/html/2511.10239
arXiv:2511.10239v1 Announce Type: new
Abstract: We propose an adaptive accelerated smoothing technique for a nonsmooth convex optimization problem where the smoothing update rule is coupled with the momentum parameter. We also extend the setting to the case where the objective function is the sum of two nonsmooth functions. With regard to convergence rate, we provide global sublinear convergence guarantees of O(1/k), which is provably optimal for the studied class of functions, along with a local linear rate if the nonsmooth term fulfills a so-called local strong convexity condition. We validate the performance of our algorithm on several problem classes, including regression with the l1-norm (the Lasso problem), sparse semidefinite programming (the MaxCut problem), nuclear norm minimization with application in model-free fault diagnosis, and l_1-regularized model predictive control, to showcase the benefits of the coupling. An interesting observation is that although our global convergence result guarantees O(1/k) convergence, we consistently observe a practical transient convergence rate of O(1/k^2), followed by asymptotic linear convergence as anticipated by the theoretical result. This two-phase behavior can also be explained in view of the proposed smoothing rule.
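For intuition, a minimal sketch of the coupling on the Lasso (one of the paper's test problems): the l1 term is smoothed by a Huber envelope with parameter mu_k, which is shrunk in lockstep with a Nesterov momentum schedule. The rule mu_k = 1/k and all constants are illustrative assumptions, not the paper's update.

import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 100))
x_true = (rng.random(100) < 0.05).astype(float)
b = A @ x_true + 0.01 * rng.normal(size=40)
lam = 0.1
L0 = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the LS part

def grad_smoothed(x, mu):
    g_l1 = np.clip(x / mu, -1.0, 1.0)          # gradient of the Huber-smoothed |.|
    return A.T @ (A @ x - b) + lam * g_l1

x = y = np.zeros(100)
t = 1.0
for k in range(1, 500):
    mu = 1.0 / k                               # smoothing coupled to the schedule
    step = 1.0 / (L0 + lam / mu)               # smoothed objective is (L0+lam/mu)-smooth
    x_new = y - step * grad_smoothed(y, mu)
    t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov momentum
    x, t = x_new, t_new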
toXiv_bot_toot

@tiotasram@kolektiva.social
2026-01-18 23:17:10
Content warning: ICE & resistance

In case anyone was wondering about the relevance of #LandBack in the current moment, via CrimeThinc, an article on the Minneapolis resistance states:
"""
The Whipple, a federal building in Fort Snelling on the outskirts of Minneapolis and St. Paul, has long been a regional headquarters for ICE, having previously housed other federal agencies. The complex is located across the street from a National Guard base, down the road from a military base, and next to the preserved fort itself. The fort sits on the sacred site of the convergence of two rivers. It was one of the earliest sites of colonization in the area; at one time, it was a concentration camp holding native Dakota people.
"""
If at any point in the past you ever felt that maybe Native sovereignty was a niche issue, or so far from being realized that other causes were more important or relevant, things like this are a good reminder that that cause, overturning the colonial order, is the *same* cause as any meaningful change from the fascist status quo. Things like a "return to democracy" aren't necessarily bad, but the rot runs to the root of this nation, and any intervention that doesn't go that deep is going to leave us right back in this situation again later on.
The fact that ICE is detaining Native Americans is not at all a mistake given their white supremacist aims.
Article link: #ICE #LandBack

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:50:00

(Adaptive) Scaled gradient methods beyond locally Hölder smoothness: Lyapunov analysis, convergence rate and complexity
Susan Ghaderi, Morteza Rahimi, Yves Moreau, Masoud Ahookhosh
arxiv.org/abs/2511.10425 arxiv.org/pdf/2511.10425 arxiv.org/html/2511.10425
arXiv:2511.10425v1 Announce Type: new
Abstract: This paper addresses the unconstrained minimization of smooth convex functions whose gradients are locally Hölder continuous. In this setting, we analyze the Scaled Gradient Algorithm (SGA) under local smoothness assumptions, proving its global convergence and iteration complexity. Furthermore, under local strong convexity and the Kurdyka-Łojasiewicz (KL) inequality, we establish linear convergence rates and provide explicit complexity bounds. In particular, we show that when the gradient is locally Lipschitz continuous, SGA attains linear convergence for any KL exponent. We then introduce and analyze an adaptive variant of SGA (AdaSGA), which automatically adjusts the scaling and step-size parameters. For this method, we show global convergence, and derive local linear rates under strong convexity.
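A minimal sketch of a scaled-gradient step of this general shape, with a hypothetical diagonal scaling and Armijo backtracking (the paper's scaling and adaptive step-size rules are more refined); the test function has a Hölder, non-Lipschitz gradient.

import numpy as np

def sga(f, grad, x, iters=200):
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < 1e-12:
            break
        D = 1.0 + np.abs(x)                # illustrative diagonal scaling
        d = -g / D                         # scaled descent direction
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * (g @ d):   # Armijo backtracking
            t *= 0.5
        x = x + t * d
    return x

f = lambda x: np.sum(np.abs(x) ** 1.5)     # gradient is 0.5-Hölder, not Lipschitz
grad = lambda x: 1.5 * np.sign(x) * np.sqrt(np.abs(x))
print(sga(f, grad, np.array([2.0, -3.0])))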
toXiv_bot_toot

@seeingwithsound@mas.to
2026-01-10 09:24:46

(LinkedIn) #Neuroscience in 2026: Realistic milestones the scientific community can achieve linkedin.com/pulse/neuroscienc

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:34:40

Weighted Stochastic Differential Equation to Implement Wasserstein-Fisher-Rao Gradient Flow
Herlock Rahimi
arxiv.org/abs/2512.17878 arxiv.org/pdf/2512.17878 arxiv.org/html/2512.17878
arXiv:2512.17878v1 Announce Type: new
Abstract: Score-based diffusion models currently constitute the state of the art in continuous generative modeling. These methods are typically formulated via overdamped or underdamped Ornstein--Uhlenbeck-type stochastic differential equations, in which sampling is driven by a combination of deterministic drift and Brownian diffusion, resulting in continuous particle trajectories in the ambient space. While such dynamics enjoy exponential convergence guarantees for strongly log-concave target distributions, it is well known that their mixing rates deteriorate exponentially in the presence of nonconvex or multimodal landscapes, such as double-well potentials. Since many practical generative modeling tasks involve highly non-log-concave target distributions, considerable recent effort has been devoted to developing sampling schemes that improve exploration beyond classical diffusion dynamics.
A promising line of work leverages tools from information geometry to augment diffusion-based samplers with controlled mass reweighting mechanisms. This perspective leads naturally to Wasserstein--Fisher--Rao (WFR) geometries, which couple transport in the sample space with vertical (reaction) dynamics on the space of probability measures. In this work, we formulate such reweighting mechanisms through the introduction of explicit correction terms and show how they can be implemented via weighted stochastic differential equations using the Feynman--Kac representation. Our study provides a preliminary but rigorous investigation of WFR-based sampling dynamics, and aims to clarify their geometric and operator-theoretic structure as a foundation for future theoretical and algorithmic developments.
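A schematic of the weighted-SDE idea for a double-well potential: particles follow Euler-Maruyama steps of the Langevin SDE (transport), carry Feynman-Kac-style weights (the reaction/mass-reweighting step), and are resampled when the effective sample size degrades. The reaction rate below is a generic illustrative choice, not the paper's correction term.

import numpy as np

rng = np.random.default_rng(2)
V = lambda x: (x**2 - 1.0) ** 2            # double-well potential
dV = lambda x: 4.0 * x * (x**2 - 1.0)

n, dt, steps = 2000, 1e-3, 5000
x = rng.normal(size=n)
w = np.ones(n) / n
for _ in range(steps):
    # transport: Euler-Maruyama step of the Langevin SDE
    x += -dV(x) * dt + np.sqrt(2 * dt) * rng.normal(size=n)
    # reaction: Feynman-Kac reweighting against the ensemble average of V
    w *= np.exp(-dt * (V(x) - np.mean(V(x))))
    w /= w.sum()
    if 1.0 / np.sum(w**2) < n / 2:         # resample when ESS degrades
        x = rng.choice(x, size=n, p=w)
        w = np.ones(n) / n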
toXiv_bot_toot

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:19:00

Global Convergence of Four-Layer Matrix Factorization under Random Initialization
Minrui Luo, Weihang Xu, Xiang Gao, Maryam Fazel, Simon Shaolei Du
arxiv.org/abs/2511.09925 arxiv.org/pdf/2511.09925 arxiv.org/html/2511.09925
arXiv:2511.09925v1 Announce Type: new
Abstract: Gradient descent dynamics on the deep matrix factorization problem are extensively studied as a simplified theoretical model for deep neural networks. Although the convergence theory for two-layer matrix factorization is well-established, no global convergence guarantee for general deep matrix factorization under random initialization has been established to date. To address this gap, we provide a polynomial-time global convergence guarantee for randomly initialized gradient descent on four-layer matrix factorization, given certain conditions on the target matrix and a standard balanced regularization term. Our analysis employs new techniques to show saddle-avoidance properties of gradient descent dynamics, and extends previous theories to characterize the change in eigenvalues of layer weights.
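A small sketch of the analyzed setting, under illustrative dimensions and constants: plain gradient descent on ||W4 W3 W2 W1 - M||_F^2 plus the standard balanced regularizer penalizing mismatched Gram matrices of consecutive layers, from small random initialization.

import numpy as np

rng = np.random.default_rng(3)
d, r, lr, reg = 8, 2, 0.005, 0.5
M = rng.normal(size=(d, r)) @ rng.normal(size=(r, d)) / d   # rank-r target
W = [0.3 * rng.normal(size=(d, d)) for _ in range(4)]       # random init

for _ in range(10000):
    E = W[3] @ W[2] @ W[1] @ W[0] - M                       # residual
    G = [(W[3] @ W[2] @ W[1]).T @ E,                        # grad wrt each layer
         (W[3] @ W[2]).T @ E @ W[0].T,
         W[3].T @ E @ (W[1] @ W[0]).T,
         E @ (W[2] @ W[1] @ W[0]).T]
    for i in range(3):                                      # balanced regularizer
        B = W[i + 1].T @ W[i + 1] - W[i] @ W[i].T
        G[i] -= reg * B @ W[i]
        G[i + 1] += reg * W[i + 1] @ B
    W = [Wi - lr * Gi for Wi, Gi in zip(W, G)]

# residual should shrink markedly under these illustrative constants
print(np.linalg.norm(W[3] @ W[2] @ W[1] @ W[0] - M))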
toXiv_bot_toot

@arXiv_statML_bot@mastoxiv.page
2025-11-13 08:42:39

Convergence and Stability Analysis of Self-Consuming Generative Models with Heterogeneous Human Curation
Hongru Zhao, Jinwen Fu, Tuan Pham
arxiv.org/abs/2511.09002

@ginevra@hachyderm.io
2025-11-01 09:14:43

Oh! The video games I played in October all begin with 'B':
The Biggleboss INCident; Blackwell Unbound; The Blackwell Convergence; The Blackwell Deception; Backpack Battles.
Yep, #pointandclick
I've started the last in the Blackwell series - I'm not finished yet. I know they're older, but maybe I'll write a review of the series when I'm done.
I also played #SteamNextFest Demos: Goblin Sushi (many hours' worth); Nighthawks; Atomic Age; Moon Garden Optimizer; Mystery of Silence; Servant of the Lake; Uncle Lee's Cookbook
There were 5 other demos I played, but I uninstalled them after 10 minutes' play - they weren't for me

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:28:40

Convergence analysis of inexact MBA method for constrained upper-$\mathcal{C}^2$ optimization problems
Ruyu Liu, Shaohua Pan
arxiv.org/abs/2511.09940 arxiv.org/pdf/2511.09940 arxiv.org/html/2511.09940
arXiv:2511.09940v1 Announce Type: new
Abstract: This paper concerns a class of constrained optimization problems in which the objective and constraint functions are both upper-$\mathcal{C}^2$. For such nonconvex and nonsmooth optimization problems, we develop an inexact moving balls approximation (MBA) method with a workable inexactness criterion for the solution of subproblems. By leveraging a global error bound for the strongly convex program associated with parametric optimization problems, we establish the full convergence of the iterate sequence under the partial bounded multiplier property (BMP) and the Kurdyka-Łojasiewicz (KL) property of the constructed potential function, and establish the local convergence rate of the iterate and objective value sequences when the potential function satisfies the KL property of exponent $q\in[1/2,1)$. A verifiable condition is also provided to check whether the potential function satisfies the KL property of exponent $q\in[1/2,1)$ at a given critical point. To the best of our knowledge, this is the first implementable inexact MBA method with a full convergence certificate for constrained nonconvex and nonsmooth optimization problems.
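A minimal sketch of one moving-balls-approximation step, assuming hypothetical curvature constants and a toy smooth problem: at the current point, both the objective and the constraint are replaced by quadratic majorants ("balls"), and the resulting strongly convex subproblem is solved, here via SciPy, standing in for the paper's inexact subproblem solves and its inexactness criterion.

import numpy as np
from scipy.optimize import minimize

f0 = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2         # objective
g0 = lambda x: np.array([2 * (x[0] - 2), 2 * (x[1] - 1)])
f1 = lambda x: x[0] ** 2 + x[1] ** 2 - 1.0               # constraint f1 <= 0
g1 = lambda x: 2 * x
L0, L1 = 2.0, 2.0                                        # hypothetical curvatures

x = np.zeros(2)
for _ in range(30):
    xc = x.copy()
    maj0 = lambda y: f0(xc) + g0(xc) @ (y - xc) + 0.5 * L0 * (y - xc) @ (y - xc)
    maj1 = lambda y: f1(xc) + g1(xc) @ (y - xc) + 0.5 * L1 * (y - xc) @ (y - xc)
    res = minimize(maj0, xc,
                   constraints={'type': 'ineq', 'fun': lambda y: -maj1(y)})
    x = res.x
print(x)   # approaches the KKT point (2, 1)/sqrt(5) on the unit circle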
toXiv_bot_toot

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:41:00

Minimizing smooth Kurdyka-Łojasiewicz functions via generalized descent methods: Convergence rate and complexity
Masoud Ahookhosh, Susan Ghaderi, Alireza Kabgani, Morteza Rahimi
arxiv.org/abs/2511.10414 arxiv.org/pdf/2511.10414 arxiv.org/html/2511.10414
arXiv:2511.10414v1 Announce Type: new
Abstract: This paper addresses the generalized descent algorithm (DEAL) for minimizing smooth functions, which is analyzed under the Kurdyka-Łojasiewicz (KL) inequality. In particular, the suggested algorithm guarantees a sufficient decrease by adapting to the cost function's geometry. We leverage the KL property to establish global convergence, convergence rates, and complexity. A particular focus is placed on the linear convergence of generalized descent methods. We show that the constant step-size and Armijo line search strategies along a generalized descent direction satisfy our generalized descent condition. Additionally, for nonsmooth functions, by leveraging smoothing techniques such as forward-backward and high-order Moreau envelopes, we show that the boosted proximal gradient method (BPGA) and the boosted high-order proximal-point method (BPPA) are also special cases of DEAL. Notably, if the order of the high-order proximal term is chosen in a certain way (depending on the KL exponent), then the sequence generated by BPPA converges linearly for an arbitrary KL exponent. Our preliminary numerical experiments on inverse problems and LASSO demonstrate the efficiency of the proposed methods, validating our theoretical findings.
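A minimal sketch of the descent template on a simple quadratic test problem: any direction passing an Armijo-type sufficient-decrease test qualifies as a generalized descent direction, and a Newton-type direction is one admissible choice. All constants are illustrative.

import numpy as np

def deal(f, grad, x, direction, iters=100, beta=0.5, c=1e-4):
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < 1e-10:
            break
        d = direction(x, g)                # generalized descent direction
        t = 1.0
        while f(x + t * d) > f(x) + c * t * (g @ d):   # sufficient decrease
            t *= beta
        x = x + t * d
    return x

A = np.diag([1.0, 10.0])                   # ill-conditioned quadratic
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
newton_dir = lambda x, g: -np.linalg.solve(A, g)       # one admissible choice
print(deal(f, grad, np.array([5.0, 5.0]), newton_dir))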
toXiv_bot_toot

@arXiv_csGT_bot@mastoxiv.page
2025-12-10 08:00:50

Multi-agent learning under uncertainty: Recurrence vs. concentration
Kyriakos Lotidis, Panayotis Mertikopoulos, Nicholas Bambos, Jose Blanchet
arxiv.org/abs/2512.08132 arxiv.org/pdf/2512.08132 arxiv.org/html/2512.08132
arXiv:2512.08132v1 Announce Type: new
Abstract: In this paper, we examine the convergence landscape of multi-agent learning under uncertainty. Specifically, we analyze two stochastic models of regularized learning in continuous games -- one in continuous and one in discrete time -- with the aim of characterizing the long-run behavior of the induced sequence of play. In stark contrast to deterministic, full-information models of learning (or models with a vanishing learning rate), we show that the resulting dynamics do not converge in general. Instead, we ask which actions are played more often in the long run, and by how much. We show that, in strongly monotone games, the dynamics of regularized learning may wander away from equilibrium infinitely often, but they always return to its vicinity in finite time (which we estimate), and their long-run distribution is sharply concentrated around a neighborhood thereof. We quantify the degree of this concentration, and we show that these favorable properties may all break down if the underlying game is not strongly monotone -- underscoring in this way the limits of regularized learning in the presence of persistent randomness and uncertainty.
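To illustrate the recurrence-vs-concentration picture, a toy sketch assuming a two-player strongly monotone game and Euclidean-regularized learning (i.e., noisy gradient play) with a constant step: the iterates never settle, but their distance to equilibrium stays concentrated, with occasional excursions. Game, noise level, and step size are hypothetical.

import numpy as np

rng = np.random.default_rng(4)
c = np.array([1.0, -1.0])
# game gradient field; Jacobian [[2,1],[1,2]] is PD, so the game is strongly monotone
F = lambda x: np.array([2 * (x[0] - c[0]) + x[1], 2 * (x[1] - c[1]) + x[0]])
x_star = np.linalg.solve(np.array([[2.0, 1.0], [1.0, 2.0]]), 2 * c)  # Nash eq.

x, eta, sigma = np.zeros(2), 0.05, 1.0
dists = []
for _ in range(20000):
    g = F(x) + sigma * rng.normal(size=2)  # noisy payoff gradients
    x = x - eta * g                        # Euclidean-regularized (FTRL) update
    dists.append(np.linalg.norm(x - x_star))
print(np.mean(dists), np.max(dists))       # concentrated mean, rare excursions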
toXiv_bot_toot

@arXiv_mathGN_bot@mastoxiv.page
2025-11-07 07:43:49

Certain results on selection principles associated with bornological structure in topological spaces
Debraj Chandra, Subhankar Das, Nur Alam
arxiv.org/abs/2511.04038 arxiv.org/pdf/2511.04038 arxiv.org/html/2511.04038
arXiv:2511.04038v1 Announce Type: new
Abstract: We study selection principles related to bornological covers in a topological space $X$ following the work of Aurichi et al., 2019, where selection principles have been investigated in the function space $C_\mathfrak{B}(X)$ endowed with the topology $\tau_\mathfrak{B}$ of uniform convergence on a bornology $\mathfrak{B}$. We show equivalences among certain selection principles and present some game-theoretic observations involving bornological covers. We investigate selection principles on the product space $X^n$ equipped with the product bornology $\mathfrak{B}^n$, $n\in \omega$. Considering cardinal invariants such as the unbounding number ($\mathfrak{b}$), the dominating number ($\mathfrak{d}$), the pseudointersection number ($\mathfrak{p}$), etc., we establish connections between the cardinality of a base of a bornology and certain selection principles. Finally, we investigate some variations of the tightness properties of $C_\mathfrak{B}(X)$ and present their characterizations in terms of selective bornological covering properties of $X$.
toXiv_bot_toot

@arXiv_physicsatomph_bot@mastoxiv.page
2026-01-06 14:20:02

Crosslisted article(s) found for physics.atom-ph. arxiv.org/list/physics.atom-ph
[1/1]:
- A quadratic-scaling algorithm with guaranteed convergence for quantum coupled-channel calculations
Hubert J. Jóźwiak, Md Muktadir Rahman, Timur V. Tscherbul

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:39:30

Halpern Acceleration of the Inexact Proximal Point Method of Rockafellar
Liwei Zhang, Fanli Zhuang, Ning Zhang
arxiv.org/abs/2511.10372 arxiv.org/pdf/2511.10372 arxiv.org/html/2511.10372
arXiv:2511.10372v1 Announce Type: new
Abstract: This paper investigates a Halpern acceleration of the inexact proximal point method for solving maximal monotone inclusion problems in Hilbert spaces. The proposed Halpern inexact proximal point method (HiPPM) is shown to be globally convergent, and a unified framework is developed to analyze its worst-case convergence rate. Under mild summability conditions on the inexactness tolerances, HiPPM achieves an $\mathcal{O}(1/k^{2})$ rate in terms of the squared fixed-point residual. Furthermore, under an additional mild condition, the method retains a fast linear convergence rate. Building upon this framework, we further extend the acceleration technique to constrained convex optimization through the augmented Lagrangian formulation. In analogy to Rockafellar's classical results, the resulting accelerated inexact augmented Lagrangian method inherits the convergence rate and complexity guarantees of HiPPM. The analysis thus provides a unified theoretical foundation for accelerated inexact proximal algorithms and their augmented Lagrangian extensions.
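A minimal sketch of the Halpern-anchored proximal point iteration with the classical anchor weights beta_k = 1/(k+2), using the exact prox of the l1 norm as the resolvent (the paper additionally allows controlled inexactness and the augmented Lagrangian extension).

import numpy as np

# prox of c*||.||_1 (soft-thresholding); its only fixed point is 0
prox = lambda x, c: np.sign(x) * np.maximum(np.abs(x) - c, 0.0)

x0 = x = np.array([3.0, -2.0, 0.5])
c = 0.1
for k in range(200):
    beta = 1.0 / (k + 2)                  # Halpern anchor weight
    x = beta * x0 + (1 - beta) * prox(x, c)
# the squared fixed-point residual ||x - prox(x, c)||^2 decays at the O(1/k^2) rate
print(np.linalg.norm(x - prox(x, c)) ** 2)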
toXiv_bot_toot

@arXiv_csGT_bot@mastoxiv.page
2025-12-10 08:54:21

Robust equilibria in continuous games: From strategic to dynamic robustness
Kyriakos Lotidis, Panayotis Mertikopoulos, Nicholas Bambos, Jose Blanchet
arxiv.org/abs/2512.08138 arxiv.org/pdf/2512.08138 arxiv.org/html/2512.08138
arXiv:2512.08138v1 Announce Type: new
Abstract: In this paper, we examine the robustness of Nash equilibria in continuous games, under both strategic and dynamic uncertainty. Starting with the former, we introduce the notion of a robust equilibrium as those equilibria that remain invariant to small -- but otherwise arbitrary -- perturbations to the game's payoff structure, and we provide a crisp geometric characterization thereof. Subsequently, we turn to the question of dynamic robustness, and we examine which equilibria may arise as stable limit points of the dynamics of "follow the regularized leader" (FTRL) in the presence of randomness and uncertainty. Despite their very distinct origins, we establish a structural correspondence between these two notions of robustness: strategic robustness implies dynamic robustness, and, conversely, the requirement of strategic robustness cannot be relaxed if dynamic robustness is to be maintained. Finally, we examine the rate of convergence to robust equilibria as a function of the underlying regularizer, and we show that entropically regularized learning converges at a geometric rate in games with affinely constrained action spaces.
toXiv_bot_toot

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:37:10

S-D-RSM: Stochastic Distributed Regularized Splitting Method for Large-Scale Convex Optimization Problems
Maoran Wang, Xingju Cai, Yongxin Chen
arxiv.org/abs/2511.10133 arxiv.org/pdf/2511.10133 arxiv.org/html/2511.10133
arXiv:2511.10133v1 Announce Type: new
Abstract: This paper investigates large-scale distributed composite convex optimization problems, with motivations from a broad range of applications, including multi-agent systems, federated learning, smart grids, wireless sensor networks, compressed sensing, and so on. Stochastic gradient descent (SGD) and its variants are commonly employed to solve such problems. However, existing algorithms often rely on vanishing step sizes, strong convexity assumptions, or entail substantial computational overhead to ensure convergence or obtain favorable complexity. To bridge the gap between theory and practice, we integrate consensus optimization and operator splitting techniques to develop a novel stochastic splitting algorithm, termed the stochastic distributed regularized splitting method (S-D-RSM). In practice, S-D-RSM performs parallel updates of proximal mappings and gradient information for only a randomly selected subset of agents at each iteration. By introducing regularization terms, it effectively mitigates consensus discrepancies among distributed nodes. In contrast to conventional stochastic methods, our theoretical analysis establishes that S-D-RSM achieves global convergence without requiring diminishing step sizes or strong convexity assumptions. Furthermore, it achieves an iteration complexity of $\mathcal{O}(1/\epsilon)$ with respect to both the objective function value and the consensus error. Numerical experiments show that S-D-RSM achieves up to 2--3$\times$ speedup compared to state-of-the-art baselines, while maintaining comparable or better accuracy. These results not only validate the algorithm's theoretical guarantees but also demonstrate its effectiveness in practical tasks such as compressed sensing and empirical risk minimization.
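A schematic of the randomized update pattern, not the paper's exact method: at each iteration only a random subset of agents takes a local step regularized toward the consensus average, which is then refreshed. Problem data and parameters are illustrative.

import numpy as np

rng = np.random.default_rng(5)
n_agents, dim = 10, 20
A = [rng.normal(size=(15, dim)) for _ in range(n_agents)]
b = [Ai @ rng.normal(size=dim) for Ai in A]          # local least-squares data

x = [np.zeros(dim) for _ in range(n_agents)]
z = np.zeros(dim)                                    # consensus variable
rho, step = 1.0, 0.01
for _ in range(500):
    S = rng.choice(n_agents, size=3, replace=False)  # random subset of agents
    for i in S:
        g = A[i].T @ (A[i] @ x[i] - b[i])            # local gradient
        x[i] = x[i] - step * (g + rho * (x[i] - z))  # regularized local step
    z = np.mean(x, axis=0)                           # consensus re-averaging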
toXiv_bot_toot

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:35:40

An inexact semismooth Newton-Krylov method for semilinear elliptic optimal control problem
Shiqi Chen, Xuesong Chen
arxiv.org/abs/2511.10058 arxiv.org/pdf/2511.10058 arxiv.org/html/2511.10058
arXiv:2511.10058v1 Announce Type: new
Abstract: This paper proposes an inexact semismooth Newton method for solving semilinear elliptic optimal control problems. The method incorporates the generalized minimal residual (GMRES) method, a type of Krylov subspace method, to solve the Newton equations, and utilizes a nonmonotone line search to adjust the iteration step size. The original problem is reformulated into a nonlinear equation through variational inequality principles and discretized using a second-order finite difference scheme. By leveraging slanting differentiability, the algorithm constructs semismooth Newton directions and employs the GMRES method to inexactly solve the Newton equations, significantly reducing computational overhead. A dynamic nonmonotone line search strategy is introduced to adjust step sizes adaptively, ensuring global convergence while overcoming local stagnation. Theoretical analysis demonstrates that the algorithm achieves superlinear convergence near optimal solutions when the residual control parameter $\eta_k$ approaches 0. Numerical experiments validate the method's accuracy and efficiency in solving semilinear elliptic optimal control problems, corroborating theoretical insights.
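A minimal sketch of the semismooth Newton-Krylov pattern on a toy complementarity system Phi(x) = min(x, Mx + q) = 0, a standard semismooth reformulation: a generalized Jacobian is assembled row-wise, the Newton equation is solved inexactly with GMRES, and a simple damped residual line search stands in for the paper's nonmonotone strategy and PDE discretization.

import numpy as np
from scipy.sparse.linalg import gmres

rng = np.random.default_rng(6)
n = 30
M = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD tridiagonal
q = rng.normal(size=n)
Phi = lambda x: np.minimum(x, M @ x + q)

x = np.ones(n)
for _ in range(30):
    r = Phi(x)
    if np.linalg.norm(r) < 1e-10:
        break
    active = x < M @ x + q                        # where the min picks x
    J = np.where(active[:, None], np.eye(n), M)   # generalized Jacobian, row-wise
    d, _ = gmres(J, -r)                           # inexact Newton direction
    t = 1.0
    while np.linalg.norm(Phi(x + t * d)) > (1 - 1e-4 * t) * np.linalg.norm(r):
        t *= 0.5                                  # damped residual line search
        if t < 1e-8:
            break
    x = x + t * d
print(np.linalg.norm(Phi(x)))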
toXiv_bot_toot

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 13:23:10

Replaced article(s) found for math.OC. arxiv.org/list/math.OC/new
[1/1]:
- A robust BFGS algorithm for unconstrained nonlinear optimization problems
Yaguang Yang
arxiv.org/abs/1212.5929
- Quantum computing and the stable set problem
Aljaž Krpan, Janez Povh, Dunja Pucher
arxiv.org/abs/2405.12845 mastoxiv.page/@arXiv_mathOC_bo
- Mean Field Game with Reflected Jump Diffusion Dynamics: A Linear Programming Approach
Zongxia Liang, Xiang Yu, Keyu Zhang
arxiv.org/abs/2508.20388 mastoxiv.page/@arXiv_mathOC_bo
- Differential Dynamic Programming for the Optimal Control Problem with an Ellipsoidal Target Set a...
Sungjun Eom, Gyunghoon Park
arxiv.org/abs/2509.07546 mastoxiv.page/@arXiv_mathOC_bo
- On the Moreau envelope properties of weakly convex functions
Marien Renaud, Arthur Leclaire, Nicolas Papadakis
arxiv.org/abs/2509.13960 mastoxiv.page/@arXiv_mathOC_bo
- Automated algorithm design via Nevanlinna-Pick interpolation
Ibrahim K. Ozaslan, Tryphon T. Georgiou, Mihailo R. Jovanovic
arxiv.org/abs/2509.21416 mastoxiv.page/@arXiv_mathOC_bo
- Optimal Control of a Bioeconomic Crop-Energy System with Energy Reinvestment
Othman Cherkaoui Dekkaki
arxiv.org/abs/2510.11381 mastoxiv.page/@arXiv_mathOC_bo
- Point Convergence Analysis of the Accelerated Gradient Method for Multiobjective Optimization: Co...
Yingdong Yin
arxiv.org/abs/2510.26382 mastoxiv.page/@arXiv_mathOC_bo
- History-Aware Adaptive High-Order Tensor Regularization
Chang He, Bo Jiang, Yuntian Jiang, Chuwen Zhang, Shuzhong Zhang
arxiv.org/abs/2511.05788
- Equivalence of entropy solutions and gradient flows for pressureless 1D Euler systems
José Antonio Carrillo, Sondre Tesdal Galtung
arxiv.org/abs/2312.04932 mastoxiv.page/@arXiv_mathAP_bo
- Kernel Modelling of Fading Memory Systems
Yongkang Huo, Thomas Chaffey, Rodolphe Sepulchre
arxiv.org/abs/2403.11945 mastoxiv.page/@arXiv_eessSY_bo
- The Maximum Theoretical Ground Speed of the Wheeled Vehicle
Altay Zhakatayev, Mukatai Nemerebayev
arxiv.org/abs/2502.15341 mastoxiv.page/@arXiv_physicscl
- Hessian stability and convergence rates for entropic and Sinkhorn potentials via semiconcavity
Giacomo Greco, Luca Tamanini
arxiv.org/abs/2504.11133 mastoxiv.page/@arXiv_mathPR_bo
- Optimizing the ground state energy of the three-dimensional magnetic Dirichlet Laplacian with con...
Matthias Baur
arxiv.org/abs/2504.21597 mastoxiv.page/@arXiv_mathph_bo
- A localized consensus-based sampling algorithm
Arne Bouillon, Alexander Bodard, Panagiotis Patrinos, Dirk Nuyens, Giovanni Samaey
arxiv.org/abs/2505.24861 mastoxiv.page/@arXiv_mathNA_bo
- A Novel Sliced Fused Gromov-Wasserstein Distance
Moritz Piening, Robert Beinert
arxiv.org/abs/2508.02364 mastoxiv.page/@arXiv_csLG_bot/
- Minimal Regret Walras Equilibria for Combinatorial Markets via Duality, Integrality, and Sensitiv...
Aloïs Duguet, Tobias Harks, Martin Schmidt, Julian Schwarz
arxiv.org/abs/2511.09021 mastoxiv.page/@arXiv_csGT_bot/
toXiv_bot_toot

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 11:47:12

Crosslisted article(s) found for math.OC. arxiv.org/list/math.OC/new
[1/1]:
- Optimal control of Volterra integral diffusions and application to contract theory
Dylan Possamaï, Mehdi Talbi
arxiv.org/abs/2511.09701 mastoxiv.page/@arXiv_mathPR_bo
- Generalized infinite dimensional Alpha-Procrustes based geometries
Salvish Goomanee, Andi Han, Pratik Jawanpuria, Bamdev Mishra
arxiv.org/abs/2511.09801 mastoxiv.page/@arXiv_statML_bo
- Sample Complexity of Quadratically Regularized Optimal Transport
Alberto González-Sanz, Eustasio del Barrio, Marcel Nutz
arxiv.org/abs/2511.09807 mastoxiv.page/@arXiv_mathST_bo
- On the Convergence of Overparameterized Problems: Inherent Properties of the Compositional Struct...
Arthur Castello Branco de Oliveira, Dhruv Jatkar, Eduardo Sontag
arxiv.org/abs/2511.09810 mastoxiv.page/@arXiv_csLG_bot/
- Implicit Multiple Tensor Decomposition
Kunjing Yang, Libin Zheng, Minru Bai
arxiv.org/abs/2511.09916 mastoxiv.page/@arXiv_mathNA_bo
- Theoretical Analysis of Resource-Induced Phase Transitions in Estimation Strategies
Takehiro Tottori, Tetsuya J. Kobayashi
arxiv.org/abs/2511.10184 mastoxiv.page/@arXiv_physicsbi
- Zeroes and Extrema of Functions via Random Measures
Athanasios Christou Micheas
arxiv.org/abs/2511.10293 mastoxiv.page/@arXiv_statME_bo
- Operator Models for Continuous-Time Offline Reinforcement Learning
Nicolas Hoischen, Petar Bevanda, Max Beier, Stefan Sosnowski, Boris Houska, Sandra Hirche
arxiv.org/abs/2511.10383 mastoxiv.page/@arXiv_statML_bo
- On topological properties of closed attractors
Wouter Jongeneel
arxiv.org/abs/2511.10429 mastoxiv.page/@arXiv_mathDS_bo
- Learning parameter-dependent shear viscosity from data, with application to sea and land ice
Gonzalo G. de Diego, Georg Stadler
arxiv.org/abs/2511.10452 mastoxiv.page/@arXiv_mathNA_bo
- Formal Verification of Control Lyapunov-Barrier Functions for Safe Stabilization with Bounded Con...
Jun Liu
arxiv.org/abs/2511.10510 mastoxiv.page/@arXiv_eessSY_bo
- Direction-of-Arrival and Noise Covariance Matrix joint estimation for beamforming
Vitor Gelsleichter Probst Curtarelli
arxiv.org/abs/2511.10639 mastoxiv.page/@arXiv_eessAS_bo
toXiv_bot_toot

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:35:50

dHPR: A Distributed Halpern Peaceman--Rachford Method for Non-smooth Distributed Optimization Problems
Zhangcheng Feng, Defeng Sun, Yancheng Yuan, Guojun Zhang
arxiv.org/abs/2511.10069 arxiv.org/pdf/2511.10069 arxiv.org/html/2511.10069
arXiv:2511.10069v1 Announce Type: new
Abstract: This paper introduces the distributed Halpern Peaceman--Rachford (dHPR) method, an efficient algorithm for solving distributed convex composite optimization problems with non-smooth objectives, which achieves a non-ergodic $O(1/k)$ iteration complexity with respect to the Karush--Kuhn--Tucker residual. By leveraging the symmetric Gauss--Seidel decomposition, dHPR effectively decouples the linear operators in the objective functions and consensus constraints while maintaining parallelizability and avoiding additional large proximal terms, leading to a decentralized implementation with provably fast convergence. The superior performance of dHPR is demonstrated through comprehensive numerical experiments on distributed LASSO, group LASSO, and $L_1$-regularized logistic regression problems.
toXiv_bot_toot

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 10:10:20

Global Solutions to Non-Convex Functional Constrained Problems with Hidden Convexity
Ilyas Fatkhullin, Niao He, Guanghui Lan, Florian Wolf
arxiv.org/abs/2511.10626 arxiv.org/pdf/2511.10626 arxiv.org/html/2511.10626
arXiv:2511.10626v1 Announce Type: new
Abstract: Constrained non-convex optimization is fundamentally challenging, as global solutions are generally intractable and constraint qualifications may not hold. However, in many applications, including safe policy optimization in control and reinforcement learning, such problems possess hidden convexity, meaning they can be reformulated as convex programs via a nonlinear invertible transformation. Typically such transformations are implicit or unknown, making the direct link with the convex program impossible. On the other hand, (sub-)gradients with respect to the original variables are often accessible or can be easily estimated, which motivates algorithms that operate directly in the original (non-convex) problem space using standard (sub-)gradient oracles. In this work, we develop the first algorithms to provably solve such non-convex problems to global minima. First, using a modified inexact proximal point method, we establish global last-iterate convergence guarantees with $\widetilde{\mathcal{O}}(\varepsilon^{-3})$ oracle complexity in the non-smooth setting. For smooth problems, we propose a new bundle-level type method based on linearly constrained quadratic subproblems, improving the oracle complexity to $\widetilde{\mathcal{O}}(\varepsilon^{-1})$. Surprisingly, despite non-convexity, our methodology does not require any constraint qualifications, can handle hidden convex equality constraints, and achieves complexities matching those for solving unconstrained hidden convex optimization.
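A toy illustration of hidden convexity and the proximal-point outer loop: f(x) = (x^3 - 1)^2 is non-convex in x but becomes the convex function (u - 1)^2 under the invertible transform u = x^3, which the algorithm never needs to know. Each proximal subproblem is solved inexactly by a few gradient steps; the inner solver and all constants are illustrative, not the paper's methods.

import numpy as np

f = lambda x: (x**3 - 1.0) ** 2          # non-convex in x, convex in u = x^3
grad = lambda x: 6.0 * x**2 * (x**3 - 1.0)

x, lam = 2.0, 0.5
for _ in range(100):                     # outer (inexact) proximal point loop
    xc, y = x, x
    for _ in range(50):                  # inner solver: crude gradient descent
        y -= 0.01 * (grad(y) + (y - xc) / lam)
    x = y
print(x, f(x))                           # moves toward the global minimizer x = 1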
toXiv_bot_toot