2025-09-30 07:32:29
Hardness and Algorithmic Results for Roman {3}-Domination
Sangam Balchandar Reddy
https://arxiv.org/abs/2509.23615 https://arxiv.org/pdf/2509.23615
Generalization Analysis for Classification on Korobov Space
Yuqing Liu
https://arxiv.org/abs/2509.22748 https://arxiv.org/pdf/2509.22748
Accelerated stochastic first-order method for convex optimization under heavy-tailed noise
Chuan He, Zhaosong Lu
https://arxiv.org/abs/2510.11676
Phase Transitions of the Additive Uniform Noise Channel with Peak Amplitude and Cost Constraint
Jonas Stapmanns, Catarina Dias, Luke Eilers, Tobias Kühn, Jean-Pascal Pfister
https://arxiv.org/abs/2510.12427
On Convex Functions of Gaussian Variables
Maite Fernández-Unzueta, James Melbourne, Gerardo Palafox-Castillo
https://arxiv.org/abs/2510.06676
An Efficient Solution Method for Solving Convex Separable Quadratic Optimization Problems
Shaoze Li, Junhao Wu, Cheng Lu, Zhibin Deng, Shu-Cherng Fang
https://arxiv.org/abs/2510.11554
Robust Inference for Convex Pairwise Difference Estimators
Matias D. Cattaneo, Michael Jansson, Kenichi Nagasawa
https://arxiv.org/abs/2510.05991
The weighted isoperimetric inequality and Sobolev inequality outside convex sets
Lu Chen, Jiali Lan
https://arxiv.org/abs/2510.01647
Jensen convex functions and doubly stochastic matrices
Matyas Barczy, Zsolt Páles
https://arxiv.org/abs/2510.03715 https://arxiv.org/pdf/2510.03715
Closed-form Single UAV-aided Emitter Localization and Trajectory Design Using Doppler and TOA Measurements
Samaneh Motie, Hadi Zayyani, Mohammad Salman, Hasan Abu Hilal
https://arxiv.org/abs/2510.01778
Upper semicontinuous valuations on convex functions of one variable
Fernanda M. Baêta
https://arxiv.org/abs/2510.05796 https://arxiv.org/pdf/2510.05796
Neural Optimal Transport Meets Multivariate Conformal Prediction
Vladimir Kondratyev, Alexander Fishkov, Nikita Kotelevskii, Mahmoud Hegazy, Remi Flamary, Maxim Panov, Eric Moulines
https://arxiv.org/abs/2509.25444
The pluricomplex Poisson kernel for convex finite type domains
Leandro Arosio, Filippo Bracci, Matteo Fiacchi
https://arxiv.org/abs/2509.26230
Smooth Quasar-Convex Optimization with Constraints
David Martínez-Rubio
https://arxiv.org/abs/2510.01943 https://arxiv.org/pdf/2510.01943
Relaxation of quasi-convex functionals with variable exponent growth
Giacomo Bertazzoni, Petteri Harjulehto, Peter Hästö, Elvira Zappale
https://arxiv.org/abs/2510.04672
Variational formulae for entropy-like functionals for states in von Neumann algebras
Andrzej Łuczak, Hanna Podsędkowska, Rafał Wieczorek
https://arxiv.org/abs/2510.07605
Integrable Billiards and Related Topics
Misha Bialy, Andrey E. Mironov
https://arxiv.org/abs/2510.03790 https://arxiv.org/pdf/2510.03790
Approximate Bregman proximal gradient algorithm with variable metric Armijo--Wolfe line search
Kiwamu Fujiki, Shota Takahashi, Akiko Takeda
https://arxiv.org/abs/2510.06615
On Hardy spaces, univalent functions and the second coefficient
Martin Chuaqui, Iason Efraimidis, Rodrigo Hernández
https://arxiv.org/abs/2510.05395
Characterizing Maximal Monotone Operators with Unique Representation
Sotiris Armeniakos, Aris Daniilidis
https://arxiv.org/abs/2510.09368
Long-Time Analysis of Stochastic Heavy Ball Dynamics for Convex Optimization and Monotone Equations
Radu Ioan Bot, Chiara Schindler
https://arxiv.org/abs/2510.02951
An inverse problem for the Monge-Ampère equation
Tony Liimatainen, Yi-Hsuan Lin
https://arxiv.org/abs/2510.11572 https://arxiv.org/pdf/2510.11572
New M-estimator of the leading principal component
Joni Virta, Una Radojicic, Marko Voutilainen
https://arxiv.org/abs/2510.02799
Convergence analysis of inexact MBA method for constrained upper-$\mathcal{C}^2$ optimization problems
Ruyu Liu, Shaohua Pan
https://arxiv.org/abs/2511.09940 https://arxiv.org/pdf/2511.09940 https://arxiv.org/html/2511.09940
arXiv:2511.09940v1 Announce Type: new
Abstract: This paper concerns a class of constrained optimization problems in which both the objective and constraint functions are upper-$\mathcal{C}^2$. For such nonconvex and nonsmooth problems, we develop an inexact moving balls approximation (MBA) method with a workable inexactness criterion for solving the subproblems. By leveraging a global error bound for the strongly convex program associated with parametric optimization problems, we establish full convergence of the iterate sequence under the partial bounded multiplier property (BMP) and the Kurdyka-Łojasiewicz (KL) property of the constructed potential function, and obtain local convergence rates for the iterate and objective value sequences when the potential function satisfies the KL property of exponent $q\in[1/2,1)$. A verifiable condition is also provided for checking whether the potential function satisfies the KL property of exponent $q\in[1/2,1)$ at a given critical point. To the best of our knowledge, this is the first implementable inexact MBA method with a full convergence certificate for constrained nonconvex and nonsmooth optimization problems.
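The core of any MBA method is its subproblem: minimize a quadratic majorant of the objective over the "ball" where the quadratic majorant of the constraint is nonpositive. A minimal sketch of one exact such step, for a single smooth constraint with known curvature bounds Lf and Lg, is below; the function names and the bisection-on-the-multiplier solver are illustrative, and the paper's inexactness criterion and upper-C² setting are not reproduced.

```python
import numpy as np

def mba_step(x, f_grad, g_val, g_grad, Lf, Lg, tol=1e-10):
    """One moving balls step: minimize the quadratic majorant of f over the
    set {d : g(x) + <g'(x), d> + (Lg/2)||d||^2 <= 0}, via bisection on the
    KKT multiplier lam (a sketch; assumes this majorant value decreases in lam)."""
    gf, gg, g0 = f_grad(x), g_grad(x), g_val(x)

    def d_of(lam):  # minimizer of the subproblem's Lagrangian in d
        return -(gf + lam * gg) / (Lf + lam * Lg)

    def h(lam):     # constraint majorant evaluated at d(lam)
        d = d_of(lam)
        return g0 + gg @ d + 0.5 * Lg * (d @ d)

    if h(0.0) <= 0.0:           # unconstrained step already feasible
        return x + d_of(0.0)
    hi = 1.0
    while h(hi) > 0.0:          # grow the bracket until feasible
        hi *= 2.0
    lo = 0.0
    while hi - lo > tol:        # bisection keeps h(hi) <= 0
        mid = 0.5 * (lo + hi)
        if h(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return x + d_of(hi)

# Example: minimize ||x - c||^2 subject to ||x||^2 - 1 <= 0 with c outside
# the unit ball; the optimizer is the projection c/||c||. Here Lf = Lg = 2.
c = np.array([3.0, 4.0])
x = np.zeros(2)
for _ in range(20):
    x = mba_step(x,
                 f_grad=lambda v: 2.0 * (v - c),
                 g_val=lambda v: v @ v - 1.0,
                 g_grad=lambda v: 2.0 * v,
                 Lf=2.0, Lg=2.0)
# x is now approximately [0.6, 0.8]
```

Because the majorant lies above the true constraint whenever Lg bounds its curvature, every iterate produced this way stays feasible, which is the property the convergence analysis builds on.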
A Nonstationary Ruelle-Perron-Frobenius Theorem
Vaughn Climenhaga, Gregory Hemenway
https://arxiv.org/abs/2509.25467 https://arxiv.org/pdf/2509.25467
Exponential convergence of a distributed divide-and-conquer algorithm for constrained convex optimization on networks
Nazar Emirov, Guohui Song, Qiyu Sun
https://arxiv.org/abs/2510.01511
On rigid $q$-plurisubharmonic functions and $q$-pseudoconvex tube domains in $\mathbb{C}^n$
Thomas Pawlaschyk
https://arxiv.org/abs/2510.05009
Hölder property of the resolvent of a monotone operator in Banach spaces
Changchi Huang, Jigen Peng, Yuchao Tang
https://arxiv.org/abs/2510.03774
Locally Linear Convergence for Nonsmooth Convex Optimization via Coupled Smoothing and Momentum
Reza Rahimi Baghbadorani, Sergio Grammatico, Peyman Mohajerin Esfahani
https://arxiv.org/abs/2511.10239 https://arxiv.org/pdf/2511.10239 https://arxiv.org/html/2511.10239
arXiv:2511.10239v1 Announce Type: new
Abstract: We propose an adaptive accelerated smoothing technique for nonsmooth convex optimization in which the smoothing update rule is coupled with the momentum parameter. We also extend the setting to the case where the objective is the sum of two nonsmooth functions. Regarding convergence rates, we provide a global sublinear guarantee of O(1/k), which is provably optimal for the studied class of functions, along with a local linear rate when the nonsmooth term satisfies a so-called local strong convexity condition. We validate the performance of our algorithm on several problem classes, including regression with the l1-norm (the Lasso problem), sparse semidefinite programming (the MaxCut problem), nuclear norm minimization with application to model-free fault diagnosis, and l1-regularized model predictive control, to showcase the benefits of the coupling. An interesting observation is that although our global result guarantees O(1/k) convergence, we consistently observe a practical transient rate of O(1/k^2), followed by asymptotic linear convergence as anticipated by the theory. This two-phase behavior can also be explained in view of the proposed smoothing rule.
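The idea of coupling the smoothing level to the momentum schedule can be illustrated with a generic Nesterov-smoothing sketch on the Lasso: the l1 term is replaced by a Huber approximation whose parameter mu_k shrinks with the iteration counter that also drives the momentum weight. This is an assumption-laden toy (the function name, the 1/k schedule, and the (k-1)/(k+2) momentum are generic choices), not the paper's exact update rule.

```python
import numpy as np

def accel_smoothed_lasso(A, b, lam, mu0=1.0, iters=500):
    """Accelerated gradient descent on a Huber-smoothed l1 penalty, with the
    smoothing level mu_k shrunk in lockstep with the momentum schedule."""
    x = y = np.zeros(A.shape[1])
    LA = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the data term
    for k in range(1, iters + 1):
        mu = mu0 / k                     # smoothing coupled to the iteration count
        L = LA + lam / mu                # Lipschitz constant of smoothed objective
        # gradient of 0.5||Ay-b||^2 + lam * huber_mu(y):
        grad = A.T @ (A @ y - b) + lam * np.clip(y / mu, -1.0, 1.0)
        x_new = y - grad / L                          # gradient step
        y = x_new + (k - 1) / (k + 2) * (x_new - x)   # Nesterov momentum
        x = x_new
    return x

# Example: with A = I the exact Lasso solution is soft-thresholding of b,
# here [2, 0]; the smoothed iterates approach it as mu_k vanishes.
x = accel_smoothed_lasso(np.eye(2), np.array([3.0, 0.5]), lam=1.0)
```

The trade-off the coupling manages is visible in the step size: as mu shrinks, L grows and the steps shorten, so decreasing mu too fast stalls progress while decreasing it too slowly leaves a large smoothing bias.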
S-D-RSM: Stochastic Distributed Regularized Splitting Method for Large-Scale Convex Optimization Problems
Maoran Wang, Xingju Cai, Yongxin Chen
https://arxiv.org/abs/2511.10133 https://arxiv.org/pdf/2511.10133 https://arxiv.org/html/2511.10133
arXiv:2511.10133v1 Announce Type: new
Abstract: This paper investigates large-scale distributed composite convex optimization problems, motivated by a broad range of applications including multi-agent systems, federated learning, smart grids, wireless sensor networks, and compressed sensing. Stochastic gradient descent (SGD) and its variants are commonly employed to solve such problems. However, existing algorithms often rely on vanishing step sizes, strong convexity assumptions, or substantial computational overhead to ensure convergence or obtain favorable complexity. To bridge the gap between theory and practice, we integrate consensus optimization and operator splitting techniques (see Problem Reformulation) to develop a novel stochastic splitting algorithm, termed the stochastic distributed regularized splitting method (S-D-RSM). At each iteration, S-D-RSM performs parallel updates of proximal mappings and gradient information for only a randomly selected subset of agents. By introducing regularization terms, it effectively mitigates consensus discrepancies among distributed nodes. In contrast to conventional stochastic methods, our theoretical analysis establishes that S-D-RSM achieves global convergence without diminishing step sizes or strong convexity assumptions, with an iteration complexity of $\mathcal{O}(1/\epsilon)$ with respect to both the objective function value and the consensus error. Numerical experiments show that S-D-RSM achieves up to 2--3$\times$ speedup over state-of-the-art baselines while maintaining comparable or better accuracy. These results validate the algorithm's theoretical guarantees and demonstrate its effectiveness in practical tasks such as compressed sensing and empirical risk minimization.
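The two ingredients the abstract names, updating only a random subset of agents per round and regularizing each local proximal step toward the consensus variable, can be sketched in a few lines on scalar quadratic losses. Everything here (the function name, rho as regularization weight, the plain re-averaging step) is an illustrative assumption, not the authors' exact S-D-RSM update.

```python
import numpy as np

def randomized_consensus_prox(a, rho=1.0, batch=2, iters=2000, seed=0):
    """Sketch of randomized distributed splitting: each round, a random subset
    of agents takes a proximal step on its local loss f_i(x) = 0.5*(x - a_i)^2,
    regularized toward the current consensus value z, which is then re-averaged.
    The consensus solution of sum_i f_i is mean(a)."""
    rng = np.random.default_rng(seed)
    n = len(a)
    x = np.zeros(n)            # local variables, one per agent
    z = 0.0                    # consensus variable
    for _ in range(iters):
        S = rng.choice(n, size=batch, replace=False)   # active agents this round
        for i in S:
            # closed-form prox: argmin_x 0.5*(x - a_i)^2 + (rho/2)*(x - z)^2
            x[i] = (a[i] + rho * z) / (1.0 + rho)
        z = x.mean()           # consensus update over all (possibly stale) locals
    return z

# Four agents with targets 1, 2, 3, 6; the consensus optimum is their mean, 3.
z = randomized_consensus_prox(np.array([1.0, 2.0, 3.0, 6.0]))
```

The regularization term is what keeps an inactive agent's stale variable from pulling the average far off: each refreshed x_i moves only partway from z toward its local target, so the consensus error contracts even though most agents idle in any given round.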
ProxSTORM -- A Stochastic Trust-Region Algorithm for Nonsmooth Optimization
Robert J. Baraldi, Aurya Javeed, Drew P. Kouri, Katya Scheinberg
https://arxiv.org/abs/2510.03187
The Non-Attainment Phenomenon in Robust SOCPs
Vinh Nguyen
https://arxiv.org/abs/2510.00318 https://arxiv.org/pdf/2510.00318
Non-Euclidean Broximal Point Method: A Blueprint for Geometry-Aware Optimization
Kaja Gruntkowska, Peter Richtárik
https://arxiv.org/abs/2510.00823
A first-order method for constrained nonconvex--nonconcave minimax problems under a local Kurdyka-Łojasiewicz condition
Zhaosong Lu, Xiangyuan Wang
https://arxiv.org/abs/2510.01168