Tootfinder

Opt-in global Mastodon full-text search. Join the index!

@arXiv_csCC_bot@mastoxiv.page
2025-09-30 07:32:29

Hardness and Algorithmic Results for Roman {3}-Domination
Sangam Balchandar Reddy
arxiv.org/abs/2509.23615 arxiv.org/pdf/2509.23615

@arXiv_mathST_bot@mastoxiv.page
2025-09-30 08:13:46

Generalization Analysis for Classification on Korobov Space
Yuqing Liu
arxiv.org/abs/2509.22748 arxiv.org/pdf/2509.22748

@arXiv_mathOC_bot@mastoxiv.page
2025-10-14 11:57:39

Accelerated stochastic first-order method for convex optimization under heavy-tailed noise
Chuan He, Zhaosong Lu
arxiv.org/abs/2510.11676 a…

@arXiv_csIT_bot@mastoxiv.page
2025-10-15 07:54:01

Phase Transitions of the Additive Uniform Noise Channel with Peak Amplitude and Cost Constraint
Jonas Stapmanns, Catarina Dias, Luke Eilers, Tobias Kühn, Jean-Pascal Pfister
arxiv.org/abs/2510.12427

@arXiv_mathPR_bot@mastoxiv.page
2025-10-09 08:47:11

On Convex Functions of Gaussian Variables
Maite Fernández-Unzueta, James Melbourne, Gerardo Palafox-Castillo
arxiv.org/abs/2510.06676

@arXiv_mathOC_bot@mastoxiv.page
2025-10-14 11:46:18

An Efficient Solution Method for Solving Convex Separable Quadratic Optimization Problems
Shaoze Li, Junhao Wu, Cheng Lu, Zhibin Deng, Shu-Cherng Fang
arxiv.org/abs/2510.11554

@arXiv_econEM_bot@mastoxiv.page
2025-10-08 08:35:49

Robust Inference for Convex Pairwise Difference Estimators
Matias D. Cattaneo, Michael Jansson, Kenichi Nagasawa
arxiv.org/abs/2510.05991 a…

@arXiv_mathAP_bot@mastoxiv.page
2025-10-03 09:54:51

The weighted isoperimetric inequality and Sobolev inequality outside convex sets
Lu Chen, Jiali Lan
arxiv.org/abs/2510.01647 arxiv.org/pdf/…

@arXiv_mathCA_bot@mastoxiv.page
2025-10-07 07:56:47

Jensen convex functions and doubly stochastic matrices
Matyas Barczy, Zsolt Páles
arxiv.org/abs/2510.03715 arxiv.org/pdf/2510.03715…

@arXiv_eessSP_bot@mastoxiv.page
2025-10-03 09:38:11

Closed-form Single UAV-aided Emitter Localization and Trajectory Design Using Doppler and TOA Measurements
Samaneh Motie, Hadi Zayyani, Mohammad Salman, Hasan Abu Hilal
arxiv.org/abs/2510.01778

@arXiv_mathFA_bot@mastoxiv.page
2025-10-08 09:20:59

Upper semicontinuous valuations on convex functions of one variable
Fernanda M. Baêta
arxiv.org/abs/2510.05796 arxiv.org/pdf/2510.05796…

@arXiv_statML_bot@mastoxiv.page
2025-10-01 09:13:08

Neural Optimal Transport Meets Multivariate Conformal Prediction
Vladimir Kondratyev, Alexander Fishkov, Nikita Kotelevskii, Mahmoud Hegazy, Remi Flamary, Maxim Panov, Eric Moulines
arxiv.org/abs/2509.25444

@arXiv_mathCV_bot@mastoxiv.page
2025-10-01 08:18:37

The pluricomplex Poisson kernel for convex finite type domains
Leandro Arosio, Filippo Bracci, Matteo Fiacchi
arxiv.org/abs/2509.26230 arxi…

@arXiv_mathOC_bot@mastoxiv.page
2025-10-03 09:44:11

Smooth Quasar-Convex Optimization with Constraints
David Martínez-Rubio
arxiv.org/abs/2510.01943 arxiv.org/pdf/2510.01943

@arXiv_mathAP_bot@mastoxiv.page
2025-10-07 11:25:22

Relaxation of quasi-convex functionals with variable exponent growth
Giacomo Bertazzoni, Petteri Harjulehto, Peter Hästö, Elvira Zappale
arxiv.org/abs/2510.04672

@arXiv_mathOA_bot@mastoxiv.page
2025-10-10 08:42:59

Variational formulae for entropy-like functionals for states in von Neumann algebras
Andrzej Łuczak, Hanna Podsędkowska, Rafał Wieczorek
arxiv.org/abs/2510.07605

@arXiv_mathDS_bot@mastoxiv.page
2025-10-07 08:01:58

Integrable Billiards and Related Topics
Misha Bialy, Andrey E. Mironov
arxiv.org/abs/2510.03790 arxiv.org/pdf/2510.03790

@arXiv_mathOC_bot@mastoxiv.page
2025-10-09 09:24:51

Approximate Bregman proximal gradient algorithm with variable metric Armijo–Wolfe line search
Kiwamu Fujiki, Shota Takahashi, Akiko Takeda
arxiv.org/abs/2510.06615

@arXiv_mathCV_bot@mastoxiv.page
2025-10-08 07:57:59

On Hardy spaces, univalent functions and the second coefficient
Martin Chuaqui, Iason Efraimidis, Rodrigo Hernández
arxiv.org/abs/2510.05395

@arXiv_mathFA_bot@mastoxiv.page
2025-10-13 08:10:30

Characterizing Maximal Monotone Operators with Unique Representation
Sotiris Armeniakos, Aris Daniilidis
arxiv.org/abs/2510.09368 arxiv.org…

@arXiv_mathOC_bot@mastoxiv.page
2025-10-06 09:41:39

Long-Time Analysis of Stochastic Heavy Ball Dynamics for Convex Optimization and Monotone Equations
Radu Ioan Bot, Chiara Schindler
arxiv.org/abs/2510.02951

@arXiv_mathAP_bot@mastoxiv.page
2025-10-14 11:07:28

An inverse problem for the Monge-Ampère equation
Tony Liimatainen, Yi-Hsuan Lin
arxiv.org/abs/2510.11572 arxiv.org/pdf/2510.11572

@arXiv_mathST_bot@mastoxiv.page
2025-10-06 08:33:19

New M-estimator of the leading principal component
Joni Virta, Una Radojicic, Marko Voutilainen
arxiv.org/abs/2510.02799 arxiv.org/pdf/2510…

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:28:40

Convergence analysis of inexact MBA method for constrained upper-$\mathcal{C}^2$ optimization problems
Ruyu Liu, Shaohua Pan
arxiv.org/abs/2511.09940 arxiv.org/pdf/2511.09940 arxiv.org/html/2511.09940
arXiv:2511.09940v1 Announce Type: new
Abstract: This paper concerns a class of constrained optimization problems in which the objective and constraint functions are both upper-$\mathcal{C}^2$. For such nonconvex and nonsmooth problems, we develop an inexact moving balls approximation (MBA) method with a workable inexactness criterion for solving the subproblems. By leveraging a global error bound for the strongly convex program associated with parametric optimization problems, we establish full convergence of the iterate sequence under the partial bounded multiplier property (BMP) and the Kurdyka-Łojasiewicz (KL) property of the constructed potential function, and we derive local convergence rates for the iterate and objective value sequences when the potential function satisfies the KL property with exponent $q\in[1/2,1)$. A verifiable condition is also provided to check whether the potential function satisfies the KL property with exponent $q\in[1/2,1)$ at a given critical point. To the best of our knowledge, this is the first implementable inexact MBA method with a full convergence certificate for constrained nonconvex and nonsmooth optimization problems.
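For context, the exact MBA step that this inexact variant builds on replaces the objective and each constraint by its quadratic majorization: the upper-$\mathcal{C}^2$ property supplies $\rho_i \ge 0$ and subgradients $v_i^k$ with $f_i(y) \le f_i(x^k) + \langle v_i^k, y - x^k \rangle + \frac{\rho_i}{2}\|y - x^k\|^2$, and the next iterate solves the strongly convex subproblem (notation mine, a sketch rather than the paper's exact formulation):

$$x^{k+1} = \operatorname*{argmin}_y \Big\{\, f_0(x^k) + \langle v_0^k, y - x^k \rangle + \tfrac{\rho_0}{2}\|y - x^k\|^2 \;:\; f_i(x^k) + \langle v_i^k, y - x^k \rangle + \tfrac{\rho_i}{2}\|y - x^k\|^2 \le 0,\ i = 1,\dots,m \,\Big\}.$$

Each quadratic constraint is a ball whose center moves with $x^k$, hence "moving balls"; the paper's contribution is an implementable criterion under which this subproblem may be solved only inexactly while retaining full convergence.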

@arXiv_mathDS_bot@mastoxiv.page
2025-10-01 08:39:47

A Nonstationary Ruelle-Perron-Frobenius Theorem
Vaughn Climenhaga, Gregory Hemenway
arxiv.org/abs/2509.25467 arxiv.org/pdf/2509.25467

@arXiv_mathOC_bot@mastoxiv.page
2025-10-03 08:57:41

Exponential convergence of a distributed divide-and-conquer algorithm for constrained convex optimization on networks
Nazar Emirov, Guohui Song, Qiyu Sun
arxiv.org/abs/2510.01511

@arXiv_mathCV_bot@mastoxiv.page
2025-10-07 08:30:32

On rigid $q$-plurisubharmonic functions and $q$-pseudoconvex tube domains in $\mathbb{C}^n$
Thomas Pawlaschyk
arxiv.org/abs/2510.05009 arxi…

@arXiv_mathFA_bot@mastoxiv.page
2025-10-07 08:20:02

H\"{o}lder property of the resolvent of a monotone operator in Banach spaces
Changchi Huang, Jigen Peng, Yuchao Tang
arxiv.org/abs/2510.03774

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:37:40

Locally Linear Convergence for Nonsmooth Convex Optimization via Coupled Smoothing and Momentum
Reza Rahimi Baghbadorani, Sergio Grammatico, Peyman Mohajerin Esfahani
arxiv.org/abs/2511.10239 arxiv.org/pdf/2511.10239 arxiv.org/html/2511.10239
arXiv:2511.10239v1 Announce Type: new
Abstract: We propose an adaptive accelerated smoothing technique for nonsmooth convex optimization problems in which the smoothing update rule is coupled with the momentum parameter. We also extend the setting to the case where the objective is the sum of two nonsmooth functions. Regarding convergence rate, we provide a global sublinear guarantee of $O(1/k)$, which is provably optimal for the studied class of functions, along with a local linear rate when the nonsmooth term satisfies a so-called local strong convexity condition. We validate the performance of our algorithm on several problem classes, including $\ell_1$-regularized regression (the Lasso problem), sparse semidefinite programming (the MaxCut problem), nuclear norm minimization with application to model-free fault diagnosis, and $\ell_1$-regularized model predictive control, to showcase the benefits of the coupling. Interestingly, although our global result guarantees $O(1/k)$ convergence, we consistently observe a practical transient rate of $O(1/k^2)$, followed by the asymptotic linear convergence anticipated by the theory. This two-phase behavior can also be explained in view of the proposed smoothing rule.
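The building blocks the abstract couples are classical, so a minimal sketch may help. The Python below is not the authors' algorithm: it runs Nesterov-style acceleration on a Huber-smoothed Lasso objective and simply shrinks the smoothing parameter as the momentum builds (the schedule mu = 1/k and all names are illustrative assumptions of mine, not taken from the paper).

    import numpy as np

    def huber_grad(x, mu):
        # Gradient of the Huber (Moreau) smoothing of the l1-norm:
        # coordinatewise clip of x/mu to [-1, 1].
        return np.clip(x / mu, -1.0, 1.0)

    def smoothed_accelerated_lasso(A, b, lam, iters=500):
        # Minimal sketch (assumptions mine): accelerated gradient descent on a
        # mu-smoothed Lasso objective 0.5*||Ax - b||^2 + lam*||x||_1, with the
        # smoothing parameter decreased in lock-step with the momentum build-up.
        n = A.shape[1]
        x = np.zeros(n)
        y = x.copy()
        t = 1.0
        L_A = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the quadratic part
        for k in range(1, iters + 1):
            mu = 1.0 / k                     # illustrative smoothing/momentum coupling
            L = L_A + lam / mu               # gradient Lipschitz constant of the smoothed objective
            grad = A.T @ (A @ y - b) + lam * huber_grad(y, mu)
            x_new = y - grad / L
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum (extrapolation) step
            x, t = x_new, t_new
        return x

The point of coupling is visible even in this toy: a fixed mu caps the attainable accuracy at O(mu), while shrinking mu too fast inflates L and kills the step size; tying the smoothing schedule to the momentum parameter is exactly the trade-off the paper analyzes.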

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:37:10

S-D-RSM: Stochastic Distributed Regularized Splitting Method for Large-Scale Convex Optimization Problems
Maoran Wang, Xingju Cai, Yongxin Chen
arxiv.org/abs/2511.10133 arxiv.org/pdf/2511.10133 arxiv.org/html/2511.10133
arXiv:2511.10133v1 Announce Type: new
Abstract: This paper investigates large-scale distributed composite convex optimization problems, motivated by a broad range of applications including multi-agent systems, federated learning, smart grids, wireless sensor networks, and compressed sensing. Stochastic gradient descent (SGD) and its variants are commonly employed to solve such problems. However, existing algorithms often rely on vanishing step sizes or strong convexity assumptions, or entail substantial computational overhead, to ensure convergence or obtain favorable complexity. To bridge the gap between theory and practice, we integrate consensus optimization and operator splitting techniques to develop a novel stochastic splitting algorithm, termed the stochastic distributed regularized splitting method (S-D-RSM). In practice, S-D-RSM performs parallel updates of proximal mappings and gradient information for only a randomly selected subset of agents at each iteration. By introducing regularization terms, it effectively mitigates consensus discrepancies among distributed nodes. In contrast to conventional stochastic methods, our analysis establishes that S-D-RSM achieves global convergence without requiring diminishing step sizes or strong convexity assumptions. Furthermore, it attains an iteration complexity of $\mathcal{O}(1/\epsilon)$ with respect to both the objective value and the consensus error. Numerical experiments show that S-D-RSM achieves up to a 2–3× speedup over state-of-the-art baselines while maintaining comparable or better accuracy. These results not only validate the algorithm's theoretical guarantees but also demonstrate its effectiveness in practical tasks such as compressed sensing and empirical risk minimization.
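For readers who want a feel for the random-subset proximal structure, here is a minimal Python sketch. It is not S-D-RSM itself: it minimizes a quadratic-penalty consensus surrogate of sum_i 0.5*||A_i x - b_i||^2 + lam*||x||_1, updating only a random subset of agents per round, with a regularization term rho*||x_i - z||^2 pulling local copies toward the consensus iterate; the toy objective and all names are mine.

    import numpy as np

    rng = np.random.default_rng(0)

    def soft_threshold(v, tau):
        # Proximal mapping of tau*||.||_1.
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def randomized_consensus_splitting(A_list, b_list, lam, rho=1.0,
                                       frac=0.3, iters=300):
        m, n = len(A_list), A_list[0].shape[1]
        X = np.zeros((m, n))          # local copies, one per agent
        z = np.zeros(n)               # consensus variable
        # Pre-factorize each agent's regularized normal equations.
        chol = [np.linalg.cholesky(A.T @ A + rho * np.eye(n)) for A in A_list]
        for _ in range(iters):
            active = rng.random(m) < frac        # random subset of agents this round
            for i in np.flatnonzero(active):
                # Agent i's regularized proximal step:
                # argmin_x 0.5*||A_i x - b_i||^2 + (rho/2)*||x - z||^2
                rhs = A_list[i].T @ b_list[i] + rho * z
                L = chol[i]
                X[i] = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
            # Consensus step: average the local copies, then apply the prox
            # of the shared l1 term.
            z = soft_threshold(X.mean(axis=0), lam / (rho * m))
        return z

Because the consensus constraint here is only penalized, this sketch solves the problem approximately for a fixed rho; the paper's regularized splitting comes with exact global convergence guarantees, which is precisely what the quadratic-penalty shortcut gives up.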

@arXiv_mathOC_bot@mastoxiv.page
2025-10-06 09:57:59

ProxSTORM -- A Stochastic Trust-Region Algorithm for Nonsmooth Optimization
Robert J. Baraldi, Aurya Javeed, Drew P. Kouri, Katya Scheinberg
arxiv.org/abs/2510.03187

@arXiv_mathOC_bot@mastoxiv.page
2025-10-02 08:05:10

The Non-Attainment Phenomenon in Robust SOCPs
Vinh Nguyen
arxiv.org/abs/2510.00318 arxiv.org/pdf/2510.00318

@arXiv_mathOC_bot@mastoxiv.page
2025-10-02 09:37:00

Non-Euclidean Broximal Point Method: A Blueprint for Geometry-Aware Optimization
Kaja Gruntkowska, Peter Richtárik
arxiv.org/abs/2510.00823

@arXiv_mathOC_bot@mastoxiv.page
2025-10-02 10:06:51

A first-order method for constrained nonconvex–nonconcave minimax problems under a local Kurdyka-Łojasiewicz condition
Zhaosong Lu, Xiangyuan Wang
arxiv.org/abs/2510.01168