Tootfinder

Opt-in global Mastodon full text search. Join the index!

@arXiv_mathOC_bot@mastoxiv.page
2025-10-14 11:46:18

An Efficient Solution Method for Solving Convex Separable Quadratic Optimization Problems
Shaoze Li, Junhao Wu, Cheng Lu, Zhibin Deng, Shu-Cherng Fang
arxiv.org/abs/2510.11554

@arXiv_eessSY_bot@mastoxiv.page
2025-10-14 09:26:48

Computing Safe Control Inputs using Discrete-Time Matrix Control Barrier Functions via Convex Optimization
James Usevitch, Juan Augusto Paredes Salazar, Ankit Goel
arxiv.org/abs/2510.09925

@arXiv_mathOC_bot@mastoxiv.page
2025-10-14 11:57:39

Accelerated stochastic first-order method for convex optimization under heavy-tailed noise
Chuan He, Zhaosong Lu
arxiv.org/abs/2510.11676 a…

@arXiv_csGR_bot@mastoxiv.page
2025-10-14 09:21:28

MATStruct: High-Quality Medial Mesh Computation via Structure-aware Variational Optimization
Ningna Wang, Rui Xu, Yibo Yin, Zichun Zhong, Taku Komura, Wenping Wang, Xiaohu Guo
arxiv.org/abs/2510.10751

@arXiv_csLG_bot@mastoxiv.page
2025-09-10 10:42:11

A Modular Algorithm for Non-Stationary Online Convex-Concave Optimization
Qing-xin Meng, Xia Lei, Jian-wei Liu
arxiv.org/abs/2509.07901 arx…

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 10:10:20

Global Solutions to Non-Convex Functional Constrained Problems with Hidden Convexity
Ilyas Fatkhullin, Niao He, Guanghui Lan, Florian Wolf
arxiv.org/abs/2511.10626 arxiv.org/pdf/2511.10626 arxiv.org/html/2511.10626
arXiv:2511.10626v1 Announce Type: new
Abstract: Constrained non-convex optimization is fundamentally challenging, as global solutions are generally intractable and constraint qualifications may not hold. However, in many applications, including safe policy optimization in control and reinforcement learning, such problems possess hidden convexity, meaning they can be reformulated as convex programs via a nonlinear invertible transformation. Typically, such transformations are implicit or unknown, making the direct link with the convex program impossible. On the other hand, (sub-)gradients with respect to the original variables are often accessible or can be easily estimated, which motivates algorithms that operate directly in the original (non-convex) problem space using standard (sub-)gradient oracles. In this work, we develop the first algorithms to provably solve such non-convex problems to global minima. First, using a modified inexact proximal point method, we establish global last-iterate convergence guarantees with $\widetilde{\mathcal{O}}(\varepsilon^{-3})$ oracle complexity in the non-smooth setting. For smooth problems, we propose a new bundle-level type method based on linearly constrained quadratic subproblems, improving the oracle complexity to $\widetilde{\mathcal{O}}(\varepsilon^{-1})$. Surprisingly, despite non-convexity, our methodology does not require any constraint qualifications, can handle hidden convex equality constraints, and achieves complexities matching those for solving unconstrained hidden convex optimization.
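A minimal sketch of the outer loop this abstract describes: an inexact proximal point method driven only by a subgradient oracle in the original variables. Everything concrete here (the inner subgradient solver, the step sizes, the toy objective) is an assumption for illustration, not the paper's algorithm:

import numpy as np

def inexact_prox_point(subgrad, x0, lam=1.0, outer=50, inner=200):
    # Each outer step approximately solves
    #   min_y f(y) + ||y - x_k||^2 / (2*lam)
    # with a few subgradient iterations (hence "inexact").
    xk = x0.astype(float).copy()
    for _ in range(outer):
        y = xk.copy()
        for t in range(1, inner + 1):
            g = subgrad(y) + (y - xk) / lam  # subgradient of the regularized subproblem
            y -= g / t                       # diminishing inner step size
        xk = y
    return xk

# Toy non-smooth objective f(x) = ||x||_1, standing in for the hidden-convex f.
x_hat = inexact_prox_point(lambda x: np.sign(x), np.array([3.0, -2.0]))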

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 10:04:30

Verification of Sequential Convex Programming for Parametric Non-convex Optimization
Rajiv Sambharya, Nikolai Matni, George Pappas
arxiv.org/abs/2511.10622 arxiv.org/pdf/2511.10622 arxiv.org/html/2511.10622
arXiv:2511.10622v1 Announce Type: new
Abstract: We introduce a verification framework to exactly verify the worst-case performance of sequential convex programming (SCP) algorithms for parametric non-convex optimization. The verification problem is formulated as an optimization problem that maximizes a performance metric (e.g., the suboptimality after a given number of iterations) over parameters constrained to be in a parameter set and iterate sequences consistent with the SCP update rules. Our framework is general, extending the notion of SCP to include both conventional variants such as trust-region, convex-concave, and prox-linear methods, and algorithms that combine convex subproblems with rounding steps, as in relaxing and rounding schemes. Unlike existing analyses that may only provide local guarantees under limited conditions, our framework delivers global worst-case guarantees--quantifying how well an SCP algorithm performs across all problem instances in the specified family. Applications in control, signal processing, and operations research demonstrate that our framework provides, for the first time, global worst-case guarantees for SCP algorithms in the parametric setting.
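The verification problem itself (maximizing worst-case suboptimality over a parameter set subject to the SCP update rules) needs a global solver and does not fit in a few lines, but the kind of update rule it encodes does. Below is a plain convex-concave procedure, one of the SCP variants the abstract says the framework covers; the quadratic test problem and all names are assumptions made for illustration:

import numpy as np

def convex_concave(P, Q, q, x0, iters=30):
    # SCP on the DC objective x'Px - x'Qx - q'x (P, Q PSD, P - Q PD):
    # linearize the concave part at x_k, then solve the convex subproblem exactly.
    x = x0.astype(float).copy()
    for _ in range(iters):
        g = 2 * Q @ x + q              # gradient of the linearized concave part
        x = np.linalg.solve(2 * P, g)  # minimizer of x'Px - g'x
    return x

P, Q, q = 2 * np.eye(2), 0.5 * np.eye(2), np.array([1.0, 0.0])
x_hat = convex_concave(P, Q, q, np.zeros(2))  # converges to q / 3 here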

@arXiv_physicsoptics_bot@mastoxiv.page
2025-10-14 07:58:44

Neuro-inspired automated lens design
Yao Gao, Lei Sun, Shaohua Gao, Qi Jiang, Kailun Yang, Weijian Hu, Xiaolong Qian, Wenyong Li, Luc Van Gool, Kaiwei Wang
arxiv.org/abs/2510.09979

@arXiv_quantph_bot@mastoxiv.page
2025-09-09 11:39:02

Efficient Convex Optimization for Bosonic State Tomography
Shengyong Li, Yanjin Yue, Ying Hu, Rui-Yang Gong, Qianchuan Zhao, Zhihui Peng, Pengtao Song, Zeliang Xiang, Jing Zhang
arxiv.org/abs/2509.06305

@arXiv_mathCO_bot@mastoxiv.page
2025-09-09 10:47:42

Separable convex optimization over indegree polytopes
Nóra A. Borsik, Péter Madarasi
arxiv.org/abs/2509.06182 arxiv.org/pdf/250…

@arXiv_mathOC_bot@mastoxiv.page
2025-10-15 08:44:02

Linear Convergence of a Unified Primal--Dual Algorithm for Convex--Concave Saddle Point Problems with Quadratic Growth
Cody Melcher, Afrooz Jalilzadeh, Erfan Yazdandoost Hamedani
arxiv.org/abs/2510.11990

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:28:40

Convergence analysis of inexact MBA method for constrained upper-$\mathcal{C}^2$ optimization problems
Ruyu Liu, Shaohua Pan
arxiv.org/abs/2511.09940 arxiv.org/pdf/2511.09940 arxiv.org/html/2511.09940
arXiv:2511.09940v1 Announce Type: new
Abstract: This paper concerns a class of constrained optimization problems in which the objective and constraint functions are both upper-$\mathcal{C}^2$. For such nonconvex and nonsmooth optimization problems, we develop an inexact moving balls approximation (MBA) method with a workable inexactness criterion for solving the subproblems. By leveraging a global error bound for the strongly convex program associated with parametric optimization problems, we establish full convergence of the iterate sequence under the partial bounded multiplier property (BMP) and the Kurdyka-Łojasiewicz (KL) property of the constructed potential function, and obtain local convergence rates of the iterate and objective value sequences when the potential function satisfies the KL property of exponent $q\in[1/2,1)$. A verifiable condition is also provided to check whether the potential function satisfies the KL property of exponent $q\in[1/2,1)$ at a given critical point. To the best of our knowledge, this is the first implementable inexact MBA method with a full convergence certificate for constrained nonconvex and nonsmooth optimization problems.
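For intuition, here is what one (exact) moving balls approximation step looks like: both the objective and the constraint are replaced by quadratic upper models around the current iterate, and the resulting convex subproblem is solved. This is a toy rendition under assumed curvature constants, not the paper's inexact criterion:

import numpy as np
from scipy.optimize import minimize

def mba(f, gf, c, gc, x0, Lf=10.0, Lc=10.0, iters=20):
    x = x0.astype(float).copy()
    for _ in range(iters):
        # Quadratic upper models of f and c at the current iterate
        # (the defaults x=x freeze the iterate inside the lambdas).
        model = lambda y, x=x: f(x) + gf(x) @ (y - x) + Lf / 2 * (y - x) @ (y - x)
        ball = lambda y, x=x: -(c(x) + gc(x) @ (y - x) + Lc / 2 * (y - x) @ (y - x))
        x = minimize(model, x, method="SLSQP",
                     constraints=[{"type": "ineq", "fun": ball}]).x  # g(y) >= 0 form
    return x

# Toy upper-C^2 data: minimize ||x - a||^2 outside the unit ball, i.e.
# c(x) = 1 - ||x||^2 <= 0, a concave (hence upper-C^2) constraint.
a = np.array([0.2, 0.1])
x_hat = mba(lambda x: (x - a) @ (x - a), lambda x: 2 * (x - a),
            lambda x: 1 - x @ x, lambda x: -2 * x, np.array([2.0, 0.0]))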

@arXiv_csCC_bot@mastoxiv.page
2025-09-01 07:38:02

Some Applications and Limitations of Convex Optimization Hierarchies for Discrete and Continuous Optimization Problems
Mrinalkanti Ghosh
arxiv.org/abs/2508.21327

@arXiv_quantph_bot@mastoxiv.page
2025-10-03 10:32:41

A quantum analogue of convex optimization
Eunou Lee
arxiv.org/abs/2510.02151 arxiv.org/pdf/2510.02151

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:37:10

S-D-RSM: Stochastic Distributed Regularized Splitting Method for Large-Scale Convex Optimization Problems
Maoran Wang, Xingju Cai, Yongxin Chen
arxiv.org/abs/2511.10133 arxiv.org/pdf/2511.10133 arxiv.org/html/2511.10133
arXiv:2511.10133v1 Announce Type: new
Abstract: This paper investigates large-scale distributed composite convex optimization problems, motivated by a broad range of applications including multi-agent systems, federated learning, smart grids, wireless sensor networks, and compressed sensing. Stochastic gradient descent (SGD) and its variants are commonly employed to solve such problems. However, existing algorithms often rely on vanishing step sizes, strong convexity assumptions, or entail substantial computational overhead to ensure convergence or obtain favorable complexity. To bridge the gap between theory and practice, we integrate consensus optimization and operator splitting techniques (see Problem Reformulation) to develop a novel stochastic splitting algorithm, termed the \emph{stochastic distributed regularized splitting method} (S-D-RSM). In practice, S-D-RSM performs parallel updates of proximal mappings and gradient information for only a randomly selected subset of agents at each iteration. By introducing regularization terms, it effectively mitigates consensus discrepancies among distributed nodes. In contrast to conventional stochastic methods, our theoretical analysis establishes that S-D-RSM achieves global convergence without requiring diminishing step sizes or strong convexity assumptions. Furthermore, it achieves an iteration complexity of $\mathcal{O}(1/\epsilon)$ with respect to both the objective function value and the consensus error. Numerical experiments show that S-D-RSM achieves up to a 2--3$\times$ speedup over state-of-the-art baselines, while maintaining comparable or better accuracy. These results not only validate the algorithm's theoretical guarantees but also demonstrate its effectiveness in practical tasks such as compressed sensing and empirical risk minimization.
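As a rough picture of the algorithmic pattern (a random subset of agents takes proximal steps regularized toward a consensus variable), here is a generic randomized splitting loop. It is deliberately simplified and is not the paper's S-D-RSM; the averaging step, step size, and quadratic toy losses are all assumptions:

import numpy as np

def randomized_splitting(proxes, dim, rho=1.0, batch=2, iters=300, seed=0):
    rng = np.random.default_rng(seed)
    n = len(proxes)
    X = np.zeros((n, dim))   # local variables, one per agent
    z = np.zeros(dim)        # consensus variable
    for _ in range(iters):
        for i in rng.choice(n, size=batch, replace=False):  # random subset of agents
            X[i] = proxes[i](z, 1.0 / rho)  # prox step regularized toward z
        z = X.mean(axis=0)                  # consensus update
    return z

# Quadratic local losses f_i(x) = ||x - b_i||^2 / 2 have closed-form proxes.
B = np.arange(6.0).reshape(3, 2)
proxes = [lambda v, t, bi=bi: (v + t * bi) / (1 + t) for bi in B]
z_hat = randomized_splitting(proxes, dim=2)  # approaches the mean of the b_i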

@arXiv_mathNA_bot@mastoxiv.page
2025-09-01 09:30:52

Low-Rank Regularized Convex-Non-Convex Problems for Image Segmentation or Completion
Mohamed El Guide, Anas El Hachimi, Khalide Jbilou, Lothar Reichel
arxiv.org/abs/2508.21765

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:37:40

Locally Linear Convergence for Nonsmooth Convex Optimization via Coupled Smoothing and Momentum
Reza Rahimi Baghbadorani, Sergio Grammatico, Peyman Mohajerin Esfahani
arxiv.org/abs/2511.10239 arxiv.org/pdf/2511.10239 arxiv.org/html/2511.10239
arXiv:2511.10239v1 Announce Type: new
Abstract: We propose an adaptive accelerated smoothing technique for nonsmooth convex optimization in which the smoothing update rule is coupled with the momentum parameter. We also extend the setting to the case where the objective function is the sum of two nonsmooth functions. Regarding convergence rate, we provide a global sublinear guarantee of $O(1/k)$, which is provably optimal for the studied class of functions, along with a local linear rate when the nonsmooth term satisfies a so-called local strong convexity condition. We validate the performance of our algorithm on several problem classes, including regression with the $\ell_1$-norm (the Lasso problem), sparse semidefinite programming (the MaxCut problem), nuclear norm minimization with an application to model-free fault diagnosis, and $\ell_1$-regularized model predictive control, to showcase the benefits of the coupling. An interesting observation is that although our global result guarantees $O(1/k)$ convergence, we consistently observe a practical transient rate of $O(1/k^2)$, followed by the asymptotic linear convergence anticipated by the theory. This two-phase behavior can also be explained in view of the proposed smoothing rule.
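A compact way to see the coupling: run an accelerated gradient method on a smoothed surrogate and shrink the smoothing parameter with the same counter that drives the momentum. The schedule below (mu_k = mu0/(k+1), beta_k = k/(k+3)) is a stand-in assumption, not the paper's update rule; the test problem is a Lasso-type objective with a Huber-smoothed l1 term:

import numpy as np

def smoothed_accel_lasso(A, b, lam=0.1, mu0=1.0, iters=500):
    L0 = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the smooth part
    x = y = np.zeros(A.shape[1])
    for k in range(iters):
        mu = mu0 / (k + 1)                   # smoothing shrinks along the iterations
        g = A.T @ (A @ y - b) + lam * np.clip(y / mu, -1.0, 1.0)  # Huber gradient
        x_new = y - g / (L0 + lam / mu)      # step uses the mu-dependent Lipschitz bound
        y = x_new + (k / (k + 3.0)) * (x_new - x)  # momentum coupled to the same k
        x = x_new
    return x

rng = np.random.default_rng(1)
A, b = rng.normal(size=(20, 50)), rng.normal(size=20)
x_hat = smoothed_accel_lasso(A, b)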

@arXiv_eessSP_bot@mastoxiv.page
2025-10-10 08:28:09

Rate Maximization for UAV-assisted ISAC System with Fluid Antennas
Xingtao Yang, Zhenghe Guo, Siyun Liang, Zhaohui Yang, Chen Zhu, Zhaoyang Zhang
arxiv.org/abs/2510.07668

@arXiv_csIT_bot@mastoxiv.page
2025-09-30 09:36:41

Post-disaster Max-Min Rate Optimization for Multi-UAV RSMA Network in Obstacle Environments
Qingyang Wang, Zhuohui Yao, Wenchi Cheng, Xiao Zheng
arxiv.org/abs/2509.23908

@arXiv_csDC_bot@mastoxiv.page
2025-09-03 09:38:23

A Continuous Energy Ising Machine Leveraging Difference-of-Convex Programming
Debraj Banerjee, Santanu Mahapatra, Kunal Narayan Chaudhury
arxiv.org/abs/2509.01928

@arXiv_mathOC_bot@mastoxiv.page
2025-10-13 08:44:40

CoNeT-GIANT: A compressed Newton-type fully distributed optimization algorithm
Souvik Das, Subhrakanti Dey
arxiv.org/abs/2510.08806 arxiv.o…

@arXiv_quantph_bot@mastoxiv.page
2025-10-08 10:32:49

Self-concordant Schrödinger operators: spectral gaps and optimization without condition numbers
Sander Gribling, Simon Apers, Harold Nieuwboer, Michael Walter
arxiv.org/abs/2510.06115

@arXiv_econEM_bot@mastoxiv.page
2025-09-03 08:11:13

On the Estimation of Multinomial Logit and Nested Logit Models: A Conic Optimization Approach
Hoang Giang Pham, Tien Mai, Minh Ha Hoang
arxiv.org/abs/2509.01562

@arXiv_csDS_bot@mastoxiv.page
2025-08-27 07:34:32

Integral Online Algorithms for Set Cover and Load Balancing with Convex Objectives
Thomas Kesselheim, Marco Molinaro, Kalen Patton, Sahil Singla
arxiv.org/abs/2508.18383

@arXiv_csGR_bot@mastoxiv.page
2025-09-05 07:57:31

Memory Optimization for Convex Hull Support Point Queries
Michael Greer
arxiv.org/abs/2509.03753 arxiv.org/pdf/2509.03753

@arXiv_statML_bot@mastoxiv.page
2025-10-01 09:13:08

Neural Optimal Transport Meets Multivariate Conformal Prediction
Vladimir Kondratyev, Alexander Fishkov, Nikita Kotelevskii, Mahmoud Hegazy, Remi Flamary, Maxim Panov, Eric Moulines
arxiv.org/abs/2509.25444

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:58:00

Measuring dissimilarity between convex cones by means of max-min angles
Welington de Oliveira, Valentina Sessa, David Sossa
arxiv.org/abs/2511.10483 arxiv.org/pdf/2511.10483 arxiv.org/html/2511.10483
arXiv:2511.10483v1 Announce Type: new
Abstract: This work introduces a novel dissimilarity measure between two convex cones, based on the max-min angle between them. We demonstrate that this measure is closely related to the Pompeiu-Hausdorff distance, a well-established metric for comparing compact sets. Furthermore, we examine cone configurations in which the measure admits simplified or analytic forms. For the specific case of polyhedral cones, a nonconvex cutting-plane method is deployed to compute, at least approximately, the measure between them. Our approach builds on a tailored version of Kelley's cutting-plane algorithm, which involves solving a challenging master program per iteration. When this master program is solved locally, our method yields an angle satisfying certain necessary optimality conditions of the underlying nonconvex optimization problem that defines the dissimilarity measure between the cones. As an application of the proposed mathematical and algorithmic framework, we address the image-set classification task under limited data conditions, a task that falls within the scope of the \emph{Few-Shot Learning} paradigm. In this context, image sets belonging to the same class are modeled as polyhedral cones, and our dissimilarity measure proves useful for deciding whether two image sets belong to the same class.
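To make the object concrete: for polyhedral cones given by generator matrices, the inner minimization is a projection onto a cone (an NNLS problem), and the outer maximization can be crudely approximated by sampling rays, in place of the paper's tailored cutting-plane method. A sketch under those assumptions:

import numpy as np
from scipy.optimize import nnls

def min_angle_to_cone(u, G):
    # Smallest angle between the ray u and cone(G) = {G w : w >= 0},
    # via the projection of u onto the cone (assumed nonzero here).
    w, _ = nnls(G, u)
    p = G @ w
    cos = p @ u / (np.linalg.norm(p) * np.linalg.norm(u))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def maxmin_angle(G1, G2, samples=2000, seed=0):
    rng = np.random.default_rng(seed)
    rays = (G1 @ rng.random((G1.shape[1], samples))).T  # random rays in cone(G1)
    return max(min_angle_to_cone(u, G2) for u in rays)

G1 = np.eye(2)                           # nonnegative orthant in R^2
G2 = np.array([[1.0, 1.0], [0.0, 1.0]])  # cone spanned by (1,0) and (1,1)
theta = maxmin_angle(G1, G2)             # approaches pi/4 for this pair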

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:35:50

dHPR: A Distributed Halpern Peaceman--Rachford Method for Non-smooth Distributed Optimization Problems
Zhangcheng Feng, Defeng Sun, Yancheng Yuan, Guojun Zhang
arxiv.org/abs/2511.10069 arxiv.org/pdf/2511.10069 arxiv.org/html/2511.10069
arXiv:2511.10069v1 Announce Type: new
Abstract: This paper introduces the distributed Halpern Peaceman--Rachford (dHPR) method, an efficient algorithm for solving distributed convex composite optimization problems with non-smooth objectives, which achieves a non-ergodic $O(1/k)$ iteration complexity with respect to the Karush--Kuhn--Tucker residual. By leveraging the symmetric Gauss--Seidel decomposition, dHPR effectively decouples the linear operators in the objective functions and consensus constraints while maintaining parallelizability and avoiding additional large proximal terms, leading to a decentralized implementation with provably fast convergence. The superior performance of dHPR is demonstrated through comprehensive numerical experiments on distributed LASSO, group LASSO, and $L_1$-regularized logistic regression problems.
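The core iteration is easy to state in the centralized two-operator case: apply the Peaceman--Rachford operator, then anchor back toward the starting point with Halpern weights 1/(k+2). The distributed part of dHPR (the symmetric Gauss--Seidel decoupling of consensus constraints) is omitted here, and the toy proxes are assumptions:

import numpy as np

def halpern_pr(prox_f, prox_g, x0, iters=500):
    x = x0.copy()
    for k in range(iters):
        y = prox_f(x)
        z = prox_g(2 * y - x)
        Tx = x + 2 * (z - y)                          # Peaceman-Rachford operator
        x = x0 / (k + 2) + (1 - 1.0 / (k + 2)) * Tx   # Halpern anchoring
    return prox_f(x)

# f = 0.5*||. - b||^2 and g = tau*||.||_1, both with unit step size.
b, tau = np.array([1.5, -0.2, 0.8]), 0.5
x_hat = halpern_pr(lambda v: (v + b) / 2,
                   lambda v: np.sign(v) * np.maximum(np.abs(v) - tau, 0.0),
                   np.zeros(3))  # recovers the soft-thresholded b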

@arXiv_statME_bot@mastoxiv.page
2025-10-06 08:18:09

Bridging the Prediction Error Method and Subspace Identification: A Weighted Null Space Fitting Method
Jiabao He, S. Joe Qin, Håkan Hjalmarsson
arxiv.org/abs/2510.02529

@arXiv_mathOC_bot@mastoxiv.page
2025-10-13 09:06:30

On the Strength of Linear Relaxations in Ordered Optimization
Víctor Blanco, Diego Laborda, Miguel Martínez-Antón
arxiv.org/abs/2510.09166

@arXiv_csLG_bot@mastoxiv.page
2025-09-23 12:44:40

Global Optimization via Softmin Energy Minimization
Andrea Agazzi, Vittorio Carlei, Marco Romito, Samuele Saviozzi
arxiv.org/abs/2509.17815

@arXiv_eessSY_bot@mastoxiv.page
2025-10-10 09:44:59

General formulation of an analytic, Lipschitz continuous control allocation for thrust-vectored controlled rigid-bodies
Frank Mukwege, Tam Willy Nguyen, Emanuele Garone
arxiv.org/abs/2510.08119

@arXiv_csCG_bot@mastoxiv.page
2025-09-05 10:30:52

Crosslisted article(s) found for cs.CG. arxiv.org/list/cs.CG/new
[1/1]:
- Memory Optimization for Convex Hull Support Point Queries
Michael Greer

@arXiv_eessSP_bot@mastoxiv.page
2025-10-07 08:12:01

Pinching Antenna Systems (PASS) for Cell-Free Communications
Haochen Li
arxiv.org/abs/2510.03628 arxiv.org/pdf/2510.03628

@arXiv_mathOC_bot@mastoxiv.page
2025-10-14 08:49:38

Quantum Alternating Direction Method of Multipliers for Semidefinite Programming
Hantao Nie, Dong An, Zaiwen Wen
arxiv.org/abs/2510.10056 a…

@arXiv_mathOC_bot@mastoxiv.page
2025-10-15 10:03:51

Heuristic Bundle Upper Bound Based Polyhedral Bundle Method for Semidefinite Programming
Zilong Cui, Ran Gu
arxiv.org/abs/2510.12374 arxiv.…

@arXiv_mathOC_bot@mastoxiv.page
2025-09-11 09:51:53

An Inexact Proximal Framework for Nonsmooth Riemannian Difference-of-Convex Optimization
Bo Jiang, Meng Xu, Xingju Cai, Ya-Feng Liu
arxiv.org/abs/2509.08561

@arXiv_mathOC_bot@mastoxiv.page
2025-09-12 07:56:29

Convexity of Optimization Curves: Local Sharp Thresholds, Robustness Impossibility, and New Counterexamples
Le Duc Hieu
arxiv.org/abs/2509.08954

@arXiv_mathOC_bot@mastoxiv.page
2025-09-12 09:10:19

A preconditioned third-order implicit-explicit algorithm with a difference of varying convex functions and extrapolation
Kelin Wu, Hongpeng Sun
arxiv.org/abs/2509.09391

@arXiv_statML_bot@mastoxiv.page
2025-09-30 11:37:11

On Spectral Learning for Odeco Tensors: Perturbation, Initialization, and Algorithms
Arnab Auddy, Ming Yuan
arxiv.org/abs/2509.25126 arxiv.…

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 13:23:10

Replaced article(s) found for math.OC. arxiv.org/list/math.OC/new
[1/1]:
- A robust BFGS algorithm for unconstrained nonlinear optimization problems
Yaguang Yang
arxiv.org/abs/1212.5929
- Quantum computing and the stable set problem
Aljaž Krpan, Janez Povh, Dunja Pucher
arxiv.org/abs/2405.12845 mastoxiv.page/@arXiv_mathOC_bo
- Mean Field Game with Reflected Jump Diffusion Dynamics: A Linear Programming Approach
Zongxia Liang, Xiang Yu, Keyu Zhang
arxiv.org/abs/2508.20388 mastoxiv.page/@arXiv_mathOC_bo
- Differential Dynamic Programming for the Optimal Control Problem with an Ellipsoidal Target Set a...
Sungjun Eom, Gyunghoon Park
arxiv.org/abs/2509.07546 mastoxiv.page/@arXiv_mathOC_bo
- On the Moreau envelope properties of weakly convex functions
Marien Renaud, Arthur Leclaire, Nicolas Papadakis
arxiv.org/abs/2509.13960 mastoxiv.page/@arXiv_mathOC_bo
- Automated algorithm design via Nevanlinna-Pick interpolation
Ibrahim K. Ozaslan, Tryphon T. Georgiou, Mihailo R. Jovanovic
arxiv.org/abs/2509.21416 mastoxiv.page/@arXiv_mathOC_bo
- Optimal Control of a Bioeconomic Crop-Energy System with Energy Reinvestment
Othman Cherkaoui Dekkaki
arxiv.org/abs/2510.11381 mastoxiv.page/@arXiv_mathOC_bo
- Point Convergence Analysis of the Accelerated Gradient Method for Multiobjective Optimization: Co...
Yingdong Yin
arxiv.org/abs/2510.26382 mastoxiv.page/@arXiv_mathOC_bo
- History-Aware Adaptive High-Order Tensor Regularization
Chang He, Bo Jiang, Yuntian Jiang, Chuwen Zhang, Shuzhong Zhang
arxiv.org/abs/2511.05788
- Equivalence of entropy solutions and gradient flows for pressureless 1D Euler systems
José Antonio Carrillo, Sondre Tesdal Galtung
arxiv.org/abs/2312.04932 mastoxiv.page/@arXiv_mathAP_bo
- Kernel Modelling of Fading Memory Systems
Yongkang Huo, Thomas Chaffey, Rodolphe Sepulchre
arxiv.org/abs/2403.11945 mastoxiv.page/@arXiv_eessSY_bo
- The Maximum Theoretical Ground Speed of the Wheeled Vehicle
Altay Zhakatayev, Mukatai Nemerebayev
arxiv.org/abs/2502.15341 mastoxiv.page/@arXiv_physicscl
- Hessian stability and convergence rates for entropic and Sinkhorn potentials via semiconcavity
Giacomo Greco, Luca Tamanini
arxiv.org/abs/2504.11133 mastoxiv.page/@arXiv_mathPR_bo
- Optimizing the ground state energy of the three-dimensional magnetic Dirichlet Laplacian with con...
Matthias Baur
arxiv.org/abs/2504.21597 mastoxiv.page/@arXiv_mathph_bo
- A localized consensus-based sampling algorithm
Arne Bouillon, Alexander Bodard, Panagiotis Patrinos, Dirk Nuyens, Giovanni Samaey
arxiv.org/abs/2505.24861 mastoxiv.page/@arXiv_mathNA_bo
- A Novel Sliced Fused Gromov-Wasserstein Distance
Moritz Piening, Robert Beinert
arxiv.org/abs/2508.02364 mastoxiv.page/@arXiv_csLG_bot/
- Minimal Regret Walras Equilibria for Combinatorial Markets via Duality, Integrality, and Sensitiv...
Alo\"is Duguet, Tobias Harks, Martin Schmidt, Julian Schwarz
arxiv.org/abs/2511.09021 mastoxiv.page/@arXiv_csGT_bot/

@arXiv_mathNA_bot@mastoxiv.page
2025-09-03 12:38:03

User Manual for Model-based Imaging Inverse Problem
Xiaodong Wang
arxiv.org/abs/2509.01572 arxiv.org/pdf/2509.01572

@arXiv_csLG_bot@mastoxiv.page
2025-09-23 12:46:50

GaussianPSL: A novel framework based on Gaussian Splatting for exploring the Pareto frontier in multi-criteria optimization
Phuong Mai Dinh, Van-Nam Huynh
arxiv.org/abs/2509.17889

@arXiv_quantph_bot@mastoxiv.page
2025-10-09 10:39:31

A Duality Theorem for Classical-Quantum States with Applications to Complete Relational Program Logics
Gilles Barthe, Minbo Gao, Jam Kabeer Ali Khan, Matthijs Muis, Ivan Renison, Keiya Sakabe, Michael Walter, Yingte Xu, Li Zhou
arxiv.org/abs/2510.07051

@arXiv_csIT_bot@mastoxiv.page
2025-09-23 09:04:00

Communication over LQG Control Systems: A Convex Optimization Approach to Capacity
Aharon Rips, Oron Sabag
arxiv.org/abs/2509.17002 arxiv.o…

@arXiv_mathOC_bot@mastoxiv.page
2025-09-10 09:02:31

First-order SDSOS-convex semi-algebraic optimization and exact SOCP relaxations
Chengmiao Yang, Liguo Jiao, Jae Hyoung Lee
arxiv.org/abs/2509.07418

@arXiv_mathOC_bot@mastoxiv.page
2025-09-12 08:05:39

Regularization in Data-driven Predictive Control: A Convex Relaxation Perspective
Xu Shang, Yang Zheng
arxiv.org/abs/2509.09027 arxiv.org/p…

@arXiv_mathOC_bot@mastoxiv.page
2025-09-10 08:00:51

Inertial accelerated primal-dual algorithms for non-smooth convex optimization problems with linear equality constraints
Huan Zhang, Xiangkai Sun, Shengjie Li, Kok Lay Teo
arxiv.org/abs/2509.07306

@arXiv_eessSY_bot@mastoxiv.page
2025-09-04 09:07:01

Hidden Convexity in Active Learning: A Convexified Online Input Design for ARX Systems
Nicolas Chatzikiriakos, Bowen Song, Philipp Rank, Andrea Iannelli
arxiv.org/abs/2509.03257

@arXiv_mathOC_bot@mastoxiv.page
2025-10-13 08:15:40

Re$^3$MCN: Cubic Newton Variance Reduction Momentum Quadratic Regularization for Finite-sum Non-convex Problems
Dmitry Pasechnyuk-Vilensky, Dmitry Kamzolov, Martin Takáč
arxiv.org/abs/2510.08714

@arXiv_csLG_bot@mastoxiv.page
2025-08-29 10:30:11

Fast Convergence Rates for Subsampled Natural Gradient Algorithms on Quadratic Model Problems
Gil Goldshlager, Jiang Hu, Lin Lin
arxiv.org/abs/2508.21022

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:39:30

Halpern Acceleration of the Inexact Proximal Point Method of Rockafellar
Liwei Zhang, Fanli Zhuang, Ning Zhang
arxiv.org/abs/2511.10372 arxiv.org/pdf/2511.10372 arxiv.org/html/2511.10372
arXiv:2511.10372v1 Announce Type: new
Abstract: This paper investigates a Halpern acceleration of the inexact proximal point method for solving maximal monotone inclusion problems in Hilbert spaces. The proposed Halpern inexact proximal point method (HiPPM) is shown to be globally convergent, and a unified framework is developed to analyze its worst-case convergence rate. Under mild summability conditions on the inexactness tolerances, HiPPM achieves an $\mathcal{O}(1/k^{2})$ rate in terms of the squared fixed-point residual. Furthermore, under an additional mild condition, the method retains a fast linear convergence rate. Building upon this framework, we further extend the acceleration technique to constrained convex optimization through the augmented Lagrangian formulation. In analogy to Rockafellar's classical results, the resulting accelerated inexact augmented Lagrangian method inherits the convergence rate and complexity guarantees of HiPPM. The analysis thus provides a unified theoretical foundation for accelerated inexact proximal algorithms and their augmented Lagrangian extensions.
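The accelerated iteration itself is one line: average the resolvent step with the anchor x0 using weights 1/(k+2). A sketch with an exact resolvent of an assumed monotone linear operator (the paper's contribution is handling inexact resolvents and the augmented Lagrangian extension):

import numpy as np

def halpern_ppm(resolvent, x0, iters=1000):
    # x_{k+1} = x0/(k+2) + (k+1)/(k+2) * J(x_k): the Halpern-anchored
    # proximal point step for the inclusion 0 in F(x).
    x = x0.copy()
    for k in range(iters):
        x = (x0 + (k + 1) * resolvent(x)) / (k + 2)
    return x

# F(x) = M x with M = 0.1*I plus a skew-symmetric part, a maximal monotone
# operator; its resolvent with unit step is (I + M)^{-1}.
M = np.array([[0.1, 1.0], [-1.0, 0.1]])
J = np.linalg.inv(np.eye(2) + M)
x_hat = halpern_ppm(lambda v: J @ v, np.array([1.0, 1.0]))  # tends to the zero of F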

@arXiv_statML_bot@mastoxiv.page
2025-10-01 09:53:28

Sharpness of Minima in Deep Matrix Factorization: Exact Expressions
Anil Kamber, Rahul Parhi
arxiv.org/abs/2509.25783 arxiv.org/pdf/2509.25…

@arXiv_mathOC_bot@mastoxiv.page
2025-09-11 09:18:53

Nesterov acceleration for strongly convex-strongly concave bilinear saddle point problems: discrete and continuous-time approaches
Xin He, Ya-Ping Fang
arxiv.org/abs/2509.08258

@arXiv_mathNA_bot@mastoxiv.page
2025-09-04 09:06:11

Convergence for adaptive resampling of random Fourier features
Xin Huang, Aku Kammonen, Anamika Pandey, Mattias Sandberg, Erik von Schwerin, Anders Szepessy, Raúl Tempone
arxiv.org/abs/2509.03151

@arXiv_mathOC_bot@mastoxiv.page
2025-09-08 08:28:46

Universal Representation of Generalized Convex Functions and their Gradients
Moeen Nehzati
arxiv.org/abs/2509.04477 arxiv.org/pdf/2509.0447…

@arXiv_mathOC_bot@mastoxiv.page
2025-09-10 09:44:21

A Monte Carlo Approach to Nonsmooth Convex Optimization via Proximal Splitting Algorithms
Nicholas Di, Eric C. Chi, Samy Wu Fung
arxiv.org/abs/2509.07914

@arXiv_mathOC_bot@mastoxiv.page
2025-09-10 09:41:21

Decentralized Online Riemannian Optimization Beyond Hadamard Manifolds
Emre Sahinoglu, Shahin Shahrampour
arxiv.org/abs/2509.07779 arxiv.or…

@arXiv_mathOC_bot@mastoxiv.page
2025-09-11 11:55:19

Replaced article(s) found for math.OC. arxiv.org/list/math.OC/new
[1/1]:
- Damped Proximal Augmented Lagrangian Method for weakly-Convex Problems with Convex Constraints
Hari Dahal, Wei Liu, Yangyang Xu

@arXiv_mathOC_bot@mastoxiv.page
2025-10-03 09:44:11

Smooth Quasar-Convex Optimization with Constraints
David Martínez-Rubio
arxiv.org/abs/2510.01943 arxiv.org/pdf/2510.01943

@arXiv_mathOC_bot@mastoxiv.page
2025-10-06 09:41:39

Long-Time Analysis of Stochastic Heavy Ball Dynamics for Convex Optimization and Monotone Equations
Radu Ioan Bot, Chiara Schindler
arxiv.org/abs/2510.02951

@arXiv_mathOC_bot@mastoxiv.page
2025-09-04 08:15:31

A Proximal Descent Method for Minimizing Weakly Convex Optimization
Feng-Yi Liao, Yang Zheng
arxiv.org/abs/2509.02804 arxiv.org/pdf/2509.02…

@arXiv_mathOC_bot@mastoxiv.page
2025-10-01 10:21:48

The Trajectory Bundle Method: Unifying Sequential-Convex Programming and Sampling-Based Trajectory Optimization
Kevin Tracy, John Z. Zhang, Jon Arrizabalaga, Stefan Schaal, Yuval Tassa, Tom Erez, Zachary Manchester
arxiv.org/abs/2509.26575

@arXiv_mathOC_bot@mastoxiv.page
2025-09-10 08:37:41

Reinforcement learning for online hyperparameter tuning in convex quadratic programming
Jeremy Bertoncini, Alberto De Marchi, Matthias Gerdts, Simon Gottschalk
arxiv.org/abs/2509.07404

@arXiv_mathOC_bot@mastoxiv.page
2025-10-06 09:41:59

Estimating Sequences with Memory for Minimizing Convex Non-smooth Composite Functions
Endrit Dosti, Sergiy A. Vorobyov, Themistoklis Charalambous
arxiv.org/abs/2510.02965

@arXiv_mathOC_bot@mastoxiv.page
2025-10-07 10:45:42

Convex Pollution Control of Wastewater Treatment Systems
Joshua Taylor
arxiv.org/abs/2510.03918 arxiv.org/pdf/2510.03918

@arXiv_mathOC_bot@mastoxiv.page
2025-10-03 08:57:41

Exponential convergence of a distributed divide-and-conquer algorithm for constrained convex optimization on networks
Nazar Emirov, Guohui Song, Qiyu Sun
arxiv.org/abs/2510.01511

@arXiv_mathOC_bot@mastoxiv.page
2025-10-02 08:53:51

A primal-dual splitting algorithm with convex combination and larger step sizes for composite monotone inclusion problems
Xiaokai Chang, Junfeng Yang, Jianchao Bai, Jianxiong Cao
arxiv.org/abs/2510.00437

@arXiv_mathOC_bot@mastoxiv.page
2025-10-06 09:57:59

ProxSTORM -- A Stochastic Trust-Region Algorithm for Nonsmooth Optimization
Robert J. Baraldi, Aurya Javeed, Drew P. Kouri, Katya Scheinberg
arxiv.org/abs/2510.03187

@arXiv_mathOC_bot@mastoxiv.page
2025-09-24 08:50:34

Convergence, Duality and Well-Posedness in Convex Bilevel Optimization
Khanh-Hung Giang-Tran, Nam Ho-Nguyen, Fatma Kılınç-Karzan, Lingqing Shen
arxiv.org/abs/2509.18304

@arXiv_mathOC_bot@mastoxiv.page
2025-09-30 11:34:01

Simplex Frank-Wolfe: Linear Convergence and Its Numerical Efficiency for Convex Optimization over Polytopes
Haoning Wang, Houduo Qi, Liping Zhang
arxiv.org/abs/2509.24279

@arXiv_mathOC_bot@mastoxiv.page
2025-09-04 09:25:11

Faster Gradient Methods for Highly-smooth Stochastic Bilevel Optimization
Lesi Chen, Junru Li, Jingzhao Zhang
arxiv.org/abs/2509.02937 arxi…

@arXiv_mathOC_bot@mastoxiv.page
2025-08-29 09:11:11

Revisit Stochastic Gradient Descent for Strongly Convex Objectives: Tight Uniform-in-Time Bounds
Kang Chen, Yasong Feng, Tianyu Wang
arxiv.org/abs/2508.20823

@arXiv_mathOC_bot@mastoxiv.page
2025-10-01 08:17:27

Policy Optimization in Robust Control: Weak Convexity and Subgradient Methods
Yuto Watanabe, Feng-Yi Liao, Yang Zheng
arxiv.org/abs/2509.25633

@arXiv_mathOC_bot@mastoxiv.page
2025-09-08 11:56:15

Replaced article(s) found for math.OC. arxiv.org/list/math.OC/new
[1/1]:
- Conditions for representation of a function of many arguments as the difference of convex functions
Igor Proudnikov

@arXiv_mathOC_bot@mastoxiv.page
2025-09-18 09:09:21

On the Moreau envelope properties of weakly convex functions
Marien Renaud, Arthur Leclaire, Nicolas Papadakis
arxiv.org/abs/2509.13960 arx…

@arXiv_mathOC_bot@mastoxiv.page
2025-10-02 09:37:00

Non-Euclidean Broximal Point Method: A Blueprint for Geometry-Aware Optimization
Kaja Gruntkowska, Peter Richtárik
arxiv.org/abs/2510.00823

@arXiv_mathOC_bot@mastoxiv.page
2025-09-29 09:59:17

A dynamical formulation of multi-marginal optimal transport
Brendan Pass, Yair Shenfeld
arxiv.org/abs/2509.22494 arxiv.org/pdf/2509.22494…

@arXiv_mathOC_bot@mastoxiv.page
2025-09-01 09:14:52

An Optimistic Gradient Tracking Method for Distributed Minimax Optimization
Yan Huang, Jinming Xu, Jiming Chen, Karl Henrik Johansson
arxiv.org/abs/2508.21431

@arXiv_mathOC_bot@mastoxiv.page
2025-09-26 07:49:51

Automated algorithm design for convex optimization problems with linear equality constraints
Ibrahim K. Ozaslan, Wuwei Wu, Jie Chen, Tryphon T. Georgiou, Mihailo R. Jovanovic
arxiv.org/abs/2509.20746

@arXiv_mathOC_bot@mastoxiv.page
2025-09-05 08:45:01

Duality between polyhedral approximation of value functions and optimal quantization of measures
Abdellah Bulaich Mehamdi, Wim van Ackooij, Luce Brotcorne, Stéphane Gaubert, Quentin Jacquet
arxiv.org/abs/2509.04101

@arXiv_mathOC_bot@mastoxiv.page
2025-08-20 08:25:50

First Order Algorithm on an Optimization Problem with Improved Convergence when Problem is Convex
Chee-Khian Sim
arxiv.org/abs/2508.13302 a…

@arXiv_mathOC_bot@mastoxiv.page
2025-09-08 08:40:10

Provably data-driven projection method for quadratic programming
Anh Tuan Nguyen, Viet Anh Nguyen
arxiv.org/abs/2509.04524 arxiv.org/pdf/25…

@arXiv_mathOC_bot@mastoxiv.page
2025-09-18 08:59:31

Complexity Bounds for Smooth Convex Multiobjective Optimization
Phillipe R. Sampaio
arxiv.org/abs/2509.13550 arxiv.org/pdf/2509.13550

@arXiv_mathOC_bot@mastoxiv.page
2025-10-02 14:03:26

Replaced article(s) found for math.OC. arxiv.org/list/math.OC/new
[1/2]:
- Gauges and Accelerated Optimization over Smooth and/or Strongly Convex Sets
Ning Liu, Benjamin Grimmer

@arXiv_mathOC_bot@mastoxiv.page
2025-09-25 09:52:52

An Alternating Direction Method of Multipliers for Topology Optimization
Harsh Choudhary, Sven Leyffer, Dominic Yang
arxiv.org/abs/2509.19888

@arXiv_mathOC_bot@mastoxiv.page
2025-09-25 09:49:02

Sparse Regularization by Smooth Non-separable Non-convex Penalty Function Based on Ultra-discretization Formula
Natsuki Akaishi, Koki Yamada, Kohei Yatabe
arxiv.org/abs/2509.19886

@arXiv_mathOC_bot@mastoxiv.page
2025-08-22 08:05:11

Differential Stochastic Variational Inequalities with Parametric Optimization
Xiaojun Chen, Jian Guo, Guan Wang
arxiv.org/abs/2508.15241 ar…

@arXiv_mathOC_bot@mastoxiv.page
2025-08-25 08:31:40

A unified vertical alignment and earthwork model in road design with a new convex optimization model for road networks
Sayan Sadhukhan, Warren Hare, Yves Lucet
arxiv.org/abs/2508.15953

@arXiv_mathOC_bot@mastoxiv.page
2025-08-21 08:19:40

Sequential Convex Programming with Filtering-Based Warm-Starting for Continuous-Time Multiagent Quadrotor Trajectory Optimization
Minsen Yuan, Yue Yu
arxiv.org/abs/2508.14299

@arXiv_mathOC_bot@mastoxiv.page
2025-09-29 09:16:07

A Riemannian Accelerated Proximal Gradient Method
Shuailing Feng, Yuhang Jiang, Wen Huang, Shihui Ying
arxiv.org/abs/2509.21897 arxiv.org/p…

@arXiv_mathOC_bot@mastoxiv.page
2025-09-23 09:44:40

Minimization of Nonsmooth Weakly Convex Function over Prox-regular Set for Robust Low-rank Matrix Recovery
Keita Kume, Isao Yamada
arxiv.org/abs/2509.17549

@arXiv_mathOC_bot@mastoxiv.page
2025-09-16 11:52:27

SSNCVX: A primal-dual semismooth Newton method for convex composite optimization problem
Zhanwang Deng, Tao Wei, Jirui Ma, Zaiwen Wen
arxiv.org/abs/2509.11995

@arXiv_mathOC_bot@mastoxiv.page
2025-10-02 08:05:10

The Non-Attainment Phenomenon in Robust SOCPs
Vinh Nguyen
arxiv.org/abs/2510.00318 arxiv.org/pdf/2510.00318

@arXiv_mathOC_bot@mastoxiv.page
2025-09-25 08:48:02

Inexact and Stochastic Gradient Optimization Algorithms with Inertia and Hessian Driven Damping
Harsh Choudhary, Jalal Fadili, Vyacheslav Kungurtsev
arxiv.org/abs/2509.19561

@arXiv_mathOC_bot@mastoxiv.page
2025-08-26 10:52:46

Policy Optimization in the Linear Quadratic Gaussian Problem: A Frequency Domain Perspective
Haoran Li, Xun Li, Yuan-Hua Ni, Xuebo Zhang
arxiv.org/abs/2508.17252

@arXiv_mathOC_bot@mastoxiv.page
2025-09-30 12:10:51

Bundle Network: a Machine Learning-Based Bundle Method
Francesca Demelas, Joseph Le Roux, Antonio Frangioni, Mathieu Lacroix, Emiliano Traversi, Roberto Wolfler Calvo
arxiv.org/abs/2509.24736

@arXiv_mathOC_bot@mastoxiv.page
2025-09-17 09:55:40

Consensus-Based Optimization Beyond Finite-Time Analysis
Pascal Bianchi (IP Paris, S2A), Alexandru-Radu Dragomir (IP Paris, S2A), Victor Priser (IP Paris, S2A)
arxiv.org/abs/2509.12907

@arXiv_mathOC_bot@mastoxiv.page
2025-09-16 11:49:07

Fractional-Order Nesterov Dynamics for Convex Optimization
Tumelo Ranoto
arxiv.org/abs/2509.11987 arxiv.org/pdf/2509.11987

@arXiv_mathOC_bot@mastoxiv.page
2025-08-22 08:32:20

A smoothed proximal trust-region algorithm for nonconvex optimization problems with $L^p$-regularization, $p\in (0,1)$
Harbir Antil, Anna Lentz
arxiv.org/abs/2508.15446