2025-10-14 11:46:18
An Efficient Solution Method for Solving Convex Separable Quadratic Optimization Problems
Shaoze Li, Junhao Wu, Cheng Lu, Zhibin Deng, Shu-Cherng Fang
https://arxiv.org/abs/2510.11554
Computing Safe Control Inputs using Discrete-Time Matrix Control Barrier Functions via Convex Optimization
James Usevitch, Juan Augusto Paredes Salazar, Ankit Goel
https://arxiv.org/abs/2510.09925
Accelerated stochastic first-order method for convex optimization under heavy-tailed noise
Chuan He, Zhaosong Lu
https://arxiv.org/abs/2510.11676 https://a…
MATStruct: High-Quality Medial Mesh Computation via Structure-aware Variational Optimization
Ningna Wang, Rui Xu, Yibo Yin, Zichun Zhong, Taku Komura, Wenping Wang, Xiaohu Guo
https://arxiv.org/abs/2510.10751
A Modular Algorithm for Non-Stationary Online Convex-Concave Optimization
Qing-xin Meng, Xia Lei, Jian-wei Liu
https://arxiv.org/abs/2509.07901 https://arx…
Global Solutions to Non-Convex Functional Constrained Problems with Hidden Convexity
Ilyas Fatkhullin, Niao He, Guanghui Lan, Florian Wolf
https://arxiv.org/abs/2511.10626 https://arxiv.org/pdf/2511.10626 https://arxiv.org/html/2511.10626
arXiv:2511.10626v1 Announce Type: new
Abstract: Constrained non-convex optimization is fundamentally challenging, as global solutions are generally intractable and constraint qualifications may not hold. However, in many applications, including safe policy optimization in control and reinforcement learning, such problems possess hidden convexity, meaning they can be reformulated as convex programs via a nonlinear invertible transformation. Typically, such transformations are implicit or unknown, making a direct link with the convex program impossible. On the other hand, (sub-)gradients with respect to the original variables are often accessible or can be easily estimated, which motivates algorithms that operate directly in the original (non-convex) problem space using standard (sub-)gradient oracles. In this work, we develop the first algorithms to provably solve such non-convex problems to global minima. First, using a modified inexact proximal point method, we establish global last-iterate convergence guarantees with $\widetilde{\mathcal{O}}(\varepsilon^{-3})$ oracle complexity in the non-smooth setting. For smooth problems, we propose a new bundle-level type method based on linearly constrained quadratic subproblems, improving the oracle complexity to $\widetilde{\mathcal{O}}(\varepsilon^{-1})$. Surprisingly, despite non-convexity, our methodology does not require any constraint qualifications, can handle hidden convex equality constraints, and achieves complexities matching those for solving unconstrained hidden convex optimization.
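To make the setting concrete, here is a minimal sketch (not the paper's algorithm or problem) of an inexact proximal point loop run directly in the original variable of a toy hidden-convex problem: with the invertible transform u(x) = x + x^3, the objective below is non-convex in x but convex in u, and only gradients in x are used.

```python
import numpy as np

# Toy hidden-convex problem: with the invertible transform u(x) = x + x**3,
# f(x) = (u(x) - 2)**2 is non-convex in x (e.g., f''(0.5) < 0) but convex in u.
# Its unique stationary point x = 1 is the global minimizer.
f = lambda x: (x + x**3 - 2.0)**2
grad_f = lambda x: 2.0 * (x + x**3 - 2.0) * (1.0 + 3.0 * x**2)

def inexact_prox_step(xk, lam=0.5, inner_iters=200, eta=1e-3):
    # Inexactly solve min_y f(y) + ||y - xk||^2 / (2*lam) by gradient descent.
    y = xk
    for _ in range(inner_iters):
        y -= eta * (grad_f(y) + (y - xk) / lam)
    return y

x = -2.0
for k in range(60):
    x = inexact_prox_step(x)
print(f"x = {x:.4f}, f(x) = {f(x):.2e}")   # x approaches the global minimum 1
```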
Verification of Sequential Convex Programming for Parametric Non-convex Optimization
Rajiv Sambharya, Nikolai Matni, George Pappas
https://arxiv.org/abs/2511.10622 https://arxiv.org/pdf/2511.10622 https://arxiv.org/html/2511.10622
arXiv:2511.10622v1 Announce Type: new
Abstract: We introduce a verification framework to exactly verify the worst-case performance of sequential convex programming (SCP) algorithms for parametric non-convex optimization. The verification problem is formulated as an optimization problem that maximizes a performance metric (e.g., the suboptimality after a given number of iterations) over parameters constrained to lie in a parameter set and over iterate sequences consistent with the SCP update rules. Our framework is general, extending the notion of SCP to include both conventional variants such as trust-region, convex-concave, and prox-linear methods, and algorithms that combine convex subproblems with rounding steps, as in relaxing and rounding schemes. Unlike existing analyses that may only provide local guarantees under limited conditions, our framework delivers global worst-case guarantees, quantifying how well an SCP algorithm performs across all problem instances in the specified family. Applications in control, signal processing, and operations research demonstrate that our framework provides, for the first time, global worst-case guarantees for SCP algorithms in the parametric setting.
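As a concrete illustration of one SCP variant covered by such a framework, here is a minimal prox-linear iteration on a hypothetical non-convex least-squares instance; the problem data and the penalty rho are illustrative assumptions, and the verification step itself (maximizing suboptimality over the parameter family) is not shown.

```python
import numpy as np

# Prox-linear SCP on min_x 0.5*||c(x)||^2 with a smooth non-convex residual
# c(x) = A x + 0.1*sin(A x) - b; the vector b plays the role of the problem
# parameter indexing the family.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))
b = rng.standard_normal(8)            # one instance from the parametric family
rho = 1.0                             # proximal penalty of the subproblem

c = lambda x: A @ x + 0.1 * np.sin(A @ x) - b
J = lambda x: (1.0 + 0.1 * np.cos(A @ x))[:, None] * A   # Jacobian of c

x = np.zeros(5)
for k in range(20):
    # convex subproblem min_d 0.5*||c(x) + J(x) d||^2 + (rho/2)*||d||^2,
    # solved in closed form via its normal equations
    Jk, ck = J(x), c(x)
    d = np.linalg.solve(Jk.T @ Jk + rho * np.eye(5), -Jk.T @ ck)
    x = x + d
print(np.linalg.norm(c(x)))           # residual after 20 SCP steps
```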
Neuro-inspired automated lens design
Yao Gao, Lei Sun, Shaohua Gao, Qi Jiang, Kailun Yang, Weijian Hu, Xiaolong Qian, Wenyong Li, Luc Van Gool, Kaiwei Wang
https://arxiv.org/abs/2510.09979
Efficient Convex Optimization for Bosonic State Tomography
Shengyong Li, Yanjin Yue, Ying Hu, Rui-Yang Gong, Qianchuan Zhao, Zhihui Peng, Pengtao Song, Zeliang Xiang, Jing Zhang
https://arxiv.org/abs/2509.06305
Separable convex optimization over indegree polytopes
Nóra A. Borsik, Péter Madarasi
https://arxiv.org/abs/2509.06182 https://arxiv.org/pdf/250…
Linear Convergence of a Unified Primal--Dual Algorithm for Convex--Concave Saddle Point Problems with Quadratic Growth
Cody Melcher, Afrooz Jalilzadeh, Erfan Yazdandoost Hamedani
https://arxiv.org/abs/2510.11990
Convergence analysis of inexact MBA method for constrained upper-$\mathcal{C}^2$ optimization problems
Ruyu Liu, Shaohua Pan
https://arxiv.org/abs/2511.09940 https://arxiv.org/pdf/2511.09940 https://arxiv.org/html/2511.09940
arXiv:2511.09940v1 Announce Type: new
Abstract: This paper concerns a class of constrained optimization problems in which the objective and constraint functions are both upper-$\mathcal{C}^2$. For such nonconvex and nonsmooth optimization problems, we develop an inexact moving balls approximation (MBA) method with a workable inexactness criterion for solving the subproblems. By leveraging a global error bound for the strongly convex program associated with parametric optimization problems, we establish the full convergence of the iterate sequence under the partial bounded multiplier property (BMP) and the Kurdyka-Łojasiewicz (KL) property of the constructed potential function, and obtain local convergence rates for the iterate and objective value sequences if the potential function satisfies the KL property of exponent $q\in[1/2,1)$. A verifiable condition is also provided to check whether the potential function satisfies the KL property of exponent $q\in[1/2,1)$ at a given critical point. To the best of our knowledge, this is the first implementable inexact MBA method with a full convergence certificate for constrained nonconvex and nonsmooth optimization problems.
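For orientation, the following sketches a single exact MBA step in the smooth special case, assuming curvature bounds L0 and L1 and the cvxpy package; the toy objective and constraint are illustrative, and the paper's inexactness criterion for the subproblem solves is not reproduced.

```python
import numpy as np
import cvxpy as cp

# One MBA step in the smooth case: quadratic upper models ("moving balls")
# for the objective and the constraint at xk, then the strongly convex
# subproblem over those models.
f0 = lambda x: np.sum(np.cos(x)) + 0.5 * x @ x      # toy objective
g0 = lambda x: -np.sin(x) + x                       # its gradient
f1 = lambda x: x @ x - 1.0                          # toy constraint f1 <= 0
g1 = lambda x: 2.0 * x
L0, L1 = 2.0, 2.0            # valid Lipschitz bounds for the two gradients

def mba_step(xk):
    x = cp.Variable(xk.size)
    m0 = f0(xk) + g0(xk) @ (x - xk) + (L0 / 2) * cp.sum_squares(x - xk)
    m1 = f1(xk) + g1(xk) @ (x - xk) + (L1 / 2) * cp.sum_squares(x - xk)
    cp.Problem(cp.Minimize(m0), [m1 <= 0]).solve()
    return x.value

xk = np.array([0.9, -0.3])   # feasible start; upper models keep iterates feasible
for _ in range(30):
    xk = mba_step(xk)
print(xk, f0(xk), f1(xk))
```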
Some Applications and Limitations of Convex Optimization Hierarchies for Discrete and Continuous Optimization Problems
Mrinalkanti Ghosh
https://arxiv.org/abs/2508.21327 https:/…
A quantum analogue of convex optimization
Eunou Lee
https://arxiv.org/abs/2510.02151 https://arxiv.org/pdf/2510.02151…
S-D-RSM: Stochastic Distributed Regularized Splitting Method for Large-Scale Convex Optimization Problems
Maoran Wang, Xingju Cai, Yongxin Chen
https://arxiv.org/abs/2511.10133 https://arxiv.org/pdf/2511.10133 https://arxiv.org/html/2511.10133
arXiv:2511.10133v1 Announce Type: new
Abstract: This paper investigates large-scale distributed composite convex optimization problems, motivated by a broad range of applications including multi-agent systems, federated learning, smart grids, wireless sensor networks, and compressed sensing. Stochastic gradient descent (SGD) and its variants are commonly employed to solve such problems. However, existing algorithms often rely on vanishing step sizes, strong convexity assumptions, or entail substantial computational overhead to ensure convergence or obtain favorable complexity. To bridge the gap between theory and practice, we integrate consensus optimization and operator splitting techniques (see Problem Reformulation) to develop a novel stochastic splitting algorithm, termed the \emph{stochastic distributed regularized splitting method} (S-D-RSM). In practice, S-D-RSM performs parallel updates of proximal mappings and gradient information for only a randomly selected subset of agents at each iteration. By introducing regularization terms, it effectively mitigates consensus discrepancies among distributed nodes. In contrast to conventional stochastic methods, our theoretical analysis establishes that S-D-RSM achieves global convergence without requiring diminishing step sizes or strong convexity assumptions. Furthermore, it achieves an iteration complexity of $\mathcal{O}(1/\epsilon)$ with respect to both the objective function value and the consensus error. Numerical experiments show that S-D-RSM achieves up to 2--3$\times$ speedup compared to state-of-the-art baselines, while maintaining comparable or better accuracy. These results not only validate the algorithm's theoretical guarantees but also demonstrate its effectiveness in practical tasks such as compressed sensing and empirical risk minimization.
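The abstract does not spell out the update rule, so the sketch below is only a generic loop with the ingredients it names: a randomly sampled subset of agents, a regularization term pulling local variables toward consensus, and a fixed step size. All problem data and parameters are illustrative assumptions, not the paper's S-D-RSM update.

```python
import numpy as np

# Generic stochastic regularized-splitting loop (illustrative only): n agents
# with local least-squares losses; each round a random subset takes a
# consensus-regularized proximal-gradient step, then the average is updated.
rng = np.random.default_rng(1)
n, d = 10, 5
A = [rng.standard_normal((20, d)) for _ in range(n)]
b = [rng.standard_normal(20) for _ in range(n)]
grad = lambda i, x: A[i].T @ (A[i] @ x - b[i]) / 20

x_loc = np.zeros((n, d))     # local variables, one per agent
z = np.zeros(d)              # consensus variable
step, reg = 0.05, 1.0        # fixed step size and consensus regularization

for k in range(500):
    S = rng.choice(n, size=3, replace=False)        # sampled agents
    for i in S:
        # closed-form minimizer of
        # <grad, x> + (reg/2)||x - z||^2 + ||x - x_i||^2 / (2*step)
        x_loc[i] = (x_loc[i] - step * grad(i, x_loc[i]) + step * reg * z) \
                   / (1.0 + step * reg)
    z = x_loc.mean(axis=0)                          # consensus update
print(np.linalg.norm(x_loc - z, axis=1).max())      # consensus error
```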
Low-Rank Regularized Convex-Non-Convex Problems for Image Segmentation or Completion
Mohamed El Guide, Anas El Hachimi, Khalide Jbilou, Lothar Reichel
https://arxiv.org/abs/2508.21765
Locally Linear Convergence for Nonsmooth Convex Optimization via Coupled Smoothing and Momentum
Reza Rahimi Baghbadorani, Sergio Grammatico, Peyman Mohajerin Esfahani
https://arxiv.org/abs/2511.10239 https://arxiv.org/pdf/2511.10239 https://arxiv.org/html/2511.10239
arXiv:2511.10239v1 Announce Type: new
Abstract: We propose an adaptive accelerated smoothing technique for a nonsmooth convex optimization problem in which the smoothing update rule is coupled with the momentum parameter. We also extend the setting to the case where the objective function is the sum of two nonsmooth functions. With regard to convergence rate, we provide global sublinear convergence guarantees of $O(1/k)$, which is known to be provably optimal for the studied class of functions, along with a local linear rate if the nonsmooth term fulfills a so-called local strong convexity condition. We validate the performance of our algorithm on several problem classes, including regression with the $\ell_1$-norm (the Lasso problem), sparse semidefinite programming (the MaxCut problem), nuclear norm minimization with an application in model-free fault diagnosis, and $\ell_1$-regularized model predictive control, to showcase the benefits of the coupling. An interesting observation is that although our global convergence result guarantees $O(1/k)$ convergence, we consistently observe a practical transient convergence rate of $O(1/k^2)$, followed by asymptotic linear convergence as anticipated by the theoretical result. This two-phase behavior can also be explained in view of the proposed smoothing rule.
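A minimal sketch of the general idea, assuming Huber smoothing of the $\ell_1$ term in a Lasso objective with a smoothing parameter that shrinks along the iterations; the paper's specific rule coupling the smoothing update to the momentum parameter is not reproduced.

```python
import numpy as np

# Accelerated gradient descent on a Huber-smoothed Lasso objective,
# F_mu(x) = 0.5||Ax - b||^2 + lam * Huber_mu(x), with mu shrinking over time.
rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
lam = 0.1
L_A = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data term

def grad_smooth(x, mu):
    # the gradient of the Huber term is clip(x/mu, -1, 1), elementwise
    return A.T @ (A @ x - b) + lam * np.clip(x / mu, -1.0, 1.0)

x = y = np.zeros(100)
t = 1.0
for k in range(1, 300):
    mu = 1.0 / k                         # smoothing shrinks along the run
    eta = 1.0 / (L_A + lam / mu)         # step size valid for the mu-surrogate
    x_new = y - eta * grad_smooth(y, mu)
    t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov momentum
    x, t = x_new, t_new
print(0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum())
```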
Rate Maximization for UAV-assisted ISAC System with Fluid Antennas
Xingtao Yang, Zhenghe Guo, Siyun Liang, Zhaohui Yang, Chen Zhu, Zhaoyang Zhang
https://arxiv.org/abs/2510.07668
Post-disaster Max-Min Rate Optimization for Multi-UAV RSMA Network in Obstacle Environments
Qingyang Wang, Zhuohui Yao, Wenchi Cheng, Xiao Zheng
https://arxiv.org/abs/2509.23908
A Continuous Energy Ising Machine Leveraging Difference-of-Convex Programming
Debraj Banerjee, Santanu Mahapatra, Kunal Narayan Chaudhury
https://arxiv.org/abs/2509.01928 https:…
CoNeT-GIANT: A compressed Newton-type fully distributed optimization algorithm
Souvik Das, Subhrakanti Dey
https://arxiv.org/abs/2510.08806 https://arxiv.o…
Self-concordant Schr\"odinger operators: spectral gaps and optimization without condition numbers
Sander Gribling, Simon Apers, Harold Nieuwboer, Michael Walter
https://arxiv.org/abs/2510.06115
On the Estimation of Multinomial Logit and Nested Logit Models: A Conic Optimization Approach
Hoang Giang Pham, Tien Mai, Minh Ha Hoang
https://arxiv.org/abs/2509.01562 https://…
Integral Online Algorithms for Set Cover and Load Balancing with Convex Objectives
Thomas Kesselheim, Marco Molinaro, Kalen Patton, Sahil Singla
https://arxiv.org/abs/2508.18383
Memory Optimization for Convex Hull Support Point Queries
Michael Greer
https://arxiv.org/abs/2509.03753 https://arxiv.org/pdf/2509.03753
Neural Optimal Transport Meets Multivariate Conformal Prediction
Vladimir Kondratyev, Alexander Fishkov, Nikita Kotelevskii, Mahmoud Hegazy, Remi Flamary, Maxim Panov, Eric Moulines
https://arxiv.org/abs/2509.25444
Measuring dissimilarity between convex cones by means of max-min angles
Welington de Oliveira, Valentina Sessa, David Sossa
https://arxiv.org/abs/2511.10483 https://arxiv.org/pdf/2511.10483 https://arxiv.org/html/2511.10483
arXiv:2511.10483v1 Announce Type: new
Abstract: This work introduces a novel dissimilarity measure between two convex cones, based on the max-min angle between them. We demonstrate that this measure is closely related to the Pompeiu-Hausdorff distance, a well-established metric for comparing compact sets. Furthermore, we examine cone configurations where the measure admits simplified or analytic forms. For the specific case of polyhedral cones, a nonconvex cutting-plane method is deployed to compute, at least approximately, the measure between them. Our approach builds on a tailored version of Kelley's cutting-plane algorithm, which involves solving a challenging master program per iteration. When this master program is solved locally, our method yields an angle that satisfies certain necessary optimality conditions of the underlying nonconvex optimization problem that defines the dissimilarity measure between the cones. As an application of the proposed mathematical and algorithmic framework, we address the image-set classification task under limited-data conditions, which falls within the scope of the \emph{Few-Shot Learning} paradigm. In this context, image sets belonging to the same class are modeled as polyhedral cones, and our dissimilarity measure proves useful for assessing whether two image sets belong to the same class.
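For intuition, the sketch below evaluates a max-min angle heuristically for polyhedral cones C_i = {A_i z : z >= 0}: the minimal angle from a unit vector to a cone comes from the cone projection (a nonnegative least-squares problem), and the outer maximization is restricted to the generators of the first cone, which only lower-bounds the true measure. The paper instead uses a tailored cutting-plane method.

```python
import numpy as np
from scipy.optimize import nnls

# Angle from a unit vector u to the cone C = {A z : z >= 0}: by Moreau's
# decomposition, cos(angle) = ||P_C(u)||, where P_C(u) solves a nonnegative
# least-squares problem. (If the projection is zero the true angle may
# exceed pi/2; this sketch floors it at pi/2.)
def min_angle_to_cone(u, A):
    z, _ = nnls(A, u)                              # projection of u onto C
    return np.arccos(np.clip(np.linalg.norm(A @ z), 0.0, 1.0))

def maxmin_angle(A1, A2):
    # maximize only over the (normalized) generators of C1: a lower bound
    # on the true max-min angle over all of C1
    return max(min_angle_to_cone(g / np.linalg.norm(g), A2) for g in A1.T)

rng = np.random.default_rng(3)
A1 = rng.standard_normal((4, 6))                   # generators as columns
A2 = rng.standard_normal((4, 6))
print(np.degrees(maxmin_angle(A1, A2)))
```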
dHPR: A Distributed Halpern Peaceman--Rachford Method for Non-smooth Distributed Optimization Problems
Zhangcheng Feng, Defeng Sun, Yancheng Yuan, Guojun Zhang
https://arxiv.org/abs/2511.10069 https://arxiv.org/pdf/2511.10069 https://arxiv.org/html/2511.10069
arXiv:2511.10069v1 Announce Type: new
Abstract: This paper introduces the distributed Halpern Peaceman--Rachford (dHPR) method, an efficient algorithm for solving distributed convex composite optimization problems with non-smooth objectives, which achieves a non-ergodic $O(1/k)$ iteration complexity with respect to the Karush--Kuhn--Tucker residual. By leveraging the symmetric Gauss--Seidel decomposition, the dHPR effectively decouples the linear operators in the objective functions and consensus constraints while maintaining parallelizability and avoiding additional large proximal terms, leading to a decentralized implementation with provably fast convergence. The superior performance of dHPR is demonstrated through comprehensive numerical experiments on distributed LASSO, group LASSO, and $L_1$-regularized logistic regression problems.
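As a single-machine illustration of the Halpern-anchored Peaceman--Rachford iteration underlying dHPR (without the symmetric Gauss--Seidel decomposition or the distributed consensus structure), here is a sketch on a toy LASSO problem; all data and parameters are assumptions.

```python
import numpy as np

# Halpern-anchored Peaceman--Rachford on min 0.5||Ax - b||^2 + lam||x||_1.
rng = np.random.default_rng(4)
A = rng.standard_normal((30, 60))
b = rng.standard_normal(30)
lam, gam = 0.1, 1.0

M = np.linalg.inv(np.eye(60) + gam * A.T @ A)      # prox of the smooth term
prox_f = lambda z: M @ (z + gam * A.T @ b)
prox_g = lambda z: np.sign(z) * np.maximum(np.abs(z) - gam * lam, 0.0)

z0 = z = np.zeros(60)
for k in range(300):
    w = 2.0 * prox_f(z) - z                        # reflected resolvent of f
    Tz = 2.0 * prox_g(w) - w                       # Peaceman--Rachford operator
    beta = 1.0 / (k + 2)                           # Halpern anchoring weight
    z = beta * z0 + (1.0 - beta) * Tz
x = prox_f(z)                                      # recover the primal iterate
print(0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum())
```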
Bridging the Prediction Error Method and Subspace Identification: A Weighted Null Space Fitting Method
Jiabao He, S. Joe Qin, Håkan Hjalmarsson
https://arxiv.org/abs/2510.02529
On the Strength of Linear Relaxations in Ordered Optimization
Víctor Blanco, Diego Laborda, Miguel Martínez-Antón
https://arxiv.org/abs/2510.09166 https://
Global Optimization via Softmin Energy Minimization
Andrea Agazzi, Vittorio Carlei, Marco Romito, Samuele Saviozzi
https://arxiv.org/abs/2509.17815 https://
General formulation of an analytic, Lipschitz continuous control allocation for thrust-vectored controlled rigid-bodies
Frank Mukwege, Tam Willy Nguyen, Emanuele Garone
https://arxiv.org/abs/2510.08119
Crosslisted article(s) found for cs.CG. https://arxiv.org/list/cs.CG/new
[1/1]:
- Memory Optimization for Convex Hull Support Point Queries
Michael Greer
https://…
Pinching Antenna Systems (PASS) for Cell-Free Communications
Haochen Li
https://arxiv.org/abs/2510.03628 https://arxiv.org/pdf/2510.03628
Quantum Alternating Direction Method of Multipliers for Semidefinite Programming
Hantao Nie, Dong An, Zaiwen Wen
https://arxiv.org/abs/2510.10056 https://a…
Heuristic Bundle Upper Bound Based Polyhedral Bundle Method for Semidefinite Programming
Zilong Cui, Ran Gu
https://arxiv.org/abs/2510.12374 https://arxiv.…
An Inexact Proximal Framework for Nonsmooth Riemannian Difference-of-Convex Optimization
Bo Jiang, Meng Xu, Xingju Cai, Ya-Feng Liu
https://arxiv.org/abs/2509.08561 https://
Convexity of Optimization Curves: Local Sharp Thresholds, Robustness Impossibility, and New Counterexamples
Le Duc Hieu
https://arxiv.org/abs/2509.08954 https://
A preconditioned third-order implicit-explicit algorithm with a difference of varying convex functions and extrapolation
Kelin Wu, Hongpeng Sun
https://arxiv.org/abs/2509.09391 …
On Spectral Learning for Odeco Tensors: Perturbation, Initialization, and Algorithms
Arnab Auddy, Ming Yuan
https://arxiv.org/abs/2509.25126 https://arxiv.…
Replaced article(s) found for math.OC. https://arxiv.org/list/math.OC/new
[1/1]:
- A robust BFGS algorithm for unconstrained nonlinear optimization problems
Yaguang Yang
https://arxiv.org/abs/1212.5929
- Quantum computing and the stable set problem
Aljaž Krpan, Janez Povh, Dunja Pucher
https://arxiv.org/abs/2405.12845 https://mastoxiv.page/@arXiv_mathOC_bot/112483516437815686
- Mean Field Game with Reflected Jump Diffusion Dynamics: A Linear Programming Approach
Zongxia Liang, Xiang Yu, Keyu Zhang
https://arxiv.org/abs/2508.20388 https://mastoxiv.page/@arXiv_mathOC_bot/115111048711698998
- Differential Dynamic Programming for the Optimal Control Problem with an Ellipsoidal Target Set a...
Sungjun Eom, Gyunghoon Park
https://arxiv.org/abs/2509.07546 https://mastoxiv.page/@arXiv_mathOC_bot/115179281556444440
- On the Moreau envelope properties of weakly convex functions
Marien Renaud, Arthur Leclaire, Nicolas Papadakis
https://arxiv.org/abs/2509.13960 https://mastoxiv.page/@arXiv_mathOC_bot/115224514482363803
- Automated algorithm design via Nevanlinna-Pick interpolation
Ibrahim K. Ozaslan, Tryphon T. Georgiou, Mihailo R. Jovanovic
https://arxiv.org/abs/2509.21416 https://mastoxiv.page/@arXiv_mathOC_bot/115286533597711930
- Optimal Control of a Bioeconomic Crop-Energy System with Energy Reinvestment
Othman Cherkaoui Dekkaki
https://arxiv.org/abs/2510.11381 https://mastoxiv.page/@arXiv_mathOC_bot/115372322896073250
- Point Convergence Analysis of the Accelerated Gradient Method for Multiobjective Optimization: Co...
Yingdong Yin
https://arxiv.org/abs/2510.26382 https://mastoxiv.page/@arXiv_mathOC_bot/115468018035252078
- History-Aware Adaptive High-Order Tensor Regularization
Chang He, Bo Jiang, Yuntian Jiang, Chuwen Zhang, Shuzhong Zhang
https://arxiv.org/abs/2511.05788
- Equivalence of entropy solutions and gradient flows for pressureless 1D Euler systems
José Antonio Carrillo, Sondre Tesdal Galtung
https://arxiv.org/abs/2312.04932 https://mastoxiv.page/@arXiv_mathAP_bot/111560077272113052
- Kernel Modelling of Fading Memory Systems
Yongkang Huo, Thomas Chaffey, Rodolphe Sepulchre
https://arxiv.org/abs/2403.11945 https://mastoxiv.page/@arXiv_eessSY_bot/112121123836064435
- The Maximum Theoretical Ground Speed of the Wheeled Vehicle
Altay Zhakatayev, Mukatai Nemerebayev
https://arxiv.org/abs/2502.15341 https://mastoxiv.page/@arXiv_physicsclassph_bot/114057765769441123
- Hessian stability and convergence rates for entropic and Sinkhorn potentials via semiconcavity
Giacomo Greco, Luca Tamanini
https://arxiv.org/abs/2504.11133 https://mastoxiv.page/@arXiv_mathPR_bot/114346453424694503
- Optimizing the ground state energy of the three-dimensional magnetic Dirichlet Laplacian with con...
Matthias Baur
https://arxiv.org/abs/2504.21597 https://mastoxiv.page/@arXiv_mathph_bot/114431404740241516
- A localized consensus-based sampling algorithm
Arne Bouillon, Alexander Bodard, Panagiotis Patrinos, Dirk Nuyens, Giovanni Samaey
https://arxiv.org/abs/2505.24861 https://mastoxiv.page/@arXiv_mathNA_bot/114612580684567066
- A Novel Sliced Fused Gromov-Wasserstein Distance
Moritz Piening, Robert Beinert
https://arxiv.org/abs/2508.02364 https://mastoxiv.page/@arXiv_csLG_bot/114976243138728278
- Minimal Regret Walras Equilibria for Combinatorial Markets via Duality, Integrality, and Sensitiv...
Alo\"is Duguet, Tobias Harks, Martin Schmidt, Julian Schwarz
https://arxiv.org/abs/2511.09021 https://mastoxiv.page/@arXiv_csGT_bot/115541243299714775
User Manual for Model-based Imaging Inverse Problem
Xiaodong Wang
https://arxiv.org/abs/2509.01572 https://arxiv.org/pdf/2509.01572
GaussianPSL: A novel framework based on Gaussian Splatting for exploring the Pareto frontier in multi-criteria optimization
Phuong Mai Dinh, Van-Nam Huynh
https://arxiv.org/abs/2509.17889
A Duality Theorem for Classical-Quantum States with Applications to Complete Relational Program Logics
Gilles Barthe, Minbo Gao, Jam Kabeer Ali Khan, Matthijs Muis, Ivan Renison, Keiya Sakabe, Michael Walter, Yingte Xu, Li Zhou
https://arxiv.org/abs/2510.07051
Communication over LQG Control Systems: A Convex Optimization Approach to Capacity
Aharon Rips, Oron Sabag
https://arxiv.org/abs/2509.17002 https://arxiv.o…
First-order SDSOS-convex semi-algebraic optimization and exact SOCP relaxations
Chengmiao Yang, Liguo Jiao, Jae Hyoung Lee
https://arxiv.org/abs/2509.07418 https://
Regularization in Data-driven Predictive Control: A Convex Relaxation Perspective
Xu Shang, Yang Zheng
https://arxiv.org/abs/2509.09027 https://arxiv.org/p…
Inertial accelerated primal-dual algorithms for non-smooth convex optimization problems with linear equality constraints
Huan Zhang, Xiangkai Sun, Shengjie Li, Kok Lay Teo
https://arxiv.org/abs/2509.07306
Hidden Convexity in Active Learning: A Convexified Online Input Design for ARX Systems
Nicolas Chatzikiriakos, Bowen Song, Philipp Rank, Andrea Iannelli
https://arxiv.org/abs/2509.03257
Re$^3$MCN: Cubic Newton Variance Reduction Momentum Quadratic Regularization for Finite-sum Non-convex Problems
Dmitry Pasechnyuk-Vilensky, Dmitry Kamzolov, Martin Takáč
https://arxiv.org/abs/2510.08714
Fast Convergence Rates for Subsampled Natural Gradient Algorithms on Quadratic Model Problems
Gil Goldshlager, Jiang Hu, Lin Lin
https://arxiv.org/abs/2508.21022 https://…
Halpern Acceleration of the Inexact Proximal Point Method of Rockafellar
Liwei Zhang, Fanli Zhuang, Ning Zhang
https://arxiv.org/abs/2511.10372 https://arxiv.org/pdf/2511.10372 https://arxiv.org/html/2511.10372
arXiv:2511.10372v1 Announce Type: new
Abstract: This paper investigates a Halpern acceleration of the inexact proximal point method for solving maximal monotone inclusion problems in Hilbert spaces. The proposed Halpern inexact proximal point method (HiPPM) is shown to be globally convergent, and a unified framework is developed to analyze its worst-case convergence rate. Under mild summability conditions on the inexactness tolerances, HiPPM achieves an $\mathcal{O}(1/k^{2})$ rate in terms of the squared fixed-point residual. Furthermore, under an additional mild condition, the method achieves a fast linear convergence rate. Building upon this framework, we further extend the acceleration technique to constrained convex optimization through the augmented Lagrangian formulation. In analogy to Rockafellar's classical results, the resulting accelerated inexact augmented Lagrangian method inherits the convergence rate and complexity guarantees of HiPPM. The analysis thus provides a unified theoretical foundation for accelerated inexact proximal algorithms and their augmented Lagrangian extensions.
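A minimal sketch of the Halpern-anchored inexact resolvent iteration on a toy monotone inclusion 0 in Mx with a linear monotone (non-symmetric) operator; the inexactness is emulated by a synthetic, summably decaying error, in line with the summability conditions the abstract mentions.

```python
import numpy as np

# Halpern-accelerated inexact proximal point method for 0 in M x, where
# M + M^T is PSD (monotone) but M is not symmetric (not a gradient field).
rng = np.random.default_rng(5)
S = rng.standard_normal((20, 20))
M = S @ S.T / 20.0 + (S - S.T)        # symmetric PSD part + skew part
gam = 1.0
R = np.linalg.inv(np.eye(20) + gam * M)            # exact resolvent J_{gam M}

def resolvent_inexact(x, k):
    # emulate inexactness with a summably decaying error term
    return R @ x + rng.standard_normal(20) / (k + 1) ** 2

x0 = x = rng.standard_normal(20)
for k in range(200):
    beta = 1.0 / (k + 2)                           # Halpern anchoring weight
    x = beta * x0 + (1.0 - beta) * resolvent_inexact(x, k)
print(np.linalg.norm(x - R @ x))                   # fixed-point residual -> 0
```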
Sharpness of Minima in Deep Matrix Factorization: Exact Expressions
Anil Kamber, Rahul Parhi
https://arxiv.org/abs/2509.25783 https://arxiv.org/pdf/2509.25…
Nesterov acceleration for strongly convex-strongly concave bilinear saddle point problems: discrete and continuous-time approaches
Xin He, Ya-Ping Fang
https://arxiv.org/abs/2509.08258
Convergence for adaptive resampling of random Fourier features
Xin Huang, Aku Kammonen, Anamika Pandey, Mattias Sandberg, Erik von Schwerin, Anders Szepessy, Raúl Tempone
https://arxiv.org/abs/2509.03151
Universal Representation of Generalized Convex Functions and their Gradients
Moeen Nehzati
https://arxiv.org/abs/2509.04477 https://arxiv.org/pdf/2509.0447…
A Monte Carlo Approach to Nonsmooth Convex Optimization via Proximal Splitting Algorithms
Nicholas Di, Eric C. Chi, Samy Wu Fung
https://arxiv.org/abs/2509.07914 https://…
Decentralized Online Riemannian Optimization Beyond Hadamard Manifolds
Emre Sahinoglu, Shahin Shahrampour
https://arxiv.org/abs/2509.07779 https://arxiv.or…
Replaced article(s) found for math.OC. https://arxiv.org/list/math.OC/new
[1/1]:
- Damped Proximal Augmented Lagrangian Method for weakly-Convex Problems with Convex Constraints
Hari Dahal, Wei Liu, Yangyang Xu
Smooth Quasar-Convex Optimization with Constraints
David Martínez-Rubio
https://arxiv.org/abs/2510.01943 https://arxiv.org/pdf/2510.01943
Long-Time Analysis of Stochastic Heavy Ball Dynamics for Convex Optimization and Monotone Equations
Radu Ioan Bot, Chiara Schindler
https://arxiv.org/abs/2510.02951 https://
A Proximal Descent Method for Minimizing Weakly Convex Optimization
Feng-Yi Liao, Yang Zheng
https://arxiv.org/abs/2509.02804 https://arxiv.org/pdf/2509.02…
The Trajectory Bundle Method: Unifying Sequential-Convex Programming and Sampling-Based Trajectory Optimization
Kevin Tracy, John Z. Zhang, Jon Arrizabalaga, Stefan Schaal, Yuval Tassa, Tom Erez, Zachary Manchester
https://arxiv.org/abs/2509.26575
Reinforcement learning for online hyperparameter tuning in convex quadratic programming
Jeremy Bertoncini, Alberto De Marchi, Matthias Gerdts, Simon Gottschalk
https://arxiv.org/abs/2509.07404
Estimating Sequences with Memory for Minimizing Convex Non-smooth Composite Functions
Endrit Dosti, Sergiy A. Vorobyov, Themistoklis Charalambous
https://arxiv.org/abs/2510.02965
Convex Pollution Control of Wastewater Treatment Systems
Joshua Taylor
https://arxiv.org/abs/2510.03918 https://arxiv.org/pdf/2510.03918
Exponential convergence of a distributed divide-and-conquer algorithm for constrained convex optimization on networks
Nazar Emirov, Guohui Song, Qiyu Sun
https://arxiv.org/abs/2510.01511
A primal-dual splitting algorithm with convex combination and larger step sizes for composite monotone inclusion problems
Xiaokai Chang, Junfeng Yang, Jianchao Bai, Jianxiong Cao
https://arxiv.org/abs/2510.00437
ProxSTORM -- A Stochastic Trust-Region Algorithm for Nonsmooth Optimization
Robert J. Baraldi, Aurya Javeed, Drew P. Kouri, Katya Scheinberg
https://arxiv.org/abs/2510.03187 htt…
Convergence, Duality and Well-Posedness in Convex Bilevel Optimization
Khanh-Hung Giang-Tran, Nam Ho-Nguyen, Fatma Kılınç-Karzan, Lingqing Shen
https://arxiv.org/abs/2509.18304
Simplex Frank-Wolfe: Linear Convergence and Its Numerical Efficiency for Convex Optimization over Polytopes
Haoning Wang, Houduo Qi, Liping Zhang
https://arxiv.org/abs/2509.24279
Faster Gradient Methods for Highly-smooth Stochastic Bilevel Optimization
Lesi Chen, Junru Li, Jingzhao Zhang
https://arxiv.org/abs/2509.02937 https://arxi…
Revisit Stochastic Gradient Descent for Strongly Convex Objectives: Tight Uniform-in-Time Bounds
Kang Chen, Yasong Feng, Tianyu Wang
https://arxiv.org/abs/2508.20823 https://
Policy Optimization in Robust Control: Weak Convexity and Subgradient Methods
Yuto Watanabe, Feng-Yi Liao, Yang Zheng
https://arxiv.org/abs/2509.25633 https://
Replaced article(s) found for math.OC. https://arxiv.org/list/math.OC/new
[1/1]:
- Conditions for representation of a function of many arguments as the difference of convex functions
Igor Proudnikov
On the Moreau envelope properties of weakly convex functions
Marien Renaud, Arthur Leclaire, Nicolas Papadakis
https://arxiv.org/abs/2509.13960 https://arx…
Non-Euclidean Broximal Point Method: A Blueprint for Geometry-Aware Optimization
Kaja Gruntkowska, Peter Richtárik
https://arxiv.org/abs/2510.00823 https://
A dynamical formulation of multi-marginal optimal transport
Brendan Pass, Yair Shenfeld
https://arxiv.org/abs/2509.22494 https://arxiv.org/pdf/2509.22494…
An Optimistic Gradient Tracking Method for Distributed Minimax Optimization
Yan Huang, Jinming Xu, Jiming Chen, Karl Henrik Johansson
https://arxiv.org/abs/2508.21431 https://…
Automated algorithm design for convex optimization problems with linear equality constraints
Ibrahim K. Ozaslan, Wuwei Wu, Jie Chen, Tryphon T. Georgiou, Mihailo R. Jovanovic
https://arxiv.org/abs/2509.20746
Duality between polyhedral approximation of value functions and optimal quantization of measures
Abdellah Bulaich Mehamdi, Wim van Ackooij, Luce Brotcorne, Stéphane Gaubert, Quentin Jacquet
https://arxiv.org/abs/2509.04101
First Order Algorithm on an Optimization Problem with Improved Convergence when Problem is Convex
Chee-Khian Sim
https://arxiv.org/abs/2508.13302 https://a…
Provably data-driven projection method for quadratic programming
Anh Tuan Nguyen, Viet Anh Nguyen
https://arxiv.org/abs/2509.04524 https://arxiv.org/pdf/25…
Complexity Bounds for Smooth Convex Multiobjective Optimization
Phillipe R. Sampaio
https://arxiv.org/abs/2509.13550 https://arxiv.org/pdf/2509.13550
Replaced article(s) found for math.OC. https://arxiv.org/list/math.OC/new
[1/2]:
- Gauges and Accelerated Optimization over Smooth and/or Strongly Convex Sets
Ning Liu, Benjamin Grimmer
An Alternating Direction Method of Multipliers for Topology Optimization
Harsh Choudhary, Sven Leyffer, Dominic Yang
https://arxiv.org/abs/2509.19888 https://
Sparse Regularization by Smooth Non-separable Non-convex Penalty Function Based on Ultra-discretization Formula
Natsuki Akaishi, Koki Yamada, Kohei Yatabe
https://arxiv.org/abs/2509.19886
Differential Stochastic Variational Inequalities with Parametric Optimization
Xiaojun Chen, Jian Guo, Guan Wang
https://arxiv.org/abs/2508.15241 https://ar…
A unified vertical alignment and earthwork model in road design with a new convex optimization model for road networks
Sayan Sadhukhan, Warren Hare, Yves Lucet
https://arxiv.org/abs/2508.15953
Sequential Convex Programming with Filtering-Based Warm-Starting for Continuous-Time Multiagent Quadrotor Trajectory Optimization
Minsen Yuan, Yue Yu
https://arxiv.org/abs/2508.14299
A Riemannian Accelerated Proximal Gradient Method
Shuailing Feng, Yuhang Jiang, Wen Huang, Shihui Ying
https://arxiv.org/abs/2509.21897 https://arxiv.org/p…
Minimization of Nonsmooth Weakly Convex Function over Prox-regular Set for Robust Low-rank Matrix Recovery
Keita Kume, Isao Yamada
https://arxiv.org/abs/2509.17549 https://
SSNCVX: A primal-dual semismooth Newton method for convex composite optimization problem
Zhanwang Deng, Tao Wei, Jirui Ma, Zaiwen Wen
https://arxiv.org/abs/2509.11995 https://…
The Non-Attainment Phenomenon in Robust SOCPs
Vinh Nguyen
https://arxiv.org/abs/2510.00318 https://arxiv.org/pdf/2510.00318
Inexact and Stochastic Gradient Optimization Algorithms with Inertia and Hessian Driven Damping
Harsh Choudhary, Jalal Fadili, Vyacheslav Kungurtsev
https://arxiv.org/abs/2509.19561
Policy Optimization in the Linear Quadratic Gaussian Problem: A Frequency Domain Perspective
Haoran Li, Xun Li, Yuan-Hua Ni, Xuebo Zhang
https://arxiv.org/abs/2508.17252 https:/…
Bundle Network: a Machine Learning-Based Bundle Method
Francesca Demelas, Joseph Le Roux, Antonio Frangioni, Mathieu Lacroix, Emiliano Traversi, Roberto Wolfler Calvo
https://arxiv.org/abs/2509.24736
Consensus-Based Optimization Beyond Finite-Time Analysis
Pascal Bianchi (IP Paris, S2A), Alexandru-Radu Dragomir (IP Paris, S2A), Victor Priser (IP Paris, S2A)
https://arxiv.org/abs/2509.12907
Fractional-Order Nesterov Dynamics for Convex Optimization
Tumelo Ranoto
https://arxiv.org/abs/2509.11987 https://arxiv.org/pdf/2509.11987
A smoothed proximal trust-region algorithm for nonconvex optimization problems with $L^p$-regularization, $p\in (0,1)$
Harbir Antil, Anna Lentz
https://arxiv.org/abs/2508.15446 …