Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:51:50

Riccati-ZORO: An efficient algorithm for heuristic online optimization of internal feedback laws in robust and stochastic model predictive control
Florian Messerer, Yunfan Gao, Jonathan Frey, Moritz Diehl
arxiv.org/abs/2511.10473 arxiv.org/pdf/2511.10473 arxiv.org/html/2511.10473
arXiv:2511.10473v1 Announce Type: new
Abstract: We present Riccati-ZORO, an algorithm for tube-based optimal control problems (OCPs). Tube OCPs predict a tube of trajectories in order to capture predictive uncertainty. The tube induces a constraint tightening via additional backoff terms. This backoff can significantly affect the performance, and thus implicitly defines a cost of uncertainty. Optimizing the feedback law used to predict the tube can significantly reduce the backoffs, but its online computation is challenging.
Riccati-ZORO jointly optimizes the nominal trajectory and uncertainty tube based on a heuristic uncertainty cost design. The algorithm alternates between two subproblems: (i) a nominal OCP with fixed backoffs, (ii) an unconstrained tube OCP, which optimizes the feedback gains for a fixed nominal trajectory. For the tube optimization, we propose a cost function informed by the proximity of the nominal trajectory to constraints, prioritizing reduction of the corresponding backoffs. These ideas are developed in detail for ellipsoidal tubes under linear state feedback. In this case, the decomposition into the two subproblems yields a substantial reduction of the computational complexity with respect to the state dimension from $\mathcal{O}(n_x^6)$ to $\mathcal{O}(n_x^3)$, i.e., the complexity of a nominal OCP.
We investigate the algorithm in numerical experiments, and provide two open-source implementations: a prototyping version in CasADi and a high-performance implementation integrated into the acados OCP solver.
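
The full formulation is in the linked preprint; purely as a rough illustration of the alternating two-subproblem structure, here is a toy numpy sketch on a double integrator. The system matrices, the 3-sigma backoff rule, the quadratic-penalty handling of the tightened constraint, and the slack-weighted tube cost are all illustrative assumptions, not the authors' design.

```python
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy double integrator
Bm = np.array([[0.005], [0.1]])
W_cov = 1e-4 * np.eye(2)                 # process-noise covariance
N, x0, x_max = 20, np.array([1.0, 0.0]), 1.05
Q, R = np.diag([10.0, 1.0]), 0.1
e1 = np.array([1.0, 0.0])                # constrained direction (position)

def rollout(u):
    xs, x = [x0], x0
    for k in range(N):
        x = A @ x + Bm[:, 0] * u[k]
        xs.append(x)
    return np.array(xs)

def tube_step(slack):
    # Subproblem (ii): weight the tube cost by the nominal trajectory's
    # proximity to the bound (small slack -> heavy weight on e1), run a
    # stationary Riccati pass for the gain K, then propagate the ellipsoid
    # Sigma+ = (A+BK) Sigma (A+BK)' + W into 3-sigma backoffs.
    Qt = Q + (1.0 / (max(slack, 0.0) + 1e-3)) * np.outer(e1, e1)
    P = Qt.copy()
    for _ in range(100):
        K = -np.linalg.solve(R + Bm.T @ P @ Bm, Bm.T @ P @ A)
        P = Qt + A.T @ P @ (A + Bm @ K)
    Sig, b, Acl = np.zeros((2, 2)), [], A + Bm @ K
    for _ in range(N + 1):
        b.append(3.0 * np.sqrt(Sig[0, 0]))
        Sig = Acl @ Sig @ Acl.T + W_cov
    return np.array(b)

def nominal_cost(u, backoffs):
    # Subproblem (i): nominal OCP with *fixed* backoffs; the tightened
    # state bound is handled by a simple quadratic penalty here.
    xs = rollout(u)
    J = sum(xs[k] @ Q @ xs[k] + R * u[k] ** 2 for k in range(N))
    J += 1e4 * sum(max(0.0, xs[k][0] - (x_max - backoffs[k])) ** 2
                   for k in range(N + 1))
    return J

u, backoffs = np.zeros(N), np.zeros(N + 1)
for it in range(3):                      # outer alternation
    u = minimize(nominal_cost, u, args=(backoffs,), method="Powell").x
    slack = float(np.min(x_max - rollout(u)[:, 0]))
    backoffs = tube_step(slack)
print("max backoff:", backoffs.max(), "min slack:", slack)
```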

@Techmeme@techhub.social
2026-01-09 10:20:57

How Craigslist has stayed relevant for users as a place to find jobs, housing, and personal connections without relying on algorithmic feeds or public profiles (Jennifer Swann/Wired)
wired.com/story/is-craigslist-

Few know the lengths to which the Trump administration is paving the way -- and the part it's playing -- in deregulating a highly regulated industry to ensure that AI data centers have the energy they need to shape the future of America and the world. To say the nuclear people are bullish on AI is an understatement. “I call this not just a partnership but a structural alliance. Atoms for algorithms. Artificial intelligence is not just powered by nuclear ene…

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:37:40

Locally Linear Convergence for Nonsmooth Convex Optimization via Coupled Smoothing and Momentum
Reza Rahimi Baghbadorani, Sergio Grammatico, Peyman Mohajerin Esfahani
arxiv.org/abs/2511.10239 arxiv.org/pdf/2511.10239 arxiv.org/html/2511.10239
arXiv:2511.10239v1 Announce Type: new
Abstract: We propose an adaptive accelerated smoothing technique for a nonsmooth convex optimization problem where the smoothing update rule is coupled with the momentum parameter. We also extend the setting to the case where the objective function is the sum of two nonsmooth functions. With regard to convergence rate, we provide a global sublinear guarantee of O(1/k), which is provably optimal for the studied class of functions, along with a local linear rate if the nonsmooth term fulfills a so-called local strong convexity condition. We validate the performance of our algorithm on several problem classes, including regression with the l_1-norm (the Lasso problem), sparse semidefinite programming (the MaxCut problem), nuclear norm minimization with application to model-free fault diagnosis, and l_1-regularized model predictive control, to showcase the benefits of the coupling. An interesting observation is that although our global result guarantees O(1/k) convergence, we consistently observe a practical transient convergence rate of O(1/k^2), followed by asymptotic linear convergence as anticipated by the theoretical result. This two-phase behavior can also be explained in view of the proposed smoothing rule.
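
As a rough illustration of the coupling idea (not the paper's algorithm), the sketch below runs Nesterov-accelerated gradient descent on a small Lasso instance where the l_1 term is Huber-smoothed and the smoothing parameter mu_k shrinks with the iteration counter. The schedule mu_k = mu_0/(k+1) and the 1/L step-size rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_sparse = rng.standard_normal(100) * (rng.random(100) < 0.1)
b = A @ x_sparse
lam, mu0 = 0.1, 1.0
L_A = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the LS part

def grad_smoothed(x, mu):
    # gradient of 0.5*||Ax-b||^2 + lam * Huber_mu applied to |x_i|
    return A.T @ (A @ x - b) + lam * np.clip(x / mu, -1.0, 1.0)

x = y = np.zeros(100)
t = 1.0
for k in range(500):
    mu = mu0 / (k + 1)                   # smoothing coupled to the iteration
    step = 1.0 / (L_A + lam / mu)        # 1/L for the mu-smoothed objective
    x_new = y - step * grad_smoothed(y, mu)
    t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov extrapolation
    x, t = x_new, t_new
print("objective:", 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x)))
```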

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:58:00

Measuring dissimilarity between convex cones by means of max-min angles
Welington de Oliveira, Valentina Sessa, David Sossa
arxiv.org/abs/2511.10483 arxiv.org/pdf/2511.10483 arxiv.org/html/2511.10483
arXiv:2511.10483v1 Announce Type: new
Abstract: This work introduces a novel dissimilarity measure between two convex cones, based on the max-min angle between them. We demonstrate that this measure is closely related to the Pompeiu-Hausdorff distance, a well-established metric for comparing compact sets. Furthermore, we examine cone configurations where the measure admits simplified or analytic forms. For the specific case of polyhedral cones, a nonconvex cutting-plane method is deployed to compute, at least approximately, the measure between them. Our approach builds on a tailored version of Kelley's cutting-plane algorithm, which involves solving a challenging master program per iteration. When this master program is solved locally, our method yields an angle that satisfies certain necessary optimality conditions of the underlying nonconvex optimization problem that defines the dissimilarity measure. As an application of the proposed mathematical and algorithmic framework, we address the image-set classification task under limited data conditions, a task that falls within the scope of the Few-Shot Learning paradigm. In this context, image sets belonging to the same class are modeled as polyhedral cones, and our dissimilarity measure proves useful for deciding whether two image sets belong to the same class.
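
As a crude stand-in for the quantity itself (the paper computes it with the Kelley-type cutting-plane method, not by sampling), the sketch below estimates a one-sided max-min angle between two polyhedral cones by Monte Carlo: sample unit directions in the first cone and project each onto the second via nonnegative least squares, using the fact that for a unit vector u the cosine of its minimal angle to a convex cone equals the norm of its projection onto that cone. The generators and sample count are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
G1 = np.abs(rng.standard_normal((5, 3)))   # generators of K1 (columns)
G2 = np.abs(rng.standard_normal((5, 4)))   # generators of K2 (columns)

def min_angle_to_cone(u, G):
    # For unit u, cos(min angle to cone(G)) = ||proj_cone(u)||, where the
    # projection is G @ c with c from a nonnegative least-squares problem.
    c, _ = nnls(G, u)
    return np.arccos(np.clip(np.linalg.norm(G @ c), 0.0, 1.0))

worst = 0.0
for _ in range(2000):
    w = rng.random(G1.shape[1])            # random conic combination in K1
    u = G1 @ w
    u /= np.linalg.norm(u)
    worst = max(worst, min_angle_to_cone(u, G2))
print("approx one-sided max-min angle (rad):", worst)
```

A Pompeiu-Hausdorff-style symmetric variant would repeat the sampling with the roles of the two cones swapped and take the larger of the two values.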

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 13:23:10

Replaced article(s) found for math.OC. arxiv.org/list/math.OC/new
[1/1]:
- A robust BFGS algorithm for unconstrained nonlinear optimization problems
Yaguang Yang
arxiv.org/abs/1212.5929
- Quantum computing and the stable set problem
Aljaž Krpan, Janez Povh, Dunja Pucher
arxiv.org/abs/2405.12845 mastoxiv.page/@arXiv_mathOC_bo
- Mean Field Game with Reflected Jump Diffusion Dynamics: A Linear Programming Approach
Zongxia Liang, Xiang Yu, Keyu Zhang
arxiv.org/abs/2508.20388 mastoxiv.page/@arXiv_mathOC_bo
- Differential Dynamic Programming for the Optimal Control Problem with an Ellipsoidal Target Set a...
Sungjun Eom, Gyunghoon Park
arxiv.org/abs/2509.07546 mastoxiv.page/@arXiv_mathOC_bo
- On the Moreau envelope properties of weakly convex functions
Marien Renaud, Arthur Leclaire, Nicolas Papadakis
arxiv.org/abs/2509.13960 mastoxiv.page/@arXiv_mathOC_bo
- Automated algorithm design via Nevanlinna-Pick interpolation
Ibrahim K. Ozaslan, Tryphon T. Georgiou, Mihailo R. Jovanovic
arxiv.org/abs/2509.21416 mastoxiv.page/@arXiv_mathOC_bo
- Optimal Control of a Bioeconomic Crop-Energy System with Energy Reinvestment
Othman Cherkaoui Dekkaki
arxiv.org/abs/2510.11381 mastoxiv.page/@arXiv_mathOC_bo
- Point Convergence Analysis of the Accelerated Gradient Method for Multiobjective Optimization: Co...
Yingdong Yin
arxiv.org/abs/2510.26382 mastoxiv.page/@arXiv_mathOC_bo
- History-Aware Adaptive High-Order Tensor Regularization
Chang He, Bo Jiang, Yuntian Jiang, Chuwen Zhang, Shuzhong Zhang
arxiv.org/abs/2511.05788
- Equivalence of entropy solutions and gradient flows for pressureless 1D Euler systems
José Antonio Carrillo, Sondre Tesdal Galtung
arxiv.org/abs/2312.04932 mastoxiv.page/@arXiv_mathAP_bo
- Kernel Modelling of Fading Memory Systems
Yongkang Huo, Thomas Chaffey, Rodolphe Sepulchre
arxiv.org/abs/2403.11945 mastoxiv.page/@arXiv_eessSY_bo
- The Maximum Theoretical Ground Speed of the Wheeled Vehicle
Altay Zhakatayev, Mukatai Nemerebayev
arxiv.org/abs/2502.15341 mastoxiv.page/@arXiv_physicscl
- Hessian stability and convergence rates for entropic and Sinkhorn potentials via semiconcavity
Giacomo Greco, Luca Tamanini
arxiv.org/abs/2504.11133 mastoxiv.page/@arXiv_mathPR_bo
- Optimizing the ground state energy of the three-dimensional magnetic Dirichlet Laplacian with con...
Matthias Baur
arxiv.org/abs/2504.21597 mastoxiv.page/@arXiv_mathph_bo
- A localized consensus-based sampling algorithm
Arne Bouillon, Alexander Bodard, Panagiotis Patrinos, Dirk Nuyens, Giovanni Samaey
arxiv.org/abs/2505.24861 mastoxiv.page/@arXiv_mathNA_bo
- A Novel Sliced Fused Gromov-Wasserstein Distance
Moritz Piening, Robert Beinert
arxiv.org/abs/2508.02364 mastoxiv.page/@arXiv_csLG_bot/
- Minimal Regret Walras Equilibria for Combinatorial Markets via Duality, Integrality, and Sensitiv...
Alo\"is Duguet, Tobias Harks, Martin Schmidt, Julian Schwarz
arxiv.org/abs/2511.09021 mastoxiv.page/@arXiv_csGT_bot/

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:37:10

S-D-RSM: Stochastic Distributed Regularized Splitting Method for Large-Scale Convex Optimization Problems
Maoran Wang, Xingju Cai, Yongxin Chen
arxiv.org/abs/2511.10133 arxiv.org/pdf/2511.10133 arxiv.org/html/2511.10133
arXiv:2511.10133v1 Announce Type: new
Abstract: This paper investigates large-scale distributed composite convex optimization problems, with motivations from a broad range of applications, including multi-agent systems, federated learning, smart grids, wireless sensor networks, and compressed sensing. Stochastic gradient descent (SGD) and its variants are commonly employed to solve such problems. However, existing algorithms often rely on vanishing step sizes or strong convexity assumptions, or entail substantial computational overhead, to ensure convergence or obtain favorable complexity. To bridge the gap between theory and practice, we integrate consensus optimization and operator splitting techniques (see Problem Reformulation) to develop a novel stochastic splitting algorithm, termed the stochastic distributed regularized splitting method (S-D-RSM). In practice, S-D-RSM performs parallel updates of proximal mappings and gradient information for only a randomly selected subset of agents at each iteration. By introducing regularization terms, it effectively mitigates consensus discrepancies among distributed nodes. In contrast to conventional stochastic methods, our theoretical analysis establishes that S-D-RSM achieves global convergence without requiring diminishing step sizes or strong convexity assumptions. Furthermore, it achieves an iteration complexity of $\mathcal{O}(1/\epsilon)$ with respect to both the objective function value and the consensus error. Numerical experiments show that S-D-RSM achieves up to a 2--3$\times$ speedup compared to state-of-the-art baselines, while maintaining comparable or better accuracy. These results not only validate the algorithm's theoretical guarantees but also demonstrate its effectiveness in practical tasks such as compressed sensing and empirical risk minimization.
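
As a generic toy in the spirit of the description (not the paper's exact S-D-RSM updates), the sketch below solves a distributed least-squares problem where, at each iteration, only a random subset of agents refreshes a closed-form proximal step on its local loss, and an averaging step pulls the local copies toward consensus. The agent count, penalty rho, and averaging rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, d = 8, 20, 10                      # agents, samples per agent, dimension
As = rng.standard_normal((m, n, d))
x_true = rng.standard_normal(d)
bs = np.einsum('ind,d->in', As, x_true) + 0.01 * rng.standard_normal((m, n))

rho = 1.0                                # consensus-regularization weight
X = np.zeros((m, d))                     # local copies, one per agent
z = np.zeros(d)                          # consensus variable
for it in range(200):
    S = rng.choice(m, size=m // 2, replace=False)   # random agent subset
    for i in S:                          # parallelizable proximal updates:
        # x_i = argmin 0.5*||A_i x - b_i||^2 + (rho/2)*||x - z||^2
        H = As[i].T @ As[i] + rho * np.eye(d)
        X[i] = np.linalg.solve(H, As[i].T @ bs[i] + rho * z)
    z = X.mean(axis=0)                   # pull the copies toward consensus
print("max consensus error:", np.linalg.norm(X - z, axis=1).max())
print("distance to x_true:", np.linalg.norm(z - x_true))
```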

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:35:40

An inexact semismooth Newton-Krylov method for semilinear elliptic optimal control problem
Shiqi Chen, Xuesong Chen
arxiv.org/abs/2511.10058 arxiv.org/pdf/2511.10058 arxiv.org/html/2511.10058
arXiv:2511.10058v1 Announce Type: new
Abstract: In this paper, an inexact semismooth Newton method is proposed for solving semilinear elliptic optimal control problems. The method incorporates the generalized minimal residual (GMRES) method, a Krylov subspace method, to solve the Newton equations, and uses a nonmonotone line search to adjust the iteration step size. The original problem is reformulated as a nonlinear equation through variational inequality principles and discretized using a second-order finite difference scheme. By leveraging slanting differentiability, the algorithm constructs semismooth Newton directions and employs the GMRES method to solve the Newton equations inexactly, significantly reducing computational overhead. A dynamic nonmonotone line search strategy is introduced to adjust step sizes adaptively, ensuring global convergence while overcoming local stagnation. Theoretical analysis demonstrates that the algorithm achieves superlinear convergence near optimal solutions as the residual control parameter $\eta_k$ approaches 0. Numerical experiments validate the method's accuracy and efficiency in solving semilinear elliptic optimal control problems, corroborating the theoretical insights.
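
As a small-scale illustration of the ingredients (not the paper's PDE setting), the sketch below applies a semismooth Newton iteration to the complementarity-type equation F(u) = min(u, Au - b) = 0, solving each Newton system inexactly with scipy's GMRES under a residual-based forcing term and accepting steps via a nonmonotone line search over the recent residual history. The random SPD matrix A is a stand-in assumption for the discretized elliptic operator.

```python
import numpy as np
from scipy.sparse.linalg import gmres

rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)              # SPD stand-in for the PDE operator
b = rng.standard_normal(n)

def F(u):                                # semismooth residual of an LCP
    return np.minimum(u, A @ u - b)

u = np.zeros(n)
hist = [np.linalg.norm(F(u))]
for k in range(30):
    Fu = F(u)
    if np.linalg.norm(Fu) < 1e-10:
        break
    active = u <= A @ u - b              # slanting-derivative row selection
    J = np.where(active[:, None], np.eye(n), A)
    eta = min(0.1, np.linalg.norm(Fu))   # forcing term -> inexact GMRES solve
    d, _ = gmres(J, -Fu, rtol=eta)
    t, ref = 1.0, max(hist[-5:])         # nonmonotone line search on ||F||
    while np.linalg.norm(F(u + t * d)) > (1.0 - 1e-4 * t) * ref and t > 1e-8:
        t *= 0.5
    u = u + t * d
    hist.append(np.linalg.norm(F(u)))
print("iterations:", k, "final residual:", hist[-1])
```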

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 10:10:20

Global Solutions to Non-Convex Functional Constrained Problems with Hidden Convexity
Ilyas Fatkhullin, Niao He, Guanghui Lan, Florian Wolf
arxiv.org/abs/2511.10626 arxiv.org/pdf/2511.10626 arxiv.org/html/2511.10626
arXiv:2511.10626v1 Announce Type: new
Abstract: Constrained non-convex optimization is fundamentally challenging, as global solutions are generally intractable and constraint qualifications may not hold. However, in many applications, including safe policy optimization in control and reinforcement learning, such problems possess hidden convexity, meaning they can be reformulated as convex programs via a nonlinear invertible transformation. Typically, such transformations are implicit or unknown, making a direct link with the convex program impossible. On the other hand, (sub-)gradients with respect to the original variables are often accessible or can be easily estimated, which motivates algorithms that operate directly in the original (non-convex) problem space using standard (sub-)gradient oracles. In this work, we develop the first algorithms to provably solve such non-convex problems to global minima. First, using a modified inexact proximal point method, we establish global last-iterate convergence guarantees with $\widetilde{\mathcal{O}}(\varepsilon^{-3})$ oracle complexity in the non-smooth setting. For smooth problems, we propose a new bundle-level type method based on linearly constrained quadratic subproblems, improving the oracle complexity to $\widetilde{\mathcal{O}}(\varepsilon^{-1})$. Surprisingly, despite non-convexity, our methodology does not require any constraint qualifications, can handle hidden convex equality constraints, and achieves complexities matching those for solving unconstrained hidden convex optimization.
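
A one-dimensional toy for the inexact proximal point loop (an illustration of the idea only, not the paper's modified method or its guarantees): f(x) = (x^3 - 1)^2 is non-convex in x but convex in u = x^3, and x = 0 is a stationary point that is not a minimizer, so mere stationarity is uninformative while hidden convexity certifies the global minimizer x = 1. The loop uses only gradients in the original variable and solves each proximal subproblem approximately with a few plain gradient steps; lam, step sizes, and iteration counts are arbitrary.

```python
import numpy as np

f  = lambda x: (x**3 - 1.0) ** 2         # non-convex in x, convex in u = x^3
df = lambda x: 6.0 * x**2 * (x**3 - 1.0)

x, lam = 0.2, 1.0
for k in range(50):                      # outer inexact proximal point loop
    y = x
    for _ in range(50):                  # approximate the proximal subproblem
        g = df(y) + (y - x) / lam        #   min_y f(y) + ||y - x||^2/(2*lam)
        y -= 0.05 * g                    # a few plain gradient steps
    x = y
print("x:", x, "f(x):", f(x))            # global minimizer is x = 1
```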

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 10:04:30

Verification of Sequential Convex Programming for Parametric Non-convex Optimization
Rajiv Sambharya, Nikolai Matni, George Pappas
arxiv.org/abs/2511.10622 arxiv.org/pdf/2511.10622 arxiv.org/html/2511.10622
arXiv:2511.10622v1 Announce Type: new
Abstract: We introduce a verification framework to exactly verify the worst-case performance of sequential convex programming (SCP) algorithms for parametric non-convex optimization. The verification problem is formulated as an optimization problem that maximizes a performance metric (e.g., the suboptimality after a given number of iterations) over parameters constrained to be in a parameter set and iterate sequences consistent with the SCP update rules. Our framework is general, extending the notion of SCP to include both conventional variants such as trust-region, convex-concave, and prox-linear methods, and algorithms that combine convex subproblems with rounding steps, as in relaxing and rounding schemes. Unlike existing analyses that may only provide local guarantees under limited conditions, our framework delivers global worst-case guarantees--quantifying how well an SCP algorithm performs across all problem instances in the specified family. Applications in control, signal processing, and operations research demonstrate that our framework provides, for the first time, global worst-case guarantees for SCP algorithms in the parametric setting.
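
As a brute-force stand-in for the verification problem (the paper formulates and solves the maximization exactly rather than by gridding), the sketch below takes the parametric non-convex family min_x (x^2 - th)^2, runs K prox-linear SCP iterations from a fixed starting point, and maximizes the remaining suboptimality over a grid of parameters th. The problem family, horizon K, and prox weight t are illustrative assumptions.

```python
import numpy as np

def scp_subopt(th, x0=0.1, K=5, t=0.5):
    # K prox-linear SCP iterations on f(x; th) = g(c(x)) with g(r) = r^2
    # and c(x) = x^2 - th; each step solves the convexified subproblem
    # min_d (c + J*d)^2 + d^2/(2*t) in closed form.
    x = x0
    for _ in range(K):
        c, J = x * x - th, 2.0 * x
        x += -2.0 * J * c / (2.0 * J * J + 1.0 / t)
    return (x * x - th) ** 2             # global optimal value is 0 for th > 0

grid = np.linspace(0.1, 2.0, 400)        # parameter set Theta = [0.1, 2.0]
vals = np.array([scp_subopt(th) for th in grid])
print("worst-case suboptimality:", vals.max(), "at th =", grid[vals.argmax()])
```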