2025-09-22 10:33:31
DIVEBATCH: Accelerating Model Training Through Gradient-Diversity Aware Batch Size Adaptation
Yuen Chen, Yian Wang, Hari Sundaram
https://arxiv.org/abs/2509.16173 https://
Scalable Hessian-free Proximal Conjugate Gradient Method for Nonconvex and Nonsmooth Optimization
Yiming Zhou, Wei Dai
https://arxiv.org/abs/2509.15973 https://
Training thermodynamic computers by gradient descent
Stephen Whitelam
https://arxiv.org/abs/2509.15324 https://arxiv.org/pdf/2509.15324
Assessment of the Gradient Jump Penalisation in Large-Eddy Simulations of Turbulence
Shiyu Du, Manuel Münsch, Niclas Jansson, Philipp Schlatter
https://arxiv.org/abs/2509.16013
Variable-preconditioned transformed primal-dual method for generalized Wasserstein Gradient Flows
Jin Zeng, Dawei Zhan, Ruchi Guo, Chaozhen Wei
https://arxiv.org/abs/2509.15385 …
Sparse-Autoencoder-Guided Internal Representation Unlearning for Large Language Models
Tomoya Yamashita, Akira Ito, Yuuki Yamanaka, Masanori Yamada, Takayuki Miura, Toshiki Shibahara
https://arxiv.org/abs/2509.15631
Training Variational Quantum Circuits Using Particle Swarm Optimization
Marco Mordacci, Michele Amoretti
https://arxiv.org/abs/2509.15726 https://arxiv.org…
Escaping saddle points without Lipschitz smoothness: the power of nonlinear preconditioning
Alexander Bodard, Panagiotis Patrinos
https://arxiv.org/abs/2509.15817 https://
Analysis Plug-and-Play Methods for Imaging Inverse Problems
Edward P. Chandler, Shirin Shoushtari, Brendt Wohlberg, Ulugbek S. Kamilov
https://arxiv.org/abs/2509.15422 https://
Replaced article(s) found for cs.CE. https://arxiv.org/list/cs.CE/new
[1/1]:
- A comparative analysis for different finite element types in strain-gradient elasticity simulatio...
B. Cagri Sarar, M. Erden Yildizdag, Francesco Fabbrocino, B. Emek Abali
Differentiable Acoustic Radiance Transfer
Sungho Lee, Matteo Scerbo, Seungu Han, Min Jun Choi, Kyogu Lee, Enzo De Sena
https://arxiv.org/abs/2509.15946 https://
Recursive polygon subdivision inspired by thin-section mineralogy...
(The area of each polygon is mapped to a color from a gradient. Made with https://thi.ng/geom, see next message for example & source code...)
1/2
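The example and source code the toot points to live in a follow-up message that is not part of this digest. As a stand-in, here is a rough Python/matplotlib re-creation of the stated idea (recursive chord subdivision of a polygon, each cell colored by mapping its area onto a gradient); the original is built on https://thi.ng/geom in TypeScript, and none of the code below comes from it.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon as MplPolygon

rng = np.random.default_rng(3)

def area(poly):
    # Shoelace formula for a polygon given as an (n, 2) vertex array
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * abs(x @ np.roll(y, -1) - y @ np.roll(x, -1))

def split(poly):
    # Cut a convex polygon along a chord between random points on two edges
    n = len(poly)
    i, j = sorted(rng.choice(n, size=2, replace=False))
    p = poly[i] + rng.random() * (poly[(i + 1) % n] - poly[i])
    q = poly[j] + rng.random() * (poly[(j + 1) % n] - poly[j])
    return (np.vstack([[p], poly[i + 1:j + 1], [q]]),
            np.vstack([[q], poly[j + 1:], poly[:i + 1], [p]]))

polys = [np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)]
for _ in range(250):                      # always subdivide the largest cell
    k = max(range(len(polys)), key=lambda m: area(polys[m]))
    polys.extend(split(polys.pop(k)))

areas = np.array([area(p) for p in polys])
t = (areas - areas.min()) / (np.ptp(areas) + 1e-12)   # area -> [0, 1]
fig, ax = plt.subplots(figsize=(6, 6))
for poly, ti in zip(polys, t):            # map normalized area onto a gradient
    ax.add_patch(MplPolygon(poly, facecolor=plt.cm.viridis(ti),
                            edgecolor="white", linewidth=0.3))
ax.set_xlim(0, 1); ax.set_ylim(0, 1); ax.set_axis_off()
plt.savefig("subdivision.png", dpi=200)
```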
The critical role of substrates in mitigating the power-efficiency trade-off in near-field thermophotovoltaics
Kartika N. Nimje, Julien Legendre, Michela F. Picardi, Alejandro W. Rodriguez, Georgia T. Papadakis
https://arxiv.org/abs/2509.16048
Google rolls out its new gradient "G" icon company-wide, saying it "now represents all of Google ... and visually reflects our evolution in the AI era" (Abner Li/9to5Google)
https://9to5google.com/2025/09/29/google-g-gradient-company-icon/
Inverting Trojans in LLMs
Zhengxing Li, Guangmingmei Yang, Jayaram Raghuram, David J. Miller, George Kesidis
https://arxiv.org/abs/2509.16203 https://arxiv…
Particle in cell simulation on mode conversion of Saturn's 20 kHz narrowband radio emission
Zhoufan Mu, Yao Chen, Tangmu Li, Sulan Ni, Zilong Zhang, Hao Ning
https://arxiv.org/abs/2509.15542
Ionospheric gradient estimation using ground-based GEO observations for monitoring multi-scale ionospheric dynamics: #Ionosphere in motion - a new way to track space weather in real time: https://www.eurekalert.org/news-releases/1110143
Statistical Guarantees for High-Dimensional Stochastic Gradient Descent
Jiaqi Li, Zhipeng Lou, Johannes Schmidt-Hieber, Wei Biao Wu
https://arxiv.org/abs/2510.12013 https://
A generalized canonical metric for optimization on the indefinite Stiefel manifold
Dinh Van Tiep, Duong Thi Viet An, Nguyen Thi Ngoc Oanh, Nguyen Thanh Son
https://arxiv.org/abs/2509.16113
An Invitation to Obstruction Bundle Gluing Through Morse Flow Lines
Ipsita Datta, Yuan Yao
https://arxiv.org/abs/2510.10393 https://arxiv.org/pdf/2510.1039…
Dynamic Classifier-Free Diffusion Guidance via Online Feedback
Pinelopi Papalampidi, Olivia Wiles, Ira Ktena, Aleksandar Shtedritski, Emanuele Bugliarello, Ivana Kajic, Isabela Albuquerque, Aida Nematzadeh
https://arxiv.org/abs/2509.16131
Gradient-flowed operator product expansion without IR renormalons
Martin Beneke (TU Munich), Hiromasa Takaura (Kyoto University)
https://arxiv.org/abs/2510.12193 https://…
Reverse Engineering of Music Mixing Graphs with Differentiable Processors and Iterative Pruning
Sungho Lee, Marco Martínez-Ramírez, Wei-Hsiang Liao, Stefan Uhlich, Giorgio Fabbro, Kyogu Lee, Yuki Mitsufuji
https://arxiv.org/abs/2509.15948
Optimal gradient estimates for conductivity problems with imperfect low-conductivity interfaces
Hongjie Dong, Haigang Li, Yan Zhao
https://arxiv.org/abs/2510.10615 https://
Building Gradient by Gradient: Decentralised Energy Functions for Bimanual Robot Assembly
Alexander L. Mitchell, Joe Watson, Ingmar Posner
https://arxiv.org/abs/2510.04696 https…
On curvature estimates for four-dimensional gradient Ricci solitons
Huai-Dong Cao
https://arxiv.org/abs/2510.06059 https://arxiv.org/pdf/2510.06059
Introducing the method of ellipcenters, a new first order technique for unconstrained optimization
Roger Behling, Ramyro Aquines Correa, Eduarda Ferreira Zanatta, Vincent Guigues
https://arxiv.org/abs/2509.15471
Replaced article(s) found for stat.CO. https://arxiv.org/list/stat.CO/new
[1/1]:
- Gradient-Free Sequential Bayesian Experimental Design via Interacting Particle Systems
Robert Gruhlke, Matei Hanu, Claudia Schillings, Philipp Wacker
SPG: Sandwiched Policy Gradient for Masked Diffusion Language Models
Chengyu Wang, Paria Rashidinejad, DiJia Su, Song Jiang, Sid Wang, Siyan Zhao, Cai Zhou, Shannon Zejiang Shen, Feiyu Chen, Tommi Jaakkola, Yuandong Tian, Bo Liu
https://arxiv.org/abs/2510.09541
Reliability Sensitivity with Response Gradient
Siu-Kui Au, Zi-Jun Cao
https://arxiv.org/abs/2510.09315 https://arxiv.org/pdf/2510.09315
Thermal gradient-driven skyrmion dynamics with near-zero skyrmion Hall angle
Yogesh Kumar, Hurmal Saren, Pintu Das
https://arxiv.org/abs/2510.07020 https://
Forward and backward error bounds for a mixed precision preconditioned conjugate gradient algorithm
Thomas Bake, Erin Carson, Yuxin Ma
https://arxiv.org/abs/2510.11379 https://
Gradient-Guided Furthest Point Sampling for Robust Training Set Selection
Morris Trestman, Stefan Gugler, Felix A. Faber, O. A. von Lilienfeld
https://arxiv.org/abs/2510.08906 h…
Small-Covariance Noise-to-State Stability of Stochastic Systems and Its Applications to Stochastic Gradient Dynamics
Leilei Cui, Zhong-Ping Jiang, Eduardo D. Sontag
https://arxiv.org/abs/2509.24277
On the $O(1/T)$ Convergence of Alternating Gradient Descent-Ascent in Bilinear Games
Tianlong Nan, Shuvomoy Das Gupta, Garud Iyengar, Christian Kroer
https://arxiv.org/abs/2510.03855
Reading Between the Lines: Towards Reliable Black-box LLM Fingerprinting via Zeroth-order Gradient Estimation
Shuo Shao, Yiming Li, Hongwei Yao, Yifei Chen, Yuchen Yang, Zhan Qin
https://arxiv.org/abs/2510.06605
Replaced article(s) found for cs.CV. https://arxiv.org/list/cs.CV/new
[4/8]:
- Boosting Adversarial Transferability via Commonality-Oriented Gradient Optimization
Yanting Gao, Yepeng Liu, Junming Liu, Qi Zhang, Hongyun Zhang, Duoqian Miao, Cairong Zhao
Liouville results for $(p,q)$-Laplacian elliptic equations with source terms involving gradient nonlinearities
Mousomi Bhakta, Anup Biswas, Roberta Filippucci
https://arxiv.org/abs/2510.12486
Human brain high-resolution diffusion MRI with optimized slice-by-slice B0 field shimming in head-only high-performance gradient MRI systems
Patricia Lan, Sherry S. Huang, Chitresh Bhushan, Xinzeng Wang, Seung-Kyun Lee, Raymond Y. Huang, Jerome J. Maller, Jennifer A. McNab, Ante Zhu
https://arxiv.org/abs/2510.03586
Nonlinearly Preconditioned Gradient Methods: Momentum and Stochastic Analysis
Konstantinos Oikonomidis, Jan Quan, Panagiotis Patrinos
https://arxiv.org/abs/2510.11312 https://…
Gradient of White Matter Functional Variability via fALFF Differential Identifiability
Xinle Chang, Yang Yang, Yueran Li, Zhengcen Li, Haijin Zeng, Jingyong Su
https://arxiv.org/abs/2510.06914
A gradient boosting and broadband approach to finding Lyman-α emitting galaxies beyond narrowband surveys
A. Vale, A. Paulino-Afonso, A. Humphrey, P. A. C. Cunha, B. Ribeiro, B. Cerqueira, R. Carvajal, J. Fonseca
https://arxiv.org/abs/2509.22915
Stability of asymptotically conical gradient Kähler-Ricci expanders
Longteng Chen
https://arxiv.org/abs/2510.06850 https://arxiv.org/pdf/2510.06850
Statistical Benchmarking of Optimization Methods for Variational Quantum Eigensolver under Quantum Noise
Silvie Illésová, Tomáš Bezděk, Vojtěch Novák, Bruno Senjean, Martin Beseda
https://arxiv.org/abs/2510.08727
Temporal Variabilities Limit Convergence Rates in Gradient-Based Online Optimization
Bryan Van Scoy, Gianluca Bianchin
https://arxiv.org/abs/2510.12512 https://
Thermodynamically Consistent Continuum Theory of Magnetic Particles in High-Gradient Fields
Marko Tesanovic, Daniel M. Markiewitz, Marcus L. Popp, Martin Z. Bazant, Sonja Berensmeier
https://arxiv.org/abs/2510.07552
ALMA Reveals an Eccentricity Gradient in the #Fomalhaut Debris Disk: https://iopscience.iop.org/article/10.3847/1538-4357/adfadc -> A Planet Carving the Fomalhaut Debris Disk? https://aasnova.org/2025/12/09/michelangelo-in-space-a-planet-carving-the-fomalhaut-debris-disk/
Statistical Inference for Gradient Boosting Regression
Haimo Fang, Kevin Tan, Giles Hooker
https://arxiv.org/abs/2509.23127 https://arxiv.org/pdf/2509.2312…
Flatness-Aware Stochastic Gradient Langevin Dynamics
Stefano Bruno, Youngsik Hwang, Jaehyeon An, Sotirios Sabanis, Dong-Young Lim
https://arxiv.org/abs/2510.02174 https://
From Morse Functions to Lefschetz Fibrations on Cotangent Bundles
Emmanuel Giroux
https://arxiv.org/abs/2510.10669 https://arxiv.org/pdf/2510.10669
Global Convergence of Policy Gradient for Entropy Regularized Linear-Quadratic Control with multiplicative noise
Gabriel Diaz, Lucky Li, Wenhao Zhang
https://arxiv.org/abs/2510.02896
Adaptive Conditional Gradient Descent
Abbas Khademi, Antonio Silveti-Falls
https://arxiv.org/abs/2510.11440 https://arxiv.org/pdf/2510.11440
Crosslisted article(s) found for cs.CV. https://arxiv.org/list/cs.CV/new
[1/3]:
- Gradient-Sign Masking for Task Vector Transport Across Pre-Trained Models
Rinaldi, Panariello, Salici, Liu, Ciccone, Porrello, Calderara
Curvature pinching of asymptotically conical gradient expanding Ricci solitons
Huai-Dong Cao, Junming Xie
https://arxiv.org/abs/2510.05075 https://arxiv.or…
Untargeted Jailbreak Attack
Xinzhe Huang, Wenjing Hu, Tianhang Zheng, Kedong Xiu, Xiaojun Jia, Di Wang, Zhan Qin, Kui Ren
https://arxiv.org/abs/2510.02999 https://
Stochastic Gradient Descent for Incomplete Tensor Linear Systems
Anna Ma, Deanna Needell, Alexander Xue
https://arxiv.org/abs/2510.07630 https://arxiv.org/…
On the optimization dynamics of RLVR: Gradient gap and step size thresholds
Joe Suk, Yaqi Duan
https://arxiv.org/abs/2510.08539 https://arxiv.org/pdf/2510.…
A Gradient Guided Diffusion Framework for Chance Constrained Programming
Boyang Zhang, Zhiguo Wang, Ya-Feng Liu
https://arxiv.org/abs/2510.12238 https://ar…
On the Theory of Continual Learning with Gradient Descent for Neural Networks
Hossein Taheri, Avishek Ghosh, Arya Mazumdar
https://arxiv.org/abs/2510.05573 https://
Hybrid Quantum-Classical Policy Gradient for Adaptive Control of Cyber-Physical Systems: A Comparative Study of VQC vs. MLP
Aueaphum Aueawatthanaphisut, Nyi Wunna Tun
https://arxiv.org/abs/2510.06010
PGMEL: Policy Gradient-based Generative Adversarial Network for Multimodal Entity Linking
KM Pooja, Cheng Long, Aixin Sun
https://arxiv.org/abs/2510.02726 https://
Riesz fractional gradient functionals defined on partitions: nonlocal-to-local variational limits
Stefano Almi, Maicol Caponi, Manuel Friedrich, Francesco Solombrino
https://arxiv.org/abs/2510.04881
New Classes of Non-monotone Variational Inequality Problems Solvable via Proximal Gradient on Smooth Gap Functions
Lei Zhao, Daoli Zhu, Shuzhong Zhang
https://arxiv.org/abs/2510.12105
Four-dimensional Gradient Shrinking Ricci Solitons and Modified Sectional Curvature
Xiaodong Cao, Ernani Ribeiro Jr, Hosea Wondo
https://arxiv.org/abs/2509.20669
NeST-BO: Fast Local Bayesian Optimization via Newton-Step Targeting of Gradient and Hessian Information
Wei-Ting Tang, Akshay Kudva, Joel A. Paulson
https://arxiv.org/abs/2510.05516
Computing Wasserstein Barycenters through Gradient Flows
Eduardo Fernandes Montesuma, Yassir Bendou, Mike Gartrell
https://arxiv.org/abs/2510.04602 https://
Off-Policy Reinforcement Learning with Anytime Safety Guarantees via Robust Safe Gradient Flow
Pol Mestres, Arnau Marzabal, Jorge Cortés
https://arxiv.org/abs/2510.01492 h…
SMEC: Rethinking Matryoshka Representation Learning for Retrieval Embedding Compression
Biao Zhang, Lixin Chen, Tong Liu, Bo Zheng
https://arxiv.org/abs/2510.12474 https://
AdaBet: Gradient-free Layer Selection for Efficient Training of Deep Neural Networks
Irene Tenison, Soumyajit Chatterjee, Fahim Kawsar, Mohammad Malekzadeh
https://arxiv.org/abs/2510.03101
Gradient regularity for widely degenerate parabolic equations
Michael Strunk
https://arxiv.org/abs/2510.07999 https://arxiv.org/pdf/2510.07999
Adaptive Kernel Selection for Stein Variational Gradient Descent
Moritz Melcher, Simon Weissmann, Ashia C. Wilson, Jakob Zech
https://arxiv.org/abs/2510.02067 https://
(Adaptive) Scaled gradient methods beyond locally Hölder smoothness: Lyapunov analysis, convergence rate and complexity
Susan Ghaderi, Morteza Rahimi, Yves Moreau, Masoud Ahookhosh
https://arxiv.org/abs/2511.10425 https://arxiv.org/pdf/2511.10425 https://arxiv.org/html/2511.10425
arXiv:2511.10425v1 Announce Type: new
Abstract: This paper addresses the unconstrained minimization of smooth convex functions whose gradients are locally Hölder continuous. In this setting, we analyze the Scaled Gradient Algorithm (SGA) under local smoothness assumptions, proving its global convergence and iteration complexity. Furthermore, under local strong convexity and the Kurdyka-Łojasiewicz (KL) inequality, we establish linear convergence rates and provide explicit complexity bounds. In particular, we show that when the gradient is locally Lipschitz continuous, SGA attains linear convergence for any KL exponent. We then introduce and analyze an adaptive variant of SGA (AdaSGA), which automatically adjusts the scaling and step-size parameters. For this method, we show global convergence and derive local linear rates under strong convexity.
toXiv_bot_toot
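For readers who want something concrete: the abstract describes SGA/AdaSGA only at the level of its convergence analysis, so the snippet below is just a generic scaled-gradient step with Armijo backtracking in that spirit. The diagonal scaling d, the backtracking constants, and every parameter name are illustrative assumptions, not the paper's AdaSGA update rule.

```python
import numpy as np

def scaled_gradient_descent(f, grad, x0, d, t0=1.0, shrink=0.5,
                            tol=1e-8, max_iter=1000):
    # Generic scaled gradient method: step along d * grad(x) with a
    # fixed diagonal scaling d > 0 and an Armijo backtracking step size
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        direction = d * g
        step = t0
        # Backtrack until the sufficient-decrease condition holds
        while (f(x - step * direction) > f(x) - 0.5 * step * (g @ direction)
               and step > 1e-12):
            step *= shrink
        x = x - step * direction
    return x

# Toy usage on a poorly scaled quadratic
f = lambda x: 0.5 * (100 * x[0] ** 2 + x[1] ** 2)
grad = lambda x: np.array([100 * x[0], x[1]])
print(scaled_gradient_descent(f, grad, [1.0, 1.0], d=np.array([0.01, 1.0])))
```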
Asymptotic behaviour of the weak inverse anisotropic mean curvature flow
Chaoqun Gao, Yong Wei, Rong Zhou
https://arxiv.org/abs/2510.08168 https://arxiv.or…
Robust Tangent Space Estimation via Laplacian Eigenvector Gradient Orthogonalization
Dhruv Kohli, Sawyer J. Robertson, Gal Mishne, Alexander Cloninger
https://arxiv.org/abs/2510.02308
Quantitative Convergence Analysis of Projected Stochastic Gradient Descent for Non-Convex Losses via the Goldstein Subdifferential
Yuping Zheng, Andrew Lamperski
https://arxiv.org/abs/2510.02735
Active Subspaces in Infinite Dimension
Poorbita Kundu, Nathan Wycoff
https://arxiv.org/abs/2510.11871 https://arxiv.org/pdf/2510.11871
Inductive inference of gradient-boosted decision trees on graphs for insurance fraud detection
Félix Vandervorst, Bruno Deprez, Wouter Verbeke, Tim Verdonck
https://arxiv.org/abs/2510.05676
Approximate Bregman proximal gradient algorithm with variable metric Armijo--Wolfe line search
Kiwamu Fujiki, Shota Takahashi, Akiko Takeda
https://arxiv.org/abs/2510.06615 http…
CurES: From Gradient Analysis to Efficient Curriculum Learning for Reasoning LLMs
Yongcheng Zeng, Zexu Sun, Bokai Ji, Erxue Min, Hengyi Cai, Shuaiqiang Wang, Dawei Yin, Haifeng Zhang, Xu Chen, Jun Wang
https://arxiv.org/abs/2510.01037
When Langevin Monte Carlo Meets Randomization: Non-asymptotic Error Bounds beyond Log-Concavity and Gradient Lipschitzness
Xiaojie Wang, Bin Yang
https://arxiv.org/abs/2509.25630
Global Convergence of Four-Layer Matrix Factorization under Random Initialization
Minrui Luo, Weihang Xu, Xiang Gao, Maryam Fazel, Simon Shaolei Du
https://arxiv.org/abs/2511.09925 https://arxiv.org/pdf/2511.09925 https://arxiv.org/html/2511.09925
arXiv:2511.09925v1 Announce Type: new
Abstract: Gradient descent dynamics on the deep matrix factorization problem is extensively studied as a simplified theoretical model for deep neural networks. Although the convergence theory for two-layer matrix factorization is well-established, no global convergence guarantee for general deep matrix factorization under random initialization has been established to date. To address this gap, we provide a polynomial-time global convergence guarantee for randomly initialized gradient descent on four-layer matrix factorization, given certain conditions on the target matrix and a standard balanced regularization term. Our analysis employs new techniques to show saddle-avoidance properties of gradient descent dynamics, and extends previous theories to characterize the change in eigenvalues of layer weights.
toXiv_bot_toot
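A toy reproduction of the setting (not of the paper's proof): plain gradient descent from a small random initialization on a four-layer factorization of a low-rank target. The balanced regularization term the abstract mentions is omitted, and the dimensions, step size, and iteration count are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 8, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # low-rank target

# Small random initialization of the four layers of W4 W3 W2 W1
Ws = [0.2 * rng.standard_normal((n, n)) for _ in range(4)]
lr = 1e-3

for it in range(5001):
    W1, W2, W3, W4 = Ws
    E = W4 @ W3 @ W2 @ W1 - M          # residual of the end-to-end product
    # Gradients of ||W4 W3 W2 W1 - M||_F^2 with respect to W1..W4
    grads = [2 * (W4 @ W3 @ W2).T @ E,
             2 * (W4 @ W3).T @ E @ W1.T,
             2 * W4.T @ E @ (W2 @ W1).T,
             2 * E @ (W3 @ W2 @ W1).T]
    for W, g in zip(Ws, grads):
        W -= lr * g                    # in-place gradient step on each layer
    if it % 1000 == 0:
        print(it, np.linalg.norm(E) ** 2)
```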
Sample-Efficient Differentially Private Fine-Tuning via Gradient Matrix Denoising
Ali Dadsetan, Frank Rudzicz
https://arxiv.org/abs/2510.01137 https://arxi…
Differentially Private Two-Stage Gradient Descent for Instrumental Variable Regression
Haodong Liang, Yanhao Jin, Krishnakumar Balasubramanian, Lifeng Lai
https://arxiv.org/abs/2509.22794
Correlating Cross-Iteration Noise for DP-SGD using Model Curvature
Xin Gu, Yingtai Xiao, Guanlin He, Jiamu Bai, Daniel Kifer, Kiwan Maeng
https://arxiv.org/abs/2510.05416 https:…
Low-Discrepancy Set Post-Processing via Gradient Descent
François Clément, Linhang Huang, Woorim Lee, Cole Smidt, Braeden Sodt, Xuan Zhang
https://arxiv.org/abs/2511.10496 https://arxiv.org/pdf/2511.10496 https://arxiv.org/html/2511.10496
arXiv:2511.10496v1 Announce Type: new
Abstract: The construction of low-discrepancy sets, used for uniform sampling and numerical integration, has recently seen great improvements based on optimization and machine learning techniques. However, these methods are computationally expensive, often requiring days of computation or access to GPU clusters. We show that simple gradient descent-based techniques allow for comparable results when starting with a reasonably uniform point set. Not only is this method much more efficient and accessible, but it can be applied as post-processing to any low-discrepancy set generation method for a variety of standard discrepancy measures.
toXiv_bot_toot
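A minimal sketch of the idea, under assumptions the abstract does not fix: we take the squared L2 star discrepancy, which has Warnock's closed form, and descend on the point coordinates. The paper presumably uses analytic gradients and starts from an already reasonably uniform set; here a finite-difference gradient and a random starting set keep the sketch short and self-contained.

```python
import numpy as np

def l2_star_discrepancy_sq(P):
    # Warnock's closed form for the squared L2 star discrepancy on [0,1]^d
    N, d = P.shape
    t1 = 3.0 ** (-d)
    t2 = (2.0 / N) * np.sum(np.prod((1.0 - P ** 2) / 2.0, axis=1))
    t3 = np.sum(np.prod(np.minimum(1.0 - P[:, None, :], 1.0 - P[None, :, :]),
                        axis=2)) / N ** 2
    return t1 - t2 + t3

def post_process(P, lr=0.05, iters=200, h=1e-6):
    # Gradient descent on the point coordinates; finite differences stand
    # in for the analytic gradient purely to keep this sketch short
    P = P.copy()
    for _ in range(iters):
        base = l2_star_discrepancy_sq(P)
        g = np.zeros_like(P)
        for idx in np.ndindex(*P.shape):
            Q = P.copy()
            Q[idx] += h
            g[idx] = (l2_star_discrepancy_sq(Q) - base) / h
        P = np.clip(P - lr * g, 0.0, 1.0)   # keep points in the unit cube
    return P

rng = np.random.default_rng(1)
P0 = rng.random((64, 2))
print("before:", l2_star_discrepancy_sq(P0))
print("after: ", l2_star_discrepancy_sq(post_process(P0)))
```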
Reinforce-Ada: An Adaptive Sampling Framework for Reinforce-Style LLM Training
Wei Xiong, Chenlu Ye, Baohao Liao, Hanze Dong, Xinxing Xu, Christof Monz, Jiang Bian, Nan Jiang, Tong Zhang
https://arxiv.org/abs/2510.04996
A Riemannian Accelerated Proximal Gradient Method
Shuailing Feng, Yuhang Jiang, Wen Huang, Shihui Ying
https://arxiv.org/abs/2509.21897 https://arxiv.org/p…
Adaptive Memory Momentum via a Model-Based Framework for Deep Learning Optimization
Kristi Topollai, Anna Choromanska
https://arxiv.org/abs/2510.04988 https://
S-D-RSM: Stochastic Distributed Regularized Splitting Method for Large-Scale Convex Optimization Problems
Maoran Wang, Xingju Cai, Yongxin Chen
https://arxiv.org/abs/2511.10133 https://arxiv.org/pdf/2511.10133 https://arxiv.org/html/2511.10133
arXiv:2511.10133v1 Announce Type: new
Abstract: This paper investigates large-scale distributed composite convex optimization problems, with motivations from a broad range of applications, including multi-agent systems, federated learning, smart grids, wireless sensor networks, compressed sensing, and so on. Stochastic gradient descent (SGD) and its variants are commonly employed to solve such problems. However, existing algorithms often rely on vanishing step sizes, strong convexity assumptions, or entail substantial computational overhead to ensure convergence or obtain favorable complexity. To bridge the gap between theory and practice, we integrate consensus optimization and operator splitting techniques (see Problem Reformulation) to develop a novel stochastic splitting algorithm, termed the \emph{stochastic distributed regularized splitting method} (S-D-RSM). In practice, S-D-RSM performs parallel updates of proximal mappings and gradient information for only a randomly selected subset of agents at each iteration. By introducing regularization terms, it effectively mitigates consensus discrepancies among distributed nodes. In contrast to conventional stochastic methods, our theoretical analysis establishes that S-D-RSM achieves global convergence without requiring diminishing step sizes or strong convexity assumptions. Furthermore, it achieves an iteration complexity of $\mathcal{O}(1/\epsilon)$ with respect to both the objective function value and the consensus error. Numerical experiments show that S-D-RSM achieves up to 2--3$\times$ speedup compared to state-of-the-art baselines, while maintaining comparable or better accuracy. These results not only validate the algorithm's theoretical guarantees but also demonstrate its effectiveness in practical tasks such as compressed sensing and empirical risk minimization.
toXiv_bot_toot
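The abstract describes S-D-RSM only at a high level, so the following is a hypothetical toy of the pattern it outlines (regularized proximal updates for a randomly selected subset of agents, followed by a consensus step) rather than the actual iteration; the least-squares agent objectives, rho, and subset size are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
m_agents, n, rho = 10, 5, 1.0

# Each agent holds a private least-squares term f_i(x) = 0.5*||A_i x - b_i||^2
A = [rng.standard_normal((20, n)) for _ in range(m_agents)]
x_true = rng.standard_normal(n)
b = [Ai @ x_true + 0.01 * rng.standard_normal(20) for Ai in A]

X = np.zeros((m_agents, n))   # local agent variables
z = np.zeros(n)               # consensus variable

for _ in range(300):
    S = rng.choice(m_agents, size=3, replace=False)   # random agent subset
    for i in S:
        # Regularized proximal update: argmin_x f_i(x) + (rho/2)*||x - z||^2
        X[i] = np.linalg.solve(A[i].T @ A[i] + rho * np.eye(n),
                               A[i].T @ b[i] + rho * z)
    z = X.mean(axis=0)        # consensus step across all agents
print("distance to x_true:", np.linalg.norm(z - x_true))
```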
Weight Initialization and Variance Dynamics in Deep Neural Networks and Large Language Models
Yankun Han
https://arxiv.org/abs/2510.09423 https://arxiv.org…
Boundary-Guided Policy Optimization for Memory-efficient RL of Diffusion Large Language Models
Nianyi Lin, Jiajie Zhang, Lei Hou, Juanzi Li
https://arxiv.org/abs/2510.11683 http…
Inexact and Stochastic Gradient Optimization Algorithms with Inertia and Hessian Driven Damping
Harsh Choudhary, Jalal Fadili, Vyacheslav Kungurtsev
https://arxiv.org/abs/2509.19561
Proximal gradient methods in Banach spaces
Gerd Wachsmuth, Daniel Walter
https://arxiv.org/abs/2509.24685 https://arxiv.org/pdf/2509.24685
Gated X-TFC: Soft Domain Decomposition for Forward and Inverse Problems in Sharp-Gradient PDEs
Vikas Dwivedi, Enrico Schiassi, Monica Sigovan, Bruno Sixou
https://arxiv.org/abs/2510.01039
On the (almost) Global Exponential Convergence of the Overparameterized Policy Optimization for the LQR Problem
Moh Kamalul Wafi, Arthur Castello B. de Oliveira, Eduardo D. Sontag
https://arxiv.org/abs/2510.02140
Unveiling m-Sharpness Through the Structure of Stochastic Gradient Noise
Haocheng Luo, Mehrtash Harandi, Dinh Phung, Trung Le
https://arxiv.org/abs/2509.18001 https://
A Single-Loop Gradient Algorithm for Pessimistic Bilevel Optimization via Smooth Approximation
Cao Qichao, Zeng Shangzhi, Zhang Jin
https://arxiv.org/abs/2509.26240 https://
Hessian-guided Perturbed Wasserstein Gradient Flows for Escaping Saddle Points
Naoya Yamamoto, Juno Kim, Taiji Suzuki
https://arxiv.org/abs/2509.16974 https://
Replaced article(s) found for math.OC. https://arxiv.org/list/math.OC/new
[1/1]:
- A robust BFGS algorithm for unconstrained nonlinear optimization problems
Yaguang Yang
https://arxiv.org/abs/1212.5929
- Quantum computing and the stable set problem
Aljaž Krpan, Janez Povh, Dunja Pucher
https://arxiv.org/abs/2405.12845 https://mastoxiv.page/@arXiv_mathOC_bot/112483516437815686
- Mean Field Game with Reflected Jump Diffusion Dynamics: A Linear Programming Approach
Zongxia Liang, Xiang Yu, Keyu Zhang
https://arxiv.org/abs/2508.20388 https://mastoxiv.page/@arXiv_mathOC_bot/115111048711698998
- Differential Dynamic Programming for the Optimal Control Problem with an Ellipsoidal Target Set a...
Sungjun Eom, Gyunghoon Park
https://arxiv.org/abs/2509.07546 https://mastoxiv.page/@arXiv_mathOC_bot/115179281556444440
- On the Moreau envelope properties of weakly convex functions
Marien Renaud, Arthur Leclaire, Nicolas Papadakis
https://arxiv.org/abs/2509.13960 https://mastoxiv.page/@arXiv_mathOC_bot/115224514482363803
- Automated algorithm design via Nevanlinna-Pick interpolation
Ibrahim K. Ozaslan, Tryphon T. Georgiou, Mihailo R. Jovanovic
https://arxiv.org/abs/2509.21416 https://mastoxiv.page/@arXiv_mathOC_bot/115286533597711930
- Optimal Control of a Bioeconomic Crop-Energy System with Energy Reinvestment
Othman Cherkaoui Dekkaki
https://arxiv.org/abs/2510.11381 https://mastoxiv.page/@arXiv_mathOC_bot/115372322896073250
- Point Convergence Analysis of the Accelerated Gradient Method for Multiobjective Optimization: Co...
Yingdong Yin
https://arxiv.org/abs/2510.26382 https://mastoxiv.page/@arXiv_mathOC_bot/115468018035252078
- History-Aware Adaptive High-Order Tensor Regularization
Chang He, Bo Jiang, Yuntian Jiang, Chuwen Zhang, Shuzhong Zhang
https://arxiv.org/abs/2511.05788
- Equivalence of entropy solutions and gradient flows for pressureless 1D Euler systems
José Antonio Carrillo, Sondre Tesdal Galtung
https://arxiv.org/abs/2312.04932 https://mastoxiv.page/@arXiv_mathAP_bot/111560077272113052
- Kernel Modelling of Fading Memory Systems
Yongkang Huo, Thomas Chaffey, Rodolphe Sepulchre
https://arxiv.org/abs/2403.11945 https://mastoxiv.page/@arXiv_eessSY_bot/112121123836064435
- The Maximum Theoretical Ground Speed of the Wheeled Vehicle
Altay Zhakatayev, Mukatai Nemerebayev
https://arxiv.org/abs/2502.15341 https://mastoxiv.page/@arXiv_physicsclassph_bot/114057765769441123
- Hessian stability and convergence rates for entropic and Sinkhorn potentials via semiconcavity
Giacomo Greco, Luca Tamanini
https://arxiv.org/abs/2504.11133 https://mastoxiv.page/@arXiv_mathPR_bot/114346453424694503
- Optimizing the ground state energy of the three-dimensional magnetic Dirichlet Laplacian with con...
Matthias Baur
https://arxiv.org/abs/2504.21597 https://mastoxiv.page/@arXiv_mathph_bot/114431404740241516
- A localized consensus-based sampling algorithm
Arne Bouillon, Alexander Bodard, Panagiotis Patrinos, Dirk Nuyens, Giovanni Samaey
https://arxiv.org/abs/2505.24861 https://mastoxiv.page/@arXiv_mathNA_bot/114612580684567066
- A Novel Sliced Fused Gromov-Wasserstein Distance
Moritz Piening, Robert Beinert
https://arxiv.org/abs/2508.02364 https://mastoxiv.page/@arXiv_csLG_bot/114976243138728278
- Minimal Regret Walras Equilibria for Combinatorial Markets via Duality, Integrality, and Sensitiv...
Alo\"is Duguet, Tobias Harks, Martin Schmidt, Julian Schwarz
https://arxiv.org/abs/2511.09021 https://mastoxiv.page/@arXiv_csGT_bot/115541243299714775
toXiv_bot_toot