Tootfinder

Opt-in global Mastodon full-text search. Join the index!

@arXiv_mathNA_bot@mastoxiv.page
2025-06-11 08:43:15

Stochastic gradient descent based variational inference for infinite-dimensional inverse problems
Jiaming Sui, Junxiong Jia, Jinglai Li
arxiv.org/abs/2506.08380

@arXiv_quantph_bot@mastoxiv.page
2025-07-11 09:47:11

Online Quantum State Tomography via Stochastic Gradient Descent
Jian-Feng Cai, Yuling Jiao, Yinan Li, Xiliang Lu, Jerry Zhijian Yang, Juntao You
arxiv.org/abs/2507.07601

@arXiv_mathOC_bot@mastoxiv.page
2025-07-11 09:00:31

Almost Sure Convergence for the Last Iterate of Stochastic Gradient Descent Schemes
Marcel Hudiani
arxiv.org/abs/2507.07281

@arXiv_mathOC_bot@mastoxiv.page
2025-07-11 09:23:51

An Adaptive Order Caputo Fractional Gradient Descent Method for Multi-objective Optimization Problems
Barsha Shaw, Md Abu Talhamainuddin Ansary
arxiv.org/abs/2507.07674

@arXiv_mathST_bot@mastoxiv.page
2025-06-10 17:26:59

This arxiv.org/abs/2409.08469 has been replaced.
initial toot: mastoxiv.page/@arXiv_mat…

@arXiv_csLG_bot@mastoxiv.page
2025-06-09 10:11:22

Gradient Similarity Surgery in Multi-Task Deep Learning
Thomas Borsani, Andrea Rosani, Giuseppe Nicosia, Giuseppe Di Fatta
arxiv.org/abs/2506.06130

@arXiv_statME_bot@mastoxiv.page
2025-07-10 09:25:31

Non-Asymptotic Analysis of Online Local Private Learning with SGD
Enze Shi, Jinhan Xie, Bei Jiang, Linglong Kong, Xuming He
arxiv.org/abs/2507.07041

@arXiv_nlincd_bot@mastoxiv.page
2025-06-10 08:59:02

Stochastic Gradient-Descent Calibration of Pyragas Delayed-Feedback Control for Chaos Suppression in the Sprott Circuit
Adib Kabir, Onil Morshed, Oishi Kabir
arxiv.org/abs/2506.06639

@arXiv_csIT_bot@mastoxiv.page
2025-06-10 16:58:39

This arxiv.org/abs/2504.14730 has been replaced.
initial toot: mastoxiv.page/@arXiv_csIT_…

@arXiv_eessSY_bot@mastoxiv.page
2025-06-10 09:21:12

Decentralized Optimization with Amplified Privacy via Efficient Communication
Wei Huo, Changxin Liu, Kemi Ding, Karl Henrik Johansson, Ling Shi
arxiv.org/abs/2506.07102

@arXiv_csNE_bot@mastoxiv.page
2025-06-10 08:21:02

Can Biologically Plausible Temporal Credit Assignment Rules Match BPTT for Neural Similarity? E-prop as an Example
Yuhan Helena Liu, Guangyu Robert Yang, Christopher J. Cueva
arxiv.org/abs/2506.06904

@arXiv_csLG_bot@mastoxiv.page
2025-07-09 10:24:52

Simple Convergence Proof of Adam From a Sign-like Descent Perspective
Hanyang Peng, Shuang Qin, Yue Yu, Fangqing Jiang, Hui Wang, Zhouchen Lin
arxiv.org/abs/2507.05966

@arXiv_statCO_bot@mastoxiv.page
2025-06-10 10:28:53

Linear Discriminant Analysis with Gradient Optimization on Covariance Inverse
Cencheng Shen, Yuexiao Dong
arxiv.org/abs/2506.06845

@arXiv_mathOC_bot@mastoxiv.page
2025-06-10 18:06:00

This arxiv.org/abs/2503.16398 has been replaced.
initial toot: mastoxiv.page/@arXiv_mat…

@arXiv_mathOC_bot@mastoxiv.page
2025-08-11 09:03:49

Kahan's Automatic Step-Size Control for Unconstrained Optimization
Yifeng Meng, Chungen Shen, Linuo Xue, Lei-Hong Zhang
arxiv.org/abs/2508.06002

@arXiv_csRO_bot@mastoxiv.page
2025-06-03 07:57:51

Constrained Stein Variational Gradient Descent for Robot Perception, Planning, and Identification
Griffin Tabor, Tucker Hermans
arxiv.org/abs/2506.00589

@arXiv_csLG_bot@mastoxiv.page
2025-06-10 19:20:05

This arxiv.org/abs/2505.17282 has been replaced.
initial toot: mastoxiv.page/@arXiv_csLG_…

@pre@boing.world
2025-06-06 16:14:26

This month's newsletter/digest is on its way to the email box of all the amazing friendly good people who asked for it.
The rest of you bitchy vicious argumentative people in public spats can read it here:
#newsletter #digest

@arXiv_mathNA_bot@mastoxiv.page
2025-08-08 08:03:22

Toroidal area-preserving parameterizations of genus-one closed surfaces
Marco Sutti, Mei-Heng Yueh
arxiv.org/abs/2508.05111 arxiv.org/pdf/2…

@arXiv_statML_bot@mastoxiv.page
2025-07-29 09:42:31

Statistical Inference for Differentially Private Stochastic Gradient Descent
Xintao Xia, Linjun Zhang, Zhanrui Cai
arxiv.org/abs/2507.20560

@arXiv_eessSP_bot@mastoxiv.page
2025-06-04 13:43:17

This arxiv.org/abs/2503.14353 has been replaced.
initial toot: mastoxiv.page/@arXiv_ees…

@arXiv_mathDS_bot@mastoxiv.page
2025-07-08 10:26:50

A Dynamical Systems Perspective on the Analysis of Neural Networks
Dennis Chemnitz, Maximilian Engel, Christian Kuehn, Sara-Viola Kuntz
arxiv.org/abs/2507.05164

@arXiv_csIR_bot@mastoxiv.page
2025-08-07 09:14:53

Comparative Analysis of Novel NIRMAL Optimizer Against Adam and SGD with Momentum
Nirmal Gaud, Surej Mouli, Preeti Katiyar, Vaduguru Venkata Ramya
arxiv.org/abs/2508.04293

@arXiv_mathOC_bot@mastoxiv.page
2025-07-09 08:29:12

On the Inherent Privacy of Zeroth Order Projected Gradient Descent
Devansh Gupta, Meisam Razaviyayn, Vatsal Sharan
arxiv.org/abs/2507.05610

@arXiv_condmatquantgas_bot@mastoxiv.page
2025-07-08 10:08:41

Solving the Gross-Pitaevskii Equation with Quantic Tensor Trains: Ground States and Nonlinear Dynamics
Qian-Can Chen, I-Kang Liu, Jheng-Wei Li, Chia-Min Chung
arxiv.org/abs/2507.04279

@arXiv_mathOC_bot@mastoxiv.page
2025-07-08 12:13:41

Riemannian Inexact Gradient Descent for Quadratic Discrimination
Uday Talwar, Meredith K. Kupinski, Afrooz Jalilzadeh
arxiv.org/abs/2507.04670

@arXiv_mathAP_bot@mastoxiv.page
2025-06-30 08:59:00

A gradient flow that is none: Heat flow with Wentzell boundary condition
Marie Bormann, Léonard Monsaingeon, D. R. Michiel Renger, Max von Renesse
arxiv.org/abs/2506.22093

@arXiv_csIT_bot@mastoxiv.page
2025-07-08 12:31:41

Fast and Provable Hankel Tensor Completion for Multi-measurement Spectral Compressed Sensing
Jinsheng Li, Xu Zhang, Shuang Wu, Wei Cui
arxiv.org/abs/2507.04847

@arXiv_csCE_bot@mastoxiv.page
2025-07-23 07:37:52

Multi-objective Portfolio Optimization Via Gradient Descent
Christian Oliva, Pedro R. Ventura, Luis F. Lago-Fernández
arxiv.org/abs/2507.16717

@arXiv_mathFA_bot@mastoxiv.page
2025-06-26 08:37:20

On gradient descent-ascent flows in metric spaces
Noboru Isobe, Sho Shimoyama
arxiv.org/abs/2506.20258 arxiv.org/pdf/…

@arXiv_mathST_bot@mastoxiv.page
2025-06-04 07:40:34

On the Benefits of Accelerated Optimization in Robust and Private Estimation
Laurentiu Andrei Marchis, Po-Ling Loh
arxiv.org/abs/2506.03044

@arXiv_csLG_bot@mastoxiv.page
2025-06-03 08:21:56

Generalized Gradient Norm Clipping & Non-Euclidean $(L_0,L_1)$-Smoothness
Thomas Pethick, Wanyun Xie, Mete Erdogan, Kimon Antonakopoulos, Tony Silveti-Falls, Volkan Cevher
arxiv.org/abs/2506.01913

@arXiv_nlinCG_bot@mastoxiv.page
2025-08-07 07:50:43

The Glider Equation for Asymptotic Lenia
Hiroki Kojima, Ivan Yevenko, Takashi Ikegami
arxiv.org/abs/2508.04167 arxiv.org/pdf/2508.04167

@arXiv_qbioMN_bot@mastoxiv.page
2025-06-25 08:55:30

Enhancing Biosecurity in Tamper-Resistant Large Language Models With Quantum Gradient Descent
Fahmida Hai, Saif Nirzhor, Rubayat Khan, Don Roosan
arxiv.org/abs/2506.19086

@arXiv_mathOC_bot@mastoxiv.page
2025-06-04 07:46:14

Multilevel Stochastic Gradient Descent for Optimal Control Under Uncertainty
Niklas Baumgarten, David Schneiderhan
arxiv.org/abs/2506.02647

@arXiv_mathOC_bot@mastoxiv.page
2025-06-09 08:38:32

A Proximal Variable Smoothing for Minimization of Nonlinearly Composite Nonsmooth Function -- Maxmin Dispersion and MIMO Applications
Keita Kume, Isao Yamada
arxiv.org/abs/2506.05974

@arXiv_physicscompph_bot@mastoxiv.page
2025-06-03 07:47:48

Antenna Q-Factor Topology Optimization with Auxiliary Edge Resistivities
Stepan Bosak, Miloslav Capek, Jiri Matas
arxiv.org/abs/2506.00595

@arXiv_csGT_bot@mastoxiv.page
2025-07-16 08:00:21

A Parallelizable Approach for Characterizing NE in Zero-Sum Games After a Linear Number of Iterations of Gradient Descent
Taemin Kim, James P. Bailey
arxiv.org/abs/2507.11366

@arXiv_physicsappph_bot@mastoxiv.page
2025-07-04 08:02:31

Experimental Multiport-Network Parameter Estimation and Optimization for Multi-Bit RIS
Philipp del Hougne
arxiv.org/abs/2507.02168

@arXiv_statML_bot@mastoxiv.page
2025-07-03 13:16:44

Replaced article(s) found for stat.ML. arxiv.org/list/stat.ML/new
[1/1]:
- Rank-1 Matrix Completion with Gradient Descent and Small Random Initialization
Daesung Kim, Hye Won Chung

@arXiv_csCL_bot@mastoxiv.page
2025-06-23 08:16:40

Rethinking LLM Training through Information Geometry and Quantum Metrics
Riccardo Di Sipio
arxiv.org/abs/2506.15830 a…

@arXiv_mathOC_bot@mastoxiv.page
2025-07-04 08:37:41

Perturbed Gradient Descent Algorithms are Small-Disturbance Input-to-State Stable
Leilei Cui, Zhong-Ping Jiang, Eduardo D. Sontag, Richard D. Braatz
arxiv.org/abs/2507.02131

@arXiv_condmatsoft_bot@mastoxiv.page
2025-05-30 07:30:09

Emergent universal long-range structure in random-organizing systems
Satyam Anand, Guanming Zhang, Stefano Martiniani
arxiv.org/abs/2505.22933

@arXiv_mathOC_bot@mastoxiv.page
2025-06-05 07:27:16

Multilevel Bregman Proximal Gradient Descent
Yara Elshiaty, Stefania Petra
arxiv.org/abs/2506.03950 arxiv.org/pdf/250…

@arXiv_astrophEP_bot@mastoxiv.page
2025-06-25 09:24:00

Extreme Learning Machines for Exoplanet Simulations: A Faster, Lightweight Alternative to Deep Learning
Tara P. A. Tahseen, Luís F. Simões, Kai Hou Yip, Nikolaos Nikolaou, João M. Mendonça, Ingo P. Waldmann
arxiv.org/abs/2506.19679

@arXiv_mathOC_bot@mastoxiv.page
2025-06-04 13:57:44

This arxiv.org/abs/2502.03701 has been replaced.
initial toot: mastoxiv.page/@arXiv_mat…

@arXiv_statML_bot@mastoxiv.page
2025-08-01 08:25:01

A Smoothing Newton Method for Rank-one Matrix Recovery
Tyler Maunu, Gabriel Abreu
arxiv.org/abs/2507.23017 arxiv.org/pdf/2507.23017

@arXiv_mathOC_bot@mastoxiv.page
2025-08-08 08:18:02

Can SGD Handle Heavy-Tailed Noise?
Ilyas Fatkhullin, Florian Hübler, Guanghui Lan
arxiv.org/abs/2508.04860 arxiv.org/pdf/2508.04860

@arXiv_csLG_bot@mastoxiv.page
2025-07-02 14:33:44

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[4/5]:
- SPGD: Steepest Perturbed Gradient Descent Optimization
Amir M. Vahedi, Horea T. Ilies

@arXiv_mathOC_bot@mastoxiv.page
2025-06-04 13:58:47

This arxiv.org/abs/2502.16492 has been replaced.
initial toot: mastoxiv.page/@arXiv_mat…

@arXiv_eessAS_bot@mastoxiv.page
2025-07-24 09:23:40

SLASH: Self-Supervised Speech Pitch Estimation Leveraging DSP-derived Absolute Pitch
Ryo Terashima, Yuma Shirahata, Masaya Kawamura
arxiv.org/abs/2507.17208

@arXiv_statML_bot@mastoxiv.page
2025-07-30 09:06:22

From Sublinear to Linear: Fast Convergence in Deep Networks via Locally Polyak-Lojasiewicz Regions
Agnideep Aich, Ashit Baran Aich, Bruce Wade
arxiv.org/abs/2507.21429

@arXiv_eessSY_bot@mastoxiv.page
2025-06-25 09:56:10

Learning to Solve Parametric Mixed-Integer Optimal Control Problems via Differentiable Predictive Control
Ján Boldocký, Shahriar Dadras Javan, Martin Gulan, Martin Mönnigmann, Ján Drgoňa
arxiv.org/abs/2506.19646

@arXiv_csNE_bot@mastoxiv.page
2025-06-23 08:24:10

A Study of Hybrid and Evolutionary Metaheuristics for Single Hidden Layer Feedforward Neural Network Architecture
Gautam Siddharth Kashyap, Md Tabrez Nafis, Samar Wazir
arxiv.org/abs/2506.15737

@arXiv_mathST_bot@mastoxiv.page
2025-07-01 09:59:03

On Universality of Non-Separable Approximate Message Passing Algorithms
Max Lovig, Tianhao Wang, Zhou Fan
arxiv.org/abs/2506.23010

@arXiv_mathOC_bot@mastoxiv.page
2025-06-23 10:54:00

Worst-case convergence analysis of relatively inexact gradient descent on smooth convex functions
Pierre Vernimmen, François Glineur
arxiv.org/abs/2506.17145

@arXiv_mathNA_bot@mastoxiv.page
2025-06-17 11:45:14

Faithful-Newton Framework: Bridging Inner and Outer Solvers for Enhanced Optimization
Alexander Lim, Fred Roosta
arxiv.org/abs/2506.13154

@arXiv_csLG_bot@mastoxiv.page
2025-07-31 13:34:44

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[2/4]:
- Convergence Properties of Natural Gradient Descent for Minimizing KL Divergence
Adwait Datar, Nihat Ay

@arXiv_csIT_bot@mastoxiv.page
2025-06-26 08:23:10

Efficient Feedback Design for Unsourced Random Access with Integrated Sensing and Communication
Mohammad Javad Ahmadi, Mohammad Kazemi, Rafael F. Schaefer
arxiv.org/abs/2506.20262

@arXiv_statML_bot@mastoxiv.page
2025-06-27 08:49:49

Stable Minima of ReLU Neural Networks Suffer from the Curse of Dimensionality: The Neural Shattering Phenomenon
Tongtong Liang, Dan Qiao, Yu-Xiang Wang, Rahul Parhi
arxiv.org/abs/2506.20779

@arXiv_csNE_bot@mastoxiv.page
2025-07-23 07:43:52

Beyond Rate Coding: Surrogate Gradients Enable Spike Timing Learning in Spiking Neural Networks
Ziqiao Yu, Pengfei Sun, Dan F. M. Goodman
arxiv.org/abs/2507.16043

@arXiv_csIT_bot@mastoxiv.page
2025-07-24 08:24:20

Information Entropy-Based Scheduling for Communication-Efficient Decentralized Learning
Jaiprakash Nagar, Zheng Chen, Marios Kountouris, Photios A. Stavrou
arxiv.org/abs/2507.17426

@arXiv_mathOC_bot@mastoxiv.page
2025-07-29 11:25:11

Stochastic gradient with least-squares control variates
Fabio Nobile, Matteo Raviola, Nathan Schaeffer
arxiv.org/abs/2507.20981 arxiv.org/p…

@arXiv_mathOC_bot@mastoxiv.page
2025-07-21 08:59:50

Gradient descent avoids strict saddles with a simple line-search method too
Andreea-Alexandra Mușat, Nicolas Boumal
arxiv.org/abs/2507.13804

@arXiv_mathOC_bot@mastoxiv.page
2025-07-22 11:29:20

Power-Constrained Policy Gradient Methods for LQR
Ashwin Verma, Aritra Mitra, Lintao Ye, Vijay Gupta
arxiv.org/abs/2507.15806

@arXiv_statML_bot@mastoxiv.page
2025-06-23 09:27:59

Random feature approximation for general spectral methods
Mike Nguyen, Nicole M\"ucke
arxiv.org/abs/2506.16283 a…

@arXiv_csLG_bot@mastoxiv.page
2025-07-14 07:41:42

An Enhanced Privacy-preserving Federated Few-shot Learning Framework for Respiratory Disease Diagnosis
Ming Wang, Zhaoyang Duan, Dong Xue, Fangzhou Liu, Zhongheng Zhang
arxiv.org/abs/2507.08050 arxiv.org/pdf/2507.08050 arxiv.org/html/2507.08050
arXiv:2507.08050v1 Announce Type: new
Abstract: The labor-intensive nature of medical data annotation presents a significant challenge for respiratory disease diagnosis, resulting in a scarcity of high-quality labeled datasets in resource-constrained settings. Moreover, patient privacy concerns complicate the direct sharing of local medical data across institutions, and existing centralized data-driven approaches, which rely on amounts of available data, often compromise data privacy. This study proposes a federated few-shot learning framework with privacy-preserving mechanisms to address the issues of limited labeled data and privacy protection in diagnosing respiratory diseases. In particular, a meta-stochastic gradient descent algorithm is proposed to mitigate the overfitting problem that arises from insufficient data when employing traditional gradient descent methods for neural network training. Furthermore, to ensure data privacy against gradient leakage, differential privacy noise from a standard Gaussian distribution is integrated into the gradients during the training of private models with local data, thereby preventing the reconstruction of medical images. Given the impracticality of centralizing respiratory disease data dispersed across various medical institutions, a weighted average algorithm is employed to aggregate local diagnostic models from different clients, enhancing the adaptability of a model across diverse scenarios. Experimental results show that the proposed method yields compelling results with the implementation of differential privacy, while effectively diagnosing respiratory diseases using data from different structures, categories, and distributions.
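The abstract above describes two mechanisms that are easy to make concrete: adding standard-Gaussian noise to clipped gradients during local training (to guard against gradient leakage), and a weighted average over client models for aggregation. Below is a minimal NumPy sketch of those two ideas on a toy quadratic loss; all function and parameter names (`dp_sgd_step`, `clip`, `sigma`, and the client setup) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(w, grad, lr=0.1, clip=1.0, sigma=0.5):
    """One differentially private SGD step: clip the gradient to bound its
    sensitivity, then add Gaussian noise before the update (hypothetical
    parameter values, not taken from the paper)."""
    norm = np.linalg.norm(grad)
    if norm > clip:
        grad = grad * (clip / norm)
    noisy_grad = grad + sigma * rng.standard_normal(grad.shape)
    return w - lr * noisy_grad

def weighted_average(client_weights, client_sizes):
    """Aggregate local models, weighting each client by its local data size."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()
    return sum(c * w for c, w in zip(coeffs, client_weights))

# Toy round: two clients each take one private step on their own quadratic
# loss 0.5 * ||w - target||^2, then the server averages the local models.
w_global = np.zeros(3)
client_models, client_sizes = [], [80, 20]
for target in [np.ones(3), 2 * np.ones(3)]:
    grad = w_global - target  # gradient of the local quadratic loss
    client_models.append(dp_sgd_step(w_global, grad))
w_global = weighted_average(client_models, client_sizes)
print(w_global)
```

With `sigma=0` the step reduces to plain clipped SGD, which makes the privacy noise easy to isolate when experimenting with the trade-off the abstract mentions.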

@arXiv_mathNA_bot@mastoxiv.page
2025-07-18 07:56:52

Keep the beat going: Automatic drum transcription with momentum
Alisha L. Foster, Robert J. Webber
arxiv.org/abs/2507.12596

@arXiv_mathOC_bot@mastoxiv.page
2025-06-16 09:31:29

High Probability Convergence of Distributed Clipped Stochastic Gradient Descent with Heavy-tailed Noise
Yuchen Yang, Kaihong Lu, Long Wang
arxiv.org/abs/2506.11647

@arXiv_mathOC_bot@mastoxiv.page
2025-07-04 12:37:53

Replaced article(s) found for math.OC. arxiv.org/list/math.OC/new
[1/1]:
- Low-rank optimization methods based on projected-projected gradient descent that accumulate at Bo...
Guillaume Olikier, Kyle A. Gallivan, P. -A. Absil

@arXiv_mathOC_bot@mastoxiv.page
2025-05-30 10:18:13

This arxiv.org/abs/2505.08408 has been replaced.
initial toot: mastoxiv.page/@arXiv_mat…

@arXiv_statCO_bot@mastoxiv.page
2025-07-17 12:57:50

Replaced article(s) found for stat.CO. arxiv.org/list/stat.CO/new
[1/1]:
- Error bounds for particle gradient descent, and extensions of the log-Sobolev and Talagrand inequ...
Rocco Caprio, Juan Kuntz, Samuel Power, Adam M. Johansen

@arXiv_mathOC_bot@mastoxiv.page
2025-06-19 09:08:42

Primal-Dual Coordinate Descent for Nonconvex-Nonconcave Saddle Point Problems Under the Weak MVI Assumption
Iyad Walwil, Olivier Fercoq
arxiv.org/abs/2506.15597

@arXiv_mathOC_bot@mastoxiv.page
2025-06-19 09:08:02

Efficient Online Mirror Descent Stochastic Approximation for Multi-Stage Stochastic Programming
Junhui Zhang, Patrick Jaillet
arxiv.org/abs/2506.15392

@arXiv_mathOC_bot@mastoxiv.page
2025-06-02 10:14:39

This arxiv.org/abs/2401.06738 has been replaced.
initial toot: mastoxiv.page/@arXiv_mat…

@arXiv_mathOC_bot@mastoxiv.page
2025-07-29 11:16:41

Numerical Design of Optimized First-Order Algorithms
Yassine Kamri, Julien M. Hendrickx, François Glineur
arxiv.org/abs/2507.20773 arxi…

@arXiv_mathOC_bot@mastoxiv.page
2025-07-01 17:22:52

Replaced article(s) found for math.OC. arxiv.org/list/math.OC/new
[1/2]:
- Projected gradient descent accumulates at Bouligand stationary points
Guillaume Olikier, Irène Waldspurger

@arXiv_mathOC_bot@mastoxiv.page
2025-07-16 08:41:31

Non-smooth stochastic gradient descent using smoothing functions
Tommaso Giovannelli, Jingfu Tan, Luis Nunes Vicente
arxiv.org/abs/2507.10901

@arXiv_mathOC_bot@mastoxiv.page
2025-07-16 09:19:11

Deep Equilibrium models for Poisson Imaging Inverse problems via Mirror Descent
Christian Daniele, Silvia Villa, Samuel Vaiter, Luca Calatroni
arxiv.org/abs/2507.11461

@arXiv_mathOC_bot@mastoxiv.page
2025-06-12 09:17:22

Empirical and computer-aided robustness analysis of long-step and accelerated methods in smooth convex optimization
Pierre Vernimmen, François Glineur
arxiv.org/abs/2506.09730

@arXiv_mathOC_bot@mastoxiv.page
2025-07-23 09:12:02

Learning Acceleration Algorithms for Fast Parametric Convex Optimization with Certified Robustness
Rajiv Sambharya, Jinho Bok, Nikolai Matni, George Pappas
arxiv.org/abs/2507.16264

@arXiv_mathOC_bot@mastoxiv.page
2025-07-21 09:16:40

Last-Iterate Complexity of SGD for Convex and Smooth Stochastic Problems
Guillaume Garrigos, Daniel Cortild, Lucas Ketels, Juan Peypouquet
arxiv.org/abs/2507.14122

@arXiv_mathOC_bot@mastoxiv.page
2025-06-17 11:46:25

An Extended Variational Barzilai-Borwein Method
Xin Xu
arxiv.org/abs/2506.12731 arxiv.org/pdf/2506.12731