PLRV-O: Advancing Differentially Private Deep Learning via Privacy Loss Random Variable Optimization
Qin Yang, Nicholas Stout, Meisam Mohammady, Han Wang, Ayesha Samreen, Christopher J. Quinn, Yan Yan, Ashish Kundu, Yuan Hong
https://arxiv.org/abs/2509.06264
Comparative Analysis of Novel NIRMAL Optimizer Against Adam and SGD with Momentum
Nirmal Gaud, Surej Mouli, Preeti Katiyar, Vaduguru Venkata Ramya
https://arxiv.org/abs/2508.04293
Online Quantum State Tomography via Stochastic Gradient Descent
Jian-Feng Cai, Yuling Jiao, Yinan Li, Xiliang Lu, Jerry Zhijian Yang, Juntao You
https://arxiv.org/abs/2507.07601
Revisit Stochastic Gradient Descent for Strongly Convex Objectives: Tight Uniform-in-Time Bounds
Kang Chen, Yasong Feng, Tianyu Wang
https://arxiv.org/abs/2508.20823
Information Entropy-Based Scheduling for Communication-Efficient Decentralized Learning
Jaiprakash Nagar, Zheng Chen, Marios Kountouris, Photios A. Stavrou
https://arxiv.org/abs/2507.17426
A Study of Hybrid and Evolutionary Metaheuristics for Single Hidden Layer Feedforward Neural Network Architecture
Gautam Siddharth Kashyap, Md Tabrez Nafis, Samar Wazir
https://arxiv.org/abs/2506.15737
Non-Asymptotic Analysis of Online Local Private Learning with SGD
Enze Shi, Jinhan Xie, Bei Jiang, Linglong Kong, Xuming He
https://arxiv.org/abs/2507.07041
Optimal Condition for Initialization Variance in Deep Neural Networks: An SGD Dynamics Perspective
Hiroshi Horii (SU), Sothea Has (KHM)
https://arxiv.org/abs/2508.12834
Last-Iterate Complexity of SGD for Convex and Smooth Stochastic Problems
Guillaume Garrigos, Daniel Cortild, Lucas Ketels, Juan Peypouquet
https://arxiv.org/abs/2507.14122
Stochastic Gradient-Descent Calibration of Pyragas Delayed-Feedback Control for Chaos Suppression in the Sprott Circuit
Adib Kabir, Onil Morshed, Oishi Kabir
https://arxiv.org/abs/2506.06639