Tootfinder

Opt-in global Mastodon full text search. Join the index!

@arXiv_mathNA_bot@mastoxiv.page
2025-10-01 09:23:08

Flexible fixed-point iteration and its applications for nonsymmetric algebraic Riccati equations
Zhen-Chen Guo, Xin Liang
arxiv.org/abs/2509.25942

@arXiv_mathOC_bot@mastoxiv.page
2025-10-01 08:56:27

A Block-Activated Decomposition Algorithm for Multi-Stage Stochastic Variational Inequalities
Minh N. Bùi
arxiv.org/abs/2509.26198 arxiv.…

@underdarkGIS@fosstodon.org
2025-11-28 08:44:19

Woohoo, QGIS Arrow support has been merged: github.com/qgis/QGIS/pull/63749
Thanks @… & @…

@arXiv_mathPR_bot@mastoxiv.page
2025-10-01 10:22:27

Locally Lipschitz Path Dependent FBSDEs with Unbounded Terminal Conditions in Brownian and Lévy Settings
Hannah Geiss (JYU), Céline Labart (LAMA), Adrien Richou (IMB), Alexander Steinicke
arxiv.org/abs/2509.26423

@arXiv_mathOC_bot@mastoxiv.page
2025-10-01 10:07:58

Global Optimization Algorithm for Mixed-Integer Nonlinear Programs with Trigonometric Functions
Christopher Montez, Sujeevraja Sanjeevi, Kaarthik Sundar
arxiv.org/abs/2509.26516

@UP8@mastodon.social
2025-10-20 21:52:36

⚡ MXene current collectors could reduce size and improve recyclability of Li-ion batteries
techxplore.com/news/2025-10-mx

@arXiv_csLG_bot@mastoxiv.page
2025-10-08 10:26:09

Correlating Cross-Iteration Noise for DP-SGD using Model Curvature
Xin Gu, Yingtai Xiao, Guanlin He, Jiamu Bai, Daniel Kifer, Kiwan Maeng
arxiv.org/abs/2510.05416

@seeingwithsound@mas.to
2025-10-19 10:09:25

To ChatGPT: List the implant dates of human recipients of a Neuralink brain implant chatgpt.com/share/68f4b806-b23 "For a company like Neuralink, whose entire value proposition hinges on rapid iteration and scale, a pause of six-plus w…

@deprogrammaticaipsum@mas.to
2025-11-16 09:17:07

"For Levy and Newborn, the stakes were clear, and the title of the first chapter of the book says it all: “The Challenge is World Champion Kasparov”. Said chapter describes in detail the match between a first iteration of a chess supercomputer by IBM, the less well-known “Deep Thought”. It was a strong contender, having defeated quite a few grandmasters along the way (including the aforementioned Levy), but was no match for Kasparov in August 1989."

@arXiv_mathNA_bot@mastoxiv.page
2025-10-14 10:23:28

An efficient iteration method to reconstruct the drift term from the final measurement
Dakang Cen, Wenlong Zhang, Zhidong Zhang
arxiv.org/abs/2510.10940

@kubikpixel@chaos.social
2025-10-03 06:11:00

Digital ID – The New Chains of Capitalist Surveillance
[…] From passports to colonial passbooks, from welfare cards to border regimes, the apparatus of identification has always been tied to domination. Digital ID is simply the latest iteration of this long history, but with a scale and sophistication that makes its dangers even more profound. […]
👷

@grifferz@social.bitfolk.com
2025-10-09 13:04:12

Love it when someone finally responds to me after literally months of silence, I ask a question and they respond correcting me and saying they've taken action now anyway "to avoid further iteration and delay".

@mcdanlj@social.makerforums.info
2025-12-09 04:04:54

I made a #HamRadio antenna that I really like, and wanted to share the design and process so others can do it too. It's a carefully-tuned linked dipole that is light and has served me well for #POTA activations. It's now in its second iteration and so far I'm very happy with it.

@gedankenstuecke@scholar.social
2025-10-02 16:22:17

«at this point, AT Proto has become essentially a sort of ideological vaporware; a way for Jay Graber et al to run a social media platform while claiming they don't run a social media platform. This is, of course, just another iteration of the Silicon Valley monoproduct: power without accountability.»
Delusions of a Protocol | Azhdarchid
#bluesky

@arXiv_csHC_bot@mastoxiv.page
2025-10-13 09:36:10

Promptimizer: User-Led Prompt Optimization for Personal Content Classification
Leijie Wang, Kathryn Yurechko, Amy X. Zhang
arxiv.org/abs/2510.09009

@ckent@urbanists.social
2025-11-09 17:52:19

web.archive.org/web/2005032323
This is the last known iteration of the website “isdickcheneydeadyet.com”, from March 2005. This is when it had a blog format, and continued as a dead site until the end …

@arXiv_quantph_bot@mastoxiv.page
2025-10-10 11:17:39

An Improved Quantum Algorithm for 3-Tuple Lattice Sieving
Lynn Engelberts, Yanlin Chen, Amin Shiraz Gilani, Maya-Iggy van Hoof, Stacey Jeffery, Ronald de Wolf
arxiv.org/abs/2510.08473

@arXiv_csIT_bot@mastoxiv.page
2025-10-13 08:07:00

Low Complexity Detector for XL-MIMO Uplink: A Cross Splitting Based Information Geometry Approach
Wenjun Zhang, An-An Lu, Xiqi Gao
arxiv.org/abs/2510.09039

@arXiv_statML_bot@mastoxiv.page
2025-10-08 09:35:29

Implicit Updates for Average-Reward Temporal Difference Learning
Hwanwoo Kim, Dongkyu Derek Cho, Eric Laber
arxiv.org/abs/2510.06149 arxiv.…

@arXiv_csLG_bot@mastoxiv.page
2025-10-15 08:21:22

GAR: Generative Adversarial Reinforcement Learning for Formal Theorem Proving
Ruida Wang, Jiarui Yao, Rui Pan, Shizhe Diao, Tong Zhang
arxiv.org/abs/2510.11769

@arXiv_csAR_bot@mastoxiv.page
2025-10-09 07:51:00

Cocoon: A System Architecture for Differentially Private Training with Correlated Noises
Donghwan Kim, Xin Gu, Jinho Baek, Timothy Lo, Younghoon Min, Kwangsik Shin, Jongryool Kim, Jongse Park, Kiwan Maeng
arxiv.org/abs/2510.07304

@arXiv_mathOC_bot@mastoxiv.page
2025-10-07 10:41:52

A Frank-Wolfe Algorithm for Strongly Monotone Variational Inequalities
Reza Rahimi Baghbadorani, Peyman Mohajerin Esfahani, Sergio Grammatico
arxiv.org/abs/2510.03842

@arXiv_mathDS_bot@mastoxiv.page
2025-10-10 09:27:39

Outer length billiards on a large scale
Peter Albers, Lael Edwards-Costa, Serge Tabachnikov
arxiv.org/abs/2510.08370 arxiv.org/pdf/2510.083…

@arXiv_astrophHE_bot@mastoxiv.page
2025-10-07 09:04:32

Harnessing the XMM-Newton data: X-ray spectral modelling of 4XMM-DR11 detections and 4XMM-DR11s sources
A. Viitanen, G. Mountrichas, H. Stiele, F. J. Carrera, A. Ruiz, J. Ballet, A. Akylas, A. Corral, M. Freyberg, A. Georgakakis, I. Georgantopoulos, S. Mateos, C. Motch, A. Nebot, H. Tranin, N. Webb
arxiv.org/abs/2510.03409

@arXiv_csDS_bot@mastoxiv.page
2025-10-03 08:02:51

Improved $\ell_{p}$ Regression via Iteratively Reweighted Least Squares
Alina Ene, Ta Duy Nguyen, Adrian Vladu
arxiv.org/abs/2510.01729 arx…

@arXiv_eessSY_bot@mastoxiv.page
2025-10-03 09:00:41

Off-Policy Reinforcement Learning with Anytime Safety Guarantees via Robust Safe Gradient Flow
Pol Mestres, Arnau Marzabal, Jorge Cortés
arxiv.org/abs/2510.01492

@arXiv_mathNA_bot@mastoxiv.page
2025-10-08 09:03:49

A Warm-basis Method for Bridging Learning and Iteration: a Case Study in Fluorescence Molecular Tomography
Ruchi Guo, Jiahua Jiang, Bangti Jin, Wuwei Ren, Jianru Zhang
arxiv.org/abs/2510.05926

@arXiv_csLG_bot@mastoxiv.page
2025-10-13 10:42:20

The Potential of Second-Order Optimization for LLMs: A Study with Full Gauss-Newton
Natalie Abreu, Nikhil Vyas, Sham Kakade, Depen Morwani
arxiv.org/abs/2510.09378

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:37:10

S-D-RSM: Stochastic Distributed Regularized Splitting Method for Large-Scale Convex Optimization Problems
Maoran Wang, Xingju Cai, Yongxin Chen
arxiv.org/abs/2511.10133 arxiv.org/pdf/2511.10133 arxiv.org/html/2511.10133
arXiv:2511.10133v1 Announce Type: new
Abstract: This paper investigates large-scale distributed composite convex optimization problems, motivated by a broad range of applications including multi-agent systems, federated learning, smart grids, wireless sensor networks, and compressed sensing. Stochastic gradient descent (SGD) and its variants are commonly employed to solve such problems. However, existing algorithms often rely on vanishing step sizes, strong convexity assumptions, or entail substantial computational overhead to ensure convergence or obtain favorable complexity. To bridge the gap between theory and practice, we integrate consensus optimization and operator splitting techniques (see Problem Reformulation) to develop a novel stochastic splitting algorithm, termed the \emph{stochastic distributed regularized splitting method} (S-D-RSM). In practice, S-D-RSM performs parallel updates of proximal mappings and gradient information for only a randomly selected subset of agents at each iteration. By introducing regularization terms, it effectively mitigates consensus discrepancies among distributed nodes. In contrast to conventional stochastic methods, our theoretical analysis establishes that S-D-RSM achieves global convergence without requiring diminishing step sizes or strong convexity assumptions. Furthermore, it achieves an iteration complexity of $\mathcal{O}(1/\epsilon)$ with respect to both the objective function value and the consensus error. Numerical experiments show that S-D-RSM achieves up to 2--3$\times$ speedup compared to state-of-the-art baselines, while maintaining comparable or better accuracy. These results not only validate the algorithm's theoretical guarantees but also demonstrate its effectiveness in practical tasks such as compressed sensing and empirical risk minimization.
toXiv_bot_toot
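
The abstract's core mechanics (random agent subsets, proximal updates, and a regularization pull toward consensus) can be illustrated with a toy loop. A minimal sketch, assuming each agent holds a smooth local loss plus a shared l1 regularizer; the names and the exact update rule are illustrative, not the paper's S-D-RSM:

```python
import numpy as np

def prox_l1(v, t):
    """Proximal map of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sdrsm_sketch(grads, x0, rho=1.0, lam=0.1, step=0.5,
                 batch=2, iters=200, seed=0):
    """Toy stochastic regularized-splitting loop in the spirit of the
    abstract: each iteration updates only a random subset of agents,
    combining a gradient step on the smooth local loss with an l1
    proximal step and a regularization pull toward the consensus
    average.  grads[i](x) returns agent i's local gradient."""
    rng = np.random.default_rng(seed)
    n_agents = len(grads)
    X = np.tile(x0, (n_agents, 1))             # one local copy per agent
    for _ in range(iters):
        xbar = X.mean(axis=0)                  # consensus reference point
        for i in rng.choice(n_agents, size=batch, replace=False):
            y = X[i] - step * (grads[i](X[i]) + rho * (X[i] - xbar))
            X[i] = prox_l1(y, step * lam)      # proximal update
    return X.mean(axis=0)
```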

@arXiv_eessSY_bot@mastoxiv.page
2025-10-02 08:48:31

MM-LMPC: Multi-Modal Learning Model Predictive Control via Bandit-Based Mode Selection
Wataru Hashimoto, Kazumune Hashimoto
arxiv.org/abs/2510.00410

@arXiv_csLG_bot@mastoxiv.page
2025-10-03 11:00:21

Flatness-Aware Stochastic Gradient Langevin Dynamics
Stefano Bruno, Youngsik Hwang, Jaehyeon An, Sotirios Sabanis, Dong-Young Lim
arxiv.org/abs/2510.02174

@arXiv_mathOC_bot@mastoxiv.page
2025-10-13 08:39:40

A generalized alternating NGMRES method for PDE-constrained optimization problems governed by transport equations
Yunhui He, Andreas Mang
arxiv.org/abs/2510.08782

@arXiv_mathNA_bot@mastoxiv.page
2025-10-10 09:15:49

Smoother-type a posteriori error estimates for finite element methods
Yuwen Li, Han Shui
arxiv.org/abs/2510.07677 arxiv.org/pdf/2510.07677

@arXiv_csLG_bot@mastoxiv.page
2025-10-02 11:11:41

On the Benefits of Weight Normalization for Overparameterized Matrix Sensing
Yudong Wei, Liang Zhang, Bingcong Li, Niao He
arxiv.org/abs/2510.01175

@arXiv_mathOC_bot@mastoxiv.page
2025-10-10 08:39:59

Accelerated Price Adjustment for Fisher Markets with Exact Recovery of Competitive Equilibrium
He Chen, Chonghe Jiang, Anthony Man-Cho So
arxiv.org/abs/2510.07759

@arXiv_mathOC_bot@mastoxiv.page
2025-10-10 09:30:49

On the Complexity of Lower-Order Implementations of Higher-Order Methods
Nikita Doikov, Geovani Nunes Grapiglia
arxiv.org/abs/2510.07992 ar…

@arXiv_mathNA_bot@mastoxiv.page
2025-10-03 09:28:51

Mixed-precision iterative refinement for low-rank Lyapunov equations
Peter Benner, Xiaobo Liu
arxiv.org/abs/2510.02126 arxiv.org/pdf/2510.0…

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:35:50

dHPR: A Distributed Halpern Peaceman--Rachford Method for Non-smooth Distributed Optimization Problems
Zhangcheng Feng, Defeng Sun, Yancheng Yuan, Guojun Zhang
arxiv.org/abs/2511.10069 arxiv.org/pdf/2511.10069 arxiv.org/html/2511.10069
arXiv:2511.10069v1 Announce Type: new
Abstract: This paper introduces the distributed Halpern Peaceman--Rachford (dHPR) method, an efficient algorithm for solving distributed convex composite optimization problems with non-smooth objectives, which achieves a non-ergodic $O(1/k)$ iteration complexity with respect to the Karush--Kuhn--Tucker residual. By leveraging the symmetric Gauss--Seidel decomposition, the dHPR effectively decouples the linear operators in the objective functions and consensus constraints while maintaining parallelizability and avoiding additional large proximal terms, leading to a decentralized implementation with provably fast convergence. The superior performance of dHPR is demonstrated through comprehensive numerical experiments on distributed LASSO, group LASSO, and $L_1$-regularized logistic regression problems.
toXiv_bot_toot
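
The Halpern anchoring behind the non-ergodic O(1/k) rate is easy to show in miniature. A single-machine sketch for min f(x)+g(x) given the two proximal maps; the paper's distributed, symmetric Gauss--Seidel-decomposed version is substantially more involved:

```python
import numpy as np

def halpern_pr_sketch(prox_f, prox_g, z0, iters=500):
    """Single-machine sketch of a Halpern-anchored Peaceman--Rachford
    iteration for min f(x) + g(x).  T is the composition of the two
    reflected resolvents, and the Halpern anchor beta_k = 1/(k+2) is
    the ingredient behind the non-ergodic O(1/k) residual rate."""
    z, anchor = z0.copy(), z0.copy()
    for k in range(iters):
        x = prox_f(z)                          # resolvent of f
        rf = 2.0 * x - z                       # reflection through f
        rg = 2.0 * prox_g(rf) - rf             # reflection through g
        beta = 1.0 / (k + 2)
        z = beta * anchor + (1.0 - beta) * rg  # Halpern averaging
    return prox_f(z)                           # primal iterate
```

For a LASSO-type instance, prox_f would be the proximal map of the least-squares term and prox_g the soft-thresholding map of the l1 term.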

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:35:40

An inexact semismooth Newton-Krylov method for semilinear elliptic optimal control problem
Shiqi Chen, Xuesong Chen
arxiv.org/abs/2511.10058 arxiv.org/pdf/2511.10058 arxiv.org/html/2511.10058
arXiv:2511.10058v1 Announce Type: new
Abstract: This paper proposes an inexact semismooth Newton method for solving semilinear elliptic optimal control problems. This method incorporates the generalized minimal residual (GMRES) method, a type of Krylov subspace method, to solve the Newton equations and utilizes a nonmonotone line search to adjust the iteration step size. The original problem is reformulated into a nonlinear equation through variational inequality principles and discretized using a second-order finite difference scheme. By leveraging slanting differentiability, the algorithm constructs semismooth Newton directions and employs the GMRES method to inexactly solve the Newton equations, significantly reducing computational overhead. A dynamic nonmonotone line search strategy is introduced to adjust stepsizes adaptively, ensuring global convergence while overcoming local stagnation. Theoretical analysis demonstrates that the algorithm achieves superlinear convergence near optimal solutions as the residual control parameter $\eta_k$ approaches 0. Numerical experiments validate the method's accuracy and efficiency in solving semilinear elliptic optimal control problems, corroborating the theoretical insights.
toXiv_bot_toot
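
The inexact-Newton-plus-GMRES loop the abstract describes has a compact generic form. A sketch assuming SciPy's gmres (the rtol forcing-term keyword requires SciPy >= 1.12) and a plain backtracking line search in place of the paper's nonmonotone one; F and Jv are user-supplied:

```python
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

def inexact_newton_gmres(F, Jv, x0, tol=1e-8, max_iter=50):
    """Generic inexact (semismooth) Newton sketch: solve F(x) = 0 by
    approximately solving J(x_k) d = -F(x_k) with GMRES, where
    Jv(x, v) applies a (generalized) Jacobian-vector product.  The
    forcing term eta_k shrinks with the residual, which is the
    mechanism behind superlinear local convergence."""
    x = x0.copy()
    for _ in range(max_iter):
        r = F(x)
        rnorm = np.linalg.norm(r)
        if rnorm < tol:
            break
        eta_k = min(0.5, np.sqrt(rnorm))       # residual control parameter
        J = LinearOperator((x.size, x.size), matvec=lambda v: Jv(x, v))
        d, _ = gmres(J, -r, rtol=eta_k)        # inexact Newton equation solve
        t, c = 1.0, 1e-4
        while np.linalg.norm(F(x + t * d)) > (1 - c * t) * rnorm and t > 1e-8:
            t *= 0.5                           # backtracking on ||F||
        x = x + t * d
    return x
```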

@arXiv_mathNA_bot@mastoxiv.page
2025-10-02 08:48:21

A Computationally Efficient Finite Element Method for Shape Reconstruction of Inverse Conductivity Problems
Lefu Cai, Zhixin Liu, Minghui Song, Xianchao Wang
arxiv.org/abs/2510.00597

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:50:00

(Adaptive) Scaled gradient methods beyond locally Hölder smoothness: Lyapunov analysis, convergence rate and complexity
Susan Ghaderi, Morteza Rahimi, Yves Moreau, Masoud Ahookhosh
arxiv.org/abs/2511.10425 arxiv.org/pdf/2511.10425 arxiv.org/html/2511.10425
arXiv:2511.10425v1 Announce Type: new
Abstract: This paper addresses the unconstrained minimization of smooth convex functions whose gradients are locally Hölder continuous. In this setting, we analyze the Scaled Gradient Algorithm (SGA) under local smoothness assumptions, proving its global convergence and iteration complexity. Furthermore, under local strong convexity and the Kurdyka-Łojasiewicz (KL) inequality, we establish linear convergence rates and provide explicit complexity bounds. In particular, we show that when the gradient is locally Lipschitz continuous, SGA attains linear convergence for any KL exponent. We then introduce and analyze an adaptive variant of SGA (AdaSGA), which automatically adjusts the scaling and step-size parameters. For this method, we show global convergence and derive local linear rates under strong convexity.
toXiv_bot_toot
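
The shape of an SGA-type update is simple to sketch. A toy scaled-gradient loop with Armijo backtracking, assuming a user-supplied diagonal scaling; it illustrates only the update x+ = x - t * D(x)^{-1} grad f(x) and does not reproduce AdaSGA's adaptive guarantees:

```python
import numpy as np

def scaled_gradient_sketch(f, grad, x0, diag_scale, iters=500, tol=1e-8):
    """Toy scaled-gradient loop: step along D(x)^{-1} grad f(x) with
    Armijo backtracking.  diag_scale(x) returns the diagonal of a
    positive scaling matrix D(x); all names are illustrative."""
    x = x0.copy()
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = g / diag_scale(x)                  # scaled direction D^{-1} g
        t, fx, slope = 1.0, f(x), g @ d
        while f(x - t * d) > fx - 1e-4 * t * slope and t > 1e-12:
            t *= 0.5                           # Armijo backtracking
        x = x - t * d
    return x
```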

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:58:00

Measuring dissimilarity between convex cones by means of max-min angles
Welington de Oliveira, Valentina Sessa, David Sossa
arxiv.org/abs/2511.10483 arxiv.org/pdf/2511.10483 arxiv.org/html/2511.10483
arXiv:2511.10483v1 Announce Type: new
Abstract: This work introduces a novel dissimilarity measure between two convex cones, based on the max-min angle between them. We demonstrate that this measure is closely related to the Pompeiu-Hausdorff distance, a well-established metric for comparing compact sets. Furthermore, we examine cone configurations where the measure admits simplified or analytic forms. For the specific case of polyhedral cones, a nonconvex cutting-plane method is deployed to compute, at least approximately, the measure between them. Our approach builds on a tailored version of Kelley's cutting-plane algorithm, which involves solving a challenging master program per iteration. When this master program is solved locally, our method yields an angle that satisfies certain necessary optimality conditions of the underlying nonconvex optimization problem yielding the dissimilarity measure between the cones. As an application of the proposed mathematical and algorithmic framework, we address the image-set classification task under limited data conditions, a task that falls within the scope of the \emph{Few-Shot Learning} paradigm. In this context, image sets belonging to the same class are modeled as polyhedral cones, and our dissimilarity measure proves useful for understanding whether two image sets belong to the same class.
toXiv_bot_toot
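
The min-angle half of the measure reduces to a cone projection, which for polyhedral cones is a nonnegative least-squares problem. A crude sampling stand-in for the paper's cutting-plane method, assuming cones given by generator matrices; it yields only a lower bound on the max-min angle:

```python
import numpy as np
from scipy.optimize import nnls

def min_angle_to_cone(u, G):
    """Smallest angle between unit vector u and the polyhedral cone
    cone(G) = {G w : w >= 0}, via the projection of u onto the cone
    (a nonnegative least-squares problem)."""
    w, _ = nnls(G, u)
    p = G @ w
    if np.linalg.norm(p) < 1e-12:              # u sees only the origin
        return np.pi / 2
    c = np.clip(u @ p / np.linalg.norm(p), -1.0, 1.0)
    return np.arccos(c)

def maxmin_angle_sketch(G1, G2, samples=2000, seed=0):
    """Approximate max over unit directions u in cone(G1) of the min
    angle from u to cone(G2), by sampling random nonnegative
    combinations of G1's generators; a lower bound on the true value."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(samples):
        u = G1 @ rng.random(G1.shape[1])
        u /= np.linalg.norm(u)
        best = max(best, min_angle_to_cone(u, G2))
    return best
```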

@arXiv_mathOC_bot@mastoxiv.page
2025-10-07 11:17:22

A Time-certified Predictor-corrector IPM Algorithm for Box-QP
Liang Wu, Yunhong Che, Richard D. Braatz, Jan Drgona
arxiv.org/abs/2510.04467

@arXiv_mathOC_bot@mastoxiv.page
2025-10-03 08:57:41

Exponential convergence of a distributed divide-and-conquer algorithm for constrained convex optimization on networks
Nazar Emirov, Guohui Song, Qiyu Sun
arxiv.org/abs/2510.01511

@arXiv_mathOC_bot@mastoxiv.page
2025-10-03 08:04:31

DeMuon: A Decentralized Muon for Matrix Optimization over Graphs
Chuan He, Shuyi Ren, Jingwei Mao, Erik G. Larsson
arxiv.org/abs/2510.01377