Tootfinder

Opt-in global Mastodon full text search. Join the index!

@underdarkGIS@fosstodon.org
2025-11-28 08:44:19

Woohoo, QGIS Arrow support has been merged: github.com/qgis/QGIS/pull/63749
Thanks @… & @…

@deprogrammaticaipsum@mas.to
2025-11-16 09:17:07

"For Levy and Newborn, the stakes were clear, and the title of the first chapter of the book says it all: “The Challenge is World Champion Kasparov”. Said chapter describes in detail the match between a first iteration of a chess supercomputer by IBM, the less well-known “Deep Thought”. It was a strong contender, having defeated quite a few grandmasters along the way (including the aforementioned Levy), but was no match for Kasparov in August 1989."

@Techmeme@techhub.social
2026-01-05 22:25:49

Boston Dynamics unveils a new iteration of its Atlas humanoid robot designed to work in Hyundai's plants starting in 2028, including at a factory in Georgia (Hyonhee Shin/Bloomberg)
bloomberg.com/news/articles/20

@azonenberg@ioc.exchange
2026-01-02 11:52:25

Initial experiments on a GPU-accelerated parallel CDR PLL filter.
Fundamentally, the problem is that a PLL is stateful, so you can't process any given iteration of it without knowing the previous state. I'm trying to work around that by recognizing that the impulse response of the PLL loop filter tails off to effectively zero after a while, so we can truncate it, and samples older than that point will not materially affect the output.
What you see here is the first pass of wha…

[Image: ngscopeclient displaying a low-frequency square wave labeled "DEBUG_BlockBoundaries" and a jitter waveform with large spikes every time the block boundary signal toggles]
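
The truncation idea is easy to sketch in NumPy (a toy model, not the actual GPU implementation; the function name and block size here are made up): approximate the loop filter by a finite impulse response h, and each block then needs only the len(h) - 1 samples preceding it as history, so every block can be filtered independently.

```python
import numpy as np

def block_parallel_filter(x, h, block_size):
    """Filter x with a truncated impulse response h, one block at a time.

    Because h has decayed to ~zero past its last tap, each block needs only
    the len(h)-1 samples preceding it as history (overlap-save style), so
    the loop body could be dispatched to independent GPU work groups.
    """
    tail = len(h) - 1
    padded = np.concatenate([np.zeros(tail), np.asarray(x, dtype=float)])
    out = np.empty(len(x))
    for start in range(0, len(x), block_size):    # each block is independent
        stop = min(start + block_size, len(x))
        window = padded[start : stop + tail]      # block plus its history
        out[start:stop] = np.convolve(window, h)[tail : tail + stop - start]
    return out

# Sanity check: identical to a plain sequential convolution.
h = np.exp(-np.arange(64) / 8.0)                  # decaying "loop filter"
x = np.random.randn(4096)
assert np.allclose(block_parallel_filter(x, h, 256), np.convolve(x, h)[:len(x)])
```
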
@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:37:10

S-D-RSM: Stochastic Distributed Regularized Splitting Method for Large-Scale Convex Optimization Problems
Maoran Wang, Xingju Cai, Yongxin Chen
arxiv.org/abs/2511.10133 arxiv.org/pdf/2511.10133 arxiv.org/html/2511.10133
arXiv:2511.10133v1 Announce Type: new
Abstract: This paper investigates large-scale distributed composite convex optimization problems, with motivations from a broad range of applications, including multi-agent systems, federated learning, smart grids, wireless sensor networks, and compressed sensing. Stochastic gradient descent (SGD) and its variants are commonly employed to solve such problems. However, existing algorithms often rely on vanishing step sizes, strong convexity assumptions, or entail substantial computational overhead to ensure convergence or obtain favorable complexity. To bridge the gap between theory and practice, we integrate consensus optimization and operator splitting techniques (see Problem Reformulation) to develop a novel stochastic splitting algorithm, termed the \emph{stochastic distributed regularized splitting method} (S-D-RSM). In practice, S-D-RSM performs parallel updates of proximal mappings and gradient information for only a randomly selected subset of agents at each iteration. By introducing regularization terms, it effectively mitigates consensus discrepancies among distributed nodes. In contrast to conventional stochastic methods, our theoretical analysis establishes that S-D-RSM achieves global convergence without requiring diminishing step sizes or strong convexity assumptions. Furthermore, it achieves an iteration complexity of $\mathcal{O}(1/\epsilon)$ with respect to both the objective function value and the consensus error. Numerical experiments show that S-D-RSM achieves up to 2--3$\times$ speedup compared to state-of-the-art baselines, while maintaining comparable or better accuracy. These results not only validate the algorithm's theoretical guarantees but also demonstrate its effectiveness in practical tasks such as compressed sensing and empirical risk minimization.
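
The abstract gives only the outline, but the randomized-subset idea might look roughly like this (a schematic toy with assumed proximal-gradient updates and a consensus-averaging step; the function, step sizes, and test problem are illustrative, not the authors' actual S-D-RSM):

```python
import numpy as np

def sdrsm_sketch(grads, proxs, dim, subset_size, rho=100.0, steps=300, seed=0):
    """Schematic randomized splitting loop: each iteration, only a random
    subset of agents refreshes its local variable via a gradient step on
    its smooth term plus a proximal (regularized) pull toward consensus."""
    rng = np.random.default_rng(seed)
    n = len(grads)
    z = np.zeros((n, dim))                        # local variables
    for _ in range(steps):
        xbar = z.mean(axis=0)                     # consensus estimate
        for i in rng.choice(n, size=subset_size, replace=False):
            v = xbar - (1.0 / rho) * grads[i](xbar)   # gradient step
            z[i] = proxs[i](v, 1.0 / rho)             # proximal step
    return z.mean(axis=0)

# Toy usage: a LASSO-style problem over 8 agents, l1 prox = soft threshold.
rng = np.random.default_rng(1)
A = [rng.standard_normal((20, 5)) for _ in range(8)]
b = [rng.standard_normal(20) for _ in range(8)]
grads = [lambda x, A=A[i], b=b[i]: A.T @ (A @ x - b) for i in range(8)]
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
x_hat = sdrsm_sketch(grads, [soft] * 8, dim=5, subset_size=3)
```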

@mcdanlj@social.makerforums.info
2025-12-09 04:04:54

I made a #HamRadio antenna that I really like, and wanted to share the design and process so others can do it too. It's a carefully tuned linked dipole that is light and has served me well for #POTA activations. It's now in its second iteration and so far I'm very happy with it.
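
As a generic starting point for anyone building one (a hypothetical calculator using the classic 468/f half-wave rule of thumb, not the dimensions from this particular design; real builds are trimmed to resonance in the field):

```python
# Each lower band's wire is added as a "link" beyond the previous section,
# so the dipole is cut for the highest band first.
FT_PER_MHZ = 468.0                 # rule-of-thumb half-wave constant (feet)

def linked_dipole_legs(freqs_mhz):
    """Per-leg lengths in feet plus the incremental link sections,
    highest band first; expect to trim with an antenna analyzer."""
    legs = [FT_PER_MHZ / f / 2 for f in sorted(freqs_mhz, reverse=True)]
    links = [legs[0]] + [b - a for a, b in zip(legs, legs[1:])]
    return legs, links

legs, links = linked_dipole_legs([14.285, 10.130, 7.185])   # e.g. 20/30/40 m
```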

@NFL@darktundra.xyz
2026-01-05 14:21:13

Browns fire Kevin Stefanski after five-win season; two-time Coach of the Year may land another job this cycle

cbssports.com/nfl/news/clevela

@ckent@urbanists.social
2025-11-09 17:52:19

web.archive.org/web/2005032323
This is the last known iteration of the website “isdickcheneydeadyet.com”, from March 2005. This is when it had a blog format; it continued as a dead site until the end …

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:35:50

dHPR: A Distributed Halpern Peaceman--Rachford Method for Non-smooth Distributed Optimization Problems
Zhangcheng Feng, Defeng Sun, Yancheng Yuan, Guojun Zhang
arxiv.org/abs/2511.10069 arxiv.org/pdf/2511.10069 arxiv.org/html/2511.10069
arXiv:2511.10069v1 Announce Type: new
Abstract: This paper introduces the distributed Halpern Peaceman--Rachford (dHPR) method, an efficient algorithm for solving distributed convex composite optimization problems with non-smooth objectives, which achieves a non-ergodic $O(1/k)$ iteration complexity with respect to the Karush--Kuhn--Tucker residual. By leveraging the symmetric Gauss--Seidel decomposition, the dHPR method effectively decouples the linear operators in the objective functions and consensus constraints while maintaining parallelizability and avoiding additional large proximal terms, leading to a decentralized implementation with provably fast convergence. The superior performance of dHPR is demonstrated through comprehensive numerical experiments on distributed LASSO, group LASSO, and $L_1$-regularized logistic regression problems.
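
For intuition, the generic Halpern-anchored Peaceman--Rachford template on a single machine looks like this (a toy sketch of the anchoring scheme, not the paper's distributed dHPR; the test problem and prox operators are illustrative):

```python
import numpy as np

def halpern_pr_sketch(prox_f, prox_g, z0, gamma=1.0, iters=200):
    """Halpern-anchored Peaceman--Rachford fixed-point iteration: the
    anchoring step back toward z0 with weight 1/(k+2) is what yields a
    non-ergodic O(1/k) rate on the fixed-point residual."""
    z, anchor = z0.copy(), z0.copy()
    for k in range(iters):
        x = prox_f(z, gamma)
        y = prox_g(2 * x - z, gamma)
        t = z + 2 * (y - x)                       # Peaceman--Rachford map T(z)
        lam = 1.0 / (k + 2)                       # Halpern anchoring weight
        z = lam * anchor + (1 - lam) * t
    return prox_f(z, gamma)

# Toy usage: min_x ||x||_1 + 0.5 * ||x - b||^2, whose solution is soft(b, 1).
b = np.array([3.0, -0.2, 1.5])
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
prox_quad = lambda v, t: (v + t * b) / (1 + t)
x_hat = halpern_pr_sketch(soft, prox_quad, np.zeros(3))   # -> ~[2.0, 0.0, 0.5]
```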

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:35:40

An inexact semismooth Newton-Krylov method for semilinear elliptic optimal control problem
Shiqi Chen, Xuesong Chen
arxiv.org/abs/2511.10058 arxiv.org/pdf/2511.10058 arxiv.org/html/2511.10058
arXiv:2511.10058v1 Announce Type: new
Abstract: This paper proposes an inexact semismooth Newton method for solving semilinear elliptic optimal control problems. The method incorporates the generalized minimal residual (GMRES) method, a type of Krylov subspace method, to solve the Newton equations, and utilizes a nonmonotone line search to adjust the iteration step size. The original problem is reformulated into a nonlinear equation through variational inequality principles and discretized using a second-order finite difference scheme. By leveraging slanting differentiability, the algorithm constructs semismooth Newton directions and employs the GMRES method to inexactly solve the Newton equations, significantly reducing computational overhead. A dynamic nonmonotone line search strategy is introduced to adjust step sizes adaptively, ensuring global convergence while overcoming local stagnation. Theoretical analysis demonstrates that the algorithm achieves superlinear convergence near optimal solutions when the residual control parameter $\eta_k$ approaches 0. Numerical experiments validate the method's accuracy and efficiency in solving semilinear elliptic optimal control problems, corroborating theoretical insights.
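
The overall loop is the standard inexact Newton-Krylov pattern, sketched here as a smooth-Newton toy (not the paper's semismooth method; assumes SciPy's gmres with the rtol keyword from SciPy >= 1.12, and the Grippo-style nonmonotone acceptance test is one common choice):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

def newton_gmres_sketch(F, J, x0, eta=1e-2, mem=5, max_iter=30):
    """Inexact Newton: solve J(x) d = -F(x) only to relative accuracy eta
    with GMRES, then accept the step via nonmonotone backtracking against
    the max of the last `mem` residual norms."""
    x = x0.copy()
    hist = [np.linalg.norm(F(x))]
    for _ in range(max_iter):
        d, _ = gmres(J(x), -F(x), rtol=eta)       # inexact Newton direction
        t, ref = 1.0, max(hist[-mem:])            # nonmonotone reference value
        while np.linalg.norm(F(x + t * d)) > (1 - 1e-4 * t) * ref and t > 1e-8:
            t *= 0.5                              # backtrack
        x = x + t * d
        hist.append(np.linalg.norm(F(x)))
        if hist[-1] < 1e-10:
            break
    return x

# Toy usage: 1D semilinear equation -u'' + u^3 = 1 on a uniform interior grid.
n, h = 100, 1.0 / 101
L = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
F = lambda u: L @ u + u**3 - 1.0
J = lambda u: L + diags(3 * u**2)
u = newton_gmres_sketch(F, J, np.zeros(n))
```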

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:50:00

(Adaptive) Scaled gradient methods beyond locally Hölder smoothness: Lyapunov analysis, convergence rate and complexity
Susan Ghaderi, Morteza Rahimi, Yves Moreau, Masoud Ahookhosh
arxiv.org/abs/2511.10425 arxiv.org/pdf/2511.10425 arxiv.org/html/2511.10425
arXiv:2511.10425v1 Announce Type: new
Abstract: This paper addresses the unconstrained minimization of smooth convex functions whose gradients are locally Hölder continuous. We analyze the Scaled Gradient Algorithm (SGA) under these local smoothness assumptions, proving its global convergence and iteration complexity. Furthermore, under local strong convexity and the Kurdyka-Łojasiewicz (KL) inequality, we establish linear convergence rates and provide explicit complexity bounds. In particular, we show that when the gradient is locally Lipschitz continuous, SGA attains linear convergence for any KL exponent. We then introduce and analyze an adaptive variant of SGA (AdaSGA), which automatically adjusts the scaling and step-size parameters. For this method, we show global convergence and derive local linear rates under strong convexity.
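
As a point of reference, an adaptive step-size gradient loop with the scaling matrix taken as the identity can be sketched like this (this is the generic Malitsky--Mishchenko adaptive rule, not the paper's AdaSGA; all parameters are illustrative):

```python
import numpy as np

def adaptive_gd_sketch(grad, x0, lam0=1e-6, iters=1000, tol=1e-10):
    """Gradient descent whose step size is learned on the fly from the
    observed local gradient geometry; no global Lipschitz constant and
    no line search are needed (scaling matrix omitted, i.e. identity)."""
    x_prev, g_prev = x0.copy(), grad(x0)
    x, lam, theta = x_prev - lam0 * g_prev, lam0, np.inf   # bootstrap step
    for _ in range(iters):
        g = grad(x)
        den = 2.0 * np.linalg.norm(g - g_prev)
        local = np.linalg.norm(x - x_prev) / den if den > 0 else np.inf
        lam_new = min(np.sqrt(1 + theta) * lam, local)     # adaptive rule
        theta = lam_new / lam
        x_prev, g_prev, lam = x, g, lam_new
        x = x - lam * g
        if np.linalg.norm(g) < tol:
            break
    return x

# Toy usage: ill-conditioned quadratic, minimizer at the all-ones vector.
H = np.diag([1.0, 10.0, 100.0])
x_star = adaptive_gd_sketch(lambda x: H @ (x - 1.0), np.zeros(3))
```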

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:58:00

Measuring dissimilarity between convex cones by means of max-min angles
Welington de Oliveira, Valentina Sessa, David Sossa
arxiv.org/abs/2511.10483 arxiv.org/pdf/2511.10483 arxiv.org/html/2511.10483
arXiv:2511.10483v1 Announce Type: new
Abstract: This work introduces a novel dissimilarity measure between two convex cones, based on the max-min angle between them. We demonstrate that this measure is closely related to the Pompeiu-Hausdorff distance, a well-established metric for comparing compact sets. Furthermore, we examine cone configurations where the measure admits simplified or analytic forms. For the specific case of polyhedral cones, a nonconvex cutting-plane method is deployed to compute, at least approximately, the measure between them. Our approach builds on a tailored version of Kelley's cutting-plane algorithm, which involves solving a challenging master program per iteration. When this master program is solved locally, our method yields an angle that satisfies certain necessary optimality conditions of the underlying nonconvex optimization problem that defines the dissimilarity measure between the cones. As an application of the proposed mathematical and algorithmic framework, we address the image-set classification task under limited data conditions, a task that falls within the scope of the \emph{Few-Shot Learning} paradigm. In this context, image sets belonging to the same class are modeled as polyhedral cones, and our dissimilarity measure proves useful for understanding whether two image sets belong to the same class.
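
The projection identity behind such angle computations is easy to demonstrate numerically (a crude one-sided Monte Carlo stand-in for the paper's cutting-plane method; generator matrices and sample counts are arbitrary):

```python
import numpy as np
from scipy.optimize import nnls

def angle_to_cone(u, G):
    """Angle from unit vector u to cone(G) (columns of G as generators):
    project via NNLS, then use cos(angle) = ||P_cone(u)|| for unit u,
    which follows from <u, p> = ||p||^2 at the projection p."""
    c, _ = nnls(G, u)
    return np.arccos(np.clip(np.linalg.norm(G @ c), 0.0, 1.0))

def maxmin_angle_sketch(G1, G2, samples=2000, seed=0):
    """Keep the worst (largest) angle from sampled rays of cone(G1) to
    cone(G2); a one-sided estimate, and the sampling is not uniform."""
    rng = np.random.default_rng(seed)
    worst = 0.0
    for _ in range(samples):
        u = G1 @ rng.random(G1.shape[1])           # random ray in cone(G1)
        worst = max(worst, angle_to_cone(u / np.linalg.norm(u), G2))
    return worst

# Two slightly different quadrant-like cones in R^3: small max-min angle.
G1 = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
G2 = np.array([[1.0, 0.0], [0.1, 1.0], [0.1, 0.0]])
theta = maxmin_angle_sketch(G1, G2)
```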