Tootfinder

Opt-in global Mastodon full text search. Join the index!

@mariyadelano@hachyderm.io
2025-11-14 21:05:53

So I grew up next to #Chernobyl and this is, well, TERRIFYING.
A story for y’all: I’m from a city called Zhytomyr, 2 hours west of Kyiv in the North of #Ukraine. We were downwind of the Chernobyl #nuclear power plant when the 1986 disaster happened.
I wasn’t born for another 12 years, but my childhood was filled with stories and the aftermath of it all. Things like:
- My grandmother worked as a head doctor in a hospital and rehabilitation facility exclusively for children of Chernobyl victims to treat the extremely high prevalence of Tuberculosis and other severe health complications. (To specify: these were SECOND GENERATION of exposure).
- A lot of the kids in that facility were orphans, because their parents died young from health problems.
- My uncle’s wife was born in Pripyat. She was 1 year old when the disaster happened. Her parents were told to evacuate while given no information about what happened. They had to pack up their things and rush out to an unfamiliar city with their baby, never to see the rest of their belongings, apartment, or hometown again.
- When I was a kid, it became so common to see weirdly mutated animals and insects that even 2-3 year olds would make jokes about “Chernobyl mosquitos” and I wouldn’t even flinch seeing occasional giant bugs, dark frogs, weird-looking dogs.
- We’d frequently hear of nearby farms having issues with their animals being born too mutated to survive or random outbreaks from contaminated water / food. Crops would randomly fail. People would get poisoned on a regular basis. This all got less common as I grew up.
- My mother still remembers being a little girl, 10 years old, and looking outside from their balcony at the clouds blowing over from Chernobyl that day. People were told to not go outside and to shut all the windows, but not given an explanation as to why. My mother swears that the rain looked different. They weren’t able to go and buy more food for the kitchen for multiple days.
Anyway - nuclear safety isn’t a joke. I don’t understand how this level of carelessness can happen after Chernobyl and Fukushima.

404media.co/power-companies-ar

@teledyn@mstdn.ca
2025-10-14 19:04:57

A proud achievement of my half-century IT career happened in 1996 consulting to the #UNDP toward the first published edition of the Humanity Development Library.
It seemed easy enough: "It can be fairly estimated that 1/3, or about 20 million pages of UN, and as much University and NGO material are very useful. Those 20 million pages useful UN publications probably contain about 50% of solutions for major World problems. This information must be released in digital format for non-profit redistribution in all countries."
also portable and accessible to all platforms, everywhere.
Happily, not only did the project live on, but thanks to @… our once-intractable problem of global delivery is now globally solved!
So, whether or not this is timely, I don't know, but should you need to suddenly rebuild some semblance of civilization from scratch…
Humanity Development Library 2.0 CD-ROM 1998 : #HumanityLibrariesProject : #InternetArchive
archive.org/details/humanity-d

@arXiv_mathNT_bot@mastoxiv.page
2025-10-14 11:41:18

On some conjectural supercongruences involving the sequence $t_n(x)$
Hui-Li Han, Chen Wang
arxiv.org/abs/2510.11338 arxiv.org/pdf/2510.1133…

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:28:40

Convergence analysis of inexact MBA method for constrained upper-$\mathcal{C}^2$ optimization problems
Ruyu Liu, Shaohua Pan
arxiv.org/abs/2511.09940 arxiv.org/pdf/2511.09940 arxiv.org/html/2511.09940
arXiv:2511.09940v1 Announce Type: new
Abstract: This paper concerns a class of constrained optimization problems in which the objective and constraint functions are both upper-$\mathcal{C}^2$. For such nonconvex and nonsmooth optimization problems, we develop an inexact moving balls approximation (MBA) method with a workable inexactness criterion for solving the subproblems. By leveraging a global error bound for the strongly convex program associated with parametric optimization problems, we establish full convergence of the iterate sequence under the partial bounded multiplier property (BMP) and the Kurdyka-Łojasiewicz (KL) property of the constructed potential function, and derive local convergence rates for the iterate and objective value sequences when the potential function satisfies the KL property of exponent $q\in[1/2,1)$. A verifiable condition is also provided to check whether the potential function satisfies the KL property of exponent $q\in[1/2,1)$ at a given critical point. To the best of our knowledge, this is the first implementable inexact MBA method with a full convergence certificate for constrained nonconvex and nonsmooth optimization problems.
toXiv_bot_toot

@arXiv_mathCO_bot@mastoxiv.page
2025-10-07 11:06:52

The functional Loomis-Whitney type inequality in the Heisenberg groups and Projection theorems over finite fields
Daewoong Cheong, Thang Pham, Dung The Tran
arxiv.org/abs/2510.05022

@arXiv_csLG_bot@mastoxiv.page
2025-10-15 10:52:31

Improving Decision Trees through the Lens of Parameterized Local Search
Juha Harviainen, Frank Sommer, Manuel Sorge
arxiv.org/abs/2510.12726

@arXiv_csDM_bot@mastoxiv.page
2025-10-13 07:33:50

A CSP approach to Graph Sandwich Problems
Manuel Bodirsky, Santiago Guzmán-Pro
arxiv.org/abs/2510.09128 arxiv.org/pdf/2510.09128

@arXiv_mathAP_bot@mastoxiv.page
2025-10-14 11:04:58

Gaussian beam interactions and inverse source problems for nonlinear wave equations
Matti Lassas, Tony Liimatainen, Valter Pohjola, Teemu Tyni
arxiv.org/abs/2510.11494

@arXiv_statML_bot@mastoxiv.page
2025-10-15 09:29:31

High-Probability Bounds For Heterogeneous Local Differential Privacy
Maryam Aliakbarpour, Alireza Fallah, Swaha Roy, Ria Stevens
arxiv.org/abs/2510.11895

@arXiv_csAI_bot@mastoxiv.page
2025-10-15 09:38:21

MatSciBench: Benchmarking the Reasoning Ability of Large Language Models in Materials Science
Junkai Zhang, Jingru Gan, Xiaoxuan Wang, Zian Jia, Changquan Gu, Jianpeng Chen, Yanqiao Zhu, Mingyu Derek Ma, Dawei Zhou, Ling Li, Wei Wang
arxiv.org/abs/2510.12171

@arXiv_mathNA_bot@mastoxiv.page
2025-10-14 10:52:48

Randomized flexible Krylov methods for $\ell_p$ regularization
Malena Sabaté Landman, Yuji Nakatsukasa
arxiv.org/abs/2510.11237 arxiv…

@grahamperrin@bsd.cafe
2025-11-08 22:22:01

@… FYI some teething problems for people, including me, with the outdated version of pkg. Repeated segfaults, a known issue with that version.
Segfault-free with 2.4.2_1:

@arXiv_mathDS_bot@mastoxiv.page
2025-10-06 08:45:59

On entry-exit formulas for degenerate turning point problems in planar slow-fast systems
Renato Huzak, Kristian Uldall Kristiansen
arxiv.org/abs/2510.02770

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:37:10

S-D-RSM: Stochastic Distributed Regularized Splitting Method for Large-Scale Convex Optimization Problems
Maoran Wang, Xingju Cai, Yongxin Chen
arxiv.org/abs/2511.10133 arxiv.org/pdf/2511.10133 arxiv.org/html/2511.10133
arXiv:2511.10133v1 Announce Type: new
Abstract: This paper investigates large-scale distributed composite convex optimization problems, motivated by a broad range of applications including multi-agent systems, federated learning, smart grids, wireless sensor networks, and compressed sensing. Stochastic gradient descent (SGD) and its variants are commonly employed to solve such problems. However, existing algorithms often rely on vanishing step sizes, strong convexity assumptions, or substantial computational overhead to ensure convergence or obtain favorable complexity. To bridge the gap between theory and practice, we integrate consensus optimization and operator splitting techniques (see Problem Reformulation) to develop a novel stochastic splitting algorithm, termed the \emph{stochastic distributed regularized splitting method} (S-D-RSM). In practice, S-D-RSM performs parallel updates of proximal mappings and gradient information for only a randomly selected subset of agents at each iteration. By introducing regularization terms, it effectively mitigates consensus discrepancies among distributed nodes. In contrast to conventional stochastic methods, our theoretical analysis establishes that S-D-RSM achieves global convergence without requiring diminishing step sizes or strong convexity assumptions, and attains an iteration complexity of $\mathcal{O}(1/\epsilon)$ with respect to both the objective function value and the consensus error. Numerical experiments show that S-D-RSM achieves up to 2--3$\times$ speedup over state-of-the-art baselines while maintaining comparable or better accuracy. These results validate the algorithm's theoretical guarantees and demonstrate its effectiveness in practical tasks such as compressed sensing and empirical risk minimization.

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:39:30

Halpern Acceleration of the Inexact Proximal Point Method of Rockafellar
Liwei Zhang, Fanli Zhuang, Ning Zhang
arxiv.org/abs/2511.10372 arxiv.org/pdf/2511.10372 arxiv.org/html/2511.10372
arXiv:2511.10372v1 Announce Type: new
Abstract: This paper investigates a Halpern acceleration of the inexact proximal point method for solving maximal monotone inclusion problems in Hilbert spaces. The proposed Halpern inexact proximal point method (HiPPM) is shown to be globally convergent, and a unified framework is developed to analyze its worst-case convergence rate. Under mild summability conditions on the inexactness tolerances, HiPPM achieves an $\mathcal{O}(1/k^{2})$ rate in terms of the squared fixed-point residual. Furthermore, under an additional mild condition, the method retains a fast linear convergence rate. Building upon this framework, we further extend the acceleration technique to constrained convex optimization through the augmented Lagrangian formulation. In analogy to Rockafellar's classical results, the resulting accelerated inexact augmented Lagrangian method inherits the convergence rate and complexity guarantees of HiPPM. The analysis thus provides a unified theoretical foundation for accelerated inexact proximal algorithms and their augmented Lagrangian extensions.
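For intuition, the Halpern scheme the abstract builds on is simple to state: iterate $x_{k+1} = \lambda_k u + (1-\lambda_k)\,J(x_k)$, where $u$ is a fixed anchor point, $J$ is the resolvent (proximal map) of the monotone operator, and $\lambda_k \to 0$ with divergent sum. A minimal exact-resolvent sketch in Python (my illustration, not the paper's method or code; the toy operator $A(x) = x$ and the classical step sizes $\lambda_k = 1/(k+2)$ are assumptions made for the example):

```python
# Toy Halpern-accelerated proximal point iteration on the reals.
# For the maximal monotone operator A(x) = x, the resolvent is
# J(x) = (I + A)^{-1}(x) = x / 2, whose unique zero is 0.

def halpern_ppm(x0, anchor, steps):
    """Run the Halpern iteration x_{k+1} = lam*u + (1 - lam)*J(x_k)."""
    resolvent = lambda x: x / 2.0  # exact resolvent for A(x) = x
    x = x0
    for k in range(steps):
        lam = 1.0 / (k + 2)  # classical Halpern step sizes
        x = lam * anchor + (1 - lam) * resolvent(x)
    return x

x = halpern_ppm(x0=5.0, anchor=5.0, steps=10000)
print(abs(x))  # shrinks toward 0, the zero of A
```

Even with the anchor held far from the solution, the vanishing weights $\lambda_k$ let the resolvent steps dominate, driving the iterates to the zero of the operator.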

@arXiv_mathNA_bot@mastoxiv.page
2025-10-09 08:53:11

Algorithm for constructing optimal explicit finite-difference formulas in the Hilbert space
R. S. Karimov, D. D. Atoev
arxiv.org/abs/2510.06643

@arXiv_statML_bot@mastoxiv.page
2025-10-13 09:11:10

Interpretable Generative and Discriminative Learning for Multimodal and Incomplete Clinical Data
Albert Belenguer-Llorens, Carlos Sevilla-Salcedo, Janaina Mourao-Miranda, Vanessa Gómez-Verdejo
arxiv.org/abs/2510.09513

@arXiv_csDM_bot@mastoxiv.page
2025-10-07 08:38:12

Maximum Biclique for Star$_{1,2,3}$-free and Bounded Bimodularwidth Twin-free Bipartite Graphs
Fabien de Montgolfier (IRIF), Renaud Torfs (IRIF)
arxiv.org/abs/2510.04621

@arXiv_mathAP_bot@mastoxiv.page
2025-10-07 08:59:22

Forward and backward problems for abstract time-fractional Schrödinger equations
S. E. Chorfi, F. Et-tahri, L. Maniar, M. Yamamoto
arxiv.org/abs/2510.03600

@arXiv_csAI_bot@mastoxiv.page
2025-10-06 09:00:29

Geolog-IA: Conversational System for Academic Theses
Micaela Fuel Pozo, Andrea Guatumillo Saltos, Yeseña Tipan Llumiquinga, Kelly Lascano Aguirre, Marilyn Castillo Jara, Christian Mejia-Escobar
arxiv.org/abs/2510.02653

@beeb@hachyderm.io
2025-12-10 17:37:59
Content warning: Advent of Code 2025 Day 10

Yes! Today's puzzle in #AdventOfCode was quite hard (especially part 2) but so rewarding and I learned a lot!
For part 1, I implemented A* from scratch, my favorite little pathfinding algo that I use pretty much every year for #AoC (sometimes I use a lib instead of implementing it but it's been a while so a refresher was in order).
For part 2, after trying A* again and noticing it was running for way too long, I went back to the drawing board and solved the first machine by hand. I noticed the constraints were a system of linear equations.
I then researched algorithms to solve such integer programming problems and didn't feel like learning AND implementing the algorithms in one day (ain't nobody got time fo that). But this led me to discover the `good_lp` #rust crate, which is really good and which I will keep in my back pocket from now on!
So I used the library to define a system of variables and constraints which could be solved magically for me.
#AoC2025 #AdventOfCode2025 #RustLang
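For anyone curious what "the constraints were a system of linear equations" can look like in practice, here is a minimal sketch (in Python rather than the poster's Rust, with hypothetical machine parameters, not the actual puzzle input): when a machine reduces to two non-negative integer press counts, the 2x2 system can be solved directly with Cramer's rule and checked for integrality, so no search or solver is needed for that special case.

```python
# Hypothetical example: pressing button A moves by (x1, y1), button B by
# (x2, y2); we want integer press counts a, b >= 0 hitting target (tx, ty):
#   a*x1 + b*x2 = tx
#   a*y1 + b*y2 = ty

def solve_machine(x1, y1, x2, y2, tx, ty):
    """Return (a, b) if a non-negative integer solution exists, else None."""
    det = x1 * y2 - x2 * y1
    if det == 0:
        return None  # degenerate: the two button vectors are collinear
    a_num = tx * y2 - ty * x2  # Cramer's rule numerators
    b_num = x1 * ty - y1 * tx
    if a_num % det or b_num % det:
        return None  # solution exists over the rationals but not the integers
    a, b = a_num // det, b_num // det
    if a < 0 or b < 0:
        return None
    return a, b

# Button A moves (2, 3), button B moves (1, 2), target (8, 13):
print(solve_machine(2, 3, 1, 2, 8, 13))  # → (3, 2)
```

With more unknowns or genuine inequality constraints, this shortcut no longer applies, and a general integer programming library (such as the `good_lp` crate mentioned above) is the right tool.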

@arXiv_mathOC_bot@mastoxiv.page
2025-10-13 08:15:40

Re$^3$MCN: Cubic Newton Variance Reduction Momentum Quadratic Regularization for Finite-sum Non-convex Problems
Dmitry Pasechnyuk-Vilensky, Dmitry Kamzolov, Martin Takáč
arxiv.org/abs/2510.08714