So I grew up next to #Chernobyl and this is, well, TERRIFYING.
A story for y’all: I’m from a city called Zhytomyr, 2 hours west of Kyiv in the North of #Ukraine. We were downwind of the Chernobyl #nuclear power plant when the 1986 disaster happened.
I wasn’t born for another 12 years, but my childhood was filled with stories and the aftermath of it all. Things like:
- My grandmother worked as a head doctor in a hospital and rehabilitation facility exclusively for children of Chernobyl victims to treat the extremely high prevalence of Tuberculosis and other severe health complications. (To specify: these were SECOND GENERATION of exposure).
- A lot of the kids in that facility were orphans, because their parents died young from health problems.
- My uncle’s wife was born in Pripyat. She was 1 year old when the disaster happened. Her parents were told to evacuate while given no information about what happened. They had to pack up their things and rush out to an unfamiliar city with their baby, never to see the rest of their belongings, apartment, or hometown again.
- When I was a kid, it became so common to see weirdly mutated animals and insects that even 2-3 year olds would make jokes about “Chernobyl mosquitos” and I wouldn’t even flinch seeing occasional giant bugs, dark frogs, weird-looking dogs.
- We’d frequently hear of nearby farms having issues with their animals being born too mutated to survive or random outbreaks from contaminated water / food. Crops would randomly fail. People would get poisoned on a regular basis. This all got less common as I grew up.
- My mother still remembers being a little girl, 10 years old, and looking outside from their balcony at the clouds blowing over from Chernobyl that day. People were told to not go outside and to shut all the windows, but not given an explanation as to why. My mother swears that the rain looked different. They weren’t able to go and buy more food for the kitchen for multiple days.
Anyway - nuclear safety isn’t a joke. I don’t understand how this level of carelessness can happen after Chernobyl and Fukushima.
https://www.404media.co/power-companies-are-using-ai-to-build-nuclear-power-plants/
A proud achievement of my half-century IT career happened in 1996 consulting to the #UNDP toward the first published edition of the Humanity Development Library.
It seemed easy enough: "It can be fairly estimated that 1/3, or about 20 million pages of UN material, and as much University and NGO material, are very useful. Those 20 million pages of useful UN publications probably contain about 50% of solutions for major World problems. This information must be released in digital format for non-profit redistribution in all countries."
It also had to be portable and accessible on all platforms, everywhere.
Happily, not only did the project live on, but thanks to @… our once-intractable problem of global delivery is now globally solved!
So, whether or not this is timely, I don't know, but should you need to suddenly rebuild some semblance of civilization from scratch…
Humanity Development Library 2.0 CD-ROM 1998 : #HumanityLibrariesProject : #InternetArchive
https://archive.org/details/humanity-development-library-2.0
Convergence analysis of inexact MBA method for constrained upper-$\mathcal{C}^2$ optimization problems
Ruyu Liu, Shaohua Pan
https://arxiv.org/abs/2511.09940 https://arxiv.org/pdf/2511.09940 https://arxiv.org/html/2511.09940
arXiv:2511.09940v1 Announce Type: new
Abstract: This paper concerns a class of constrained optimization problems in which the objective and constraint functions are both upper-$\mathcal{C}^2$. For such nonconvex and nonsmooth optimization problems, we develop an inexact moving balls approximation (MBA) method with a workable inexactness criterion for solving the subproblems. By leveraging a global error bound for the strongly convex program associated with parametric optimization problems, we establish full convergence of the iterate sequence under the partial bounded multiplier property (BMP) and the Kurdyka-{\L}ojasiewicz (KL) property of the constructed potential function, and derive local convergence rates for the iterate and objective value sequences when the potential function satisfies the KL property of exponent $q\in[1/2,1)$. A verifiable condition is also provided to check whether the potential function satisfies the KL property of exponent $q\in[1/2,1)$ at a given critical point. To the best of our knowledge, this is the first implementable inexact MBA method with a full convergence certificate for constrained nonconvex and nonsmooth optimization problems.
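The exact MBA step is concrete enough to sketch on a smooth toy instance (smooth functions are upper-$\mathcal{C}^2$): each iteration minimizes a strongly convex quadratic majorant of the objective over a "ball" (quadratic) majorant of the constraint. The instance, curvature bounds, and the bisection subproblem solver below are illustrative assumptions, and the subproblem is solved exactly here rather than inexactly as in the paper:

```python
import numpy as np

# Toy smooth instance:
#   min f(x) = x1^2 + 0.5*x2^2   s.t.   c(x) = 1 - x1 - x2 <= 0
f_grad = lambda x: np.array([2.0 * x[0], x[1]])
c_val  = lambda x: 1.0 - x[0] - x[1]
c_grad = lambda x: np.array([-1.0, -1.0])
Lf, Lc = 2.0, 0.1        # curvature bounds for the quadratic majorants

def mba_step(xk):
    """One exact MBA step: minimize the quadratic model of f subject to the
    ball (quadratic) model of c, via bisection on the KKT multiplier."""
    gf, gc, ck = f_grad(xk), c_grad(xk), c_val(xk)
    # Stationarity of model_f + lam * model_c gives the minimizer in closed form.
    y = lambda lam: xk - (gf + lam * gc) / (Lf + lam * Lc)
    ball = lambda p: ck + gc @ (p - xk) + 0.5 * Lc * np.sum((p - xk) ** 2)
    if ball(y(0.0)) <= 0:                 # ball-constraint model inactive
        return y(0.0)
    lo, hi = 0.0, 1.0
    while ball(y(hi)) > 0:                # bracket the multiplier
        hi *= 2.0
    for _ in range(60):                   # bisection on lam -> ball(y(lam))
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if ball(y(mid)) > 0 else (lo, mid)
    return y(hi)

x = np.array([2.0, 2.0])
for _ in range(100):
    x = mba_step(x)
# The KKT point of this toy problem is (1/3, 2/3)
```

Since the ball model majorizes the constraint, every iterate stays feasible; the inexactness criterion and potential-function analysis of the paper are precisely about relaxing the exact subproblem solve used here.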
toXiv_bot_toot
MatSciBench: Benchmarking the Reasoning Ability of Large Language Models in Materials Science
Junkai Zhang, Jingru Gan, Xiaoxuan Wang, Zian Jia, Changquan Gu, Jianpeng Chen, Yanqiao Zhu, Mingyu Derek Ma, Dawei Zhou, Ling Li, Wei Wang
https://arxiv.org/abs/2510.12171
@… FYI: some teething problems for people, including me, with the outdated version of pkg — repeated segfaults, a known issue with that version.
Segfault-free with 2.4.2_1:
<https://www.
S-D-RSM: Stochastic Distributed Regularized Splitting Method for Large-Scale Convex Optimization Problems
Maoran Wang, Xingju Cai, Yongxin Chen
https://arxiv.org/abs/2511.10133 https://arxiv.org/pdf/2511.10133 https://arxiv.org/html/2511.10133
arXiv:2511.10133v1 Announce Type: new
Abstract: This paper investigates large-scale distributed composite convex optimization problems, motivated by a broad range of applications including multi-agent systems, federated learning, smart grids, wireless sensor networks, and compressed sensing. Stochastic gradient descent (SGD) and its variants are commonly employed to solve such problems. However, existing algorithms often rely on vanishing step sizes, strong convexity assumptions, or substantial computational overhead to ensure convergence or obtain favorable complexity. To bridge the gap between theory and practice, we integrate consensus optimization and operator splitting techniques (see Problem Reformulation) to develop a novel stochastic splitting algorithm, termed the \emph{stochastic distributed regularized splitting method} (S-D-RSM). In practice, S-D-RSM performs parallel updates of proximal mappings and gradient information for only a randomly selected subset of agents at each iteration. By introducing regularization terms, it effectively mitigates consensus discrepancies among distributed nodes. In contrast to conventional stochastic methods, our theoretical analysis establishes that S-D-RSM achieves global convergence without requiring diminishing step sizes or strong convexity assumptions. Furthermore, it attains an iteration complexity of $\mathcal{O}(1/\epsilon)$ with respect to both the objective function value and the consensus error. Numerical experiments show that S-D-RSM achieves up to a 2--3$\times$ speedup over state-of-the-art baselines while maintaining comparable or better accuracy. These results not only validate the algorithm's theoretical guarantees but also demonstrate its effectiveness in practical tasks such as compressed sensing and empirical risk minimization.
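The core update pattern — a randomly selected subset of agents taking proximal steps regularized toward a consensus variable — can be illustrated on a toy quadratic problem. This is a generic sketch of that pattern under assumed choices (quadratic local costs, regularization weight `rho`, subset size 3), not the paper's exact S-D-RSM:

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 10, 3
targets = rng.normal(size=(n_agents, dim))   # agent i holds f_i(x) = 0.5*||x - a_i||^2

def prox_fi(a_i, z, rho):
    # Regularized proximal step, in closed form for a quadratic f_i:
    #   argmin_x 0.5*||x - a_i||^2 + (rho/2)*||x - z||^2
    return (a_i + rho * z) / (1.0 + rho)

x_local = np.zeros((n_agents, dim))
z = np.zeros(dim)                            # consensus variable
for _ in range(300):
    subset = rng.choice(n_agents, size=3, replace=False)  # random subset only
    for i in subset:                          # updates are parallel in principle
        x_local[i] = prox_fi(targets[i], z, rho=1.0)
    z = x_local.mean(axis=0)                  # consensus averaging

# z approaches the global minimizer of sum_i f_i, i.e. the mean of the a_i
```

Even though only a few agents refresh per round, the regularization toward `z` keeps the stale local variables consistent, which is the intuition behind the consensus-error guarantee.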
Halpern Acceleration of the Inexact Proximal Point Method of Rockafellar
Liwei Zhang, Fanli Zhuang, Ning Zhang
https://arxiv.org/abs/2511.10372 https://arxiv.org/pdf/2511.10372 https://arxiv.org/html/2511.10372
arXiv:2511.10372v1 Announce Type: new
Abstract: This paper investigates a Halpern acceleration of the inexact proximal point method for solving maximal monotone inclusion problems in Hilbert spaces. The proposed Halpern inexact proximal point method (HiPPM) is shown to be globally convergent, and a unified framework is developed to analyze its worst-case convergence rate. Under mild summability conditions on the inexactness tolerances, HiPPM achieves an $\mathcal{O}(1/k^{2})$ rate in terms of the squared fixed-point residual. Furthermore, under an additional mild condition, the method attains a fast linear convergence rate. Building upon this framework, we further extend the acceleration technique to constrained convex optimization through the augmented Lagrangian formulation. In analogy with Rockafellar's classical results, the resulting accelerated inexact augmented Lagrangian method inherits the convergence rate and complexity guarantees of HiPPM. The analysis thus provides a unified theoretical foundation for accelerated inexact proximal algorithms and their augmented Lagrangian extensions.
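The anchored iteration itself is only a few lines. As a generic illustration (the exact, not the inexact, version), here is a minimal sketch for a toy quadratic whose prox is available in closed form; the function, step size, and the classical anchoring coefficients beta_k = 1/(k+2) are assumptions for illustration:

```python
import numpy as np

def prox_quadratic(x, lam):
    # Closed-form prox of f(x) = 0.5*||x||^2:
    #   argmin_y 0.5*||y||^2 + (1/(2*lam))*||y - x||^2  =  x / (1 + lam)
    return x / (1.0 + lam)

def halpern_ppm(prox, x0, lam=1.0, iters=200):
    # Halpern-anchored proximal point iteration:
    #   x_{k+1} = beta_k * x0 + (1 - beta_k) * prox(x_k),  beta_k = 1/(k+2)
    x = x0.copy()
    for k in range(iters):
        beta = 1.0 / (k + 2)
        x = beta * x0 + (1.0 - beta) * prox(x, lam)
    return x

x = halpern_ppm(prox_quadratic, np.array([4.0, -2.0]))
# x approaches the unique minimizer 0 of f
```

The inexact method of the paper replaces the exact `prox` call with an approximate evaluation whose errors are summable, which is what the tolerance conditions in the abstract control.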
Interpretable Generative and Discriminative Learning for Multimodal and Incomplete Clinical Data
Albert Belenguer-Llorens, Carlos Sevilla-Salcedo, Janaina Mourao-Miranda, Vanessa Gómez-Verdejo
https://arxiv.org/abs/2510.09513
Geolog-IA: Conversational System for Academic Theses
Micaela Fuel Pozo, Andrea Guatumillo Saltos, Yeseña Tipan Llumiquinga, Kelly Lascano Aguirre, Marilyn Castillo Jara, Christian Mejia-Escobar
https://arxiv.org/abs/2510.02653