A proud achievement of my half-century IT career happened in 1996, consulting for the #UNDP on what became the first published edition of the Humanity Development Library.
It seemed easy enough: "It can be fairly estimated that 1/3, or about 20 million pages of UN, and as much University and NGO material are very useful. Those 20 million pages useful UN publications probably contain about 50% of solutions for major World problems. This information must be released in digital format for non-profit redistribution in all countries."
It also had to be portable and accessible on all platforms, everywhere.
Happily, not only did the project live on, but thanks to @… our once-intractable problem of global delivery is now solved!
So, whether or not this is timely, I don't know, but should you need to suddenly rebuild some semblance of civilization from scratch…
Humanity Development Library 2.0 CD-ROM 1998 : #HumanityLibrariesProject : #InternetArchive
https://archive.org/details/humanity-development-library-2.0
Convergence analysis of inexact MBA method for constrained upper-$\mathcal{C}^2$ optimization problems
Ruyu Liu, Shaohua Pan
https://arxiv.org/abs/2511.09940 https://arxiv.org/pdf/2511.09940 https://arxiv.org/html/2511.09940
arXiv:2511.09940v1 Announce Type: new
Abstract: This paper concerns a class of constrained optimization problems in which the objective and constraint functions are both upper-$\mathcal{C}^2$. For such nonconvex and nonsmooth optimization problems, we develop an inexact moving balls approximation (MBA) method with a workable inexactness criterion for solving the subproblems. By leveraging a global error bound for the strongly convex program associated with parametric optimization problems, we establish full convergence of the iterate sequence under the partial bounded multiplier property (BMP) and the Kurdyka-{\L}ojasiewicz (KL) property of the constructed potential function, and obtain local convergence rates for the iterate and objective value sequences when the potential function satisfies the KL property of exponent $q\in[1/2,1)$. A verifiable condition is also provided to check whether the potential function satisfies the KL property of exponent $q\in[1/2,1)$ at a given critical point. To the best of our knowledge, this is the first implementable inexact MBA method with a full convergence certificate for constrained nonconvex and nonsmooth optimization problems.
toXiv_bot_toot
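
As background, the classical moving balls approximation step (in the smooth setting) solves a subproblem of the following form. This is a sketch only, to fix ideas: $L_f$ and $L_i$ denote Lipschitz moduli of the gradients and are not the paper's notation, and the paper's upper-$\mathcal{C}^2$ models and inexactness criterion are not reproduced here.

% Sketch: generic MBA subproblem at the current iterate x^k (smooth case).
% Each function is replaced by a quadratic upper model, i.e. a "moving ball".
\[
x^{k+1} \in \arg\min_{x}\ f(x^k) + \langle \nabla f(x^k),\, x - x^k\rangle + \frac{L_f}{2}\|x - x^k\|^2
\]
\[
\text{s.t.}\quad g_i(x^k) + \langle \nabla g_i(x^k),\, x - x^k\rangle + \frac{L_i}{2}\|x - x^k\|^2 \le 0, \qquad i = 1,\dots,m.
\]

Each such subproblem is a strongly convex program; the inexact variant in the paper allows it to be solved only approximately, under a workable stopping criterion.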
Interpretable Generative and Discriminative Learning for Multimodal and Incomplete Clinical Data
Albert Belenguer-Llorens, Carlos Sevilla-Salcedo, Janaina Mourao-Miranda, Vanessa Gómez-Verdejo
https://arxiv.org/abs/2510.09513
S-D-RSM: Stochastic Distributed Regularized Splitting Method for Large-Scale Convex Optimization Problems
Maoran Wang, Xingju Cai, Yongxin Chen
https://arxiv.org/abs/2511.10133 https://arxiv.org/pdf/2511.10133 https://arxiv.org/html/2511.10133
arXiv:2511.10133v1 Announce Type: new
Abstract: This paper investigates large-scale distributed composite convex optimization problems, motivated by a broad range of applications including multi-agent systems, federated learning, smart grids, wireless sensor networks, and compressed sensing. Stochastic gradient descent (SGD) and its variants are commonly employed to solve such problems. However, existing algorithms often rely on vanishing step sizes, strong convexity assumptions, or substantial computational overhead to ensure convergence or obtain favorable complexity. To bridge the gap between theory and practice, we integrate consensus optimization and operator splitting techniques (see Problem Reformulation) to develop a novel stochastic splitting algorithm, termed the \emph{stochastic distributed regularized splitting method} (S-D-RSM). In practice, S-D-RSM performs parallel updates of proximal mappings and gradient information for only a randomly selected subset of agents at each iteration. By introducing regularization terms, it effectively mitigates consensus discrepancies among distributed nodes. In contrast to conventional stochastic methods, our theoretical analysis establishes that S-D-RSM achieves global convergence without requiring diminishing step sizes or strong convexity assumptions. Furthermore, it achieves an iteration complexity of $\mathcal{O}(1/\epsilon)$ with respect to both the objective function value and the consensus error. Numerical experiments show that S-D-RSM achieves up to a 2--3$\times$ speedup over state-of-the-art baselines while maintaining comparable or better accuracy. These results not only validate the algorithm's theoretical guarantees but also demonstrate its effectiveness in practical tasks such as compressed sensing and empirical risk minimization.
toXiv_bot_toot
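
To make the consensus-plus-splitting vocabulary above concrete, here is a minimal toy sketch in Python (plain consensus ADMM on scalar quadratics). It is not the paper's S-D-RSM, which updates only a randomly selected subset of agents per iteration and adds regularization terms; every name and parameter below is an illustrative assumption.

# Minimal sketch: plain consensus ADMM on toy scalar quadratics
# f_i(x) = 0.5*a_i*x^2 + b_i*x.  Included only to illustrate the
# "consensus + operator splitting" pattern; it is NOT S-D-RSM.
import numpy as np

rng = np.random.default_rng(0)
n, rho, iters = 20, 1.0, 300
a = rng.uniform(0.5, 2.0, n)   # local curvatures
b = rng.normal(size=n)         # local linear terms
x = np.zeros(n)                # local primal variables
u = np.zeros(n)                # scaled dual variables
z = 0.0                        # consensus variable

for _ in range(iters):
    # local proximal step: argmin_x f_i(x) + (rho/2) * (x - (z - u_i))^2
    v = z - u
    x = (rho * v - b) / (a + rho)
    # consensus (averaging) step and dual update
    z = np.mean(x + u)
    u = u + x - z

print("consensus z:", z)                            # approaches the true minimizer
print("minimizer of sum f_i:", -b.sum() / a.sum())

A stochastic variant in the spirit of the abstract would refresh x_i and u_i only for a sampled subset of agents per iteration; the regularization and analysis that make this work without diminishing step sizes are exactly what the paper supplies.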
Halpern Acceleration of the Inexact Proximal Point Method of Rockafellar
Liwei Zhang, Fanli Zhuang, Ning Zhang
https://arxiv.org/abs/2511.10372 https://arxiv.org/pdf/2511.10372 https://arxiv.org/html/2511.10372
arXiv:2511.10372v1 Announce Type: new
Abstract: This paper investigates a Halpern acceleration of the inexact proximal point method for solving maximal monotone inclusion problems in Hilbert spaces. The proposed Halpern inexact proximal point method (HiPPM) is shown to be globally convergent, and a unified framework is developed to analyze its worst-case convergence rate. Under mild summability conditions on the inexactness tolerances, HiPPM achieves an $\mathcal{O}(1/k^{2})$ rate in terms of the squared fixed-point residual. Furthermore, under an additional mild condition, the method retains a fast linear convergence rate. Building upon this framework, we further extend the acceleration technique to constrained convex optimization through the augmented Lagrangian formulation. In analogy to Rockafellar's classical results, the resulting accelerated inexact augmented Lagrangian method inherits the convergence rate and complexity guarantees of HiPPM. The analysis thus provides a unified theoretical foundation for accelerated inexact proximal algorithms and their augmented Lagrangian extensions.
toXiv_bot_toot
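
For reference, the exact Halpern-anchored proximal point iteration for a maximal monotone operator $A$ with resolvent $J_{cA} = (I + cA)^{-1}$ reads as follows; the paper's HiPPM additionally allows inexact resolvent evaluations under summable tolerances, which this sketch omits.

% Classical Halpern-anchored proximal point iteration (exact resolvent).
\[
x^{k+1} = \beta_k\, x^{0} + (1-\beta_k)\, J_{cA}(x^{k}), \qquad \beta_k = \frac{1}{k+2}.
\]

With this anchoring sequence the squared fixed-point residual $\|x^k - J_{cA}(x^k)\|^2$ is known to decay at the $\mathcal{O}(1/k^{2})$ rate quoted in the abstract.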
@… FYI, some teething problems for people, including me, with the outdated version of pkg: repeated segfaults, a known issue with that version.
Segfault-free with 2.4.2_1:
<https://www.
Geolog-IA: Conversational System for Academic Theses
Micaela Fuel Pozo, Andrea Guatumillo Saltos, Yeseña Tipan Llumiquinga, Kelly Lascano Aguirre, Marilyn Castillo Jara, Christian Mejia-Escobar
https://arxiv.org/abs/2510.02653
Exploring one-dimensional, binary, radius-2 cellular automata, over cyclic configurations, in terms of their ability to solve decision problems by distributed consensus
Eurico Ruivo, Pedro Paulo Balbi, Kévin Perrot, Marco Montalva-Medel, Eric Goles
https://arxiv.org/abs/2510.01040

Probing the ability of automata networks to solve decision problems has received continuous attention in the literature, especially when the automata reach the answer by distributed consensus, i.e., all of them taking on the same one of the two states. In the case of binary automata networks, regardless of the kind of update employed, the networks should display only two possible attractors, the fixed points $0^L$ and $1^L$, for all cyclic configurations of size $L$. A previous investigation in…
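
As a hands-on illustration of the setup only (the rule and initial configuration below are arbitrary assumptions, not rules studied in the paper), here is a short Python sketch that evolves a one-dimensional, binary, radius-2 CA over a cyclic configuration and tests whether it reaches consensus on the fixed point $0^L$ or $1^L$:

# Evolve a 1D binary radius-2 cellular automaton on a cyclic (periodic)
# configuration and check for consensus, i.e. reaching 0^L or 1^L.
import numpy as np

def step(config, rule_table, radius=2):
    """One synchronous update of a cyclic 1D binary CA.

    rule_table maps each of the 2**(2*radius+1) neighborhood patterns
    (read as a binary number, leftmost cell most significant) to 0 or 1.
    """
    L = len(config)
    new = np.empty(L, dtype=np.uint8)
    for i in range(L):
        idx = 0
        for j in range(-radius, radius + 1):
            idx = (idx << 1) | int(config[(i + j) % L])
        new[i] = rule_table[idx]
    return new

def reaches_consensus(config, rule_table, max_steps=500):
    """Return 0 or 1 if the CA settles on 0^L or 1^L, else None."""
    c = np.asarray(config, dtype=np.uint8)
    for _ in range(max_steps):
        if c.all():
            return 1
        if not c.any():
            return 0
        c = step(c, rule_table)
    return None

rng = np.random.default_rng(1)
rule = rng.integers(0, 2, size=2**5, dtype=np.uint8)  # arbitrary radius-2 rule
init = rng.integers(0, 2, size=31, dtype=np.uint8)    # cyclic configuration, L = 31
print(reaches_consensus(init, rule))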
MatSciBench: Benchmarking the Reasoning Ability of Large Language Models in Materials Science
Junkai Zhang, Jingru Gan, Xiaoxuan Wang, Zian Jia, Changquan Gu, Jianpeng Chen, Yanqiao Zhu, Mingyu Derek Ma, Dawei Zhou, Ling Li, Wei Wang
https://arxiv.org/abs/2510.12171
Replaced article(s) found for math.NA. https://arxiv.org/list/math.NA/new
[1/2]:
- Convergence analysis of equilibrium methods for inverse problems
Daniel Obmann, Gyeongha Hwang, Markus Haltmeier