Tootfinder

Opt-in global Mastodon full text search. Join the index!

@mgorny@social.treehouse.systems
2026-02-18 08:20:05

Can Autism Spectrum be a superpower? Well, sometimes, I guess.
It's the same kind of superpower as having a chaingun in place of your hand. There are days when you feel like you definitely ought to use it. And it sounds really cool in theory.
But it's not very useful if you need to open a jar. And I dare say that in my life, jar-opening situations are far more common than situations needing a chaingun. On top of that, most people don't really appreciate *you* having it, as if you had a choice.
#ActuallyAutistic

@markhburton@mstdn.social
2026-01-13 18:25:51

Muddled.
'Value' has specific meanings in Marxist political economy.
It isn't the appropriate term for what Bollier is discussing.
'Wealth' is closer but still not right.
It's a matter of standpoint: the abundance of nature isn't purely for human use, so those two terms don't apply well, or at all.
Toward a New Theory of Value (and Meaning): Living Systems as Generative - resilience

@arXiv_mathLO_bot@mastoxiv.page
2026-04-01 08:05:32

A General Theory of Class Symmetric Systems
Peter Holy, Emma Palmer, Jonathan Schilhan
arxiv.org/abs/2603.29521 arxiv.org/pdf/2603.29521

@curiouscat@fosstodon.org
2026-03-18 14:30:19

Excerpts from The Deming Library Volume XXI, in which Dr. W. Edwards #Deming, Dr. Russell Ackoff, and David Langford demonstrate that educators can begin a quality transformation by developing an understanding of the properties and powers of systems-oriented thinking...

@mgorny@social.treehouse.systems
2026-01-14 06:43:44

The medical theory of everything: one day it'll turn out that all your problems have a single common cause.

@arXiv_physicschemph_bot@mastoxiv.page
2026-03-27 08:19:37

Autotuning T-PaiNN: Enabling Data-Efficient GNN Interatomic Potential Development via Classical-to-Quantum Transfer Learning
Vivienne Pelletier, Vedant Bhat, Daniel J. Rivera, Steven A. Wilson, Christopher L. Muhich
arxiv.org/abs/2603.24752 arxiv.org/pdf/2603.24752 arxiv.org/html/2603.24752
arXiv:2603.24752v1 Announce Type: new
Abstract: Machine-learned interatomic potentials (MLIPs), particularly graph neural network (GNN)-based models, offer a promising route to achieving near-density functional theory (DFT) accuracy at significantly reduced computational cost. However, their practical deployment is often limited by the large volumes of expensive quantum mechanical training data required. In this work, we introduce a transfer learning framework, Transfer-PaiNN (T-PaiNN), that substantially improves the data efficiency of GNN-MLIPs by leveraging inexpensive classical force field data. The approach consists of pretraining a PaiNN MLIP architecture on large-scale datasets generated from classical molecular simulations, followed by fine-tuning (dubbed autotuning) using a comparatively small DFT dataset. We demonstrate the effectiveness of autotuning T-PaiNN on both gas-phase molecular systems (QM9 dataset) and condensed-phase liquid water. Across all cases, T-PaiNN significantly outperforms models trained solely on DFT data, achieving order-of-magnitude reductions in mean absolute error while accelerating training convergence. For example, using the QM9 dataset, error reductions of up to 25 times are observed in low-data regimes, while liquid water simulations show improved predictions of energies, forces, and experimentally relevant properties such as density and diffusion. These gains arise from the model's ability to learn general features of the potential energy surface from extensive classical sampling, which are subsequently refined to quantum accuracy. Overall, this work establishes transfer learning from classical force fields as a practical and computationally efficient strategy for developing high-accuracy, data-efficient GNN interatomic potentials, enabling broader application of MLIPs to complex chemical systems.
toXiv_bot_toot
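
A minimal sketch of the two-stage recipe the abstract describes: pretrain on abundant, cheap "classical force field" labels, then fine-tune (autotune) on a small, expensive "DFT" set. Everything below is a placeholder illustration, not the paper's method: a tiny MLP on flattened coordinates and random tensors stand in for the PaiNN GNN and the molecular datasets.

```python
# Hypothetical pretrain-then-finetune sketch; not PaiNN or T-PaiNN.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def make_dataset(n_samples, n_atoms, noise):
    # Placeholder data: flattened coordinates -> a scalar "energy" label.
    x = torch.randn(n_samples, n_atoms * 3)
    y = (x ** 2).sum(dim=1, keepdim=True) + noise * torch.randn(n_samples, 1)
    return TensorDataset(x, y)

# Stand-in for the GNN potential: input dim 15 = 5 atoms x 3 coordinates.
model = nn.Sequential(nn.Linear(15, 64), nn.SiLU(), nn.Linear(64, 1))

def train(model, dataset, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()

# Stage 1: pretrain on a large, noisy, cheap "classical" dataset.
train(model, make_dataset(10_000, 5, noise=0.5), epochs=5, lr=1e-3)

# Stage 2: fine-tune on a small, accurate "DFT" dataset. The lower
# learning rate refines the pretrained features instead of overwriting
# them, which is the point of the transfer-learning setup.
train(model, make_dataset(200, 5, noise=0.05), epochs=20, lr=1e-4)
```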

@Techpizzamondays@social.linux.pizza
2026-02-16 13:08:10

We’ve chosen the paper for @… next week (February 23rd): “Hallucinating with AI: AI Psychosis as Distributed Delusions” by Lucy Osler.
(Is it this Lucy Osler: @…? We think so!)
We’ll have copies to give out tonight (6 PM, Vi…

@arXiv_mathSG_bot@mastoxiv.page
2026-03-26 09:51:40

Replaced article(s) found for math.SG. arxiv.org/list/math.SG/new
[1/1]:
- Arithmetic geometry of quantum connections on Calabi-Yau $3$-folds
Shaoyun Bai, Jae Hee Lee, Daniel Pomerleano
arxiv.org/abs/2601.01654 mastoxiv.page/@arXiv_mathSG_bo
- Index theory for non-compact quantum graphs
Daniele Garrisi, Alessandro Portaluri, Li Wu
arxiv.org/abs/2509.09749 mastoxiv.page/@arXiv_mathFA_bo
- From Hitchin Systems to Rational Elliptic Surfaces with C*-actions via Orbifold Hilbert Schemes
Yonghong Huang
arxiv.org/abs/2509.14812 mastoxiv.page/@arXiv_mathAG_bo
- A note on Virasoro constraints for products
Hsian-Hua Tseng
arxiv.org/abs/2603.22486 mastoxiv.page/@arXiv_mathAG_bo
toXiv_bot_toot

@arXiv_mathLO_bot@mastoxiv.page
2026-03-31 08:06:12

The Cardinalities of Intervals of Equational Theories and Logics
Juan P. Aguilera, Nick Bezhanishvili, Tenyo Takahashi
arxiv.org/abs/2603.27203 arxiv.org/pdf/2603.27203 arxiv.org/html/2603.27203
arXiv:2603.27203v1 Announce Type: new
Abstract: We study the cardinality of classes of equational theories (varieties) and logics by applying descriptive set theory. We affirmatively solve open problems raised by Jackson and Lee [Trans. Am. Math. Soc. 370 (2018), pp. 4785-4812] regarding the cardinalities of subvariety lattices, and by Bezhanishvili et al. [J. Math. Log. (2025), in press] regarding the degrees of the finite model property (fmp). By coding equations and formulas by natural numbers, and theories and logics by real numbers, we examine their position in the Borel hierarchy. We prove that every interval of equational theories in a countable language corresponds to a $\boldsymbol{\Pi}^0_1$ set, and every fmp span of a normal modal logic to a $\boldsymbol{\Pi}^0_2$ set. It follows that they have cardinality either $\leq \aleph_0$ or $2^{\aleph_0}$, provably in ZFC. In the same manner, we observe that the set of pretabular extensions of a tense logic is a $\boldsymbol{\Pi}^0_2$ set, so its cardinality is either $\leq \aleph_0$ or $2^{\aleph_0}$. We also point out a negative solution to another open problem raised by Jackson and Lee [Trans. Am. Math. Soc. 370 (2018), pp. 4785-4812] regarding the existence of independent systems, which relies on Ježek et al. [Bull. Aust. Math. Soc. 42 (1990), pp. 57-70].
toXiv_bot_toot
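
Background on the dichotomy step, assuming nothing beyond the abstract: once a class of theories is located at a Borel level of the hierarchy, the cardinality claim follows from the classical perfect set property for Borel sets. A sketch of the fact being invoked:

```latex
% Classical descriptive set theory fact (provable in ZFC), stated as
% background; per the abstract, the paper's contribution is showing the
% relevant classes (intervals of equational theories, fmp spans) are
% $\boldsymbol{\Pi}^0_1$ resp. $\boldsymbol{\Pi}^0_2$, after which this
% dichotomy applies.
\begin{theorem}[Perfect set property for Borel sets]
  Let $X$ be a Polish space and let $B \subseteq X$ be Borel (in
  particular $\boldsymbol{\Pi}^0_1$ or $\boldsymbol{\Pi}^0_2$). Then
  either $|B| \leq \aleph_0$, or $B$ contains a nonempty perfect subset
  and hence $|B| = 2^{\aleph_0}$.
\end{theorem}
```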

@arXiv_physicschemph_bot@mastoxiv.page
2026-03-27 09:59:36

Replaced article(s) found for physics.chem-ph. arxiv.org/list/physics.chem-ph
[1/1]:
- Split-Flows: Measure Transport and Information Loss Across Molecular Resolutions
Sander Hummerich, Tristan Bereau, Ullrich Köthe
arxiv.org/abs/2511.01464 mastoxiv.page/@arXiv_physicsch
- Quantum eigenvalue processing
Guang Hao Low, Yuan Su
arxiv.org/abs/2401.06240 mastoxiv.page/@arXiv_quantph_b
- Coupled Lindblad pseudomode theory for simulating open quantum systems
Zhen Huang, Gunhee Park, Garnet Kin-Lic Chan, Lin Lin
arxiv.org/abs/2506.10308 mastoxiv.page/@arXiv_quantph_b
- Cyclic- and helical-symmetry-adapted phonon formalism within density functional perturbation theory
Abhiraj Sharma, Phanish Suryanarayana
arxiv.org/abs/2601.08745 mastoxiv.page/@arXiv_condmatmt
- Bound Trions in Two-Dimensional Monolayers: A Review
Roman Ya. Kezerashvili
arxiv.org/abs/2603.08346 mastoxiv.page/@arXiv_condmatme
toXiv_bot_toot

@arXiv_mathAP_bot@mastoxiv.page
2026-02-10 14:58:34

Crosslisted article(s) found for math.AP. arxiv.org/list/math.AP/new
[1/1]:
- Stability and Convergence of Modal Approximations in Coupled Thermoelastic Systems: Theory and Si...
I. Essadeq, S. Nafiri, S. Benjelloun, A. E. Fettouh

@mgorny@social.treehouse.systems
2026-01-18 18:04:19

Cynicism, "AI"
I've been pointed to the "Reflections on 2025" post by Samuel Albanie [1]. The author's writing style makes it quite a fun read, I admit.
The first part, "The Compute Theory of Everything", is an optimistic piece on "#AI". Long story short, poor "AI researchers" have been struggling for years because of the predominant misconception that machines were already "powerful enough". Fortunately, now they can finally get their hands on the kind of power that used to be available only to supervillains, and all they have to do is forget about morals, agree that their research will be used to murder millions of people, and accept that a few million more will die as a side effect of the climate crisis. But I'm digressing.
The author is referring to an essay by Hans Moravec, "The Role of Raw Power in Intelligence" [2]. It's also quite an interesting read, starting with a chapter on how intelligence evolved independently at least four times. The key point inferred from that seems to be that all we need is more computing power, and we'll eventually "brute-force" all AI-related problems (or die trying, I guess).
As a disclaimer, I have to say I'm not a biologist, just a random guy who has read a fair number of pieces on evolution. And I feel like the analogies drawn here are misleading at best.
Firstly, there seems to be an assumption that evolution inexorably leads to higher "intelligence", along with an implicit assumption about what intelligence is. Per that assumption, any animal that gets "brainier" will eventually become intelligent. However, this misses the point that neither evolution nor learning operates in a void.
Yes, many animals did attain a certain level of intelligence, but they attained it through a long chain of development, while solving specific problems, in specific bodies, in specific environments. I don't think you can just stuff more brains into a random animal and expect it to attain human intelligence; and the same goes for a computer: you can't expect that, given more power, algorithms will eventually converge on human-like intelligence.
Secondly, and perhaps more importantly, what evolution actually achieved first is neural networks that are far more energy efficient than whatever computers are doing today. Even if "computing power" indeed paved the way for intelligence, what came first was extremely efficient "hardware". Nowadays, humans seem to be skipping that part. Optimizing is hard, so why bother with it? We can afford bigger data centers, we can afford to waste more energy, we can afford to deprive people of drinking water, so let's just skip to the easy part!
And on top of that, we're trying to squash hundreds of millions of years of evolution into… a decade, perhaps? What could possibly go wrong?
[1] #NoAI #NoLLM #LLM