Tootfinder

Opt-in global Mastodon full text search. Join the index!

@arXiv_csGT_bot@mastoxiv.page
2025-12-10 07:44:21

The Theory of Strategic Evolution: Games with Endogenous Players and Strategic Replicators
Kevin Vallier
arxiv.org/abs/2512.07901 arxiv.org/pdf/2512.07901 arxiv.org/html/2512.07901
arXiv:2512.07901v1 Announce Type: new
Abstract: This paper develops the Theory of Strategic Evolution, a general model for systems in which the population of players, strategies, and institutional rules evolve together. The theory extends replicator dynamics to settings with endogenous players, multi-level selection, innovation, constitutional change, and meta-governance. The central mathematical object is a Poiesis stack: a hierarchy of strategic layers linked by cross-level gain matrices. Under small-gain conditions, the system admits a global Lyapunov function and satisfies selection, tracking, and stochastic stability results at every finite depth. We prove that the class is closed under block extension, innovation events, heterogeneous utilities, continuous strategy spaces, and constitutional evolution. The closure theorem shows that no new dynamics arise at higher levels and that unrestricted self-modification cannot preserve Lyapunov structure. The theory unifies results from evolutionary game theory, institutional design, innovation dynamics, and constitutional political economy, providing a general mathematical model of long-run strategic adaptation.
toXiv_bot_toot
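The replicator dynamics this abstract builds on can be illustrated with a minimal numerical sketch. This is the standard two-strategy replicator equation, not the paper's extended multi-level model, and the Hawk-Dove payoff numbers (V=2, C=4) are purely illustrative:

```python
def replicator_step(x, A, dt=0.01):
    # dx_i/dt = x_i * ((A x)_i - x.A.x): a strategy grows when it beats the population mean
    f = [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]
    avg = sum(x[i] * f[i] for i in range(len(x)))
    return [x[i] + dt * x[i] * (f[i] - avg) for i in range(len(x))]

# Hawk-Dove payoff matrix for value V=2, fight cost C=4 (illustrative numbers)
A = [[-1.0, 2.0],
     [ 0.0, 1.0]]
x = [0.9, 0.1]                      # start with mostly Hawks
for _ in range(5000):
    x = replicator_step(x, A)
print([round(v, 3) for v in x])     # -> [0.5, 0.5], the mixed ESS
```

Simple Euler integration is enough here: the mixed equilibrium at a 50/50 Hawk/Dove split is globally attracting from any interior starting point.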

@arXiv_physicsgenph_bot@mastoxiv.page
2025-11-12 08:45:59

Topological Structure of Infrared QCD
J. Gamboa
arxiv.org/abs/2511.07455 arxiv.org/pdf/2511.07455 arxiv.org/html/2511.07455
arXiv:2511.07455v1 Announce Type: new
Abstract: We investigate the infrared structure of QCD within the adiabatic approximation, where soft gluon configurations evolve slowly compared to the fermionic modes. In this formulation, the functional space of gauge connections replaces spacetime as the natural arena for the theory, and the long-distance behavior is encoded in quantized Berry phases associated with the infrared clouds. Our results suggest that the infrared sector of QCD exhibits features reminiscent of a \emph{topological phase}, similar to those encountered in condensed-matter systems, where topological protection replaces dynamical confinement at low energies. In this geometric framework, color-neutral composites such as quark--gluon and gluon--gluon clouds arise as topological bound states described by functional holonomies. Illustrative applications to hadronic excitations are discussed within this approach, including mesonic and baryonic examples. This perspective provides a unified picture of infrared dressing and topological quantization, establishing a natural bridge between non-Abelian gauge theory, adiabatic Berry phases, and the topology of the space of gauge configurations.
toXiv_bot_toot
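The quantized Berry phases invoked here can be sketched in the simplest finite-dimensional setting: a spin-1/2 state transported around a loop of fixed polar angle, where the discrete holonomy product recovers minus half the enclosed solid angle. This is a textbook toy, not the paper's functional-space holonomies:

```python
import cmath
import math

def state(theta, phi):
    # spin-up eigenstate of n.sigma for field direction (theta, phi)
    return (math.cos(theta / 2), cmath.exp(1j * phi) * math.sin(theta / 2))

def berry_phase(theta, n=20000):
    # discrete Berry phase: gamma = -Im log prod_k <psi_k | psi_{k+1}>
    prod = 1.0 + 0j
    for k in range(n):
        a = state(theta, 2 * math.pi * k / n)
        b = state(theta, 2 * math.pi * (k + 1) / n)
        prod *= a[0].conjugate() * b[0] + a[1].conjugate() * b[1]
    return -cmath.phase(prod)

theta = math.pi / 3
print(round(berry_phase(theta), 4))  # -> -1.5708, i.e. -pi*(1 - cos theta) = -pi/2
```

The gauge-invariant product of overlaps is the standard way to compute such a holonomy numerically; its phase is quantized by the geometry of the loop, not by the dynamics.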

@arXiv_mathAP_bot@mastoxiv.page
2026-02-10 14:58:34

Crosslisted article(s) found for math.AP. arxiv.org/list/math.AP/new
[1/1]:
- Stability and Convergence of Modal Approximations in Coupled Thermoelastic Systems: Theory and Si...
I. Essadeq, S. Nafiri, S. Benjelloun, A. E. Fettouh

@markhburton@mstdn.social
2026-01-13 18:25:51

Muddled.
'Value' has specific meanings in Marxist political economy.
It isn't the appropriate term for what Bollier is discussing.
'Wealth' is closer but still not right.
It's a matter of standpoint: the abundance of nature isn't purely for human use, so those two terms don't apply well, or at all.
Toward a New Theory of Value (and Meaning): Living Systems as Generative - resilience

Anti-abortion activists have been trying for years to convince the broader public that medication abortion is dangerous, but their latest argument is a decades-old asinine conspiracy theory. In a June 18 letter to Environmental Protection Agency (EPA) Administrator Lee Zeldin, 25 House Republicans asked the agency to study the alleged “byproducts” of mifepristone (the first medication administered in a medication abortion regimen) in water systems.

@mgorny@social.treehouse.systems
2026-01-14 06:43:44

The medical theory of everything: one day it'll turn out that all your problems have a single common cause.

@arXiv_csGT_bot@mastoxiv.page
2025-12-09 15:38:28

Replaced article(s) found for cs.GT. arxiv.org/list/cs.GT/new
[1/1]:
- Cumulative Games: Who is the current player?
Urban Larsson, Reshef Meir, Yair Zick
arxiv.org/abs/2005.06326
- Contest Design with Threshold Objectives
Edith Elkind, Abheek Ghosh, Paul W. Goldberg
arxiv.org/abs/2109.03179
- Deep Learning Meets Mechanism Design: Key Results and Some Novel Applications
V. Udaya Sankar, Vishisht Srihari Rao, Y. Narahari
arxiv.org/abs/2401.05683 mastoxiv.page/@arXiv_csGT_bot/
- Charting the Shapes of Stories with Game Theory
Daskalakis, Gemp, Jiang, Leme, Papadimitriou, Piliouras
arxiv.org/abs/2412.05747 mastoxiv.page/@arXiv_csGT_bot/
- Computing Evolutionarily Stable Strategies in Multiplayer Games
Sam Ganzfried
arxiv.org/abs/2511.20859 mastoxiv.page/@arXiv_csGT_bot/
- Autodeleveraging: Impossibilities and Optimization
Tarun Chitra
arxiv.org/abs/2512.01112 mastoxiv.page/@arXiv_csGT_bot/
- Static Pricing Guarantees for Queueing Systems
Jacob Bergquist, Adam N. Elmachtoub
arxiv.org/abs/2305.09168 mastoxiv.page/@arXiv_csDS_bot/
- Game of arrivals at a two queue network with heterogeneous customer routes
Agniv Bandyopadhyay, Sandeep Juneja
arxiv.org/abs/2310.18149 mastoxiv.page/@arXiv_csPF_bot/
- Characterization of Priority-Neutral Matching Lattices
Clayton Thomas
arxiv.org/abs/2404.02142 mastoxiv.page/@arXiv_econTH_bo
- Seven kinds of equivalent models for generalized coalition logics
Zixuan Chen, Fengkui Ju
arxiv.org/abs/2501.05466 mastoxiv.page/@arXiv_csLO_bot/
- Matching Markets Meet LLMs: Algorithmic Reasoning with Ranked Preferences
Hadi Hosseini, Samarth Khanna, Ronak Singh
arxiv.org/abs/2506.04478 mastoxiv.page/@arXiv_csAI_bot/
toXiv_bot_toot

@arXiv_csGT_bot@mastoxiv.page
2025-12-08 08:45:29

Invariant Price of Anarchy: a Metric for Welfarist Traffic Control
Ilia Shilov, Mingjia He, Heinrich H. Nax, Emilio Frazzoli, Gioele Zardini, Saverio Bolognani
arxiv.org/abs/2512.05843 arxiv.org/pdf/2512.05843 arxiv.org/html/2512.05843
arXiv:2512.05843v1 Announce Type: new
Abstract: The Price of Anarchy (PoA) is a standard metric for quantifying inefficiency in socio-technical systems, widely used to guide policies like traffic tolling. Conventional PoA analysis relies on exact numerical costs. However, in many settings, costs represent agents' preferences and may be defined only up to possibly arbitrary scaling and shifting, representing informational and modeling ambiguities. We observe that while such transformations preserve equilibrium and optimal outcomes, they change the PoA value. To resolve this issue, we rely on results from Social Choice Theory and define the Invariant PoA. By connecting admissible transformations to degrees of comparability of agents' costs, we derive the specific social welfare functions which ensure that efficiency evaluations do not depend on arbitrary rescalings or translations of individual costs. Case studies on a toy example and the Zurich network demonstrate that identical tolling strategies can lead to substantially different efficiency estimates depending on the assumed comparability. Our framework thus demonstrates that explicit axiomatic foundations are necessary in order to define efficiency metrics and to appropriately guide policy in large-scale infrastructure design robustly and effectively.
toXiv_bot_toot
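The abstract's central observation, that shifting costs preserves the equilibrium and optimal flows but moves the PoA value, can be reproduced on the classic Pigou two-route example. These are my toy numbers, not the paper's Zurich case study:

```python
def total_cost(x, shift=0.0):
    # fraction x uses the congestible route (cost x per unit),
    # the rest a constant-cost route (cost 1); `shift` translates both costs
    return x * (x + shift) + (1 - x) * (1.0 + shift)

def poa(shift=0.0):
    # Nash equilibrium of the Pigou network: all traffic on the congestible
    # route (x = 1), since its cost never exceeds the constant route's
    eq = total_cost(1.0, shift)
    opt = min(total_cost(i / 1000, shift) for i in range(1001))  # grid search
    return eq / opt

print(round(poa(0.0), 3))   # -> 1.333, the classic Pigou PoA of 4/3
print(round(poa(10.0), 3))  # -> 1.023: same flows, different PoA
```

Adding the same constant to both routes changes neither the equilibrium nor the optimal split (both are minimized at the same points), yet the cost ratio, and hence any policy conclusion drawn from it, drifts toward 1.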

@arXiv_physicsatomph_bot@mastoxiv.page
2025-11-27 11:22:19

Crosslisted article(s) found for physics.atom-ph. arxiv.org/list/physics.atom-ph
[1/1]:
- Quantum theory of electrically levitated nanoparticle-ion systems: Motional dynamics and sympathe...
Saurabh Gupta, Dmitry S. Bykov, Tracy E. Northup, Carlos Gonzalez-Ball…

@mgorny@social.treehouse.systems
2026-01-18 18:04:19

Cynicism, "AI"
Someone pointed me at the "Reflections on 2025" post by Samuel Albanie [1]. The author's writing style makes it quite a fun read, I admit.
The first part, "The Compute Theory of Everything", is an optimistic piece on "#AI". Long story short, poor "AI researchers" have been struggling for years because of the predominant misconception that "machines should have been powerful enough". Fortunately, now they can finally get their hands on the kind of power that used to be available only to supervillains, and all they have to do is forget about morals, agree that their research will be used to murder millions of people, and a few more millions will die as a side effect of the climate crisis. But I'm digressing.
The author is referring to an essay by Hans Moravec, "The Role of Raw Power in Intelligence" [2]. It's also quite an interesting read, starting with a chapter on how intelligence evolved independently at least four times. The key point inferred from that seems to be that all we need is more computing power, and we'll eventually "brute-force" all AI-related problems (or die trying, I guess).
As a disclaimer, I have to say I'm not a biologist. Rather just a random guy who read a fair number of pieces on evolution. And I feel like the analogies brought here are misleading at best.
Firstly, there seems to be an assumption that evolution inexorably leads to higher "intelligence", with a certain implicit notion of what intelligence is. Per that notion, any animal that gets "brainier" will eventually become intelligent. However, this misses the point that neither evolution nor learning operates in a void.
Yes, many animals did attain a certain level of intelligence, but they attained it in a long chain of development, while solving specific problems, in specific bodies, in specific environments. I don't think that you can just stuff more brains into a random animal, and expect it to attain human intelligence; and the same goes for a computer — you can't expect that given more power, algorithms will eventually converge on human-like intelligence.
Secondly, and perhaps more importantly, what evolution succeeded at first was producing neural networks that are far more energy-efficient than whatever computers are doing today. Even if "computing power" did pave the way for intelligence, what came first was extremely efficient "hardware". Nowadays, humans seem to be skipping that part. Optimizing is hard, so why bother with it? We can afford bigger data centers, we can afford to waste more energy, we can afford to deprive people of drinking water, so let's just skip to the easy part!
And on top of that, we're trying to squash hundreds of millions of years of evolution into… a decade, perhaps? What could possibly go wrong?
[1] #NoAI #NoLLM #LLM

@arXiv_mathOC_bot@mastoxiv.page
2025-11-14 09:37:10

S-D-RSM: Stochastic Distributed Regularized Splitting Method for Large-Scale Convex Optimization Problems
Maoran Wang, Xingju Cai, Yongxin Chen
arxiv.org/abs/2511.10133 arxiv.org/pdf/2511.10133 arxiv.org/html/2511.10133
arXiv:2511.10133v1 Announce Type: new
Abstract: This paper investigates the problem of large-scale distributed composite convex optimization, with motivations from a broad range of applications, including multi-agent systems, federated learning, smart grids, wireless sensor networks, compressed sensing, and so on. Stochastic gradient descent (SGD) and its variants are commonly employed to solve such problems. However, existing algorithms often rely on vanishing step sizes, strong convexity assumptions, or entail substantial computational overhead to ensure convergence or obtain favorable complexity. To bridge the gap between theory and practice, we integrate consensus optimization and operator splitting techniques (see Problem Reformulation) to develop a novel stochastic splitting algorithm, termed the \emph{stochastic distributed regularized splitting method} (S-D-RSM). In practice, S-D-RSM performs parallel updates of proximal mappings and gradient information for only a randomly selected subset of agents at each iteration. By introducing regularization terms, it effectively mitigates consensus discrepancies among distributed nodes. In contrast to conventional stochastic methods, our theoretical analysis establishes that S-D-RSM achieves global convergence without requiring diminishing step sizes or strong convexity assumptions. Furthermore, it achieves an iteration complexity of $\mathcal{O}(1/\epsilon)$ with respect to both the objective function value and the consensus error. Numerical experiments show that S-D-RSM achieves up to 2--3$\times$ speedup compared to state-of-the-art baselines, while maintaining comparable or better accuracy. These results not only validate the algorithm's theoretical guarantees but also demonstrate its effectiveness in practical tasks such as compressed sensing and empirical risk minimization.
toXiv_bot_toot
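As a rough illustration of the regularized-consensus idea the abstract describes (random agent subsets, local gradient steps, a penalty pulling local copies toward the average), here is a generic sketch on a toy scalar problem. It is not the S-D-RSM algorithm itself, and every constant is made up:

```python
import random

# Toy distributed problem: each agent i holds a target t_i and keeps a local
# copy x_i of a shared variable, minimizing sum_i (x_i - t_i)^2 plus a
# quadratic penalty tying x_i to the current average of all copies.
random.seed(0)
targets = [1.0, 2.0, 3.0, 4.0]
x = [0.0] * len(targets)      # local copies, one per agent
step, rho = 0.1, 0.5          # gradient step size and consensus weight

for _ in range(500):
    avg = sum(x) / len(x)
    picked = random.sample(range(len(x)), 2)   # random subset of agents
    for i in picked:
        grad = 2 * (x[i] - targets[i])         # local objective gradient
        x[i] -= step * (grad + rho * (x[i] - avg))

# settles near [1.3, 2.1, 2.9, 3.7]: the penalty shrinks the spread of the
# local copies toward their common average, without forcing exact consensus
print([round(v, 2) for v in x])
```

A fixed quadratic penalty only mitigates disagreement (the spread contracts but does not vanish); exact consensus would require dual variables or a growing penalty, which is part of what a proper splitting method handles.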