Invariant Price of Anarchy: a Metric for Welfarist Traffic Control
Ilia Shilov, Mingjia He, Heinrich H. Nax, Emilio Frazzoli, Gioele Zardini, Saverio Bolognani
https://arxiv.org/abs/2512.05843 https://arxiv.org/pdf/2512.05843 https://arxiv.org/html/2512.05843
arXiv:2512.05843v1 Announce Type: new
Abstract: The Price of Anarchy (PoA) is a standard metric for quantifying inefficiency in socio-technical systems, widely used to guide policies such as traffic tolling. Conventional PoA analysis relies on exact numerical costs. In many settings, however, costs represent agents' preferences and may be defined only up to arbitrary scaling and shifting, reflecting informational and modeling ambiguities. We observe that while such transformations preserve equilibrium and optimal outcomes, they change the PoA value. To resolve this issue, we draw on results from Social Choice Theory and define the Invariant PoA. By connecting admissible transformations to degrees of comparability of agents' costs, we derive the specific social welfare functions that ensure efficiency evaluations do not depend on arbitrary rescalings or translations of individual costs. Case studies on a toy example and on the Zurich network demonstrate that identical tolling strategies can lead to substantially different efficiency estimates depending on the assumed comparability. Our framework thus shows that explicit axiomatic foundations are needed to define efficiency metrics and to guide policy robustly in large-scale infrastructure design.
On Dynamic Programming Theory for Leader-Follower Stochastic Games
Jilles Steeve Dibangoye, Thibaut Le Marre, Ocan Sankur, François Schwarzentruber
https://arxiv.org/abs/2512.05667 https://arxiv.org/pdf/2512.05667 https://arxiv.org/html/2512.05667
arXiv:2512.05667v1 Announce Type: new
Abstract: Leader-follower general-sum stochastic games (LF-GSSGs) model sequential decision-making under asymmetric commitment, where a leader commits to a policy and a follower best responds, yielding a strong Stackelberg equilibrium (SSE) with leader-favourable tie-breaking. This paper introduces a dynamic programming (DP) framework that applies Bellman recursion over credible sets (state abstractions formally representing all rational follower best responses under partial leader commitments) to compute SSEs. We first prove that any LF-GSSG admits a lossless reduction to a Markov decision process (MDP) over credible sets. We further establish that synthesising an optimal memoryless deterministic leader policy is NP-hard, motivating the development of $\epsilon$-optimal DP algorithms with provable guarantees on leader exploitability. Experiments on standard mixed-motive benchmarks, including security games, resource allocation, and adversarial planning, demonstrate empirical gains in leader value and runtime scalability over state-of-the-art methods.
Muddled.
'Value' has specific meanings in Marxist political economy.
It isn't the appropriate term for what Bollier is discussing.
'Wealth' is closer but still not right.
It's a matter of standpoint: the abundance of nature isn't purely for human use, so those two terms don't apply well, or at all.
Toward a New Theory of Value (and Meaning): Living Systems as Generative - resilience
S-D-RSM: Stochastic Distributed Regularized Splitting Method for Large-Scale Convex Optimization Problems
Maoran Wang, Xingju Cai, Yongxin Chen
https://arxiv.org/abs/2511.10133 https://arxiv.org/pdf/2511.10133 https://arxiv.org/html/2511.10133
arXiv:2511.10133v1 Announce Type: new
Abstract: This paper investigates large-scale distributed composite convex optimization problems, motivated by a broad range of applications including multi-agent systems, federated learning, smart grids, wireless sensor networks, and compressed sensing. Stochastic gradient descent (SGD) and its variants are commonly employed to solve such problems. However, existing algorithms often rely on vanishing step sizes, strong convexity assumptions, or substantial computational overhead to ensure convergence or obtain favorable complexity. To bridge the gap between theory and practice, we integrate consensus optimization and operator splitting techniques (see Problem Reformulation) to develop a novel stochastic splitting algorithm, termed the \emph{stochastic distributed regularized splitting method} (S-D-RSM). In practice, S-D-RSM performs parallel updates of proximal mappings and gradient information for only a randomly selected subset of agents at each iteration. By introducing regularization terms, it effectively mitigates consensus discrepancies among distributed nodes. In contrast to conventional stochastic methods, our theoretical analysis establishes that S-D-RSM achieves global convergence without requiring diminishing step sizes or strong convexity assumptions, and attains an iteration complexity of $\mathcal{O}(1/\epsilon)$ with respect to both the objective function value and the consensus error. Numerical experiments show that S-D-RSM achieves up to a 2--3$\times$ speedup over state-of-the-art baselines while maintaining comparable or better accuracy. These results validate the algorithm's theoretical guarantees and demonstrate its effectiveness in practical tasks such as compressed sensing and empirical risk minimization.
Replaced article(s) found for cs.GT. https://arxiv.org/list/cs.GT/new
[1/1]:
- Egyptian Ratscrew: Discovering Dominant Strategies with Computational Game Theory
Justin Diamond, Ben Garcia
https://arxiv.org/abs/2304.01007
- Truthful and Almost Envy-Free Mechanism of Allocating Indivisible Goods: the Power of Randomness
Xiaolin Bu, Biaoshuai Tao
https://arxiv.org/abs/2407.13634 https://mastoxiv.page/@arXiv_csGT_bot/112811955506293858
- Learning the Value of Value Learning
Alex John London, Aydin Mohseni
https://arxiv.org/abs/2511.17714 https://mastoxiv.page/@arXiv_csAI_bot/115609411461995995