Peuty: Handbag with an OLED display
The Frenchman Richard Peuty has unveiled a handbag with an OLED display, app connectivity, and wireless charging, offering interchangeable designs to match the outfit.
https://www.…
Despising machines to a man,
The Luddites joined up with the Klan,
And ride out by night
In a sheeting of white
To lynch all the robots they can.
-- C. M. and G. A. Maxson
I just had a "Decomputing" talk accepted by a very tech-focused business conference (as a keynote, even), and a labor rights organization that asked me to propose a talk instantly went for the "The Luddites were right" suggestion. Two things that, I think, would not have been possible a few months ago.
Things are changing; the dire state of the world sometimes allows new narratives to punch through.
🇺🇦 #NowPlaying on KEXP's #Continent
LuuDadeejay:
🎵 Bacardi via Sgidongo
#LuuDadeejay
https://open.spotify.com/track/0XUkBLaeXu5mkyyq0gSpwY
🇺🇦 #NowPlaying on BBCRadio3's #NightTracks
Leo Todd Johnson:
🎵 wa
#LeoToddJohnson
https://open.spotify.com/track/39yjvsMkyBPpUMuVyrSIvZ
Please 🔁 BOOST to share what you like
- your followers don't see it if you ⭐ favourite a post
Robust equilibria in continuous games: From strategic to dynamic robustness
Kyriakos Lotidis, Panayotis Mertikopoulos, Nicholas Bambos, Jose Blanchet
https://arxiv.org/abs/2512.08138 https://arxiv.org/pdf/2512.08138 https://arxiv.org/html/2512.08138
arXiv:2512.08138v1 Announce Type: new
Abstract: In this paper, we examine the robustness of Nash equilibria in continuous games, under both strategic and dynamic uncertainty. Starting with the former, we introduce the notion of robust equilibria as those equilibria that remain invariant to small -- but otherwise arbitrary -- perturbations to the game's payoff structure, and we provide a crisp geometric characterization thereof. Subsequently, we turn to the question of dynamic robustness, and we examine which equilibria may arise as stable limit points of the dynamics of "follow the regularized leader" (FTRL) in the presence of randomness and uncertainty. Despite their very distinct origins, we establish a structural correspondence between these two notions of robustness: strategic robustness implies dynamic robustness, and, conversely, the requirement of strategic robustness cannot be relaxed if dynamic robustness is to be maintained. Finally, we examine the rate of convergence to robust equilibria as a function of the underlying regularizer, and we show that entropically regularized learning converges at a geometric rate in games with affinely constrained action spaces.
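For readers who want to see the FTRL scheme above in action, here is a minimal sketch in Python. It is not the authors' code: the coordination game, payoff-noise level, and horizon are illustrative assumptions. Each player follows the regularized leader with an entropic regularizer, which yields the exponential-weights (logit) choice map, and the play converges to a strict equilibrium, the kind that survives small payoff perturbations.

```python
# Sketch only: FTRL with an entropic regularizer ("exponential weights")
# in a 2x2 common-interest game.  The payoff matrices, noise level,
# and horizon are illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Common-interest coordination game: the pure profiles (0,0) and (1,1)
# are strict Nash equilibria, hence robust to small payoff perturbations.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])   # row player's payoffs
B = A.copy()                 # column player gets the same payoffs

def logit(y):
    """Entropic choice map: argmax of <y, x> - sum_i x_i log x_i."""
    z = np.exp(y - y.max())  # shift for numerical stability
    return z / z.sum()

yA = np.zeros(2)  # cumulative payoff scores ("leader" statistics)
yB = np.zeros(2)
sigma, T = 0.5, 500  # payoff-noise level, number of rounds

for _ in range(T):
    xA, xB = logit(yA), logit(yB)
    # Noisy payoff vectors: the expected payoff of each pure action,
    # observed under additive randomness (the abstract's "uncertainty").
    yA += A @ xB + sigma * rng.standard_normal(2)
    yB += B.T @ xA + sigma * rng.standard_normal(2)

print(logit(yA), logit(yB))  # both close to [1, 0]: the strict equilibrium
```

Because the score gap between the two actions grows linearly while the logit map suppresses the trailing action exponentially, the mixed strategies approach the strict equilibrium at a geometric rate, in line with the entropic-regularization result quoted in the abstract.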
🇺🇦 #NowPlaying on KEXP's #DriveTime
Lou Tides:
🎵 Autostatic!
#LouTides
https://loutides.bandcamp.com/album/autostatic
https://open.spotify.com/track/29wF2QpiN9e9wc4gWhcZ1b
Multi-agent learning under uncertainty: Recurrence vs. concentration
Kyriakos Lotidis, Panayotis Mertikopoulos, Nicholas Bambos, Jose Blanchet
https://arxiv.org/abs/2512.08132 https://arxiv.org/pdf/2512.08132 https://arxiv.org/html/2512.08132
arXiv:2512.08132v1 Announce Type: new
Abstract: In this paper, we examine the convergence landscape of multi-agent learning under uncertainty. Specifically, we analyze two stochastic models of regularized learning in continuous games -- one in continuous and one in discrete time -- with the aim of characterizing the long-run behavior of the induced sequence of play. In stark contrast to deterministic, full-information models of learning (or models with a vanishing learning rate), we show that the resulting dynamics do not converge in general. Instead, we ask which actions are played more often in the long run, and by how much. We show that, in strongly monotone games, the dynamics of regularized learning may wander away from equilibrium infinitely often, but they always return to its vicinity in finite time (which we estimate), and their long-run distribution is sharply concentrated around a neighborhood thereof. We quantify the degree of this concentration, and we show that these favorable properties may all break down if the underlying game is not strongly monotone -- underscoring in this way the limits of regularized learning in the presence of persistent randomness and uncertainty.
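The recurrence-versus-concentration dichotomy admits a small numerical illustration. The sketch below is not the paper's model; the quadratic costs, constant step size, and Gaussian noise are assumptions chosen for simplicity. In this strongly monotone game the noisy iterates never converge, yet they spend almost all of their time in a small neighborhood of the unique Nash equilibrium at the origin.

```python
# Sketch only: noisy gradient play in a strongly monotone two-player
# game.  Costs, step size, and noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def v(x):
    # Individual gradients of f1(x) = x1**2 + x1*x2 and
    # f2(x) = x2**2 + x1*x2; the game Jacobian [[2, 1], [1, 2]] is
    # positive definite, so the game is strongly monotone with a
    # unique Nash equilibrium at the origin.
    return np.array([2 * x[0] + x[1], x[0] + 2 * x[1]])

x = np.array([3.0, -2.0])  # start well away from equilibrium
gamma, sigma, T, radius = 0.05, 1.0, 20_000, 0.5
hits = 0

for _ in range(T):
    noise = sigma * rng.standard_normal(2)
    x = x - gamma * (v(x) + noise)  # constant-step stochastic play
    hits += np.linalg.norm(x) < radius

# With persistent noise the trajectory keeps leaving any fixed
# neighborhood, but the long-run fraction of time spent near the
# equilibrium stays close to one.
print(f"fraction of rounds within {radius} of equilibrium: {hits / T:.2f}")
```

Changing the coupling so that the game Jacobian is no longer positive definite destroys this concentration, which is the breakdown outside strongly monotone games that the abstract cautions about.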