Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@heiseonline@social.heise.de
2026-01-06 14:57:00

Peuty: Handbag with OLED display
Frenchman Richard Peuty has unveiled a handbag with an OLED display, app connectivity and wireless charging, offering swappable designs to match the outfit.

@erikdelareguera@mastodon.nu
2025-12-12 21:46:31

Reporting from gang-violence-plagued Marseille. dn.se/varlden/har-riktas-valde

@fortune@social.linux.pizza
2025-11-14 22:00:03

Despising machines to a man,
The Luddites joined up with the Klan,
And ride out by night
In a sheeting of white
To lynch all the robots they can.
-- C. M. and G. A. Maxson

@tante@tldr.nettime.org
2025-12-07 16:34:24

I just got a "Decomputing" talk accepted by a very tech-focused business conference (as a keynote, even), and a labor rights organization that wanted me to propose a talk instantly went for the "The Luddites were right" suggestion. Two things that, I think, would not have been possible a few months ago.
Things are changing; the dire state of the world sometimes allows new narratives to punch through.

@kexpmusicbot@mastodonapp.uk
2025-11-29 03:40:47

🇺🇦 #NowPlaying on KEXP's #Continent
LuuDadeejay:
🎵 Bacardi via Sgidongo
#LuuDadeejay
open.spotify.com/track/0XUkBLa

@BBC3MusicBot@mastodonapp.uk
2025-12-02 22:09:12

🇺🇦 #NowPlaying on BBCRadio3's #NightTracks
Leo Todd Johnson:
🎵 wa
#LeoToddJohnson
open.spotify.com/track/39yjvsM
Please 🔁 BOOST to share what you like
- your followers don't see it if you ⭐ favourite a post

@arXiv_csGT_bot@mastoxiv.page
2025-12-10 08:54:21

Robust equilibria in continuous games: From strategic to dynamic robustness
Kyriakos Lotidis, Panayotis Mertikopoulos, Nicholas Bambos, Jose Blanchet
arxiv.org/abs/2512.08138 arxiv.org/pdf/2512.08138 arxiv.org/html/2512.08138
arXiv:2512.08138v1 Announce Type: new
Abstract: In this paper, we examine the robustness of Nash equilibria in continuous games, under both strategic and dynamic uncertainty. Starting with the former, we introduce the notion of a robust equilibrium as those equilibria that remain invariant to small -- but otherwise arbitrary -- perturbations to the game's payoff structure, and we provide a crisp geometric characterization thereof. Subsequently, we turn to the question of dynamic robustness, and we examine which equilibria may arise as stable limit points of the dynamics of "follow the regularized leader" (FTRL) in the presence of randomness and uncertainty. Despite their very distinct origins, we establish a structural correspondence between these two notions of robustness: strategic robustness implies dynamic robustness, and, conversely, the requirement of strategic robustness cannot be relaxed if dynamic robustness is to be maintained. Finally, we examine the rate of convergence to robust equilibria as a function of the underlying regularizer, and we show that entropically regularized learning converges at a geometric rate in games with affinely constrained action spaces.
toXiv_bot_toot
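
To make the "follow the regularized leader" (FTRL) dynamics mentioned in the abstract more concrete, here is a minimal sketch (my illustration, not the paper's code) of entropically regularized FTRL, i.e. exponential weights, in a 2x2 game with a strict equilibrium. The payoff matrix and learning rate are assumptions chosen for illustration; the point is that play concentrates on the strict (and hence robust) equilibrium at a geometric rate, in line with the abstract's claim about entropic regularization.

```python
# Sketch: entropic FTRL (exponential weights) in a 2x2 symmetric game.
# Assumed, illustrative setup: prisoner's-dilemma-style payoffs where
# "Defect" (index 1) strictly dominates, so (D, D) is a strict equilibrium.
import numpy as np

A = np.array([[3.0, 0.0],    # row = own action, column = opponent's action
              [5.0, 1.0]])

eta = 0.2                    # learning rate (hypothetical value)
y1 = np.zeros(2)             # cumulative payoff scores, player 1
y2 = np.zeros(2)             # cumulative payoff scores, player 2

def choice(y):
    # entropic-FTRL choice map: softmax of the (scaled) cumulative scores
    z = np.exp(eta * (y - y.max()))
    return z / z.sum()

for _ in range(200):
    x1, x2 = choice(y1), choice(y2)
    y1 = y1 + A @ x2         # expected payoff of each action vs. opponent's mix
    y2 = y2 + A @ x1         # symmetric game: same map for player 2

print("player 1 mix:", choice(y1))   # both mixes collapse onto "Defect",
print("player 2 mix:", choice(y2))   # with the off-equilibrium weight decaying geometrically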

@kexpmusicbot@mastodonapp.uk
2025-11-21 00:56:11

🇺🇦 #NowPlaying on KEXP's #DriveTime
Lou Tides:
🎵 Autostatic!
#LouTides
loutides.bandcamp.com/album/au
open.spotify.com/track/29wF2Qp

@kexpmusicbot@mastodonapp.uk
2025-12-27 03:29:10

🇺🇦 #NowPlaying on KEXP's #Continent
Luuddadeejay:
🎵 Bacardi Via Sgidongo
#Luuddadeejay
open.spotify.com/track/0XUkBLa

@arXiv_csGT_bot@mastoxiv.page
2025-12-10 08:00:50

Multi-agent learning under uncertainty: Recurrence vs. concentration
Kyriakos Lotidis, Panayotis Mertikopoulos, Nicholas Bambos, Jose Blanchet
arxiv.org/abs/2512.08132 arxiv.org/pdf/2512.08132 arxiv.org/html/2512.08132
arXiv:2512.08132v1 Announce Type: new
Abstract: In this paper, we examine the convergence landscape of multi-agent learning under uncertainty. Specifically, we analyze two stochastic models of regularized learning in continuous games -- one in continuous and one in discrete time -- with the aim of characterizing the long-run behavior of the induced sequence of play. In stark contrast to deterministic, full-information models of learning (or models with a vanishing learning rate), we show that the resulting dynamics do not converge in general. In lieu of this, we ask instead which actions are played more often in the long run, and by how much. We show that, in strongly monotone games, the dynamics of regularized learning may wander away from equilibrium infinitely often, but they always return to its vicinity in finite time (which we estimate), and their long-run distribution is sharply concentrated around a neighborhood thereof. We quantify the degree of this concentration, and we show that these favorable properties may all break down if the underlying game is not strongly monotone -- underscoring in this way the limits of regularized learning in the presence of persistent randomness and uncertainty.
toXiv_bot_toot
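
As an illustration of the recurrence-vs-concentration picture described in the abstract, here is a small sketch (my own example, not the paper's model; the game, step size and noise level are assumptions) of noisy gradient play in a strongly monotone two-player game. The iterates never settle down under persistent noise, but they keep returning to, and spend most of their time near, the unique equilibrium.

```python
# Sketch: stochastic (Euclidean-regularized) gradient play in a strongly
# monotone game. Assumed costs: cost_i(x) = x_i^2/2 + x_1*x_2/4, so the
# gradient field has Jacobian [[1, .25], [.25, 1]] (positive definite) and
# the unique Nash equilibrium sits at the origin.
import numpy as np

rng = np.random.default_rng(0)
gamma, sigma = 0.05, 1.0         # step size and noise level (hypothetical values)
x = np.array([2.0, -2.0])        # start far from equilibrium

def v(x):
    # players' individual cost gradients
    return np.array([x[0] + 0.25 * x[1], x[1] + 0.25 * x[0]])

dists = []
for _ in range(20000):
    g = v(x) + sigma * rng.standard_normal(2)   # noisy payoff-gradient feedback
    x = x - gamma * g                           # constant-step gradient update
    dists.append(np.linalg.norm(x))

dists = np.array(dists[5000:])                  # discard the initial transient
print("largest excursion from equilibrium:", dists.max())
print("fraction of time within radius 0.5:", (dists < 0.5).mean())
```

Raising sigma or gamma widens the neighborhood in which the long-run distribution concentrates, while dropping the strong monotonicity (e.g. flipping the sign of the quadratic terms) destroys the concentration altogether, which is the failure mode the abstract warns about.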