Scott Adams, creator of the Dilbert comic strip, syndicated to ~2K newspapers at its peak and dropped after he made racist comments in 2023, has died at 68 (Richard Sandomir/New York Times)
https://www.nytimes.com/2026/01/13/arts/scott-adams-dead.html
It's always lovely to see the doubters get their asses academically kicked over Ada Lovelace's actual mathematical capabilities, but at the same time I am so, so, so tired. Just so very tired. From an open-access 2017 Historia Mathematica article debunking the idea that Lovelace was not a competent mathematician.
Ukrainian drones destroy 70% of Temryuk port fuel tanks in strike on key Russian supply hub: https://benborges.xyz/2025/12/09/ukrainian-drones-destroy-of-temryuk.html
Moody Urbanity - Relations VI 🧬
📷 Zeiss IKON Super Ikonta 533/16
🎞️ Ilford HP5 400, expired 1993
#filmphotography #Photography #blackandwhite
Determining the impact of post-main-sequence stellar evolution on the transiting giant planet population: #planets: https://ras.ac.uk/news-and-press/research-highlights/ageing-stars-may-be-destroying-their-closest-planets
RE: https://hachyderm.io/@thomasfuchs/115505473401296834
The worst launch was the first Starship.
It destroyed the launchpad during liftoff, went out of control a while later because of engine failures and loss of thrust vector control, and, as the cherry on top, the flight termination system designed to destroy it also failed.
This was declared a "success" by SpaceX.
Latest episode of Tyngre Träningssnack
What do we know about the relationship between muscle mass and strength? | Podcast https://tyngre.se/podcast/tyngre-traningssnack/vad-vet-man-kring-sambandet-mellan-muskelmassa-och-styrka
Multi-agent learning under uncertainty: Recurrence vs. concentration
Kyriakos Lotidis, Panayotis Mertikopoulos, Nicholas Bambos, Jose Blanchet
https://arxiv.org/abs/2512.08132 https://arxiv.org/pdf/2512.08132 https://arxiv.org/html/2512.08132
arXiv:2512.08132v1 Announce Type: new
Abstract: In this paper, we examine the convergence landscape of multi-agent learning under uncertainty. Specifically, we analyze two stochastic models of regularized learning in continuous games -- one in continuous and one in discrete time -- with the aim of characterizing the long-run behavior of the induced sequence of play. In stark contrast to deterministic, full-information models of learning (or models with a vanishing learning rate), we show that the resulting dynamics do not converge in general. In lieu of this, we ask instead which actions are played more often in the long run, and by how much. We show that, in strongly monotone games, the dynamics of regularized learning may wander away from equilibrium infinitely often, but they always return to its vicinity in finite time (which we estimate), and their long-run distribution is sharply concentrated around a neighborhood thereof. We quantify the degree of this concentration, and we show that these favorable properties may all break down if the underlying game is not strongly monotone -- underscoring in this way the limits of regularized learning in the presence of persistent randomness and uncertainty.
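The recurrence-vs-concentration picture in the abstract can be illustrated with a minimal numerical sketch. This is not the paper's model: the game matrix `A`, step size, and noise level below are illustrative assumptions. The toy game is strongly monotone (its symmetric part is positive definite), and a noisy gradient step with a constant step size shows the claimed behavior: the iterates never settle, but they spend almost all of their time in a small neighborhood of the equilibrium.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-player game on the real line: joint gradient field v(x) = -A x.
# A + A^T = 4*I is positive definite, so the game is strongly monotone,
# with its unique Nash equilibrium at the origin.
A = np.array([[2.0, 1.0],
              [-1.0, 2.0]])
x = np.array([3.0, -3.0])      # start far from equilibrium
gamma, sigma, T = 0.05, 1.0, 20_000  # constant step size, persistent noise

dist = np.empty(T)
for t in range(T):
    noise = sigma * rng.standard_normal(2)   # non-vanishing randomness
    x = x - gamma * (A @ x + noise)          # noisy gradient step
    dist[t] = np.linalg.norm(x)              # distance to equilibrium

burn = T // 4                                 # discard the transient
frac_near = np.mean(dist[burn:] < 1.0)       # long-run concentration
still_moving = np.std(dist[burn:])           # iterates keep fluctuating
print(f"fraction of time within distance 1 of equilibrium: {frac_near:.2f}")
print(f"std of distance in the long run: {still_moving:.3f}")
```

With a vanishing noise or step size the iterates would converge; here they keep jittering forever (nonzero long-run standard deviation), yet the empirical distribution of play is sharply concentrated near the equilibrium, which is the qualitative dichotomy the abstract describes for strongly monotone games.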
I don't appreciate the people who mock this poor man. Every year there are countless mass sandwichings in schools throughout the country, and we tolerate it because we, as a society, have lost our way. Kids and teachers huddled in their classrooms as mustard and onions fly through the air. WHERE'S YOUR HUMANITY, PEOPLE??
https…