Adrienne Jones-McAllister told News 12 Long Island that she was undergoing an MRI on her knee
when she asked the technician to get her husband, Keith McAllister, to help her get off the table.
She said she called out to him.
The man, 61, entered the MRI room while a scan was underway.
The machine’s strong magnetic force drew him in by the metallic chain around his neck, according to a release from the Nassau County Police Department.
He died Thursday afternoon.
Back in the day™, many leftists liked to say¹:² "If elections could change anything³, they would have been banned long ago." Today we see that elections do change things,⁴ and that capitalism reacts to disruptive changes⁵ with fascism.
Which simultaneously refutes and confirms that bold thesis from back then.
__
¹that is, in the last millennium
²and felt really cool and clued-in doing it
³existing power structures
⁴and in which direction
⁵including by means of elections
On the #CSD discussion: There are some things where it is not "neutral" to withhold support from them. There are things where support is the neutral stance, the expected standard. Human rights are among them, as are democracy and equal rights.
So anyone who finds a CSD too political, too closely aligned with one political camp, thereby shows that they are precisely not neutral. Equal rights…
I agree!
#Anarchy #Anarchism
AI, AGI, and learning efficiency
My 4-month-old kid is not DDoSing Wikipedia right now, nor will they ever do so before learning to speak, read, or write. Their entire "training corpus" will not top even 100 million "tokens" before they can speak & understand language, and do so with real intentionality.
Just to emphasize that point: 100 words-per-minute times 60 minutes-per-hour times 12 hours-per-day times 365 days-per-year times 4 years is a mere 105,120,000 words. That's a ludicrously *high* estimate of words-per-minute and hours-per-day, and 4 years old (the age of my other kid) is well after basic speech capabilities have developed in many children. More likely the available "training data" is at least 1 or 2 orders of magnitude less than this.
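For anyone who wants to check that arithmetic, here is a minimal sketch of the same back-of-envelope estimate in Python; the constants are the post's deliberately generous assumptions, not measurements:

```python
# Back-of-envelope upper bound on a child's language "training corpus",
# using the post's deliberately generous assumptions.
WORDS_PER_MINUTE = 100   # ludicrously high sustained rate of heard speech
HOURS_PER_DAY = 12       # every waking hour filled with speech
DAYS_PER_YEAR = 365
YEARS = 4                # well after basic speech develops in many children

words = WORDS_PER_MINUTE * 60 * HOURS_PER_DAY * DAYS_PER_YEAR * YEARS
print(f"{words:,} words")  # -> 105,120,000 words, i.e. roughly 1e8

# LLM pretraining corpora run to billions of tokens and beyond, so even
# this overestimate leaves a gap of at least an order of magnitude.
```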
The point here is that large language models, trained as they are on multiple *billions* of tokens, are not developing their behavioral capabilities in a way that's remotely similar to humans, even if you believe those capabilities are similar (they are by certain very biased ways of measurement; they very much aren't by others). This idea that humans must be naturally good at acquiring language is an old one (see e.g. …).

#AI #LLM #AGI
Fractal dimensions of complex networks: advocating for a topological approach
Rayna Andreeva, Haydée Contreras-Peruyero, Sanjukta Krishnagopal, Nina Otter, Maria Antonietta Pascali, Elizabeth Thompson
https://arxiv.org/abs/2506.15236
Pushing the Limits of Safety: A Technical Report on the ATLAS Challenge 2025
Zonghao Ying, Siyang Wu, Run Hao, Peng Ying, Shixuan Sun, Pengyu Chen, Junze Chen, Hao Du, Kaiwen Shen, Shangkun Wu, Jiwei Wei, Shiyuan He, Yang Yang, Xiaohai Xu, Ke Ma, Qianqian Xu, Qingming Huang, Shi Lin, Xun Wang, Changting Lin, Meng Han, Yilei Jiang, Siqi Lai, Yaozhi Zheng, Yifei Song, Xiangyu Yue, Zonglei Jing, Tianyuan Zhang, Zhilei Zhu, Aishan Liu, Jiakai Wang, Siyuan Liang, Xianglong Kong, Hainan Li, …
Series D, Episode 11 - Orbit
AVON: Pressurize the rest of the ship.
SOOLIN: Right. [Applies some controls.] The forward section's pressurized.
AVON: All right. Vila, let's get to the airlock.
VILA: Me?
https://blake.torpidity.net/m/411/56 B7B2