Series B, Episode 03 - Weapon
ORAC: No. It was a priority one - for automatic relay to the senior officer present.
BLAKE: Orac, I want more information. I want to know everything there is to know about this man Coser; I want to know how he got out of the base; and I want to know what IMIPAK is.
https://blake.torpidi…
Critical demand in a stochastic model of flows in supply networks
Yannick Feld, Marc Barthelemy
https://arxiv.org/abs/2505.24813
Series B, Episode 03 - Weapon
[Messroom. Travis enters]
SERVALAN: Are the guards in clear view?
TRAVIS: Three of them, as you ordered, Supreme Commander.
SERVALAN: And the fourth is concealed?
https://blake.torpidity.net/m/203/429 B7B3
Series D, Episode 02 - Power
TARRANT: Orac, is it possible for a living person to pass through a solid hatch without there being any material disturbance?
ORAC: The process is called teleportation.
DAYNA: Oh, well, nothing like stating the obvious.
https://blake.torpidity.net/m/402/256
Series C, Episode 08 - Rumours of Death
SHRINKER: What, what is this place?
AVON: It's a cave. If you're thinking of running, don't. There's nowhere to go. The only way out is the way we came in.
https://blake.torpidity.net/m/308/161 B7B4
AI, AGI, and learning efficiency
An addendum to this: I'm someone who would accurately be called "anti-AI" in the modern age, yet I'm also an "AI researcher" in some ways (though I've only dabbled in neural nets).
I don't like:
- AI systems that are the product of labor abuses towards the data workers who curate their training corpora.
- AI systems that use inordinate amounts of water and energy during an intensifying climate catastrophe.
- AI systems that are fundamentally untrustworthy and which reinforce and amplify human biases, *especially* when those systems are deployed in ways that invite harm.
- AI systems which are designed to "save" my attention or brain bandwidth but which, in doing so, cripple my understanding of the things I use them for, when in fact that understanding was the thing I was supposed to be using my time to gain, and where the later lack of that understanding will be costly to me.
- AI systems that are designed by, and whose hype fattens the purses of, people who materially support genocide and the construction of concentration camps (a.k.a. fascists).
In other words, I do not like, and except in very extenuating circumstances will not use, ChatGPT, Claude, Copilot, Gemini, etc.
On the other hand, I do like:
- AI research as an endeavor to discover new technologies.
- Generative AI as a research topic using a spectrum of different methods.
- Speculating about non-human intelligences, including artificial ones, and including how to behave ethically towards them.
- Large language models as a specific technique, along with autoencoders and other neural networks, assuming they're used responsibly in terms of both resource costs and presentation to end users.
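Since the list above names autoencoders as one neural-network technique worth studying, here is a minimal sketch of the idea (all numbers and names are illustrative, not from the post): an autoencoder squeezes data through a narrow bottleneck and learns by minimizing reconstruction error, so it can only succeed if it discovers the data's underlying low-dimensional structure.

```python
import numpy as np

# Toy linear autoencoder: compress 4-dimensional points that secretly
# have only 2 degrees of freedom down to a 2-number code and back.
rng = np.random.default_rng(0)

latent = rng.normal(size=(200, 2))          # hidden 2-D structure
mixing = rng.normal(size=(2, 4))
X = latent @ mixing                         # observed data, shape (200, 4)

W_enc = rng.normal(scale=0.1, size=(4, 2))  # encoder: 4 -> 2
W_dec = rng.normal(scale=0.1, size=(2, 4))  # decoder: 2 -> 4

lr = 0.02
for _ in range(8000):
    code = X @ W_enc                        # compress
    X_hat = code @ W_dec                    # reconstruct
    err = X_hat - X
    # Gradients of the mean squared reconstruction error.
    grad_dec = code.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

mse = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
print(f"reconstruction MSE: {mse:.4f}")
```

With purely linear layers this recovers essentially the same subspace as PCA; the "interesting" autoencoders add nonlinearities, but the training loop looks the same.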
I write this because I think some people (especially folks without CS backgrounds) may feel that opposing AI for all the harms it's causing risks opposing technological innovation more broadly, and/or may fear being "left behind" as everyone else embraces the hype and these technologies inevitably become ubiquitous and essential (I know I feel this way sometimes). Just know that it is entirely possible, and logically consistent, to oppose many forms of modern AI while embracing and even being optimistic about AI research, and that while LLMs are currently all the rage, they're not the endpoint of what AI will look like in the future; their downsides are not inherent in AI development.
Series B, Episode 05 - Pressure Point
AVON: It is not going to be easy.
BLAKE: Can we do it?
VILA: Not a chance, it's absolutely impossible.
AVON: There is a way. Where are Kasabi and her people?
https://blake.torpidity.net/m/205/324 B7B3
Series A, Episode 01 - The Way Back
BLAKE: I have to think.
FOSTER: Of course. We'll talk after the meeting.
BLAKE: Hmm.
DEV TARRANT: What do you think?
https://blake.torpidity.net/m/101/89 B7B4