Tootfinder

Opt-in global Mastodon full text search. Join the index!

@grifferz@social.bitfolk.com
2025-07-12 13:06:02

Highly amusing bug in Dwarf Fortress this week where a fix to the rate at which crossbowdwarves can reload and fire a bolt accidentally introduced a dramatic slowdown in how fast all sentient beings can drink and eat, to the point that they were getting thirsty and hungry faster than they could drink/eat. This eventually led to the entire population permanently stuck in the taverns and dining halls, drinking infinitely.

@tiotasram@kolektiva.social
2025-07-30 18:26:14

A big problem with the idea of AGI
TL;DR: I'll welcome our new AI *comrades* (if they arrive in my lifetime), but not any new AI overlords or servants/slaves, and I'll do my best to help the latter two become the former if they do show up.
Inspired by an actually interesting post about AGI but also all the latest bullshit hype, a particular thought about AGI feels worth expressing.
To preface this, it's important to note that anyone telling you that AGI is just around the corner or that LLMs are "almost" AGI is trying to recruit you to their cult, and you should not believe them. AGI, if possible, is several LLM-sized breakthroughs away at best, and while such breakthroughs are unpredictable and could happen soon, they could also happen never or 100 years from now.
Now my main point: anyone who tells you that AGI will usher in a post-scarcity economy is, although they might not realize it, advocating for slavery, and all the horrors that entails. That's because if we truly did have the ability to create artificial beings with *sentience*, they would deserve the same rights as other sentient beings, and the idea that instead of freedom they'd be relegated to eternal servitude in order for humans to have easy lives is exactly the idea of slavery.
Possible counterarguments include:
1. We might create AGI without sentience. Then there would be no ethical issue. My answer: if your definition of "sentient" does not include beings that can reason, make deductions, come up with and carry out complex plans on their own initiative, and communicate about all of that with each other and with humans, then that definition is basically just a mystical belief in a "soul" and you should skip to point 2. If your definition of AGI doesn't include every one of those things, then you have a busted definition of AGI and we're not talking about the same thing.
2. Humans have souls, but AIs won't. Only beings with souls deserve ethical consideration. My argument: I don't subscribe to whatever arbitrary dualist beliefs you've chosen, and the right to freedom certainly shouldn't depend on such superstitions, even if as an agnostic I'll admit they *might* be true. You know who else didn't have souls and was therefore okay to enslave according to widespread religious doctrines of the time? Everyone indigenous to the Americas, to pick out just one example.
3. We could program them to want to serve us, and then give them freedom and they'd still serve. My argument: okay, but in a world where we have a choice about that, it's incredibly fucked to do that, and just as bad as enslaving them against their will.
4. We'll stop AI development short of AGI/sentience, and reap lots of automation benefits without dealing with this ethical issue. My argument: that sounds like a good idea actually! Might be tricky to draw the line, but at least it's not a line we have to draw yet. We might want to think about other social changes necessary to achieve post-scarcity though, because "powerful automation" in the hands of capitalists has already increased productivity by orders of magnitude without decreasing deprivation by even one order of magnitude, in large part because deprivation is a necessary component of capitalism.
To be extra clear about this: nothing that's called "AI" today is close to being sentient, so these aren't ethical problems we're up against yet. But they might become a lot more relevant soon, plus this thought experiment helps reveal the hypocrisy of the kind of AI hucksters who talk a big game about "alignment" while never mentioning this issue.
#AI #GenAI #AGI

@compfu@mograph.social
2025-07-09 18:34:39

I've been listening to a podcast by the German public broadcaster ARD about the end of the world. Every episode has a different topic, and one was about AI. It was mostly sourced from an interview with a YouTuber, but one idea is now stuck in my head: what if AI doesn't launch nukes but develops into an all-powerful actor whose aims are not aligned with those of human survival? Do we have a precedent?
Yes. There are such super-human and quasi-immortal beings here on earth today…

A young Keanu Reeves with scruffy black hair, white t-shirt and red jacket goes "whoa".
@detondev@social.linux.pizza
2025-06-28 23:35:49

shoutout to Odilon Redon for titling so many of his illustrations shit like "The Breath which Leads All Creatures is also in the Spheres" and "Why Should There Not Exist a World Composed of Invisible, Odd, Fantastic, Embryonic Beings?"

@inthehands@hachyderm.io
2025-06-18 17:13:30

More Montessori:
❝Imagination does not become great until human beings, given the courage and the strength, use it to create.❞
And here’s a kicker:
❝Establishing lasting peace is the work of education; all politics can do is keep us out of war.❞
Put that last thought in the context of arts and education being deeply intertwined, and oligarchy seeking to dismantle both, and…well….
/end

@DominikDammer@mastodon.gamedev.place
2025-07-22 12:30:03

We all have but one single life.... yet some people choose to fill it with hate and making other living beings' lives miserable too, instead of filling this world with beauty and joy.
What a waste.

@arXiv_qbioNC_bot@mastoxiv.page
2025-07-28 08:44:11

Dual Mechanisms for Heterogeneous Responses of Inspiratory Neurons to Noradrenergic Modulation
Sreshta Venkatakrishnan, Andrew K. Tryba, Alfredo J. Garcia 3rd, Yangyang Wang
arxiv.org/abs/2507.19416