Tootfinder

Opt-in global Mastodon full text search. Join the index!

@tiotasram@kolektiva.social
2025-07-30 18:26:14

A big problem with the idea of AGI
TL;DR: I'll welcome our new AI *comrades* (if they arrive in my lifetime), but not any new AI overlords or servants/slaves, and I'll do my best to help the latter two become the former if they do show up.
Inspired by an actually interesting post about AGI but also all the latest bullshit hype, a particular thought about AGI feels worth expressing.
To preface this, it's important to note that anyone telling you that AGI is just around the corner or that LLMs are "almost" AGI is trying to recruit you to their cult, and you should not believe them. AGI, if possible, is several LLM-sized breakthroughs away at best, and while such breakthroughs are unpredictable and could happen soon, they could also happen never or 100 years from now.
Now my main point: anyone who tells you that AGI will usher in a post-scarcity economy is, although they might not realize it, advocating for slavery, and all the horrors that entails. That's because if we truly did have the ability to create artificial beings with *sentience*, they would deserve the same rights as other sentient beings, and the idea that instead of freedom they'd be relegated to eternal servitude in order for humans to have easy lives is exactly the idea of slavery.
Possible counter arguments include:
1. We might create AGI without sentience. Then there would be no ethical issue. My answer: if your definition of "sentient" does not include beings that can reason, make deductions, come up with and carry out complex plans on their own initiative, and communicate about all of that with each other and with humans, then that definition is basically just a mystical belief in a "soul" and you should skip to point 2. If your definition of AGI doesn't include every one of those things, then you have a busted definition of AGI and we're not talking about the same thing.
2. Humans have souls, but AIs won't. Only beings with souls deserve ethical consideration. My argument: I don't subscribe to whatever arbitrary dualist beliefs you've chosen, and the right to freedom certainly shouldn't depend on such superstitions, even if as an agnostic I'll admit they *might* be true. You know who else didn't have souls and was therefore okay to enslave according to widespread religious doctrines of the time? Everyone indigenous to the Americas, to pick out just one example.
3. We could program them to want to serve us, and then give them freedom and they'd still serve. My argument: okay, but in a world where we have a choice about that, it's incredibly fucked to do that, and just as bad as enslaving them against their will.
4. We'll stop AI development short of AGI/sentience, and reap lots of automation benefits without dealing with this ethical issue. My argument: that sounds like a good idea actually! Might be tricky to draw the line, but at least it's not a line we have to draw yet. We might want to think about other social changes necessary to achieve post-scarcity though, because "powerful automation" in the hands of capitalists has already increased productivity by orders of magnitude without decreasing deprivation by even one order of magnitude, in large part because deprivation is a necessary component of capitalism.
To be extra clear about this: nothing that's called "AI" today is close to being sentient, so these aren't ethical problems we're up against yet. But they might become a lot more relevant soon, plus this thought experiment helps reveal the hypocrisy of the kind of AI hucksters who talk a big game about "alignment" while never mentioning this issue.
#AI #GenAI #AGI

@inthehands@hachyderm.io
2025-08-28 22:05:22

It is perhaps not obvious that the way we talk about intelligence tends to employ circular reasoning. Here’s a sketch.
The trouble is that the word “intelligence” is defined •ostensively•, i.e. by examples instead of by criteria — but we talk about intelligence as if it is some real unified •thing• that underlies all those examples. That leads to circular reasoning like this:
Q: What is intelligence?
A: It is the characteristics exhibited by intelligent beings, such as [examples of intelligence go here].
Q: And what are intelligent beings?
A: Beings which exhibit intelligence.
Q: And what is intelligence?
…[stack overflow]…

@DominikDammer@mastodon.gamedev.place
2025-07-22 12:30:03

We all have but one single life... yet some people choose to fill it with hate and with making other living beings' lives miserable too, instead of filling this world with beauty and joy.
What a waste.

Orwell was literally a penniless immigrant in France.
It's all there in "Down and Out in Paris and London".
It taught him empathy for the poor, whom he wrote about as human beings.
This is the antidote to the xenophobia that was being peddled in London today.

@arXiv_qbioNC_bot@mastoxiv.page
2025-07-28 08:44:11

Dual Mechanisms for Heterogeneous Responses of Inspiratory Neurons to Noradrenergic Modulation
Sreshta Venkatakrishnan, Andrew K. Tryba, Alfredo J. Garcia 3rd, Yangyang Wang
arxiv.org/abs/2507.19416

@samvarma@fosstodon.org
2025-09-14 16:40:34

I think the view from 10,000 feet for me is that all tech tries to interpose itself between two human beings and then extract value. Whether it's Uber, Airbnb, social media or Vision Pro/smart glasses and now earbuds.
Once a tech company is the filter through which we sense the world it's a next level amount of power that they will have over us.
The next thing will be that Apple thinks they deserve a 30% cut of that sale that you made with the local merchant.

@grifferz@social.bitfolk.com
2025-07-12 13:06:02

Highly amusing bug in Dwarf Fortress this week where a fix to the rate at which crossbow dwarves can reload and fire a bolt accidentally introduced a dramatic slowdown in how fast all sentient beings can drink and eat, to the point that they were getting thirsty and hungry faster than they could drink/eat. This eventually led to the entire population permanently stuck in the taverns and dining halls, drinking infinitely.

@compfu@mograph.social
2025-07-09 18:34:39

I've been listening to a podcast by the German public broadcaster ARD about the end of the world. Every episode had a different topic, and one was about AI. It was mostly sourced from an interview with a YouTuber, but one idea is now stuck in my head: what if AI doesn't launch nukes but develops into an all-powerful actor whose aims are not aligned with those of human survival? Do we have a precedent?
Yes. There are such super-human and quasi-immortal beings here on earth today…

A young Keanu Reeves with scruffy black hair, white t-shirt and red jacket goes "whoa".