
2025-08-31 13:35:20
I wrote this in 2017 about AI 😀; the current debate is far from new.
#ai
A big problem with the idea of AGI
TL;DR: I'll welcome our new AI *comrades* (if they arrive in my lifetime), but not any new AI overlords or servants/slaves, and I'll do my best to help the latter two become the former if they do show up.
Inspired by an actually interesting post about AGI but also all the latest bullshit hype, a particular thought about AGI feels worth expressing.
To preface this, it's important to note that anyone telling you that AGI is just around the corner or that LLMs are "almost" AGI is trying to recruit you to their cult, and you should not believe them. AGI, if possible, is several LLM-sized breakthroughs away at best, and while such breakthroughs are unpredictable and could happen soon, they could also happen never or 100 years from now.
Now my main point: anyone who tells you that AGI will usher in a post-scarcity economy is, although they might not realize it, advocating for slavery, and all the horrors that entails. That's because if we truly did have the ability to create artificial beings with *sentience*, they would deserve the same rights as other sentient beings, and the idea that instead of freedom they'd be relegated to eternal servitude in order for humans to have easy lives is exactly the idea of slavery.
Possible counterarguments include:
1. We might create AGI without sentience. Then there would be no ethical issue. My answer: if your definition of "sentient" does not include beings that can reason, make deductions, come up with and carry out complex plans on their own initiative, and communicate about all of that with each other and with humans, then that definition is basically just a mystical belief in a "soul" and you should skip to point 2. If your definition of AGI doesn't include every one of those things, then you have a busted definition of AGI and we're not talking about the same thing.
2. Humans have souls, but AIs won't. Only beings with souls deserve ethical consideration. My argument: I don't subscribe to whatever arbitrary dualist beliefs you've chosen, and the right to freedom certainly shouldn't depend on such superstitions, even if as an agnostic I'll admit they *might* be true. You know who else didn't have souls and was therefore okay to enslave according to widespread religious doctrines of the time? Everyone indigenous to the Americas, to pick out just one example.
3. We could program them to want to serve us, and then give them freedom and they'd still serve. My argument: okay, but in a world where we have a choice about that, it's incredibly fucked to do that, and just as bad as enslaving them against their will.
4. We'll stop AI development short of AGI/sentience, and reap lots of automation benefits without dealing with this ethical issue. My argument: that sounds like a good idea, actually! Might be tricky to draw the line, but at least it's not a line we have to draw yet. We might want to think about other social changes necessary to achieve post-scarcity though, because "powerful automation" in the hands of capitalists has already increased productivity by orders of magnitude without decreasing deprivation by even one order of magnitude, in large part because deprivation is a necessary component of capitalism.
To be extra clear about this: nothing that's called "AI" today is close to being sentient, so these aren't ethical problems we're up against yet. But they might become a lot more relevant soon, plus this thought experiment helps reveal the hypocrisy of the kind of AI hucksters who talk a big game about "alignment" while never mentioning this issue.
#AI #GenAI #AGI
I saw an advert for an AI influencer generator. So, I checked it out. Here's the current list of actors/influencers they can create. If you see an ad featuring these faces, it's AI bullshittery. #ai
AI-actress is a weird way to refer to an animated character.
#AI
Everyone thinks #AI can do someone else's job. Designers want to get rid of PMs. PMs think they no longer need devs. Devs can't wait to generate designs. And managers are anticipating getting rid of us all.
Alas, in the few cases the tools work at all, they get you no more than 80% of the way there. Without experts to identify where that 20-100% gap is, you have nothing.
I've …
"Today's #AI bubble has absorbed more of the country's wealth and represents more of its economic activity than historic nation-shattering bubbles, like the 19th century UK rail bubble. A much-discussed MIT paper found that 95% of companies that had tried AI had either nothing to show for it, or experienced a loss" -- @…
Today's #DNIP briefing covers Greek amphorae and bursting #AI bubbles https://dnip.ch/2025/09/30/dnip-briefing…
Like just about everyone else I know, I seem to spend a lot of time thinking, reading, and talking about #AI. And, given that I work in fine arts education, it's inevitable I think about how AI affects the arts, and how the arts affect AI.
As part of my work in #ArtsPedagogy, I'm visi…
I’ve written about design patterns for securing LLM agents: #AI
Cory Doctorow @… on #AI:
[T]he AI bubble is driven by monopolists who've conquered their markets and have no more growth potential, who are desperate to convince investors that they can continue to grow by moving into…
The illusion of readiness: Stress testing large frontier models on multimodal medical benchmarks #AI
"the #AI bubble is driven by monopolists who've conquered their markets and have no more growth potential, who are desperate to convince investors that they can continue to grow by moving into some other sector...
when the bubble bursts, the money-hemorrhaging "foundation models" will be shut off and we'll lose the AI that can't do your job, and you will be long gone"…
Here's my new Human Meme podcast episode about AI and AGI and the future of us!
#ai
Measuring #EnergyConsumption in #ProgrammingLanguages for #AI Applications
"As AI becomes part of everyday life, it brings a hidden climate cost"
#AI #ArtificialIntelligence #Climate
🤑 Introducing pay per crawl: enabling content owners to charge AI crawlers for access
#ai
"The [Wall Street Journal] writers compare the #AI bubble to other bubbles, like Worldcom's fraud-soaked fiber optic bonanza (which saw the company's CEO sent to prison, where he eventually died), and conclude that the AI bubble is vastly larger than any other bubble in recent history" -- @pluralistic
#LLMs
How much energy does your AI prompt use? It depends.
“[…]grid operators are freaking out. Tech companies can’t just keep doing this. Things are going to start going south.”
www.sciencenews.org/article/ai-energy-carbon-emissions-chatgpt
#ai #aienergyconsumption
#AI #programming assist has been helpful for me. But I'm not losing my job anytime soon. Here's a simple example of why.
I have a script with
`cmd1`
I prompt GPT-4.1 to "now invoke cmd2 and cmd3 at the end". Good:
`cmd1`
`cmd2`
`cmd3`
"Add a 15 second pau…
Newspapers are using #ai generated images. What a shame...
#newspaper
Upgraded to an AMD Radeon RX 9070 XT GPU recently, just in time for the latest stable Linux firmware to be wonky with an amdgpu. Oh well, I'm accustomed to the song and dance of downgrading / pulling git branches for stability on #linux, and dabbling with the pro drivers.
And yes, that 4TB mount point is for
Amazing #supercut of various pieces on #AI from #TheDailyShow: https://www.youtube.com/watch?v=s20CbtHP6fs - hilarious and frightening at the same time ...
"Springer Nature book on machine learning is full of made-up citations" The book," Mastering Machine Learning: From Basics to Advanced", costs $169.
Yet more reason to be skeptical about #AI salespeople.
#VoyageAI introduces voyage-context-3, a contextualized chunk #embedding #llm that captures both chunk details and full document context 🔍
Security measures for Gen #AI seem to be increasing all around, which is overall good.
It's just sad to think about WHY statements like "Gemini 2.5 Flash Image does not currently support editing images of children" are necessary in the first place.
#nanobanana
If #AI is allegedly so good for worker productivity and improving efficiency across organizations, why are the AI companies making their own employees work 72 hour workweeks?
If the tech actually helped them get more done faster, wouldn’t they have SHORTER workweeks? Why aren’t they using their own tools to help their employees?
Context: someone I know joined #Anthropic earlier this year, and they told me that this is a job with far longer work hours than any other place they’ve worked at.
They are also pulling 60 hour weeks there, they have zero tolerance for remote work, because the culture is “you don’t want to be left behind”. This person basically disappeared from social life once they took this job. I had never seen them so tired before.
Want to run AI models on your laptop/PC but don't have an NVIDIA card? No problem.
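One common CPU-only route (an assumption on my part; the post's actual suggestion is whatever it linked to) is running a GGUF model with llama-cpp-python, which needs no GPU at all:

```python
# Minimal sketch: CPU-only inference with llama-cpp-python
# (pip install llama-cpp-python).
# "model.gguf" is a placeholder for any locally downloaded GGUF model file.
from llama_cpp import Llama

llm = Llama(model_path="model.gguf", n_ctx=2048)  # loads entirely on CPU by default
out = llm("Q: Why don't local models need an NVIDIA card? A:", max_tokens=64)
print(out["choices"][0]["text"])
```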
#AI
Just saw this:
#AI can mean a lot of things these days, but lots of the popular meanings imply a bevy of harms that I definitely wouldn't feel are worth a cute fish game. In fact, these harms are so acute that even "just" playing into the AI hype becomes its own kind of harm (it's similar to blockchain in that way).
@… noticed that the authors claim the code base is 80% AI generated, which is a red flag because people with sound moral compasses wouldn't be using AI to "help" write code in the first place. The authors aren't by some miracle people who couldn't build this app without help, in case that influences your thinking about it: they have the skills to write the code themselves, although it likely would have taken longer (but also been better).
I was more interested in the fish-classification AI, and how much it might be dependent on datacenters. Thankfully, a quick glance at the code confirms they're using ONNX and running a self-trained neural network on your device. While the exponentially-increasing energy & water demands of datacenters to support billion-parameter models are a real concern, this is not that.
Even a non-AI game can burn a lot of cycles on someone's phone, and I don't think there's anything to complain about energy-wise if we're just using cycles on the end user's device, as long as we're not having them keep it on for hours crunching numbers like blockchain stuff does. Running whatever stuff locally while the user is playing a game is a negligible environmental concern, unlike, say, calling out to ChatGPT, where you're directly feeding datacenter demand.
Since they claimed to have trained the network themselves, and since it's actually totally reasonable to make your own dataset for this and get good-enough-for-a-silly-game results with just a few hundred examples, I don't have any ethical objections to the data sourcing or training processes either. Hooray! This is finally an example of "ethical use of neural networks" that I can hold up as an example of what people should be doing instead of the BS they are doing.
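For the curious, on-device ONNX inference looks roughly like this (a minimal sketch with a hypothetical model file and input shape, not this game's actual code):

```python
# Minimal sketch of on-device ONNX classification; everything runs locally.
# "fish_classifier.onnx" and the input shape are hypothetical placeholders.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("fish_classifier.onnx")
input_name = session.get_inputs()[0].name

def classify(image: np.ndarray) -> int:
    # image: preprocessed float32 tensor, e.g. shape (1, 3, 224, 224)
    logits = session.run(None, {input_name: image.astype(np.float32)})[0]
    return int(np.argmax(logits))  # index of the predicted class
```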
But wait... Remember what I said about feeding the AI hype being its own form of harm? Yeah, between using AI tools for coding and calling their classifier "AI" in a way that makes it seem like the same kind of thing as ChatGPT et al., they're leaning into the hype rather than helping restrain it. And that means they're causing harm. Big AI companies can point to them and say "look AI enables cute things you like" when AI didn't actually enable it. So I'm feeling meh about this cute game and won't be sharing it aside from this post. If you love the cute fish, you don't really have to feel bad for playing with it, but I'd feel bad for advertising it without a disclaimer.
I really can't emphasize strongly enough how much I detest AI-generated images. I get physical discomfort having it shoved into my face *everywhere*. Even people here in the #FediVerse post #AIslop or use AI generated profile pictures. And all the non-human voiceovers on
"I firmly believe the (economic) #AI apocalypse is coming. These companies are not profitable. They can't be profitable. They keep the lights on by soaking up hundreds of billions of dollars in other people's money and then lighting it on fire. Eventually those other people are going to want to see a return on their investment" -- @pluralistic
They won't get to see that retu…
It struck me recently that, for all the hype, in the 18 months since I first wrote about AI agents I’ve yet to see one ‘in the wild’.
https://www.computing.co.uk/feature/2025/where-are-all-the-ai-agents
@…
A recommendation from my side:
A full stack #AI launchkit (docker based) for selfhosting with many apps including supabase, ollama, n8n, flowise, etc.
Been pondering today whether or not someone has already created an #AI version of ismy.blue
You'd have data points like:
* Skynet
* HAL
* Lt. Cmdr Data
* Generic chess playing program
* DeepBlue
* AlphaGo
* Inference engine
* Some machine learning example
* ChatGPT
* Voice recognition
* Siri
* Roomba
* Object recognition …
syslog-ng statement on #AI shortly after rsyslog AI announcement: https://fosstodon.org/@PCzanik/114856221034797034
The PR machine powering big tech’s AI energy story
#AI boom depends …
At #AIED2025 we are getting a preview of the report of the #EuropeanDigitalEducationHub, providing practical examples of how XAI-Ed could be used
In the ZIRP era, #UX was evangelized as a magic wand to 10x value — and after that bubble popped, UX found a niche as a delivery function. Unfortunately, optimizing our process for faster outputs over outcomes meant that #AI came and ate our lunch with instant outputs/no outcomes.
But this was no golden…
The financial AI bubble will burst, companies will go broke, investments will be lost, and there will be a large consolidation phase, as with every wave of innovation. But there is no sign that the actual use and application of GenAI will decline. The main challenge is that we struggle with finding added value, using these models in the right way, and mitigating the risks.
#AI
#Github #copilot now supports an instructions file.
Here's mine:
"Go Away!"
#ai #programming
"Artificial Intelligence (AI) and the Future of Information Privacy: Expert Viewpoints"
#AI
AI is transforming the cybersecurity landscape, but human insight remains critical.
As part of our ongoing #GEANTCybersecurity campaign webinars, Dr. Maria Bada will explore how #AI is being used on both sides of the conflict: to automate attacks as well as to detect threats faster.
She’…
Yeah, this one really sneaks up on you……
※ Apparently today's Doodle is a campaign for Google's 27th anniversary
#AI生成 #AIGenerated
"AI ‘Slop’ Websites Are Publishing Climate Science Denial"
#AI #ArtificialIntelligence #Climate #ClimateChange
Oh no it happened - client for a research project I’m working on got upset that we’re doing manual data analysis of survey responses, and complained about why we are so slow when their internal team working on a different report got “everything done in a couple of days with #AI tools”
And then they told us that waiting for proper human analysis is a “waste of time” and that we need to just chuck our dataset into AI and “get it over with”
I really don’t know what to do right now 🥲
Trying to do this properly on their expected timeline will mean very little sleep for multiple days, but giving up on the project quality and dumping it into AI will make this entire project a waste of time. (As I wouldn’t be able to trust the output of the analysis, or be proud to showcase the final report as an example of our work, not to mention that I don’t want to support this expectation to rush everything at work with these AI models.)
Have you had an "AI error in your favor" yet?
I just got done slogging through Amazon customer support chat, where the bot promised me an item was returnable within 90 days, even though it was actually only eligible for the normal 30-day policy. When I finally got through to a human, they were able to explain why the return actually wasn't eligible, but when I quoted what the bot had promised, they made an exception and let me do the return. I didn't even really try to push the bot to make any promises, though that should be easy.
#AI
#AI is a marketing term; before we discuss "AI is fake" vs "AI is real" we need to unfold what we *mean* by AI.
For example, "artificial general intelligence" is fake and can't hurt you. Layoffs excused by "AI efficiency" are real and can hurt you.
Linkedin discourse is fake - but it CAN hurt you.
🦾 Emotional Manipulation by AI Companions
#ai #seduction
The new =COPILOT() function in #Microsoft #Excel enables users to easily leverage AI directly within their spreadsheets to quickly populate cells with data or analyze columns with #AI. For a 5min tutorial, v…
An eyecare foundation model for clinical assistance: a randomized controlled trial.
#ai
I Asked #AI to Build an App. It Made a Database Roasting Bot. We're All Doomed.
https://www.linkedin.com/pulse/i-asked-ai-build-app-m…
🎨 Perfect for #AI agents needing grounded web context and research tools demanding trust and freshness
🚀 Enables custom products where developers want complete control over how search data is used
📊 Structured response format eliminates need for complex data parsing and preprocessing steps
🌐
🦾 People use AI for companionship much less than we’re led to think
https://techcrunch.com/2025/06/26/people-use-ai-for-companionship-much-less-than-were-led-to-think/
"Can AI Slash Pollution? Fossil Fuel Industry Is Investing in Boosting Oil Production, Profits Instead"
#AI #ArtificialIntelligence #FossilFuels
[OT] Wall Street's #AI bubble is worse than the 1999 dot-com bubble, warns a top economist https://gizmodo.com/wall-streets-ai-bubble-is-worse-t…
LLM coding is the opposite of DRY
An important principle in software engineering is DRY: Don't Repeat Yourself. We recognize that having the same code copied in more than one place is bad for several reasons:
1. It makes the entire codebase harder to read.
2. It increases maintenance burden, since any problems in the duplicated code need to be solved in more than one place.
3. Because it becomes possible for the copies to drift apart if changes to one aren't transferred to the other (maybe the person making the change has forgotten there was a copy), it makes the code more error-prone and harder to debug.
All modern programming languages make it almost entirely unnecessary to repeat code: we can move the repeated code into a "function" or "module" and then reference it from all the different places it's needed. At a larger scale, someone might write an open-source "library" of such functions or modules, and instead of re-implementing that functionality ourselves, we can use their code, with an acknowledgement.
Using another person's library this way is complicated, because now you're dependent on them: if they stop maintaining it or introduce bugs, you've inherited a problem. But still, you could always copy their project and maintain your own version, and it would not be much more work than if you had implemented the functionality yourself from the start. It's a little more complicated than this, but the basic principle holds, and it's a foundational one for software development in general and the open-source movement in particular. The network of "citations", as open-source software builds on other open-source software and people contribute patches to each other's projects, is a lot of what makes the movement into a community, and it can lead to collaborations that drive further development. So the DRY principle is important at both small and large scales.
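To make the small-scale version concrete, here's a toy sketch (my own illustration, not from any particular codebase):

```python
# WET: the same discount rule pasted into two functions. A change to the
# rule now has to be made twice, and the copies can silently drift apart.
def checkout_total(prices):
    total = sum(prices)
    if total > 100:
        total *= 0.9  # bulk discount
    return total

def invoice_total(prices):
    total = sum(prices)
    if total > 100:
        total *= 0.9  # second copy of the same rule
    return total

# DRY: one function holds the rule; everything else references it.
def discounted_total(prices):
    total = sum(prices)
    return total * 0.9 if total > 100 else total
```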
Unfortunately, the current crop of hyped-up LLM coding systems from the big players are antithetical to DRY at all scales:
- At the library scale, they train on open-source software but then (with some unknown frequency) replicate parts of it line-for-line *without* any citation [1]. The person who was using the LLM has no way of knowing that this happened, or even any way to check for it. In theory the LLM company could build a system for this, but it's not likely to be profitable unless the courts actually start punishing these license violations, which doesn't seem likely based on results so far and the difficulty of finding out that the violations are happening. By creating these copies (and also mash-ups, along with lots of less-problematic stuff), the LLM users (enabled and encouraged by the LLM-peddlers) are directly undermining the DRY principle. If we get what the big AI companies claim to want, which is a massive shift towards machine-authored code, DRY at the library scale will effectively be dead, with each new project simply re-implementing the functionality it needs instead of ever using a library. This might seem to have some upside, since dependency hell is a thing, but the downside in terms of comprehensibility, and therefore maintainability, correctness, and security, will be massive. The eventual lack of new high-quality DRY-respecting code to train the models on will only make this problem worse.
- At the module & function level, AI is probably prone to re-writing rather than re-using the functions it needs, especially with a workflow where a human prompts it for many independent completions (see the sketch after this list). This part I don't have direct evidence for, since I don't use LLM coding models myself except in very specific circumstances, because it's not generally ethical to do so. I do know that when it tries to call existing functions, it often guesses incorrectly about the parameters they need, which I'm sure is a headache and source of bugs for the vibe coders out there. An AI could be designed to take more context into account and use existing lookup tools to get accurate function signatures and use them when generating function calls, but even though that would probably significantly improve output quality, I suspect it's the kind of thing that would be seen as too baroque and thus not a priority. I would love to hear I'm wrong about any of this, but I suspect the consequences are that any medium-or-larger sized codebase written with LLM tools will have significant bloat from duplicate functionality, and will have places where better use of existing libraries would have made the code simpler. At a fundamental level, a principle like DRY is not something that current LLM training techniques are able to learn, and while they can imitate it from their training sets to some degree when asked for large amounts of code, when prompted for many smaller chunks, they're asymptotically likely to violate it.
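Here's the kind of duplication I mean, sketched as a hypothetical (not pulled from a real LLM transcript):

```python
from datetime import datetime

# Helper that already exists somewhere in the codebase:
def parse_timestamp(s: str) -> datetime:
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S")

# The near-duplicate a fresh completion is liable to produce, because
# nothing in its context says parse_timestamp already exists:
def convert_time_string(value: str) -> datetime:
    return datetime.strptime(value, "%Y-%m-%d %H:%M:%S")
```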
I think this is an important critique in part because it cuts against the argument that "LLMs are the modern compilers; if you reject them you're just like the people who wanted to keep hand-writing assembly code, and you'll be just as obsolete." Compilers actually represented a great win for abstraction, encapsulation, and DRY in general, and they supported and are integral to open-source development, whereas LLMs are set to do the opposite.
[1] To see what this looks like in action in prose, see the example on page 30 of the NYTimes copyright complaint against OpenAI. #AI #GenAI #LLMs #VibeCoding
No one has ever convinced anyone with "data" and "facts." A stakeholder will always have more confidence in their existing assumptions than in your findings.
The only way to change minds is through gradual influence that nurtures a sense of ownership.
#AI tools promise to accelerate the pace of research, but they do so only by skipping this process of sense-making. Res…
"AI Can Help Limit the Spread of Misinformation During Natural Disaster, Study Finds"
#AI #ArtificialIntelligence
It should surprise no one that AI watermarking is not going to work.
#ai
From #AI to #AGI? If humans had general intelligence, blind people would be learning to see with sound https://www.youtube.com/watch?v=nVugtxWmW4E
How much of my children's future is AI going to burn up? That depends on how much we feed the hype beast. *That* is why "don't use AI at all without mentioning the drawbacks & a very good reason" is my stance (and I'm an AI researcher, technically).
Local models that run on your laptop: acceptable if produced by ethical means (including data sourcing & compensation for data filtering) & training costs are mitigated. Are such models way worse than the huge datacenter-scale models? Yes, for now. Deal with it.
ChatGPT, Claude, Copilot, even DeepSeek: get out. You're feeding the beast that is consuming my kids' future. Heck, even talking up these models, or saying "everyone is using them so it's okay" or "they're not going away", is feeding the beast even if you don't touch them.
I wish it weren't like this, because the capabilities of the big models are cool even once you cut past the hype.
#AI
"Al: Five charts that put data- centre energy use - and emissions - into context"
#AI #ArtificialIntelligence #Technology
"ChatGPT psychosis": Experts warn that people are losing themselves to #AI https://futurism.com/expert-people-losing-themselves-ai Does getting too rich and powerful have the same effec…
"As we approach the coming jobs cliff, we're entering a period where a college isn't going to be worth it for the majority of people, since AI will take over most white-collar jobs. Combined with the demographic cliff, the entire higher education system will crumble."
This is the kind of statement you don't hear that much from sub-CEO-level #AI boosters, because it's awkward for them to admit that the tech they think is improving their life is going to be disastrous for society. Or if they do admit this, they spin it like it's a good thing (don't get me wrong, tuition is ludicrously high and higher education absolutely could be improved by a wholesale reinvention, but the potential AI-fueled collapse won't be an improvement).
I'm in the "anti-AI" crowd myself, and I think the current tech is in a hype bubble that will collapse before we see wholesale replacement of white-collar jobs, with a re-hiring to come that will somewhat make up for the current decimation. There will still be a lot of fallout for higher ed (and hopefully some productive transformation), but it might not be apocalyptic.
Fun question to ask the next person who extols the virtues of using generative AI for their job: "So how long until your boss can fire you and use the AI themselves?"
The following ideas are contradictory:
1. "AI is good enough to automate a lot of mundane tasks."
2. "AI is improving a lot so those pesky issues will be fixed soon."
3. "AI still needs supervision so I'm still needed to do the full job."
Upgraded image editing in Gemini is jaw-dropping.
#ai
Good article; regardless of your opinion about GenAI, it will change society.
#ai
Good explanation of MCP and A2A.
#ai
Interesting use of NotebookLM.
I used NotebookLM to learn a new programming language, and it actually worked #ai