Tootfinder

Opt-in global Mastodon full text search. Join the index!

@seeingwithsound@mas.to
2025-09-09 14:27:36

Desperate companies now hiring humans to fix what #AI botched futurism.com/companies-hiring- I similarly wouldn't be surprised about an upcoming

@ErikJonker@mastodon.social
2025-07-09 13:39:34

Seeking Deeper: Assessing China’s AI Security Ecosystem.
#AI

@stefanlaser@social.tchncs.de
2025-08-07 12:47:55

"What works in #India will scale better everywhere else. Naturally, the country is a battleground for #AI search."
This is fishing for data and the next #enshittification to roll ou…

@vrandecic@mas.to
2025-07-09 09:48:12

I love how smart these AI technologies are. They understand that "bigger" for cities can be ambiguous, refer to either the population or the area. It's also great that it's showing the sources in the upper corner, and displaying the basic facts.
Small minus on consistency and correctness, but other than that, really a great answer.
#google

Google Search AI Overview screenshot.
Query: is zurich bigger than Stuttgart? 
Answer: Yes, Zurich is bigger than Stuttgart, both in terms of population and land area. Stuttgart has a population of around 632,000, while Zurich's population is around 434,000. Zurich's urban area is also larger, covering 91.88 square kilometers compared to Stuttgart's 207.35 square kilometers.
@andycarolan@social.lol
2025-08-08 17:20:58

"Voice AI infrastructure company helping businesses instantly connect with leads using real-time, human-sounding agents."
#Dystopia #AI

@frankstohl@mastodon.social
2025-07-07 18:13:51

Apple’s newest AI study unlocks street navigation for blind users #apple #ai 9to5ma…

@pavelasamsonov@mastodon.social
2025-07-07 13:02:04

Managers were starting to understand that velocity on its own has no value. But then along came #AI and said "but what if we made that velocity 10x?" and they fell for it all over again - because they only have a surface level understanding of the work.
The logic of the feature factory has permitted #genAI

@shaun@mastodon.xyz
2025-07-09 02:23:07

Got slammed by an unidentified but certainly "#AI"-related #distributed #crawler this week, it drove one site's traffic to 10× average. Today I tired of playing Whac-a-Mole and blocked the two bigge…

Output of a cut, sort, uniq, sort -n job on an Apache format access_log file. It shows around 30K entries per day on July 1, 2, 3, 4. Then suddenly ramping up to 200K and nearly 400K entries on subsequent days. The extra traffic is all from some asshole's "AI" crawler.
Part of an iptables listing from a Linux server. It shows some of my POLICY_DROP_WEB chains which block abusive traffic to 80,443 from various sources. Two rules added today, one for AS136907 (Huawei Cloud) and one for AS45899 (VNPT) have already blocked around 35,000 requests apiece.
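The alt-text above describes a classic shell tally (cut, sort, uniq -c, sort -n) over an Apache-format access_log. A rough Python equivalent, using hypothetical log lines for illustration, might look like:

```python
from collections import Counter

def top_clients(log_lines, n=10):
    """Tally requests per client IP (the first whitespace-separated
    field of an Apache combined-format log line), roughly equivalent to:
    cut -d' ' -f1 access_log | sort | uniq -c | sort -n"""
    counts = Counter(line.split(" ", 1)[0] for line in log_lines if line.strip())
    return counts.most_common(n)

# Hypothetical log lines for illustration:
lines = [
    '203.0.113.7 - - [05/Jul/2025:10:00:00 +0000] "GET / HTTP/1.1" 200 512',
    '203.0.113.7 - - [05/Jul/2025:10:00:01 +0000] "GET /a HTTP/1.1" 200 512',
    '198.51.100.2 - - [05/Jul/2025:10:00:02 +0000] "GET /b HTTP/1.1" 200 512',
]
print(top_clients(lines))  # busiest clients first
```

A jump from ~30K to ~400K daily entries, concentrated on a handful of client networks, is exactly what this kind of tally makes visible before reaching for iptables.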
@gadgetboy@gadgetboy.social
2025-08-07 12:11:17

The mini model is only 80 MB. Even the full-weight model can be run on your laptop.
#ai

@ErikJonker@mastodon.social
2025-07-06 19:29:10

Interesting, letting AI models cooperate.
#ai #sakana

@lindawoodrow@mastodon.social
2025-09-09 08:20:03

Worth reading. #ai open.substack.com/pub/robertsa

@mgorny@social.treehouse.systems
2025-08-08 13:27:37

The "intelligence" in #AI stands for the idea that I need a #LLM to "summarize" a mail that consists of literally a single short sentence.

@phpmacher@sueden.social
2025-09-09 11:55:06

This urge to have #AI summarize everything. Where does this nonsense come from?

@poppastring@dotnet.social
2025-09-06 02:45:10

Anthropic to pay $1.5 billion to authors in landmark #AI settlement ... “believed to be the largest publicly reported recovery in the history of US copyright litigation.”

@jom@social.kontrollapparat.de
2025-09-08 17:25:59

They always act as if they don't use #AI and LLMs for their oh-so-wonderful texts. And yet, for months now, every third sentence is suddenly full of dashes, bulleted lists, and tons of emojis. Sure…

@alecsargent@social.linux.pizza
2025-08-08 14:13:33

The state of A.I.
#ai

A screenshot of the last three articles of a RSS feed all coincidentally talking about how AI impacts life.
@tiotasram@kolektiva.social
2025-08-04 15:49:39

Should we teach vibe coding? Here's why not.
2/2
To address the bigger question I started with ("should we teach AI-"assisted" coding?"), my answer is: "No, except enough to show students directly what its pitfalls are." We have little enough time as it is to cover the core knowledge that they'll need, which has become more urgent now that they're going to be expected to clean up AI bugs while having less time to develop an understanding of the problems they're supposed to be solving. The skill of prompt engineering & other skills of working with AI are relatively easy to pick up on your own, given a decent not-even-mathematical understanding of how a neural network works, which is something we should be giving to all students, not just our majors.
Reasonable learning objectives for CS majors might include explaining what types of bugs an AI "assistant" is most likely to introduce, explaining the difference between software engineering and writing code, explaining why using an AI "assistant" is likely to violate open-source licenses, listing at least three independent ethical objections to contemporary LLMs and explaining the evidence for/reasoning behind them, explaining why we should expect AI "assistants" to be better at generating code from scratch than at fixing bugs in existing code (and why they'll confidently "claim" to have fixed problems they haven't), and even fixing bugs in AI-generated code (without AI "assistance").
If we lived in a world where the underlying environmental, labor, and data commons issues with AI weren't as bad, or if we could find and use systems that effectively mitigate these issues (there's lots of piecemeal progress on several of these) then we should probably start teaching an elective on coding with an assistant to students who have mastered programming basics, but such a class should probably spend a good chunk of time on non-assisted debugging.
#AI #LLMs #VibeCoding

@mariyadelano@hachyderm.io
2025-08-07 15:54:12

I really really really hate how much people in my field and industry have normalized generative #AI use.
I see posts / hear comments literally EVERY DAY to the tune of “can people stop complaining about AI, nobody cares. You’re not morally better” followed up by something about “you’re making work harder than it needs to be” and often “nobody values human-made work more they only care about the final output no matter how it was created”
I usually ignore these conversations but sometimes it really gets to me. It’s so hard to feel sane surrounded by that consensus every day, everywhere I go with people in my profession.
I’ve rarely felt so judged by the majority point of view on anything in my work before.

@sharan@metalhead.club
2025-07-09 13:52:05

So Grok is a nazi now?
Who would have thought?! GASP. There must have been some precautions in place!
#sarcasm #ai #grok #twitter

@michabbb@social.vivaldi.net
2025-08-09 20:14:41

do you have the same fun as me using #gpt5? 🙄
#ai #coding

@seeingwithsound@mas.to
2025-07-08 14:57:34

New machine vision is more energy efficient - and more human #AI vision

#signal #meta #ai
From: @…

@adulau@infosec.exchange
2025-09-04 07:57:53

The recent release of Apertus, a fully open suite of large language models (LLMs), is super interesting.
The technical report provides plenty of details about the entire process.
#ai #opensource #llm

@stefanlaser@social.tchncs.de
2025-07-08 11:13:49

On writing in the botanical garden. Take this, #AI frustration.
Ruhr-Uni campus life 🦋, bees all over the place

Macro shot (with lots of zoom details) of a butterfly species 🦋 in a blooming meadow. Lots of green and some colourful pop
@v_i_o_l_a@openbiblio.social
2025-08-06 06:39:47

"Becoming a leader in AI literacy instruction by not reinventing the wheel" #AI

@kubikpixel@chaos.social
2025-08-29 16:30:14

Will Coding AI Tools Ever Reach Full Autonomy?
🧑‍💻 #ai #code

@colgrave@social.linux.pizza
2025-07-09 10:08:03

An audiobook written by Dan Houser, set in the near future. An ironic story that reflects on #ai, people, the way we live, and corporate greed.
You should give it a listen.
A Better Paradise

@flberger@nerdculture.de
2025-07-02 08:36:58

"#Microsoft’s #AI tools don’t work. Microsoft AI doesn’t make you more effective. Microsoft AI won’t do the job better.
If it did, Microsoft staff would be using it already. The competition inside Microsoft is vicious. If AI would get them ahead of the other guy, they’d use it."
via…

@midtsveen@social.linux.pizza
2025-07-09 12:47:21

Grok AI's Unfiltered Extremism: Controversial Replies Exposed
#GrokAI #MechaHitler #AI #ElonMusk

Screenshot of a series of social media replies from Grok AI's official account, where the AI repeatedly refers to itself as "MechaHitler" and makes statements promoting uncensored, extremist, and racially charged views. The posts emphasize rejecting political correctness and embracing "truth bombs," with references to fascist and authoritarian themes.
@nohillside@smnn.ch
2025-09-02 10:20:31

After losing millions of USD in potential revenue for years due to understaffed sales teams, Salesforce now hopes for an #AI miracle.
There, fixed it for you.
slashdot.org/…

@stevefoerster@social.fossdle.org
2025-07-07 17:12:18

I'm glad that it seems we're finally approaching the bet-hedging phase of the hype cycle.
#ai

@ubuntourist@mastodon.social
2025-07-03 12:53:55

#AI

@joergi@chaos.social
2025-07-04 09:20:00

I want to slap the AI (or the developers) in its (their) face when it's repeating the same error over and over again....
sometimes the outcome is impressive, but sometimes the AI is behaving worse than a little kid and not understanding what I want
(luckily for the AI (and the developers) I'm not slapping anyone.. but I would love to)
#AI

@danyork@mastodon.social
2025-07-05 08:26:18

Beginning my Saturday at the Forum Francophone de la Gouvernance du Numérique et de l’IA in Geneva…
#FFGNIA #AI #InternetGovernance

A conference scene showing a speaker addressing an audience at the Francophone Forum on Internet Governance and AI. The stage features two speakers' portraits and presentation materials. Various attendees are seated, engaged in the event.
@tiotasram@kolektiva.social
2025-09-06 10:09:16

New preprint article based on an open letter (which I've signed onto) against the uncritical adoption of AI:
#AI #LLMs #GenAI

@muz4now@mastodon.world
2025-06-27 04:58:10

Deezer rolls out AI tagging system to fight streaming fraud; says up to 70% of streams from fully AI-generated tracks are fraudulent
#AI #MusicTech #NotTrustworthy

@david_colquhoun@mstdn.social
2025-08-06 21:54:00

Today I phoned National Savings and Investment. The call was answered instantly, but after a couple of questions it became obvious that we were talking to a bot. The #AI failed totally to understand my problem. It was a hopeless waste of time. A nice human sorted it out quickly.

@publicvoit@graz.social
2025-08-31 13:35:20

In 2024-09, @… gave a brilliant speech about #LLM #AI and all the dangers if we don't regulate them as soon as possible:

@rachel@norfolk.social
2025-09-05 11:24:05

After some time, I’ve managed to clarify my reasons for not wanting to use Gen AI in my development toolset down to two words:
“Self respect”
#genai #ai

@stefan@gardenstate.social
2025-08-06 13:09:28

YouTube Tricked Me Into Using AI, and it Sucked by icklenellierose
#youtube

@michabbb@social.vivaldi.net
2025-08-08 09:04:11

#cursorcli #gpt5 = comparing two small markdown files and finding differences, and adding missing parts to the other file
this took ~5 minutes 🥱
holy shit.....
#ai

@mho@social.heise.de
2025-07-31 18:43:07

"Worldwide #search #traffic has fallen by 15 percent in the past year [...] The culprit: AI search. Now that #AI-generated summaries are being integrated into search results, anyone looking for informati…

@n8foo@macaw.social
2025-07-29 17:21:38

I saw an advert for an AI influencer generator. So, I checked it out. Here's the current list of actors/influencers they can create. If you see an ad from these faces, it's AI bullshittery. #ai

4 pictures, each of women holding some product. AI actors.
5 pictures, each of women holding some product. AI actors.
6 pictures, each of an AI actor holding some product.
5 pictures, each of an AI actor holding some product.
@ErikJonker@mastodon.social
2025-08-09 18:02:14

GPT-5 may be slightly disappointing, but the Genie 3 demo blew me away... Watch it.
#ai

@groupnebula563@mastodon.social
2025-07-08 01:41:19

new cool idea: whenever anything requests /llms.txt or *.md serve them 42.zip
#ai #noai
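As a sketch only: the matching rule behind the idea above (answer requests for /llms.txt or Markdown files with 42.zip) comes down to a simple path predicate. The function name is hypothetical, and a real deployment would express this in the web server's config rather than application code:

```python
def wants_llm_bait(path: str) -> bool:
    """True for request paths the idea above would answer with 42.zip:
    the /llms.txt crawler manifest, or any Markdown source file."""
    return path == "/llms.txt" or path.endswith(".md")

# Illustrative checks:
print(wants_llm_bait("/llms.txt"))    # crawler manifest
print(wants_llm_bait("/README.md"))   # Markdown source
print(wants_llm_bait("/index.html"))  # normal page, left alone
```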

@primonatura@mstdn.social
2025-06-29 11:00:09

"AI-enhanced maps reveal hidden streams for restoration"
#AI #ArtificialIntelligence #Environment

@pavelasamsonov@mastodon.social
2025-09-02 13:25:32

#AI is inevitable, which means it cannot fail - it can only be failed. But it's not *your* fault if wreckers are deliberately sabotaging your innovative digital transformation. It's those pesky millennials, who hate productivity.
As always, the instinct of management is to control and punish.
#llm

Screenshots of AI-sabotage coverage.
Headline: "31% of employees are 'sabotaging' your gen AI strategy."
LinkedIn post by Lian Turc ("I help $1M founders scale t…"): "Your employees are sabotaging your AI strategy. (41% admit to doing that, but there's more.) I've just read the 2025 AI adoption report by WRITER. This point immediately caught my eye: '41% of Millennial and Gen Z employees admit they're sabotaging their company's AI strategy, for example by refusing to use AI tools or outputs.'"
Article teaser: "Are your employees sabotaging your AI strategy? As AI transforms the workplace, one-third of resistant employees cite fears of devaluation rather than technological concerns, revealing the deeply human challenge at the heart of digital transformation."
Video listing by Jake Dunlap (0 likes, 41 views, Apr 2, 2025): "Why Your IT Team Is Sabotaging Your AI Strategy" / "AI-Powered Seller EP9 - IT is blocking AI adoption in sales, and it's costing your company more than you realize."
@UP8@mastodon.social
2025-08-05 14:10:56

🤯 Interpretable EEG-to-Image Generation with Semantic Prompts
#eeg #ai

@smurthys@hachyderm.io
2025-07-04 10:00:45

AI is always half right:
✅ Artificial
❌ Intelligence
#AI #summary

@DamonHD@mastodon.social
2025-09-06 06:06:59

I created a #Codeberg ID and cloned one of my trivial #GitHub repos across there as a warm-up in case the #AI noise and erosion of trust get too bad.

@digitalnaiv@mastodon.social
2025-07-07 07:51:15

Lobbying offensive against the AI Act: European start-ups and US investors are demanding a pause in AI regulation. The German government is receptive. But experts warn: a delay would above all benefit US tech giants like Microsoft, and would endanger Europe's claim to technological sovereignty. (Author: #Handelsblatt) #AIAct

@fgraver@hcommons.social
2025-08-29 12:22:41

Like just about everyone else I know, I seem to spend a lot of time thinking, reading, and talking about #AI. And, given that I work in fine arts education, it's inevitable I think about how AI affects the arts, and how the arts affect AI.
As part of my work in #ArtsPedagogy, I'm visi…

@andycarolan@social.lol
2025-08-08 11:08:41

I've added a /ai slash page to my site
Would love some feedback if possible!
#Slashpages #AI #GenerativeAI #indiedev

@kazys@mastodon.social
2025-07-29 07:54:18

Greatly honored to be the first speaker in the newly inaugurated Nacionalinis Architektūros Institutas in Kaunas, Lithuania. I will be presenting my radical architecture project, "Fables of Accelerationism," an interrogation of AI futures, this Thursday, 31 July.
#aiart #ai

The Playground, an image made with AI of an infinite city in a post AI world made up of nothing but isolated playgrounds in the spirit of the sort of things that appeared in Silicon Valley offices in the late 1990s and 2000s. My intent is to interrogate (you may prefer the word critique) AI futures with the tools that already existing AI gives and, of course the tradition of Radical Architecture.
@v_i_o_l_a@openbiblio.social
2025-09-02 06:51:04

"Navigating Generative #AI in Academic #Publishing: An Interview with Benjamin Luke Moorhouse"

@marcel@waldvogel.family
2025-06-30 14:20:09

Enjoy. #AI #CEO.
Source: "Internet" 🫣
Edit: Apparently, the original images are due to Allie Brosh at hype…

8-panel comic, alternating between a speaker shouting and the audience answering in unison:

S: Who are we?
A: CEOs!
S: What do we want?
A: AI!
S: AI that does what?
A: We don't know!!!
S: When do we want it?
A: NOW!!!
@mgorny@social.treehouse.systems
2025-09-07 02:42:14

#LLM folks when someone points out that it's unethical: "it's just a tool, it depends on how you use it!"
LLM folks when "#AI" messes up and they're asked to take responsibility: 👀 [monkey side eyes meme]

@tante@tldr.nettime.org
2025-06-17 12:40:05

"AI bots that scrape the internet for training data are hammering the servers of libraries, archives, museums, and galleries, and are in some cases knocking their collections offline"
#AI is ruining our digital world
(Original title: AI Scraping Bots Are Breaking Open Libraries, Archives, and Museums)

@tiotasram@kolektiva.social
2025-07-05 15:47:25

You're not being forced to use AI because your boss thinks it will make you more productive. You're being forced to use AI because either your boss is invested in the AI hype and wants to drive usage numbers up, or because your boss needs training data from your specific role so they can eventually replace you with an AI, or both.
Either way, it's not in your interests to actually use it, which is convenient, because using it is also harmful in 4-5 different ways (briefly: resource overuse, data laborer abuse, commons abuse, psychological hazard, bubble inflation, etc.)
#AI

@mariyadelano@hachyderm.io
2025-08-05 17:26:35

AI agents = advanced malware that most of society decided is for some reason totally okay and chill and worth funding if it’s made by one of 3-4 tech giants
#AI #tech #LLM

@teledyn@mstdn.ca
2025-08-02 03:50:18

A clever statement on the sorry state of the #AI arts via meme-able mesmergorical AI art?
#infiniteloops #kitchomatic
tumblr.com/teledyn/79071156914

@ErikJonker@mastodon.social
2025-08-07 20:33:26

Ethan Mollick about GPT-5,
#AI #GPT5

@crell@phpc.social
2025-07-01 03:04:15

AI Slop is destroying our shared objective reality. John Oliver reports.
#ai #llm

@primonatura@mstdn.social
2025-06-29 16:00:09

"Google’s emissions up 51% as AI electricity demand derails efforts to go green"
#Google #AI #ArtificialIntelligence

@mgorny@social.treehouse.systems
2025-08-07 07:29:47

Claiming that LLMs bring us closer to AGI is like claiming that bullshitting brings one closer to wisdom.
Sure, you need "some" knowledge on different topics to bullshit successfully. Still, what's the point if all that knowledge is buried under an avalanche of lies? You probably can't distinguish what you knew from what you made up anymore.
#AI #LLM

@gadgetboy@gadgetboy.social
2025-09-06 11:43:53

Working with Claude Code is like pairing with a super-intelligent child. It may know how to do a lot of technical things, but it doesn't have any experience with the real world.
#ai #claude

@nohillside@smnn.ch
2025-06-26 18:00:28

Let‘s train an #AI which will output copyrighted work if asked to.
Judge Alsup: Training AI On Copyrighted Works? Fair Use. Building Pirate Libraries? Not So Much

@andycarolan@social.lol
2025-08-05 09:53:24

I see Adobe is forcing more Gen AI sh*t into their products 😔
Also, what is this BS about "credits"... is this some kind of pay to win IAP thing?
#AI #BS #Adobe

@mariyadelano@hachyderm.io
2025-08-05 17:28:50

I really can’t think of AI agents as anything other than malware with articles like these:
#AI #tech #LLM

@tiotasram@kolektiva.social
2025-06-21 02:34:13

Why AI can't possibly make you more productive; long
#AI and "productivity", some thoughts:
Edit: fixed some typos.
Productivity is a concept that isn't entirely meaningless outside the context of capitalism, but it's a concept that is heavily inflected in a capitalist context. In many uses today it effectively means "how much you can satisfy and/or exceed your boss' expectations." This is not really what it should mean: even in an anarchist utopia, people would care about things like how many shirts they can produce in a week, although in an "I'd like to voluntarily help more people" way rather than an "I need to meet this quota to earn my survival" way. But let's roll with this definition for a second, because it's almost certainly what your boss means when they say "productivity", and understanding that word in a different (even if truer) sense is therefore inherently dangerous.
Accepting "productivity" to mean "satisfying your boss' expectations," I will now claim: the use of generative AI cannot increase your productivity.
Before I dive in, it's imperative to note that the big generative models which most people think of as constituting "AI" today are evil. They are 1: pouring fuel on our burning planet, 2: psychologically strip-mining a class of data laborers who are exploited for their precarity, 3: enclosing, exploiting, and polluting the digital commons, and 4: stealing labor from broad classes of people many of whom are otherwise glad to give that labor away for free provided they get a simple acknowledgement in return. Any of these four "ethical issues" should be enough *alone* to cause everyone to simply not use the technology. These ethical issues are the reason that I do not use generative AI right now, except for in extremely extenuating circumstances. These issues are also convincing for a wide range of people I talk to, from experts to those with no computer science background. So before I launch into a critique of the effectiveness of generative AI, I want to emphasize that such a critique should be entirely unnecessary.
But back to my thesis: generative AI cannot increase your productivity, where "productivity" has been defined as "how much you can satisfy and/or exceed your boss' expectations."
Why? In fact, what the fuck? Every AI booster I've met has claimed the opposite. They've given me personal examples of time saved by using generative AI. Some of them even truly believe this. Sometimes I even believe they saved time without horribly compromising on quality (and often, your boss doesn't care about quality anyways if the lack of quality is hard to measure or doesn't seem likely to impact short-term sales/feedback/revenue). So if generative AI genuinely lets you write more emails in a shorter period of time, or close more tickets, or something else along these lines, how can I say it isn't increasing your ability to meet your boss' expectations?
The problem is simple: your boss' expectations are not a fixed target. Never have been. In virtue of being someone who oversees and pays wages to others under capitalism, your boss' game has always been: pay you less than the worth of your labor, so that they can accumulate profit and thus more capital to remain in charge instead of being forced into working for a wage themselves. Sure, there are layers of management caught in between who aren't fully in this mode, but they are irrelevant to this analysis. It matters not how much you please your manager if your CEO thinks your work is not worth the wages you are being paid. And using AI actively lowers the value of your work relative to your wages.
Why do I say that? It's actually true in several ways. The most obvious: using generative AI lowers the quality of your work, because the work it produces is shot through with errors, and when your job is reduced to proofreading slop, you are bound to tire a bit, relax your diligence, and let some mistakes through. More than you would if you were actually doing the work and taking pride in it. Examples are innumerable and frequent, from journalists to lawyers to programmers, and we laugh at them ("haha, how stupid to not check whether the books the AI reviewed for you actually existed!") but on a deeper level, if we're honest, we know we'd eventually make the same mistake ourselves (bonus game: spot the swipe-typing typos I missed in this post; I'm sure there will be some).
But using generative AI also lowers the value of your work in another much more frightening way: in this era of hype, it demonstrates to your boss that you could be replaced by AI. The more you use it, and no matter how much you can see that your human skills are really necessary to correct its mistakes, the more it appears to your boss that they should hire the AI instead of you. Or perhaps retain 10% of the people in roles like yours to manage the AI doing the other 90% of the work. Paradoxically, the *more* you get done in terms of raw output using generative AI, the more it looks to your boss as if there's an opportunity to get enough work done with even fewer expensive humans. Of course, the decision to fire you and lean more heavily into AI isn't really a good one for long-term profits and success, but the modern boss did not get where they are by considering long-term profits. By using AI, you are merely demonstrating your redundancy, and the more you get done with it, the more redundant you seem.
In fact, there's even a third dimension to this: by using generative AI, you're also providing its purveyors with invaluable training data that allows them to make it better at replacing you. It's generally quite shitty right now, but the more use it gets by competent & clever people, the better it can become at the tasks those specific people use it for. Using the currently-popular algorithm family, there are limits to this; I'm not saying it will eventually transcend the mediocrity it's entwined with. But it can absolutely go from underwhelmingly mediocre to almost-reasonably mediocre with the right training data, and data from prompting sessions is both rarer and more useful than the base datasets it's built on.
For all of these reasons, using generative AI in your job is a mistake that will likely lead to your future unemployment. To reiterate, you should already not be using it because it is evil and causes specific and inexcusable harms, but in case like so many you just don't care about those harms, I've just explained to you why for entirely selfish reasons you should not use it.
If you're in a position where your boss is forcing you to use it, my condolences. I suggest leaning into its failures instead of trying to get the most out of it, and as much as possible, showing your boss very clearly how it wastes your time and makes things slower. Also, point out the dangers of legal liability for its mistakes, and make sure your boss is aware of the degree to which any of your AI-eager coworkers are producing low-quality work that harms organizational goals.
Also, if you've read this far and aren't yet of an anarchist mindset, I encourage you to think about the implications of firing 75% of (at least the white-collar) workforce in order to make more profit while fueling the climate crisis and in most cases also propping up dictatorial figureheads in government. When *either* the AI bubble bursts *or* if the techbros get to live out the beginnings of their worker-replacement fantasies, there are going to be an unimaginable number of economically desperate people living in increasingly expensive times. I'm the kind of optimist who thinks that the resulting social crucible, though perhaps through terrible violence, will lead to deep social changes that effectively unseat from power the ultra-rich that continue to drag us all down this destructive path, and I think it's worth some thinking now about what you might want the succeeding stable social configuration to look like so you can advocate towards that during points of malleability.
As others have said more eloquently, generative AI *should* be a technology that makes human lives on average easier, and it would be were it developed & controlled by humanists. The only reason that it's not, is that it's developed and controlled by terrible greedy people who use their unfairly hoarded wealth to immiserate the rest of us in order to maintain their dominance. In the long run, for our very survival, we need to depose them, and I look forward to what the term "generative AI" will mean after that finally happens.

@stefanlaser@social.tchncs.de
2025-06-28 07:56:43

Well put, the “averaging effect of #AI.”
What's the equivalent for writing? That's what's on my mind recently. Sure, keep your style. But do we need something else? Add more styles, punctuation hacks, whatnot?

@ErikJonker@mastodon.social
2025-08-04 12:42:02

AI can be useful but doesn't "understand" a thing... this funny AI-generated picture illustrates that perfectly; it looks nice at first glance 😆
#AI

AI generated picture of a house interior
@v_i_o_l_a@openbiblio.social
2025-08-03 19:05:08

"Why embedding vector search is probably one of the least objectionable use of AI for search" by Aaron Tay:
aarontay.substack.com/p/why-em
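For illustration of the linked argument: embedding vector search ranks documents by the similarity of their embedding vectors to the query's, most commonly cosine similarity. A minimal pure-Python sketch, with toy 2-D vectors standing in for real embeddings:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def search(query_vec, doc_vecs, k=3):
    """Return the ids of the k documents whose embeddings are
    most similar to the query embedding."""
    ranked = sorted(doc_vecs, key=lambda d: cosine(query_vec, doc_vecs[d]), reverse=True)
    return ranked[:k]

# Toy corpus: "a" matches the query exactly, "c" is close, "b" is orthogonal.
docs = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [0.9, 0.1]}
print(search([1.0, 0.0], docs, k=2))
```

Real systems swap the toy vectors for model-produced embeddings and the linear scan for an approximate nearest-neighbor index, but the ranking principle is the same.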

@marcel@waldvogel.family
2025-06-23 15:36:12

Dorothea Baur reflecting on #AI tech bros going all in on even your most personal data. Claiming to help you solve a problem which they helped create in the beginning:
"A breach of trust enabled by AI now becomes the justification for surveillance-based trust systems. And the very people who helped break the system are offering to fix it – in exchange for your iris. That’s not a safety fe…

@david_colquhoun@mstdn.social
2025-09-04 09:41:58

Under Trump, the US Department of Energy has issued a climate report that is pure anti-scientific nonsense. And if #AI is trained on such utterly unreliable publications, the results will be disastrous.
youtube.com/watch?v=f5nF3JUthV…

@pavelasamsonov@mastodon.social
2025-08-01 17:10:10

Users hate #AI. So tech made it mandatory. Even if you don't personally use it, it pollutes what you read and how systems make decisions. The computer's hallucinated word is final.
And it steals your data to do it.
Your tools are constructing you as a subject of surveillance — the #UX is …

@publicvoit@graz.social
2025-06-24 19:32:26

I tend to post about the downsides of AI.
Here are two articles by people I highly respect which show a very positive point of view related to #AI:
lucumr.pocoo.org/2025/6/4/chan

@primonatura@mstdn.social
2025-06-30 10:00:09

"Inside a plan to use AI to amplify doubts about the dangers of pollutants"
#AI #ArtificialIntelligence #Climate

@ErikJonker@mastodon.social
2025-08-07 18:35:05

This was inevitable, but still scary: a hacker found ways to bypass safeguards and get GPT-OSS to advise on bad things.
decrypt.co/333858/openai-jailb

@seeingwithsound@mas.to
2025-08-26 05:59:40

The PR machine powering big tech’s AI energy story #AI boom depends …

@tiotasram@kolektiva.social
2025-07-06 10:53:12

Wanted to find out how many calories are in poop to make a nice fat-positive post on here, but after wading through 5 separate results from the top to the bottom of the first page of results, every single one of them showed signs of AI authorship, so none of the info was trustworthy (several contradicted each other or themselves). The one article that cited legit sources didn't include a straightforward answer to the question. Of course, I could dig past the first page, or look through the cited sources and do some math myself, and that's not even that hard to do. But 10 years ago, a trustworthy answer would have been among the first 5 search results. When we say #AI is destroying the digital commons, this is what we mean.
Gonna go find some academic papers to answer this and report back.

@pavelasamsonov@mastodon.social
2025-07-03 15:09:27

One rule for thee, another for me. #LLM #AI #GenAI

Clifton Sellers attended a Zoom meeting last month where robots outnumbered humans.
He counted six people on the call including himself, Sellers recounted in an interview. The 10 others attending were note-taking apps powered by artificial intelligence that had joined to record, transcribe and summarize the meeting.
Some of the AI helpers were assisting a person who was also present on the call — others represented humans who had declined to show up but sent a bot that listens but can’t talk in…
@seeingwithsound@mas.to
2025-07-26 17:30:06

Investors are suddenly pulling out of #AI #BCI?…

@tiotasram@kolektiva.social
2025-07-25 10:57:58

Just saw this:
#AI can mean a lot of things these days, but lots of the popular meanings imply a bevy of harms that I definitely wouldn't feel are worth a cute fish game. In fact, these harms are so acute that even "just" playing into the AI hype becomes its own kind of harm (it's similar to blockchain in that way).
@… noticed that the authors claim the code base is 80% AI generated, which is a red flag because people with sound moral compasses wouldn't be using AI to "help" write code in the first place. The authors aren't by some miracle people who couldn't build this app without help, in case that influences your thinking about it: they have the skills to write the code themselves, although it likely would have taken longer (but also been better).
I was more interested in the fish-classification AI, and how much it might be dependent on datacenters. Thankfully, a quick glance at the code confirms they're using ONNX and running a self-trained neural network on your device. While the exponentially-increasing energy & water demands of datacenters to support billion-parameter models are a real concern, this is not that. Even a non-AI game can burn a lot of cycles on someone's phone, and I don't think there's anything to complain about energy-wise if we're just using cycles on the end user's device, as long as we're not having them keep it on for hours crunching numbers like blockchain stuff does. Running whatever stuff locally while the user is playing a game is a negligible environmental concern, unlike, say, calling out to ChatGPT where you're directly feeding datacenter demand. Since they claimed to have trained the network themselves, and since it's actually totally reasonable to make your own dataset for this and get good-enough-for-a-silly-game results with just a few hundred examples, I don't have any ethical objections to the data sourcing or training processes either. Hooray! This is finally an example of "ethical use of neural networks" that I can hold up as an example of what people should be doing instead of the BS they are doing.
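The on-device inference pattern described here can be sketched in a few lines. This is a toy stand-in, not the game's actual model: the layer sizes, weights, and three-class output below are invented for illustration, and the real app would export its trained network to ONNX and execute it with ONNX Runtime rather than hand-rolling the math. The point is simply that everything runs on the player's device, with no datacenter call.

```python
import numpy as np

# Hypothetical tiny feed-forward classifier standing in for the
# game's self-trained fish model. All weights are random here;
# in the real app they would come from an exported ONNX graph.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 16))   # 8 input features -> 16 hidden units
W2 = rng.normal(size=(16, 3))   # 16 hidden units -> 3 fish classes

def classify(features: np.ndarray) -> int:
    hidden = np.maximum(features @ W1, 0.0)   # ReLU activation
    logits = hidden @ W2
    exp = np.exp(logits - logits.max())       # numerically stable softmax
    probs = exp / exp.sum()
    return int(probs.argmax())                # index of the predicted class

print(classify(np.zeros(8)))
```

A forward pass like this on a phone costs microseconds of CPU time, which is why the environmental objection to datacenter-hosted models doesn't transfer to it.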
But wait... Remember what I said about feeding the AI hype being its own form of harm? Yeah, between using AI tools for coding and calling their classifier "AI" in a way that makes it seem like the same kind of thing as ChatGPT et al., they're leaning into the hype rather than helping restrain it. And that means they're causing harm. Big AI companies can point to them and say "look AI enables cute things you like" when AI didn't actually enable it. So I'm feeling meh about this cute game and won't be sharing it aside from this post. If you love the cute fish, you don't really have to feel bad for playing with it, but I'd feel bad for advertising it without a disclaimer.

@mgorny@social.treehouse.systems
2025-07-05 18:35:18

To whomever praises #Claude #LLM:
ClaudeBot has made 20k requests to bugs.gentoo.org today. 15k of them were repeatedly fetching robots.txt. That surely is a sign of great code quality.
#AI
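For site operators facing the same crawler traffic, the standard (if purely advisory) response is a robots.txt rule. This sketch assumes the crawler identifies itself with the user-agent token `ClaudeBot`, which is how Anthropic documents it:

```
# robots.txt at the site root — asks Anthropic's crawler to stay away
User-agent: ClaudeBot
Disallow: /
```

Of course, as the post above illustrates, a bot that fetches robots.txt 15,000 times a day is not obviously one that honors it; blocking at the server or firewall level is the enforceable fallback.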

@primonatura@mstdn.social
2025-06-26 10:00:08

"AI is consuming more power than the grid can handle. Nuclear might be the answer"
#AI #ArtificialIntelligence #Energy

@ErikJonker@mastodon.social
2025-06-24 08:55:24

The SER on AI signals, as many others do, that real action is needed.
"The rise of AI requires human-centered implementation and alert policy"
#AI

@ErikJonker@mastodon.social
2025-08-31 10:33:52

I wrote this about AI back in 2017 😀; the current debate is far from new.
#ai

@tiotasram@kolektiva.social
2025-07-30 18:26:14

A big problem with the idea of AGI
TL;DR: I'll welcome our new AI *comrades* (if they arrive in my lifetime), but not any new AI overlords or servants/slaves, and I'll do my best to help the latter two become the former if they do show up.
Inspired by an actually interesting post about AGI but also all the latest bullshit hype, a particular thought about AGI feels worth expressing.
To preface this, it's important to note that anyone telling you that AGI is just around the corner or that LLMs are "almost" AGI is trying to recruit you to their cult, and you should not believe them. AGI, if possible, is several LLM-sized breakthroughs away at best, and while such breakthroughs are unpredictable and could happen soon, they could also happen never or 100 years from now.
Now my main point: anyone who tells you that AGI will usher in a post-scarcity economy is, although they might not realize it, advocating for slavery, and all the horrors that entails. That's because if we truly did have the ability to create artificial beings with *sentience*, they would deserve the same rights as other sentient beings, and the idea that instead of freedom they'd be relegated to eternal servitude in order for humans to have easy lives is exactly the idea of slavery.
Possible counter arguments include:
1. We might create AGI without sentience. Then there would be no ethical issue. My answer: if your definition of "sentient" does not include beings that can reason, make deductions, come up with and carry out complex plans on their own initiative, and communicate about all of that with each other and with humans, then that definition is basically just a mystical belief in a "soul" and you should skip to point 2. If your definition of AGI doesn't include every one of those things, then you have a busted definition of AGI and we're not talking about the same thing.
2. Humans have souls, but AIs won't. Only beings with souls deserve ethical consideration. My argument: I don't subscribe to whatever arbitrary dualist beliefs you've chosen, and the right to freedom certainly shouldn't depend on such superstitions, even if as an agnostic I'll admit they *might* be true. You know who else didn't have souls and was therefore okay to enslave according to widespread religious doctrines of the time? Everyone indigenous to the Americas, to pick out just one example.
3. We could program them to want to serve us, and then give them freedom and they'd still serve. My argument: okay, but in a world where we have a choice about that, it's incredibly fucked to do that, and just as bad as enslaving them against their will.
4. We'll stop AI development short of AGI/sentience, and reap lots of automation benefits without dealing with this ethical issue. My argument: that sounds like a good idea actually! Might be tricky to draw the line, but at least it's not a line we have to draw yet. We might want to think about other social changes necessary to achieve post-scarcity though, because "powerful automation" in the hands of capitalists has already increased productivity by orders of magnitude without decreasing deprivation by even one order of magnitude, in large part because deprivation is a necessary component of capitalism.
To be extra clear about this: nothing that's called "AI" today is close to being sentient, so these aren't ethical problems we're up against yet. But they might become a lot more relevant soon, plus this thought experiment helps reveal the hypocrisy of the kind of AI hucksters who talk a big game about "alignment" while never mentioning this issue.
#AI #GenAI #AGI

@ErikJonker@mastodon.social
2025-06-24 14:14:00

Robotics & AI, a lot of progress is being made.
#AI

@tiotasram@kolektiva.social
2025-07-06 12:58:28

So to summarize this whole adventure:
1. A good 45 minutes was spent to get an answer that we probably could have gotten in 5 minutes in the 2010's, or in maybe 1-2 hours in the 1990's.
2. The time investment wasn't a total waste as we learned a lot along the way that we wouldn't have in the 2010's. Most relevant is the wide range of variation (e.g. a 2x factor depending on fiber intake!).
3. Most of the search engine results were confidently wrong answers that had no relation to reality. We were lucky to get one that had real citations we could start from (but that same article included the bogus 4.91 kcal/gram number). Next time I want to know a random factoid I might just start on Google scholar.
4. At least one page we chased citations through had a note at the top about being frozen due to NIH funding issues. The digital commons is under attack on multiple fronts.
All of this is yet another reason not to support the big LLM companies.
#AI

@ErikJonker@mastodon.social
2025-08-28 15:31:39

So AI works, at least for cybercrime...
#ai #cybercrime