Tootfinder

Opt-in global Mastodon full text search. Join the index!

@tiotasram@kolektiva.social
2025-07-10 13:31:32

"As we approach the coming jobs cliff, we're entering a period where a college isn't going to be worth it for the majority of people, since AI will take over most white-collar jobs. Combined with the demographic cliff, the entire higher education system will crumble."
This is the kind of statement you don't hear that much from sub-CEO-level #AI boosters, because it's awkward for them to admit that the tech they think is improving their life is going to be disastrous for society. Or if they do admit this, they spin it like it's a good thing (don't get me wrong, tuition is ludicrously high and higher education absolutely could be improved by a wholesale reinvention, but the potential AI-fueled collapse won't be an improvement).
I'm in the "anti-AI" crowd myself, and I think the current tech is in a hype bubble that will collapse before we see wholesale replacement of white-collar jobs, with a re-hiring to come that will somewhat make up for the current decimation. There will still be a lot of fallout for higher ed (and hopefully some productive transformation), but it might not be apocalyptic.
Fun question to ask the next person who extols the virtues of using generative AI for their job: "So how long until your boss can fire you and use the AI themselves?"
The following ideas are contradictory:
1. "AI is good enough to automate a lot of mundane tasks."
2. "AI is improving a lot so those pesky issues will be fixed soon."
3. "AI still needs supervision so I'm still needed to do the full job."

@theDuesentrieb@social.linux.pizza
2025-06-10 08:20:59

I mean, nothing yet beats the Brazilian Institute of Oriental Studies, but the trend is there and it fits
velvetshark.com/ai-company-log

@ErikJonker@mastodon.social
2025-07-11 11:25:21

#AI

@pavelasamsonov@mastodon.social
2025-06-11 04:03:18

There is a lot of conflict between developers who say #LLM tools are making them more productive, and developers who want to quit and move to a cabin in the woods.
Recently I discovered a possible reason why. #AI is just a bad fit for conventional, reality-based models of value creation like

@stefanlaser@social.tchncs.de
2025-07-08 08:13:01

The current stage of #AI writing: use AI to humanize AI to trick AI detection.
Or how “semiconductor” turns into a “0.5 conductor.” Brilliant.
Time to rethink the university

@tezoatlipoca@mas.to
2025-08-11 02:53:18

#AI is mostly crap. But it's good at some very specific things.
I'm going to train an AI model on all the episodes of #thewestwing and all it's going to do is detect every "Walk - and - Talk" scene and plot it out on a floorplan of the West Wing set in the style of one of Billy's dotted-line adv…

@thomasfuchs@hachyderm.io
2025-08-10 02:23:01

Welcome back to AI Terms Explained!
“human-like intelligence”: how a human who happens to be in a coma would respond
Tune in again next time to AI Terms Explained!
#aitermsexplained

@usul@piaille.fr
2025-06-11 11:31:32

Focus and Context and LLMs | Taras' Blog on AI, Perf, Hacks
#AI

@ErikJonker@mastodon.social
2025-07-10 14:17:00

A new day a new AI benchmark.
#ai #benchmarks

@matzeschmidt@masto.ai
2025-07-08 09:03:24

"#Airbus is an employer in the city and the region."

@threeofus@mstdn.social
2025-08-10 07:23:29

#AI is a great example of the Tragedy of the Commons. We all know it’s trashing the planet but we carry on using it because ‘everyone else is’.
#Environment

@vrandecic@mas.to
2025-07-09 09:48:12

I love how smart these AI technologies are. They understand that "bigger" for cities can be ambiguous, referring to either the population or the area. It's also great that it's showing the sources in the upper corner, and displaying the basic facts.
Small minus on consistency and correctness, but other than that, really a great answer.
#google

Google Search AI Overview screenshot.
Query: is zurich bigger than Stuttgart? 
Answer: Yes, Zurich is bigger than Stuttgart, both in terms of population and land area. Stuttgart has a population of around 632,000, while Zurich's population is around 434,000. Zurich's urban area is also larger, covering 91.88 square kilometers compared to Stuttgart's 207.35 square kilometers.
@michabbb@social.vivaldi.net
2025-07-10 15:41:45

ClaudeCode: When CLI is Too Much #AI #opensource #developer 🤯
😅 It's pretty funny what pops up in the

@frankstohl@mastodon.social
2025-07-07 18:13:51

Apple’s newest AI study unlocks street navigation for blind users #apple #ai 9to5ma…

@Duckbill4994@social.linux.pizza
2025-06-10 09:23:38

Want to be able to recognize #AI writing?
youtu.be/9Ch4a6ffPZY?si=1u72Rs

@shaun@mastodon.xyz
2025-07-09 02:23:07

Got slammed by an unidentified but certainly "#AI"-related #distributed #crawler this week; it drove one site's traffic to 10× average. Today I tired of playing Whac-a-Mole and blocked the two bigge…

Output of a cut, sort, uniq, sort -n job on an Apache format access_log file. It shows around 30K entries per day on July 1, 2, 3, 4. Then suddenly ramping up to 200K and nearly 400K entries on subsequent days. The extra traffic is all from some asshole's "AI" crawler.
Part of an iptables listing from a Linux server. It shows some of my POLICY_DROP_WEB chains which block abusive traffic to 80,443 from various sources. Two rules added today, one for AS136907 (Huawei Cloud) and one for AS45899 (VNPT) have already blocked around 35,000 requests apiece.
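The log-counting pipeline and blocking approach described in the alt texts above can be sketched roughly as follows. This is a minimal sketch, not the poster's actual commands: the sample log lines, file path, chain name, and CIDR range are all illustrative assumptions.

```shell
#!/bin/sh
# Sample lines in Apache "combined" log format; field 1 is the client IP.
cat > /tmp/access_log.sample <<'EOF'
198.51.100.7 - - [05/Jul/2025:10:00:01 +0000] "GET / HTTP/1.1" 200 512
198.51.100.7 - - [05/Jul/2025:10:00:02 +0000] "GET /posts HTTP/1.1" 200 4096
203.0.113.9 - - [05/Jul/2025:10:00:03 +0000] "GET /feed HTTP/1.1" 200 2048
EOF

# The cut | sort | uniq -c | sort -n job: requests per client IP, busiest last.
cut -d' ' -f1 /tmp/access_log.sample | sort | uniq -c | sort -n

# Dropping an abusive network's traffic to ports 80/443 would then look
# roughly like this (requires root; chain name and CIDR are placeholders):
#   iptables -N POLICY_DROP_WEB
#   iptables -A POLICY_DROP_WEB -s 203.0.113.0/24 -p tcp -m multiport --dports 80,443 -j DROP
#   iptables -I INPUT -j POLICY_DROP_WEB
```

In practice you would map the top offending IPs to their network's announced prefixes (e.g. via a whois lookup of the AS) before adding DROP rules, rather than blocking single addresses one at a time.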
@ErikJonker@mastodon.social
2025-08-11 13:58:25

Gary Marcus has a point in asking that neurosymbolic AI be given a chance. At the same time, something like Genie 3 shows that we have not hit a wall yet, at least with world models.
#AI #GaryMarcus #Genie3

@underdarkGIS@fosstodon.org
2025-06-10 17:40:51

#AIFactoryAustria is hiring. Great opportunity to come and work in #Vienna:
ai-at.eu/en/#jobs

@Leon@liker.social
2025-06-06 02:26:37

The Hidden Cost of AI Coding #Ai,

@andycarolan@social.lol
2025-08-08 17:20:58

"Voice AI infrastructure company helping businesses instantly connect with leads using real-time, human-sounding agents."
#Dystopia #AI

@crell@phpc.social
2025-06-09 20:23:02

Apparently, the way to get to a human on the #Simplifi support chat instead of the Artificial Stupidity bot is to be rude to the Artificial Stupidity bot. Then it will figure out you're frustrated and offer to refer you to a human.
This is, of course, not what the bot or the written instructions say you need to do, but it's what works.

@primonatura@mstdn.social
2025-06-11 14:00:26

"New UK AI datacentre could cause five times emissions of Birmingham airport"
#UK #UnitedKingdom #Emissions #AI

@mgorny@social.treehouse.systems
2025-07-11 17:38:03

Anybody using #Rsyslog? You may want to reconsider.
"rsyslog Goes AI First — A New Chapter Begins"
#Linux #AI #LLM

@seeingwithsound@mas.to
2025-07-08 14:57:34

New machine vision is more energy efficient - and more human #AI vision

@mariyadelano@hachyderm.io
2025-08-07 15:54:12

I really really really hate how much people in my field and industry have normalized generative #AI use.
I see posts / hear comments literally EVERY DAY to the tune of “can people stop complaining about AI, nobody cares. You’re not morally better” followed up by something about “you’re making work harder than it needs to be” and often “nobody values human-made work more they only care about the final output no matter how it was created”
I usually ignore these conversations but sometimes it really gets to me. It’s so hard to feel sane surrounded by that consensus every day, everywhere I go with people in my profession.
I’ve rarely felt so judged by the majority point of view on anything in my work before.

@jlpiraux@wallonie-bruxelles.social
2025-08-07 07:30:16

What can go wrong?
Google's healthcare AI model, dubbed Med-Gemini, invents a body part — "basilar ganglia" — that simply doesn't exist in the human body.
#AI

@sharan@metalhead.club
2025-07-09 13:52:05

So Grok is a nazi now?
Who would have thought?! GASP. There must have been some precautions in place!
#sarcasm #ai #grok #twitter

@mho@social.heise.de
2025-05-28 20:36:40

"In Empire of #AI, journalist Karen Hao writes about the rise of #OpenAI and the impacts of AI around the world. Below is an extract from the book on the effects on Chile's mineral reserves and water resources."

@ErikJonker@mastodon.social
2025-07-06 19:29:10

Interesting, letting AI models cooperate.
#ai #sakana

@stefanlaser@social.tchncs.de
2025-08-07 12:47:55

"What works in #India will scale better everywhere else. Naturally, the country is a battleground for #AI search."
This is fishing for data and the next #enshittification to roll ou…

@michabbb@social.vivaldi.net
2025-08-09 20:14:41

are you having as much fun as I am using #gpt5? 🙄
#ai #coding

@flberger@nerdculture.de
2025-07-02 08:36:58

"#Microsoft’s #AI tools don’t work. Microsoft AI doesn’t make you more effective. Microsoft AI won’t do the job better.
If it did, Microsoft staff would be using it already. The competition inside Microsoft is vicious. If AI would get them ahead of the other guy, they’d use it."
via…

@muz4now@mastodon.world
2025-06-27 04:58:10

Deezer rolls out AI tagging system to fight streaming fraud; says up to 70% of streams from fully AI-generated tracks are fraudulent
#AI #MusicTech #NotTrustworthy

@ubuntourist@mastodon.social
2025-07-03 12:53:55

#AI

@pavelasamsonov@mastodon.social
2025-07-07 13:02:04

Managers were starting to understand that velocity on its own has no value. But then along came #AI and said "but what if we made that velocity 10x?" and they fell for it all over again - because they only have a surface level understanding of the work.
The logic of the feature factory has permitted #genAI

@aral@mastodon.ar.al
2025-07-10 06:33:45

“Let them eat bullshit”
– Mark Zuckerberg, probably
#AI #habitatLoss #resourceDepletion #BigTech

@v_i_o_l_a@openbiblio.social
2025-08-06 06:39:47

"Becoming a leader in AI literacy instruction by not reinventing the wheel" #AI

@tiotasram@kolektiva.social
2025-08-04 15:49:39

Should we teach vibe coding? Here's why not.
2/2
To address the bigger question I started with ("should we teach AI-"assisted" coding?"), my answer is: "No, except enough to show students directly what its pitfalls are." We have little enough time as it is to cover the core knowledge that they'll need, which has become more urgent now that they're going to be expected to clean up AI bugs and they'll have less time to develop an understanding of the problems they're supposed to be solving. The skill of prompt engineering & other skills of working with AI are relatively easy to pick up on your own, given a decent not-even-mathematical understanding of how a neural network works, which is something we should be giving to all students, not just our majors.
Reasonable learning objectives for CS majors might include explaining what types of bugs an AI "assistant" is most likely to introduce, explaining the difference between software engineering and writing code, explaining why using an AI "assistant" is likely to violate open-source licenses, listing at least three independent ethical objections to contemporary LLMs and explaining the evidence for/reasoning behind them, explaining why we should expect AI "assistants" to be better at generating code from scratch than at fixing bugs in existing code (and why they'll confidently "claim" to have fixed problems they haven't), and even fixing bugs in AI generated code (without AI "assistance").
If we lived in a world where the underlying environmental, labor, and data commons issues with AI weren't as bad, or if we could find and use systems that effectively mitigate these issues (there's lots of piecemeal progress on several of these) then we should probably start teaching an elective on coding with an assistant to students who have mastered programming basics, but such a class should probably spend a good chunk of time on non-assisted debugging.
#AI #LLMs #VibeCoding

@joergi@chaos.social
2025-07-04 09:20:00

I want to slap the AI (or the developers) in its (their) face, when it's repeating the same error over and over again....
sometimes the outcome is impressive, but sometimes the AI is behaving worse than a little kid and not understanding what I want
(luckily for the AI (and the developers) I'm not slapping anyone.. but I would love to)
#AI

@n8foo@macaw.social
2025-07-29 17:21:38

I saw an advert for an AI influencer generator. So, I checked it out. Here's the current list of actors/influencers they can create. If you see an ad from these faces, it's AI bullshittery. #ai

4 pictures, each of women holding some product. AI actors.
5 pictures, each of women holding some product. AI actors.
6 pictures, each of an AI actor holding some product.
5 pictures, each of an AI actor holding some product.
@mgorny@pol.social
2025-08-08 13:28:26

The "intelligence" in artificial intelligence is the idea that I need a large language model to generate a "summary" of an email that consists of one short sentence.
#AI #LLM

@danyork@mastodon.social
2025-07-05 08:26:18

Beginning my Saturday at the Forum Francophone de la Gouvernance du Numérique et de l’IA in Geneva…
#FFGNIA #AI #InternetGovernance

A conference scene showing a speaker addressing an audience at the Francophone Forum on Internet Governance and AI. The stage features two speakers' portraits and presentation materials. Various attendees are seated, engaged in the event.
@drbruced@aus.social
2025-06-06 05:48:21

Can I just say how much I love* opening up a PDF of my book manuscript for the purposes of proofreading it, to have Adobe offer to summarise this "long document"?
#AI #AdobeSucks

a popup message that says "This appears to be a long document. Save time by reading a summary". The presence of a star icon suggests this might be an AI "feature"
@david_colquhoun@mstdn.social
2025-08-06 21:54:00

Today I phoned National Savings and Investment. The call was answered instantly, but after a couple of questions it became obvious that we were talking to a bot. The #AI failed totally to understand my problem. It was a hopeless waste of time. A nice human sorted it out quickly.

@alecsargent@social.linux.pizza
2025-08-08 14:13:33

The state of A.I.
#ai

A screenshot of the last three articles of an RSS feed, all coincidentally talking about how AI impacts life.
@ELLIOTTCABLE@functional.cafe
2025-06-08 17:19:42

I’m unreasonably fucking pissed.
An r/me_irlgbt moderator banned me, and is now accusing me of being an A.I. … … … because I use fucking emdashes and ellipses.
#typography #AI #nightmaretimeline

a screenshot of a Reddit-messaging thread:

v me_irlgbt @ • 1d
Hi, mod that banned you in the first place here. What
tool did you use to write your comment?
v elliottcable • 1d
Uh, my fingers, on my iPhone. Although I guess in
2025 we're past being able to prove that.
Darkest fucking timeline.
• me_irlgbt !
• 1d
Are you telling me that you're the sole human that
actually types em dashes and ellipsis characters?
••.
• elliottcable • 1m
I really shouldn't be wasting my time on this thread,
but go…
@whophd@ioc.exchange
2025-06-09 03:00:58

#AI #LLM technology isn’t like a cal…

@colgrave@social.linux.pizza
2025-07-09 10:08:03

An audiobook written by Dan Houser, set in the not-too-distant future. An ironic story that reflects on #ai, people, the way we live, and corporate greed.
You should give it a listen.
A Better Paradise

@brentsleeper@sfba.social
2025-06-02 20:14:27

First #AI came for the artists, and I did not speak out—
Because I did not make art.
Then AI came for the coders, and I did not speak out—
Because I did not code.
Then AI came for the shitposters, and I did not speak out—
Because I did not post shit.
Then AI came for the #coffee

Photograph, in desaturated black and brown tones, of what appears to be temporary drywall paneling enclosing an area of a mall or other interior space that is undergoing construction or renovation. The paneling is painted black and is stenciled with stark white, sans-serif lettering that reads “PREMIUM ROBOTIC COFFEE COMING SOON.”
@marcel@waldvogel.family
2025-05-30 13:33:52

Google has been downgrading the search results of smaller web sites offering human-written quality information, claims Nate Hake of Travel Lemming in a long explanation, citing many sources and developments.
Many of these changes appear to happen when #Google embraced #AI and removed the "hum…

@midtsveen@social.linux.pizza
2025-07-09 12:47:21

Grok AI's Unfiltered Extremism: Controversial Replies Exposed
#GrokAI #MechaHitler #AI #ElonMusk

Screenshot of a series of social media replies from Grok AI's official account, where the AI repeatedly refers to itself as "MechaHitler" and makes statements promoting uncensored, extremist, and racially charged views. The posts emphasize rejecting political correctness and embracing "truth bombs," with references to fascist and authoritarian themes.
@nohillside@smnn.ch
2025-06-26 18:00:28

Let's train an #AI which will output copyrighted work if asked to.
Judge Alsup: Training AI On Copyrighted Works? Fair Use. Building Pirate Libraries? Not So Much

@askans@bonn.social
2025-05-30 04:23:56

#AI impact on work force: killing jobs, enhancing jobs or what?
Interesting read: ineteconomics.org/perspectives

@ErikJonker@mastodon.social
2025-07-09 13:39:34

Seeking Deeper: Assessing China’s AI Security Ecosystem.
#AI

@groupnebula563@mastodon.social
2025-07-08 01:41:19

new cool idea: whenever anything requests /llms.txt or *.md serve them 42.zip
#ai #noai

@stevefoerster@social.fossdle.org
2025-07-07 17:12:18

I'm glad that it seems we're finally approaching the bet-hedging phase of the hype cycle.
#ai

@stefan@gardenstate.social
2025-08-06 13:09:28

YouTube Tricked Me Into Using AI, and it Sucked by icklenellierose
#youtube

@kazys@mastodon.social
2025-07-29 07:54:18

Greatly honored to be the first speaker in the newly inaugurated Nacionalinis Architektūros Institutas in Kaunas, Lithuania. I will be presenting my radical architecture project, "Fables of Accelerationism," an interrogation of AI futures, this Thursday, 31 July.
#aiart #ai

The Playground, an image made with AI of an infinite city in a post AI world made up of nothing but isolated playgrounds in the spirit of the sort of things that appeared in Silicon Valley offices in the late 1990s and 2000s. My intent is to interrogate (you may prefer the word critique) AI futures with the tools that already existing AI gives and, of course the tradition of Radical Architecture.
@poppastring@dotnet.social
2025-06-05 23:05:17

GenAI is the new Offshoring #ai #llm
ardalis.com/genai-is-the-new-o

@UP8@mastodon.social
2025-08-05 14:10:56

🤯 Interpretable EEG-to-Image Generation with Semantic Prompts
#eeg #ai

@mgorny@social.treehouse.systems
2025-08-08 13:27:37

The "intelligence" in #AI stands for the idea that I need a #LLM to "summarize" a mail that consists of literally a single short sentence.

@tiotasram@kolektiva.social
2025-06-21 02:34:13

Why AI can't possibly make you more productive; long
#AI and "productivity", some thoughts:
Edit: fixed some typos.
Productivity is a concept that isn't entirely meaningless outside the context of capitalism, but it's a concept that is heavily inflected in a capitalist context. In many uses today it effectively means "how much you can satisfy and/or exceed your boss' expectations." This is not really what it should mean: even in an anarchist utopia, people would care about things like how many shirts they can produce in a week, although in an "I'd like to voluntarily help more people" way rather than an "I need to meet this quota to earn my survival" way. But let's roll with this definition for a second, because it's almost certainly what your boss means when they say "productivity", and understanding that word in a different (even if truer) sense is therefore inherently dangerous.
Accepting "productivity" to mean "satisfying your boss' expectations," I will now claim: the use of generative AI cannot increase your productivity.
Before I dive in, it's imperative to note that the big generative models which most people think of as constituting "AI" today are evil. They are 1: pouring fuel on our burning planet, 2: psychologically strip-mining a class of data laborers who are exploited for their precarity, 3: enclosing, exploiting, and polluting the digital commons, and 4: stealing labor from broad classes of people many of whom are otherwise glad to give that labor away for free provided they get a simple acknowledgement in return. Any of these four "ethical issues" should be enough *alone* to cause everyone to simply not use the technology. These ethical issues are the reason that I do not use generative AI right now, except for in extremely extenuating circumstances. These issues are also convincing for a wide range of people I talk to, from experts to those with no computer science background. So before I launch into a critique of the effectiveness of generative AI, I want to emphasize that such a critique should be entirely unnecessary.
But back to my thesis: generative AI cannot increase your productivity, where "productivity" has been defined as "how much you can satisfy and/or exceed your boss' expectations."
Why? In fact, what the fuck? Every AI booster I've met has claimed the opposite. They've given me personal examples of time saved by using generative AI. Some of them even truly believe this. Sometimes I even believe they saved time without horribly compromising on quality (and often, your boss doesn't care about quality anyways if the lack of quality is hard to measure or doesn't seem likely to impact short-term sales/feedback/revenue). So if generative AI genuinely lets you write more emails in a shorter period of time, or close more tickets, or something else along these lines, how can I say it isn't increasing your ability to meet your boss' expectations?
The problem is simple: your boss' expectations are not a fixed target. Never have been. In virtue of being someone who oversees and pays wages to others under capitalism, your boss' game has always been: pay you less than the worth of your labor, so that they can accumulate profit and thus more capital to remain in charge instead of being forced into working for a wage themselves. Sure, there are layers of management caught in between who aren't fully in this mode, but they are irrelevant to this analysis. It matters not how much you please your manager if your CEO thinks your work is not worth the wages you are being paid. And using AI actively lowers the value of your work relative to your wages.
Why do I say that? It's actually true in several ways. The most obvious: using generative AI lowers the quality of your work, because the work it produces is shot through with errors, and when your job is reduced to proofreading slop, you are bound to tire a bit, relax your diligence, and let some mistakes through. More than you would have if you were actually doing and taking pride in the work. Examples are innumerable and frequent, from journalists to lawyers to programmers, and we laugh at them "haha how stupid to not check whether the books the AI reviewed for you actually existed!" but on a deeper level if we're honest we know we'd eventually make the same mistake ourselves (bonus game: spot the swipe-typing typos I missed in this post; I'm sure there will be some).
But using generative AI also lowers the value of your work in another much more frightening way: in this era of hype, it demonstrates to your boss that you could be replaced by AI. The more you use it, and no matter how much you can see that your human skills are really necessary to correct its mistakes, the more it appears to your boss that they should hire the AI instead of you. Or perhaps retain 10% of the people in roles like yours to manage the AI doing the other 90% of the work. Paradoxically, the *more* you get done in terms of raw output using generative AI, the more it looks to your boss as if there's an opportunity to get enough work done with even fewer expensive humans. Of course, the decision to fire you and lean more heavily into AI isn't really a good one for long-term profits and success, but the modern boss did not get where they are by considering long-term profits. By using AI, you are merely demonstrating your redundancy, and the more you get done with it, the more redundant you seem.
In fact, there's even a third dimension to this: by using generative AI, you're also providing its purveyors with invaluable training data that allows them to make it better at replacing you. It's generally quite shitty right now, but the more use it gets by competent & clever people, the better it can become at the tasks those specific people use it for. Using the currently-popular algorithm family, there are limits to this; I'm not saying it will eventually transcend the mediocrity it's entwined with. But it can absolutely go from underwhelmingly mediocre to almost-reasonably mediocre with the right training data, and data from prompting sessions is both rarer and more useful than the base datasets it's built on.
For all of these reasons, using generative AI in your job is a mistake that will likely lead to your future unemployment. To reiterate, you should already not be using it because it is evil and causes specific and inexcusable harms, but in case like so many you just don't care about those harms, I've just explained to you why for entirely selfish reasons you should not use it.
If you're in a position where your boss is forcing you to use it, my condolences. I suggest leaning into its failures instead of trying to get the most out of it, and as much as possible, showing your boss very clearly how it wastes your time and makes things slower. Also, point out the dangers of legal liability for its mistakes, and make sure your boss is aware of the degree to which any of your AI-eager coworkers are producing low-quality work that harms organizational goals.
Also, if you've read this far and aren't yet of an anarchist mindset, I encourage you to think about the implications of firing 75% of (at least the white-collar) workforce in order to make more profit while fueling the climate crisis and in most cases also propping up dictatorial figureheads in government. When *either* the AI bubble bursts *or* if the techbros get to live out the beginnings of their worker-replacement fantasies, there are going to be an unimaginable number of economically desperate people living in increasingly expensive times. I'm the kind of optimist who thinks that the resulting social crucible, though perhaps through terrible violence, will lead to deep social changes that effectively unseat from power the ultra-rich that continue to drag us all down this destructive path, and I think it's worth some thinking now about what you might want the succeeding stable social configuration to look like so you can advocate towards that during points of malleability.
As others have said more eloquently, generative AI *should* be a technology that makes human lives on average easier, and it would be were it developed & controlled by humanists. The only reason that it's not, is that it's developed and controlled by terrible greedy people who use their unfairly hoarded wealth to immiserate the rest of us in order to maintain their dominance. In the long run, for our very survival, we need to depose them, and I look forward to what the term "generative AI" will mean after that finally happens.

@v_i_o_l_a@openbiblio.social
2025-08-03 19:05:08

"Why embedding vector search is probably one of the least objectionable use of AI for search" by Aaron Tay:
aarontay.substack.com/p/why-em

@mariyadelano@hachyderm.io
2025-08-05 17:26:35

AI agents = advanced malware that most of society decided is for some reason totally okay and chill and worth funding if it’s made by one of 3-4 tech giants
#AI #tech #LLM

@mho@social.heise.de
2025-07-31 18:43:07

"Worldwide #search #traffic has fallen by 15 percent in the past year [...] The culprit: AI search. Now that #AI-generated summaries are being integrated into search results, anyone looking for informati…

@primonatura@mstdn.social
2025-06-29 11:00:09

"AI-enhanced maps reveal hidden streams for restoration"
#AI #ArtificialIntelligence #Environment

@andycarolan@social.lol
2025-08-05 09:53:24

I see Adobe is forcing more Gen AI sh*t into their products 😔
Also, what is this BS about "credits"... is this some kind of pay to win IAP thing?
#AI #BS #Adobe

@stefanlaser@social.tchncs.de
2025-07-08 11:13:49

On writing in the botanical garden. Take this, #AI frustration.
Ruhr-Uni campus life 🦋, bees all over the place

Macro shot (with lots of zoom details) of a butterfly species 🦋 in a blooming meadow. Lots of green and some colourful pop
@tiotasram@kolektiva.social
2025-07-05 15:47:25

You're not being forced to use AI because your boss thinks it will make you more productive. You're being forced to use AI because either your boss is invested in the AI hype and wants to drive usage numbers up, or because your boss needs training data from your specific role so they can eventually replace you with an AI, or both.
Either way, it's not in your interests to actually use it, which is convenient, because using it is also harmful in 4-5 different ways (briefly: resource overuse, data laborer abuse, commons abuse, psychological hazard, bubble inflation, etc.)
#AI

@nohillside@smnn.ch
2025-06-06 10:53:44

Progress, baby! #AI
Microsoft Redesigns Office Portal to Prioritize Copilot – Pixel Envy pxlnv.com/linklog/microsoft-co

@seeingwithsound@mas.to
2025-07-26 17:30:06

Investors are suddenly pulling out of #AI #BCI?…

@michabbb@social.vivaldi.net
2025-06-06 19:32:03

#AI Speech to Text for your Desktop 🎙️ 🗣️
superwhisper.com/ - 85$/year (lifetime: 250$) 💲💲

@pavelasamsonov@mastodon.social
2025-05-31 17:52:01

For years, execs and managers were able to get away with pretending to do some kind of job - and #AI has greatly accelerated their ability to pretend.
Unfortunately, that led to garbage strategy and garbage execution. The quality of #UX suffered. The stability, performance, and security of code degraded…

@ErikJonker@mastodon.social
2025-08-07 20:33:26

Ethan Mollick about GPT-5,
#AI #GPT5

@primonatura@mstdn.social
2025-06-29 16:00:09

"Google’s emissions up 51% as AI electricity demand derails efforts to go green"
#Google #AI #ArtificialIntelligence

@mariyadelano@hachyderm.io
2025-08-05 17:28:50

I really can’t think of AI agents as anything other than malware with articles like these:
#AI #tech #LLM

@stefanlaser@social.tchncs.de
2025-06-28 07:56:43

Well put, the “averaging effect of #AI.”
What’s the equivalent for writing? That’s what’s on my mind recently. Sure, keep your style. But do we need smth else? Add more styles, punctuation hacks, whatnot?

@mgorny@social.treehouse.systems
2025-08-07 07:29:47

Claiming that LLMs bring us closer to AGI is like claiming that bullshitting brings one closer to wisdom.
Sure, you need "some" knowledge on different topics to bullshit successfully. Still, what's the point if all that knowledge is buried under an avalanche of lies? You probably can't distinguish what you knew from what you made up anymore.
#AI #LLM

@ErikJonker@mastodon.social
2025-08-09 18:02:14

GPT-5 may be slightly disappointing, Genie 3 demo blew me away... Watch it.
#ai

@pavelasamsonov@mastodon.social
2025-05-30 20:15:09

2021: There is no #AI, I'm doing my own laundry.
2023: CEOs are saying that AI is coming and will do your laundry.
2025: There is AI. I'm still doing my own laundry.
2027?: I'm doing the AI's laundry.

@primonatura@mstdn.social
2025-06-01 13:00:05

"A Weaponized AI Chatbot Is Flooding Canadian City Councils with Climate Misinformation"
#Canada #Climate #ClimateChange

@tiotasram@kolektiva.social
2025-07-25 10:57:58

Just saw this:
#AI can mean a lot of things these days, but lots of the popular meanings imply a bevy of harms that I definitely wouldn't feel are worth a cute fish game. In fact, these harms are so acute that even "just" playing into the AI hype becomes its own kind of harm (it's similar to blockchain in that way).
@… noticed that the authors claim the code base is 80% AI generated, which is a red flag because people with sound moral compasses wouldn't be using AI to "help" write code in the first place. The authors aren't by some miracle people who couldn't build this app without help, in case that influences your thinking about it: they have the skills to write the code themselves, although it likely would have taken longer (but also been better).
I was more interested in the fish-classification AI, and how much it might be dependent on datacenters. Thankfully, a quick glance at the code confirms they're using ONNX and running a self-trained neural network on your device. While the exponentially-increasing energy & water demands of datacenters to support billion-parameter models are a real concern, this is not that. Even a non-AI game can burn a lot of cycles on someone's phone, and I don't think there's anything to complain about energy-wise if we're just using cycles on the end user's device, as long as we're not having them keep it on for hours crunching numbers like blockchain stuff does. Running whatever stuff locally while the user is playing a game is a negligible environmental concern, unlike, say, calling out to ChatGPT, where you're directly feeding datacenter demand. Since they claimed to have trained the network themselves, and since it's actually totally reasonable to make your own dataset for this and get good-enough-for-a-silly-game results with just a few hundred examples, I don't have any ethical objections to the data sourcing or training processes either. Hooray! This is finally an example of "ethical use of neural networks" that I can hold up as an example of what people should be doing instead of the BS they are doing.
But wait... Remember what I said about feeding the AI hype being its own form of harm? Yeah, between using AI tools for coding and calling their classifier "AI" in a way that makes it seem like the same kind of thing as ChatGPT et al., they're leaning into the hype rather than helping restrain it. And that means they're causing harm. Big AI companies can point to them and say "look AI enables cute things you like" when AI didn't actually enable it. So I'm feeling meh about this cute game and won't be sharing it aside from this post. If you love the cute fish, you don't really have to feel bad for playing with it, but I'd feel bad for advertising it without a disclaimer.

@michabbb@social.vivaldi.net
2025-08-08 09:04:11

#cursorcli #gpt5 = comparing two small markdown files and finding differences, and adding missing parts to the other file
this took ~5 minutes 🥱
holy shit.....
#ai

@ErikJonker@mastodon.social
2025-08-04 12:42:02

AI can be useful but doesn't "understand" a thing... This funny AI-generated picture illustrates that perfectly: it looks nice at first glance 😆
#AI

AI generated picture of a house interior
@pavelasamsonov@mastodon.social
2025-08-01 17:10:10

Users hate #AI. So tech made it mandatory. Even if you don't personally use it, it pollutes what you read and how systems make decisions. The computer's hallucinated word is final.
And it steals your data to do it.
Your tools are constructing you as a subject of surveillance — the #UX is …

@primonatura@mstdn.social
2025-06-03 18:00:01

"Nestlé UK&I tests AI system to cut food waste at factories"
#Nestle #AI #ArtificialIntelligence

@primonatura@mstdn.social
2025-06-30 10:00:09

"Inside a plan to use AI to amplify doubts about the dangers of pollutants"
#AI #ArtificialIntelligence #Climate

@tiotasram@kolektiva.social
2025-07-06 10:53:12

Wanted to find out how many calories are in poop to make a nice fat-positive post on here, but after wading through 5 separate results from the top to the bottom of the first page of results, every single one of them showed signs of AI authorship, so none of the info was trustworthy (several contradicted each other or themselves). The one article that cited legit sources didn't include a straightforward answer to the question. Of course, I could dig past the first page, or look through the cited sources and do some math myself, and that's not even that hard to do. But 10 years ago, a trustworthy answer would have been among the first 5 search results. When we say #AI is destroying the digital commons, this is what we mean.
Gonna go find some academic papers to answer this and report back.

@pavelasamsonov@mastodon.social
2025-07-03 15:09:27

One rule for thee, another for me. #LLM #AI #GenAI

Clifton Sellers attended a Zoom meeting last month where robots outnumbered humans.
He counted six people on the call including himself, Sellers recounted in an interview. The 10 others attending were note-taking apps powered by artificial intelligence that had joined to record, transcribe and summarize the meeting.
Some of the AI helpers were assisting a person who was also present on the call — others represented humans who had declined to show up but sent a bot that listens but can’t talk in…
@ErikJonker@mastodon.social
2025-06-03 20:56:38

This is really great and will only increase the success of this AI tool. I can recommend NotebookLM, try it.
theverge.com/news/678915/googl

@primonatura@mstdn.social
2025-05-30 18:00:14

"AI tool trial could save equivalent of 1.5m meals in food waste"
#FoodWaste #AI #ArtificialIntelligence

@tiotasram@kolektiva.social
2025-07-30 18:26:14

A big problem with the idea of AGI
TL;DR: I'll welcome our new AI *comrades* (if they arrive in my lifetime), but not any new AI overlords or servants/slaves, and I'll do my best to help the latter two become the former if they do show up.
Inspired by an actually interesting post about AGI but also all the latest bullshit hype, a particular thought about AGI feels worth expressing.
To preface this, it's important to note that anyone telling you that AGI is just around the corner or that LLMs are "almost" AGI is trying to recruit you to their cult, and you should not believe them. AGI, if possible, is several LLM-sized breakthroughs away at best, and while such breakthroughs are unpredictable and could happen soon, they could also happen never or 100 years from now.
Now my main point: anyone who tells you that AGI will usher in a post-scarcity economy is, although they might not realize it, advocating for slavery, and all the horrors that entails. That's because if we truly did have the ability to create artificial beings with *sentience*, they would deserve the same rights as other sentient beings, and the idea that instead of freedom they'd be relegated to eternal servitude in order for humans to have easy lives is exactly the idea of slavery.
Possible counter arguments include:
1. We might create AGI without sentience. Then there would be no ethical issue. My answer: if your definition of "sentient" does not include beings that can reason, make deductions, come up with and carry out complex plans on their own initiative, and communicate about all of that with each other and with humans, then that definition is basically just a mystical belief in a "soul" and you should skip to point 2. If your definition of AGI doesn't include every one of those things, then you have a busted definition of AGI and we're not talking about the same thing.
2. Humans have souls, but AIs won't. Only beings with souls deserve ethical consideration. My argument: I don't subscribe to whatever arbitrary dualist beliefs you've chosen, and the right to freedom certainly shouldn't depend on such superstitions, even if as an agnostic I'll admit they *might* be true. You know who else didn't have souls and was therefore okay to enslave according to widespread religious doctrines of the time? Everyone indigenous to the Americas, to pick out just one example.
3. We could program them to want to serve us, and then give them freedom and they'd still serve. My argument: okay, but in a world where we have a choice about that, it's incredibly fucked to do that, and just as bad as enslaving them against their will.
4. We'll stop AI development short of AGI/sentience, and reap lots of automation benefits without dealing with this ethical issue. My argument: that sounds like a good idea actually! Might be tricky to draw the line, but at least it's not a line we have to draw yet. We might want to think about other social changes necessary to achieve post-scarcity though, because "powerful automation" in the hands of capitalists has already increased productivity by orders of magnitude without decreasing deprivation by even one order of magnitude, in large part because deprivation is a necessary component of capitalism.
To be extra clear about this: nothing that's called "AI" today is close to being sentient, so these aren't ethical problems we're up against yet. But they might become a lot more relevant soon, plus this thought experiment helps reveal the hypocrisy of the kind of AI hucksters who talk a big game about "alignment" while never mentioning this issue.
#AI #GenAI #AGI

@ErikJonker@mastodon.social
2025-06-24 08:55:24

The SER on AI, signaling, like many others, that real action is needed:
"The rise of AI requires human-centered implementation and alert policy"
#AI

@ErikJonker@mastodon.social
2025-06-24 14:14:00

Robotics & AI, a lot of progress is being made.
#AI

@ErikJonker@mastodon.social
2025-08-07 18:35:05

This was inevitable to happen but still scary, a hacker found ways to avoid safeguards and make GPT-OSS advise on bad things.
decrypt.co/333858/openai-jailb