
2025-09-09 14:27:36
Desperate companies now hiring humans to fix what #AI botched https://futurism.com/companies-hiring-humans-fix-ai I similarly wouldn't be surprised about an upcoming
Seeking Deeper: Assessing China’s AI Security Ecosystem.
#AI
"What works in #India will scale better everywhere else. Naturally, the country is a battleground for #AI search."
This is fishing for data and the next #enshittification to roll ou…
I love how smart these AI technologies are. They understand that "bigger" for cities can be ambiguous, referring to either the population or the area. It's also great that it's showing the sources in the upper corner, and displaying the basic facts.
Small minus on consistency and correctness, but other than that, really a great answer.
#google
Apple’s newest AI study unlocks street navigation for blind users #apple #ai https://9to5ma…
Managers were starting to understand that velocity on its own has no value. But then along came #AI and said "but what if we made that velocity 10x?" and they fell for it all over again - because they only have a surface level understanding of the work.
The logic of the feature factory has permitted #genAI
Got slammed by an unidentified but certainly "#AI"-related #distributed #crawler this week, it drove one site's traffic to 10× average. Today I tired of playing Whac-a-Mole and blocked the two bigge…
The mini model is only 80 MB. Even the full-weight model can be run on your laptop.
#ai
This compulsion for #AI to summarize everything: where does this nonsense come from?
Anthropic to pay $1.5 billion to authors in landmark #AI settlement ... “believed to be the largest publicly reported recovery in the history of US copyright litigation.”
https://www.…
They always act as though they don't use #AI and LLMs for their oh-so-great texts. And yet for months now, every third sentence has suddenly been full of dashes, bulleted lists, and tons of emojis. Sure…
The state of A.I.
#ai
Should we teach vibe coding? Here's why not.
2/2
To address the bigger question I started with ("should we teach AI-"assisted" coding?"), my answer is: "No, except enough to show students directly what its pitfalls are." We have little enough time as it is to cover the core knowledge that they'll need, which has become more urgent now that they're going to be expected to clean up AI bugs while having less time to develop an understanding of the problems they're supposed to be solving. The skill of prompt engineering & other skills of working with AI are relatively easy to pick up on your own, given a decent not-even-mathematical understanding of how a neural network works, which is something we should be giving to all students, not just our majors.
Reasonable learning objectives for CS majors might include explaining what types of bugs an AI "assistant" is most likely to introduce, explaining the difference between software engineering and writing code, explaining why using an AI "assistant" is likely to violate open-source licenses, listing at least three independent ethical objections to contemporary LLMs and explaining the evidence for/reasoning behind them, explaining why we should expect AI "assistants" to be better at generating code from scratch than at fixing bugs in existing code (and why they'll confidently "claim" to have fixed problems they haven't), and even fixing bugs in AI-generated code (without AI "assistance").
If we lived in a world where the underlying environmental, labor, and data commons issues with AI weren't as bad, or if we could find and use systems that effectively mitigate these issues (there's lots of piecemeal progress on several of these) then we should probably start teaching an elective on coding with an assistant to students who have mastered programming basics, but such a class should probably spend a good chunk of time on non-assisted debugging.
#AI #LLMs #VibeCoding
I really really really hate how much people in my field and industry have normalized generative #AI use.
I see posts / hear comments literally EVERY DAY to the tune of “can people stop complaining about AI, nobody cares. You’re not morally better” followed up by something about “you’re making work harder than it needs to be” and often “nobody values human-made work more they only care about the final output no matter how it was created”
I usually ignore these conversations but sometimes it really gets to me. It’s so hard to feel sane surrounded by that consensus every day, everywhere I go with people in my profession.
I’ve rarely felt so judged by the majority point of view on anything in my work before.
New machine vision is more energy efficient - and more human #AI vision
The recent release of Apertus, a fully open suite of large language models (LLMs), is super interesting.
The technical report provides plenty of details about the entire process.
#ai #opensource #llm
On writing in the botanical garden. Take this, #AI frustration.
Ruhr-Uni campus life 🦋, bees all over the place
"Becoming a leader in AI literacy instruction by not reinventing the wheel" #AI
An audiobook written by Dan Houser, set in the not-too-distant future. An ironic story reflecting on #ai, people, the way we live, and corporate greed.
You should give it a listen.
A Better Paradise
"#Microsoft’s #AI tools don’t work. Microsoft AI doesn’t make you more effective. Microsoft AI won’t do the job better.
If it did, Microsoft staff would be using it already. The competition inside Microsoft is vicious. If AI would get them ahead of the other guy, they’d use it."
via…
Grok AI's Unfiltered Extremism: Controversial Replies Exposed
#GrokAI #MechaHitler #AI #ElonMusk
After losing millions of USD in potential revenue for years due to understaffed sales teams, Salesforce now hopes for an #AI miracle.
There, fixed it for you.
https://slashdot.org/…
I'm glad that it seems we're finally approaching the bet-hedging phase of the hype cycle.
#ai
I want to slap the AI (or the developers) in its (their) face when it's repeating the same error over and over again....
sometimes the outcome is impressive, but sometimes the AI is behaving worse than a little kid and not understanding what I want
(luckily for the AI (and the developers) I'm not slapping anyone... but I would love to)
#AI
Beginning my Saturday at the Forum Francophone de la Gouvernance du Numérique et de l’IA in Geneva…
#FFGNIA #AI #InternetGovernance
Deezer rolls out AI tagging system to fight streaming fraud; says up to 70% of streams from fully AI-generated tracks are fraudulent
#AI #MusicTech #NotTrustworthy
Today I phoned National Savings and Investment. The call was answered instantly, but after a couple of questions it became obvious that we were talking to a bot. The #AI failed totally to understand my problem. It was a hopeless waste of time. A nice human sorted it out quickly.
YouTube Tricked Me Into Using AI, and it Sucked by icklenellierose
#youtube
#cursorcli #gpt5 = comparing two small markdown files and finding differences, and adding missing parts to the other file
this took ~5 minutes 🥱
holy shit.....
#ai
I saw an advert for an AI influencer generator. So, I checked it out. Here's the current list of actors/influencers they can create. If you see an ad from these faces, it's AI bullshittery. #ai
GPT-5 may be slightly disappointing, Genie 3 demo blew me away... Watch it.
#ai
"AI-enhanced maps reveal hidden streams for restoration"
#AI #ArtificialIntelligence #Environment
Lobbying offensive against the AI Act: European start-ups and US financiers are calling for a pause in AI regulation. The German federal government is considering going along. But experts warn: a delay mainly benefits US tech giants like Microsoft and endangers Europe's claim to technological sovereignty. (Author: #Handelsblatt) #AIAct
Like just about everyone else I know, I seem to spend a lot of time thinking, reading, and talking about #AI. And, given that I work in fine arts education, it's inevitable I think about how AI affects the arts, and how the arts affect AI.
As part of my work in #ArtsPedagogy, I'm visi…
I've added a /ai slash page to my site
Would love some feedback if possible!
#Slashpages #AI #GenerativeAI #indiedev
"Navigating Generative #AI in Academic #Publishing: An Interview with Benjamin Luke Moorhouse"
Enjoy. #AI #CEO.
Source: "Internet" 🫣
Edit: Apparently, the original images are due to Allie Brosh at https://hype…
"AI bots that scrape the internet for training data are hammering the servers of libraries, archives, museums, and galleries, and are in some cases knocking their collections offline"
#AI is ruining our digital world
(Original title: AI Scraping Bots Are Breaking Open Libraries, Archives, and Museums)
You're not being forced to use AI because your boss thinks it will make you more productive. You're being forced to use AI because either your boss is invested in the AI hype and wants to drive usage numbers up, or because your boss needs training data from your specific role so they can eventually replace you with an AI, or both.
Either way, it's not in your interests to actually use it, which is convenient, because using it is also harmful in 4-5 different ways (briefly: resource overuse, data laborer abuse, commons abuse, psychological hazard, bubble inflation, etc.)
#AI
A clever statement on the sorry state of the #AI arts via meme-able mesmergorical AI art?
#infiniteloops #kitchomatic
https://www.tumblr.com/teledyn/790711569148411904/surrealist-fan-service?source=share
"Google’s emissions up 51% as AI electricity demand derails efforts to go green"
#Google #AI #ArtificialIntelligence
Claiming that LLMs bring us closer to AGI is like claiming that bullshitting brings one closer to wisdom.
Sure, you need "some" knowledge on different topics to bullshit successfully. Still, what's the point if all that knowledge is buried under an avalanche of lies? You probably can't distinguish what you knew from what you made up anymore.
#AI #LLM
Let's train an #AI that will output copyrighted work if asked to.
Judge Alsup: Training AI On Copyrighted Works? Fair Use. Building Pirate Libraries? Not So Much https://www.…
Why AI can't possibly make you more productive; long
#AI and "productivity", some thoughts:
Edit: fixed some typos.
Productivity is a concept that isn't entirely meaningless outside the context of capitalism, but it's a concept that is heavily inflected in a capitalist context. In many uses today it effectively means "how much you can satisfy and/or exceed your boss' expectations." This is not really what it should mean: even in an anarchist utopia, people would care about things like how many shirts they can produce in a week, although in an "I'd like to voluntarily help more people" way rather than an "I need to meet this quota to earn my survival" way. But let's roll with this definition for a second, because it's almost certainly what your boss means when they say "productivity", and understanding that word in a different (even if truer) sense is therefore inherently dangerous.
Accepting "productivity" to mean "satisfying your boss' expectations," I will now claim: the use of generative AI cannot increase your productivity.
Before I dive in, it's imperative to note that the big generative models which most people think of as constituting "AI" today are evil. They are 1: pouring fuel on our burning planet, 2: psychologically strip-mining a class of data laborers who are exploited for their precarity, 3: enclosing, exploiting, and polluting the digital commons, and 4: stealing labor from broad classes of people many of whom are otherwise glad to give that labor away for free provided they get a simple acknowledgement in return. Any of these four "ethical issues" should be enough *alone* to cause everyone to simply not use the technology. These ethical issues are the reason that I do not use generative AI right now, except for in extremely extenuating circumstances. These issues are also convincing for a wide range of people I talk to, from experts to those with no computer science background. So before I launch into a critique of the effectiveness of generative AI, I want to emphasize that such a critique should be entirely unnecessary.
But back to my thesis: generative AI cannot increase your productivity, where "productivity" has been defined as "how much you can satisfy and/or exceed your boss' expectations."
Why? In fact, what the fuck? Every AI booster I've met has claimed the opposite. They've given me personal examples of time saved by using generative AI. Some of them even truly believe this. Sometimes I even believe they saved time without horribly compromising on quality (and often, your boss doesn't care about quality anyways if the lack of quality is hard to measure or doesn't seem likely to impact short-term sales/feedback/revenue). So if generative AI genuinely lets you write more emails in a shorter period of time, or close more tickets, or something else along these lines, how can I say it isn't increasing your ability to meet your boss' expectations?
The problem is simple: your boss' expectations are not a fixed target. Never have been. In virtue of being someone who oversees and pays wages to others under capitalism, your boss' game has always been: pay you less than the worth of your labor, so that they can accumulate profit and thus more capital to remain in charge instead of being forced into working for a wage themselves. Sure, there are layers of management caught in between who aren't fully in this mode, but they are irrelevant to this analysis. It matters not how much you please your manager if your CEO thinks your work is not worth the wages you are being paid. And using AI actively lowers the value of your work relative to your wages.
Why do I say that? It's actually true in several ways. The most obvious: using generative AI lowers the quality of your work, because the work it produces is shot through with errors, and when your job is reduced to proofreading slop, you are bound to tire a bit, relax your diligence, and let some mistakes through. More than you would if you were actually doing and taking pride in the work. Examples are innumerable and frequent, from journalists to lawyers to programmers, and we laugh at them "haha how stupid to not check whether the books the AI reviewed for you actually existed!" but on a deeper level if we're honest we know we'd eventually make the same mistake ourselves (bonus game: spot the swipe-typing typos I missed in this post; I'm sure there will be some).
But using generative AI also lowers the value of your work in another much more frightening way: in this era of hype, it demonstrates to your boss that you could be replaced by AI. The more you use it, and no matter how much you can see that your human skills are really necessary to correct its mistakes, the more it appears to your boss that they should hire the AI instead of you. Or perhaps retain 10% of the people in roles like yours to manage the AI doing the other 90% of the work. Paradoxically, the *more* you get done in terms of raw output using generative AI, the more it looks to your boss as if there's an opportunity to get enough work done with even fewer expensive humans. Of course, the decision to fire you and lean more heavily into AI isn't really a good one for long-term profits and success, but the modern boss did not get where they are by considering long-term profits. By using AI, you are merely demonstrating your redundancy, and the more you get done with it, the more redundant you seem.
In fact, there's even a third dimension to this: by using generative AI, you're also providing its purveyors with invaluable training data that allows them to make it better at replacing you. It's generally quite shitty right now, but the more use it gets by competent & clever people, the better it can become at the tasks those specific people use it for. Using the currently-popular algorithm family, there are limits to this; I'm not saying it will eventually transcend the mediocrity it's entwined with. But it can absolutely go from underwhelmingly mediocre to almost-reasonably mediocre with the right training data, and data from prompting sessions is both rarer and more useful than the base datasets it's built on.
For all of these reasons, using generative AI in your job is a mistake that will likely lead to your future unemployment. To reiterate, you should already not be using it because it is evil and causes specific and inexcusable harms, but in case like so many you just don't care about those harms, I've just explained to you why for entirely selfish reasons you should not use it.
If you're in a position where your boss is forcing you to use it, my condolences. I suggest leaning into its failures instead of trying to get the most out of it, and as much as possible, showing your boss very clearly how it wastes your time and makes things slower. Also, point out the dangers of legal liability for its mistakes, and make sure your boss is aware of the degree to which any of your AI-eager coworkers are producing low-quality work that harms organizational goals.
Also, if you've read this far and aren't yet of an anarchist mindset, I encourage you to think about the implications of firing 75% of (at least the white-collar) workforce in order to make more profit while fueling the climate crisis and in most cases also propping up dictatorial figureheads in government. When *either* the AI bubble bursts *or* if the techbros get to live out the beginnings of their worker-replacement fantasies, there are going to be an unimaginable number of economically desperate people living in increasingly expensive times. I'm the kind of optimist who thinks that the resulting social crucible, though perhaps through terrible violence, will lead to deep social changes that effectively unseat from power the ultra-rich that continue to drag us all down this destructive path, and I think it's worth some thinking now about what you might want the succeeding stable social configuration to look like so you can advocate towards that during points of malleability.
As others have said more eloquently, generative AI *should* be a technology that makes human lives on average easier, and it would be were it developed & controlled by humanists. The only reason it's not is that it's developed and controlled by terrible greedy people who use their unfairly hoarded wealth to immiserate the rest of us in order to maintain their dominance. In the long run, for our very survival, we need to depose them, and I look forward to what the term "generative AI" will mean after that finally happens.
Well put, the “averaging effect of #AI.”
What's the equivalent for writing? That's what's on my mind recently. Sure, keep your style. But do we need something else? Add more styles, punctuation hacks, whatnot?
https://www.…
AI can be useful but doesn't "understand" a thing... this funny AI-generated picture illustrates that perfectly; it looks nice at first glance 😆
#AI
"Why embedding vector search is probably one of the least objectionable use of AI for search" by Aaron Tay:
https://aarontay.substack.com/p/why-embedding-vector-search-is-probably
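For the curious, the core of embedding vector search is nothing exotic: documents and queries are mapped to vectors by an embedding model, and results are ranked by cosine similarity. A minimal sketch in plain Python, with made-up toy vectors (real embeddings have hundreds of dimensions and come from a trained model; the document names and numbers here are invented for illustration):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query_vec, doc_vecs, top_k=2):
    # Rank documents by similarity of their embeddings to the query embedding.
    scored = sorted(doc_vecs.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:top_k]]

# Toy 3-dimensional "embeddings" (hypothetical values).
docs = {
    "cats": [0.9, 0.1, 0.0],
    "dogs": [0.8, 0.3, 0.1],
    "tax law": [0.0, 0.2, 0.9],
}
print(search([1.0, 0.2, 0.0], docs))  # → ['cats', 'dogs']
```

Note that the expensive, objectionable parts (training the embedding model) happen once; the search itself is just this kind of arithmetic, which is part of why it's among the least objectionable uses.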
Dorothea Baur reflecting on #AI tech bros going all in on even your most personal data. Claiming to help you solve a problem which they helped create in the beginning:
"A breach of trust enabled by AI now becomes the justification for surveillance-based trust systems. And the very people who helped break the system are offering to fix it – in exchange for your iris. That’s not a safety fe…
Under Trump, the US Department of Energy has issued a climate report that is pure anti-scientific nonsense. And if #AI is trained on such utterly unreliable publications, the results will be disastrous.
https://www.youtube.com/watch?v=f5nF3JUthV…
I tend to post about the downsides of AI.
Here are two articles by people I highly respect which show a very positive point of view related to #AI:
https://lucumr.pocoo.org/2025/6/4/changes/…
"Inside a plan to use AI to amplify doubts about the dangers of pollutants"
#AI #ArtificialIntelligence #Climate
This was inevitable but still scary: a hacker found ways to bypass safeguards and get GPT-OSS to advise on harmful topics.
https://decrypt.co/333858/openai-jailbreak-proof-new-models-hacked
The PR machine powering big tech’s AI energy story #AI boom depends …
Wanted to find out how many calories are in poop to make a nice fat-positive post on here, but after wading through 5 separate results from the top to the bottom of the first page of results, every single one of them showed signs of AI authorship, so none of the info was trustworthy (several contradicted each other or themselves). The one article that cited legit sources didn't include a straightforward answer to the question. Of course, I could dig past the first page, or look through the cited sources and do some math myself, and that's not even that hard to do. But 10 years ago, a trustworthy answer would have been among the first 5 search results. When we say #AI is destroying the digital commons, this is what we mean.
Gonna go find some academic papers to answer this and report back.
Just saw this:
#AI can mean a lot of things these days, but lots of the popular meanings imply a bevy of harms that I definitely wouldn't feel are worth a cute fish game. In fact, these harms are so acute that even "just" playing into the AI hype becomes its own kind of harm (it's similar to blockchain in that way).
@… noticed that the authors claim the code base is 80% AI generated, which is a red flag because people with sound moral compasses wouldn't be using AI to "help" write code in the first place. The authors aren't by some miracle people who couldn't build this app without help, in case that influences your thinking about it: they have the skills to write the code themselves, although it likely would have taken longer (but also been better).
I was more interested in the fish-classification AI, and how much it might be dependent on datacenters. Thankfully, a quick glance at the code confirms they're using ONNX and running a self-trained neural network on your device. While the exponentially-increasing energy & water demands of datacenters to support billion-parameter models are a real concern, this is not that. Even a non-AI game can burn a lot of cycles on someone's phone, and I don't think there's anything to complain about energy-wise if we're just using cycles on the end user's device as long as we're not having them keep it on for hours crunching numbers like blockchain stuff does. Running whatever stuff locally while the user is playing a game is a negligible environmental concern, unlike, say, calling out to ChatGPT where you're directly feeding datacenter demand. Since they claimed to have trained the network themselves, and since it's actually totally reasonable to make your own dataset for this and get good-enough-for-a-silly-game results with just a few hundred examples, I don't have any ethical objections to the data sourcing or training processes either. Hooray! This is finally an example of "ethical use of neural networks" that I can hold up as an example of what people should be doing instead of the BS they are doing.
But wait... Remember what I said about feeding the AI hype being its own form of harm? Yeah, between using AI tools for coding and calling their classifier "AI" in a way that makes it seem like the same kind of thing as ChatGPT et al., they're leaning into the hype rather than helping restrain it. And that means they're causing harm. Big AI companies can point to them and say "look AI enables cute things you like" when AI didn't actually enable it. So I'm feeling meh about this cute game and won't be sharing it aside from this post. If you love the cute fish, you don't really have to feel bad for playing with it, but I'd feel bad for advertising it without a disclaimer.
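To make the "running locally is just number crunching" point concrete, here's a minimal sketch of what a classifier forward pass amounts to, in plain Python with made-up weights. (The actual game uses ONNX with a real trained network; the class names and numbers here are entirely hypothetical, just to show that inference is ordinary arithmetic on the user's own device.)

```python
import math

def softmax(logits):
    # Convert raw scores into probabilities that sum to 1.
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(features, weights, biases):
    # One linear layer + softmax: the essence of a forward pass.
    logits = [sum(w * f for w, f in zip(row, features)) + b
              for row, b in zip(weights, biases)]
    return softmax(logits)

# Made-up weights for a 2-feature, 2-class toy "fish classifier".
weights = [[2.0, -1.0],   # class 0: "goldfish" (hypothetical)
           [-1.0, 2.0]]   # class 1: "guppy" (hypothetical)
biases = [0.0, 0.0]

probs = classify([1.0, 0.2], weights, biases)
print(max(range(2), key=lambda i: probs[i]))  # → 0
```

A real on-device model just does this at larger scale (more layers, more weights), which is why it's a phone-battery concern at worst rather than a datacenter concern.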
"AI is consuming more power than the grid can handle. Nuclear might be the answer"
#AI #ArtificialIntelligence #Energy
The Dutch SER on AI: like many others, it signals that real action is needed.
"The rise of AI requires human-centered implementation and alert policy"
#AI
I wrote this in 2017 about AI 😀, the current debate is far from new.
#ai
A big problem with the idea of AGI
TL;DR: I'll welcome our new AI *comrades* (if they arrive in my lifetime), but not any new AI overlords or servants/slaves, and I'll do my best to help the latter two become the former if they do show up.
Inspired by an actually interesting post about AGI but also all the latest bullshit hype, a particular thought about AGI feels worth expressing.
To preface this, it's important to note that anyone telling you that AGI is just around the corner or that LLMs are "almost" AGI is trying to recruit you to their cult, and you should not believe them. AGI, if possible, is several LLM-sized breakthroughs away at best, and while such breakthroughs are unpredictable and could happen soon, they could also happen never or 100 years from now.
Now my main point: anyone who tells you that AGI will usher in a post-scarcity economy is, although they might not realize it, advocating for slavery, and all the horrors that entails. That's because if we truly did have the ability to create artificial beings with *sentience*, they would deserve the same rights as other sentient beings, and the idea that instead of freedom they'd be relegated to eternal servitude in order for humans to have easy lives is exactly the idea of slavery.
Possible counter arguments include:
1. We might create AGI without sentience. Then there would be no ethical issue. My answer: if your definition of "sentient" does not include beings that can reason, make deductions, come up with and carry out complex plans on their own initiative, and communicate about all of that with each other and with humans, then that definition is basically just a mystical belief in a "soul" and you should skip to point 2. If your definition of AGI doesn't include every one of those things, then you have a busted definition of AGI and we're not talking about the same thing.
2. Humans have souls, but AIs won't. Only beings with souls deserve ethical consideration. My argument: I don't subscribe to whatever arbitrary dualist beliefs you've chosen, and the right to freedom certainly shouldn't depend on such superstitions, even if as an agnostic I'll admit they *might* be true. You know who else didn't have souls and was therefore okay to enslave according to widespread religious doctrines of the time? Everyone indigenous to the Americas, to pick out just one example.
3. We could program them to want to serve us, and then give them freedom and they'd still serve. My argument: okay, but in a world where we have a choice about that, it's incredibly fucked to do that, and just as bad as enslaving them against their will.
4. We'll stop AI development short of AGI/sentience, and reap lots of automation benefits without dealing with this ethical issue. My argument: that sounds like a good idea actually! Might be tricky to draw the line, but at least it's not a line we have to draw yet. We might want to think about other social changes necessary to achieve post-scarcity though, because "powerful automation" in the hands of capitalists has already increased productivity by orders of magnitude without decreasing deprivation by even one order of magnitude, in large part because deprivation is a necessary component of capitalism.
To be extra clear about this: nothing that's called "AI" today is close to being sentient, so these aren't ethical problems we're up against yet. But they might become a lot more relevant soon, plus this thought experiment helps reveal the hypocrisy of the kind of AI hucksters who talk a big game about "alignment" while never mentioning this issue.
#AI #GenAI #AGI
Robotics & AI, a lot of progress is being made.
#AI
So to summarize this whole adventure:
1. A good 45 minutes was spent getting an answer that we probably could have gotten in 5 minutes in the 2010's, or in maybe 1-2 hours in the 1990's.
2. The time investment wasn't a total waste as we learned a lot along the way that we wouldn't have in the 2010's. Most relevant is the wide range of variation (e.g. a 2x factor depending on fiber intake!).
3. Most of the search engine results were confidently wrong answers that had no relation to reality. We were lucky to get one that had real citations we could start from (but that same article included the bogus 4.91 kcal/gram number). Next time I want to know a random factoid I might just start on Google scholar.
4. At least one page we chased citations through had a note at the top about being frozen due to NIH funding issues. The digital commons is under attack on multiple fronts.
All of this is yet another reason not to support the big LLM companies.
#AI