2026-02-10 22:39:51
Tell me: is there a collective dedicated to fighting the proliferation of generative artificial intelligence? #NoAI
I've seen AFCIA, but honestly, it's not very serious.
Are there any others?
I keep thinking about the sticker I saw in the RER yesterday; there are people out there fighting and speaking up. It did me good to know that…
Seen in the RER, on the way home from my former company, which I left today over its brutal, unilateral decision to force its developers to use AI. ✊
#ResistanceNet #NoAI
I read this completely disagreeing with the author, and I was pleased to see most of the comments.
My take is that AI under Capitalism will crush the Working Class and harm the planet.
http://antirez.com/news/158
#noAI
Whenever people are commenting on another half-assed, crappy #LLM feat, claiming that there are "some" use cases for this "#AI", substitute "AI" with "genocide".
Because, you know, there are "use cases" for genocide too, and apparently a lot of people don't mind, as long as they can benefit from it and look the other way.
#NoAI
AI replaces human slop machines.
If you are using AI to do your work, you admit to being a slop machine.
You deserve a pay cut.
#NoAI
So, "#AI boosted your productivity"? Well, are you a software developer or a factory worker?
Productivity is a measure of predictable output from repetitive processes. It is how much shit your factory floor produces. Of course, once attempts to boost productivity start affecting the quality of your product, things get hairy…
"Productivity" makes no sense for creative work. It makes zero sense for software developers. If your work is defined by productivity, then it makes no sense to use an #LLM to improve it. You can be replaced entirely.
Artists get that. The fact that many software developers don't suggests that the trade took a wrong turn at some point.
Inspired by #NoAI
Artist made a font that looks really good
#geekvillage #noai #fonts
Let me tell you a parable.
There was a student who was given an assignment to write an essay. The student found 10 similar essays online. He copied selected bits of different essays. He tediously reworded the result, removed some sentences, added some adjectives and adverbs, shifted some more sentences around, added some glue — all with the single-minded goal of covering his tracks. Eventually, a voluminous essay was complete.
The student put a lot of effort into this; possibly even more than if he had written it himself. He did learn a bit about essays, though he didn't really practice writing one. He did practice some skills that would be useful in a future bullshit job, though. The essay passes all #plagiarism checks, even though it immediately raises red flags for any human reading it: the sudden style changes, the contradictory statements, the sentences that don't make much sense in context. And if he were asked to defend it, he might be in trouble.
So, the student put in effort (though not the right kind of effort), produced a mediocre essay, and learned something (though bullshit skills rather than creative skills). Now let's consider a different situation: rather than doing all that himself, the student paid somebody else to do it; and not to *write* an original essay, but to do all the shenanigans described above.
That's precisely what using LLMs is. You tell them to write an essay, so they find and mix random stuff, and produce a mediocre essay. You put in no effort, you learn nothing, perhaps you don't even read "your" essay. And it passes all the plagiarism checks.
#AI #LLM #NoAI #NoLLM #chardet
The key takeaways from the early part of the #chardet thread (I didn't read beyond the first ~30 comments, I have my limits):
1. People there love cosplaying lawyers. Except when the other side also starts cosplaying lawyers, in which case they suddenly pivot to suggesting people ask actual professional lawyers.
2. Almost nobody there is concerned with ethics or morality.
3. There are a lot of GPL haters there. They seem like the kind of people who don't really care about licensing at all: they just used MIT in their projects because it was cool, heard something about license incompatibility, and now bash everything that's (L)GPL.
4. People don't get that LLMs are statistical models and can't build anything from the ground up. All they can do is remix, which implies they use existing code for inspiration.
5. The maintainer who did the rewrite is a total asshole, and is perfectly aware of it.
Honestly, I'm truly waiting for the subsidizing to end and for companies to start charging obscene amounts for the use of LLMs. Of course, the reality is that we're totally fucked. We have a lot of projects that adopted a lot of #slop, and people who are increasingly addicted to this shit. The moment they can't afford it, we'll be left with lots of broken code nobody wants to maintain.
And I definitely don't want to put my effort into packaging crap if its maintainers don't even bother trying.
#AI #LLM #NoAI #NoLLM
It’s great to see a publication say “Naw, Fuck AI.” Support them if you can.
#noAI
Whenever a #FreeSoftware project is suffering from an onslaught of low-quality LLM-generated pull requests, there will be a bunch of #LLM lovers complaining that people shouldn't be talking of "LLM-generated" as part of the problem, because "using AI isn't bad" in itself. Of course, they entirely ignore all the ethical and environmental concerns, and probably write crappy code themselves.
#AI #NoAI
I am familiar with Komoot but have never used it, and I guess now I never will.
“Komoot has launched a ChatGPT integration…”
#noAI
Perhaps the main difference between me and vibe coders is that we have completely different backgrounds.
I learned to code as a kid, with no friends and no Internet. I didn't do it because it was cool; nerdy stuff was the exact opposite of cool and was likely to get you bullied. I didn't do it because it promised a good salary; as a 10-year-old, I didn't ponder my future much, let alone my salary. I did it because I was bored, and it was something interesting to do.
I didn't do specific exercises, but rather created whatever I found interesting. I wasn't graded, I had all the time in the world, and I enjoyed solving problems. Even if I had had access to the Internet, I doubt I would have started looking for ready-made solutions and copy-pasting them. My code was always mine, and I was proud of it; at least at the time.
Of course, nowadays I do stuff I don't enjoy as well. But I'm a grown man who takes responsibility for what I do. And even if my code is shit, it is my shit, and 100% eco.
#NoAI #NoLLM
I became a programmer because I found it much easier to program computers than to talk to people. Why would anyone in their sane mind claim that I'd be better off talking in human language to machines that pretend to be the kind of smug humans who have no clue about coding, but are going to fulfill all the assignments given by me by googling and copy-pasting whatever they can find?!
#NoAI #AI #LLM
So how would you feel if you learned that the guy you've been copying all your homework from recently has been not-so-secretly helping fascist governments commit genocide? And that he's quite proud of it, too.
Oh right, you'd just say "it's not like doing my own homework will change anything". And then you'll give him your lunch money.
#AI #LLM #NoAI #NoLLM #Claude #Anthropic
Last night I had a #nightmare.
I dreamt that I had sent a pull request to a project, and it turned out that the whole CI pipeline was just LLMs dynamically slopping out random tests against the PR. And of course these tests couldn't pass, and there was nothing you could do to make the PR actually pass.
#AI #LLM #NoAI #slop
New on #blog: "Money isn’t going to solve the #burnout problem"
"""
The xz-utils backdoor situation brought the problem of FLOSS maintainer burnout into the daylight. This in turn led to numerous discussions on how to solve the problem, and the recurring theme was funding maintenance work.
While I’m definitely not opposed to giving people money for their FLOSS work, if you think that throwing some bucks at it will actually solve the problem, and especially if you think that you can just throw them once and then forget, I have bad news for you: it won’t. Sure, money is a big part of the problem, but it’s not the only reason people are getting burned out. It’s a systemic problem, it’s in need of a systemic solution, and that involves a lot of hard work to undo everything that’s happened in the last, say, 20 years.
But let’s start at the beginning and ask the important question: why do people make free software?
"""
#FreeSoftware #OpenSource #AI #NoAI #LLM #NoLLM #Gentoo
Honestly, looking at the license violation thread of #chardet, I really feel like #OpenSource these days is a complete shitshow, and I really don't feel like a part of the community anymore. Almost all replies are basically assholes questioning whether there "legally" is actually a problem there. Nobody's concerned that the whole thing is a huge dick move, which makes the maintainer a complete dick, and nobody with a shred of morality left would be willing to approve this.
Also, it's a great opportunity to seed some GitHub blocklists.
#FreeSoftware #AI #LLM #NoAI #NoLLM
Isn't it ironic that we've moved from "you need special skills to be a programmer" to "everyone can learn to be a programmer", to "everyone can use an #LLM to be a programmer", and now, because of all the deskilling, we're going to circle back to "you need special skills to be a programmer"?
#AI #NoAI #NoLLM
So I wanted to write a longer #NoAI piece but apparently my blog is down (and this time, miraculously, it might not be #AI scrapers), so I'll give you a sneak peek of what I wanted to say in the more hyperbolic part on how the #LLM discourse has all the common features of libertarian discourse.
"According to Google, LLM-backed searches don't consume much more energy than regular searches" [ignoring model training, surely.]
− According to carbrains, cars are actually cheaper than public transport, provided that you compare gasoline costs with ticket prices and ignore the cost of buying and owning a car. Not to mention all the indirect costs: wasted space (roads, parking lots, garages), environmental pollution, accidents…
"AI is just a tool, people decide if it's used for good or bad."
− Ah, yes, and "guns don't kill people."
"AI has its uses."
− So does asbestos.
"Let's not judge contributions by whether they were created using AI, but on their actual quality."
− "Let's not judge contributions by whether they were created using slave work…"
"I do not use AI myself, but I don't want to block others."
− "I do not keep slaves myself…"
#NoLLM #hyperbole
In the age of "#AI" assisted programming and "vibe coding", I don't feel like calling myself a programmer anymore. In fact, I think that "an artist" is more appropriate.
All the code I write is mine entirely. It might be buggy, it might be inconsistent, but it reflects my personality. I've put my metaphorical soul into it. It's a work of art.
If people want to call themselves "software developers", and want their work described as a glorified copy-paste, so be it. I'm a software artist now.
EDIT: "craftsperson" is also a nice term, per the comments.
#NoAI #NoLLM #LLM
Sometimes I wonder why I even bother. I mean, people are perfectly happy to let statistical models designed as bullshit generators do their coding. Why do I even bother running their test suites and inspecting the failures as a human, if these tests may well be complete bullshit?
#FreeSoftware #OpenSource #Gentoo #Python #AI #LLM #NoAI #NoLLM #VibeCoding
Searching the Internet in the past: you type a few keywords. You get a bunch of sites. You check these sites for the information you need.
Searching the Internet in the future: you type your question as a full sentence. You get an answer that may be complete bullshit. You ask for sources. You get a list of sources that may be entirely made up. You check the sources. They are an obvious #AI #slop…
#LLM #enshittification #NoAI #NoLLM
Whenever I see yet another #AI "AGENTS" file, trying to write instructions for *machines* in human language, as if the #LLM statistical algorithm could actually reason about them, a Butlerian jihad opens in my pocket. And the fact that people give it clear instructions as if they were talking to an #ActuallyAutistic person adds insult to injury.
#NoAI
#LLM users be like:
Why are you accusing me of supporting slavery? I never said I support slavery. I merely buy cheap tobacco! It's not my fault that all the cheap tobacco is coming from slave-driven plantations! Find me a cheaper tobacco that's manufactured ethically, and I'll surely switch over!
Smokers are being persecuted again! All we wish for is for people to respect our constitutional right to poison everyone around us! Is it really that much?!
#AI #NoAI #NoLLM
Cynicism, "AI"
I was pointed to the "Reflections on 2025" post by Samuel Albanie [1]. The author's writing style makes it quite a fun read, I admit.
The first part, "The Compute Theory of Everything", is an optimistic piece on "#AI". Long story short, poor "AI researchers" have been struggling for years because of the predominant misconception that "machines should have been powerful enough". Fortunately, now they can finally get their hands on the kind of power that used to be available only to supervillains, and all they have to do is forget about morals, agree that their research will be used to murder millions of people, and accept that a few million more will die as a side effect of the climate crisis. But I'm digressing.
The author refers to an essay by Hans Moravec, "The Role of Raw Power in Intelligence" [2]. It's also quite an interesting read, starting with a chapter on how intelligence evolved independently at least four times. The key point inferred from that seems to be that all we need is more computing power, and we'll eventually "brute-force" all AI-related problems (or die trying, I guess).
As a disclaimer, I have to say I'm not a biologist. Rather just a random guy who read a fair number of pieces on evolution. And I feel like the analogies brought here are misleading at best.
Firstly, there seems to be an assumption that evolution inexorably leads to higher "intelligence", with a certain implicit assumption about what intelligence is. By that logic, any animal that gets "brainier" will eventually become intelligent. However, this seems to miss the point that neither evolution nor learning operates in a void.
Yes, many animals did attain a certain level of intelligence, but they attained it in a long chain of development, while solving specific problems, in specific bodies, in specific environments. I don't think that you can just stuff more brains into a random animal, and expect it to attain human intelligence; and the same goes for a computer — you can't expect that given more power, algorithms will eventually converge on human-like intelligence.
Secondly, and perhaps more importantly, what evolution succeeded at first was producing neural networks that are far more energy-efficient than whatever computers are doing today. Even if "computing power" did pave the way for intelligence, what came first was extremely efficient "hardware". Nowadays, humans seem to be skipping that part. Optimizing is hard, so why bother with it? We can afford bigger data centers, we can afford to waste more energy, we can afford to deprive people of drinking water, so let's just skip to the easy part!
And on top of that, we're trying to squash hundreds of millions of years of evolution into… a decade, perhaps? What could possibly go wrong?
[1] #NoAI #NoLLM #LLM
Oh, #GitHub is empathetic to #OpenSource projects impacted by all the #AI slop. They're willing to help, right?
They don't mention #Copilot even once, and of course they're not going to let people actually block this piece of shit.
#LLM #NoAI #NoLLM #hypocrisy #Microsoft