2026-04-27 17:12:43
« I discovered the open-weights GLM-5 model »
#LLM #OpenWeights
The 'thinking' from a local Gemma 4 #llm reading bad handwriting is fascinating: I told it not to interpret stuff, and it mostly didn't; but look at this thinking!
'Actually, looking at the 'e' in "the", it's a loop. The 'x' in "co-ax" is a cross. The letter in "axial" is a loop and a stroke. This is a very messy 'x' or a v…
« How I research a new LLM model in 4 steps »
#LLM
Exploring the use of VLMs for navigation assistance for people with blindness and low vision #LLM
I'm still thinking about a longer blog post about LLMs, and one of the things I keep coming back to is how they not only cause direct harm to the community, but also make people more suspicious of one another. And then someone pointed me to this text:
"I'm Kenyan. I Don't Write Like ChatGPT. ChatGPT Writes Like Me."
"""
I am a writer. A writer who also happens to be Kenyan. And I have come to this thesis statement: I don't write like ChatGPT. ChatGPT, in its strange, disembodied, globally-sourced way, writes like me. Or, more accurately, it writes like the millions of us who were pushed through a very particular educational and societal pipeline, a pipeline deliberately designed to sandpaper away ambiguity, and forge our thoughts into a very specific, very formal, and very impressive shape.
"""
#AI #LLM
This is so good, I love it: #llm
« Is Anthropic underselling its subscriptions or overcharging for its API? »
#llm
This paper argues that frontier models approximate tenure-level academic outputs in social science and humanities topics with "minimal engineering effort". They developed some custom agent skills to extract the qualities of individual scholars from their published works.
#LLM #AIResearch #academia
Isn't it ironic that we've moved from "you need special skills to be a programmer" to "everyone can learn to be a programmer", to "everyone can use an #LLM to be a programmer", and now because of all the deskilling we're going to circle back into "you need special skills to be a programmer".
#AI #NoAI #NoLLM
#LLM saved me one hour of writing and all it took was two hours of your review!
I mostly use Perplexity or ChatGPT instead of classic search engines. Fast, compact, convenient. But also error-prone. A look at the opportunities, the limits, and the question of how much we really can and should leave to answer machines. #LLM #KIAgenten
I don’t think #LLM capabilities are where this article thinks they are, but I do think this is an interesting economic thought exercise nevertheless
https://www.citriniresearch.com/p/2028gic
Large-scale model-enhanced vision-language navigation: Recent advances, practical applications, and future challenges #LLM
Some people may think of LLMs as the great equalizer. People who aren't programmers can vibecode working programs now. People who aren't artists can slop out something resembling art. However, it's the exact opposite.
When I was a kid, I also pretended to write programs. Of course, I didn't have such sophisticated toys ("kids could play with a stick for hours", as the hyperbole went). But then, I was fully aware that it's just make-believe and it didn't harm anybody.
#Vibecoding creates a horrible chasm of inequality. We have people who believe they're good programmers (some even treating vibecoding as an enlightened religion) who fling tons of code at real human reviewers, who now have to sift through it. And then we have projects embracing vibecoding and shitting out new releases at an unprecedented rate. And these releases again need to be reviewed by humans downstream.
#AI #LLM #NoAI #NoLLM
yesterday I upgraded my local #llm to qwen3.5 - and it works pretty well; this is using Unsloth's Qwen3.5-35B-A3B-Q4_K_M.gguf - I also had to upgrade to the latest llama.cpp (and it's got a few rough edges); but it seems as good as the Qwen3-Next-80B I was using, and it's also multimodal (with the mmproj gguf needed) and the multimodal is usefully fast at an image description on CPU only…
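As a sketch for anyone curious about reproducing a setup like this, a minimal llama.cpp invocation might look like the following. Only the main model filename comes from the post above; the mmproj filename and image path are hypothetical placeholders, and flags reflect current llama.cpp tooling.

```shell
# Plain text generation with the quantized model (CPU-only works)
llama-cli -m Qwen3.5-35B-A3B-Q4_K_M.gguf -p "Hello there" -n 64

# Image description: multimodal needs the separate mmproj GGUF
# (filename is a guess; use whichever mmproj file ships with the model)
llama-mtmd-cli -m Qwen3.5-35B-A3B-Q4_K_M.gguf \
  --mmproj mmproj-F16.gguf \
  --image handwriting.jpg \
  -p "Describe this image."
```

Both binaries come from a recent llama.cpp build; older builds predate the `llama-mtmd-cli` tool, which fits the post's note about needing to upgrade llama.cpp first.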
I was in the process of writing a short story when all this talk of #LLM spellchecking came up. So I wrote a story that you can't actually use an LLM spellchecker on because it breaks them:
#Writing #Fiction #SolarPunk
The case of “vegetative electron microscopy” illustrated here shows what is badly needed in current #LLM research and has implications far beyond. We need tools that help us curate huge corpora. We need to be able to trace #hallucinations back to the training data and understand what are the specific (to a sur…
The more I listen to the industry, the more I think software quality may be enough of a differentiator in the future to offset some of the #LLM damage
I'm finally trying out some local LLM models.
Ollama just told me that it's trained on GPL software, so any code it produces needs to be GPL.
Then in a second chat, it said the opposite.
#AiIsGoingGreat #LLM #AI
The claim "you won't be replaced by AI, but by a person using AI" is nonsense. The Block layoff victims were some of the most productive, #llm pilled people in the company, but it didn't save them, because that's not what layoffs are about.
The layoff script goes, as always:
- overhire
- lay everyone off
- pretend it's because of
If you think #vibecoding is fine, let me ask you a single question: would you use a medical device whose software was vibecoded? And by "medical device" I mean something where a bug could literally kill you.
If you answered "oh, gawd, no!" then consider that anytime you use an #LLM to contribute to or develop an #OpenSource project, there's a chance that this code will end up powering such a device. And even if it doesn't, you're setting a trend, and it will be even more likely that the software used by these devices will be vibecoded.
I have type 1 #diabetes. I also lead a physically active life. This is both a blessing and a curse. My doctors keep suggesting Continuous Glucose Monitoring systems and insulin pumps to me. And I do realize that such hardware would likely improve my blood glucose, and definitely make my life much easier (especially with a closed loop system).
So why do my fingertips look like crap, and why do I keep using a glucometer and insulin pens? Because I don't want to entrust my life to an unnecessarily complex technology.
Admittedly, I occasionally get things wrong and suffer consequences. Or I suspect I got them wrong and worry. Or meet an unexpected situation and need to figure out a way out. Or even accept having elevated glucose levels (as in nearing 200 mg/dl) because there's just no way to safely fit insulin doses on a particular day.
But still, I prefer having control and risking my own mistakes to a device that could suddenly start pumping insulin because of a bug. And that was even before the story of the application that stripped the decimal point and gave people ten times the dose. Or the one about CGMs giving wrong high glucose alerts. Or the whole vibecoding fancy.
Back then, I could have considered such a device. Now, I'm more worried than ever. And honestly, I'm hoping that relatively simple glucometers will remain available. To think that my worst fear used to be of a mechanical fault…
#AI #NoAI #NoLLM
Want answers 10X faster and 10X more accurate than LLMs? Use the DuckDuckGo CLI. I'm using that today to study for a cert. I had been using a number of LLMs but they are sooooo sloooooow.
#llm
Interesting LLM nuance: why does using phrases like "you're a pen-tester" cause chatbots to emit substantially different predictions, and basically "follow" that instruction? Because of how LLMs work, this implies that the training data has plenty of examples where real humans told each other that they were some role and the humans just immediately jumped into that role without question or intervening dialogue. But that's not something people do in normal conversation. Even in playing-with-kids contexts if you drop that out of the blue you're probably going to get "no I want to be a robot" or "but you were the elephant last time!" rather than immediate assumption of the assigned role.
It's possible that training LLMs to predict immediate role-assumption is something the big models spent a lot of manual effort on. But what I think is more likely is: it's the legacy of role-play forums! All those reams of pages of teenagers (yes, often horny) pretending to be Captain Kirk or their own incredibly cringe "cool" character (but honestly, why call it cringe, let kids be kids and have fun)...
So next time you "tell" a chatbot "you're a..." to get it to do what you want, I'm pretty sure you have an RP forum teen from the past to thank :)
#AI #LLMs
"Are we being recommended by ChatGPT?" – I hear this question in every other client meeting. 🎯
The problem: most people can't answer it. There is no Search Console for AI systems. No impressions, no clicks.
I've been tracking this systematically for months. The takeaway: brands with a strong entity profile show up in LLM answers. The rest get ignored – no matter how good the content is.
Roman elites drank from leaded cups because it made the water sweeter. Radiation was once thought to have healing properties, so people would add uranium to their drinking water. Glowing dishes are still a collector's item. After the discovery of X-rays, shoe stores started installing X-ray machines and using them on kids' feet to size shoes. Lead was added to gasoline to improve engine performance, and to paint to make it whiter. We all know about asbestos and DDT.
We look back at all of this and think, "how could people have been so incompetent back then?" Some of these things caused irreparable harm in their generation, some continue to cause harm today almost 100 years later.
If you wonder that, look at the whole #LLM thing and you have your answer.
So I wanted to write a longer #NoAI piece but apparently my blog is down (and this time, miraculously, it might not be #AI scrapers), so I'll give you a sneak peek of what I wanted to say in the more hyperbolic part on how the #LLM discourse has all the common features of libertarian discourse.
"According to Google, LLM-backed searches don't consume much more energy than regular searches" [ignoring model training, surely.]
− According to carbrains, cars are actually cheaper than public transport, provided that you compare gasoline costs with ticket prices, and ignore the cost of buying and owning a car. Not to mention all the indirect costs of wasted space (roads, parking lots, garages), environmental pollution, accidents…
"AI is just a tool, people decide if it's used for good or bad."
− Ah, yes, and "guns don't kill people."
"AI has its uses."
− So does asbestos.
"Let's not judge contributions by whether they were created using AI, but on their actual quality."
− "Let's not judge contributions by whether they were created using slave work…"
"I do not use AI myself, but I don't want to block others."
− "I do not keep slaves myself…"
#NoLLM #hyperbole
« My map of the LLM ecosystem, March 2026 »
#LLM
Just used an #llm to explain a 166 line C compiler error; damn that's impressive. It almost makes sense with that explanation.
The bright #LLM future, next part.
git.gentoo.org is now effectively dead, being DDoS-ed by almost a million different IPs every day. Most of them are just performing a single request at a totally random URL. How are people supposed to deal with that? How can we distinguish a legitimate user who hit some URL from a scraper that distributes its operations over thousands of IP addresses?
If you use LLM crap, you're part of the problem. You support these bastards. You should be ashamed of yourself.
#Gentoo #NoAI #NoLLM #AI
"AI is writing 90% of our code" sounds impressive before you realize that AI-generated code is orders of magnitude more verbose & less efficient than code written by a professional software engineer.
But "we ship 9 lines of fluff for each line of code that does something" doesn't sound as impressive.
#LLM
iTerm2 now lets an LLM view & drive a terminal?? What a way to destroy trust. That's just as bad as letting an IDE or email app leak private information.
#llm #ai #enshittification #wtf
« I discovered OpenCode's "Go" plan, and I intend to test it on a project alongside Claude Pro »
#LLM
RE: #AI
Smarter than the #KI allows?
#Anthropic, the developer of the #LLM #Claude, considers its current …
Oh man, #LLM and licensing is going to be so much fun; does everybody miss the '90s that much?
https://github.com/chardet/chardet/issues/327
Tracing the thoughts of a large language model
#LLM
Whenever people are commenting on another half-assed, crappy #LLM feat, claiming that there are "some" use cases for this "#AI", substitute "AI" with "genocide".
Because, you know, there are "use cases" for genocide too, and apparently a lot of people don't mind, as long as they can benefit from it and look the other way.
#NoAI
LLMs have no concept of "true" or "good." But they are trained to signal high-quality work. Meanwhile, bosses are pressuring workers: go faster, produce more, let the AI cook.
Study after study documents what this does to the human brain: cognitive surrender. We're "in the loop" but the bot calls the shots.
Read more in this week's issue of the Product Picnic newsletter:
Let's normalize calling anything output with an #LLM #slop.
It doesn't matter that you've only used an LLM to fix punctuation. It's slop.
It doesn't matter that you've spent an hour reviewing the slop to make sure it's good. It's still slop.
It doesn't matter that it's better than anything you wrote your entire life. It's slop.
If you didn't write it yourself, it's just glorified LLM slop.
#AI #NoAI #NoLLM
"Microsoft and Stellantis want to use AI to help car owners"
H. E. L. P. is an interesting way to spell "hurt". 🤨
#Microsoft #Stellantis #cars #AI #LLM #sarcasm
More fallout from the chardet AI licensing kerfuffle.
#AI
[OT, Forbes] The state of the $1.7 trillion AI bubble: the end of thinking https://www.forbes.com/sites/gilpress/2026/02/27/the-state-of-the-17-trillion-ai-bubble-the-end-of-thinking/
I discovered MiniMax M2.7, which seems equivalent to GLM-5 at a third of the price
#TIL
Oh, #GitHub is empathetic to #OpenSource projects impacted by all the #AI slop. They're willing to help, right?
#Copilot even once, and of course they're not going to let people actually block this piece of shit.
#LLM #NoAI #NoLLM #hypocrisy #Microsoft
Whenever a #FreeSoftware project is suffering from an onslaught of low-quality LLM-generated pull requests, there will be a bunch of #LLM lovers complaining that people shouldn't be talking of "LLM-generated" being part of the problem, because "using AI isn't bad" in itself. Of course, they entirely ignore all the ethical and environmental concerns, and probably write crappy code themselves.
#AI #NoAI
#Gentoo is still one of the bright outposts in #FLOSS where human work is valued and #LLM contributions are banned. However, sometimes I feel that this matters very little.
After all, Gentoo is a distribution. While it has its own value, it cannot exist without all the software it is shipping. It makes no sense in isolation.
And let's be honest, I don't think you can avoid slop today. We are trying our best to sieve out the worst: the copywashing of chardet, the vibecoded NIH Perl crypto packages… but that's all it is.
As someone who bumps Python packages, let me tell you this: LLMs are omnipresent. I notice Claude in commit logs, I notice the blasphemy of agent instructions all over the place… and there's probably much more that I don't notice. With many core components giving in, you can't avoid it without literally freezing on old, vulnerable versions, or spending hours looking for alternatives or creating them.
FLOSS is dead. People don't care. They have no conscience. All they care about is the sick idea of "productivity", i.e. generating more slop.
The few of us who do care can do very little. We will continue doing our best until they kill us (as they're literally slowly killing the whole humankind). But that's it. Maybe it will pass once the bubble pops, maybe it won't. Either way, the damage is beyond repair. We will never be able to trust one another like we did. We will never again be a community building a better world.
It's just like everything nowadays. It's hard to find a good washing machine (one that will actually be repairable), good shoes (that won't fall apart shortly after the warranty expires), good food. You need lots of money, and even then you have to sieve through all the scammers who just sell the same shit with higher profit margin. #OpenSource is just another branch of business where people are trying to "sell" you shit, and don't care anymore if it explodes in your face. They don't even care if they're actually making a profit.
#AI #NoAI #NoLLM #enshittification #AntiCapitalism
Yesterday I read a vibe-coded script for the first time in my life, and I cried.
It wasn't ugly. "Ugly" is not the right term. It was as if someone wasn't able to comprehend beauty, but badly tried to mimic it. It felt like "malicious compliance" to beauty. The kind of awful verbose pedantry that feels wrong every step of the way.
It's the kind of code you'd expect in a corporate environment where you know the code will be read by top suits who have no idea about coding, but judge it by volume and expect a science-fiction level of make-believe.
It's the kind of code that is abstracted down to the tiniest details. Every function returns a complex dataclass explaining precisely what it did, for no reason at all. What should be two lines of code is a function. What should be a function is a whole module. It's a caricature of good programming practices.
I was supposed to add support for modifying a second field on the same object via the GitHub API. I guessed it would take me about an hour to understand the code well enough to do that — what ought to be 2-3 extra lines. I suspected I'd discover that most of the code does precisely nothing. Just meaningless API exchanges that are absolutely unnecessary. It felt like the kind of parody of bureaucracy where you have to file 10 forms to get anything done, and only one of them actually means anything.
What used to be "do one thing well" became "doing ten totally random things is fine, as long as one of them happens to be what I need, and the whole thing doesn't blow anything up in an obvious way".
Perhaps it's just because this was a throwaway script. Maybe "production" stuff takes more, err, prompt refining? Maybe it actually can produce stuff that's comprehensible.
But if that code was any indicator, then I'm not going to believe that any big LLM contributions are actually reviewed by humans. A review would take more time than a rewrite from scratch. This is a ticking time bomb. If LLM-generated code isn't introducing exploits right now, it's either a statistical accident, or simply that nobody has bothered yet.
Clarification: I didn't "prompt" it or request one. I'm not a hypocrite.
#NoAI #NoLLM #AI #LLM
#LLM users be like:
Why are you accusing me of supporting slavery? I never said I support slavery. I merely buy cheap tobacco! It's not my fault that all the cheap tobacco is coming from slave-driven plantations! Find me a cheaper tobacco that's manufactured ethically, and I'll surely switch over!
Smokers are being persecuted again! All we wish for is for people to respect our constitutional right to poison everyone around us! Is it really that much?!
#AI #NoAI #NoLLM
I truly believe that LLMs are the worst thing that has happened in IT in recent years (or rather, the culmination of the worst thing that's been poisoning the IT world), and I wholeheartedly support all subversive actions against them, ranging from poisoning training data to abusing support chatbots to make them unprofitable. However, at the same time I realize that all these actions increase the environmental harm caused by the #LLM folk.
It's like true guerrilla warfare. We're metaphorically burning down buildings, and I hate that it had to come to that.
#AI #NoAI #NoLLM
A standard that requires almost $300 to read is not a standard. It's extortion.
Looking at you, ISO...
#AI #LLM #ISO #Standards
#LLM users should be obliged to buy *expensive* scraping offsets, and the money should go to #FreeSoftware projects that have to cope with their infrastructure being *killed* by crappy #AI scrapers.
Yes, #Gentoo is suffering from another wave. And yes, if you use their products and thereby support their business model, please don't use Gentoo.
Modern use of LLMs often involves giving them access to the local system: to read and write your project files, and to execute arbitrary commands, often unsupervised. So aren't people worried about a harness just doing what a remote #LLM tells it to do?
I think a statement I've heard lately summarizes the mindset well. It went something along the lines of: "I can't give you a 100% guarantee, but I've noticed that LLMs are very good at following instructions, and they're getting better and better, so I don't worry about that anymore".
Like, it is completely fine to introduce a humongous security hole, because the probability that a model will *accidentally* do something horrible is decreasing.
#AI #NoAI #NoLLM #security
Let me tell you a parable.
There was a student who was given an assignment to write an essay. The student found 10 similar essays online. He copied selected bits from different essays. He tediously reworded the result, removed some sentences, added some adjectives and adverbs, shifted some more sentences around, added some glue — all with the single-minded goal of covering his tracks. Eventually, a voluminous essay was complete.
The student put a lot of effort into this; possibly even more than if he had written it himself. He did learn a bit about essays, though he didn't really practice writing one. He did practice some skills that would be useful in a future bullshit job, though. The essay passes all #plagiarism checks, even though it immediately raises red flags for any human reading it: the sudden style changes, contradictory statements, sentences that don't make much sense in context. And if he were asked to defend it, he might be in trouble.
So, the student put in effort (though not the right kind of effort), produced a mediocre essay, and learned something (though bullshit skills rather than creative skills). Now let's consider a different situation: rather than doing all that himself, the student paid somebody else to do it; and not to *write* an original essay, but to perform all the shenanigans described above.
That's precisely what using LLMs is. You tell them to write an essay, so they find and mix random stuff, and produce a mediocre essay. You put in no effort, you learn nothing, perhaps you don't even read "your" essay. And it passes all the plagiarism checks.
#AI #LLM #NoAI #NoLLM #chardet
So how would you feel if you learned that the guy you've been copying all your homework from recently has been not-so-secretly helping fascist governments commit genocide? And that he's quite proud of it, too.
Oh right, you'd just say "it's not like doing my own homework will change anything". And then you'd give him your lunch money.
#AI #LLM #NoAI #NoLLM #Claude #Anthropic
Last night I had a #nightmare.
I dreamt that I'd sent a pull request to a project, and it turned out that the whole CI pipeline was just LLMs dynamically slopping out random tests against the PR. And of course those tests couldn't pass, and there was nothing you could do to make the PR actually pass them.
#AI #LLM #NoAI #slop
The key takeaways from the early part of the #chardet thread (I didn't read beyond the first ~30 comments, I have my limits).
1. People there love cosplaying lawyers. Except when the other side also starts cosplaying lawyers, in which case they suddenly pivot to suggesting that people ask professional lawyers.
2. Almost nobody there is concerned with ethics or morality.
3. There are a lot of GPL haters there. Like, they seem to be the kind of people who don't really care about licensing at all, who just used MIT in their projects because it was cool, heard something about license incompatibility, and now bash everything that's (L)GPL.
4. People don't get that LLMs are statistical models and can't build anything from the ground up. All they can do is remix, which implies they use existing code for inspiration.
5. The maintainer who did the rewrite is a total asshole, and is perfectly aware of it.
Honestly, I'm truly waiting for the subsidizing to end, when companies start charging obscene amounts for the use of LLMs. Of course, the reality is that we're totally fucked. We have a lot of projects that have adopted a lot of #slop, and people who are increasingly addicted to this shit. The moment they can't afford it, we'll be left with lots of broken code nobody wants to maintain.
And I definitely don't want to put my effort into packaging crap if its maintainers don't even bother trying.
#AI #LLM #NoAI #NoLLM