2025-11-28 06:05:34
A motto for the Christmas season:
Ä tännchen is all you need.
(A German pun: a "Tännchen" is a little fir tree, and "Ä tännchen" sounds like "Attention", as in "Attention is all you need".)
#wortspiel #llm
"I've discovered the GLM-5 open-weights model."
#LLM #OpenWeights
Do LLMs make *anything* better? Either way, they seem like the ultimate genie that we now can't put back in the bottle.
#LLM
"Is Anthropic underselling its subscriptions, or overcharging for its API?"
#llm
#LLM saved me one hour of writing and all it took was two hours of your review!
I am not against AI. I am against technology built on copyright violation and sweatshop labor that is actively undermining our ability to save the planet from baking so that people can produce more propaganda and pollute the common well.
If the Venn Diagram seems like a circle, that's not my fault.
#AI #LLM
So I wanted to write a longer #NoAI piece but apparently my blog is down (and this time, miraculously, it might not be #AI scrapers), so I'll give you a sneak peek of what I wanted to say in the more hyperbolic part on how the #LLM discourse has all the common features of libertarian discourse.
"According to Google, LLM-backed searches don't consume much more energy than regular searches" [ignoring model training, surely.]
− According to carbrains, cars are actually cheaper than public transport, provided that you compare gasoline costs with ticket prices and ignore the cost of buying and owning a car. Not to mention all the indirect costs: wasted space (roads, parking lots, garages), environmental pollution, accidents…
"AI is just a tool, people decide if it's used for good or bad."
− Ah, yes, and "guns don't kill people."
"AI has its uses."
− So does asbestos.
"Let's not judge contributions by whether they were created using AI, but on their actual quality."
− "Let's not judge contributions by whether they were created using slave work…"
"I do not use AI myself, but I don't want to block others."
− "I do not keep slaves myself…"
#NoLLM #hyperbole
This is so good, I love it: #llm
I mostly use Perplexity or ChatGPT instead of classic search engines. Fast, compact, convenient. But also error-prone. A look at the opportunities, the limits, and the question of how much we really can, and should, leave to answer machines. #LLM #KIAgenten
Funny how AI writing continues to sound basically the same now vs 2023 and across individuals.
This is despite a bazillion new models coming out, multiple competitor orgs building their own models, and thousands upon thousands of people spending hundreds of hours customizing their prompts and inputs and building personalized agents and flows…
Has anyone made a taxonomy of AI / LLM writing styles yet? I feel like I see about 3-4 distinct versions of “style”.
#writing #AI #LLM
I was in the process of writing a short story when all this talk of #LLM spellchecking came up. So I wrote a story that you can't actually use an LLM spellchecker on because it breaks them:
#Writing #Fiction #SolarPunk
I don’t think #LLM capabilities are where this article thinks they are, but I do think this is an interesting economic thought exercise nevertheless
https://www.citriniresearch.com/p/2028gic
In the age of "#AI" assisted programming and "vibe coding", I don't feel like calling myself a programmer anymore. In fact, I think that "an artist" is more appropriate.
All the code I write is mine entirely. It might be buggy, it might be inconsistent, but it reflects my personality. I've put my metaphorical soul into it. It's a work of art.
If people want to call themselves "software developers", and want their work described as a glorified copy-paste, so be it. I'm a software artist now.
EDIT: "craftsperson" is also a nice term, per the comments.
#NoAI #NoLLM #LLM
Tell me, how far are we from using an #LLM as a #fitnessfunction with #geneticprogramming? Has anyone experimented with this already?
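For what it's worth, the basic loop is easy to sketch. A minimal, hypothetical Python sketch follows: `llm_fitness` here is a deterministic stand-in that just scores closeness to a target string; in a real experiment that function would call a model and ask it to rate each candidate, and every name in this sketch is made up.

```python
import random

def llm_fitness(candidate: str) -> float:
    # Stand-in for a real LLM call (e.g. "rate this candidate from 0 to 5").
    # Here it simply counts characters matching a fixed target string.
    target = "hello"
    return sum(a == b for a, b in zip(candidate, target))

def mutate(candidate: str) -> str:
    # Replace one random character with a random lowercase letter.
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice("abcdefghijklmnopqrstuvwxyz") + candidate[i + 1:]

def evolve(population, generations=200):
    for _ in range(generations):
        # Score every candidate; each score would be one LLM call in practice.
        scored = sorted(population, key=llm_fitness, reverse=True)
        # Keep the top half, refill with mutated copies of the survivors.
        survivors = scored[: len(scored) // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=llm_fitness)

random.seed(42)
pop = ["".join(random.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(5))
       for _ in range(20)]
best = evolve(pop)
print(best, llm_fitness(best))
```

Because the survivors are carried over unchanged each generation, the best fitness can never decrease; the open question in the original post is whether a noisy, expensive LLM scorer still gives the selection pressure that a genetic algorithm needs.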
A friend who is a teacher claimed that LLMs can write consistent plots now and are allegedly used for stories in textbooks or course material in language learning classes.
I find that quite hard to believe because in my limited experience, what information the #LLM will "remember" is quite random and it will just make stuff up if it "forgot", i.e. it doesn't matter if I…
The more I listen to the industry, the more I think software quality may be enough of a differentiator in the future to offset some of the #LLM damage
I'm finally trying out some local LLM models.
Ollama just told me that it's trained on GPL software, so any code it produces needs to be GPL.
Then in a second chat, it said the opposite.
#AiIsGoingGreat #LLM #AI
idea: add #llm support to #syslog so that when nothing interesting is happening, it generates some exciting log entries, and when something interesting is happening, it hides it in the noise. Just to keep SOC people entertained.
#tormentnexus
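Joking aside, the mechanics are trivial to sketch. A hypothetical Python sketch, with canned templates standing in for the LLM (the templates and function names are all made up):

```python
import random

# Canned templates stand in for the #llm call; a "real" torment nexus
# would presumably prompt a model for fresh, exciting log lines.
EXCITING_TEMPLATES = [
    "kernel: possible intrusion detected on port {port}",
    "sshd: {n} failed login attempts from 10.0.0.{host}",
    "audit: privilege escalation attempt by uid {uid}",
]

def decoy_entry() -> str:
    # Fabricate one "exciting" log line from a random template.
    return random.choice(EXCITING_TEMPLATES).format(
        port=random.randint(1, 65535),
        n=random.randint(10, 999),
        host=random.randint(1, 254),
        uid=random.randint(1000, 2000),
    )

def augment(real_entries):
    if not real_entries:
        # Quiet shift: invent some excitement for the SOC.
        return [decoy_entry() for _ in range(3)]
    # Something real is happening: bury it in three times as much noise.
    mixed = list(real_entries) + [decoy_entry() for _ in range(3 * len(real_entries))]
    random.shuffle(mixed)
    return mixed

print(augment([]))
print(augment(["disk0: uncorrectable read error"]))
```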
A chat about peer review, editorial on LLM-generated review of manuscripts #LLM
I became a programmer because I found it much easier to program computers than to talk to people. Why would anyone in their sane mind claim that I'd be better off talking in human language to machines that pretend to be the kind of smug humans who have no clue about coding, but are going to fulfill all the assignments given by me by googling and copy-pasting whatever they can find?!
#NoAI #AI #LLM
The case of “vegetative electron microscopy” illustrated here shows what is badly needed in current #LLM research and has implications far beyond. We need tools that help us curate huge corpora. We need to be able to trace #hallucinations back to the training data and understand what are the specific (to a sur…
Searching the Internet in the past: you type a few keywords. You get a bunch of sites. You check these sites for the information you need.
Searching the Internet in the future: you type your question as a full sentence. You get an answer that may be complete bullshit. You ask for sources. You get a list of sources that may be entirely made up. You check the sources. They are an obvious #AI #slop…
#LLM #enshittification #NoAI #NoLLM
Want answers 10X faster and 10X more accurate than LLMs? Use the DuckDuckGo CLI. I'm using that today to study for a cert. I had been using a number of LLMs but they are sooooo sloooooow.
#llm
It's #EmacsConf #2025! I didn't watch live but I'm catching up. I found this interesting because it outlines 3 very different ways to interact with #LLMs. I must admit I'm not yet confident enough to hand over the keys to an agent until I'm satisfied with the sandboxing.
…Process creates friction, so we got rid of process. But that friction was necessary for holding workslop at bay.
Because without slowing down, we can't ask "is this good? is this right?" We can only ask "when will it be done?" And that's a world where #LLM outputs will always beat people.
Fortunately, an "optimized" process moves slowly, because prod…
Kevin Xu argues that it's misleading to characterise the US–China AI competition as a race, since there's mutual co-operation and co-optation going on all the time: #AIResearch #LLM
Roman elites drank from leaded cups because it made water sweeter. Radiation was at one time thought to have healing properties, so people would add uranium to their drinking water. Glowing dishes are still a collector's item. After the discovery of x-rays, shoe stores started installing them and using them on kids' feet to size shoes. Lead was added to gasoline to improve engine performance, and to paint to make it whiter. We all know about asbestos and DDT.
We look back at all of this and think, "how could people have been so incompetent back then?" Some of these things caused irreparable harm in their generation, some continue to cause harm today almost 100 years later.
If you wonder that, look at the whole #LLM thing and you have your answer.
🧠 #Headroom - The Context Optimization Layer for #LLM Applications #opensource #Python
…
I did it, guys... I used ChatGPT in a productive way.
I've been banging my head against the wall trying to get some Perl XPath stuff to work... I asked it a specific question with the XML I had, and what it produced works. And it's reasonably succinct.
I stand ready to be flogged.
#AI #coding #Perl #LLM
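Not Perl, but for anyone curious what that kind of query looks like, here's a minimal sketch using only Python's standard library, with made-up element names. (ElementTree supports only a limited XPath subset; Perl's XML::LibXML handles far more of XPath.)

```python
import xml.etree.ElementTree as ET

# A toy document with hypothetical element names.
xml = """
<library>
  <book id="1"><title>Dune</title><author>Herbert</author></book>
  <book id="2"><title>Hyperion</title><author>Simmons</author></book>
</library>
"""

root = ET.fromstring(xml)
# XPath-style query: every <title> under a <book> whose
# <author> child text equals "Simmons".
titles = [t.text for t in root.findall(".//book[author='Simmons']/title")]
print(titles)  # ['Hyperion']
```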
iTerm2 now lets an LLM view & drive a terminal?? That's a huge way to destroy trust. That's just as bad as letting an IDE or email app leak private information.
#llm #ai #enshittification #wtf
Whenever people are commenting on another half-assed, crappy #LLM feat, claiming that there are "some" use cases for this "#AI", substitute "AI" with "genocide".
Because, you know, there are "use cases" for genocide too, and apparently a lot of people don't mind, as long as they can benefit from it and look the other way.
#NoAI
Tracing the thoughts of a large language model
#LLM
Absolutely on brand for Big Mouse
#AI #Copyright #LLM #Disney
https://www.bbc.com/news/articles/c5ydp1gdqwqo
Oh, #GitHub is empathetic to #OpenSource projects impacted by all the #AI slop. They're willing to help, right?
They don't mention #Copilot even once, and of course they're not going to let people actually block this piece of shit.
#LLM #NoAI #NoLLM #hypocrisy #Microsoft
Whenever I see yet another #AI "AGENTS" file, trying to write instructions for *machines* in human language, as if the #LLM statistical algorithm could actually reason about them, a Butlerian jihad opens in my pocket. And the fact that they give clear instructions as if they were talking to an #ActuallyAutistic person adds insult to injury.
#NoAI
Whenever a #FreeSoftware project is suffering from an onslaught of low-quality LLM-generated pull requests, there will be a bunch of #LLM lovers complaining that people shouldn't be talking of "LLM-generated" being part of the problem, because "using AI isn't bad" in itself. Of course, they entirely ignore all the ethical and environmental concerns, and probably write crappy code themselves.
#AI #NoAI
In this edition of "Conversations with LLMs".
#LLM #technology #addiction #upselling #desperation
We should be using all the copper we possibly can to electrify the world as fast as possible.
Instead tech-bros are like: “Ooo lets use all the everythings to build idiotic slop machines.”
#AI #Copper #Electricity #EndFossilFuels #LLM #Bubble #climateEmergency
https://www.ctvnews.ca/business/article/how-tight-supply-ai-demand-propelled-copper-towards-us12000/?utm_source=flipboard&utm_medium=activitypub
#LLM users be like:
Why are you accusing me of supporting slavery? I never said I support slavery. I merely buy cheap tobacco! It's not my fault that all the cheap tobacco is coming from slave-driven plantations! Find me a cheaper tobacco that's manufactured ethically, and I'll surely switch over!
Smokers are being persecuted again! All we wish for is for people to respect our constitutional right to poison everyone around us! Is it really that much?!
#AI #NoAI #NoLLM
Cynicism, "AI"
Someone pointed me to the "Reflections on 2025" post by Samuel Albanie [1]. The author's writing style makes it quite a fun read, I admit.
The first part, "The Compute Theory of Everything", is an optimistic piece on "#AI". Long story short, poor "AI researchers" have been struggling for years under the predominant misconception that machines were already powerful enough. Fortunately, now they can finally get their hands on the kind of power that used to be available only to supervillains, and all they have to do is forget about morals, agree that their research will be used to murder millions of people, and accept that a few more millions will die as a side effect of the climate crisis. But I'm digressing.
The author refers to an essay by Hans Moravec, "The Role of Raw Power in Intelligence" [2]. It's also quite an interesting read, starting with a chapter on how intelligence evolved independently at least four times. The key point inferred from that seems to be that all we need is more computing power, and we'll eventually "brute-force" all AI-related problems (or die trying, I guess).
As a disclaimer, I have to say I'm not a biologist; just a random guy who has read a fair number of pieces on evolution. And I feel like the analogies drawn here are misleading at best.
Firstly, there seems to be an assumption that evolution inexorably leads to higher "intelligence", with a certain implicit notion of what intelligence is. Per that assumption, any animal that gets "brainier" will eventually become intelligent. However, this seems to miss the point that neither evolution nor learning operates in a void.
Yes, many animals did attain a certain level of intelligence, but they attained it in a long chain of development, while solving specific problems, in specific bodies, in specific environments. I don't think that you can just stuff more brains into a random animal, and expect it to attain human intelligence; and the same goes for a computer — you can't expect that given more power, algorithms will eventually converge on human-like intelligence.
Secondly, and perhaps more importantly, what evolution succeeded at first was producing neural networks that are far more energy-efficient than anything computers are doing today. Even if "computing power" indeed paved the way for intelligence, what came first was extremely efficient "hardware". Nowadays, humans seem to be skipping that part. Optimizing is hard, so why bother? We can afford bigger data centers, we can afford to waste more energy, we can afford to deprive people of drinking water, so let's just skip to the easy part!
And on top of that, we're trying to squash hundreds of millions of years of evolution into… a decade, perhaps? What could possibly go wrong?
[1] #NoAI #NoLLM #LLM
#LLM users should be obliged to buy *expensive* scraping offsets, and the money should go to #FreeSoftware projects that have to cope with their infrastructure being *killed* by crappy #AI scrapers.
Yes, #Gentoo is suffering from another wave. And yes, if you use their products and thereby support their business model, please don't use Gentoo.
In this episode of "Conversations with LLMs". ⛔
#LLM #VibeCoding #softwareEngineering #ethics #responsibility
Never thought I'd see the day when an #LLM in the current crop chooses #honesty over #fabrication. I guess, in this case at least, I beat #Copilot into submission (with an earlier criticism). But pretty sure it'll soon be back to confidently fabricating answers.
#hallucination #ethics #technology
Anthropic took out a Claude ad to mock OpenAI's ChatGPT for including, *checks the news again*, ads.
#irony #ads #advertisement #SuperBowl #technology #business #AI #LLM
So, "#AI boosted your productivity"? Well, are you a software developer or a factory worker?
Productivity is a measure of predictable output from repetitive processes. It is how much shit your factory floor produces. Of course, once attempts to boost productivity start affecting the quality of your product, things get hairy…
"Productivity" makes no sense for creative work. It makes zero sense for software developers. If your work is defined by productivity, then it makes no sense to use as #LLM to improve it. You can be replaced entirely.
Artists get that. The fact that many software developers don't suggests that the trade took a wrong turn at some point.
Inspired by #NoAI
Last night I had a #nightmare.
I dreamt that I sent a pull request to a project, and it turned out that the whole CI pipeline was just LLMs dynamically slopping random tests against the PR. And of course those tests couldn't pass, and there was nothing you could do to make the PR actually pass.
#AI #LLM #NoAI #slop