2026-01-24 07:30:53
OpenAI and longtime US government contractor Leidos announce a partnership to roll out generative and agentic AI tools for specific missions at federal agencies (Miranda Nazzaro/FedScoop)
https://fedscoop.com/openai-chatgpt-le
Think about how expensive Uber has become. Now look at this chart (sorry; borrowed it from Reddit; assuming it’s not completely wrong), and draw the simplest conclusion about how much this LLM stuff is going to cost when you’re not getting a handout to subsidize it.
Heck, if a straight line is too hard for you, just ask the LLM.
RE: https://infosec.exchange/@hacks4pancakes/116117950930554437
No app that I use personally has inflicted any LLM on me.
iTerm added an optional plugin. Whatever.
Half of the dozen-ish browsers I have installed have some LLM garbage, …
Even if you're a prompting god, an LLM savant or otherwise an accomplished machine whisperer—
—it will inflate prices you have to pay for stuff too: computers, phones, hosting, Internet service, software, and generally anything with chips in it (or made, e.g., in a factory that uses stuff with chips in it), and any services that are provided by or facilitated by something with chips in it.
(This means all goods and services).
#LLM saved me one hour of writing and all it took was two hours of your review!
I see somebody else is on this topic today! And yes, billionaires will use regulatory capture to the maximum extent they can get away with — so yes, I fully expect the AI lobby to advocate a tangled legal regime where LLM output is copyrighted but copying data to train an LLM is not a copyright violation.
https://social.coop/@cwebber/116266757533136607
I was using the Python csv library for a script but decided I should dig into the pandas DataFrame stuff instead.
It was more complex, and it took me a while to figure things out, and I had to read a bunch of web pages explaining things.
But in the end, I am 100% happy I did it that way.
I did not want to ask some AI/LLM for the answers, or to write the code for me.
Because for me, the struggle and the journey is part of creating something worthwhile.
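For anyone weighing the same choice, here's a minimal sketch of both approaches side by side (the data and column names are invented for illustration, not taken from the original script):

```python
import csv
import io

import pandas as pd

# Hypothetical sample data standing in for the script's input file.
raw = "name,score\nada,90\ngrace,95\n"

# The csv-module way: plain dicts, and you do the aggregation yourself.
rows = list(csv.DictReader(io.StringIO(raw)))
avg = sum(int(r["score"]) for r in rows) / len(rows)

# The pandas way: the DataFrame carries dtypes and aggregation with it.
df = pd.read_csv(io.StringIO(raw))
print(avg, df["score"].mean())  # both give 92.5
```

The csv module is plenty for one-off scripts; the pandas route pays off once you start filtering, grouping, or joining.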
I have been thinking about how LLM agents pose a threat to open source projects and what strategies can offer us at least some protection. Nevertheless, this is likely to remain a challenge: https://cusy.io/en/blog/how-llm-agents-endanger-open-source-projects.html
from my link log —
PostgreSQL query cancellation / Ctrl-C in psql is insecure.
https://neon.com/blog/ctrl-c-in-psql-gives-me-the-heebie-jeebies
saved 2026-03-23
Is building an LLM inherently problematic? Not necessarily, but there's no good way to do it under capitalism. Is using a local LLM funding these evil companies? No. It's not.
Spelling and grammar checking is one of the few uses of LLMs that is not based on fundamentally failing to understand what an LLM actually is. A statistical model is gonna be *really good* at flagging things that are probably typos (low probability areas). There will be false positives, which is fine if you're actually paying attention...
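The idea above can be sketched with a toy unigram model: anything the model has never seen sits in a zero-probability region and gets flagged. (A real spellchecker would use smoothed probabilities and context; this corpus is invented for illustration.)

```python
from collections import Counter

# Toy corpus standing in for training data (invented for illustration).
corpus = "the quick brown fox jumps over the lazy dog the fox runs".split()
freq = Counter(corpus)

def flag_typos(text):
    """Flag words the model has never seen: the low-probability regions."""
    return [w for w in text.split() if freq[w] == 0]

print(flag_typos("the quikc fox"))  # → ['quikc']
```

The false positives the post mentions are exactly the legitimate rare words this scheme can't distinguish from typos, which is why a human still has to review the flags.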
I don’t think #LLM capabilities are where this article thinks they are, but I do think this is an interesting economical thinking exercise nevertheless
https://www.citriniresearch.com/p/2028gic
"What is a token"
Very nice article. You might like this popularization of how an LLM works, if you don't know already.
I personally appreciate the last third of it, about the meaning for coding tools, and about how single-purpose models could theoretically be so much more useful. (And - my addition - they could do so while being ethical and efficient too.)
All other (valid and invalid) arguments aside—the worst thing about "AI for coding" is that no one ever even mentions how this is better for the people who end up using the software produced.
(Spoiler: It isn't.)
No, what it's used for is product managers offloading product design decisions on programmers, because "with AI they can now just churn out features" and "we'll keep what sticks". (The first feature they're forced to churn out is to add useless LLM-based crap to applications. You know the feature: the one that all power-users of the software desperately go to Reddit for in an exercise of futility trying to find out how to permanently turn it off.)
It's a self-feeding feature creep and software bloat moloch—eating programmers and users.
This is such an unfortunate name though 😂
https://arxiv.org/html/2506.01732v1
This is so good, I love it: #llm
Something for my #TTRPG bubble - but not only the TTRPG bubble.
I'm the kind of person who really likes to have transcripts or summaries of TTRPG sessions, but also struggles with participating and taking notes at the same time.
LLM apps for creating automatic meeting transcripts looked really promising, but:
1. Are usually costly …
2. … create privacy concerns …
OK, so apparently I shouldn’t have said “beyond the obvious,” and the obvious needs stating:
(1) Copyright licenses very clearly •do• allow the copyright holder to determine who may use a work and for what purposes, at least when such use would be otherwise prohibited without a license. That is how the law works. Rightly or wrongly, empires are built on this: “Streaming service XYZ may offer this song for streaming but not for download until this date.” Copyleft is one example of this principle in action.
(1a) The thing that prevents discriminatory licensing (such as in Daniel’s strawmen) is anti-discrimination law, not copyright law.
(2) The reason copyleft specifically might prevent LLM usage is that •if• LLM output can be considered a derived work of the training material, then the output must also be licensed in the same way. That seems to me a thin reed: courts so far haven’t been willing to treat LLM output as derived work, even when the output includes things that would surely be considered plagiarism and grossly illegal if done by a human. But I don’t see another path to protection, and courts are still sorting this out…so.
https://mastodon.sdf.org/@dlakelan/116267990581623218
Not gonna get into it right now (gotta go to bed) but labeling criticism of LLM-based "AI" as "purity culture", and claiming that one can just legitimize using any and all tech if one somehow creates "free and open" versions of it, is not a good take. Really not. Refusing to use LLMs on ethical grounds is also not a claim that problems are solved "by shopping carefully". That's a lot of straw men just to legitimize using an LLM to do spellcheck. Maybe jus…
This is what the LLM crowd sounds like to me whenever they go on about how “It’s not perfect but it’s good enough”.
https://cosocial.ca/@mhoye/116111505546606451
RE: https://mastodon.online/@mwichary/116261562353534865
I almost didn't read this fun interesting post on the mouse cursor sprite because I thought it's about the LLM-powered Cursor software
Wow, I have the displeasure of discovering, while installing @… on iOS, that they've caved to the AI hype and that there's a WallabagPlus offer to use an LLM (OpenAI!) for summaries or tag suggestions...
Thankfully it's optional, but I'd have preferred it weren't there at all...
The more I listen to the industry, the more I think software quality may be enough of a differentiator in the future to offset some of the #LLM damage
broke kagi's translate toy by setting language to "kzin". it ended up emitting an endless sequence of "shrr't'k'ri'ar'shrr't'k'ri'ar'shrr't'k'ri'ar'shrr't'k'ri'ar". yeah this is just the usual LLM party trick. (it's possible that real languages have some backend that isn't just an LLM, and there's a simple LLM fallback for "not otherwise recognized" language)
So, sit down a moment and please take this with all your brain and criticism. I don't want to anger you, but to move the discourse forward.
Ok so:
I think if you avoid generative AI in your life, it's not going to send a message to anyone. If you are trying to win a moral argument, I'm sorry but no one cares and the world is going to shit anyway. The only reasons why you should not use them are because you are empathetic to the ones suffering and because you are trying to a…
In the legal profession, the arrival of LLMs is being celebrated enthusiastically. People are trying to raise their profiles. Maybe they're worried that hourly rates could come down. But beyond that? And then there's this study showing that with LLM use, cognitive capacity, and with it quality, keeps trending downward. In short: an LLM lawyer offers expensive run-of-the-mill sauce that you could get without a lawyer.
GLM-5 is a powerful model, open weights and fully trained on Huawei Ascend chips, without using any NVIDIA hardware. Underscores the importance of European investment in AI.
https://www.trendingtopics.eu/glm-5-the-wo
I mostly use Perplexity or ChatGPT instead of classic search engines. Fast, compact, convenient. But also error-prone. A look at the opportunities, the limits, and the question of how much we really can and should leave to answer machines. #LLM #KIAgenten
Canva COO Cliff Obrecht says the company hit $4B in ARR at the end of 2025, had 265M MAUs and 31M paid users, and expects to IPO in the next "couple of years" (Ivan Mehta/TechCrunch)
https://techcrunch.com/2026/02/18/canva-gets-to-4b-i…
With so many tokens that came before the current token, and so many possibilities that come after, it’s the job of the harness, system prompts, post-training, et al in concert with the human to weave that into something useful. Something valuable.
The LLM is the Temporal Loom.
And we’re The TVA.
So this is what that LLM is supposed to do...
$ echo "The quikc brown fox jumps over the lazey dog, but then the dog quickly retaliates and chase the fox back into the woods." | ollama run gnokit/improve-grammar
> The quick brown fox jumps over the lazy dog, but then the dog quickly retaliates and chases the fox back into the woods.
The obvious answer is copyleft-type licenses.
(1) Has anybody done legal analysis on that beyond the obvious? I don’t think LLM training on copyleft code has been tested in court yet…? (Even LLM training on more restrictively licensed works seems to be surviving court challenge….)
(2) Are there copyleft licenses (i.e. “derived works must be similarly licensed”) out there that don’t have the Stink of Stallman on them? Or is GPL v3 still just the way to go despite the smell?
2/2
I'm finally trying out some local LLM models.
Ollama just told me that it's trained on GPL software, so any code it produces needs to be GPL.
Then in a second chat, it said the opposite.
#AiIsGoingGreat #LLM #AI
SCOTUS Declines to Hear LLM-Backed AI Case Regarding Copyright - Conservancy Blog
<https://sfconservancy.org/blog/2026/mar/04/scotus-deny-cert-dc-circuit-thaler-appeal-llm-ai/> @…
yesterday I upgraded my local #llm to qwen3.5 - and it works pretty well; this is using Unsloth's Qwen3.5-35B-A3B-Q4_K_M.gguf - I also had to upgrade to the latest llama.cpp (and it's got a few rough edges); but it seems as good as the Qwen3-Next-80B I was using, and it's also multimodal (with the mmproj gguf needed) and the multimodal is usefully fast at an image description on CPU only…
Poland bans camera-packing cars made in China from military bases
https://www.theregister.com/2026/02/19/poland_china_car_ban/
I run a local LLM instead of burning up the planet. It's practical, but sometimes it would be nice to be able to ask questions about things that only a web search could handle. Then the task simply has to take more time, kind of like it did in the old days 😃
Tell me, how far are we away from using an #LLM as a #fitnessfunctions with #geneticprogramming ? Anyone experimented with this already?
Would be nice if someone familiar with @… apps sets up a tar pit solution to protect hosted applications like mastodon
https://tldr.nettime.org/@asrg/1138674
He's so close! Just a little bit more and he might see it!
https://mastodon.gamedev.place/@joethephish/115905042848812954
Less sarcastically, an interactive interface that recognized natural language and suggested shell commands could be done well, but wouldn't really need an LLM at all (and the inherent risks/costs of the LLM version make it the wrong tool for the job).
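A crude sketch of that non-LLM version: a handful of hand-written rules mapping natural-language requests to shell commands. (The patterns and commands here are hypothetical, just to show the shape of the idea.)

```python
import re

# Hand-written rules (hypothetical): a pattern over the user's request
# mapped to a builder for the suggested shell command.
RULES = [
    (re.compile(r"files?.*larger than\s+(\d+)\s*(mb|gb)", re.I),
     lambda m: f"find . -size +{m.group(1)}{m.group(2)[0].upper()}"),
    (re.compile(r"disk\s+(usage|space)", re.I),
     lambda m: "df -h"),
]

def suggest(request):
    """Return the first matching command suggestion, or None."""
    for pattern, build in RULES:
        m = pattern.search(request)
        if m:
            return build(m)
    return None

print(suggest("please show me disk usage"))        # → df -h
print(suggest("find files larger than 100 MB"))    # → find . -size +100M
```

A rule table like this is auditable and deterministic: you can read exactly which commands it will ever propose, which is the whole point versus the LLM version.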
Visual Basic's `On Error Resume Next` can be galaxy-brained to `On Error LLM("rewrite the program to stop doing that")`
A friend who is a teacher claimed that LLMs can write consistent plots now and are allegedly used for stories in textbooks or course material in language learning classes.
I find that quite hard to believe because in my limited experience, what information the #LLM will "remember" is quite random and it will just make stuff up if it "forgot", i.e. it doesn't matter if I…
“Security for LLM Agents” has now been published in @…: https://www.linux-magazine.com/Issues/2026/305/Securing-LLM-Agents
Everything I've written is my own, made by hand. I have used an LLM in this case, not to generate the text but to verify the payload. ;)
As usual, feedback is welcome. I have ADHD, mild dyslexia, and not a lot of free time. Grammar and spelling, especially typo checking, is always very much appreciated.
Edit:
There may be a few more mistakes than normal since I've kind of rushed it to hit while it's especially relevant.
Also... Open to formatting notes. I rushed that a bit too.
It's harder to get the RAM phase.
#DaddyJoke
"My map of the LLM ecosystem, March 2026"
#LLM
LLM-invented vocabulary word of the day: Inbetriebnungsschritte.
(#DeepL's translation of "commissioning steps")
#LäuftBeiUns
AI bros are just loving open source — loving it to death... maybe quite literally! (Godot being latest popular example[1])
More and more projects are impacted by floods of bogus AI pull requests and resulting discussions, stealing precious time and nerves away from their maintainers doing actual productive work. More buggy and insecure software (incl. commercial offerings) due to slopcoding, more websites getting attacked daily by AI crawlers in desperate search for any new bits (liter…
@… @… The "funny" part of course is that even if you're a diehard LLM fan, you should want the same things I want:
If your code is reasonably encapsulated, your LLM can fit the whole problem in the cont…
RE: https://mastodon.social/@stroughtonsmith/116097302666370371
Hahahahaha.
Nice alternate timeline the LLM is living in. 😂
While I'm talking about new ways to structure computer programs, what does it look like to _push_ data into an LLM? Instead of making it natural language fetching, with huge privilege to do stuff, can we structure things to, say, have one system supply some data context, and push it into an LLM to arrange processing, and that in turn pushes to other systems for action? It's not quite “code and data are separate", but I think the inversion might help mitigate a lot of the lethal trifecta. It also puts humans in the artisan-director seat, rather than the wannabe slave-master's seat, metaphorically.
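One way to sketch that inversion, with a stub standing in for the model (every name here is hypothetical): data is pushed in, the LLM only arranges it, and a separate system holds the privileges to act.

```python
def llm(prompt: str) -> str:
    # Stub: a real call would go to a model; here we just route on a keyword.
    return "notify" if "overdue" in prompt else "archive"

def supplier():
    """System A pushes a fixed data context; the LLM never fetches anything."""
    return [{"id": 1, "note": "invoice overdue"}, {"id": 2, "note": "paid"}]

def actor(action: str, item: dict) -> str:
    """System B receives the arranged result; the LLM holds no privileges here."""
    return f"{action}:{item['id']}"

# Data flows one way: supplier -> LLM (arrangement only) -> actor.
results = [actor(llm(item["note"]), item) for item in supplier()]
print(results)  # → ['notify:1', 'archive:2']
```

Because the model can neither choose its inputs nor execute anything itself, two legs of the lethal trifecta (access to private data on demand, ability to act) stay outside it.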
In this world nothing can be said to be certain except death, taxes, and an LLM dutifully exfiltrating your data via a hidden prompt:
https://www.promptarmor.com/resources/claude-cowork-exfiltrates-files
Open Slopware
“Free/Open Source Software tainted by LLM developers/developed by genAI boosters, along with alternatives.”
#AI
"Claws" is becoming a term to describe OpenClaw-like agent systems that usually run on personal hardware and are a new layer on top of LLM agents (Andrej Karpathy/@karpathy)
https://x.com/karpathy/status/2024987174077432126
🧠 #Headroom - The Context Optimization Layer for #LLM Applications #opensource #Python
…
This text contains both prompt injection and possible training set data poisoning. So... Don't use it to train an LLM. Or do... Fuck around and find out, if that's your game. I'm not your dad.
I’m utterly convinced that the reason CEOs (even of small companies) are shoving LLM into all products and forcing their employees to use them (despite them being universally despised) is social pressure from their CEO peers and they don’t want to appear to be “luddites” (yes thank you I know that’s not what that word actually means).
RE: #AI
If I stay in infosec, from now on I'll state up front that I'm a "conscientious objector" when it comes to LLM use. That seems to me the most fitting term.
Wikipedia definition:
> Conscientious objection is the refusal to perform certain acts required by an authority when they are judged to contradict deeply held convictions of a religious, philosophical, political, ideological, or sentimental nature.
I encou…
When you see a system prompt for an LLM based system and you just see some dude (it's always a dude) somehow trying to beg a bag of statistics to behave a certain way. Pleading. Ordering. WITH CAPITAL LETTERS.
Fucking ridiculous.
The whole CVE-10-event business is going to get a lot more exciting once companies really do start deploying LLMs instead of programmers at scale, completely unreflectively.
After my repeated posts / boosts arguing that in OSS we’ve overemphasized licenses and underemphasized community, governance, and sustainability…I actually have a license question:
What’s the current thinking on licenses that lay the legal groundwork for action against people using OSS source code for LLM training without seeking permission or offering compensation?
1/2
Why don’t the LLM harnesses that support MCP slap jq in front of the MCP response? Some of the MCPs are *very* verbose (looking at you Jira), and allowing the model to filter the response before it enters the context window would be very useful
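The kind of trimming described above, sketched in Python as the equivalent of a jq filter (the field names are invented for illustration; real Jira payloads are far noisier):

```python
import json

# Hypothetical verbose MCP-style response (invented field names; a real
# Jira response carries pages of changelog the model never needed).
response = json.dumps({"issues": [{
    "key": "AB-1",
    "fields": {"summary": "Fix login",
               "status": {"name": "Open"},
               "changelog": ["...pages of history..."]},
}]})

# Equivalent of the jq filter
#   [.issues[] | {key, summary: .fields.summary, status: .fields.status.name}]
# applied by a harness before the response hits the context window.
trimmed = [
    {"key": i["key"],
     "summary": i["fields"]["summary"],
     "status": i["fields"]["status"]["name"]}
    for i in json.loads(response)["issues"]
]
print(trimmed)  # → [{'key': 'AB-1', 'summary': 'Fix login', 'status': 'Open'}]
```

Letting the model (or the harness config) pick the filter per tool call would keep the verbose tools usable without flooding the context window.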
Why run multiple agents talking at each other when they're all really the same LLM anyway? Just prompt the singular LLM to pretend to be multiple people in conversation, and your simulated team will materialize a finished smb server that's not terrible.*
* May not be reproducible. Odds of winning are unknowable.
https://
I was in the process of writing a short story when all this talk of #LLM spellchecking came up. So I wrote a story that you can't actually use an LLM spellchecker on because it breaks them:
#Writing #Fiction #SolarPunk
@… It's not entirely the same problem as earlier attempts at making natural-language compilers.
The "large" part of LLM gives it more context and a bit of "common sense". It's still a guess, but drawn from a more likely distribution.
The learned context is so strong they can sometimes surprise Babbage and give right answers…
People tell me that #AI code is fine, because you can run automatic tests. But tests can only tell you if code is doing the thing you want it to do.
To know what it SHOULD do, we used to have requirements. But now requirements are themselves vibe-coded slopotypes.
People were hoping that this would get them higher velocity, but testing the requirements in production only produces waste an…
RE: https://hachyderm.io/@thomasfuchs/116108931167564199
If only he could afford a human editor that tells him that he might have an abrasive opinion on something and he should punch up, not down—but what do I know, I’m not an LLM
But it’s okay, because the LLM promises those 17 lines of changes aren’t slop. It spent 3 minutes making sure they aren’t slop.
Motherfucker, I could have told you in 0.3 seconds.
Xiaomi releases MiMo-V2-Pro, its new 1T-parameter foundation model, codenamed Hunter Alpha, which the company says benchmarks close to GPT-5.2 and Opus 4.6 (Carl Franzen/VentureBeat)
https://venturebeat.com/technology/xiaomi-stuns…
Let's hope for a guilty verdict, not a pay-off, er, "settlement."
https://techcrunch.com/2026/03/16/merriam-webster-openai-encyclopedia-brittanica-lawsuit/
I wonder how much of the LLM-for-coding hype is because the last 15 years in mainstream coding veered ever more enterprisey layer cakes that took all the fun out of programming
@… @… Currently we think before writing code, because writing it and changing it takes effort (and even for LLM-generated slop code it takes time, and burns money and rainforests, so it's better to get the spec right first)
✍️ Coding Agents are 3D Printers for software
I blogged more idle thoughts: https://www.codevoid.net/ruminations/2026/03/07/llm-coding-agents-are-3d-printers.html
Wait, I think I figured out the secret sauce:
If you blindly accept what the LLM generates on the first pass, and never try refining anything, you don’t have to waste time waiting for it to rework the code over and over again.
Might explain all the slop code my coworkers have been shipping lately.
MiniMax releases M2.7, a proprietary "self-evolving" LLM that the company used to build, monitor, and optimize the model's own reinforcement learning harnesses (Carl Franzen/VentureBeat)
https://venturebeat.com/technology/new
Oh phew, someone else on the team built a “Slop Detection” “skill” for the LLM so now it will magically stop being a statistical model and instead be a statistical model with ✨ slightly different inputs ✨.
Thank you kind stranger, I couldn’t have done it without you.
I know someone is going to tell me I’m just “doing it wrong”, or bragging but, I’ve written some *extremely basic* code this week with an LLM (I know I know, but it was mandated that I *try*).
I am absolutely certain I could have written this faster myself.
So do any of the people claiming "responsible use" of LLMs for coding use their own locally hosted LLM that has not been trained on (or based on a training set of) any data they have not personally vetted as being licensed to be used in such a way? (Both for training English and generating code?)
“Just run the LLM locally”
On your PC that costs $30,000