To add a single example here (feel free to chime in with your own):
Problem: editing code is sometimes tedious because external APIs require boilerplate.
Solutions:
- Use LLM-generated code. Downsides: energy use, code theft, potential legal liability, error-prone output, etc. Upsides: popular among some peers; seems easy to use.
- Pick a better library (not always possible).
- Build internal functions to centralize boilerplate code, then use those (benefits: you get a better understanding of the external API, and a more-unit-testable internal code surface; probably less amortized effort).
- Develop a non-LLM system that actually reasons about code at something like the formal semantics level and suggests boilerplate fill-ins based on rules, while foregrounding which rules it's applying so you can see the logic behind the suggestions (needs research).
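The "internal functions" option above can be sketched concretely. Suppose an external HTTP API demands the same auth-and-headers boilerplate on every call (the API, endpoint names, and `buildRequest` helper here are all hypothetical, for illustration only): centralizing that boilerplate in one internal function gives you a single place to understand the external API and a pure, unit-testable surface.

```typescript
// Hypothetical external API: every call needs the same auth header,
// content type, and versioned base URL (classic boilerplate).
const BASE_URL = "https://api.example.com/v2";

interface RequestSpec {
  url: string;
  method: "GET" | "POST";
  headers: Record<string, string>;
  body?: string;
}

// Internal wrapper: the only place that knows the boilerplate rules.
// Pure function, so it is unit-testable without touching the network.
function buildRequest(
  endpoint: string,
  apiKey: string,
  payload?: object
): RequestSpec {
  return {
    url: `${BASE_URL}/${endpoint}`,
    method: payload ? "POST" : "GET",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
      Accept: "application/json",
    },
    body: payload ? JSON.stringify(payload) : undefined,
  };
}

// Callers now write one line instead of repeating the boilerplate:
const req = buildRequest("users", "secret-key", { name: "Ada" });
```

The amortization argument is visible here: the wrapper costs a little up front, but every subsequent call site shrinks to one line, and the boilerplate rules live (and get tested) in exactly one place.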
Obviously LLM use in coding goes beyond this single issue, but similar analyses apply to each potential use of LLMs in coding. In all cases there are:
1. Existing practical solutions that require more effort (or in many cases only seem to, and are actually less effort when amortized).
2. Near-term researchable solutions that directly address the problem and which would be much more desirable in the long term.
Thus in addition to disastrous LLM effects on the climate, on data laborers, and on the digital commons, they tend to suck us into cheap-seeming but ultimately costly design practices while also crowding out better long-term solutions. Next time someone suggests how useful LLMs are for some task, try asking yourself (or them) what an ideal solution for that task would look like, and whether LLM use moves us closer to or farther from a world in which that solution exists.
Generic Reduction-Based Interpreters (Extended Version)
Casper Bach
https://arxiv.org/abs/2508.11297 https://arxiv.org/pdf/2508.11297
It’s always “AI is great for generating boilerplate code” and never “why do we even need boilerplate code, maybe programming is broken”
This is a BFD.
Rand Paul and Tom Massie have been calling out Stephen Miller's gulag numbers for several weeks;
it got them both disinvited from the White House picnic.
And now Ron Johnson is joining in?
https://fed.brid.gy/r/https://bsk…
Frankly, I feel prouder to be Portuguese when I see this than when the national team wins championships.
This is a Man. With a capital M.
https://bsky.brid.gy/r/https://bsky.app/profile/did:plc:bbp2b224lro3bfnzcqwwnkfo/post/3lr6lops…
TL;DR:🧵
Today I lost 2 hours because TypeORM + ESM + NestJS is a fragile combo when it comes to migrations.
No migration:status.
No ESM-compatible CLI.
No schema awareness.
Class name must match filename (with timestamp!).
So I wrote my own migration-status.ts script to compare database state with the migration folder.
Lesson: if you need INSERT INTO "schema"."table", don’t forget the schema.
ORMs give you boilerplate — and tra…
https://fed.brid.gy/r/https://bsky.app/profile/did:plc:bbp2b224lro3bfnzcqwwnkfo/post/3lqn3cekuas27
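A minimal sketch of what such a `migration-status.ts` check might look like. The comparison can be done as a pure function: feed it the migration names recorded in the database's migrations table and the files found in the migrations folder, and it reports what's pending, unknown, or misnamed. The timestamped-filename convention and the `<ClassName><Timestamp>` recorded-name format are assumptions based on TypeORM's default naming; the actual thread's script may differ.

```typescript
interface MigrationFile {
  filename: string;  // e.g. "1718000000000-AddUsers.ts"
  className: string; // class exported from that file
}

interface MigrationStatus {
  pending: string[];    // on disk, not yet run in the DB
  unknown: string[];    // recorded in the DB, but no matching file
  mismatched: string[]; // class name doesn't match the timestamped filename
}

// Compare DB-recorded migration names against the migrations folder.
// Assumption: TypeORM records a migration as "<BaseName><Timestamp>"
// and expects the class name to match the file name.
function migrationStatus(
  appliedNames: string[],
  files: MigrationFile[]
): MigrationStatus {
  const applied = new Set(appliedNames);
  const pending: string[] = [];
  const mismatched: string[] = [];
  const onDisk = new Set<string>();

  for (const f of files) {
    const m = f.filename.match(/^(\d+)-(.+)\.(ts|js)$/);
    if (!m) continue; // skip files that don't follow the convention
    const [, timestamp, baseName] = m;
    const expected = `${baseName}${timestamp}`; // name TypeORM would record
    onDisk.add(expected);
    if (f.className !== expected) mismatched.push(f.filename);
    if (!applied.has(expected)) pending.push(f.filename);
  }

  const unknown = appliedNames.filter((n) => !onDisk.has(n));
  return { pending, unknown, mismatched };
}
```

Keeping the comparison pure means the fragile parts (querying the migrations table, reading the folder) stay at the edges, and the logic itself is testable without a database.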
Every time I see Tom Homan's name, I laugh. That's exactly the kind of name that some lizard creature pretending…
day 1 after wisdom teeth extraction and I'm still feeling miserable LOL
@… 2. LLMs are prescriptivism incarnate. 2022 is forever established, there shall be no more evolution. They will always talk like people talked in 2022, value the same things as the average Reddit poster in 2022, generate the same boilerplate that was important in 2022.
If you don’t watch Stephen’s dispatches (or listen to them: there’s a podcast)…