Tootfinder

Opt-in global Mastodon full text search. Join the index!

Generating code via an LLM prompt at breakneck speed can, predictably, lead to less-than-stellar work.
Now, in an ironic twist, it seems that, having dispensed with more skilled coders for the cheap effectiveness of a chatbot-aided code monkey,
companies are having to hire additional contractors to fix the AI’s screw-ups.

@Techmeme@techhub.social
2025-08-13 08:20:56

Arintra, whose AI medical coding system translates clinical documentation into insurance codes for healthcare providers, raised a $21M Series A led by Peak XV (Erin Brodwin/Axios)
axios.com/pro/health-tech-deal

@tante@tldr.nettime.org
2025-10-10 13:20:49

I do not think that Meta's Metaverse failed because the devs weren't efficient enough. It's conceptually flawed and underbaked at best, without vision and direction. A toy Mark no longer cares too much about.
(And even getting 5% more efficiency out of LLMs is a stretch; studies show improvements of 1 or 2% at best, because you have to spend a lot of time cleaning up the generated mess, especially if you are building frameworks and foundational technologies.)

@metacurity@infosec.exchange
2025-08-10 10:26:44

Among college graduates ages 22-27, computer science and computer engineering majors face unemployment rates of 6.1% and 7.5%, respectively, over double the unemployment rate among recent biology and art history graduates, which is just 3%.
Goodbye, $165,000 Tech Jobs. Student Coders Seek Work at Chipotle

@tiotasram@kolektiva.social
2025-08-02 13:31:53

Vibe coders: "Of course I carefully check everything the LLM generates! I'm not a fool."
Also vibe coders: "I code in Python without using type hints. A linter? What's that?"
#AI #LLMs #VibeCoding

@hynek@mastodon.social
2025-08-06 17:21:22

It is so profoundly bizarre to see both:
- The insinuation that coders writ large don’t want to use AI tools and have to be forced
- and the claim that somehow AI is the bee’s knees because all of genX are using it after passing on crypto or whatever.
Both takes are obvious nonsense and this might be the most myopic instance of bubbles I’ve seen so far.

@arXiv_csSE_bot@mastoxiv.page
2025-08-11 09:17:00

Position: Intelligent Coding Systems Should Write Programs with Justifications
Xiangzhe Xu, Shiwei Feng, Zian Su, Chengpeng Wang, Xiangyu Zhang
arxiv.org/abs/2508.06017

@Techmeme@techhub.social
2025-09-10 08:02:18

At the Man vs. Machine hackathon, co-hosted by AI nonprofit METR to test if AI helps people code faster and better, the top prize went to an "AI-supported" team (Kylie Robison/Wired)
wired.com/story/san-francisco-

@thomasfuchs@hachyderm.io
2025-09-30 16:33:14

I've been seeing a term that fash coders use for themselves more and more:
"Builders."
(It's implying that people who don't agree with their fascist worldviews are "tearing down" society and the world and have nothing to contribute and are therefore expendable.)

@fortune@social.linux.pizza
2025-09-19 03:00:02

* knghtbrd is each day more convinced that most C coders don't know what
the hell they're doing, which is why C has such a bad rap
<Culus> kb: Most C coders don't know what they are doing, it just makes it
easier to hide :P
<Culus> see for instance, proftpd :P

@metacurity@infosec.exchange
2025-08-26 11:13:58

bloomberg.com/news/articles/20
AI Makes It Harder for Entry-Level Coders to Find Jobs, Study Says

@chpietsch@fedifreu.de
2025-09-28 09:32:59

Cory Doctorow @… on #AI:
[T]he AI bubble is driven by monopolists who've conquered their markets and have no more growth potential, who are desperate to convince investors that they can continue to grow by moving into…

@newsie@darktundra.xyz
2025-10-10 12:21:46

Meta Tells Workers Building Metaverse to Use AI to ‘Go 5x Faster’ 404media.co/meta-tells-workers

@kubikpixel@chaos.social
2025-08-29 16:30:14

Will Coding AI Tools Ever Reach Full Autonomy?
🧑‍💻 #ai #code

@grumpybozo@toad.social
2025-08-28 22:33:55

Most of the responses seem to be for coders.
I hope I won’t ever need to do another sysadmin interview, but I wonder how silly it would be to have someone use an LLM to respond.
I ask wildly open questions because I don’t care as much about what they know as I do about how they think. Come back with what reads/sounds like LLM output and I don’t care where it came from or even whether it’s technically correct.

@tiotasram@kolektiva.social
2025-08-02 13:28:40

How to tell a vibe coder is lying when they say they check their code.
People who will admit to using LLMs to write code will usually claim that they "carefully check" the output, since we all know that LLM code has a lot of errors in it. This is insufficient to address several problems that LLMs cause, including labor issues, digital commons stress/pollution, license violations, and environmental issues, but at least if they are checking their code carefully, we shouldn't assume that it's any worse quality-wise than human-authored code, right?
Well, from first principles alone we can expect it to be worse, since checking code the AI wrote is a much more boring task than writing code yourself, so anyone who has ever studied human-computer interaction even a little bit can predict that people will quickly slack off, starting to trust the AI way too much, because it's less work. In a different domain, the journalist who published an entire "summer reading list" full of nonexistent titles is a great example of this. I'm sure he also intended to carefully check the AI output, but then got lazy. Clearly he did not have a good grasp of the likely failure modes of the tool he was using.
But for vibe coders, there's one easy tell we can look for, at least in some cases: coding in Python without type hints. To be clear, this doesn't apply to novice coders, who might not be aware that type hints are an option. But any serious Python software engineer, whether they used type hints before or not, would know that they're an option. And if you know they're an option, you also know they're an excellent tool for catching code defects, with a very low effort-to-reward ratio, especially if we assume an LLM generates them. Of the cases where adding types requires any thought at all, 95% offer a chance to improve your code design and make it more robust. Knowing about but not using type hints in Python is a great sign that you don't care very much about code quality. That's totally fine in many cases: I've got a few demos or jam games in Python with no type hints, and it's okay that they're buggy; I was never going to debug them to a polished level anyway. But if we're talking about a vibe coder who claims that they're taking extra care to check for the (frequent) LLM-induced errors, that's not the situation.
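A minimal sketch of what that catch looks like in practice, assuming a hypothetical helper function and a run of a type checker such as mypy over the file:

    def average_latency(samples: list[float]) -> float:
        """Return the mean of a non-empty list of latency samples, in ms."""
        return sum(samples) / len(samples)

    print(average_latency([12.5, 14.0, 13.2]))  # fine: prints ~13.23

    # A typical LLM-style slip: passing a comma-separated string instead of a list.
    # With the annotation above, mypy reports something like:
    #   error: Argument 1 to "average_latency" has incompatible type "str"; expected "list[float]"
    # Without hints, the mistake only surfaces (or silently misbehaves) at runtime.
    # average_latency("12.5,14.0,13.2")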
Note that this shouldn't be read as an endorsement of vibe coding for demos or other rough-is-acceptable code: the other ethical issues I skipped past at the start still make it unethical to use in all but a few cases (for example, I have my students use it for a single assignment so they can see for themselves how it's not all it's cracked up to be, and even then they have an option to observe a pre-recorded prompt session instead).

@trochee@dair-community.social
2025-08-20 03:56:27
Content warning: A good catch-all term for vibe coders, "AI artists", "prompt engineers" and "I used this LLM to write my thesis"

"Slop jockey" is my favorite neologism
Thanks to @… for responding to @… 's prompt.
Will be yoinking for my personal vocabulary; it's very pithy.

@Nathan@social.lostinok.com
2025-08-21 13:25:48

This feels eerily like the mistakes the tech industry made during the dot com crash when they stopped hiring all junior devs for several years and decimated the talent pipeline. The industry suffered with long development timelines and huge salary costs for a long time.

@arXiv_csSE_bot@mastoxiv.page
2025-07-24 09:27:00

Investigating Training Data Detection in AI Coders
Tianlin Li, Yunxiang Wei, Zhiming Li, Aishan Liu, Qing Guo, Xianglong Liu, Dongning Sun, Yang Liu
arxiv.org/abs/2507.17389

@Techmeme@techhub.social
2025-07-25 13:35:41

Anysphere launches Bugbot, an AI-powered tool that integrates with GitHub to detect coding errors introduced by humans or AI agents, for $40 per month per user (Lauren Goode/Wired)
wired.com/story/cursor-release

@grumpybozo@toad.social
2025-07-22 23:13:55

One of the reasons I love FOSS and small commercial developers is that they don’t have teams of quasi-competitive coders and/or “product managers” who have formal incentives to make visible changes to software.
E.g. Apple keeps changing their decor with apparently no real goal or concept of there being objectively better or worse visual styles. Someone made Liquid Glass & probably got a bonus for doing so. Quality of UX be damned, the look must always be fresh & new.

@Techmeme@techhub.social
2025-08-26 10:25:58

Stanford researchers: over the past three years, employment has dropped 13% for entry-level workers starting out in fields that are the most exposed to AI (Rachel Metz/Bloomberg)
bloomberg.com/news/articles/20

@Techmeme@techhub.social
2025-07-24 08:11:00

A profile of vibe coding startup Lovable, which became the fastest-growing software startup in history, reaching $100M in annualized revenue in eight months (Iain Martin/Forbes)
forbes.com/sites/iainmartin/20

@arXiv_csSE_bot@mastoxiv.page
2025-08-21 09:04:20

What You See Is What It Does: A Structural Pattern for Legible Software
Eagon Meng, Daniel Jackson
arxiv.org/abs/2508.14511 arxiv.org/pdf/2…

@tiotasram@kolektiva.social
2025-07-31 16:25:48

LLM coding is the opposite of DRY
An important principle in software engineering is DRY: Don't Repeat Yourself. We recognize that having the same code copied in more than one place is bad for several reasons:
1. It makes the entire codebase harder to read.
2. It increases maintenance burden, since any problems in the duplicated code need to be solved in more than one place.
3. Because it becomes possible for the copies to drift apart if changes to one aren't transferred to the other (maybe the person making the change has forgotten there was a copy), it makes the code more error-prone and harder to debug.
All modern programming languages make it almost entirely unnecessary to repeat code: we can move the repeated code into a "function" or "module" and then reference it from all the different places it's needed. At a larger scale, someone might write an open-source "library" of such functions or modules, and instead of re-implementing that functionality ourselves, we can use their code, with an acknowledgement. Using another person's library this way is complicated, because now you're dependent on them: if they stop maintaining it or introduce bugs, you've inherited a problem. Still, you could always copy their project and maintain your own version, and it would be not much more work than if you had implemented things yourself from the start. It's a little more complicated than this, but the basic principle holds, and it's a foundational one for software development in general and the open-source movement in particular. The network of "citations" that forms as open-source software builds on other open-source software and people contribute patches to each other's projects is a lot of what makes the movement into a community, and it can lead to collaborations that drive further development. So the DRY principle is important at both small and large scales.
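A tiny, hypothetical Python sketch of the small-scale version of this: the duplicated logic moves into one function that every call site references.

    # Before: the same normalization logic copy-pasted at two call sites,
    # so a bug fix in one place can easily miss the other.
    #   scores_a = [(x - min(raw_a)) / (max(raw_a) - min(raw_a)) for x in raw_a]
    #   scores_b = [(x - min(raw_b)) / (max(raw_b) - min(raw_b)) for x in raw_b]

    def normalize(values: list[float]) -> list[float]:
        """Scale values into [0, 1]; the single place to fix or change the behavior."""
        lo, hi = min(values), max(values)
        return [(v - lo) / (hi - lo) for v in values]

    # After: both call sites reference the one definition (DRY).
    scores_a = normalize([3.0, 7.0, 5.0])
    scores_b = normalize([10.0, 20.0, 15.0])
    print(scores_a, scores_b)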
Unfortunately, the current crop of hyped-up LLM coding systems from the big players are antithetical to DRY at all scales:
- At the library scale, they train on open source software but then (with some unknown frequency) replicate parts of it line-for-line *without* any citation [1]. The person who was using the LLM has no way of knowing that this happened, or even any way to check for it. In theory the LLM company could build a system for this, but it's not likely to be profitable unless the courts actually start punishing these license violations, which doesn't seem likely based on results so far and the difficulty of finding out that the violations are happening. By creating these copies (and also mash-ups, along with lots of less-problematic stuff), the LLM users (enabled and encouraged by the LLM-peddlers) are directly undermining the DRY principle. If we get what the big AI companies claim to want, which is a massive shift towards machine-authored code, DRY at the library scale will effectively be dead, with each new project simply re-implementing the functionality it needs instead of ever using a library. This might seem to have some upside, since dependency hell is a thing, but the downside in terms of comprehensibility, and therefore maintainability, correctness, and security, will be massive. The eventual lack of new high-quality DRY-respecting code to train the models on will only make this problem worse.
- At the module & function level, AI is probably prone to re-writing rather than re-using the functions it needs, especially with a workflow where a human prompts it for many independent completions. This part I don't have direct evidence for, since I don't use LLM coding models myself except in very specific circumstances, because it's not generally ethical to do so. I do know that when it tries to call existing functions, it often guesses incorrectly about the parameters they need, which I'm sure is a headache and a source of bugs for the vibe coders out there. An AI could be designed to take more context into account and use existing lookup tools to get accurate function signatures when generating function calls (a rough sketch of that idea follows below), but even though that would probably improve output quality significantly, I suspect it's the kind of thing that would be seen as too baroque and thus not a priority. I would love to hear I'm wrong about any of this, but I suspect the consequences are that any medium-or-larger codebase written with LLM tools will have significant bloat from duplicated functionality, and will have places where better use of existing libraries would have made the code simpler. At a fundamental level, a principle like DRY is not something that current LLM training techniques are able to learn, and while they can imitate it from their training sets to some degree when asked for large amounts of code, when prompted for many smaller chunks they're asymptotically likely to violate it.
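A rough sketch of the signature-lookup idea, using Python's standard inspect module to validate a proposed call before trusting it (the function and the generated arguments here are made up for illustration):

    import inspect

    def send_alert(recipient: str, message: str, *, priority: int = 0) -> None:
        """Stand-in for an existing function an LLM might be asked to call."""
        print(f"[p{priority}] to {recipient}: {message}")

    # A generated call that guesses a parameter name wrong ("msg" instead of "message").
    proposed_args = {"recipient": "ops@example.com", "msg": "disk full"}

    # Binding against the real signature catches the mismatch without executing the call.
    try:
        inspect.signature(send_alert).bind(**proposed_args)
    except TypeError as err:
        print("proposed call does not match the real signature:", err)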
I think this is an important critique in part because it cuts against the argument that "LLMs are the modern compilers; if you reject them, you're just like the people who wanted to keep hand-writing assembly code, and you'll be just as obsolete." Compilers actually represented a great win for abstraction, encapsulation, and DRY in general, and they supported and are integral to open source development, whereas LLMs are set to do the opposite.
[1] To see what this looks like in action in prose, see the example on page 30 of the NYTimes copyright complaint against OpenAI.
#AI #GenAI #LLMs #VibeCoding