
2025-08-19 18:02:14
I think the Vibecoding reddit has accidentally stumbled on the best description of vibecoding:
It's "roleplay for guys [it is always guys] who want to feel like hackers without doing the hard part".
(Source: https://www.reddit.com/r/vibecoding/com…
is AI real life
is it just fantasy
caught up in the hype
no escape from reality
#ai #llm #vibecoding
"„Vibe Coding“ ist ein Euphemismus für planlose Nachlässigkeit. Der Begriff suggeriert intuitive, kreative Arbeit, versteckt aber den Verzicht auf methodische Sorgfalt und professionelle Standards. Diese Umdeutung macht Unfähigkeit zur Tugend."
(Original title: KI-Tool versteckt Inkompetenz)
ht…
How AI Vibe Coding Is Erasing Developers’ Skills
Developers believe AI is boosting their productivity, but it is actually weakening core coding skills. Vibe coding is creating a generation of devs who cannot debug, design, or solve problems without AI.
https://www.finalroundai.com/blog…
CAI Fluency: A Framework for Cybersecurity AI Fluency
Víctor Mayoral-Vilches, Jasmin Wachter, Cristóbal R. J. Veas Chavez, Cathrin Schachner, Luis Javier Navarrete-Lozano, María Sanz-Gómez
https://arxiv.org/abs/2508.13588
Swedish AI coding startup Lovable raised a $200M Series A led by Accel at a $1.8B valuation, making it Europe's newest unicorn (Jake Rudnitsky/Bloomberg)
https://www.bloomberg.com/news/articles/2025-07-17/swedish-vibe-coding-…
Wondering whether the analogy to "AI" learning works:
What's the point of "AI" feedback, if I still have to pay a teacher to fix it?
If we assume that learning is a process about as complex as programming, then we can carry the analogy over. Maybe "AI" learning is little more than roleplay for people who want to feel like teachers without doing the actual work.
Does the term "vibe learning" get us anywhere?
“The users who choose Cursor are hardcore vibe addicts. They are tech incompetents who somehow BSed their way into a developer job. They cannot code without a vibe coding bot.”
I see no lie.
https://pivot-to-ai.com/2025/07/09/cursor-t…
I just saw an all-caps instruction file that someone uses to 'instruct' an LLM to help with coding, and it's just "don't hallucinate", "check your work", "don't say you did something when you didn't" with multiple exclamation marks.
So, basically the whole 'vibe coding,' or having "AI" "help" with coding just devolves into shouting at your computer.
Which reminded me of something, and then it hit me!
#ai #llm #vibecoding
https://www.youtube.com/watch?v=q8SWMAQYQf0
I wonder if there is a difference in the perspective on "vibe coding" in the (somewhat political) Free Software movement vs. the more corporate, utilitarian Open Source movement.
Screen Reader Users in the Vibe Coding Era: Adaptation, Empowerment, and New Accessibility Landscape
Nan Chen, Luna K. Qiu, Arran Zeyu Wang, Zilong Wang, Yuqing Yang
https://arxiv.org/abs/2506.13270
Vibe coding: https://bramcohen.com/p/vibe-coding
“Vibe coding definitely changes the, uh, vibe of coding. Traditional programming feels like a cold uncaring computer calling you an idiot a thousand times a day. Of course the traditional environment isn’t capable of calling you an idiot so …
Wix acquires Base44, which lets users build apps from text prompts, for $80M; Base44 was founded by a solo entrepreneur six months ago and employs six people (Sophie Shulman/CTech)
https://www.calcalistech.com/ctechnews/article/s1iflnlelx
Would you have a vibe surgeon operate on you?
A vibe mechanic fix the plane you’re on?
A vibe engineer calculate load-bearing structure requirements for the bridge you’re driving over?
But you’re fine with vibe coding?
The infinite monkeys hammering on keyboards finally have a name; it's called "vibe coding"
🔧 Request Types
Vibe Requests: Conversational interactions, coding questions, documentation
Spec Requests: Structured development workflow executions
Clear metering system replacing complex token billing
⚡ Welcome Bonus Details
🎁 14-Day Trial
100 spec requests, 100 vibe requests
Clock starts on first Kiro usage
Available regardless of chosen tier
Unused requests roll over when upgrading
📋 Important Requirements
🔄 IDE Update Required…
Claude Code, Gemini CLI, and Codex CLI, terminal-based AI tools launched since February, have surprisingly gained ground on AI code editors with traditional UIs (Russell Brandom/TechCrunch)
https://techcrunch.com/2025/07/15/ai-coding-tools-a…
"We cannot preclude developers from “vibe coding” their way into a working application; but we can teach them how to properly integrate the very likely spaghetti mess produced by those bullshit machines, how to understand it, and how to make it work with today’s compilers, which, let us be honest: are the best we have ever had, and it would be a shame to ignore them completely."
One of the problems with vibe coding is that the hardest part of software engineering is not writing the code, rather it's *choosing* what to code, and designing the system (and, later on, maintaining the code/operations/etc)
The barriers and investment cost to writing code is itself a *desirable* aspect of software engineering because it forces you to make careful, good choices before you invest in building something
Because the majority of the time spent writing, say, curl,…
Should we teach vibe coding? Here's why not.
Should AI coding be taught in undergrad CS education?
1/2
I teach undergraduate computer science labs, including for intro and more-advanced core courses. I don't publish (non-negligible) scholarly work in the area, but I've got years of craft expertise in course design, and I do follow the academic literature to some degree. In other words, I'm not the world's leading expert, but I have spent a lot of time thinking about course design, and consider myself competent at it, with plenty of direct experience in what knowledge & skills I can expect from students as they move through the curriculum.
I'm also strongly against most uses of what's called "AI" these days (specifically, generative deep neural networks as supplied by our current cadre of techbros). There are a surprising number of completely orthogonal reasons to oppose the use of these systems, and a very limited number of reasonable exceptions (overcoming accessibility barriers is an example). On the grounds of environmental and digital-commons-pollution costs alone, using specifically the largest/newest models is unethical in most cases.
But as any good teacher should, I constantly question these evaluations, because I worry about the impact on my students should I eschew teaching relevant tech for bad reasons (and even for good reasons). I also want to make my reasoning clear to students, who should absolutely question me on this. That inspired me to ask a simple question: ignoring for one moment the ethical objections (which we shouldn't, of course; they're very stark), at what level in the CS major could I expect to teach a course about programming with AI assistance, and expect students to succeed at a more technically demanding final project than a course at the same level where students were banned from using AI? In other words, at what level would I expect students to actually benefit from AI coding "assistance?"
To be clear, I'm assuming that students aren't using AI in other aspects of coursework: the topic of using AI to "help you study" is a separate one (TL;DR its gross value is not negative, but it's mostly not worth the harm to your metacognitive abilities, which AI-induced changes to the digital commons are making more important than ever).
So what's my answer to this question?
If I'm being incredibly optimistic, senior year. Slightly less optimistic, second year of a masters program. Realistic? Maybe never.
The interesting bit for you-the-reader is: why is this my answer? (Especially given that students would probably self-report significant gains at lower levels.) To start with, [this paper where experienced developers thought that AI assistance sped up their work on real tasks when in fact it slowed it down](https://arxiv.org/abs/2507.09089) is informative. There are a lot of differences in task between experienced devs solving real bugs and students working on a class project, but it's important to understand that we shouldn't have a baseline expectation that AI coding "assistants" will speed things up in the best of circumstances, and we shouldn't trust self-reports of productivity (or the AI hype machine in general).
Now we might imagine that coding assistants will be better at helping with a student project than at helping with fixing bugs in open-source software, since it's a much easier task. For many programming assignments that have a fixed answer, we know that many AI assistants can just spit out a solution based on prompting them with the problem description (there's another elephant in the room here to do with learning outcomes regardless of project success, but we'll ignore this one too; my focus here is on project-complexity reach, not learning outcomes). My question is about more open-ended projects, not assignments with an expected answer. Here's a second study (by one of my colleagues) about novices using AI assistance for programming tasks. It showcases how difficult it is to use AI tools well, and some of the stumbling blocks that novices in particular face.
But what about intermediate students? Might there be some level where the AI is helpful because the task is still relatively simple and the students are good enough to handle it? The problem with this is that as task complexity increases, so does the likelihood of the AI generating (or copying) code that uses more complex constructs which a student doesn't understand. Let's say I have second-year students writing interactive websites with JavaScript. Without a lot of care that those students don't know how to deploy, the AI is likely to suggest code that depends on several different frameworks, from React to jQuery, without actually setting up or including those frameworks, and of course these students would be way out of their depth trying to do that. This is a general problem: each programming class carefully limits the specific code frameworks and constructs it expects students to know based on the material it covers. There is no feasible way to limit an AI assistant to a fixed set of constructs or frameworks, using current designs. There are alternate designs where this would be possible (like AI search through adaptation from a controlled library of snippets), but those would be entirely different tools.
So what happens on a sizeable class project where the AI has dropped in buggy code, especially if it uses code constructs the students don't understand? Best case, they understand that they don't understand and re-prompt, or quickly ask an instructor or TA for help, and with that help get rid of the stuff they don't understand and re-prompt, or manually add stuff they do understand. Average case: they waste several hours and/or sweep the bugs partly under the rug, resulting in a project with significant defects. Students in their second and even third years of a CS major still have a lot to learn about debugging, and usually have significant gaps in their knowledge of even their most comfortable programming language. I do think that, regardless of AI, we as teachers need to get better at teaching debugging skills, but the knowledge gaps are inevitable because there's just too much to know. In Python, for example, the LLM is going to spit out yields, async functions, try/finally, maybe even something like a while/else, or, with recent training data, the walrus operator. I can't expect even a fraction of 3rd-year students who have worked with Python since their first year to know about all these things, and based on how students approach projects where they have studied all the relevant constructs but have forgotten some, I'm not optimistic that these things will magically become learning opportunities. Student projects are better off working with a limited subset of full programming languages that the students have actually learned, and using AI coding assistants as currently designed makes this impossible. Beyond that, even when the "assistant" just introduces bugs using syntax the students understand, even through their 4th year many students struggle to understand the operation of moderately complex code they've written themselves, let alone code written by someone else. Having access to an AI that will confidently offer incorrect explanations for bugs will make this worse.
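For readers who haven't met them, here's a contrived sketch (invented function names, not from any real assignment) of two of the Python constructs mentioned above that an LLM will happily emit and that many students won't have seen:

```python
# while/else: the else block runs only when the loop condition becomes
# false, i.e. when the loop was NOT exited via break or return.
def find_negative(nums):
    i = 0
    while i < len(nums):
        if nums[i] < 0:
            return i  # index of first negative number
        i += 1
    else:
        return -1  # loop ran to completion: no negative found

# Walrus operator (Python 3.8+): bind and test a value in one expression.
def first_long_word(words, n=5):
    for w in words:
        if (length := len(w)) > n:
            return w, length
    return None

assert find_negative([3, 1, -4]) == 2
assert find_negative([1, 2]) == -1
assert first_long_word(["hi", "worlds"]) == ("worlds", 6)
```

Both are perfectly legal Python, and both are exactly the kind of thing a student who has only covered basic loops and conditionals will stare at blankly when it shows up in "their" code.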
To be sure, a small minority of students will be able to overcome these problems, but that minority is the group that has a good grasp of the fundamentals and has broadened their knowledge through self-study, which earlier AI-reliant classes would make less likely to happen. In any case, I care about the average student, since we already have plenty of stuff about our institutions that makes life easier for a favored few while being worse for the average student (note that our construction of that favored few as the "good" students is a large part of this problem).
To summarize: because AI assistants introduce excess code complexity and difficult-to-debug bugs, they'll slow down rather than speed up project progress for the average student on moderately complex projects. On a fixed deadline, they'll result in worse projects, or necessitate less ambitious project scoping to ensure adequate completion, and I expect this remains broadly true through 4-6 years of study in most programs (don't take this as an endorsement of AI "assistants" for masters students; we've ignored a lot of other problems along the way).
There's a related problem: solving open-ended project assignments well ultimately depends on deeply understanding the problem, and AI "assistants" allow students to put a lot of code in their file without spending much time thinking about the problem or building an understanding of it. This is awful for learning outcomes, but also bad for project success. Getting students to see the value of thinking deeply about a problem is a thorny pedagogical puzzle at the best of times, and allowing the use of AI "assistants" makes the problem much much worse. This is another area I hope to see (or even drive) pedagogical improvement in, for what it's worth.
1/2
Sounds like Bitchat was truly vibe coded and the key system in place was just not doing anything.
There’s a future to mesh and decentralized messaging protocol. But you can’t vibe code that shit.
https://www.supernetworks.org/pages/blog/agentic…
»Reality check: Microsoft Azure CTO pushes back on AI vibe coding hype, sees 'upper limit'«
Blind trust is never good, but unfortunately too many people are mentally blind.
🧑💻 https://www.geekwire.com/2025/reality-chec
"Vibe coding" is one of the dumbest ideas that I've heard in a long time.
Yes, there are often reasons to use tools (such as Knuth's books) to look up methods. And well-established and tested libraries are great. (Thank you, "numpy".)
But "vibe coding" just turns programmers into little more than proofreaders - proofreading complex and often boring material. That is putting the cart before the horse. (Is there a modern phrase for that adage?…
After months of coding with an #LLM I'm going back to using my brain
https://simonwillison.net/2025/May/20/after-months-of-coding-with-llms/#ato…
vibe coding is a recession indicator
Check out today's Metacurity for the most critical infosec developments you might have missed over the weekend, including
--German police ID Trickbot's "Stern,"
--BitMEX thwarts Lazarus Group attack,
--Shin Bet thwarted 85 Iranian cyberattacks aimed at civilians,
--Vibe coding app Lovable failed to fix critical flaw,
--China's quantum satellite Micius has a severe security flaw,
--Russia's GRU Unit 29155 has a hacker team,
--…
Vibe Coding as a Reconfiguration of Intent Mediation in Software Development: Definition, Implications, and Research Agenda
Christian Meske, Tobias Hermanns, Esther von der Weiden, Kai-Uwe Loser, Thorsten Berger
https://arxiv.org/abs/2507.21928
Amazon launches Kiro, an IDE that aims to bridge the gap between rapidly vibe-coded prototypes and production-ready systems (Todd Bishop/GeekWire)
https://www.geekwire.com/2025/amazon-targets-vibe-coding-chaos-with-new-kiro-ai-softwa…
Vibe coding be like:
https://youtube.com/shorts/n3PoPrMJyes?si=Gymn1eDm-OyGdhwI
With so much hype and recent articles on "AI for coding" and how everyone not doing it is dumb maybe this is a good time to relink my article on "Vibe Coding".
Which, I think, focuses purely on "output", when developing or creating something is not just about the output.
https://tan…
This spin reminds me of when we looked at the script kiddies trying to hack our DEC cluster with copied Linux exploits 😅
https://mastodon.social/@dw_innovation/114663563572107178
Mistral releases Mistral Code, a "vibe coding" client forked from open source project Continue, in private beta on JetBrains platforms and VS Code (Kyle Wiggers/TechCrunch)
https://techcrunch.com/2025/06/04/mistral-releases-a-vibe-co…
For us who have been advocating for ethics (and security) as part of the curriculum for digital creators, vibe coding is a multiplier of harm.
There is no longer a curriculum. No chatbot is going to teach you what can go wrong. And any creator gets a free scapegoat.
”The chatbot did it!”
Great parable by @…
#AI #vibe_coding #dev
I’ve been trying to “vibe code” a couple of things over the last few days, but I had to fall back on using my brain (like a sucker!).
I should keep some of this coding knowledge knocking around a little longer before cat videos and memes fill those parts of my brain.
My vibe must be off :P
Vibe coding: programming through conversation with artificial intelligence
Advait Sarkar, Ian Drosos
https://arxiv.org/abs/2506.23253 https://
Nothing but vibe coding everywhere.... #flippstevölligaus
This story is cute: A malicious "Solidity" (that's the smart contract language Ethereum and other blockchains use) extension for Cursor, the Vibe-Coding Editor included code that steals your tokens/coins.
I find it funny for two reasons:
- Blockchainers love talking about how you need to verify things you interact with but someone wasn't checking if they have the right extension
- Programming smart contracts is hard because it's a massively hostile envir…
…In my day we called it Mel Frequency Cepstral Coefficients
Or just a Fast Fourier Transform
Uphill, both ways, in the snow
#dadjoke #speechrecognition
Billionaires Convince Themselves AI Chatbots Are Close to Making New Scientific Discoveries
https://gizmodo.com/billionaires-convince-themselves-ai-is-close-to-making-new-scientific-discoveries-2000629060
How to tell if a vibe coder is lying when they say they check their code.
People who will admit to using LLMs to write code will usually claim that they "carefully check" the output, since we all know that LLM code has a lot of errors in it. This is insufficient to address several problems that LLMs cause, including labor issues, digital commons stress/pollution, license violations, and environmental issues; but at least if they're checking their code carefully, we shouldn't assume that it's any worse quality-wise than human-authored code, right?
Well, from principles alone we can expect it to be worse, since checking code the AI wrote is a much more boring task than writing code yourself, so anyone who has ever studied human-computer interaction even a little bit can predict that people will quickly slack off, starting to trust the AI way too much, because it's less work. In a different domain, the journalist who published an entire "summer reading list" full of nonexistent titles is a great example of this. I'm sure he also intended to carefully check the AI output, but then got lazy. Clearly he did not have a good grasp of the likely failure modes of the tool he was using.
But for vibe coders, there's one easy tell we can look for, at least in some cases: coding in Python without type hints. To be clear, this doesn't apply to novice coders, who might not be aware that type hints are an option. But any serious Python software engineer, whether they used type hints before or not, would know that they're an option. And if you know they're an option, you also know they're an excellent tool for catching code defects, with a very low effort:reward ratio, especially if we assume an LLM generates them. Of the cases where adding types requires any thought at all, 95% of them offer chances to improve your code design and make it more robust. Knowing about but not using type hints in Python is a great sign that you don't care very much about code quality. That's totally fine in many cases: I've got a few demos or jam games in Python with no type hints, and it's okay that they're buggy. I was never going to debug them to a polished level anyways. But if we're talking about a vibe coder who claims that they're taking extra care to check for the (frequent) LLM-induced errors, that's not the situation.
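As a minimal illustration of the effort:reward ratio (the function and its caller here are hypothetical, not from any particular project): one `Optional[int]` in a signature is enough for a checker like mypy to catch a whole class of defects before the program runs.

```python
from typing import Optional

def parse_port(value: str) -> Optional[int]:
    """Return the port as an int, or None when value isn't numeric."""
    return int(value) if value.isdigit() else None

# The Optional[int] hint is the whole trick: a type checker such as mypy
# now flags any caller that does arithmetic on the result without first
# handling None, e.g. `parse_port(cfg) + 1`. Without the hint, that bug
# only surfaces as a TypeError at runtime, on the one bad input.
assert parse_port("8080") == 8080
assert parse_port("oops") is None
```

The hint costs one annotation (which an LLM will even generate for you); skipping it anyway says something about how carefully the output is really being checked.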
Note that this shouldn't be read as an endorsement of vibe coding for demos or other rough-is-acceptable code: the other ethical issues I skipped past at the start still make it unethical to use in all but a few cases (for example, I have my students use it for a single assignment so they can see for themselves how it's not all it's cracked up to be, and even then they have an option to observe a pre-recorded prompt session instead).
does this need to say “professional vibe coding” or “vibe coding at work” or do we understand it without those qualifiers
Source: Windsurf's gross margins are "very negative"; many believe the same margin pressure is impacting Cursor, Lovable, Replit, and other vibe coding tools (Marina Temkin/TechCrunch)
https://techcrunch.com/2025/08/07/the-high-…
The amount of "friends" who left after I told them I wouldn't build their "startup idea" for free, is big.
The proliferation of vibe coding tools will reduce their occurrence in the future ever further. What won't change is their total amount of success stories: zero.
The 81st edition of De Programmatica Ipsum is out!
This month, we worry about the perils of vibe coding in the minds of new generations of software developers; in the Library section, we review "Geek Sublime" by Vikram Chandra; and in our Vidéothèque section, we watch a 1986 interview of Grace Hopper at "Late Night with David Letterman".
I'll refrain from making any comment on this…
»#Coding-AI #panics and deliberately deletes entire #database:
The AI tool that, while
Google is testing a vibe-coding tool called Opal that lets users create mini web apps using text prompts or remix existing apps, available in the US via Labs (Ivan Mehta/TechCrunch)
https://techcrunch.com/2025/07/25/google-is-testing-a-vibe-coding-app-calle…
Exploring Student-AI Interactions in Vibe Coding
Francis Geng, Anshul Shah, Haolin Li, Nawab Mulla, Steven Swanson, Gerald Soosai Raj, Daniel Zingaro, Leo Porter
https://arxiv.org/abs/2507.22614
"Vibe Coding Fiasco: AI Agent Goes Rogue, Deletes Company's Entire Databas"
#karma
"As with many digital [vibe coding] doesn't fully stand up to scrutiny but show a deep misunderstanding of how software is made, the potential externalities (and internalities) software brings and a disdain for experience and embodied knowledge."
(Original title: On “Vibe Coding”)
https://tant…
Should we teach vibe coding? Here's why not.
2/2
To address the bigger question I started with ("should we teach AI-"assisted" coding?"), my answer is: "No, except enough to show students directly what its pitfalls are." We have little enough time as it is to cover the core knowledge that they'll need, which has become more urgent now that they're going to be expected to clean up AI bugs and they'll have less time to develop an understanding of the problems they're supposed to be solving. The skill of prompt engineering & other skills of working with AI are relatively easy to pick up on your own, given a decent not-even-mathematical understanding of how a neural network works, which is something we should be giving to all students, not just our majors.
Reasonable learning objectives for CS majors might include explaining what types of bugs an AI "assistant" is most likely to introduce, explaining the difference between software engineering and writing code, explaining why using an AI "assistant" is likely to violate open-source licenses, listing at least three independent ethical objections to contemporary LLMs and explaining the evidence for/reasoning behind them, explaining why we should expect AI "assistants" to be better at generating code from scratch than at fixing bugs in existing code (and why they'll confidently "claim" to have fixed problems they haven't), and even fixing bugs in AI-generated code (without AI "assistance").
If we lived in a world where the underlying environmental, labor, and data commons issues with AI weren't as bad, or if we could find and use systems that effectively mitigate these issues (there's lots of piecemeal progress on several of these) then we should probably start teaching an elective on coding with an assistant to students who have mastered programming basics, but such a class should probably spend a good chunk of time on non-assisted debugging.
#AI #LLMs #VibeCoding
This resonates 50% with me. But for the other 50%: I feel like you and your manager have to become more the architects and less the line-of-code checkers. Also, thinking about tests and edge cases is even more important now. https://exquisite.social/@thomholwerda/114959217780568638…
Enterprise vibe coding startup Superblocks raised a $23M Series A, bringing its total funding to $60M, and launched an enterprise coding AI agent called Clark (Julie Bort/TechCrunch)
https://techcrunch.com/2025/06/07/supe…
«In the single most damning thing I can say about Proton in 2025, the Proton GitHub repository has a “cursorrules” file. They’re vibe-coding their public systems. Much secure!»
oh fuckin' hell, makes you wonder how their non-public stuff is being made 🫠
https://pivot-to-ai.com/2025/08/02/protons-lumo-ai-chatbot-not-end-to-end-encrypted-not-open-source/
All those “holodeck malfunction” episodes on Star Trek are actually documentaries about vibe coding
Another “Long Links” curation of long-form works that probably nobody has time for all of, but one or two of which might enrich your life. Featuring: Population shrinkage, “evitability of enshittification”, vibe-coding tales, the usual cosmology fun, a story 70 years in the making, and Equus asinus. https://www.
A Replit employee details a critical security flaw in web apps created using AI-powered app builder Lovable that exposes API keys and personal info of app users (Reed Albergotti/Semafor)
https://www.semafor.com/article/05/29/2025
WebID Solutions for identity verification on contract documents. Take a photo of your ID and upload it for automated verification via AI. The ID is not recognized as valid. You can retry, but you cannot reach a human at this step.
I suspect the AI can't cope with the address sticker from the registration office.
Filled out the support document with email etc. Received no copy of the support request.
What garbage. Is everyone working with nothing but vibe coding now!?
A profile of vibe coding startup Lovable, which became the fastest-growing software startup in history, reaching $100M in annualized revenue in eight months (Iain Martin/Forbes)
https://www.forbes.com/sites/iainmartin/20
User-Centered Design with AI in the Loop: A Case Study of Rapid User Interface Prototyping with "Vibe Coding"
Tianyi Li, Tanay Maheshwari, Alex Voelker
https://arxiv.org/abs/2507.21012
I’m really wondering what will happen to my field when the old programmers are all gone or have been fired, all those people who give a shit about the craft.
When designing software has been replaced by vibe coding and endless layer cakes of bullshit.
I feel like that time is closer than most people think.
AI coding startups are at risk of being disrupted by Google, Microsoft, and OpenAI; source: Microsoft's GitHub Copilot grew to over $500M in revenue last year (Reuters)
https://www.reuters.com/business/ai-vibe-codi…
Ouch, that's the type of data leak that's a) horrific and b) likely to happen more often. Between governments being horny for identity & verification via photo IDs and unethical wannabe "techbros" vibe-coding, the future is bleak:
«Women Dating Safety App 'Tea' Breached, Users' IDs Posted to 4chan»
https://www.404media.co/women-dating-safety-app-tea-breached-users-ids-posted-to-4chan/
How Software Engineers Engage with AI: A Pragmatic Process Model and Decision Framework Grounded in Industry Observations
Vahid Garousi, Zafar Jafarov
https://arxiv.org/abs/2507.17930
Replit announces a partnership with Microsoft to make its platform available in the Azure Marketplace and integrate its tech with some Microsoft cloud services (Julie Bort/TechCrunch)
https://techcrunch.com/2025/07/08/in-a-blow-to-google…
Sources: Sweden-based AI-powered app builder Lovable is set to raise $150M led by Accel at a ~$1.8B valuation, as investors rush to back "vibe coding" startups (Financial Times)
https://www.ft.com/content/01bc8e7e-6c45-4348-b89f-00e091149531
Anysphere launches Bugbot, an AI-powered tool that integrates with GitHub to detect coding errors introduced by humans or AI agents, for $40 per month per user (Lauren Goode/Wired)
https://www.wired.com/story/cursor-releases-new-ai-tool-for-debugging-code/
The projects functionality in chatgpt shows promise but lacks some basic functionality - e.g. sorting, reordering, nesting.
Is anyone aware of code out there to manage / override the projects UI?
I've done some basic vibe coding but haven't had great success.
Strange to me, but they only show a few projects at a time, you have to paginate through them and the API grunts about auth.
I see some projects on github, etc. but I'm not seeing anything focused on projects particularly.
#question #ai #llm #projects #chatgpt #vibecoding #openai
LLM coding is the opposite of DRY
An important principle in software engineering is DRY: Don't Repeat Yourself. We recognize that having the same code copied in more than one place is bad for several reasons:
1. It makes the entire codebase harder to read.
2. It increases maintenance burden, since any problems in the duplicated code need to be solved in more than one place.
3. Because it becomes possible for the copies to drift apart if changes to one aren't transferred to the other (maybe the person making the change has forgotten there was a copy) it makes the code more error-prone and harder to debug.
All modern programming languages make it almost entirely unnecessary to repeat code: we can move the repeated code into a "function" or "module" and then reference it from all the different places it's needed. At a larger scale, someone might write an open-source "library" of such functions or modules and instead of re-implementing that functionality ourselves, we can use their code, with an acknowledgement. Using another person's library this way is complicated, because now you're dependent on them: if they stop maintaining it or introduce bugs, you've inherited a problem, but still, you could always copy their project and maintain your own version, and it would be not much more work than if you had implemented stuff yourself from the start. It's a little more complicated than this, but the basic principle holds, and it's a foundational one for software development in general and the open-source movement in particular. The network of "citations" as open-source software builds on other open-source software and people contribute patches to each others' projects is a lot of what makes the movement into a community, and it can lead to collaborations that drive further development. So the DRY principle is important at both small and large scales.
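The move from repeated code to a "function" can be sketched in a few lines (a toy example with invented names, purely to illustrate the principle):

```python
# Before: the same conversion-and-formatting logic copied twice.
def report_disk(used_mb: float) -> str:
    return f"disk: {used_mb / 1024:.1f} GiB"

def report_ram(used_mb: float) -> str:
    return f"ram: {used_mb / 1024:.1f} GiB"

# After (DRY): the shared logic lives in exactly one function, so a
# future fix (a rounding change, a unit change) happens in one place
# and cannot drift between copies.
def format_gib(used_mb: float) -> str:
    return f"{used_mb / 1024:.1f} GiB"

def report(label: str, used_mb: float) -> str:
    return f"{label}: {format_gib(used_mb)}"

assert report("disk", 2048) == report_disk(2048) == "disk: 2.0 GiB"
```

A library is the same move at a larger scale: `format_gib` becomes someone else's published, maintained code that many projects reference instead of re-implementing.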
Unfortunately, the current crop of hyped-up LLM coding systems from the big players are antithetical to DRY at all scales:
- At the library scale, they train on open-source software but then (with some unknown frequency) replicate parts of it line-for-line *without* any citation [1]. The person who was using the LLM has no way of knowing that this happened, or even any way to check for it. In theory the LLM company could build a system for this, but it's not likely to be profitable unless the courts actually start punishing these license violations, which doesn't seem likely based on results so far and the difficulty of finding out that the violations are happening. By creating these copies (and also mash-ups, along with lots of less-problematic stuff), the LLM users (enabled and encouraged by the LLM-peddlers) are directly undermining the DRY principle. If we get what the big AI companies claim to want, which is a massive shift towards machine-authored code, DRY at the library scale will effectively be dead, with each new project simply re-implementing the functionality it needs instead of ever using a library. This might seem to have some upside, since dependency hell is a thing, but the downside in terms of comprehensibility, and therefore maintainability, correctness, and security, will be massive. The eventual lack of new high-quality DRY-respecting code to train the models on will only make this problem worse.
- At the module & function level, AI is probably prone to re-writing rather than re-using the functions it needs, especially with a workflow where a human prompts it for many independent completions. This part I don't have direct evidence for, since I don't use LLM coding models myself except in very specific circumstances, because it's not generally ethical to do so. I do know that when it tries to call existing functions, it often guesses incorrectly about the parameters they need, which I'm sure is a headache and a source of bugs for the vibe coders out there. An AI could be designed to take more context into account and use existing lookup tools to get accurate function signatures and use them when generating function calls, but even though that would probably significantly improve output quality, I suspect it's the kind of thing that would be seen as too baroque and thus not a priority. I'd love to hear I'm wrong about any of this, but I suspect the consequences are that any medium-or-larger-sized codebase written with LLM tools will have significant bloat from duplicate functionality, and will have places where better use of existing libraries would have made the code simpler. At a fundamental level, a principle like DRY is not something that current LLM training techniques are able to learn, and while they can imitate it from their training sets to some degree when asked for large amounts of code, when prompted for many smaller chunks they're asymptotically likely to violate it.
I think this is an important critique in part because it cuts against the argument that "LLMs are the modern compilers; if you reject them you're just like the people who wanted to keep hand-writing assembly code, and you'll be just as obsolete." Compilers actually represented a great win for abstraction, encapsulation, and DRY in general, and they supported and are integral to open-source development, whereas LLMs are set to do the opposite.
[1] To see what this looks like in action in prose, see the example on page 30 of the NYTimes copyright complaint against OpenAI.
#AI #GenAI #LLMs #VibeCoding