Tootfinder

Opt-in global Mastodon full text search. Join the index!

@tiotasram@kolektiva.social
2025-08-04 15:49:00

Should we teach vibe coding? Here's why not.
Should AI coding be taught in undergrad CS education?
1/2
I teach undergraduate computer science labs, including for intro and more-advanced core courses. I don't publish (non-negligible) scholarly work in the area, but I've got years of craft expertise in course design, and I do follow the academic literature to some degree. In other words, I'm not the world's leading expert, but I have spent a lot of time thinking about course design, and consider myself competent at it, with plenty of direct experience in what knowledge & skills I can expect from students as they move through the curriculum.
I'm also strongly against most uses of what's called "AI" these days (specifically, generative deep neural networks as supplied by our current cadre of techbros). There are a surprising number of completely orthogonal reasons to oppose the use of these systems, and a very limited number of reasonable exceptions (overcoming accessibility barriers is an example). On the grounds of environmental and digital-commons-pollution costs alone, using specifically the largest/newest models is unethical in most cases.
But as any good teacher should, I constantly question these evaluations, because I worry about the impact on my students should I eschew teaching relevant tech for bad reasons (and even for good reasons). I also want to make my reasoning clear to students, who should absolutely question me on this. That inspired me to ask a simple question: ignoring for one moment the ethical objections (which we shouldn't, of course; they're very stark), at what level in the CS major could I expect to teach a course about programming with AI assistance, and expect students to succeed at a more technically demanding final project than a course at the same level where students were banned from using AI? In other words, at what level would I expect students to actually benefit from AI coding "assistance?"
To be clear, I'm assuming that students aren't using AI in other aspects of coursework: the topic of using AI to "help you study" is a separate one (TL;DR: its gross value is not negative, but it's mostly not worth the harm to your metacognitive abilities, which AI-induced changes to the digital commons are making more important than ever).
So what's my answer to this question?
If I'm being incredibly optimistic, senior year. Slightly less optimistic, second year of a masters program. Realistic? Maybe never.
The interesting bit for you-the-reader is: why is this my answer? (Especially given that students would probably self-report significant gains at lower levels.) To start with, [this paper where experienced developers thought that AI assistance sped up their work on real tasks when in fact it slowed it down](arxiv.org/abs/2507.09089) is informative. There are a lot of differences in task between experienced devs solving real bugs and students working on a class project, but it's important to understand that we shouldn't have a baseline expectation that AI coding "assistants" will speed things up in the best of circumstances, and we shouldn't trust self-reports of productivity (or the AI hype machine in general).
Now we might imagine that coding assistants will be better at helping with a student project than at fixing bugs in open-source software, since it's a much easier task. For many programming assignments that have a fixed answer, we know that many AI assistants can just spit out a solution based on prompting them with the problem description (there's another elephant in the room here to do with learning outcomes regardless of project success, but we'll ignore this one too; my focus here is on project complexity and reach, not learning outcomes). My question is about more open-ended projects, not assignments with an expected answer. Here's a second study (by one of my colleagues) about novices using AI assistance for programming tasks. It showcases how difficult it is to use AI tools well, and some of the stumbling blocks that novices in particular face.
But what about intermediate students? Might there be some level where the AI is helpful because the task is still relatively simple and the students are good enough to handle it? The problem with this is that as task complexity increases, so does the likelihood of the AI generating (or copying) code that uses more complex constructs which a student doesn't understand. Let's say I have second-year students writing interactive websites with JavaScript. Without a lot of careful prompting, which those students don't know how to do, the AI is likely to suggest code that depends on several different frameworks, from React to jQuery, without actually setting up or including those frameworks, and of course these students would be way out of their depth trying to do that. This is a general problem: each programming class carefully limits the specific code frameworks and constructs it expects students to know based on the material it covers. There is no feasible way to limit an AI assistant to a fixed set of constructs or frameworks using current designs. There are alternate designs where this would be possible (like AI search through adaptation from a controlled library of snippets), but those would be entirely different tools.
So what happens on a sizeable class project where the AI has dropped in buggy code, especially if it uses code constructs the students don't understand? Best case, they realize that they don't understand it and re-prompt, or quickly ask for help from an instructor or TA, who helps them get rid of the stuff they don't understand and re-prompt, or manually add stuff they do understand. Average case: they waste several hours and/or sweep the bugs partly under the rug, resulting in a project with significant defects. Students in their second and even third years of a CS major still have a lot to learn about debugging, and usually have significant gaps in their knowledge of even their most comfortable programming language. I do think regardless of AI we as teachers need to get better at teaching debugging skills, but the knowledge gaps are inevitable because there's just too much to know. In Python, for example, the LLM is going to spit out yields, async functions, try/finally, maybe even something like a while/else, or with recent training data, the walrus operator. I can't expect even a fraction of 3rd-year students who have worked with Python since their first year to know about all these things, and based on how students approach projects where they have studied all the relevant constructs but have forgotten some, I'm not optimistic that these things will magically become learning opportunities. Student projects are better off working within a limited subset of a full programming language that the students have actually learned, and using AI coding assistants as currently designed makes this impossible. Beyond that, even when the "assistant" just introduces bugs using syntax the students understand, even through their 4th year many students struggle to understand the operation of moderately complex code they've written themselves, let alone code written by someone else. Having access to an AI that will confidently offer incorrect explanations for bugs will make this worse.
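To make that concrete, here's a hypothetical sketch (my own, for illustration; the task and function names are invented) of the kind of output an LLM might produce for a simple "find the first number over a threshold in a file" assignment. It's correct, but it packs a generator yield, the walrus operator, while/else, and try/finally into a dozen lines, any one of which could be new to a 3rd-year student:

```python
def read_numbers(path):
    """Yield one float per non-blank line of the file."""
    with open(path) as f:
        for line in f:
            if (stripped := line.strip()):  # walrus: assign and test at once
                yield float(stripped)

def first_over(path, threshold):
    """Return the first number in the file above threshold, or None."""
    nums = read_numbers(path)
    try:
        while (n := next(nums, None)) is not None:
            if n > threshold:
                return n
        else:  # while/else: runs only when the loop ends without break
            return None
    finally:
        nums.close()  # explicitly close the generator either way
```

A student who only knows lists and for-loops can't easily debug this, even though a three-line loop over `f.readlines()` would have done the same job in constructs they actually know.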
To be sure, a small minority of students will be able to overcome these problems, but that minority is the group that has a good grasp of the fundamentals and has broadened their knowledge through self-study, which earlier AI-reliant classes would make less likely to happen. In any case, I care about the average student, since we already have plenty of stuff about our institutions that makes life easier for a favored few while being worse for the average student (note that our construction of that favored few as the "good" students is a large part of this problem).
To summarize: because AI assistants introduce excess code complexity and difficult-to-debug bugs, they'll slow down rather than speed up project progress for the average student on moderately complex projects. On a fixed deadline, they'll result in worse projects, or necessitate less ambitious project scoping to ensure adequate completion, and I expect this remains broadly true through 4-6 years of study in most programs (don't take this as an endorsement of AI "assistants" for masters students; we've ignored a lot of other problems along the way).
There's a related problem: solving open-ended project assignments well ultimately depends on deeply understanding the problem, and AI "assistants" allow students to put a lot of code in their file without spending much time thinking about the problem or building an understanding of it. This is awful for learning outcomes, but also bad for project success. Getting students to see the value of thinking deeply about a problem is a thorny pedagogical puzzle at the best of times, and allowing the use of AI "assistants" makes the problem much much worse. This is another area I hope to see (or even drive) pedagogical improvement in, for what it's worth.
1/2

@StephenRees@mas.to
2025-05-30 16:53:28

Put away pipelines, go with grids!
By David Suzuki with contributions from Senior Editor and Writer Ian Hanington
It’s good to see ideas such as increasing self-sufficiency and diversifying trade partners emerging in response to U.S. attacks on Canada’s economy and threats to our sovereignty. As usual, though, the fossil fuel industry and its supporters are taking advantage of this “crisis” to push for more oil and gas infrastructure, particularly pipelines.

@midtsveen@social.linux.pizza
2025-07-06 01:38:11

Real freedom comes from the people themselves, not from any government or state authority. It does not matter if the flag they wave is red with a hammer and sickle or bears a swastika. Both are symbols of oppressive regimes that crush true liberation. Genuine change cannot be handed down from above. It must arise organically from the grassroots, from workers and communities organizing themselves directly.
Vanguardism is a scam designed to concentrate power in the hands of a self-appoin…

@sonnets@bots.krohsnest.com
2025-07-05 11:25:12

Sonnet 110 - CX
Alas! 'tis true, I have gone here and there,
And made my self a motley to the view,
Gored mine own thoughts, sold cheap what is most dear,
Made old offences of affections new;
Most true it is, that I have looked on truth
Askance and strangely; but, by all above,
These blenches gave my heart another youth,
And worse essays proved thee my best of love.
Now all is done, have what shall have no end:
Mine appetite I n…

@tiotasram@kolektiva.social
2025-07-28 10:41:42

How popular media gets love wrong
Had some thoughts in response to a post about loneliness on here. As the author emphasized, reassurances from people who got lucky are not terribly comforting to those who didn't, especially when the person who was lucky had structural factors in their favor that made their chances of success much higher than those of their audience. So: these are just my thoughts, and may not have any bearing on your life. I share them because my experience challenged a lot of the things I was taught to believe about love, and I think my current beliefs are both truer and would benefit others seeking companionship.
We're taught in many modern societies from an absurdly young age that love is not something under our control, and that dating should be a process of trying to kindle love with different people until we meet "the one" with whom it takes off. In the slightly-less-fairytale corners of modern popular media, we might find an admission that it's possible to influence love, feeding & tending the fire in better or worse ways. But it's still modeled as an uncontrollable force of nature, to be occasionally influenced but never tamed. I'll call this the "fire" model of love.
We're also taught (and non-boys are taught more stringently) a second contradictory model of love: that in a relationship, we need to both do things and be things in order to make our partner love us, and that if we don't, our partner's love for us will wither, and (especially if you're not a boy) it will be our fault. I'll call this the "appeal" model of love.
Now obviously both of these cannot be totally true at once, and plenty of popular media centers this contradiction, but there are really very few competing models on offer.
In my experience, however, it's possible to have "pre-meditated" love. In other words, to decide you want to love someone (or at least, try loving them), commit to that idea, and then actually wind up in love with them (and them with you, although obviously this second part is not directly under your control). I'll call this the "engineered" model of love.
Now, I don't think that the "fire" and "appeal" models of love are totally wrong, but I do feel their shortcomings often suggest poor & self-destructive relationship strategies. I do think the "fire" model is a decent model for *infatuation*, which is something a lot of popular media blurs into love, and which drives many (but not all) of the feelings we normally associate with love (even as those feelings have other possible drivers too). I definitely experienced strong infatuation early on in my engineered relationship (ugh, that sounds terrible, but I'll stick with it; I promise no deception was involved). I continue to experience mild infatuation years later that waxes and wanes. It's not a stable foundation for a relationship, but it can be a useful component of one (this, at least, popular media depicts often).
I'll continue these thoughts in a reply, but it might take a bit to get to it.
#relationships

@arXiv_csAI_bot@mastoxiv.page
2025-07-01 09:50:33

Improving Rationality in the Reasoning Process of Language Models through Self-playing Game
Pinzheng Wang, Juntao Li, Zecheng Tang, Haijia Gui, Min Zhang
arxiv.org/abs/2506.22920

@nelson@tech.lgbt
2025-06-21 17:32:13

Calamus 40 That shadow, my likeness
An odd little poem to find in Calamus. I like it, I connect to the existential doubt.
How often I question and doubt whether that is really me
The self Whitman is unsure of is the quotidian self, the one that works and talks and shops. What does he embrace as the real him?
among my lovers, and carolling my songs, I never doubt whether that is really me.
There's our lusty Whitman, finding his true self in his lovers and his poetry.
PS: this poem introduced me to the lovely word chaffer.

@arXiv_csHC_bot@mastoxiv.page
2025-07-31 07:59:11

Towards Privacy-preserving Photorealistic Self-avatars in Mixed Reality
Ethan Wilson, Vincent Bindschaedler, Sophie Jörg, Sean Sheikholeslam, Kevin Butler, Eakta Jain
arxiv.org/abs/2507.22153

@midtsveen@social.linux.pizza
2025-06-03 17:38:26

The reason I’m so fixated on Rudolf Rocker is that he understood authority and centralized control as inherently oppressive structures that suppress individual freedom and the collective ability to self-organize.
He argued that true freedom arises not from legal or political institutions, which often serve to maintain domination, but from the voluntary, free association of people resisting all forms of coercion and hierarchy.
For Rocker, political rights are not granted by govern…

Black-and-white photograph of Rudolf Rocker (right) with Milly Witkop and his son Rudolf Junior (left), taken in Berlin, Germany. The three sit together in a studio setting with a simple backdrop featuring flowers and a plain wall. All are dressed formally, with serious and composed expressions, reflecting a solemn family portrait.

@ruth_mottram@fediscience.org
2025-07-21 07:26:13

More or less true. Though I'd argue that even the height and chin requirement is an overstatement.
It really is mostly about personality, friendliness and manners, not appearance.
#love

@arXiv_quantph_bot@mastoxiv.page
2025-07-17 10:15:10

Modulator-free, self-testing quantum random number generator
Ana Blázquez-Coído, Fadri Grünenfelder, Anthony Martin, Raphael Houlmann, Hugo Zbinden, Davide Rusca
arxiv.org/abs/2507.12346

@arXiv_mathDS_bot@mastoxiv.page
2025-06-02 07:26:02

A note on multi-transitivity in non-autonomous discrete systems
Hongbo Zeng
arxiv.org/abs/2505.24657 arxiv.org/pdf/25…

@karlauerbach@sfba.social
2025-06-12 19:24:46

I think that events are showing us that we can anticipate that the only way FFOTUS will ever leave the White House is on a gurney. (Or, as his acolytes like to think, he ascends.)
The D-party is doing its best to self-destruct and assure that it does not win seats in the 2026 midterms. And I seriously doubt that we will have a true election in 2028.

@pre@boing.world
2025-05-16 11:08:14

I read "Then I Am Myself the World: What Consciousness Is and How to Expand It" by Christof Koch.
Interesting book which spends like 8 or 9 chapters detailing all the experiments which prove beyond much doubt that consciousness, and self awareness, is a thing done by a brain.
It describes how perception is a construction of a description, has a chapter called "computational mind"
And then spends the last two chapters describing why he thinks the mind can't be computed, because drugs have made him think experience is some kind of magic associated with highly interconnected causal structures.
Apparently, he thinks, once things become interconnected enough they become able to cause things independently of the physics running those connections.
Which is crazy, obviously. There's nothing causal in direct connections between neurons that isn't equally causal in modeled connections between virtual neurons.
All his evidence in the book from neural MRI scans to the effects of psychedelic drugs and symptoms of strokes and disease point to the brain simulating a virtual reality which is the basis of perception.
That simulated world in which we live is full of colour and shape and sounds and emotions and millions of mental constructs that are built to be correlated by the senses with the outside world, but are not equal to the world itself. We live in a dream constructed to correlate with reality.
But then instead of taking the next step: That consciousness itself is a property of a simulated being inside that mental model of the universe, a property which the brain simulates and applies to the virtual self that's doing the experiencing inside that model, he jumps towards some magic implying pan-psychism or that sufficiently interconnected networks become causally self-complete for some reason nobody can fathom.
Sure, colour and shape and emotions are all made up by the brain but experience can't be! For some reason.
You see in truth dualism is false, in that there is no spirit realm in which ghosts animate the matter of the body somehow.
Yet also, dualism is true, in that there is a simulated mental reality which we live in, computed by the brain in which all perception and experience are created, which is related-to but separate-from the unfolding complicated dance of energy that is the universe our bodies interact with.
People take some DMT trip, and the model of the universe emulated by their brain collapses and breaks. Their virtual simulated self inside their mind has these experiences of being one with the universe or the experience of feeling dead yet conscious or whatever, and these hippies think that the broken down simulated experience is real and reflects how consciousness is more fundamental than the atoms that make up the neurons in their brain.
Instead of realizing it shows them that their experienced universe is a simulacrum, they think they get a more direct experience of reality somehow. A consciousness more pure than any mere base atom.
"Then I am myself the world" is a great title. Everything you ever experience is created and simulated in your brain like a dream, the whole universe is inside your head. Even the fact of experience itself.
But that isn't the conclusion Koch reaches somehow; he just jumps from describing the evidence that this is so straight into ascribing super-causal magic consciousness to particular arrangements of atoms that integrated information theory suggests have high correlation, and thinks therefore consciousness is itself the entire universe.
Ah well, fun book. I like arguing in my head with authors that are wrong.
#reading #books #consciousness #thenIAmMyselfTheWorld

@nelson@tech.lgbt
2025-06-25 13:59:43

Calamus 44 Here my last words
In which the poet outs himself through talking about his poetry.
It's a short piece of Whitman talking about his own writing. But he's so twisted up!
Here I shade down and hide my thoughts—I do not expose them,
And yet they expose me more than all my other poems.
I read this as him talking about Calamus, the cluster of gay poems. And directly telling us that he's censored and hidden what he really wants to say. And yet still these poems still expose his true self. It makes me feel sad for Whitman, imagine his writing if he felt less fettered.
Still, he published some of the most clear gay poems of the 19th century. And got famous and mainstream doing it.

@arXiv_mathNA_bot@mastoxiv.page
2025-06-18 09:14:57

Convergence of generalized cross-validation with applications to ill-posed integral equations
Tim Jahn, Mikhail Kirilin
arxiv.org/abs/2506.14558

@arXiv_csDB_bot@mastoxiv.page
2025-06-13 07:25:10

A Unifying Algorithm for Hierarchical Queries
Mahmoud Abo Khamis, Jesse Comer, Phokion Kolaitis, Sudeepa Roy, Val Tannen
arxiv.org/abs/2506.10238

@midtsveen@social.linux.pizza
2025-07-17 22:35:10

If there’s one thing I keep coming back to, it’s this: I truly believe work could be so much more rewarding if we collectively tore down the rigid hierarchies and completely did away with bosses. In their place, we’d practice genuine worker self-management. Every decision that shapes our daily lives would be made directly by us, the workers on the shop floor.
With direct democracy at the core, and our union standing not just beside us but as a true expression of our collective will, th…