Tootfinder

Opt-in global Mastodon full text search. Join the index!

@mgorny@social.treehouse.systems
2025-08-15 03:55:04

Summary of today's dreams:
1. I changed my shorts but forgot to replace the paper towels in my pockets. I wouldn't have thought my brain would treat this as such a #nightmare. I mean, lack of paper towels, not fresh shorts. The terror when you put your hand in the pocket, and it's empty.
2. I forgot to put my garbage out last night. It was around 7 AM (having precise dreams is important), it was raining, and I was hurrying to put it out, worrying it was already too late. Then I discovered there was a huge dumpster in front of my gate, filled to the brim with loose cat litter. And I was thinking, the garbage company won't take that much loose litter.
#dream

@arXiv_csCY_bot@mastoxiv.page
2025-06-16 07:24:09

The Memory Paradox: Why Our Brains Need Knowledge in an Age of AI
Barbara Oakley, Michael Johnston, Ken-Zen Chen, Eulho Jung, Terrence J. Sejnowski
arxiv.org/abs/2506.11015

@arXiv_csIR_bot@mastoxiv.page
2025-07-17 08:21:10

Looking for Fairness in Recommender Systems
Cécile Logé
arxiv.org/abs/2507.12242 arxiv.org/pdf/2507.12242…

@mgorny@social.treehouse.systems
2025-08-14 14:58:04

Today I saw a scrawl in Piła, saying (in translation from Polish): "God does not like cowards."
I was thinking that a good riposte would be: "Cowards made themselves a God, for they were afraid of reality."
#ateism

@tiotasram@kolektiva.social
2025-08-04 15:49:00

Should we teach vibe coding? Here's why not.
Should AI coding be taught in undergrad CS education?
1/2
I teach undergraduate computer science labs, including for intro and more-advanced core courses. I don't publish (non-negligible) scholarly work in the area, but I've got years of craft expertise in course design, and I do follow the academic literature to some degree. In other words, I'm not the world's leading expert, but I have spent a lot of time thinking about course design, and consider myself competent at it, with plenty of direct experience in what knowledge & skills I can expect from students as they move through the curriculum.
I'm also strongly against most uses of what's called "AI" these days (specifically, generative deep neural networks as supplied by our current cadre of techbros). There are a surprising number of completely orthogonal reasons to oppose the use of these systems, and a very limited number of reasonable exceptions (overcoming accessibility barriers is an example). On the grounds of environmental and digital-commons-pollution costs alone, using specifically the largest/newest models is unethical in most cases.
But as any good teacher should, I constantly question these evaluations, because I worry about the impact on my students should I eschew teaching relevant tech for bad reasons (or even for good reasons). I also want to make my reasoning clear to students, who should absolutely question me on this. That inspired me to ask a simple question: ignoring for one moment the ethical objections (which we shouldn't, of course; they're very stark), at what level in the CS major could I expect to teach a course about programming with AI assistance, and expect students to succeed at a more technically demanding final project than a course at the same level where students were banned from using AI? In other words, at what level would I expect students to actually benefit from AI coding "assistance"?
To be clear, I'm assuming that students aren't using AI in other aspects of coursework: the topic of using AI to "help you study" is a separate one (TL;DR: its gross value is not negative, but it's mostly not worth the harm to your metacognitive abilities, which AI-induced changes to the digital commons are making more important than ever).
So what's my answer to this question?
If I'm being incredibly optimistic, senior year. Slightly less optimistic, second year of a master's program. Realistic? Maybe never.
The interesting bit for you-the-reader is: why is this my answer? (Especially given that students would probably self-report significant gains at lower levels.) To start with, [this paper where experienced developers thought that AI assistance sped up their work on real tasks when in fact it slowed it down](arxiv.org/abs/2507.09089) is informative. There are a lot of differences in task between experienced devs solving real bugs and students working on a class project, but it's important to understand that we shouldn't have a baseline expectation that AI coding "assistants" will speed things up in the best of circumstances, and we shouldn't trust self-reports of productivity (or the AI hype machine in general).
Now we might imagine that coding assistants will be better at helping with a student project than at fixing bugs in open-source software, since it's a much easier task. For many programming assignments that have a fixed answer, we know that many AI assistants can just spit out a solution when prompted with the problem description (there's another elephant in the room here to do with learning outcomes regardless of project success, but we'll ignore this one too; my focus here is on project complexity, not learning outcomes). My question is about more open-ended projects, not assignments with an expected answer. Here's a second study (by one of my colleagues) about novices using AI assistance for programming tasks. It showcases how difficult it is to use AI tools well, and some of the stumbling blocks that novices in particular face.
But what about intermediate students? Might there be some level where the AI is helpful because the task is still relatively simple and the students are good enough to handle it? The problem with this is that as task complexity increases, so does the likelihood of the AI generating (or copying) code that uses more complex constructs which a student doesn't understand. Let's say I have second-year students writing interactive websites with JavaScript. Without a lot of care (care those students don't yet know how to exercise), the AI is likely to suggest code that depends on several different frameworks, from React to jQuery, without actually setting up or including those frameworks, and of course these students would be way out of their depth trying to do that themselves. This is a general problem: each programming class carefully limits the specific code frameworks and constructs it expects students to know based on the material it covers. There is no feasible way to limit an AI assistant to a fixed set of constructs or frameworks using current designs. There are alternate designs where this would be possible (like AI search through adaptation from a controlled library of snippets), but those would be entirely different tools.
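To make this concrete, here's a hypothetical Python sketch of the same failure mode (the chart request, the module choice, and the fallback are my inventions for illustration, not from any real student project): the assistant's suggestion quietly assumes a third-party framework the course never installed, while the standard-library version is the one students could actually debug.

```python
# Hypothetical AI suggestion for "draw a bar chart of the scores" in a
# course that has only covered the standard library. The import assumes
# a framework (matplotlib) that was never installed or taught.
try:
    import matplotlib.pyplot as plt  # third-party; absent on a bare install
except ModuleNotFoundError:
    plt = None  # exactly the wall a second-year student hits

def plot_scores(scores):
    if plt is not None:
        plt.bar(range(len(scores)), scores)
        plt.show()
    else:
        # The stdlib-only version the course actually prepared them for.
        for i, s in enumerate(scores):
            print(f"{i:2d} | {'#' * s}")

plot_scores([3, 7, 5])
```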
So what happens on a sizeable class project where the AI has dropped in buggy code, especially if it uses code constructs the students don't understand? Best case, they realize they don't understand it and quickly re-prompt, or ask an instructor or TA for help replacing the constructs they don't understand with ones they do. Average case: they waste several hours and/or sweep the bugs partly under the rug, resulting in a project with significant defects. Students in their second and even third years of a CS major still have a lot to learn about debugging, and usually have significant gaps in their knowledge of even their most comfortable programming language. I do think that, regardless of AI, we as teachers need to get better at teaching debugging skills, but the knowledge gaps are inevitable because there's just too much to know. In Python, for example, the LLM is going to spit out yields, async functions, try/finally, maybe even something like a while/else, or, with recent training data, the walrus operator. I can't expect even a fraction of 3rd-year students who have worked with Python since their first year to know all of these, and based on how students approach projects where they have studied all the relevant constructs but have forgotten some, I'm not optimistic that these things will magically become learning opportunities. Student projects are better off working within a limited subset of a full programming language that the students have actually learned, and using AI coding assistants as currently designed makes this impossible. Beyond that, even when the "assistant" just introduces bugs using syntax the students understand, even through their 4th year many students struggle to understand the operation of moderately complex code they've written themselves, let alone code written by someone else. Having access to an AI that will confidently offer incorrect explanations for bugs will make this worse.
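For concreteness, here's a small runnable file (my illustration, not drawn from any real student project) using exactly the constructs listed above; every line is ordinary modern Python, and none of it is covered in a typical intro sequence:

```python
import asyncio

# Each construct named above, in a few lines: all plausible in LLM
# output, none guaranteed to be familiar to a 3rd-year student.

def parse_values(lines):
    # Generator (yield): produces values lazily instead of building a list.
    for line in lines:
        if (stripped := line.strip()):  # walrus operator: bind and test at once
            yield int(stripped)

def first_negative(values):
    # while/else: the else branch runs only if the loop never breaks.
    i = 0
    found = None
    while i < len(values):
        if values[i] < 0:
            found = values[i]
            break
        i += 1
    else:
        print("no negatives found")
    return found

def guarded_sum(lines):
    # try/finally: the finally block runs even if parsing raises.
    gen = parse_values(lines)
    try:
        return sum(gen)
    finally:
        gen.close()  # shut the generator down no matter what happened

async def main():
    # async def / await: a coroutine, which needs an event loop to run.
    await asyncio.sleep(0)  # stand-in for real asynchronous work
    print(guarded_sum(["1", "2", "-3", ""]))  # -> 0
    print(first_negative([1, 2, -3]))         # -> -3

if __name__ == "__main__":
    asyncio.run(main())
```

The point isn't that any one of these is hard; it's that each one is an extra thing standing between the student and the bug they actually need to fix.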
To be sure, a small minority of students will be able to overcome these problems, but that minority is the group that has a good grasp of the fundamentals and has broadened its knowledge through self-study, which earlier AI-reliant classes would make less likely to happen. In any case, I care about the average student, since we already have plenty of things about our institutions that make life easier for a favored few while being worse for the average student (note that our construction of that favored few as the "good" students is a large part of this problem).
To summarize: because AI assistants introduce excess code complexity and difficult-to-debug bugs, they'll slow down rather than speed up project progress for the average student on moderately complex projects. On a fixed deadline, they'll result in worse projects, or necessitate less ambitious project scoping to ensure adequate completion, and I expect this remains broadly true through 4-6 years of study in most programs (don't take this as an endorsement of AI "assistants" for master's students; we've ignored a lot of other problems along the way).
There's a related problem: solving open-ended project assignments well ultimately depends on deeply understanding the problem, and AI "assistants" allow students to put a lot of code in their file without spending much time thinking about the problem or building an understanding of it. This is awful for learning outcomes, but also bad for project success. Getting students to see the value of thinking deeply about a problem is a thorny pedagogical puzzle at the best of times, and allowing the use of AI "assistants" makes the problem much, much worse. This is another area I hope to see (or even drive) pedagogical improvement in, for what it's worth.

@jorgecandeias@mastodon.social
2025-08-02 18:02:15

Uh-huh, OK, makes sense, hm... what the heck, Germans and a few others? But wait... what's that? What?! Now that's just plain weird, Denmark! What are you thinking?
mapsontheweb.zoom-maps.com/pos

@arXiv_csAI_bot@mastoxiv.page
2025-08-11 08:59:10

Don't Forget Imagination!
Evgenii E. Vityaev, Andrei Mantsivoda
arxiv.org/abs/2508.06062 arxiv.org/pdf/2508.06062

@arXiv_csCL_bot@mastoxiv.page
2025-08-01 10:18:01

Beyond Passive Critical Thinking: Fostering Proactive Questioning to Enhance Human-AI Collaboration
Ante Wang, Yujie Lin, Jingyao Liu, Suhang Wu, Hao Liu, Xinyan Xiao, Jinsong Su
arxiv.org/abs/2507.23407

@mgorny@social.treehouse.systems
2025-07-03 16:53:09

When you choose the armor for the female character based on its physical defense, but it turns out to be grossly indecent. I mean, how is it supposed to protect her if there's a huge uncovered area in the middle?
But thinking about it, my earlier best armor was swimming oil, i.e. walking around half-naked.
(in #Xenoblade Chronicles 3D)

@arXiv_csHC_bot@mastoxiv.page
2025-07-30 09:03:21

Thinking Like a Scientist: Can Interactive Simulations Foster Critical AI Literacy?
Yiling Zhao, Audrey Michal, Nithum Thain, Hari Subramonyam
arxiv.org/abs/2507.21090

@timelfen@assemblag.es
2025-06-19 16:03:07

Limn Issue 12: Climate's Interiors is out now online and in print.
Read anywhere with the link below.
limn.press/issue/climates-inte

@mgorny@social.treehouse.systems
2025-07-31 19:42:41

You know what truly annoys me?
Well, a lot of things, but here I'm thinking of people who enjoy killing little creatures.
And I'm not talking about killing some animal because it's actually dangerous, or causing some real harm, or even because it stings painfully or is annoying like a fly. I'm talking about people who laugh when they kill a spider or a "bug" (which usually means some harmless beetle), because it dared to show up on their property.
Not to mention all the people who would prefer that all the harmless Polish snakes (and slow worms, because if it has no legs, then it's obviously a "snake") be killed, or else they'll "bite the children".

@arXiv_csMA_bot@mastoxiv.page
2025-07-22 07:54:00

EduThink4AI: Translating Educational Critical Thinking into Multi-Agent LLM Systems
Xinmeng Hou, Zhouquan Lu, Wenli Chen, Hai Hu, Qing Guo
arxiv.org/abs/2507.15015

@arXiv_csIR_bot@mastoxiv.page
2025-07-24 08:20:59

R4ec: A Reasoning, Reflection, and Refinement Framework for Recommendation Systems
Hao Gu, Rui Zhong, Yu Xia, Wei Yang, Chi Lu, Peng Jiang, Kun Gai
arxiv.org/abs/2507.17249

@arXiv_csAI_bot@mastoxiv.page
2025-07-25 07:30:41

I2I-STRADA -- Information to Insights via Structured Reasoning Agent for Data Analysis
SaiBarath Sundar, Pranav Satheesan, Udayaadithya Avadhanam
arxiv.org/abs/2507.17874

@arXiv_qbioNC_bot@mastoxiv.page
2025-07-28 08:12:31

Technological folie à deux: Feedback Loops Between AI Chatbots and Mental Illness
Sebastian Dohnány, Zeb Kurth-Nelson, Eleanor Spens, Lennart Luettgau, Alastair Reid, Christopher Summerfield, Murray Shanahan, Matthew M Nour
arxiv.org/abs/2507.19218

@arXiv_csCL_bot@mastoxiv.page
2025-07-22 12:23:30

Interaction as Intelligence: Deep Research With Human-AI Partnership
Lyumanshan Ye, Xiaojie Cai, Xinkai Wang, Junfei Wang, Xiangkun Hu, Jiadi Su, Yang Nan, Sihan Wang, Bohan Zhang, Xiaoze Fan, Jinbin Luo, Yuxiang Zheng, Tianze Xu, Dayuan Fu, Yunze Wu, Pengrui Lu, Zengzhi Wang, Yiwei Qin, Zhen Huang, Yan Ma, Zhulin Hu, Haoyang Zou, Tiantian Mi, Yixin Ye, Ethan Chern, Pengfei Liu

@unchartedworlds@scicomm.xyz
2025-07-24 14:46:34
Content warning: Tiago Forte on thinking about climate change

Good to see Tiago Forte talking about this. A lot of people read his stuff.
(He's a writer/teacher best known for the "Building a Second Brain" framework.)
"It was that summer when climate change stopped being an abstract concept and became viscerally personal for me. I realized that this wasn’t a one-time freak event—every summer we could expect deteriorating air quality from rampant wildfires. ...
"This convergence of physical heat, failing infrastructure, and human vulnerability isn’t just a temporary inconvenience. It’s a preview of the fundamental challenge that Jeff Goodell explores in The Heat Will Kill You First, a book that forced me to confront an uncomfortable truth: all our routines for productive living and working are built on the assumption of a stable climate. It no longer makes sense for me to teach people how to build productive systems without taking into account the increasing instability of our wider environment."
#TiagoForte #ClimateChange #ClimateDiary #environment #books #heatwave