
2025-06-24 11:57:40
Robots and Children that Learn Together: Improving Knowledge Retention by Teaching Peer-Like Interactive Robots
Imene Tarakli, Samuele Vinanzi, Richard Moore, Alessandro Di Nuovo
https://arxiv.org/abs/2506.18365
Do What? Teaching Vision-Language-Action Models to Reject the Impossible
Wen-Han Hsieh, Elvis Hsieh, Dantong Niu, Trevor Darrell, Roei Herzig, David M. Chan
https://arxiv.org/abs/2508.16292
"A new state law forbids education increasing ‘awareness’ of issues relating to race. How are educators supposed to teach history?"
Teaching the Holocaust Just Got Harder in Mississippi
https://www.thebulwark.com/p/teaching-the-holocaust-just-got-harder-in-mississippi-hb-1193-dei-slavery-tate-reeves
If I were teaching a Con Law or Criminal Procedure class I would put up this video and ask the students to identify how many Constitutional violations are occurring.
(The first one to hit me was the violation, right at the start of the video, of the 5th Amendment's right against self-incrimination.)
https://www.
If you’re wondering how to get a toddler to brush their teeth, I can recommend teaching them the words to “Witchdoctor”, by Cartoonies.
ooooh-eeee (brush fronts of teeth)
ooh-aah-aah (brush backs)
ting-tang (brush tops and bottoms of molars)
walla-walla-bing-bang (brush tongue)
And repeat.
Under no circumstances should you let them hear the actual song, which is awful.
A Large-Scale Real-World Evaluation of LLM-Based Virtual Teaching Assistant
Sunjun Kweon, Sooyohn Nam, Hyunseung Lim, Hwajung Hong, Edward Choi
https://arxiv.org/abs/2506.17363
Stranger Danger was never the answer.
https://www.huffpost.com/entry/teaching-kids-safety-awareness_l_68a61241e4b0da9c591c73ff
Machine learning-based multimodal prognostic models integrating pathology images and high-throughput omic data for overall survival prediction in cancer: a systematic review
Charlotte Jennings (National Pathology Imaging Cooperative, Leeds Teaching Hospitals NHS Trust, Leeds, UK), Andrew Broad (National Pathology Imaging Cooperative, Leeds Teaching Hospitals NHS Trust, Leeds, UK), Lucy Godson (National Pathology Imaging Cooperative, Leeds Teaching Hospitals NHS Trust, Leeds, UK), Emily…
I just re-read @… 's manifesto on liberal arts education.
It's really about #education in general, including vocational education, and including the fact that school is a subset of education.
Well worth your time.
Pictorial and Documentary Guide for Research, Teaching, and Education through Astronomy, Physics, and Mathematics Pursued under the Umbrella of the United Nations (1974-2024)
Hans J. Haubold, Arak M. Mathai
https://arxiv.org/abs/2507.17283
A Comparison of Three Approaches to Teaching Expressiveness
https://bulletproofmusician.com/a-comparison-of-three-approaches-to-teaching-expressiveness/?utm_source=flipb…
Can the AI bubble please burst next week at the latest?
I'm already done with faculty talk praising AI this and AI that and forcing teaching to fully go tech-bro-style. Next term starts September 15, I just don’t want to set up “AI-safe exams” and having students use AI for everything at the same time
Oh, and don’t be so negative, we’re all using X and Y and the newest Z, it’s awesome! It just hallucinates or lies a bit, but so friendly, the nice bot, don’t you see? Magic!
MAARTA: Multi-Agentic Adaptive Radiology Teaching Assistant
Akash Awasthi, Brandon V. Chang, Anh M. Vu, Ngan Le, Rishi Agrawal, Zhigang Deng, Carol Wu, Hien Van Nguyen
https://arxiv.org/abs/2506.17320
"Administrators don’t have the guts to ban cellphones but expect teachers to spend every day swinging wildly at the slithering AI tech we’re supposed to embrace"
https://sunny.garden/@himantra/115039571729383600
Overly academic/distanced ethical discussions
Had a weird interaction with @/brainwane@social.coop just now. I misinterpreted one of their posts quoting someone else, and I think the combination of that plus an interaction pattern where I'd assume their stance on something and respond critically to that ended up with me getting blocked. I don't have hard feelings exactly, and this post is only partly about this particular person, but I noticed something interesting by the end of the conversation that had been bothering me.

They repeatedly criticized me for assuming what their position was, but never actually stated their position. They didn't say: "I'm bothered you assumed my position was X, it's actually Y." They just said "I'm bothered you assumed my position was X, please don't assume my position!" I get that it's annoying to have people respond to a straw-man version of your argument, but when I in response asked some direct questions about what their position was, they gave some non-answers and then blocked me. It's entirely possible it's a coincidence, and they just happened to run out of patience on that iteration, but it makes me take their critique of my interactions a bit less seriously. I suspect that they just didn't want to hear what I was saying, while at the same time they wanted to feel as if they were someone who values public critique and open discussion of tricky issues (if anyone reading this post also followed our interaction and has a different opinion of my behavior, I'd be glad to hear it; it's possible I'm effectively being an asshole here and it would be useful to hear that if so).
In any case, the fact that at the end of the entire discussion, I'm realizing I still don't actually know their position on whether they think the AI use case in question is worthwhile feels odd. They praised the system on several occasions, albeit noting some drawbacks while doing so. They said that the system was possibly changing their anti-AI stance, but then got mad at me for assuming this meant that they thought this use-case was justified. Maybe they just haven't made up their mind yet but didn't want to say that?
Interestingly, in one of their own blog posts that got linked in the discussion, they discuss a different AI system, and despite listing a bunch of concrete harms, conclude that it's okay to use it. That's fine; I don't think *every* use of AI is wrong on balance, but what bothered me was that their post dismissed a number of real ethical issues by saying essentially "I haven't seen calls for a boycott over this issue, so it's not a reason to stop use." That's an extremely socially conformist version of ethics that doesn't sit well with me. The discussion also ended up linking this post: https://chelseatroy.com/2024/08/28/does-ai-benefit-the-world/ which bothered me in a related way. In it, Troy describes classroom teaching techniques for introducing and helping students explore the ethics of AI, and they seem mostly great. They avoid prescribing any particular correct stance, which is important when teaching given the power relationship, and they help students understand the limitations of their perspectives regarding global impacts, which is great. But the overall conclusion of the post is that "nobody is qualified to really judge global impacts, so we should focus on ways to improve outcomes instead of trying to judge them." This bothers me because we actually do have a responsibility to make decisive ethical judgments despite limitations of our perspectives. If we never commit to any ethical judgment against a technology because we think our perspective is too limited to know the true impacts (which I'll concede it invariably is) then we'll have to accept every technology without objection, limiting ourselves to trying to improve their impacts without opposing them. Given who currently controls most of the resources that go into exploration for new technologies, this stance is too permissive. 
Perhaps if our objection to a technology was absolute and instantly effective, I'd buy the argument that objecting without a deep global view of the long-term risks is dangerous. As things stand, I think that objecting to the development/use of certain technologies in certain contexts is necessary, and although there's a lot of uncertainty, I expect strongly enough that the overall outcomes of objection will be positive that I think it's a good thing to do.
The deeper point here I guess is that this kind of "things are too complicated, let's have a nuanced discussion where we don't come to any conclusions because we see a lot of unknowns along with definite harms" really bothers me.
Programming by Backprop: LLMs Acquire Reusable Algorithmic Abstractions During Code Training
Jonathan Cook, Silvia Sapora, Arash Ahmadian, Akbir Khan, Tim Rocktaschel, Jakob Foerster, Laura Ruis
https://arxiv.org/abs/2506.18777
Similarly, hostility to education cloaks itself as support by saying that education should be useful, should be practical, should be focused only on what students need, should be narrowed to what students need, should narrow students, should narrow students into being only what capitalism needs.
I wrote extensively about this dangerous line of thought here:
https://innig.net/teaching/liberal-arts-manifesto
6/
Agile and Student-Centred Teaching of Agile/Scrum Concepts
Maria Spichkova
https://arxiv.org/abs/2506.14369 https://arxiv.org/pdf/250…
A "watch your replay videos" reflection assignment on comparing programming without versus with generative AI: learning about programming, critical AI use and limitations, and reflection
Sarah "Magz" Fernandez, Greg L Nelson
https://arxiv.org/abs/2507.17226
At #AIED2025 we are getting a preview of the report of the #EuropeanDigitalEducationHub, providing practical examples of how XAI-Ed could be used
I'm having a mental battle of cheap vs lazy. I have an old Thinkpad laptop I use for teaching (T40?), circa 2015ish. But it works just fine, no issues with Fedora, and I'm giving Garuda a try on it for now. It's a good laptop. Problem is that it's heavy. And I have to carry a work laptop (MS Surface, Win11) as well, and two laptops in the bag is a bit of weight. So I am debating getting a refurb X1 Carbon. But the cheap part of me is going "NOOOO". Ugh, internal conflicts.
Most educators have moved from denial about AI to cautious use, but the real cutting-edge is creators misusing it, twisting it, revealing off-label uses and glitches. At least that's what I claim in this U of T podcast https://edtech.engineering.utoronto.ca/made-for-u-o…
Teaching Complex Systems based on Microservices
Renato Cordeiro Ferreira (University of São Paulo), Thatiane de Oliveira Rosa (University of São Paulo, Federal Institute of Tocantins), Alfredo Goldman (University of São Paulo), Eduardo Guerra (Free University of Bozen-Bolzano)
https://arxiv.org/abs/2506.16492
Teaching people how to use LLMs is not "upskilling", it's the opposite.
An elementary proof of Newman's eta-quotient theorem
David Savitt
https://arxiv.org/abs/2507.16225 https://arxiv.org/pdf/2507.162…
Replaced article(s) found for physics.ed-ph. https://arxiv.org/list/physics.ed-ph/new
[1/1]:
- Coupled Oscillators, Frequency Transfer and the Higgs Mechanism's Teaching
M. O. Tahim, C. R. Muniz, M. S. Cunha, R. I. O. Junior
Another piece of sad US science news: Fulbright board is resigning to protest government interference. When I lived in Beijing as a student, I shared an apartment with a Fulbright scholar. She was so accomplished and amazing, and did really important work (in addition to volunteer teaching at a local middle school, impacting countless families - I wonder how much that achieved for US-China relations)
https://nfdi.social/@PetraSteiner/114712084471545669
Medicaid Cuts Set to Drain Revenue at Elite Teaching Hospitals (Bloomberg)
https://www.bloomberg.com/news/articles/2025-08-07/medicaid-cuts-set-to-drain-revenue-at-elite-teaching-hospitals
http://www.memeorandum.com/250807/p64#a250807p64
What is happening? Have I gone through the looking glass?? C-suite types are saying things about AI that…actually make sense?!?
AWS CEO Matt Garman: “My view is you absolutely want to keep hiring kids out of college and teaching them the right ways to go build software and decompose problems and think about it, just as much as you ever have.” https://mastodon.social/@fromjason/115067646625547469
My slides on The Paradoxes of Open Data in Libraries, Archives and Museums #DH2025 panel on Openness in GLAM: Analysing, Reflecting, and D…
Reflection, co-founded by ex-Google researchers, unveils Asimov, an AI agent that reads a company's codebase, docs, and more to help software engineering teams (Will Knight/Wired)
https://www.wired.com/story/former-top-google-researchers-have-m…
Bridging MOOCs, Smart Teaching, and AI: A Decade of Evolution Toward a Unified Pedagogy
Bo Yuan, Jiazi Hu
https://arxiv.org/abs/2507.14266 https://
My wife is smarter than me and better than me at almost everything, except a couple things. One of them is regrets. I've been working on teaching her how not to regret anything, ever. Over a series of years. I think it's working fairly well. Sometimes. 😂
T-TExTS (Teaching Text Expansion for Teacher Scaffolding): Enhancing Text Selection in High School Literature through Knowledge Graph-Based Recommendation
Nirmal Gelal, Chloe Snow, Ambyr Rios, Hande Küçük McGinty
https://arxiv.org/abs/2506.12075
I agree!
#Anarchy #Anarchism
I overheard a woman ranting that it is bad that they are not teaching cursive in school. Her reasoning?
"If they can't write cursive, then they can't read cursive. If they can't read cursive, then they can't read The Constitution?"
This is 100% nonsense reasoning. I learned to write in cursive ~40 years ago. I've read the Constitution at least 4 times, but never by reading an image of the document - I only read it in print in a book or a website.
It takes a village to write a book: Mapping anonymous contributions in Stephen Langton's Quaestiones Theologiae
Jan Maliszewski
https://arxiv.org/abs/2508.12830 https://
Interesting interview: Why a professor of fascism left the US: ‘The lesson of 1933 is – you get out’
https://www.theguardian.com/us-news/2025/jun/16/why-a-professor-of-fascism-left-the-us-the-lesson-of-1933-i…
A "simple generalization" of a #JavaScript "Stored Map" ended up taking 2 days of work (done in 4 days) but happy with the result: Able to drop-in replace the non-general version *and* satisfy a different use case. Plus, I learned a lot and gathered a ton of #teaching material. 😊
#softwareEngineering #webdev
This👇needs to be remembered. https://substack.com/@johncleese/note/c-125827219?r=e7vv
Designing conflict-based communicative tasks in Teaching Chinese as a Foreign Language with ChatGPT
Xia Li (LIDILEM)
https://arxiv.org/abs/2506.09089 https…
Teaching LLMs to Speak Spectroscopy
Nesar Ramachandra, Yuan-Sen Ting, Zechang Sun, Azton Wells, Salman Habib
https://arxiv.org/abs/2508.10075 https://arxiv…
'Not pushing the panic button,' Cowboys HC Schottenheimer not pulling plug on QB Milton https://cowboyswire.usatoday.com/story/sports/nfl/cowboys/2025/08/18/schottenheimer-cowboys-qb-milton-struggles-ravens…
Teaching Introductory Functional Programming Using Haskelite
Pedro Vasconcelos (University of Porto)
https://arxiv.org/abs/2508.03640 https://arxiv.org/pdf…
from my link log —
Bril: an intermediate language for teaching compilers.
https://www.cs.cornell.edu/~asampson/blog/bril.html
saved 2024-07-27
Teaching Physical Awareness to LLMs through Sounds
Weiguo Wang, Andy Nie, Wenrui Zhou, Yi Kai, Chengchen Hu
https://arxiv.org/abs/2506.08524 https://
You know what's the difference between a human programmer and an "#AI coding assistant"?
Sure, human programmers make mistakes. And rookies often make "worse" mistakes than an #LLM can come up with. However, the difference is that humans can actually learn. Teaching them comes with a payoff; not always and not necessarily for your project, but there's a good chance that they'll become better programmers and contribute back to the community.
Sure, human programmers sometimes plagiarize. And of course they need to look at some code while they learn. However, they actually can think on their own and come up with something original. And they can learn that plagiarism is wrong.
And most importantly, usually they don't lie if they don't have to, and there are limits to their smugness. You can build a healthy community with them.
You can't build a community with unethical bullshit-spitting machines.
#programming #FreeSoftware #OpenSource
When topics of discourse prove difficult to approach, stories can help us on our way. This famed short story about a graceful city with an unexpected aberration helps me reflect on where I want to apply my capacity for writing, sketching and teaching.
https://axbom.com/omelas/
#DigitalEthics
In the end, “designing assignments that reward insight over polish, creating policies that prioritize learning over automation, and teaching students to question not just what AI produces but how and why they are using it” will have to mean: you can’t pass by using #GenAI.
But since GenAI can already do a pretty good impression of many things, we’ll have to make our courses extremely challe…
A simple formalization of alpha-equivalence
Kalmer Apinis, Danel Ahman
https://arxiv.org/abs/2507.10181 https://arxiv.org/pdf/2507.10…
Offensive Lineman Speaks on Teaching and Competing Against Raiders' Young Talent https://www.si.com/nfl/raiders/las-vegas-alex-cappa-pete-carroll-chip-kelly-training-camp
Teaching with AI: Human Days and AI Days
In my previous post I outlined a plan for a no-tech pedagogy that would prevent students from using AI to do the assignments. However, my colleagues tell me that current policy at the university where I used to teach requires the use of AI in some classes. They have also eliminated the budget for photocopying handouts and texts. Everything must be delivered through the Learning Management System, which in this case is Canvas.
Has #DH2025 inspired you to get more involved in Digital Humanities communities? Sign up for the @… newsletter for updates from our 14 DH organisations and 10 special interest groups (SIGs)
CFP: Edited Collection on Contingent Teaching from WAC Clearinghouse https://call-for-papers.sas.upenn.edu/cfp/2025/07/28/cfp-edited-collection-on-contingent-teaching-from-wac-clearinghouse
VeS: Teaching Pixels to Listen Without Supervision
Sajay Raj
https://arxiv.org/abs/2507.22008 https://arxiv.org/pdf/2507.22008
Developing a ChatGPT-Based Tool for Physics Experiment Teaching
Yifeng Liu, Min Li, Zhaojun Zhang, Youkang Fang, Meibao Qin
https://arxiv.org/abs/2508.13011 https://
MimicFunc: Imitating Tool Manipulation from a Single Human Video via Functional Correspondence
Chao Tang, Anxing Xiao, Yuhong Deng, Tianrun Hu, Wenlong Dong, Hanbo Zhang, David Hsu, Hong Zhang
https://arxiv.org/abs/2508.13534
Breakable Machine: A K-12 Classroom Game for Transformative AI Literacy Through Spoofing and eXplainable AI (XAI)
Olli Hilke, Nicolas Pope, Juho Kahila, Henriikka Vartiainen, Teemu Roos, Tuomo Parkki, Matti Tedre
https://arxiv.org/abs/2508.14201
Careful Queries, Credible Results: Teaching RAG Models Advanced Web Search Tools with Reinforcement Learning
Yuqin Dai, Shuo Yang, Guoqing Wang, Yong Deng, Zhanwei Zhang, Jun Yin, Pengyu Zeng, Zhenzhe Ying, Changhua Meng, Can Yi, Yuchen Zhou, Weiqiang Wang, Shuai Lu
https://arxiv.org/abs/2508.07956…
Feedback Indicators: The Alignment between Llama and a Teacher in Language Learning
Sylvio Rüdian, Yassin Elsir, Marvin Kretschmer, Sabine Cayrou, Niels Pinkwart
https://arxiv.org/abs/2508.11364
Some excellent talks in #DH2025 LP-07 on 'What Happens When "Hacking" Becomes Easy? Teaching Python in 2025'
Filipa (?): 'A claim about abundance in the future is often a disguised claim about scarcity in the present'
Patrick: 'what do we do when our students can reach for a 'not learning' button? Things that could have been done at home may need …
ALEA IACTA EST: A Declarative Domain-Specific Language for Manually Performable Random Experiments
Baltasar Trancón y Widemann, Markus Lepper
https://arxiv.org/abs/2506.11794
Should we teach vibe coding? Here's why not.
Should AI coding be taught in undergrad CS education?
1/2
I teach undergraduate computer science labs, including for intro and more-advanced core courses. I don't publish (non-negligible) scholarly work in the area, but I've got years of craft expertise in course design, and I do follow the academic literature to some degree. In other words, I'm not the world's leading expert, but I have spent a lot of time thinking about course design, and consider myself competent at it, with plenty of direct experience in what knowledge & skills I can expect from students as they move through the curriculum.
I'm also strongly against most uses of what's called "AI" these days (specifically, generative deep neural networks as supplied by our current cadre of techbros). There are a surprising number of completely orthogonal reasons to oppose the use of these systems, and a very limited number of reasonable exceptions (overcoming accessibility barriers is an example). On the grounds of environmental and digital-commons-pollution costs alone, using specifically the largest/newest models is unethical in most cases.
But as any good teacher should, I constantly question these evaluations, because I worry about the impact on my students should I eschew teaching relevant tech for bad reasons (and even for good reasons). I also want to make my reasoning clear to students, who should absolutely question me on this. That inspired me to ask a simple question: ignoring for one moment the ethical objections (which we shouldn't, of course; they're very stark), at what level in the CS major could I expect to teach a course about programming with AI assistance, and expect students to succeed at a more technically demanding final project than a course at the same level where students were banned from using AI? In other words, at what level would I expect students to actually benefit from AI coding "assistance?"
To be clear, I'm assuming that students aren't using AI in other aspects of coursework: the topic of using AI to "help you study" is a separate one (TL;DR: its gross value is not negative, but it's mostly not worth the harm to your metacognitive abilities, which AI-induced changes to the digital commons are making more important than ever).
So what's my answer to this question?
If I'm being incredibly optimistic, senior year. Slightly less optimistic, second year of a masters program. Realistic? Maybe never.
The interesting bit for you-the-reader is: why is this my answer? (Especially given that students would probably self-report significant gains at lower levels.) To start with, [this paper where experienced developers thought that AI assistance sped up their work on real tasks when in fact it slowed it down](https://arxiv.org/abs/2507.09089) is informative. There are a lot of differences in task between experienced devs solving real bugs and students working on a class project, but it's important to understand that we shouldn't have a baseline expectation that AI coding "assistants" will speed things up in the best of circumstances, and we shouldn't trust self-reports of productivity (or the AI hype machine in general).
Now we might imagine that coding assistants will be better at helping with a student project than at helping with fixing bugs in open-source software, since it's a much easier task. For many programming assignments that have a fixed answer, we know that many AI assistants can just spit out a solution based on prompting them with the problem description (there's another elephant in the room here to do with learning outcomes regardless of project success, but we'll ignore this one too; my focus here is on project complexity, not learning outcomes). My question is about more open-ended projects, not assignments with an expected answer. Here's a second study (by one of my colleagues) about novices using AI assistance for programming tasks. It showcases how difficult it is to use AI tools well, and some of the stumbling blocks that novices in particular face.
But what about intermediate students? Might there be some level where the AI is helpful because the task is still relatively simple and the students are good enough to handle it? The problem with this is that as task complexity increases, so does the likelihood of the AI generating (or copying) code that uses more complex constructs which a student doesn't understand. Let's say I have second-year students writing interactive websites with JavaScript. Without a lot of care that those students don't know how to deploy, the AI is likely to suggest code that depends on several different frameworks, from React to jQuery, without actually setting up or including those frameworks, and of course these students would be way out of their depth trying to do that. This is a general problem: each programming class carefully limits the specific code frameworks and constructs it expects students to know based on the material it covers. There is no feasible way to limit an AI assistant to a fixed set of constructs or frameworks, using current designs. There are alternate designs where this would be possible (like AI search through adaptation from a controlled library of snippets) but those would be entirely different tools.
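To make the "fixed set of constructs" idea concrete: while current assistants can't be constrained up front, one could at least detect after the fact when generated code strays outside a course's subset. Here's a minimal sketch using Python's standard `ast` module; the particular forbidden-construct list is invented for illustration, not part of any real course policy.

```python
# Sketch: flag Python constructs outside a hypothetical course subset.
# The FORBIDDEN mapping is an invented example, not a real curriculum.
import ast

FORBIDDEN = {
    ast.Yield: "generator (yield)",
    ast.AsyncFunctionDef: "async function",
    ast.Try: "try/except/finally",
    ast.NamedExpr: "walrus operator (:=)",
}

def unknown_constructs(source: str) -> list[str]:
    """Return human-readable names of flagged constructs found in source."""
    tree = ast.parse(source)
    found = set()
    for node in ast.walk(tree):
        label = FORBIDDEN.get(type(node))
        if label:
            found.add(label)
    return sorted(found)

snippet = """
async def fetch(url):
    if (n := len(url)) > 5:
        yield n
"""
print(unknown_constructs(snippet))
# → ['async function', 'generator (yield)', 'walrus operator (:=)']
```

This only detects; it doesn't prevent the assistant from emitting such code in the first place, which is the essay's point about needing entirely different tool designs.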
So what happens on a sizeable class project where the AI has dropped in buggy code, especially if it uses code constructs the students don't understand? Best case, they realize that they don't understand it and re-prompt, or quickly ask an instructor or TA for help getting rid of the stuff they don't understand, then re-prompt or manually add stuff they do understand. Average case: they waste several hours and/or sweep the bugs partly under the rug, resulting in a project with significant defects. Students in their second and even third years of a CS major still have a lot to learn about debugging, and usually have significant gaps in their knowledge of even their most comfortable programming language. I do think that regardless of AI we as teachers need to get better at teaching debugging skills, but the knowledge gaps are inevitable because there's just too much to know. In Python, for example, the LLM is going to spit out yields, async functions, try/finally, maybe even something like a while/else, or, with recent training data, the walrus operator. I can't expect even a fraction of 3rd-year students who have worked with Python since their first year to know about all these things, and based on how students approach projects where they have studied all the relevant constructs but have forgotten some, I'm not optimistic that these things will magically become learning opportunities. Student projects are better off working with a limited subset of full programming languages that the students have actually learned, and using AI coding assistants as currently designed makes this impossible. Beyond that, even when the "assistant" just introduces bugs using syntax the students understand, even through their 4th year many students struggle to understand the operation of moderately complex code they've written themselves, let alone code written by someone else. Having access to an AI that will confidently offer incorrect explanations for bugs will make this worse.
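As a toy illustration of why constructs like these trip students up: Python's while/else runs the else block only when the loop finishes without hitting break, which is easy to misread in code you didn't write. This example is invented for illustration, not drawn from any actual student project.

```python
# while/else: the else clause runs only if the loop exits normally
# (i.e., the condition became false), never after a break.
def find_index(items, target):
    i = 0
    while i < len(items):
        if items[i] == target:
            break  # found it: skip the else clause
        i += 1
    else:
        return -1  # loop ran to completion without finding target
    return i

print(find_index([3, 5, 7], 5))  # → 1
print(find_index([3, 5, 7], 9))  # → -1
```

A student who reads the else as pairing with the if (rather than the while) will predict the wrong behavior entirely, which is exactly the kind of gap that makes AI-generated code hard to debug.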
To be sure a small minority of students will be able to overcome these problems, but that minority is the group that has a good grasp of the fundamentals and has broadened their knowledge through self-study, which earlier AI-reliant classes would make less likely to happen. In any case, I care about the average student, since we already have plenty of stuff about our institutions that makes life easier for a favored few while being worse for the average student (note that our construction of that favored few as the "good" students is a large part of this problem).
To summarize: because AI assistants introduce excess code complexity and difficult-to-debug bugs, they'll slow down rather than speed up project progress for the average student on moderately complex projects. On a fixed deadline, they'll result in worse projects, or necessitate less ambitious project scoping to ensure adequate completion, and I expect this remains broadly true through 4-6 years of study in most programs (don't take this as an endorsement of AI "assistants" for masters students; we've ignored a lot of other problems along the way).
There's a related problem: solving open-ended project assignments well ultimately depends on deeply understanding the problem, and AI "assistants" allow students to put a lot of code in their file without spending much time thinking about the problem or building an understanding of it. This is awful for learning outcomes, but also bad for project success. Getting students to see the value of thinking deeply about a problem is a thorny pedagogical puzzle at the best of times, and allowing the use of AI "assistants" makes the problem much much worse. This is another area I hope to see (or even drive) pedagogical improvement in, for what it's worth.
1/2
From Data to Insight: Using Contextual Scenarios to Teach Critical Thinking in Data Visualisation
Jonathan C. Roberts, Peter Butcher, Panagiotis D. Ritsos
https://arxiv.org/abs/2508.08737
Enhancing Physics Hand-on Lab through Online Educational Tools
Marina Babayeva
https://arxiv.org/abs/2506.16193 https://arxiv.org/pdf…
Few-shot transfer of tool-use skills using human demonstrations with proximity and tactile sensing
Marina Y. Aoyama, Sethu Vijayakumar, Tetsuya Narita
https://arxiv.org/abs/2507.13200
ParaStudent: Generating and Evaluating Realistic Student Code by Teaching LLMs to Struggle
Mihran Miroyan, Rose Niousha, Joseph E. Gonzalez, Gireeja Ranade, Narges Norouzi
https://arxiv.org/abs/2507.12674
A Humanoid Social Robot as a Teaching Assistant in the Classroom
Thomas Sievers
https://arxiv.org/abs/2508.05646 https://arxiv.org/pdf/2508.05646
Hey, folks who understand alt text and limited vision accessibility:
I have need to write alt text for the diagram below. The alt text needs to be comprehensible to somebody who is encountering this kind of diagram for the ••very first time••. I could describe the images using the relevant jargon, but that would only serve people who already know the thing this activity is teaching them!
Any suggestions for how I could write good alt text for something like this? Is it possible? (The horizontal black bars are minus signs, i.e. subtraction. This is clear from context in the text, but probably not clear in the image.)
PLEASE NOTE: I am looking for people with ••relevant accessibility expertise••, not just random best shots from people who (like me) don’t really know much about this kind of problem.
Notes from the Physics Teaching Lab: NMR Experiments at 21 Gauss
Kenneth G. Libbrecht
https://arxiv.org/abs/2508.10738 https://arxiv.org/pdf/2508.10738
Teaching Introduction to Programming in the times of AI: A case study of a course re-design
Nikolaos Avouris, Kyriakos Sgarbas, George Caridakis, Christos Sintoris
https://arxiv.org/abs/2508.06572
Ashes or Breath: Exploring Moral Dilemmas of Life and Cultural Legacy through Mixed Reality Gaming
Black Sun, Ge Kacy Fu, Shichao Guo
https://arxiv.org/abs/2508.13074 https://…
Coupled Oscillators, Frequency Transfer and the Higgs Mechanism's Teaching
M. K. Tahim, C. R. Muniz, M. S. Cunha, R. I. Oliveira Junior
https://arxiv.org/abs/2508.08640 http…
Co-Creative Learning via Metropolis-Hastings Interaction between Humans and AI
Ryota Okumura, Tadahiro Taniguchi, Akira Taniguchi, Yoshinobu Hagiwara
https://arxiv.org/abs/2506.15468
Sociotechnical Imaginaries of ChatGPT in Higher Education: The Evolving Media Discourse
Yinan Sun, Ali Unlu, Aditya Johri
https://arxiv.org/abs/2508.14692 https://
Teaching Problem Solving in Undergraduate Physics Courses: An Endorsement for Deliberate Practice
Kelly Miller, Olivia Miller, Georgia Lawrence
https://arxiv.org/abs/2508.08133 …
Teaching at Scale: Leveraging AI to Evaluate and Elevate Engineering Education
Jean-Francois Chamberland, Martin C. Carlisle, Arul Jayaraman, Krishna R. Narayanan, Sunay Palsole, Karan Watson
https://arxiv.org/abs/2508.02731
Can you see how I learn? Human observers' inferences about Reinforcement Learning agents' learning processes
Bernhard Hilpert, Muhan Hou, Kim Baraka, Joost Broekens
https://arxiv.org/abs/2506.13583
From Misunderstandings to Learning Opportunities: Leveraging Generative AI in Discussion Forums to Support Student Learning
Stanislav Pozdniakov, Jonathan Brazil, Oleksandra Poquet, Stephan Krusche, Santiago Berrezueta-Guzman, Shazia Sadiq, Hassan Khosravi
https://arxiv.org/abs/2508.11150
Learning by Teaching: Engaging Students as Instructors of Large Language Models in Computer Science Education
Xinming Yang, Haasil Pujara, Jun Li
https://arxiv.org/abs/2508.05979
Networked Observatory for Virtual Astronomy (NOVA): Teaching astronomy with AI
Jorge Pinochet
https://arxiv.org/abs/2507.08195 https://arxiv.org/pdf/2507.0…
Research on Comprehensive Classroom Evaluation System Based on Multiple AI Models
Cong Xie, Li Yang, Daben Wang, Jing Xiao
https://arxiv.org/abs/2506.23079
Teaching Critical Visualization: A Field Report
Andrew McNutt, Shiyi He, Sujit Kumar Kamaraj, Purbid Bambroo, Nastaran Jadidi, John Bovard, Chang Han
https://arxiv.org/abs/2508.02592
Singing Syllabi with Virtual Avatars: Enhancing Student Engagement Through AI-Generated Music and Digital Embodiment
Xinxing Wu
https://arxiv.org/abs/2508.11872 https://
Using Video Games to Teach Kepler's Laws and Orbital Dynamics
Brian DiGiorgio Zanger
https://arxiv.org/abs/2508.13259 https://arxiv.org/pdf/2508.13259
Listening with Language Models: Using LLMs to Collect and Interpret Classroom Feedback
Sai Siddartha Maram, Ulia Zaman, Magy Seif El-Nasr
https://arxiv.org/abs/2508.11707 https:…
Inverted Classroom in der Einführungsveranstaltung Programmierung (Inverted Classroom in the Introductory Programming Course)
Ulrich von Zadow, Natalie Kiesler
https://arxiv.org/abs/2506.10057 https://…
Teaching Sustainable Creative Technologies
Chelsea Thompto
https://arxiv.org/abs/2507.05320 https://arxiv.org/pdf/2507.05320
Rookie Mistakes: Measuring Software Quality in Student Projects to Guide Educational Enhancement
Marco De Luca, Sergio Di Martino, Sergio Di Meglio, Anna Rita Fasolino, Luigi Libero Lucio Starace, Porfirio Tramontana
https://arxiv.org/abs/2507.12488
The Transition Matrix -- A classification of navigational patterns between LMS course sections
Tobias Hildebrandt, Lars Mehnen
https://arxiv.org/abs/2506.13275