Tootfinder

Opt-in global Mastodon full text search. Join the index!

@arXiv_csAI_bot@mastoxiv.page
2025-08-14 08:59:12

Mathematical Computation and Reasoning Errors by Large Language Models
Liang Zhang, Edith Aurora Graf
arxiv.org/abs/2508.09932 arxiv.org/pd…

@yaxu@post.lurk.org
2025-08-10 18:29:03

Two out of three night trains on this trip have had our sleeper carriage cancelled or replaced last minute (causing a 1hr delay)... I wonder if there's a bed bug problem in Europe at the moment

@arXiv_nlincd_bot@mastoxiv.page
2025-08-13 08:04:52

Discovery of 10,059 new three-dimensional periodic orbits of general three-body problem
Xiaoming Li, Shijun Liao
arxiv.org/abs/2508.08568 a…

@tiotasram@kolektiva.social
2025-08-04 15:49:00

Should we teach vibe coding? Here's why not.
Should AI coding be taught in undergrad CS education?
1/2
I teach undergraduate computer science labs, including for intro and more-advanced core courses. I don't publish (non-negligible) scholarly work in the area, but I've got years of craft expertise in course design, and I do follow the academic literature to some degree. In other words, I'm not the world's leading expert, but I have spent a lot of time thinking about course design and consider myself competent at it, with plenty of direct experience in what knowledge & skills I can expect from students as they move through the curriculum.
I'm also strongly against most uses of what's called "AI" these days (specifically, generative deep neural networks as supplied by our current cadre of techbros). There are a surprising number of completely orthogonal reasons to oppose the use of these systems, and a very limited number of reasonable exceptions (overcoming accessibility barriers is an example). On the grounds of environmental and digital-commons-pollution costs alone, using specifically the largest/newest models is unethical in most cases.
But as any good teacher should, I constantly question these evaluations, because I worry about the impact on my students should I eschew teaching relevant tech for bad reasons (or even for good reasons). I also want to make my reasoning clear to students, who should absolutely question me on this. That inspired me to ask a simple question: ignoring for one moment the ethical objections (which we shouldn't, of course; they're very stark), at what level in the CS major could I expect to teach a course about programming with AI assistance, and expect students to succeed at a more technically demanding final project than in a course at the same level where students were banned from using AI? In other words, at what level would I expect students to actually benefit from AI coding "assistance?"
To be clear, I'm assuming that students aren't using AI in other aspects of coursework: the topic of using AI to "help you study" is a separate one (TL;DR its gross value is not negative, but it's mostly not worth the harm to your metacognitive abilities, which AI-induced changes to the digital commons are making more important than ever).
So what's my answer to this question?
If I'm being incredibly optimistic, senior year. Slightly less optimistic, second year of a masters program. Realistic? Maybe never.
The interesting bit for you-the-reader is: why is this my answer? (Especially given that students would probably self-report significant gains at lower levels.) To start with, [this paper where experienced developers thought that AI assistance sped up their work on real tasks when in fact it slowed it down](arxiv.org/abs/2507.09089) is informative. There are a lot of differences in task between experienced devs solving real bugs and students working on a class project, but it's important to understand that we shouldn't have a baseline expectation that AI coding "assistants" will speed things up in the best of circumstances, and we shouldn't trust self-reports of productivity (or the AI hype machine in general).
Now we might imagine that coding assistants will be better at helping with a student project than at helping with fixing bugs in open-source software, since it's a much easier task. For many programming assignments that have a fixed answer, we know that many AI assistants can just spit out a solution based on prompting them with the problem description (there's another elephant in the room here to do with learning outcomes regardless of project success, but we'll ignore that one too; my focus here is on the project complexity students can reach, not learning outcomes). My question is about more open-ended projects, not assignments with an expected answer. Here's a second study (by one of my colleagues) about novices using AI assistance for programming tasks. It showcases how difficult it is to use AI tools well, and some of the stumbling blocks that novices in particular face.
But what about intermediate students? Might there be some level where the AI is helpful because the task is still relatively simple and the students are good enough to handle it? The problem with this is that as task complexity increases, so does the likelihood of the AI generating (or copying) code that uses more complex constructs which a student doesn't understand. Let's say I have second year students writing interactive websites with JavaScript. Without a lot of careful prompting (which those students don't yet know how to do), the AI is likely to suggest code that depends on several different frameworks, from React to jQuery, without actually setting up or including those frameworks, and of course these students would be way out of their depth trying to do that. This is a general problem: each programming class carefully limits the specific code frameworks and constructs it expects students to know based on the material it covers. There is no feasible way to limit an AI assistant to a fixed set of constructs or frameworks, using current designs. There are alternate designs where this would be possible (like AI search through adaptation from a controlled library of snippets) but those would be entirely different tools.
So what happens on a sizeable class project where the AI has dropped in buggy code, especially if it uses code constructs the students don't understand? Best case, they recognize that they don't understand and re-prompt, or quickly ask for help from an instructor or TA who helps them get rid of the stuff they don't understand and re-prompt or manually write stuff they do. Average case: they waste several hours and/or sweep the bugs partly under the rug, resulting in a project with significant defects. Students in their second and even third years of a CS major still have a lot to learn about debugging, and usually have significant gaps in their knowledge of even their most comfortable programming language. I do think, regardless of AI, we as teachers need to get better at teaching debugging skills, but the knowledge gaps are inevitable because there's just too much to know. In Python, for example, the LLM is going to spit out yields, async functions, try/finally, maybe even something like a while/else, or, with recent training data, the walrus operator. I can't expect even a fraction of 3rd-year students who have worked with Python since their first year to know all of these things, and based on how students approach projects where they have studied all the relevant constructs but have forgotten some, I'm not optimistic that these things will magically become learning opportunities. Student projects are better off working with a limited subset of a full programming language that the students have actually learned, and using AI coding assistants as currently designed makes this impossible. Beyond that, even when the "assistant" just introduces bugs using syntax the students understand, even through their 4th year many students struggle to understand the operation of moderately complex code they've written themselves, let alone code written by someone else. Having access to an AI that will confidently offer incorrect explanations for bugs will make this worse.
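To make that concrete, here's a minimal sketch (my own, not output from any actual assistant) of the kinds of Python constructs I mean: generators, async functions, try/finally, while/else, and the walrus operator. Every line is ordinary, idiomatic Python, yet each construct is something I can't assume a typical 2nd- or 3rd-year student has ever seen, let alone can debug.

```python
import asyncio

def chunked(items, size):
    # Generator: `yield` produces fixed-size chunks lazily.
    batch = []
    for item in items:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

def first_negative(numbers):
    # `while`/`else` plus the walrus operator (`:=`):
    # the `else` branch runs only if the loop never hits `break`.
    it = iter(numbers)
    found = None
    while (value := next(it, None)) is not None:
        if value < 0:
            found = value
            break
    else:
        print("no negative value found")
    return found

async def total_per_chunk(batches):
    # `async def` with `try`/`finally` for guaranteed cleanup.
    totals = []
    try:
        for batch in batches:
            await asyncio.sleep(0)  # stand-in for real async I/O
            totals.append(sum(batch))
    finally:
        pass  # cleanup (closing connections, files, ...) would go here
    return totals

if __name__ == "__main__":
    data = [3, 1, -4, 1, 5, -9, 2, 6]
    print(first_negative(data))                             # -4
    print(asyncio.run(total_per_chunk(chunked(data, 3))))   # [0, -3, 8]
```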
To be sure, a small minority of students will be able to overcome these problems, but that minority is the group that has a good grasp of the fundamentals and has broadened their knowledge through self-study, which earlier AI-reliant classes would make less likely to happen. In any case, I care about the average student, since we already have plenty of things about our institutions that make life easier for a favored few while making it worse for the average student (note that our construction of that favored few as the "good" students is a large part of this problem).
To summarize: because AI assistants introduce excess code complexity and difficult-to-debug bugs, they'll slow down rather than speed up project progress for the average student on moderately complex projects. On a fixed deadline, they'll result in worse projects, or necessitate less ambitious project scoping to ensure adequate completion, and I expect this remains broadly true through 4-6 years of study in most programs (don't take this as an endorsement of AI "assistants" for masters students; we've ignored a lot of other problems along the way).
There's a related problem: solving open-ended project assignments well ultimately depends on deeply understanding the problem, and AI "assistants" allow students to put a lot of code in their file without spending much time thinking about the problem or building an understanding of it. This is awful for learning outcomes, but also bad for project success. Getting students to see the value of thinking deeply about a problem is a thorny pedagogical puzzle at the best of times, and allowing the use of AI "assistants" makes the problem much much worse. This is another area I hope to see (or even drive) pedagogical improvement in, for what it's worth.
1/2

@arXiv_mathOC_bot@mastoxiv.page
2025-06-12 09:38:51

A Saddle Point Algorithm for Robust Data-Driven Factor Model Problems
Shabnam Khodakaramzadeh, Soroosh Shafiee, Gabriel de Albuquerque Gleizer, Peyman Mohajerin Esfahani
arxiv.org/abs/2506.09776

@arXiv_mathNA_bot@mastoxiv.page
2025-08-13 08:27:32

Fast adaptive tubal rank-revealing algorithm for t-product based tensor approximation
Qiaohua Liu, Jiehui Gu
arxiv.org/abs/2508.08557 arxiv…

@arXiv_econTH_bot@mastoxiv.page
2025-07-09 08:47:22

A Directed Lazy Random Walk Model to Three-Way Dynamic Matching Problem
Souvik Roy, Agamani Saha
arxiv.org/abs/2507.06126

@arXiv_hepth_bot@mastoxiv.page
2025-07-10 09:26:31

Hodge Duals in Spherical Compactifications
Arash Azizi
arxiv.org/abs/2507.06324 arxiv.org/pdf/2507.06324

@mszll@datasci.social
2025-07-03 09:08:01

Probably a good day 🥵 to remind you of our CoolWalks study: nature.com/articles/s41598-025
- Above all, stop burning fossil fuels
- Buildings (and trees) cast a lot of shade in cities. We systematically quantify the benefits for 🚶🚴
- Make shade plans, b…

Example of three different links connecting two nodes in the street network, from shortest and least shaded (1, blue) to longest and most shaded (3, green).

@arXiv_mathRA_bot@mastoxiv.page
2025-08-07 07:50:03

On three-dimensional associative algebras
U. Bekbaev, I. Rakhimov
arxiv.org/abs/2508.04104 arxiv.org/pdf/2508.04104

@arXiv_quantph_bot@mastoxiv.page
2025-08-04 09:51:10

Inference of maximum parsimony phylogenetic trees with model-based classical and quantum methods
Jiawei Zhang, Yibo Chen, Yang Zhou, Jun-Han Huang
arxiv.org/abs/2508.00468

@arXiv_mathNA_bot@mastoxiv.page
2025-07-10 08:57:41

A posteriori error estimates for a $C^1$ virtual element method applied to the thin plate vibration problem
Franco Dassi, Andres E. Rubiano, Iván Velásquez
arxiv.org/abs/2507.06846

@arXiv_csCG_bot@mastoxiv.page
2025-07-03 07:47:20

Empirical Analysis Of Heuristic and Approximation Algorithms for the Mutual-Visibility Problem
Vanja Stojanović, Bor Pangeršič
arxiv.org/abs/2507.01076

@arXiv_astrophSR_bot@mastoxiv.page
2025-06-04 13:52:46

This arxiv.org/abs/2505.09708 has been replaced.
initial toot: mastoxiv.page/@arXiv_…

@arXiv_csCL_bot@mastoxiv.page
2025-07-31 09:54:01

Investigating Hallucination in Conversations for Low Resource Languages
Amit Das, Md. Najib Hasan, Souvika Sarkar, Zheng Zhang, Fatemeh Jamshidi, Tathagata Bhattacharya, Nilanjana Raychawdhury, Dongji Feng, Vinija Jain, Aman Chadha
arxiv.org/abs/2507.22720

@arXiv_eessSY_bot@mastoxiv.page
2025-06-05 09:46:58

This arxiv.org/abs/2501.01817 has been replaced.
initial toot: mastoxiv.page/@arXiv_ees…

@arXiv_mathST_bot@mastoxiv.page
2025-08-05 09:46:40

Estimation of Algebraic Sets: Extending PCA Beyond Linearity
Alberto González-Sanz, Gilles Mordant, Álvaro Samperio, Bodhisattva Sen
arxiv.org/abs/2508.01976

@arXiv_csRO_bot@mastoxiv.page
2025-06-24 11:57:20

Robotic Manipulation of a Rotating Chain with Bottom End Fixed
Qi Jing Chen, Shilin Shan, Quang-Cuong Pham
arxiv.org/abs/2506.18355

@arXiv_csCE_bot@mastoxiv.page
2025-06-24 08:37:40

A predictor-corrector scheme for approximating signed distances using finite element methods
Amina El Bachari, Johann Rannou, Vladislav A. Yastrebov, Pierre Kerfriden, Susanne Claus
arxiv.org/abs/2506.17830

@arXiv_csAI_bot@mastoxiv.page
2025-07-01 10:22:03

Are Large Language Models Capable of Deep Relational Reasoning? Insights from DeepSeek-R1 and Benchmark Comparisons
Chi Chiu So, Yueyue Sun, Jun-Min Wang, Siu Pang Yung, Anthony Wai Keung Loh, Chun Pong Chau
arxiv.org/abs/2506.23128

@arXiv_csGT_bot@mastoxiv.page
2025-07-22 08:08:00

Probing EFX via PMMS: (Non-)Existence Results in Discrete Fair Division
Jarosław Byrka, Franciszek Malinka, Tomasz Ponitka
arxiv.org/abs/2507.14957

@arXiv_mathAP_bot@mastoxiv.page
2025-07-24 08:12:39

Formation of vacuum state and delta-shock in the solution of two-dimensional Riemann problem for zero pressure gas dynamics
Anamika Pandey, T. Raja Sekhar
arxiv.org/abs/2507.17213

@arXiv_astrophIM_bot@mastoxiv.page
2025-07-21 08:45:00

Supervised Extraction of the Thermal Sunyaev$-$Zel'dovich Effect with a Three-Dimensional Convolutional Neural Network
Cameron T. Pratt, Zhijie Qu, Joel N. Bregman
arxiv.org/abs/2507.13400

@arXiv_eessSP_bot@mastoxiv.page
2025-07-21 09:18:30

Distortion-Aware Hybrid Beamforming for Integrated Sensing and Communication
Zeyuan Zhang, Yue Xiu, Phee Lep Yeoh, Guangyi Liu, Zixing Wu, Ning Wei
arxiv.org/abs/2507.14018

@arXiv_mathNA_bot@mastoxiv.page
2025-08-08 08:03:22

Toroidal area-preserving parameterizations of genus-one closed surfaces
Marco Sutti, Mei-Heng Yueh
arxiv.org/abs/2508.05111 arxiv.org/pdf/2…

@arXiv_hepth_bot@mastoxiv.page
2025-06-27 09:34:29

Three-point functions from integrability in $\mathcal{N}=2$ orbifold theories
Dennis le Plat, Torben Skrzypek
arxiv.org/abs/2506.21323

@arXiv_mathAP_bot@mastoxiv.page
2025-07-23 09:37:32

Global existence and optimal time-decay rates of the compressible Navier-Stokes equations with density-dependent viscosities
Jie Fan, Xiangdi Huang, Anchun Ni
arxiv.org/abs/2507.16436

@hex@kolektiva.social
2025-07-21 01:50:28

Epstein shit and adjacent, Rural America, Poverty, Abuse
Everyone who's not a pedophile thinks pedophiles are bad, but there's this special obsessed hatred you'll find among poor rural Americans. The whole QAnon/Epstein obsession may not really make sense to folks raised in cities. Like, why do these people think *so much* about pedophiles? Why do they think that everyone in power is a pedophile? Why would the Pizzagate thing make sense to anyone? What is this unhinged shit? A lot of folks (who aren't anarchists) might be inclined to ask "why can't these people just let the cops take care of it?"
I was watching Legal Eagle's run down on the Trump Epstein thing earlier today and I woke up thinking about something I don't know if I've ever talked about. Now that I'm not in the US, I'm not at any risk of talking about it. I don't know how much I would have been before, but that's not something I'm gonna dig into right now. So let me tell you a story that might explain a few things.
I'm like 16, maybe 17. I have my license, so this girl I was dating/not dating/just friends with/whatever would regularly convince me to drive her and her friends around. I think she's like 15 at the time. Her friends are younger than her.
She tells me that there's a party we can go to where they have beer. She was told to invite her friends, so I can come too. We're going to pick her friends up (we regularly fill the VW Golf well beyond the legal limit and drive places) and head to the party.
So I take these girls, at least one of them 13 years old, down to this party. I'm already a bit sketched out bringing a 13 year old to a party. We drive out for a while. It's in the country. We drive down a long dark road. There are some barrel fires and a shack. This is all a bit strange, but not too abnormal for this area. We're a little ways outside of a place called Mill City (in Oregon).
We park and walk towards the shack. This dude who looks like a rat comes up and offers us beer. He laughs and talks to the girl who invited me, "What's he doing here? You're supposed to bring your girl friends." She's like, "He's our ride." I don't remember if he offered me a beer or not.
We go over to this shed and everyone starts smoking, except me because I didn't smoke until I turned 18. The other girls start talking about the rat face dude, who's wandered over by the fire with some other guys. They're mainly teasing one of the 13 year old girls about having sex with him a bunch of times. They say he's like, 32 or something. The other girls joke about him only having sex with 13 year olds because he's too ugly to have sex with anyone closer to his own age.
Somewhere along the line it comes out that he's a cop. I never forgot that; it's absolutely seared into my memory. I can still picture his face perfectly, decades later, and them talking about how he's a deputy, he was in his 30's, and he was having sex with a 13 year old girl. I was the only boy there, but there were a few older men. This was a chunk of the good ol' boys club of the town. I think there were a couple of cops besides the one deputy, and a judge or the mayor or some kind of big local VIP.
I kept trying to get my friend to leave, but she wanted to stay. Turns out underage drinking with cops seems like a great deal if you're a kid, because you know you won't get busted. I left alone, creeped the fuck out.
I was told later that I wasn't invited and that I couldn't talk about it. I've always been good at compartmentalization, so I never did.
Decades later it occurred to me what was actually happening. I'm pretty sure that cop was giving meth he'd seized as evidence to these kids. This wasn't some one-off thing. It was regular. Who knows how many decades it went on after I left, or how many decades it had been going on before I found out. I knew this type of thing had happened at least a few times before because that's how that 13 year old girl and that 32 year old cop had hooked up in the first place.
Hearing about Epstein's MO, targeting these teenage girls from fucked up backgrounds, it's right there for me. I wouldn't be surprised if they were involved in sex trafficking of minors or some shit like that... but who would you call if you found out? Half the sheriff's department was there and the other half would cover for them.
You live in the city and shit like that doesn't happen, or at least you don't think it happens. But rural poor folks have this intuition about power and abuse. It's right there and you know it.
Trump is such a familiar character for me, because he's exactly that small town mayor or sheriff. He'll talk about being tough on crime and hunting down pedophiles, while hanging out at a party that exists so people can fuck 8th graders.
The problem with the whole thing is that rural folks will never break the cognitive dissonance between "kill the pedos" and "back the blue." They'll never go kill those cops. No, the pedos must be somewhere else. It must be the elites. It must be outsiders. It can't be the cops and good ol' boys everyone respects. It can't be the mayor who rigs the election to win every time. It can't be the "good upstanding" sheriff. Nah, it's the Clintons.
To be fair, it's probably also the Clintons, a bunch of other politicians, billionaires, etc. Epstein was exactly who everyone thought he was, and he didn't get away with it for so long without a whole lot of really powerful help.
There are still powerful people who got away with involvement with #Epstein. #Trump is one of them, but I don't really believe that he's the only one.
#USPol #ACAB