Tootfinder

Opt-in global Mastodon full text search. Join the index!

@threeofus@mstdn.social
2025-10-07 10:32:23

I hate being told what to do. I also find it hard to accept ideas from other people - my partner especially. I think it’s because I want to find solutions myself and do it in my own time. I’m often overwhelmed with things when she starts talking about another thing that needs sorting out. I feel really angry when she does that. I will try to communicate better in those situations. Sometimes I don’t need problem solving, just a hug.

@tiotasram@kolektiva.social
2025-08-04 15:49:00

Should we teach vibe coding? Here's why not.
Should AI coding be taught in undergrad CS education?
1/2
I teach undergraduate computer science labs, including for intro and more-advanced core courses. I don't publish (non-negligible) scholarly work in the area, but I've got years of craft expertise in course design, and I do follow the academic literature to some degree. In other words, I'm not the world's leading expert, but I have spent a lot of time thinking about course design, and consider myself competent at it, with plenty of direct experience in what knowledge & skills I can expect from students as they move through the curriculum.
I'm also strongly against most uses of what's called "AI" these days (specifically, generative deep neural networks as supplied by our current cadre of techbros). There are a surprising number of completely orthogonal reasons to oppose the use of these systems, and a very limited number of reasonable exceptions (overcoming accessibility barriers is an example). On the grounds of environmental and digital-commons-pollution costs alone, using specifically the largest/newest models is unethical in most cases.
But as any good teacher should, I constantly question these evaluations, because I worry about the impact on my students should I eschew teaching relevant tech for bad reasons (and even for good reasons). I also want to make my reasoning clear to students, who should absolutely question me on this. That inspired me to ask a simple question: ignoring for one moment the ethical objections (which we shouldn't, of course; they're very stark), at what level in the CS major could I expect to teach a course about programming with AI assistance, and expect students to succeed at a more technically demanding final project than a course at the same level where students were banned from using AI? In other words, at what level would I expect students to actually benefit from AI coding "assistance?"
To be clear, I'm assuming that students aren't using AI in other aspects of coursework: the topic of using AI to "help you study" is a separate one (TL;DR: its gross value is not negative, but it's mostly not worth the harm to your metacognitive abilities, which AI-induced changes to the digital commons are making more important than ever).
So what's my answer to this question?
If I'm being incredibly optimistic, senior year. Slightly less optimistic, second year of a masters program. Realistic? Maybe never.
The interesting bit for you-the-reader is: why is this my answer? (Especially given that students would probably self-report significant gains at lower levels.) To start with, [this paper where experienced developers thought that AI assistance sped up their work on real tasks when in fact it slowed it down](arxiv.org/abs/2507.09089) is informative. There are a lot of differences in task between experienced devs solving real bugs and students working on a class project, but it's important to understand that we shouldn't have a baseline expectation that AI coding "assistants" will speed things up in the best of circumstances, and we shouldn't trust self-reports of productivity (or the AI hype machine in general).
Now we might imagine that coding assistants will be better at helping with a student project than at helping with fixing bugs in open-source software, since it's a much easier task. For many programming assignments that have a fixed answer, we know that many AI assistants can just spit out a solution based on prompting them with the problem description (there's another elephant in the room here to do with learning outcomes regardless of project success, but we'll ignore that one too; my focus here is on project complexity, not learning outcomes). My question is about more open-ended projects, not assignments with an expected answer. Here's a second study (by one of my colleagues) about novices using AI assistance for programming tasks. It showcases how difficult it is to use AI tools well, and some of the stumbling blocks that novices in particular face.
But what about intermediate students? Might there be some level where the AI is helpful because the task is still relatively simple and the students are good enough to handle it? The problem with this is that as task complexity increases, so does the likelihood of the AI generating (or copying) code that uses more complex constructs which a student doesn't understand. Let's say I have second year students writing interactive websites with JavaScript. Without a lot of careful prompting, of a kind those students don't know how to do, the AI is likely to suggest code that depends on several different frameworks, from React to jQuery, without actually setting up or including those frameworks, and of course these students would be way out of their depth trying to do that. This is a general problem: each programming class carefully limits the specific code frameworks and constructs it expects students to know based on the material it covers. There is no feasible way to limit an AI assistant to a fixed set of constructs or frameworks, using current designs. There are alternate designs where this would be possible (like AI search through adaptation from a controlled library of snippets) but those would be entirely different tools.
So what happens on a sizeable class project where the AI has dropped in buggy code, especially if it uses code constructs the students don't understand? Best case, they realize that they don't understand it, and either re-prompt or quickly ask for help from an instructor or TA, who helps them get rid of the stuff they don't understand and re-prompt or manually add stuff they do. Average case: they waste several hours and/or sweep the bugs partly under the rug, resulting in a project with significant defects. Students in their second and even third years of a CS major still have a lot to learn about debugging, and usually have significant gaps in their knowledge of even their most comfortable programming language. I do think regardless of AI we as teachers need to get better at teaching debugging skills, but the knowledge gaps are inevitable because there's just too much to know. In Python, for example, the LLM is going to spit out yields, async functions, try/finally, maybe even something like a while/else, or with recent training data, the walrus operator. I can't expect even a fraction of 3rd year students who have worked with Python since their first year to know about all these things, and based on how students approach projects where they have studied all the relevant constructs but have forgotten some, I'm not optimistic that these things will magically become learning opportunities. Student projects are better off working with a limited subset of a full programming language that the students have actually learned, and using AI coding assistants as currently designed makes this impossible. Beyond that, even when the "assistant" just introduces bugs using syntax the students understand, even through their 4th year many students struggle to understand the operation of moderately complex code they've written themselves, let alone code written by someone else. Having access to an AI that will confidently offer incorrect explanations for bugs will make this worse.
To be sure a small minority of students will be able to overcome these problems, but that minority is the group that has a good grasp of the fundamentals and has broadened their knowledge through self-study, which earlier AI-reliant classes would make less likely to happen. In any case, I care about the average student, since we already have plenty of stuff about our institutions that makes life easier for a favored few while being worse for the average student (note that our construction of that favored few as the "good" students is a large part of this problem).
To summarize: because AI assistants introduce excess code complexity and difficult-to-debug bugs, they'll slow down rather than speed up project progress for the average student on moderately complex projects. On a fixed deadline, they'll result in worse projects, or necessitate less ambitious project scoping to ensure adequate completion, and I expect this remains broadly true through 4-6 years of study in most programs (don't take this as an endorsement of AI "assistants" for masters students; we've ignored a lot of other problems along the way).
There's a related problem: solving open-ended project assignments well ultimately depends on deeply understanding the problem, and AI "assistants" allow students to put a lot of code in their file without spending much time thinking about the problem or building an understanding of it. This is awful for learning outcomes, but also bad for project success. Getting students to see the value of thinking deeply about a problem is a thorny pedagogical puzzle at the best of times, and allowing the use of AI "assistants" makes the problem much much worse. This is another area I hope to see (or even drive) pedagogical improvement in, for what it's worth.
1/2

@teledyn@mstdn.ca
2025-10-02 20:38:30

Yes, The News is reaching new heights of bad, but I don't think keywords are really the cause of my suffering.
So I ask myself why they irk me and it generally comes back that I want control over others, which, as tyrants inevitably find out, isn't really possible. Hard enough controlling myself!
So, given what feeble control I have, I ask what needs doing, here, now, and find I need to know what's happening around me, here, now, to answer 😅
My strategy for news is roughly
- compare multiple sources eg AP, Reuters etc
- ignore what anyone says
- ignore claims what they will do or have done
- ignore predictions and interpretations
- be skeptical of causal inferences
- attend to what was actually done
The first and last, when combined, often make it amusing to see which actual observations are included or left out. Humans are awfully clever, and so cunning in their rhetoric 🤣

@whitequark@mastodon.social
2025-10-01 08:42:39

there exist several pieces of folk wisdom:
- "you cannot run your own mail server in 2025, this is too hard and time consuming" (completely false, i've done this since ~2010 with minimal ongoing maintenance)
- "you can do it but gmail will sort your mail to spam" (partially true and what i want to talk about here)
recently, my hand was forced: i had to migrate my mail server across providers and regions. it's unimportant why but important what the …

@ErikUden@mastodon.de
2025-07-31 21:43:49

my grandkids (who went no contact for several years due to my robophobia, until I got dementia): ok grandpa, this is RobDoc, he will be around to remind you to take your meds
me: I don't want no tinskin wireback clanker tellin' me what to do

@tiotasram@kolektiva.social
2025-07-22 00:03:45

Overly academic/distanced ethical discussions
Had a weird interaction with @/brainwane@social.coop just now. I misinterpreted one of their posts quoting someone else, and I think the combination of that plus an interaction pattern where I'd assume their stance on something and respond critically to it ended up with me getting blocked. I don't have hard feelings exactly, and this post is only partly about this particular person, but I noticed something interesting by the end of the conversation that had been bothering me. They repeatedly criticized me for assuming what their position was, but never actually stated their position. They didn't say: "I'm bothered you assumed my position was X, it's actually Y." They just said "I'm bothered you assumed my position was X, please don't assume my position!" I get that it's annoying to have people respond to a straw man version of your argument, but when I in response asked some direct questions about what their position was, they gave some non-answers and then blocked me. It's entirely possible it's a coincidence, and they just happened to run out of patience on that iteration, but it makes me take their critique of my interactions a bit less seriously. I suspect that they just didn't want to hear what I was saying, while at the same time they wanted to feel as if they were someone who values public critique and open discussion of tricky issues (if anyone reading this post also followed our interaction and has a different opinion of my behavior, I'd be glad to hear it; it's possible I'm effectively being an asshole here and it would be useful to hear that if so).
In any case, the fact that at the end of the entire discussion, I'm realizing I still don't actually know their position on whether they think the AI use case in question is worthwhile feels odd. They praised the system on several occasions, albeit noting some drawbacks while doing so. They said that the system was possibly changing their anti-AI stance, but then got mad at me for assuming this meant that they thought this use-case was justified. Maybe they just haven't made up their mind yet but didn't want to say that?
Interestingly, in one of their own blog posts that got linked in the discussion, they discuss a different AI system, and despite listing a bunch of concrete harms, conclude that it's okay to use it. That's fine; I don't think *every* use of AI is wrong on balance, but what bothered me was that their post dismissed a number of real ethical issues by saying essentially "I haven't seen calls for a boycott over this issue, so it's not a reason to stop use." That's an extremely socially conformist version of ethics that doesn't sit well with me. The discussion also ended up linking this post: chelseatroy.com/2024/08/28/doe which bothered me in a related way. In it, Troy describes classroom teaching techniques for introducing and helping students explore the ethics of AI, and they seem mostly great. They avoid prescribing any particular correct stance, which is important when teaching given the power relationship, and they help students understand the limitations of their perspectives regarding global impacts, which is great. But the overall conclusion of the post is that "nobody is qualified to really judge global impacts, so we should focus on ways to improve outcomes instead of trying to judge them." This bothers me because we actually do have a responsibility to make decisive ethical judgments despite limitations of our perspectives. If we never commit to any ethical judgment against a technology because we think our perspective is too limited to know the true impacts (which I'll concede it invariably is) then we'll have to accept every technology without objection, limiting ourselves to trying to improve their impacts without opposing them. Given who currently controls most of the resources that go into exploration for new technologies, this stance is too permissive. Perhaps if our objection to a technology was absolute and instantly effective, I'd buy the argument that objecting without a deep global view of the long-term risks is dangerous. 
As things stand, I think that objecting to the development/use of certain technologies in certain contexts is necessary, and although there's a lot of uncertainty, I expect strongly enough that the overall outcomes of objection will be positive that I think it's a good thing to do.
The deeper point here I guess is that this kind of "things are too complicated, let's have a nuanced discussion where we don't come to any conclusions because we see a lot of unknowns along with definite harms" really bothers me.

@blakes7bot@mas.torpidity.net
2025-09-26 06:00:55

#Blakes7 Series C, Episode 01 - Aftermath
AVON: So you have very little to fear from Servalan. There is no real Federation anymore. It's unlikely that anyone will come looking for you. You won't have to hide any longer.
MELLANBY: It's odd. For the last twenty years that's all I've dreamed of, freedom to do what I want, go where I want. Now that I've got it,…

Claude 3.7 describes the image as: "This image appears to be from a science fiction TV series from the late 1970s or early 1980s based on the visual style and production quality. The scene shows two people in what looks like a spacecraft or futuristic setting, with metallic elements visible in the background. 

One person wears a black outfit while the other wears an elegant white off-shoulder garment and appears to be holding or examining something. The person on the right is adorned with a ne…
@inthehands@hachyderm.io
2025-09-20 17:39:43

That’s a really hard problem, and not one that will be solved in my ad hoc Saturday posts (or in my replies). I have no answers. I do however have one model to study, which is this org my dad helped found:
wildernessalliance.org
They help people set up local wilderness stewardship volunteer groups. (There’s a wilderness area near you! You want to help take care of it! How do you do that? What is actually helpful?? What are models and best practices??? Where do you even start????)
8/

@ruari@velocipederider.com
2025-09-09 13:00:54

I need to show progress for an action in a shell script I made for work. I am not a dev and I am not going to vibe code something to replace the script, so the "UI" is just the dialog utility using ncurses. Sadly this does not have a spinner, which is what I actually want. I want one because I do not know in advance how long extracting the archives will take. Dialog expects you to give it progress updates up to 100%.
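For what it's worth, since `dialog --gauge` keeps re-reading percentages from stdin, one common workaround is to fake a spinner by "pulsing" the bar while the extraction runs in the background. A minimal POSIX-sh sketch, not the original script: the `pulse_while_running` helper is a hypothetical name, and `tar` stands in for whatever archive tool the real script uses.

```shell
#!/bin/sh
# Fake a spinner with dialog's gauge widget: sweep the percentage
# from 0 to 100 over and over while a background job is alive.

# Print a pulsing percentage while process $1 is still running.
pulse_while_running() {
  pct=0
  while kill -0 "$1" 2>/dev/null; do
    echo "$pct"                 # dialog --gauge re-reads each line
    pct=$(( (pct + 10) % 110 )) # 0, 10, ..., 100, then wrap to 0
    sleep 0.2
  done
  echo 100                      # snap to "done" when the job exits
}

# Usage (interactive):
#   tar -xf some-archive.tar &
#   pulse_while_running $! | dialog --gauge "Extracting..." 8 50 0
```

A bar that sweeps repeatedly isn't a true spinner, but it signals "still working" without needing to know the total duration in advance, which is the thing dialog's percentage model otherwise demands.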

@tiotasram@kolektiva.social
2025-07-28 13:04:34

How popular media gets love wrong
Okay, so what exactly are the details of the "engineered" model of love from my previous post? I'll try to summarize my thoughts and the experiences they're built on.
1. "Love" can be thought of like a mechanism that's built by two (or more) people. In this case, no single person can build the thing alone; to work, it needs contributions from multiple people (I suppose self-love might be an exception to that). In any case, the builders can intentionally choose how they build (and maintain) the mechanism, they can build it differently to suit their particular needs/wants, and they will need to maintain and repair it over time to keep it running. It may need winding, or fuel, or charging, plus oil changes and bolt-tightening, etc.
2. Any two (or more) people can choose to start building love between them at any time. No need to "find your soulmate" or "wait for the right person." Now the caveat is that the mechanism is difficult to build and requires lots of cooperation, so there might indeed be "wrong people" to try to build love with. People in general might experience more failures than successes. The key component is slowly-escalating shared commitment to the project, which is negotiated between the partners so that neither one feels like they've been left to do all the work themselves. Since it's a big scary project though, it's very easy to decide it's too hard and give up, and so the builders need to encourage each other and pace themselves. The project can only succeed if there's mutual commitment, and that will certainly require compromise (sometimes even sacrifice, though not always). If the mechanism works well, the benefits (companionship; encouragement; praise; loving sex; hugs; etc.) will be well worth the compromises you make to build it, but this isn't always the case.
3. The mechanism is prone to falling apart if not maintained. In my view, the "fire" and "appeal" models of love don't adequately convey the need for this maintenance and lead to a lot of under-maintained relationships many of which fall apart. You'll need to do things together that make you happy, do things that make your partner happy (in some cases even if they annoy you, but never in a transactional or box-checking way), spend time with shared attention, spend time alone and/or apart, reassure each other through words (or deeds) of mutual beliefs (especially your continued commitment to the relationship), do things that comfort and/or excite each other physically (anywhere from hugs to hand-holding to sex) and probably other things I'm not thinking of. Not *every* relationship needs *all* of these maintenance techniques, but I think most will need most. Note especially that patriarchy teaches men that they don't need to bother with any of this, which harms primarily their romantic partners but secondarily them as their relationships fail due to their own (cultivated-by-patriarchy) incompetence. If a relationship evolves to a point where one person is doing all the maintenance (& improvement) work, it's been bent into a shape that no longer really qualifies as "love" in my book, and that's super unhealthy.
4. The key things to negotiate when trying to build a new love are, first, how to work together in the first place, and how to be comfortable around each others' habits (or how to change those habits). Second, what level of commitment you have right now, and how/when you want to increase that commitment. Additionally, I think it's worth checking in about what you're each putting into and getting out of the relationship, to ensure that it continues to be positive for all participants. To build a successful relationship, you need to be able to incrementally increase the level of commitment to one that you're both comfortable staying at long-term, while ensuring that for both partners, the relationship is both a net benefit and has manageable costs (those two things are not the same). Obviously it's not easy to actually have conversations about these things (congratulations if you can just talk about this stuff) because there's a huge fear of hearing an answer that you don't want to hear. I think the range of discouraging answers which actually spell doom for a relationship is smaller than people think, and there's usually a reasonable "shoulder" you can fall into where things aren't on a good trajectory but could be brought back onto one, but even so these conversations are scary. Still, I think only having honest conversations about these things when you're angry at each other is not a good plan. You can also try to communicate some of these things via non-conversational means, if that feels safer, and at least being aware that these are the objectives you're pursuing is probably helpful.
I'll post two more replies here about my own experiences that led me to this mental model and trying to distill this into advice, although it will take me a moment to get to those.
#relationships #love

@mariyadelano@hachyderm.io
2025-07-21 19:00:54

Oh no it happened - client for a research project I’m working on got upset that we’re doing manual data analysis of survey responses, and complained about why we are so slow when their internal team working on a different report got “everything done in a couple of days with #AI tools”
And then they told us that waiting for proper human analysis is a “waste of time” and that we need to just chuck our dataset into AI and “get it over with”
I really don’t know what to do right now 🥲
Trying to do this properly on their expected timeline will mean very little sleep for multiple days, but giving up on the project quality and dumping it into AI will make this entire project a waste of time. (As I wouldn't be able to trust the output of the analysis, or be proud to showcase the final report as an example of our work, not to mention that I don't want to support this expectation to rush everything at work with these AI models)

@robpike@hachyderm.io
2025-07-21 10:44:40

An excellent explanation of what bothers me about LLMs, especially in schools (but also more broadly). It's changing who we are - we the community, not just individuals - and in ways we cannot control or manage. I guess some people want those changes. I do not.
It's ironic that writing has never been more central to our lives, with texting and messaging and blogging and social media, yet we are moving towards a sterile world where no one will know how to write.
discuss.systems/@rebeccawb/114

@brandizzi@mastodon.social
2025-09-03 15:07:14

Do you have to put images in documents and sites, want to add alt text, but this is just soooo boring? Here is a lifehack that helped me out a lot, still helps:
When you save the image file, use a description of the image as its name.
Many apps use the file name as alt text. Even if not, the alt text will be there already to be copied.
What is surprising is how easy it is: I never struggle to name the file, but typing the same in an alt text textbox is somehow so annoying…

@jake4480@c.im
2025-09-23 04:05:55

Tim and Eric writing a horror movie, say it will be "sicker than most people are going to want to watch", and I say bring it, Tim and Eric. Let's see what you can do. Sicken me beyond belief. fangoria.com/tim-heidecker-eri

@cheryanne@aus.social
2025-09-22 10:17:45

By the way, what time is the Rapture occurring tomorrow? I have a few things I want to do beforehand, so hopefully it will be later in the day.
#Rapture #Sarcasm

@tiotasram@kolektiva.social
2025-07-30 17:56:35

Just read this post by @… on an optimistic AGI future, and while it had some interesting and worthwhile ideas, it's also in my opinion dangerously misguided, and plays into the current AGI hype in a harmful way.
social.coop/@eloquence/1149406
My criticisms include:
- Current LLM technology has many layers, but the biggest, most capable models are all tied to corporate datacenters and require inordinate amounts of energy and water to run. Trying to use these tools to bring about a post-scarcity economy will burn up the planet. We urgently need more-capable but also vastly more efficient AI technologies if we want to use AI for a post-scarcity economy, and we are *not* nearly on the verge of this despite what the big companies pushing LLMs want us to think.
- I can see that permacommons.org claims a small level of expenses on AI equates to low climate impact. However, given the current deep subsidies in place by the big companies to attract users, that isn't a great assumption. The fact that their FAQ dodges the question about which AI systems they use isn't a great look.
- These systems are not free in the same way that Wikipedia or open-source software is. To run your own model you need a data harvesting & cleaning operation that costs millions of dollars minimum, and then you need millions of dollars' worth of storage & compute to train & host the models. Right now, big corporations are trying to compete for market share by heavily subsidizing these things, but if you go along with that, you become dependent on them, and you'll be screwed when they jack up the price to a profitable level later. I'd love to see open dataset initiatives and the like, and there are some of these things, but not enough yet, and many of the initiatives focus on one problem while ignoring others (fine for research but not the basis for a society yet).
- Between the environmental impacts, the horrible labor conditions and undercompensation of data workers who filter the big datasets, and the impacts of both AI scrapers and AI commons pollution, the developers of the most popular & effective LLMs have a lot to answer for. This project only really mentions environmental impacts, which makes me think that they're not serious about ethics, which in turn makes me distrustful of the whole enterprise.
- Their language also ends up encouraging AI use broadly while totally ignoring several entire classes of harm, so they're effectively contributing to AI hype, especially with such casual talk of AGI and robotics as if embodied AGI were just around the corner. To be clear about this point: we are several breakthroughs away from AGI under the most optimistic assumptions, and giving the impression that those will happen soon plays directly into the hands of the Sam Altmans of the world who are trying to make money off the impression of impending huge advances in AI capabilities. Adding to the AI hype is irresponsible.
- I've got a more philosophical criticism that I'll post about separately.
I do think that the idea of using AI & other software tools, possibly along with robotics and funded by many local cooperatives, in order to make businesses obsolete before they can do the same to all workers, is a good one. Get your local library to buy a knitting machine alongside their 3D printer.
Lately I've felt too busy criticizing AI to really sit down and think about what I do want the future to look like, even though I'm a big proponent of positive visions for the future as a force multiplier for criticism, and this article is inspiring to me in that regard, even if the specific project doesn't seem like a good one.

@compfu@mograph.social
2025-09-18 21:19:59

@… Hi, at the start of the year you collected bread knife measurements. What was that for?
digipres.club/@timixretroplays

@tiotasram@kolektiva.social
2025-09-14 12:01:38

TL;DR: what if instead of denying the harms of fascism, we denied its suppressive threats of punishment
Many of us have really sharpened our denial skills since the advent of the ongoing pandemic (perhaps you even hesitated at the word "ongoing" there and thought "maybe I won't read this one, it seems like it'll be tiresome"). I don't say this as a preface to a fiery condemnation or a plea to "sanity" or a bunch of evidence of how bad things are, because I too have honed my denial skills in these recent years, and I feel like talking about that development.
Denial comes in many forms, including strategic information avoidance ("I don't have time to look that up right now", "I keep forgetting to look into that", "well this author made a tiny mistake, so I'll click away and read something else", "I'm so tired of hearing about this, let me scroll farther", etc.), strategic dismissal ("look, there's a bit of uncertainty here, I should ignore this", "this doesn't line up perfectly with my anecdotal experience, it must be completely wrong", etc.), and strategic forgetting ("I don't remember what that one study said exactly; it was painful to think about", "I forgot exactly what my friend was saying when we got into that argument", etc.). It's in fact a kind of skill that you can get better at, along with the complementary skill of compartmentalization. It can of course be incredibly harmful, and a huge genre of fables exists precisely to highlight its harms, but it also has some short-term psychological benefits, chiefly in the form of muting anxiety. This is not an endorsement of denial (the harms can be catastrophic), but I want to acknowledge that there *are* short-term benefits. Via compartmentalization, it's even possible to be honest with ourselves about some of our own denials without giving them up immediately.
But as I said earlier, I'm not here to talk you out of your denials. Instead, given that we are so good at denial now, I'm here to ask you to be strategic about it. In particular, we live in a world awash with propaganda/advertising that serves both political and commercial ends. Why not use some of our denial skills to counteract that?
For example, I know quite a few people in complete denial of our current political situation, but those who aren't (including myself) often express consternation about just how many people in the country are supporting literal fascism. Of course, logically that appearance of widespread support is going to be partly a lie, given how much our public media is beholden to the fascists or outright on their side. Finding better facts on the true level of support is hard, but in the meantime, why not be in denial about the "fact" that Trump has widespread popular support?
To give another example: advertisers constantly barrage us with messages about our bodies and weight, trying to keep us insecure (and thus in the mood to spend money to "fix" the problem). For sure cutting through that bullshit by reading about body positivity etc. is a better solution, but in the meantime, why not be in denial about there being anything wrong with your body?
This kind of intentional denial certainly has its own risks (our bodies do actually need regular maintenance, for example, so complete denial on that front is risky) but there's definitely a whole lot of misinformation out there that it would be better to ignore. To the extent such denial expands to a more general denial of underlying problems, this idea of intentional denial is probably just bad. But I sure wish that in a world where people (including myself) routinely deny significant widespread dangers like COVID-19's long-term risks or the ongoing harms of escalating fascism, they'd at least also deny some of the propaganda keeping them unhappy and passive. Instead of being in denial about US-run concentration camps, why not be in denial that the state will be able to punish you for resisting them?

@tiotasram@kolektiva.social
2025-09-23 11:58:48

TL;DR: spending money to find the cause of autism is a eugenics project, and those resources could have been spent improving accommodations for Autistic people instead.
To preface this, I'm not Autistic but I'm neurodivergent with some overlap.
We need to be absolutely clear right now: the main purpose of *all* research into the causes of autism is eugenics: a cause is sought because non-autistic people want to *eliminate* autistic people via some kind of "cure." It should be obvious, but a "cured autistic person" who did not get a say in the decision to administer that "cure" has been subjected to non-consensual medical intervention at an extremely unethical level. Many autistic people have been exceptionally clear that they don't want to be "cured," including some people with "severe autism" such as people who are nonverbal.
When we think things like "but autism makes life so hard for some people," we're saying that the difficulties in their life are a result of their neurotype, rather than blaming the society that punishes & devalues the behaviors that result from that neurotype at every turn. To the extent that an individual autistic person wants to modify their neurotype and/or otherwise use aids to modify themselves to reduce difficulties in their life, they should be free to pursue that. But we should always ask the question: "what if we changed their social or physical environment instead, so that they didn't have to change themselves?" The point is that difficulties are always the product of person × environment, and many of the difficulties we attribute to autism should instead be attributed to anti-autistic social & physical spaces, and resources spent trying to "find the cause of autism" would be *much* better spent trying to develop & promote better accommodations for autism. Or at least, that's the case if you care about the quality of life of autistic people and/or recognize their enormous contributions to society (e.g., Wikipedia could not exist in anything near its current form without autistic input). If instead you think of Autistic people as gross burdens that you'd rather be rid of, then it makes sense to investigate the causes of autism so that you can eventually find a "cure."
All of that to say: the best response to lies about the causes of autism is to ask "What is the end goal of identifying the cause?" instead of saying "That's not true, here's better info about the causes."
#autism #trump
P.S. yes, I do think about the plight of parents of autistic kids, particularly those that have huge struggles fitting into the expectations of our society. They've been put in a position where society constantly bullies and devalues their kid, and makes it mostly impossible for their kid to exist without constant parental support, which is a lot of work and which is unfair when your peers get the school system to do a massive amount of childcare. But in that situation, your kid is in an even worse position than you as the direct victim of all of that, and you have a choice: are you going to be their ally against the unfair world, or are you going to blame them and try to get them to conform enough that you can let the school system take care of them, despite the immense pain that that will provoke? Please don't come crying for sympathy if you choose the latter option (and yes, helping them be able to independently navigate society is a good thing for them, but there's a difference between helping them as their ally, at their pace, and trying to force them to conform to reduce the burden society has placed on you).

@unchartedworlds@scicomm.xyz
2025-07-19 09:37:38
Content warning: what I plan to contribute to BiCon (July 2025, Nottingham and online)

I have plans for a few different things...
• Wednesday evening, 23 July, I'm hosting a 1-hour online thing that'll be open to whoever's already booked by then. It'll be a somewhat structured talky session on a theme of "inventing the BiCon you want", and an opportunity to meet other people who are going. Newcomers especially welcome :-)
• On the Friday morning at in-person BiCon, I'm offering a session called "Curiosity Skills". It's about which kinds of questions are genuinely "open", versus which kinds of questions allow your own assumptions and biases to sneak in! It'll be partly me explaining, and partly the chance for some little conversational experiments, to notice how the different questions work in practice.
• Subject to finding a nice quiet airy place to do it, I plan to run a mask-decoration session at some point on the Friday. I'll bring a few different kinds of masks, plus lace, beads and sequins, and some past experience of how to decorate masks without compromising the seal or the breathability. I'll invite donations for the materials. Decorate your mask for Pride! or for BiCon partying! or just because you like to :-)
• Also I will bring my badges and zines, and have them on sale!
By the way, if you might come to the Wednesday evening online bit, let me know what time you'd like it to start, because that's a question I have open at the moment. Could be 19.00, 19.30, 20.00. For myself I don't really mind, but I'm aware that some people have teatimes or child-bedtimes that can't easily be moved.
#bi #trans #Nottingham #EastMidlands #England #UK #BiCon #bisexual #bisexuality #queer #LGBT #LGBTQ

@tiotasram@kolektiva.social
2025-07-31 16:25:48

LLM coding is the opposite of DRY
An important principle in software engineering is DRY: Don't Repeat Yourself. We recognize that having the same code copied in more than one place is bad for several reasons:
1. It makes the entire codebase harder to read.
2. It increases maintenance burden, since any problems in the duplicated code need to be solved in more than one place.
3. Because it becomes possible for the copies to drift apart if changes to one aren't transferred to the other (maybe the person making the change has forgotten there was a copy), it makes the code more error-prone and harder to debug.
All modern programming languages make it almost entirely unnecessary to repeat code: we can move the repeated code into a "function" or "module" and then reference it from all the different places it's needed. At a larger scale, someone might write an open-source "library" of such functions or modules and instead of re-implementing that functionality ourselves, we can use their code, with an acknowledgement. Using another person's library this way is complicated, because now you're dependent on them: if they stop maintaining it or introduce bugs, you've inherited a problem, but still, you could always copy their project and maintain your own version, and it would be not much more work than if you had implemented stuff yourself from the start. It's a little more complicated than this, but the basic principle holds, and it's a foundational one for software development in general and the open-source movement in particular. The network of "citations" as open-source software builds on other open-source software and people contribute patches to each others' projects is a lot of what makes the movement into a community, and it can lead to collaborations that drive further development. So the DRY principle is important at both small and large scales.
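To make the function/module point concrete, here's a minimal sketch (the names and validation rule are hypothetical, just for illustration): instead of copying the same check into two places, both callers reference one shared function, so a fix in one spot fixes every use.

```python
def is_valid_email(address: str) -> bool:
    """One shared implementation: fix a bug here and every caller benefits."""
    return "@" in address and "." in address.split("@")[-1]

def register_user(email: str) -> str:
    # References the shared check instead of duplicating it.
    if not is_valid_email(email):
        return "rejected"
    return "registered"

def invite_user(email: str) -> str:
    # Re-uses the same function, so the two code paths can never drift apart.
    if not is_valid_email(email):
        return "rejected"
    return "invited"
```

If the validation rule ever needs to change, there's exactly one place to edit, which is the whole point of DRY.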
Unfortunately, the current crop of hyped-up LLM coding systems from the big players are antithetical to DRY at all scales:
- At the library scale, they train on open source software but then (with some unknown frequency) replicate parts of it line-for-line *without* any citation [1]. The person who was using the LLM has no way of knowing that this happened, or even any way to check for it. In theory the LLM company could build a system for this, but it's not likely to be profitable unless the courts actually start punishing these license violations, which doesn't seem likely based on results so far and the difficulty of finding out that the violations are happening. By creating these copies (and also mash-ups, along with lots of less-problematic stuff), the LLM users (enabled and encouraged by the LLM-peddlers) are directly undermining the DRY principle. If we get what the big AI companies claim to want, which is a massive shift towards machine-authored code, DRY at the library scale will effectively be dead, with each new project simply re-implementing the functionality it needs instead of ever using a library. This might seem to have some upside, since dependency hell is a thing, but the downside in terms of comprehensibility and therefore maintainability, correctness, and security will be massive. The eventual lack of new high-quality DRY-respecting code to train the models on will only make this problem worse.
- At the module & function level, AI is probably prone to re-writing rather than re-using the functions it needs, especially with a workflow where a human prompts it for many independent completions. This part I don't have direct evidence for, since I don't use LLM coding models myself except in very specific circumstances because it's not generally ethical to do so. I do know that when it tries to call existing functions, it often guesses incorrectly about the parameters they need, which I'm sure is a headache and source of bugs for the vibe coders out there. An AI could be designed to take more context into account and use existing lookup tools to get accurate function signatures and use them when generating function calls, but even though that would probably significantly improve output quality, I suspect it's the kind of thing that would be seen as too baroque and thus not a priority. Would love to hear I'm wrong about any of this, but I suspect the consequences are that any medium-or-larger sized codebase written with LLM tools will have significant bloat from duplicate functionality, and will have places where better use of existing libraries would have made the code simpler. At a fundamental level, a principle like DRY is not something that current LLM training techniques are able to learn, and while they can imitate it from their training sets to some degree when asked for large amounts of code, when prompted for many smaller chunks, they're asymptotically likely to violate it.
I think this is an important critique in part because it cuts against the argument that "LLMs are the modern compilers; if you reject them you're just like the people who wanted to keep hand-writing assembly code, and you'll be just as obsolete." Compilers actually represented a great win for abstraction, encapsulation, and DRY in general, and they supported and are integral to open source development, whereas LLMs are set to do the opposite.
[1] To see what this looks like in action in prose, see the example on page 30 of the NYTimes copyright complaint against OpenAI. #AI #GenAI #LLMs #VibeCoding

@chris@mstdn.chrisalemany.ca
2025-07-09 04:52:46

Toronto Star: “New numbers reveal 10,000-plus Ontario college layoffs, 600 programs cancelled or suspended over past year”
The bloodbath in our higher education system in Canada has gotten NO press even though provincial governments and likely federal KNOW. This is the first in depth article I have seen. ALL governments micro-manage university and college policy/finances, no matter what they might say to the contrary.
Here are some truths you need to know:
Yes, I am biased as a 25 year employee of a University.
Yes, my University has also had completely unprecedented cuts in the past 12-24 months, with more coming.
Yes, it is because of the loss of International Students and their tuition revenue. Without that loss, many domestic enrollment numbers have actually been growing, but the money per student is orders of magnitude less. (ie. International was a cash cow)
Yes, faculty and even many admin, have been warning about the government downloading funding onto International tuitions for decades.
Yes, government will claim they are “investing more than ever”, but this is usually about Capital expenses (buildings, residences, infrastructure) or meeting contractual increases for staff salaries, *not* operating expenses.
Yes, in BC in the 1980s 80-90% of a University or College operating budget was covered by “base funding” from the province. Now, it is often below 50%. (If this makes you ask… is it still a “public University system”, please do!!)
And finally, yes, if we want to consider ourselves a modern country, we cannot possibly think this kind of contraction in educational opportunity (while domestic tuitions continue to increase!) is at all healthy for our society as a whole.
Toronto Star: #canpoli #cdnpoli #education #internationalEd #immigration #postsecondary #educationShouldBeFree

@tiotasram@kolektiva.social
2025-07-19 07:51:05

AI, AGI, and learning efficiency
My 4-month-old kid is not DDoSing Wikipedia right now, nor will they ever do so before learning to speak, read, or write. Their entire "training corpus" will not top even 100 million "tokens" before they can speak & understand language, and do so with real intentionality.
Just to emphasize that point: 100 words-per-minute times 60 minutes-per-hour times 12 hours-per-day times 365 days-per-year times 4 years is a mere 105,120,000 words. That's a ludicrously *high* estimate of words-per-minute and hours-per-day, and 4 years old (the age of my other kid) is well after basic speech capabilities are developed in many children, etc. More likely the available "training data" is at least 1 or 2 orders of magnitude less than this.
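The arithmetic above can be checked directly (these are the post's own deliberately generous assumptions, not measurements):

```python
# Back-of-envelope upper bound on a 4-year-old's language "training data".
words_per_minute = 100   # generous: sustained speech heard by the child
minutes_per_hour = 60
hours_per_day = 12       # generous: waking hours filled with speech
days_per_year = 365
years = 4

total_words = (words_per_minute * minutes_per_hour
               * hours_per_day * days_per_year * years)
print(f"{total_words:,}")  # 105,120,000 words, as stated
```

Even this inflated ceiling is several orders of magnitude below the billions of tokens used to train large language models.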
The point here is that large language models, trained as they are on multiple *billions* of tokens, are not developing their behavioral capabilities in a way that's remotely similar to humans, even if you believe those capabilities are similar (they are by certain very biased ways of measurement; they very much aren't by others). This idea that humans must be naturally good at acquiring language is an old one. #AI #LLM #AGI