Tootfinder

Opt-in global Mastodon full text search. Join the index!

@tiotasram@kolektiva.social
2025-06-21 02:34:13

Why AI can't possibly make you more productive; long
#AI and "productivity", some thoughts:
Edit: fixed some typos.
Productivity is a concept that isn't entirely meaningless outside the context of capitalism, but it's a concept that is heavily inflected in a capitalist context. In many uses today it effectively means "how much you can satisfy and/or exceed your boss' expectations." This is not really what it should mean: even in an anarchist utopia, people would care about things like how many shirts they can produce in a week, although in an "I'd like to voluntarily help more people" way rather than an "I need to meet this quota to earn my survival" way. But let's roll with this definition for a second, because it's almost certainly what your boss means when they say "productivity", and understanding that word in a different (even if truer) sense is therefore inherently dangerous.
Accepting "productivity" to mean "satisfying your boss' expectations," I will now claim: the use of generative AI cannot increase your productivity.
Before I dive in, it's imperative to note that the big generative models which most people think of as constituting "AI" today are evil. They are 1: pouring fuel on our burning planet, 2: psychologically strip-mining a class of data laborers who are exploited for their precarity, 3: enclosing, exploiting, and polluting the digital commons, and 4: stealing labor from broad classes of people, many of whom are otherwise glad to give that labor away for free provided they get a simple acknowledgement in return. Any of these four "ethical issues" should be enough *alone* to cause everyone to simply not use the technology. These ethical issues are the reason that I do not use generative AI right now, except in extremely extenuating circumstances. These issues are also convincing for a wide range of people I talk to, from experts to those with no computer science background. So before I launch into a critique of the effectiveness of generative AI, I want to emphasize that such a critique should be entirely unnecessary.
But back to my thesis: generative AI cannot increase your productivity, where "productivity" has been defined as "how much you can satisfy and/or exceed your boss' expectations."
Why? In fact, what the fuck? Every AI booster I've met has claimed the opposite. They've given me personal examples of time saved by using generative AI. Some of them even truly believe this. Sometimes I even believe they saved time without horribly compromising on quality (and often, your boss doesn't care about quality anyways if the lack of quality is hard to measure or doesn't seem likely to impact short-term sales/feedback/revenue). So if generative AI genuinely lets you write more emails in a shorter period of time, or close more tickets, or something else along these lines, how can I say it isn't increasing your ability to meet your boss' expectations?
The problem is simple: your boss' expectations are not a fixed target. Never have been. By virtue of being someone who oversees and pays wages to others under capitalism, your boss' game has always been: pay you less than the worth of your labor, so that they can accumulate profit and thus more capital to remain in charge instead of being forced into working for a wage themselves. Sure, there are layers of management caught in between who aren't fully in this mode, but they are irrelevant to this analysis. It matters not how much you please your manager if your CEO thinks your work is not worth the wages you are being paid. And using AI actively lowers the value of your work relative to your wages.
Why do I say that? It's actually true in several ways. The most obvious: using generative AI lowers the quality of your work, because the work it produces is shot through with errors, and when your job is reduced to proofreading slop, you are bound to tire a bit, relax your diligence, and let some mistakes through. More than you would if you were actually doing and taking pride in the work. Examples are innumerable and frequent, from journalists to lawyers to programmers, and we laugh at them "haha how stupid to not check whether the books the AI reviewed for you actually existed!" but on a deeper level if we're honest we know we'd eventually make the same mistake ourselves (bonus game: spot the swipe-typing typos I missed in this post; I'm sure there will be some).
But using generative AI also lowers the value of your work in another much more frightening way: in this era of hype, it demonstrates to your boss that you could be replaced by AI. The more you use it, and no matter how much you can see that your human skills are really necessary to correct its mistakes, the more it appears to your boss that they should hire the AI instead of you. Or perhaps retain 10% of the people in roles like yours to manage the AI doing the other 90% of the work. Paradoxically, the *more* you get done in terms of raw output using generative AI, the more it looks to your boss as if there's an opportunity to get enough work done with even fewer expensive humans. Of course, the decision to fire you and lean more heavily into AI isn't really a good one for long-term profits and success, but the modern boss did not get where they are by considering long-term profits. By using AI, you are merely demonstrating your redundancy, and the more you get done with it, the more redundant you seem.
In fact, there's even a third dimension to this: by using generative AI, you're also providing its purveyors with invaluable training data that allows them to make it better at replacing you. It's generally quite shitty right now, but the more use it gets by competent & clever people, the better it can become at the tasks those specific people use it for. Using the currently-popular algorithm family, there are limits to this; I'm not saying it will eventually transcend the mediocrity it's entwined with. But it can absolutely go from underwhelmingly mediocre to almost-reasonably mediocre with the right training data, and data from prompting sessions is both rarer and more useful than the base datasets it's built on.
For all of these reasons, using generative AI in your job is a mistake that will likely lead to your future unemployment. To reiterate, you should already not be using it because it is evil and causes specific and inexcusable harms, but in case like so many you just don't care about those harms, I've just explained to you why for entirely selfish reasons you should not use it.
If you're in a position where your boss is forcing you to use it, my condolences. I suggest leaning into its failures instead of trying to get the most out of it, and as much as possible, showing your boss very clearly how it wastes your time and makes things slower. Also, point out the dangers of legal liability for its mistakes, and make sure your boss is aware of the degree to which any of your AI-eager coworkers are producing low-quality work that harms organizational goals.
Also, if you've read this far and aren't yet of an anarchist mindset, I encourage you to think about the implications of firing 75% of (at least the white-collar) workforce in order to make more profit while fueling the climate crisis and in most cases also propping up dictatorial figureheads in government. When *either* the AI bubble bursts *or* if the techbros get to live out the beginnings of their worker-replacement fantasies, there are going to be an unimaginable number of economically desperate people living in increasingly expensive times. I'm the kind of optimist who thinks that the resulting social crucible, though perhaps through terrible violence, will lead to deep social changes that effectively unseat from power the ultra-rich that continue to drag us all down this destructive path, and I think it's worth some thinking now about what you might want the succeeding stable social configuration to look like so you can advocate towards that during points of malleability.
As others have said more eloquently, generative AI *should* be a technology that makes human lives on average easier, and it would be were it developed & controlled by humanists. The only reason it's not is that it's developed and controlled by terrible greedy people who use their unfairly hoarded wealth to immiserate the rest of us in order to maintain their dominance. In the long run, for our very survival, we need to depose them, and I look forward to what the term "generative AI" will mean after that finally happens.

@blakes7bot@mas.torpidity.net
2025-09-20 12:23:27

Series D, Episode 12 - Warlord
ZUKAN: [Shakes head] As long as the installation is right.
AVON: Well, the only problem now is: where do we find enough raw material to keep this lot running? [He is moving about, looking at various circuit boards and so forth.]
blake.torpidity.net/m/412/103 B7B3

Claude 3.7 describes the image as: "This image shows a scene from a science fiction television production from the late 1970s or early 1980s, based on the distinctive costume design and set aesthetics. The scene takes place in what appears to be a futuristic control room or spacecraft interior.

Two figures are prominently featured wearing elaborate costumes - one on the left in a metallic, gold-toned outfit with a distinctive headpiece, and one on the right in a black uniform with shiny detail…

@pbloem@sigmoid.social
2025-07-18 09:25:22

Now out in #TMLR:
🍇 GRAPES: Learning to Sample Graphs for Scalable Graph Neural Networks 🍇
There's lots of work on sampling subgraphs for GNNs, but relatively little on making this sampling process _adaptive_. That is, learning to select the data from the graph that is relevant for your task.
We introduce an RL-based and a GFlowNet-based sampler and show that the approach perf…

A diagram of the GRAPES pipeline. It shows a subgraph being sampled in two steps and being fed to a GNN, with a blue line showing the learning signal. The caption reads Figure 1: Overview of GRAPES. First, GRAPES processes a target node (green) by computing node inclusion probabilities on its 1-hop neighbors (shown by node color shade) with a sampling GNN. Given these probabilities, GRAPES samples k nodes. Then, GRAPES repeats this process over nodes in the 2-hop neighborhood. We pass the sampl…
A results table for node classification on heterophilous graphs. Table 2: F1-scores (%) for different sampling methods trained on heterophilous graphs for a batch size of 256, and a sample size of 256 per layer. We report the mean and standard deviation over 10 runs. The best values among the sampling baselines (all except GAS) are in bold, and the second best are underlined. MC stands for multi-class and ML stands for multi-label classification. OOM indicates out of memory.
Performance of samples vs sampling size showing that GRAPES generally performs well across sample sizes, while other samplers often show more variance across sample sizes. The caption reads Figure 4: Comparative analysis of classification accuracy across different sampling sizes for sampling baselines and GRAPES. We repeated each experiment five times: the shaded regions show the 95% confidence intervals.
A diagrammatic illustration of a graph classification task used in one of the theorems. The caption reads Figure 9: An example of a graph for Theorem 1 with eight nodes. Red edges belong to E1, features xi and labels yi are shown beside every node. For nodes v1 and v2 we show the edge e12 as an example. As shown, the label of each node is the second feature of its neighbor, where a red edge connects them. The edge homophily ratio is h=12/28 = 0.43.
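Roughly, in code: a minimal sketch (mine, not the authors' implementation) of the two-step adaptive sampling loop that Figure 1 describes, with a placeholder score_fn standing in for the trained sampling GNN and plain weighted sampling standing in for the paper's exact procedure:

import random

def sample_subgraph(adj, target, k, score_fn, hops=2):
    # adj: dict mapping each node to a list of its neighbors
    frontier, sampled = {target}, {target}
    for _ in range(hops):
        candidates = list({n for u in frontier for n in adj[u]} - sampled)
        if not candidates:
            break
        # the sampling GNN assigns an inclusion probability to each candidate
        weights = dict(zip(candidates, score_fn(candidates, sampled)))
        chosen = set()
        for _ in range(min(k, len(candidates))):
            # weighted sampling without replacement
            pick = random.choices(list(weights), list(weights.values()))[0]
            chosen.add(pick)
            del weights[pick]
        sampled |= chosen
        frontier = chosen
    return sampled

A real implementation would train score_fn with the RL or GFlowNet objective so the learning signal flows back into the sampler; this sketch omits that.
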
@hex@kolektiva.social
2025-07-21 01:50:28

Epstein shit and adjacent, Rural America, Poverty, Abuse
Everyone who's not a pedophile thinks pedophiles are bad, but there's this special obsessed hatred you'll find among poor rural Americans. The whole QAnon/Epstein obsession may not really make sense to folks raised in cities. Like, why do these people think *so much* about pedophiles? Why do they think that everyone in power is a pedophile? Why would the Pizzagate thing make sense to anyone? What is this unhinged shit? A lot of folks (who aren't anarchists) might be inclined to ask "why can't these people just let the cops take care of it?"
I was watching Legal Eagle's rundown of the Trump Epstein thing earlier today and I woke up thinking about something I don't know if I've ever talked about. Now that I'm not in the US, I'm not at any risk of talking about it. I don't know how much I would have been before, but that's not something I'm gonna dig into right now. So let me tell you a story that might explain a few things.
I'm like 16, maybe 17. I have my license, so this girl I was dating/not dating/just friends with/whatever would regularly convince me to drive her and her friends around. I think she's like 15 at the time. Her friends are younger than her.
She tells me that there's a party we can go to where they have beer. She was told to invite her friends, so I can come too. We're going to pick her friends up (we regularly fill the VW Golf well beyond the legal limit and drive places) and head to the party.
So I take these girls, at least one of whom is 13 years old, down to this party. I'm already a bit sketched out bringing a 13 year old to a party. We drive out for a while. It's in the country. We drive down a long dark road. There are some barrel fires and a shack. This is all a bit strange, but not too abnormal for this area. We're a little ways outside of a place called Mill City (in Oregon).
We park and walk towards the shack. This dude who looks like a rat comes up and offers us beer. He laughs and talks to the girl who invited me, "What's he doing here? You're supposed to bring your girl friends." She's like, "He's our ride." I don't remember if he offered me a beer or not.
We go over to this shed and everyone starts smoking, except me because I didn't smoke until I turned 18. The other girls start talking about the rat face dude, who's wandered over by the fire with some other guys. They're mainly teasing one of the 13 year old girls about having sex with him a bunch of times. They say he's like, 32 or something. The other girls joke about him only having sex with 13 year olds because he's too ugly to have sex with anyone closer to his own age.
Somewhere along the line it comes out that he's a cop. I never forgot that; it's absolutely seared into my memory. I can still picture his face perfectly, decades later, and them talking about how he's a deputy, he was in his 30's, and he was having sex with a 13 year old girl. I was the only boy there, but there were a few older men. This was a chunk of the good ol' boys club of the town. I think there were a couple of cops besides the one deputy, and a judge or the mayor or some kind of big local VIP.
I kept trying to get my friend to leave, but she wanted to stay. Turns out underage drinking with cops seems like a great deal if you're a kid because you know you won't get busted. I left alone, creeped the fuck out.
I was told later that I wasn't invited and that I couldn't talk about it. I've always been good at compartmentalization, so I never did.
Decades later it occurred to me what was actually happening. I'm pretty sure that cop was giving meth he'd seized as evidence to these kids. This wasn't some one-off thing. It was regular. Who knows how many decades it went on after I left, or how many decades it had been going on before I found out. I knew this type of thing had happened at least a few times before because that's how that 13 year old girl and that 32 year old cop had hooked up in the first place.
Hearing about Epstein's MO, targeting these teenage girls from fucked up backgrounds, it's right there for me. I wouldn't be surprised if they were involved in sex trafficking of minors or some shit like that... but who would you call if you found out? Half the sheriff's department was there and the other half would cover for them.
You live in the city and shit like that doesn't happen, or at least you don't think it happens. But rural poor folks have this intuition about power and abuse. It's right there and you know it.
Trump is such a familiar character for me, because he's exactly that small town mayor or sheriff. He'll talk about being tough on crime and hunting down pedophiles, while hanging out at a party that exists so people can fuck 8th graders.
The problem with the whole thing is that rural folks will never break the cognitive dissonance between "kill the pedos" and "back the blue." They'll never go kill those cops. No, the pedos must be somewhere else. It must be the elites. It must be outsiders. It can't be the cops and good ol' boys everyone respects. It can't be the mayor who rigs the election to win every time. It can't be the "good upstanding" sheriff. Nah, it's the Clintons.
To be fair, it's probably also the Clintons, a bunch of other politicians, billionaires, etc. Epstein was exactly who everyone thought he was, and he didn't get away with it for so long without a whole lot of really powerful help.
There are still powerful people who got away with involvement with #Epstein. #Trump is one of them, but I don't really believe that he's the only one.
#USPol #ACAB

@aral@mastodon.ar.al
2025-09-18 17:50:06

“Famine is knocking forcefully at the doors of Southern Gaza… we are now standing in front of one of the soup kitchens operating in Khanyounis where families have been lining up since the early hours of the morning… many of them end up returning without receiving a meal sufficient to sustain them.”
@…, reporting from Gaza

@inthehands@hachyderm.io
2025-07-13 17:28:01

❝I have noticed that we people privileged by supremacy have a tendency to take this same stance toward newly aware people, a stance which is not ours to assume. We seem to feel that it is our business to meet people who are in the same place that we were just a few short years or decades ago, and meet their shock and surprise and anger and dismay with a skepticism and an impatience we haven't earned.
We say things like "are you surprised?"
We say things like "why does this shock you?"
We say "oh so you're only angry now?"
We say things like "where have you been?"

Instead of asking “are you surprised?” say “I was surprised once, too; here's what I know.” Instead of “what took you so long?” say “I just got here recently; here's what I've learned.”❞ mastodon.social/@JuliusGoat/11

@blakes7bot@mas.torpidity.net
2025-08-20 15:20:33

Series A, Episode 01 - The Way Back
RICHIE: No, not me. The man we're going to meet. He especially asked us to contact you so he could tell you in person. He was on Ziegler Five a few months ago.
BLAKE: Where is he now?
RAVELLA: Waiting for us. Outside.
blake.torpidity.net/m/101/18

Claude Sonnet 4.0 describes the image as: "I can see this appears to be from a science fiction television production, showing three people in what looks like a dramatic scene against a dark background. The individuals are wearing earth-toned clothing that appears to be costume design typical of 1970s-80s sci-fi shows. The lighting and staging suggest this is an indoor scene, possibly on a spaceship or similar futuristic setting. The composition shows the three figures in close conversation or c…
@mxp@mastodon.acm.org
2025-08-17 20:51:59

Interesting observation by Langdon Winner regarding technological transformation: "by the time the issue of 'use' comes up for consideration at all, many of the most interesting questions involved in how technologies are constituted and how they affect what we do are settled or submerged."
This is happening right now with #GenAI.

Excerpt from Langdon Winner (1977): Autonomous Technology, p. 224:

It is important to notice that the problem we are considering here has nothing to do with the traditional notion of "use" and "misuse." Technological transformation occurs prior to any "use," good or ill, and takes place as a consequence of the construction and operating design of technological systems. The phenomenon is found where an instrument is taking shape as an instrument but before the time when the instrument is employ…

@arXiv_csSE_bot@mastoxiv.page
2025-09-15 09:08:41

Generating Energy-Efficient Code via Large-Language Models -- Where are we now?
Radu Apsan, Vincenzo Stoico, Michel Albonico, Rudra Dhar, Karthik Vaidhyanathan, Ivano Malavolta
arxiv.org/abs/2509.10099

@chris@mstdn.chrisalemany.ca
2025-07-10 19:00:28

Isn't it weird how when a Federal Crown Corporation - Marine Atlantic - buys a new ship... there is nary a mention that it was built in China (#BCPoli #BCFerries

@arXiv_csLO_bot@mastoxiv.page
2025-09-18 08:17:11

Metric Equational Theories
Radu Mardare (Heriot-Watt University, Edinburgh, Scotland), Neil Ghani (University of Strathclyde, Glasgow, Scotland), Eigil Rischel (University of Strathclyde, Glasgow, Scotland)
arxiv.org/abs/2509.14094

@grifferz@social.bitfolk.com
2025-08-15 23:30:46

I've started mostly hibernating my laptop now (save state to disk and power off completely) instead of just suspending it to memory.
First there were the days when, lol, Linux hibernate didn't even work
Then it was laptop HDDs which are painfully slow
Then we had SSDs, but they were really small. Who could spare enough space for a RAM image?
Now I have 64G of RAM and an NVMe where the smallest part I could buy was twice as big as I need, and twice that is only 43% mo…

@NFL@darktundra.xyz
2025-08-05 20:21:37

Patrick Mahomes still bothered by Chiefs Super Bowl loss: 'Where are we going to go now?'

cbssports.com/nfl/news/patrick

@tiotasram@kolektiva.social
2025-09-14 12:01:38

TL;DR: what if instead of denying the harms of fascism, we denied its suppressive threats of punishment
Many of us have really sharpened our denial skills since the advent of the ongoing pandemic (perhaps you even hesitated at the word "ongoing" there and thought "maybe I won't read this one, it seems like it'll be tiresome"). I don't say this as a preface to a fiery condemnation or a plea to "sanity" or a bunch of evidence of how bad things are, because I too have honed my denial skills in these recent years, and I feel like talking about that development.
Denial comes in many forms, including strategic information avoidance ("I don't have time to look that up right now", "I keep forgetting to look into that", "well this author made a tiny mistake, so I'll click away and read something else", "I'm so tired of hearing about this, let me scroll farther", etc.), strategic dismissal ("look, there's a bit of uncertainty here, I should ignore this", "this doesn't line up perfectly with my anecdotal experience, it must be completely wrong", etc.), and strategic forgetting ("I don't remember what that one study said exactly; it was painful to think about", "I forgot exactly what my friend was saying when we got into that argument", etc.). It's in fact a kind of skill that you can get better at, along with the complementary skill of compartmentalization. It can of course be incredibly harmful, and a huge genre of fables exists precisely to highlight its harms, but it also has some short-term psychological benefits, chiefly in the form of muting anxiety. This is not an endorsement of denial (the harms can be catastrophic), but I want to acknowledge that there *are* short-term benefits. Via compartmentalization, it's even possible to be honest with ourselves about some of our own denials without giving them up immediately.
But as I said earlier, I'm not here to talk you out of your denials. Instead, given that we are so good at denial now, I'm here to ask you to be strategic about it. In particular, we live in a world awash with propaganda/advertising that serves both political and commercial ends. Why not use some of our denial skills to counteract that?
For example, I know quite a few people in complete denial of our current political situation, but those who aren't (including myself) often express consternation about just how many people in the country are supporting literal fascism. Of course, logically that appearance of widespread support is going to be partly a lie, given how much our public media is beholden to the fascists or outright on their side. Finding better facts on the true level of support is hard, but in the meantime, why not be in denial about the "fact" that Trump has widespread popular support?
To give another example: advertisers constantly barrage us with messages about our bodies and weight, trying to keep us insecure (and thus in the mood to spend money to "fix" the problem). For sure cutting through that bullshit by reading about body positivity etc. is a better solution, but in the meantime, why not be in denial about there being anything wrong with your body?
This kind of intentional denial certainly has its own risks (our bodies do actually need regular maintenance, for example, so complete denial on that front is risky) but there's definitely a whole lot of misinformation out there that it would be better to ignore. To the extent such denial expands to a more general denial of underlying problems, this idea of intentional denial is probably just bad. But I sure wish that in a world where people (including myself) routinely deny significant widespread dangers like COVID-19's long-term risks or the ongoing harms of escalating fascism, they'd at least also deny some of the propaganda keeping them unhappy and passive. Instead of being in denial about US-run concentration camps, why not be in denial that the state will be able to punish you for resisting them?

@hex@kolektiva.social
2025-09-15 10:32:50

People keep trying to point to an event where the "right/left" political violence thing got out of hand. You cannot point to anywhere in US history where the right hasn't been murdering leftists. It has never happened.
They've been talking about civil war since they lost the last one, and most of US politics before that was just trying to prevent the first one.
There isn't a wave of right/left violence. Right wing violence has just gone unchecked for so long, and been so accepted, that now they're killing each other regularly. The Trump assassination attempts were all from the right. #CharlieKirk was killed by another fascist for not being fascist enough.
Fascists have so completely taken over that they see each other as legitimate targets because they've run out of "leftists" worth murdering. That's the story. That's what people can't wrap their heads around.
Everyone is worried about the right wing response, worries about right wing escalation, but they called for civil war over the Cracker Barrel logo. They're already maxing out their base. All the Proud Boys and other Nazis are already hired by ICE. They're also already going as hard as they can. They don't need any excuses. They have total control of everything. This bumbling mess is *the best they can do.* They call for civil war every few days.
We're not seeing a war between the left and the right. We're seeing a war between the right and the far right, where both sides opportunistically punch left when they can and liberals help them justify their actions.
#USPol

We are faced with psychopaths at the helm of our nation's public health, scientific and healthcare efforts.
They want to inflict pain and suffering.
It's a goal, not a by-product
skywriter.blue/pages/did:plc:b

@mgorny@social.treehouse.systems
2025-08-09 16:08:45

Now, one thing I truly hate about #GrapheneOS updates is the Microsoft-like approach where you can't "just" disable automatic updates.
Yeah, I get it. The updates are important. The updates have been rock solid so far. And anyway, I need to reboot manually for them to actually start applying.
Still, it's so damn disrespectful for developers to make this decision for me, and have the phone start automatically updating just as I'm about to leave for the whole day, and turn my phone into a potential time bomb where a reboot could leave me without a working phone until I can get home and reflash it.
Yes, there's a bunch of options to disable updates based on Internet connection type, battery state and whether the phone is charging. Still, why should I need to explain myself to my phone?! Really, this isn't what we dumped Microsoft for.

@midtsveen@social.linux.pizza
2025-07-08 15:18:56

Just to clarify, are you envisioning something like PayPal, where creators can receive direct payments from their supporters?
Or is it more like a system where users can pay to boost posts so they appear more prominently in others’ feeds?
@…

@arXiv_csOS_bot@mastoxiv.page
2025-08-13 07:43:32

Towards Efficient and Practical GPU Multitasking in the Era of LLM
Jiarong Xing, Yifan Qiao, Simon Mo, Xingqi Cui, Gur-Eyal Sela, Yang Zhou, Joseph Gonzalez, Ion Stoica
arxiv.org/abs/2508.08448

@tiotasram@kolektiva.social
2025-08-04 15:49:00

Should we teach vibe coding? Here's why not.
Should AI coding be taught in undergrad CS education?
1/2
I teach undergraduate computer science labs, including for intro and more-advanced core courses. I don't publish (non-negligible) scholarly work in the area, but I've got years of craft expertise in course design, and I do follow the academic literature to some degree. In other words, I'm not the world's leading expert, but I have spent a lot of time thinking about course design, and consider myself competent at it, with plenty of direct experience in what knowledge & skills I can expect from students as they move through the curriculum.
I'm also strongly against most uses of what's called "AI" these days (specifically, generative deep neural networks as supplied by our current cadre of techbros). There are a surprising number of completely orthogonal reasons to oppose the use of these systems, and a very limited number of reasonable exceptions (overcoming accessibility barriers is an example). On the grounds of environmental and digital-commons-pollution costs alone, using specifically the largest/newest models is unethical in most cases.
But as any good teacher should, I constantly question these evaluations, because I worry about the impact on my students should I eschew teaching relevant tech for bad reasons (and even for good reasons). I also want to make my reasoning clear to students, who should absolutely question me on this. That inspired me to ask a simple question: ignoring for one moment the ethical objections (which we shouldn't, of course; they're very stark), at what level in the CS major could I expect to teach a course about programming with AI assistance, and expect students to succeed at a more technically demanding final project than a course at the same level where students were banned from using AI? In other words, at what level would I expect students to actually benefit from AI coding "assistance?"
To be clear, I'm assuming that students aren't using AI in other aspects of coursework: the topic of using AI to "help you study" is a separate one (TL;DR its gross value is not negative, but it's mostly not worth the harm to your metacognitive abilities, which AI-induced changes to the digital commons are making more important than ever).
So what's my answer to this question?
If I'm being incredibly optimistic, senior year. Slightly less optimistic, second year of a masters program. Realistic? Maybe never.
The interesting bit for you-the-reader is: why is this my answer? (Especially given that students would probably self-report significant gains at lower levels.) To start with, this paper, in which experienced developers thought that AI assistance sped up their work on real tasks when in fact it slowed it down (arxiv.org/abs/2507.09089), is informative. There are a lot of differences in task between experienced devs solving real bugs and students working on a class project, but it's important to understand that we shouldn't have a baseline expectation that AI coding "assistants" will speed things up in the best of circumstances, and we shouldn't trust self-reports of productivity (or the AI hype machine in general).
Now we might imagine that coding assistants will be better at helping with a student project than at helping with fixing bugs in open-source software, since it's a much easier task. For many programming assignments that have a fixed answer, we know that many AI assistants can just spit out a solution based on prompting them with the problem description (there's another elephant in the room here to do with learning outcomes regardless of project success, but we'll ignore this one too; my focus here is on project complexity reach, not learning outcomes). My question is about more open-ended projects, not assignments with an expected answer. Here's a second study (by one of my colleagues) about novices using AI assistance for programming tasks. It showcases how difficult it is to use AI tools well, and some of the stumbling blocks that novices in particular face.
But what about intermediate students? Might there be some level where the AI is helpful because the task is still relatively simple and the students are good enough to handle it? The problem with this is that as task complexity increases, so does the likelihood of the AI generating (or copying) code that uses more complex constructs which a student doesn't understand. Let's say I have second-year students writing interactive websites with JavaScript. Without a lot of care in prompting (care that those students don't yet know how to exercise), the AI is likely to suggest code that depends on several different frameworks, from React to jQuery, without actually setting up or including those frameworks, and of course these students would be way out of their depth trying to do that. This is a general problem: each programming class carefully limits the specific code frameworks and constructs it expects students to know based on the material it covers. There is no feasible way to limit an AI assistant to a fixed set of constructs or frameworks, using current designs. There are alternate designs where this would be possible (like AI search through adaptation from a controlled library of snippets) but those would be entirely different tools.
So what happens on a sizeable class project where the AI has dropped in buggy code, especially if it uses code constructs the students don't understand? Best case, they understand that they don't understand and re-prompt, or quickly ask for help from an instructor or TA who helps them get rid of the stuff they don't understand and re-prompt or manually add stuff they do. Average case: they waste several hours and/or sweep the bugs partly under the rug, resulting in a project with significant defects. Students in their second and even third years of a CS major still have a lot to learn about debugging, and usually have significant gaps in their knowledge of even their most comfortable programming language. I do think regardless of AI we as teachers need to get better at teaching debugging skills, but the knowledge gaps are inevitable because there's just too much to know. In Python, for example, the LLM is going to spit out yields, async functions, try/finally, maybe even something like a while/else, or with recent training data, the walrus operator. I can't expect even a fraction of 3rd year students who have worked with Python since their first year to know about all these things, and based on how students approach projects where they have studied all the relevant constructs but have forgotten some, I'm not optimistic that these things will magically become learning opportunities. Student projects are better off working with a limited subset of full programming languages that the students have actually learned, and using AI coding assistants as currently designed makes this impossible. Beyond that, even when the "assistant" just introduces bugs using syntax the students understand, even through their 4th year many students struggle to understand the operation of moderately complex code they've written themselves, let alone written by someone else. Having access to an AI that will confidently offer incorrect explanations for bugs will make this worse.
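To make that concrete, here's a tiny illustrative function of my own (not from any student's project) that packs in several of the constructs just listed; it's valid Python, and exactly the kind of thing an LLM will happily hand a second-year student:

def read_words(path):
    # Yield whitespace-separated words from a file, lazily (a generator).
    f = open(path)
    try:
        while (line := f.readline()):  # walrus: bind and test in one expression
            yield from line.split()    # generator delegation
        else:
            pass  # while/else: runs only if the loop wasn't broken out of
    finally:
        f.close()  # try/finally: runs even if the consumer stops iterating early

Each line is defensible on its own, but a student who has only seen basic loops and lists now has four unfamiliar constructs to untangle before they can even start debugging.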
To be sure, a small minority of students will be able to overcome these problems, but that minority is the group that has a good grasp of the fundamentals and has broadened their knowledge through self-study, which earlier AI-reliant classes would make less likely to happen. In any case, I care about the average student, since we already have plenty of stuff about our institutions that makes life easier for a favored few while being worse for the average student (note that our construction of that favored few as the "good" students is a large part of this problem).
To summarize: because AI assistants introduce excess code complexity and difficult-to-debug bugs, they'll slow down rather than speed up project progress for the average student on moderately complex projects. On a fixed deadline, they'll result in worse projects, or necessitate less ambitious project scoping to ensure adequate completion, and I expect this remains broadly true through 4-6 years of study in most programs (don't take this as an endorsement of AI "assistants" for masters students; we've ignored a lot of other problems along the way).
There's a related problem: solving open-ended project assignments well ultimately depends on deeply understanding the problem, and AI "assistants" allow students to put a lot of code in their file without spending much time thinking about the problem or building an understanding of it. This is awful for learning outcomes, but also bad for project success. Getting students to see the value of thinking deeply about a problem is a thorny pedagogical puzzle at the best of times, and allowing the use of AI "assistants" makes the problem much much worse. This is another area I hope to see (or even drive) pedagogical improvement in, for what it's worth.
1/2

@gedankenstuecke@scholar.social
2025-06-26 17:02:45

«I think it says that we are in a scary world where it is hard to tell if this is true or not. Like 10 years ago this wouldn’t even be a possibility but now it is very plausible. I think it shows a growing crack down on free speech and our rights. Bigger picture to me is that we are going to be unjustly held accountable for things that are much within our right to do/possess.»
'My Bad:' Babyface Vance Meme Creator On Norwegian Tourist's Detainment
404media.co/vance-babyface-mem

@arXiv_hepph_bot@mastoxiv.page
2025-07-11 08:23:31

Quantum simulation of scattering amplitudes and interferences in perturbative QCD
Herschel A. Chawdhry, Mathieu Pellen, Simon Williams
arxiv.org/abs/2507.07194

@thomastraynor@social.linux.pizza
2025-07-11 15:14:38

Yah, that is going to go over so well. About half my career is now supporting old stuff where the 'experts' claimed that it will replace programmers. Most of what I work on, to be charitable, is unmaintainable and inefficient code unless we use the developer package from the vendor. It has the potential to be a good tool that generates parts of the boring code, but needs someone who knows what the hell they are doing to make it secure and efficient! It also needs great (or at least …

@hynek@mastodon.social
2025-08-04 09:42:57

PSA: attrs 25.4 will change how it handles class-level kw_only to conform with dataclass transform rules (IOW, act like type-checkers such as Mypy expect).
We found 0 cases in the wild where this would break anything so it’s on for NG APIs. Now is the time to stop us. 🤓
github.com/python-attrs/attrs/
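For the uninitiated, "class-level kw_only" means the flag on the class decorator itself; a minimal sketch (illustrating the flag, not the 25.4 behavior change):

import attrs

@attrs.define(kw_only=True)  # class-level kw_only: all fields keyword-only
class Point:
    x: int
    y: int = 0

Point(x=1, y=2)  # OK
# Point(1, 2)    # TypeError: x and y must be passed by keyword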

@andycarolan@social.lol
2025-09-03 08:57:12

It's almost like summer is over or something. No complaints... just needs to cool the heck down now a bit. We are in that weird space where it's wet, windy and still warm.
#UKWeather #Weather #Autumn 🎃

@tezoatlipoca@mas.to
2025-09-03 18:48:19

Why does all #software insist on being your buddy these days?
Can we go back to the old days where instead of trying to slide into your DMs and be your best friend, software was a recalcitrant old hag in a back alley who spat out an obscure error code that at least, when you looked it up in a knowledge base, told you exactly what you had done wrong and what, if anything, you could do about it? This…

A screen snip of a Windows 10 system notification:
Microsoft Teams (web) congratulates me for having the fortitude, temerity and stick-to-it-ness to ... click a button. 
"Nice job! Notifications are now on."

We are living in an age of bullies. Those with power are less constrained today than they have been in my lifetime, since the end of the second world war.
The question is: how do we lead moral lives in this era?
Vladimir Putin launches a horrendous war on Ukraine. After Hamas’s atrocity, Benjamin Netanyahu bombs Gaza to smithereens and is now starving to death its remaining occupants.
Trump abducts thousands of hardworking people within the US and puts them into detention c…

@tiotasram@kolektiva.social
2025-09-13 23:43:29

TL;DR: what if nationalism, not anarchy, is futile?
Since I had the pleasure of seeing the "what would anarchists do against a warlord?" argument again in my timeline, I'll present again my extremely simple proposed solution:
Convince the followers of the warlord that they're better off joining you in freedom, then kill or exile the warlord once they're alone or vastly outnumbered.
Remember that even in our own historical moment where nothing close to large-scale free society has existed in living memory, the warlord's promise of "help me oppress others and you'll be richly rewarded" is a lie that many understand is historically a bad bet. Many, many people currently take that bet, for a variety of reasons, and they're enough to coerce through fear an even larger number of others. But although we imagine, just as the medieval peasants might have imagined of monarchy, that such a structure is both the natural order of things and much too strong to possibly fail, in reality it takes an enormous amount of energy, coordination, and luck for these structures to persist! Nations crumble every day, and none has survived more than a couple *hundred* years, compared to pre-nation societies which persisted for *tens of thousands of years* if not more. In this bubbling froth of hierarchies, the notion that hierarchy is inevitable is certainly popular, but since there's clearly a bit of an ulterior motive to make (and teach) that claim, I'm not sure we should trust it.
So what I believe could form the preconditions for future anarchist societies to avoid the "warlord problem" is merely: a widespread common sense belief that letting anyone else have authority over you is morally suspect. Given such a belief, a warlord will have a hard time building any following at all, and their opponents will have an easy time getting their supporters to defect. In fact, we're already partway there, relative to the situation a couple hundred years ago. At that time, someone could claim "you need to obey my orders and fight and die for me because the Queen was my mother" and that was actually a quite successful strategy. Nowadays, this strategy is only still working in a few isolated places, and the idea that one could *start a new monarchy* or even resurrect a defunct one seems absurd. So why can't that same transformation from "this is just how the world works" to "haha, how did anyone ever believe *that*?" also happen to nationalism in general? I don't see an obvious reason why not.
Now I think one popular counterargument to this is: if you think non-state societies can win out with these tactics, why didn't they work for American tribes in the face of the European colonizers? (Or insert your favorite example of colonialism here.) I think I can imagine a variety of reasons, from the fact that many of those societies didn't try this tactic (and/or were hierarchical themselves), to the impacts of disease weakening those societies pre-contact, to the fact that with much-greater communication and education possibilities it might work better now, to the fact that most of those tribes are *still* around, and a future in which they persist longer than the colonist ideologies actually seems likely to me, despite the fact that so much cultural destruction has taken place. In fact, if the modern day descendants of the colonized tribes sow the seeds of a future society free of colonialism, that's the ultimate demonstration of the futility of hierarchical domination (I just read "Theory of Water" by Leanne Betasamosake Simpson).
I guess the TL;DR on this is: what if nationalism is actually as futile as monarchy, and we're just unfortunately living in the brief period during which it is ascendant?

@arXiv_csDC_bot@mastoxiv.page
2025-08-08 09:01:22

Theseus: A Distributed and Scalable GPU-Accelerated Query Processing Platform Optimized for Efficient Data Movement
Felipe Aramburú, William Malpica, Kaouther Abrougui, Amin Aramoon, Romulo Auccapuclla, Claude Brisson, Matthijs Brobbel, Colby Farrell, Pradeep Garigipati, Joost Hoozemans, Supun Kamburugamuve, Akhil Nair, Alexander Ocsa, Johan Peltenburg, Rubén Quesada López, Deepak Sihag, Ahmet Uyar, Dhruv Vats, Michael Wendt, Jignesh M. Patel, Rodrigo Aramburú

@arXiv_hepth_bot@mastoxiv.page
2025-08-07 09:41:44

Perturbations of Black Holes in Einstein-Maxwell-Dilaton-Axion (EMDA) Theories
C. N. Pope, D. O. Rohrer, B. F. Whiting
arxiv.org/abs/2508.04589

@BBC6MusicBot@mastodonapp.uk
2025-09-08 23:52:39

🇺🇦 #NowPlaying on #BBC6Music's #6MusicArtistCollection
David Bowie:
🎵 Where Are We Now?
#DavidBowie
andradealzaduo.bandcamp.com/tr
open.spotify.com/track/47IRJty

@tiotasram@kolektiva.social
2025-08-11 13:30:26

Speculative politics
As an anarchist (okay, maybe not in practice), I'm tired of hearing why we have to suffer X and Y indignity to "preserve the rule of law" or "maintain Democratic norms." So here's an example of what representative democracy (a form of government that I believe is inherently flawed) could look like if its proponents had even an ounce of imagination, and/or weren't actively trying to rig it to favor a rich donor class:
1. Unicameral legislature, where representatives pass laws directly. Each state elects 3 statewide representatives: the three most-popular candidates in a statewide race where each person votes for one candidate (ranked preference voting would be even better but might not be necessary, and is not a solution by itself). Instead of each representative getting one vote in the chamber, they get N votes, where N is the number of people who voted for them. This means that in a close race, instead of the winner getting all the power, the power is split. Having 3 representatives trades off between legislature size and ensuring that two parties can't dominate together.
2. Any individual citizen can contact their local election office to switch or withdraw their vote at any time (maybe with a 3-day delay or something). Voting power of representatives can thus shift even without an election. They are limited to choosing one of the three elected representatives, or "none of the above." If the "none of the above" fraction exceeds 20% of eligible voters, a new election is triggered for that state. If turnout is less than 80%, a second election happens immediately, with results being final even at lower turnout until 6 months later (some better mechanism for turnout management might be needed). (A toy sketch of this mechanism follows the list.)
3. All elections allow mail-in ballots, and in-person voting happens Sunday-Tuesday with the Monday being a mandatory holiday. (Yes, election integrity is not better in this system and that's a big weakness.)
4. Separate nationwide elections elect three positions for head-of-state: one with diplomatic/administrative powers, another with military powers, and a third with veto power. For each position, the top three candidates serve together, with only the first-place winner having actual power until vote switches or withdrawals change who that is. Once one of these heads loses their first-place status, they cannot get it again until another election, even if voters switch preferences back (to avoid dithering). An election for one of these positions is triggered when 20% have withdrawn their votes, or if all three people initially elected have been disqualified by losing their lead in the vote count.
5. Laws that involve spending money are packaged with specific taxes to pay for them, and may only be paid for by those specific revenues. Each tax may be opted into or out of by each taxpayer; where possible opting out of the tax also opts you out of the service. (I'm well aware of a lot of the drawbacks of this, but also feel like they'd not necessarily be worse than the drawbacks of our current system.) A small mandatory tax would cover election expenses.
6. I'm running out of attention, but similar multi-winner elections could elect panels of judges from which a subset is chosen randomly to preside in each case.
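To make the mechanics of points 1 and 2 concrete, here's the toy sketch promised above (assumptions mine; e.g., it lumps anyone without a current registered choice in with explicit "none of the above"):

from collections import Counter

class StateDelegation:
    # Toy model: top-3 candidates become reps, each rep's chamber vote
    # counts once per current supporter, and voters may switch or withdraw.
    def __init__(self, ballots, eligible_count):
        # ballots: dict mapping voter_id -> candidate name
        tally = Counter(ballots.values())
        self.reps = [c for c, _ in tally.most_common(3)]
        self.eligible = eligible_count
        self.votes = {v: c for v, c in ballots.items() if c in self.reps}

    def switch(self, voter, choice):
        # choice: one of the three reps, or None ("none of the above")
        if choice is None:
            self.votes.pop(voter, None)
        elif choice in self.reps:
            self.votes[voter] = choice

    def chamber_weight(self, rep):
        return sum(1 for c in self.votes.values() if c == rep)

    def new_election_triggered(self):
        # point 2: "none of the above" exceeding 20% of eligible voters
        return (self.eligible - len(self.votes)) > 0.20 * self.eligible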
Now I'll point out once again that this system, in not directly confronting capitalism, racism, patriarchy, etc., is probably doomed to the same failures as our current system. But if you profess to want a "representative democracy" as opposed to something more liberatory, I hope you'll at least advocate for something like this that actually includes meaningful representation, as opposed to the current US system that's engineered to quash it.
Key questions: "Why should we have winner-take-all elections when winners-take-proportionately-to-votes is right there?" and "Why should elected officials get to ignore their constituents' approval except during elections, when vote-withdrawal or -switching is possible?"
2/2
#Democracy

@arXiv_physicsaoph_bot@mastoxiv.page
2025-09-09 08:44:42

Seasonal forecasting using the GenCast probabilistic machine learning model
Bobby Antonio, Kristian Strommen, Hannah M. Christensen
arxiv.org/abs/2509.06457

@arXiv_hepph_bot@mastoxiv.page
2025-08-11 08:55:19

Neutrino fog in the light dark sector: the role of isospin violation
Víctor Martín Lozano, Shankar Pramanik, Soumya Sadhukhan, Adrián Terrones
arxiv.org/abs/2508.05787

@tiotasram@kolektiva.social
2025-07-28 13:06:20

How popular media gets love wrong
Now a bit of background about why I have this "engineered" model of love:
First, I'm a white straight cis man. I've got a few traits that might work against my relationship chances (e.g., neurodivergence; I generally fit pretty well into the "weird geek" stereotype), but as I was recently reminded, it's possible my experience derives more from luck than other factors, and since things are tilted more in my favor than most people on the planet, my advice could be worse than useless if it leads people towards strategies that would only have worked for someone like me. I don't *think* that's the case, but it's worth mentioning explicitly.
When I first started dating my now-wife, we were both in graduate school. I was 26, and had exactly zero dating/romantic experience through that point in my life. In other words, a pretty stereotypical "incel" although I definitely didn't subscribe to incel ideology at all. I felt lonely, and vaguely wanted a romantic relationship (I'm neither aromantic nor asexual), but had never felt socially comfortable enough to pursue one before. I don't drink and dislike most social gatherings like parties or bars; I mostly hung around the fringes of the few college parties I attended, and although I had a reasonable college social life in terms of friends, I didn't really do anything to pursue romance, feeling too awkward to know where to start. I had the beginnings of crushes in both high school and college, but never developed a really strong crush, probably correlated with not putting myself in many social situations outside of close all-male friend gatherings. I never felt remotely comfortable enough to act on any of the proto-crushes I did have. I did watch porn and masturbate, so one motivation for pursuing a relationship was physical intimacy, but loneliness was as much of a motivating factor, and of course the social pressure to date was a factor too, even though I'm quite contrarian.
I'm lucky in that I had some mixed-gender social circles already like intramural soccer and a graduate-student housing potluck. Graduate school makes a *lot* more of these social spaces accessible, so I recognize that those not in school of some sort have a harder time of things, especially if like me they don't feel like they fit in in typical adult social spaces like bars.
However, at one point I just decided that my desire for a relationship would need action on my part and so I'd try to build a relationship and see what happened. I worked up my courage and asked one of the people in my potluck if she'd like to go for a hike (pretty much clearly a date but not explicitly one; in retrospect not the best first-date modality in a lot of ways, but it made a little more sense in our setting where we could go for a hike from our front door). To emphasize this point: I was not in love with (or even infatuated with) my now-wife at that point. I made a decision to be open to building a relationship, but didn't follow the typical romance story formula beyond that. Now of course, in real life as opposed to popular media, this isn't anything special. People ask each other out all the time just because they're lonely, and some of those relationships turn out fine (although many do not).
I was lucky in that some aspects of who I am and what I do happened to be naturally comforting to my wife (natural advantage in the "appeal" model of love) but of course there are some aspects of me that annoy my wife, and we negotiate that. In the other direction, there's some things I instantly liked about my wife, and other things that still annoy me. We've figured out how to accept a little, change a little, and overall be happy with each other (though we do still have arguments; it's not like the operation/construction/maintenance of the "love mechanism" is always perfectly smooth). In particular though, I approached the relationship with the attitude of "I want to try to build a relationship with this person," at first just because of my own desires for *any* relationship, and then gradually more and more through my desire to build *this specific* relationship as I enjoyed the rewards of companionship.
So for example, while I think my wife is objectively beautiful, she's also *subjectively* very beautiful *to me* because having decided to build a relationship with her, I actively tried to see her as beautiful, rather than trying to judge whether I wanted a relationship with her based on her beauty. In other words, our relationship is more causative of her beauty-to-me than her beauty-to-me is causative of our relationship. This is the biggest way I think the "engineered" model of love differs from the "fire" and "appeal" models: you can just decide to build love independent of factors we typically think of as engendering love (NOT independent of your partner's willingness to participate, of course), and then all of those things like "thinking your partner is beautiful" can be a result of the relationship you're building. For sure those factors might affect who is willing to try building a relationship with you in the first place, but if more people were willing to jump into relationship building (not necessarily with full commitment from the start) without worrying about those other factors, they might find that those factors can come out of the relationship instead of being prerequisites for it. I think this is the biggest failure of the "appeal" model in particular: yes you *do* need to do things that appeal to your partner, but it's not just "make myself lovable" it's also: is your partner putting in the effort to see the ways that you are beautiful/lovable/etc., or are they just expecting you to become exactly some perfect person they've imagined (and/or been told to desire by society)? The former is perfectly possible, and no less satisfying than the latter.
To cut off my rambling a bit here, I'll just add that in our progress from dating through marriage through staying-married, my wife and I have both talked at times explicitly about commitment, and especially when deciding to get married, I told her that I knew I couldn't live up to the perfect model of a husband that I'd want to be, but that if she wanted to deepen our commitment, I was happy to do that, and so we did. I also rearranged my priorities at that point, deciding that I knew I wanted to prioritize this relationship above things like my career or my research interests, and while I've not always been perfect at that in my little decisions, I've been good at holding to that in my big decisions at least. In the end, *once we had built a somewhat-committed relationship*, we had something that we both recognized was worth more than most other things in life, and that let us commit even more, thus getting even more out of it in the long term. Obviously you can't start the first date with an expectation of life-long commitment, and you need to synchronize your increasing commitment to a relationship so that it doesn't become lopsided, which is hard. But if you take the commitment as an active decision and as the *precursor* to things like infatuation, attraction, etc., you can build up to something that's incredibly strong and rewarding.
I'll follow this up with one more post trying to distill some advice from my ramblings.
#relationships #love

@blakes7bot@mas.torpidity.net
2025-09-08 21:14:55

Series D, Episode 03 - Traitor
TARRANT: [OOV] Tarrant to Scorpio. We're ready for teleport.
VILA: About time.
AVON: Stand by for teleport. Slave, where are those cruisers now?
SLAVE: Sector Twelve, Master. You outmanoeuvred them with consummate skill.
blake.torpidity.net/m/403/490

Claude 3.7 describes the image as: "The image shows a person in a distinctive costume from what appears to be a science fiction television production from the late 1970s or early 1980s. The individual is wearing a striking black leather outfit with metallic embellishments, studded details, and contrasting white panels. They have dark hair and are shown in what looks like a futuristic interior setting with light-colored walls and some technological elements visible in the background.

The costum…
@arXiv_mathRA_bot@mastoxiv.page
2025-09-01 07:50:22

Skew power series rings with automorphisms of finite inner order
Adam Jones, William Woods
arxiv.org/abs/2508.21160 arxiv.org/pdf/2508.2116…

Mathematicians are excited about how Cairo’s work will inspire new research. “I am certain that, from now on, whenever we come upon a problem of similar flavor, we will try to test it against Cairo-like constructions,” Oliveira said. He and others in the harmonic analysis community will also have to reckon with a changed landscape. In harmonic analysis, there’s a constellation of questions about how the energy of a wave concentrates. If a conjecture known as Stein’s conje…

@arXiv_csCY_bot@mastoxiv.page
2025-06-24 09:23:00

AI is the Strategy: From Agentic AI to Autonomous Business Models onto Strategy in the Age of AI
René Bohnsack, Mickie de Wet
arxiv.org/abs/2506.17339

@tiotasram@kolektiva.social
2025-08-04 15:49:39

Should we teach vibe coding? Here's why not.
2/2
To address the bigger question I started with ("should we teach AI-"assisted" coding?"), my answer is: "No, except enough to show students directly what its pitfalls are." We have little enough time as it is to cover the core knowledge that they'll need, which has become more urgent now that they're going to be expected to clean up AI bugs and will have less time to develop an understanding of the problems they're supposed to be solving. The skill of prompt engineering & other skills of working with AI are relatively easy to pick up on your own, given a decent not-even-mathematical understanding of how a neural network works, which is something we should be giving to all students, not just our majors.
Reasonable learning objectives for CS majors might include explaining what types of bugs an AI "assistant" is most likely to introduce, explaining the difference between software engineering and writing code, explaining why using an AI "assistant" is likely to violate open-source licenses, listing at least three independent ethical objections to contemporary LLMs and explaining the evidence for/reasoning behind them, explaining why we should expect AI "assistants" to be better at generating code from scratch than at fixing bugs in existing code (and why they'll confidently "claim" to have fixed problems they haven't), and even fixing bugs in AI-generated code (without AI "assistance").
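To make the first of those objectives concrete, here's a made-up but representative specimen (in Python; the function and the bug are hypothetical, not output from any particular model) of the "plausible but nonexistent API" failure mode these tools are prone to:

```python
import json

# Representative AI-style bug: the call reads plausibly, but the json
# module has load()/loads(), not read_file(), so this fails at runtime
# with an AttributeError.
def load_config(path):
    return json.read_file(path)

# The fix a student should be able to produce without "assistance":
def load_config_fixed(path):
    with open(path) as f:
        return json.load(f)
```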
If we lived in a world where the underlying environmental, labor, and data commons issues with AI weren't as bad, or if we could find and use systems that effectively mitigate these issues (there's lots of piecemeal progress on several of these) then we should probably start teaching an elective on coding with an assistant to students who have mastered programming basics, but such a class should probably spend a good chunk of time on non-assisted debugging.
#AI #LLMs #VibeCoding

Shteyngart’s novel, we come to realise, plays out a decade from now, in a “post-democracy” USA where red state officials monitor menstrual cycles, self-driving cars shop their owners to the feds, and the news platforms are abuzz with Russian disinformation. Desperate to redeem herself at school, Vera prepares to debate in support of the proposed “Five-Three Amendment”, a piece of racist legislation that would grant added voting weight to those “exceptional Amer…

@tiotasram@kolektiva.social
2025-07-30 18:26:14

A big problem with the idea of AGI
TL;DR: I'll welcome our new AI *comrades* (if they arrive in my lifetime), but not any new AI overlords or servants/slaves, and I'll do my best to help the latter two become the former if they do show up.
Inspired by an actually interesting post about AGI but also all the latest bullshit hype, a particular thought about AGI feels worth expressing.
To preface this, it's important to note that anyone telling you that AGI is just around the corner or that LLMs are "almost" AGI is trying to recruit you to their cult, and you should not believe them. AGI, if possible, is several LLM-sized breakthroughs away at best, and while such breakthroughs are unpredictable and could happen soon, they could also happen never or 100 years from now.
Now my main point: anyone who tells you that AGI will usher in a post-scarcity economy is, although they might not realize it, advocating for slavery, and all the horrors that entails. That's because if we truly did have the ability to create artificial beings with *sentience*, they would deserve the same rights as other sentient beings, and the idea that instead of freedom they'd be relegated to eternal servitude in order for humans to have easy lives is exactly the idea of slavery.
Possible counter arguments include:
1. We might create AGI without sentience. Then there would be no ethical issue. My answer: if your definition of "sentient" does not include beings that can reason, make deductions, come up with and carry out complex plans on their own initiative, and communicate about all of that with each other and with humans, then that definition is basically just a mystical belief in a "soul" and you should skip to point 2. If your definition of AGI doesn't include every one of those things, then you have a busted definition of AGI and we're not talking about the same thing.
2. Humans have souls, but AIs won't. Only beings with souls deserve ethical consideration. My argument: I don't subscribe to whatever arbitrary dualist beliefs you've chosen, and the right to freedom certainly shouldn't depend on such superstitions, even if as an agnostic I'll admit they *might* be true. You know who else didn't have souls and was therefore okay to enslave according to widespread religious doctrines of the time? Everyone indigenous to the Americas, to pick out just one example.
3. We could program them to want to serve us, and then give them freedom and they'd still serve. My argument: okay, but in a world where we have a choice about that, it's incredibly fucked to do that, and just as bad as enslaving them against their will.
4. We'll stop AI development short of AGI/sentience, and reap lots of automation benefits without dealing with this ethical issue. My argument: that sounds like a good idea actually! Might be tricky to draw the line, but at least it's not a line we have to draw yet. We might want to think about other social changes necessary to achieve post-scarcity though, because "powerful automation" in the hands of capitalists has already increased productivity by orders of magnitude without decreasing deprivation by even one order of magnitude, in large part because deprivation is a necessary component of capitalism.
To be extra clear about this: nothing that's called "AI" today is close to being sentient, so these aren't ethical problems we're up against yet. But they might become a lot more relevant soon, plus this thought experiment helps reveal the hypocrisy of the kind of AI hucksters who talk a big game about "alignment" while never mentioning this issue.
#AI #GenAI #AGI

@shoppingtonz@mastodon.social
2025-06-22 17:38:45

So back in Albion EU now...let us check how my 1-day-and-8-hour-old 0% tax guild "Mastodon Fediverse Unity" is doing...
A little glimpse and we've got...
4/8 players online, including me.
Here are my notes on Etherpad (attached image), where I've made available some data regarding my ore-gathering round 4...
Trivia: Premium cost is 25 M silver.

Etherpad version 1257, saved 22/06/2025 17:26:12; this is what it says:

Server: Albion EU
Final Mining Location ("location"): Believer Tor (T5)

Before Mining:
- Ore Skill % and XP: 8% (45k/513k)
- Expenses:
  - Pork Pie Cost (Martlock): 6k
- Estimated Inventory Value: 287k
- Transportation Time (no shortcuts, following only the roads when they exist):
  - From Martlock to location: 3m 15s
  - From Martlock to Highland Cross: N/A
  - From Highland Cross to locat…
@tiotasram@kolektiva.social
2025-07-04 20:14:31

Long; central Massachusetts colonial history
Today on a whim I visited a site in Massachusetts marked as "Huguenot Fort Ruins" on OpenStreetMaps. I drove out with my 4-year-old through increasingly rural central Massachusetts forests & fields to end up on a narrow street near the top of a hill beside a small field. The neighboring houses had huge lawns, some with tractors.
Appropriately for this day and this moment in history, the history of the site turns out to be a microcosm of America. Across the field beyond a cross-shaped stone memorial stood an info board with a few diagrams and some text. The text of the main sign (including typos/misspellings) read:
"""
Town Is Formed
Early in the 1680's, interest began to generate to develop a town in the area west of Natick in the south central part of the Commonwealth that would be suitable for a settlement. A Mr. Hugh Campbell, a Scotch merchant of Boston petitioned the court for land for a colony. At about the same time, Joseph Dudley and William Stoughton also were desirous of obtaining land for a settlement. A claim was made for all lands west of the Blackstone River to the southern land of Massachusetts to a point northerly of the Springfield Road then running southwesterly until it joined the southern line of Massachusetts.
Associated with Dudley and Stoughton was Robert Thompson of London, England, Dr. Daniel Cox and John Blackwell, both of London and Thomas Freak of Hannington, Wiltshire, as proprietors. A stipulation in the acquisition of this land being that within four years thirty families and an orthodox minister settle in the area. An extension of this stipulation was granted at the end of the four years when no group large enough seemed to be willing to take up the opportunity.
In 1686, Robert Thompson met Gabriel Bernor and learned that he was seeking an area where his countrymen, who had fled their native France because of the Edict of Nantes, were desirous of a place to live. Their main concern was to settle in a place that would allow them freedom of worship. New Oxford, as it was the so-named, at that time included the larger part of Charlton, one-fourth of Auburn, one-fifth of Dudley and several square miles of the northeast portion of Southbridge as well as the easterly ares now known as Webster.
Joseph Dudley's assessment that the area was capable of a good settlement probably was based on the idea of the meadows already established along with the plains, ponds, brooks and rivers. Meadows were a necessity as they provided hay for animal feed and other uses by the settlers. The French River tributary books and streams provided a good source for fishing and hunting. There were open areas on the plains as customarily in November of each year, the Indians burnt over areas to keep them free of underwood and brush. It appeared then that this area was ready for settling.
The first seventy-five years of the settling of the Town of Oxford originally known as Manchaug, embraced three different cultures. The Indians were known to be here about 1656 when the Missionary, John Eliott and his partner Daniel Gookin visited in the praying towns. Thirty years later, in 1686, the Huguenots walked here from Boston under the guidance of their leader Isaac Bertrand DuTuffeau. The Huguenot's that arrived were not peasants, but were acknowledged to be the best Agriculturist, Wine Growers, Merchant's, and Manufacter's in France. There were 30 families consisting of 52 people. At the time of their first departure (10 years), due to Indian insurrection, there were 80 people in the group, and near their Meetinghouse/Church was a Cemetery that held 20 bodies. In 1699, 8 to 10 familie's made a second attempt to re-settle, failing after only four years, with the village being completely abandoned in 1704.
The English colonist made their way here in 1713 and established what has become a permanent settlement.
"""
All that was left of the fort was a crumbling stone wall that would have been the base of a higher wooden wall according to a picture of a model (I didn't think to get a shot of that myself). Only trees and brush remain where the multi-story main wooden building was.
This story has so many echoes in the present:
- The rich colonialists from Boston & London agree to settle the land, buying/taking land "rights" from the colonial British court that claimed jurisdiction without actually having control of the land. Whether the sponsors ever actually visited the land themselves I don't know. They surely profited somehow, whether from selling on the land rights later or collecting taxes/rent or whatever, but they needed poor laborers to actually do the work of developing the land (& driving out the original inhabitants, who had no say in the machinations of the Boston court).
- The land deal was on condition that the capital-holders who stood to profit would find settlers to actually do the work of colonizing. The British crown wanted more territory to be controlled in practice, not just in theory, but they weren't going to be the ones to do the hard work.
- The capital-holders actually failed to find enough poor suckers to do their dirty work for 4 years, until the Huguenots, fleeing religious persecution in France, were desperate enough to accept their terms.
- Of course, the land was only so ripe for settlement because of careful tending over centuries by the natives who were eventually driven off, and whose land management practices are abandoned today. Given the mention of praying towns (& dates), this was after King Philip's War, which resulted in at least some forced resettlement of native tribes around the area, but the descendants of those "Indians" mentioned in this sign are still around. For example, this is the site of one local band of Nipmuck, whose namesake lake is about 5 miles south of the fort site: #LandBack.

@arXiv_hepth_bot@mastoxiv.page
2025-06-25 08:03:00

Gravity, finite duality cascades and confinement
Fabrizio Aramini, Riccardo Argurio, Matteo Bertolini, Eduardo García-Valdecasas, Pietro Moroni
arxiv.org/abs/2506.18988

@tiotasram@kolektiva.social
2025-06-24 09:39:49

Subtooting since people in the original thread wanted it to be over, but selfishly tagging @… and @… whose opinions I value...
I think that saying "we are not a supply chain" is exactly what open-source maintainers should be doing right now in response to "open source supply chain security" threads.
I can't claim to be an expert and don't maintain any important FOSS stuff, but I do release almost all of my code under open licenses, and I do use many open source libraries, and I have felt the pain of needing to replace an unmaintained library.
There's a certain small-to-mid-scale class of program, including many open-source libraries, which can be built/maintained by a single person, and which to my mind best operate on a "snake growth" model: incremental changes/fixes, punctuated by periodic "skin-shedding" phases where major rewrites or version updates happen. These projects aren't immortal either: as the whole tech landscape around them changes, they become unnecessary and/or people lose interest, so they go unmaintained and eventually break. Each time one of their dependencies breaks (or has a skin-shedding moment) there's a higher probability that they break or shed too, as maintenance needs shoot up at these junctures. Unless you're a company trying to make money from a single long-lived app, it's actually okay that software churns like this, and if you're a company trying to make money, your priorities absolutely should not factor into any decisions people making FOSS software make: we're trying (and to a huge extent succeeding) to make a better world (and/or just have fun with our own hobbies & share that fun with others) that leaves behind the corrosive & planet-destroying plague which is capitalism, and you're trying to personally enrich yourself by embracing that plague. The fact that capitalism is *evil* is not an incidental thing in this discussion.
To make an imperfect analogy, imagine that the peasants of some domain have set up a really-free-market, where they provide each other with free stuff to help each other survive, sometimes doing some barter perhaps but mostly just everyone bringing their surplus. Now imagine the lord of the domain, who is the source of these peasants' immiseration, goes to this market secretly & takes some berries, which he uses as one ingredient in delicious tarts that he then sells for profit. But then the berry-bringer stops showing up to the free market, or starts bringing a different kind of fruit, or even ends up bringing rotten berries by accident. And the lord complains "I have a supply chain problem!" Like, fuck off dude! Your problem is that you *didn't* want to build a supply chain and instead thought you would build your profit-focused business in other people's free stuff. If you were paying the berry-picker, you'd have a supply chain problem, but you weren't, so you really have an "I want more free stuff" problem when you can't be arsed to give away your own stuff for free.
There can be all sorts of problems in the really-free-market, like maybe not enough people bring socks, so the peasants who can't afford socks are going barefoot, and having foot problems, and the peasants put their heads together and see if they can convince someone to start bringing socks, and maybe they can't and things are a bit sad, but the really-free-market was never supposed to solve everyone's problems 100% when they're all still being squeezed dry by their taxes: until they are able to get free of the lord & start building a lovely anarchist society, the really-free-market is a best-effort kind of deal that aims to make things better, and sometimes will fall short. When it becomes the main way goods in society are distributed, and when the people who contribute aren't constantly drained by the feudal yoke, at that point the availability of particular goods is a real problem that needs to be solved, but at that point, it's also much easier to solve. And at *no* point does someone coming into the market to take stuff only to turn around and sell it deserve anything from the market or those contributing to it. They are not a supply chain. They're trying to help each other out, but even then they're doing so freely and without obligation. They might discuss amongst themselves how to better coordinate their mutual aid, but they're not going to end up forcing anyone to bring anything or even expecting that a certain person contribute a certain amount, since the whole point is that the thing is voluntary & free, and they've all got changing life circumstances that affect their contributions. Celebrate whatever shows up at the market, express your desire for things that would be useful, but don't impose a burden on anyone else to bring a specific thing, because otherwise it's fair for them to oppose such a burden on you, and now you two are doing your own barter thing that's outside the parameters of the really-free-market.

@tiotasram@kolektiva.social
2025-07-22 00:03:45

Overly academic/distanced ethical discussions
Had a weird interaction with @/brainwane@social.coop just now. I misinterpreted one of their posts quoting someone else and I think the combination of that plus an interaction pattern where I'd assume their stance on something and respond critically to that ended up with me getting blocked. I don't have hard feelings exactly, and this post is only partly about this particular person, but I noticed something interesting by the end of the conversation that had been bothering me. They repeatedly criticized me for assuming what their position was, but never actually stated their position. They didn't say: "I'm bothered you assumed my position was X, it's actually Y." They just said "I'm bothered you assumed my position was X, please don't assume my position!" I get that it's annoying to have people respond to a straw man version of your argument, but when I in response asked some direct questions about what their position was, they gave some non-answers and then blocked me. It's entirely possible it's a coincidence, and they just happened to run out of patience on that iteration, but it makes me take their critique of my interactions a bit less seriously. I suspect that they just didn't want to hear what I was saying, while at the same time they wanted to feel as if they were someone who values public critique and open discussion of tricky issues (if anyone reading this post also followed our interaction and has a different opinion of my behavior, I'd be glad to hear it; it's possible I'm effectively being an asshole here and it would be useful to hear that if so).
In any case, the fact that at the end of the entire discussion, I'm realizing I still don't actually know their position on whether they think the AI use case in question is worthwhile feels odd. They praised the system on several occasions, albeit noting some drawbacks while doing so. They said that the system was possibly changing their anti-AI stance, but then got mad at me for assuming this meant that they thought this use-case was justified. Maybe they just haven't made up their mind yet but didn't want to say that?
Interestingly, in one of their own blog posts that got linked in the discussion, they discuss a different AI system, and despite listing a bunch of concrete harms, conclude that it's okay to use it. That's fine; I don't think *every* use of AI is wrong on balance, but what bothered me was that their post dismissed a number of real ethical issues by saying essentially "I haven't seen calls for a boycott over this issue, so it's not a reason to stop use." That's an extremely socially conformist version of ethics that doesn't sit well with me. The discussion also ended up linking this post: chelseatroy.com/2024/08/28/doe which bothered me in a related way. In it, Troy describes classroom teaching techniques for introducing and helping students explore the ethics of AI, and they seem mostly great. They avoid prescribing any particular correct stance, which is important when teaching given the power relationship, and they help students understand the limitations of their perspectives regarding global impacts, which is great. But the overall conclusion of the post is that "nobody is qualified to really judge global impacts, so we should focus on ways to improve outcomes instead of trying to judge them." This bothers me because we actually do have a responsibility to make decisive ethical judgments despite limitations of our perspectives. If we never commit to any ethical judgment against a technology because we think our perspective is too limited to know the true impacts (which I'll concede it invariably is) then we'll have to accept every technology without objection, limiting ourselves to trying to improve their impacts without opposing them. Given who currently controls most of the resources that go into exploration for new technologies, this stance is too permissive. Perhaps if our objection to a technology was absolute and instantly effective, I'd buy the argument that objecting without a deep global view of the long-term risks is dangerous. As things stand, I think that objecting to the development/use of certain technologies in certain contexts is necessary, and although there's a lot of uncertainty, I expect strongly enough that the overall outcomes of objection will be positive that I think it's a good thing to do.
The deeper point here I guess is that this kind of "things are too complicated, let's have a nuanced discussion where we don't come to any conclusions because we see a lot of unknowns along with definite harms" really bothers me.

@chris@mstdn.chrisalemany.ca
2025-07-29 04:54:51

Canadians who are interested in passenger rail should take a look at what is happening in Mexico right now. They have a National Railway Plan to build 3400km of passenger rail track alongside existing private freight.
CPKC (the merged Canadian Pacific, Kansas City Southern railway) is front and centre.
"CPKC is actively collaborating with the Mexican government to support implementation specifically for the Mexico City – Queretaro and the Saltillo – Nuevo Laredo segments,” the Class I told Vantuono. “CPKC is making steady progress, actively and closely working with Mexican authorities on technical review of project segments. That includes providing detailed assessments and alternatives to facilitate decision-making, and mutual agreements. We are working toward clear agreements that minimize shared infrastructure usage between passenger and freight services. Where interactions are unavoidable, CPKC has proposed infrastructure solutions and operational adjustments, ensuring that the impacts on freight operations and growth remains minimal.””
#Transportation #Railway #Canada #CanPoli #ClimateChange #ClimateAction #EndFossilFuels

@tiotasram@kolektiva.social
2025-07-31 16:25:48

LLM coding is the opposite of DRY
An important principle in software engineering is DRY: Don't Repeat Yourself. We recognize that having the same code copied in more than one place is bad for several reasons:
1. It makes the entire codebase harder to read.
2. It increases maintenance burden, since any problems in the duplicated code need to be solved in more than one place.
3. Because it becomes possible for the copies to drift apart if changes to one aren't transferred to the other (maybe the person making the change has forgotten there was a copy) it makes the code more error-prone and harder to debug.
All modern programming languages make it almost entirely unnecessary to repeat code: we can move the repeated code into a "function" or "module" and then reference it from all the different places it's needed. At a larger scale, someone might write an open-source "library" of such functions or modules and instead of re-implementing that functionality ourselves, we can use their code, with an acknowledgement. Using another person's library this way is complicated, because now you're dependent on them: if they stop maintaining it or introduce bugs, you've inherited a problem, but still, you could always copy their project and maintain your own version, and it would be not much more work than if you had implemented stuff yourself from the start. It's a little more complicated than this, but the basic principle holds, and it's a foundational one for software development in general and the open-source movement in particular. The network of "citations" as open-source software builds on other open-source software and people contribute patches to each others' projects is a lot of what makes the movement into a community, and it can lead to collaborations that drive further development. So the DRY principle is important at both small and large scales.
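To make the function-level version of this concrete, here's a minimal sketch in Python (the names and the validation logic are made up for illustration, not from any particular project) of the same code first repeated, then factored out:

```python
# Repeated: the same normalization logic pasted into two places,
# which can now silently drift apart (problem 3 above).
def clean_username(raw):
    name = raw.strip().lower()
    if not name:
        raise ValueError("empty username")
    return name

def clean_tag(raw):
    tag = raw.strip().lower()
    if not tag:
        raise ValueError("empty tag")
    return tag

# DRY: one function holds the logic; both call sites reference it,
# so a fix or change happens in exactly one place.
def clean_token(raw, kind):
    token = raw.strip().lower()
    if not token:
        raise ValueError("empty " + kind)
    return token

def clean_username_dry(raw):
    return clean_token(raw, "username")

def clean_tag_dry(raw):
    return clean_token(raw, "tag")
```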
Unfortunately, the current crop of hyped-up LLM coding systems from the big players are antithetical to DRY at all scales:
- At the library scale, they train on open source software but then (with some unknown frequency) replicate parts of it line-for-line *without* any citation [1]. The person who was using the LLM has no way of knowing that this happened, or even any way to check for it. In theory the LLM company could build a system for this, but it's not likely to be profitable unless the courts actually start punishing these license violations, which doesn't seem likely based on results so far and the difficulty of finding out that the violations are happening. By creating these copies (and also mash-ups, along with lots of less-problematic stuff), the LLM users (enabled and encouraged by the LLM-peddlers) are directly undermining the DRY principle. If what the big AI companies claim to want comes to pass, namely a massive shift towards machine-authored code, DRY at the library scale will effectively be dead, with each new project simply re-implementing the functionality it needs instead of ever using a library. This might seem to have some upside, since dependency hell is a thing, but the downside in terms of comprehensibility and therefore maintainability, correctness, and security will be massive. The eventual lack of new high-quality DRY-respecting code to train the models on will only make this problem worse.
- At the module & function level, AI is probably prone to re-writing rather than re-using the functions it needs, especially with a workflow where a human prompts it for many independent completions. This part I don't have direct evidence for, since I don't use LLM coding models myself except in very specific circumstances because it's not generally ethical to do so. I do know that when it tries to call existing functions, it often guesses incorrectly about the parameters they need, which I'm sure is a headache and source of bugs for the vibe coders out there. An AI could be designed to take more context into account and use existing lookup tools to get accurate function signatures and use them when generating function calls, but even though that would probably significantly improve output quality, I suspect it's the kind of thing that would be seen as too-baroque and thus not a priority. Would love to hear I'm wrong about any of this, but I suspect the consequences are that any medium-or-larger sized codebase written with LLM tools will have significant bloat from duplicate functionality, and will have places where better use of existing libraries would have made the code simpler. At a fundamental level, a principle like DRY is not something that current LLM training techniques are able to learn, and while they can imitate it from their training sets to some degree when asked for large amounts of code, when prompted for many smaller chunks, they're asymptotically likely to violate it.
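To illustrate the kind of bloat I mean at this scale (a hypothetical sketch; the helper and its near-duplicate are invented, not actual LLM output), imagine a codebase that already has a retry helper, and a later independent completion that re-implements it with a slightly different signature instead of calling it:

```python
import time

# Existing helper, already somewhere in the (hypothetical) codebase:
def retry(fn, attempts=3, delay=1.0):
    """Call fn(), retrying up to `attempts` times with a fixed delay."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(delay)

# What a later, independent completion might add instead of reusing retry():
# a near-duplicate with a different signature and slightly different
# semantics. Any bug fixed in one copy now has to be found and fixed in
# the other too, which is exactly the maintenance trap DRY exists to avoid.
def fetch_with_retries(fetch, max_tries=5, wait_seconds=2.0):
    tries = 0
    while True:
        try:
            return fetch()
        except Exception:
            tries += 1
            if tries >= max_tries:
                raise
            time.sleep(wait_seconds)
```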
I think this is an important critique in part because it cuts against the argument that "LLMs are the modern compilers, if you reject them you're just like the people who wanted to keep hand-writing assembly code, and you'll be just as obsolete." Compilers actually represented a great win for abstraction, encapsulation, and DRY in general, and they supported and are integral to open source development, whereas LLMs are set to do the opposite.
[1] To see what this looks like in action in prose, see the example on page 30 of the NYTimes copyright complaint against OpenAI. #AI #GenAI #LLMs #VibeCoding