Tootfinder

Opt-in global Mastodon full text search. Join the index!

@tiotasram@kolektiva.social
2025-06-21 02:34:13

Why AI can't possibly make you more productive; long
#AI and "productivity", some thoughts:
Edit: fixed some typos.
Productivity is a concept that isn't entirely meaningless outside the context of capitalism, but it's a concept that is heavily inflected in a capitalist context. In many uses today it effectively means "how much you can satisfy and/or exceed your boss' expectations." This is not really what it should mean: even in an anarchist utopia, people would care about things like how many shirts they can produce in a week, although in an "I'd like to voluntarily help more people" way rather than an "I need to meet this quota to earn my survival" way. But let's roll with this definition for a second, because it's almost certainly what your boss means when they say "productivity", and understanding that word in a different (even if truer) sense is therefore inherently dangerous.
Accepting "productivity" to mean "satisfying your boss' expectations," I will now claim: the use of generative AI cannot increase your productivity.
Before I dive in, it's imperative to note that the big generative models which most people think of as constituting "AI" today are evil. They are 1: pouring fuel on our burning planet, 2: psychologically strip-mining a class of data laborers who are exploited for their precarity, 3: enclosing, exploiting, and polluting the digital commons, and 4: stealing labor from broad classes of people many of whom are otherwise glad to give that labor away for free provided they get a simple acknowledgement in return. Any of these four "ethical issues" should be enough *alone* to cause everyone to simply not use the technology. These ethical issues are the reason that I do not use generative AI right now, except for in extremely extenuating circumstances. These issues are also convincing for a wide range of people I talk to, from experts to those with no computer science background. So before I launch into a critique of the effectiveness of generative AI, I want to emphasize that such a critique should be entirely unnecessary.
But back to my thesis: generative AI cannot increase your productivity, where "productivity" has been defined as "how much you can satisfy and/or exceed your boss' expectations."
Why? In fact, what the fuck? Every AI booster I've met has claimed the opposite. They've given me personal examples of time saved by using generative AI. Some of them even truly believe this. Sometimes I even believe they saved time without horribly compromising on quality (and often, your boss doesn't care about quality anyways if the lack of quality is hard to measure or doesn't seem likely to impact short-term sales/feedback/revenue). So if generative AI genuinely lets you write more emails in a shorter period of time, or close more tickets, or something else along these lines, how can I say it isn't increasing your ability to meet your boss' expectations?
The problem is simple: your boss' expectations are not a fixed target. Never have been. In virtue of being someone who oversees and pays wages to others under capitalism, your boss' game has always been: pay you less than the worth of your labor, so that they can accumulate profit and thus more capital to remain in charge instead of being forced into working for a wage themselves. Sure, there are layers of management caught in between who aren't fully in this mode, but they are irrelevant to this analysis. It matters not how much you please your manager if your CEO thinks your work is not worth the wages you are being paid. And using AI actively lowers the value of your work relative to your wages.
Why do I say that? It's actually true in several ways. The most obvious: using generative AI lowers the quality of your work, because the work it produces is shot through with errors, and when your job is reduced to proofreading slop, you are bound to tire a bit, relax your diligence, and let some mistakes through, more than you would have if you were actually doing and taking pride in the work. Examples are innumerable and frequent, from journalists to lawyers to programmers, and we laugh at them "haha how stupid to not check whether the books the AI reviewed for you actually existed!" but on a deeper level if we're honest we know we'd eventually make the same mistake ourselves (bonus game: spot the swipe-typing typos I missed in this post; I'm sure there will be some).
But using generative AI also lowers the value of your work in another much more frightening way: in this era of hype, it demonstrates to your boss that you could be replaced by AI. The more you use it, and no matter how much you can see that your human skills are really necessary to correct its mistakes, the more it appears to your boss that they should hire the AI instead of you. Or perhaps retain 10% of the people in roles like yours to manage the AI doing the other 90% of the work. Paradoxically, the *more* you get done in terms of raw output using generative AI, the more it looks to your boss as if there's an opportunity to get enough work done with even fewer expensive humans. Of course, the decision to fire you and lean more heavily into AI isn't really a good one for long-term profits and success, but the modern boss did not get where they are by considering long-term profits. By using AI, you are merely demonstrating your redundancy, and the more you get done with it, the more redundant you seem.
In fact, there's even a third dimension to this: by using generative AI, you're also providing its purveyors with invaluable training data that allows them to make it better at replacing you. It's generally quite shitty right now, but the more use it gets by competent & clever people, the better it can become at the tasks those specific people use it for. Using the currently-popular algorithm family, there are limits to this; I'm not saying it will eventually transcend the mediocrity it's entwined with. But it can absolutely go from underwhelmingly mediocre to almost-reasonably mediocre with the right training data, and data from prompting sessions is both rarer and more useful than the base datasets it's built on.
For all of these reasons, using generative AI in your job is a mistake that will likely lead to your future unemployment. To reiterate, you should already not be using it because it is evil and causes specific and inexcusable harms, but in case like so many you just don't care about those harms, I've just explained to you why for entirely selfish reasons you should not use it.
If you're in a position where your boss is forcing you to use it, my condolences. I suggest leaning into its failures instead of trying to get the most out of it, and as much as possible, showing your boss very clearly how it wastes your time and makes things slower. Also, point out the dangers of legal liability for its mistakes, and make sure your boss is aware of the degree to which any of your AI-eager coworkers are producing low-quality work that harms organizational goals.
Also, if you've read this far and aren't yet of an anarchist mindset, I encourage you to think about the implications of firing 75% of (at least the white-collar) workforce in order to make more profit while fueling the climate crisis and in most cases also propping up dictatorial figureheads in government. When *either* the AI bubble bursts *or* the techbros get to live out the beginnings of their worker-replacement fantasies, there are going to be an unimaginable number of economically desperate people living in increasingly expensive times. I'm the kind of optimist who thinks that the resulting social crucible, though perhaps through terrible violence, will lead to deep social changes that effectively unseat from power the ultra-rich that continue to drag us all down this destructive path, and I think it's worth some thinking now about what you might want the succeeding stable social configuration to look like so you can advocate towards that during points of malleability.
As others have said more eloquently, generative AI *should* be a technology that makes human lives on average easier, and it would be were it developed & controlled by humanists. The only reason that it's not, is that it's developed and controlled by terrible greedy people who use their unfairly hoarded wealth to immiserate the rest of us in order to maintain their dominance. In the long run, for our very survival, we need to depose them, and I look forward to what the term "generative AI" will mean after that finally happens.

@jswright61@ruby.social
2025-07-20 13:30:37

Stock #Mastodon doesn’t allow me to create a webhook to notify me when my account is mentioned, someone replies to one of my posts, or someone I follow posts. I could poll the Masto API but that seems inefficient. With all the bots out there it seems like there would be a better way?
Maybe someone has a custom Fedi server that supports Webhooks. I’m happy to pay for an account there. Maybe th…
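For what it's worth, stock Mastodon does expose a streaming API over WebSockets that pushes notifications (mentions, replies as mentions, follows) and home-timeline updates as they happen, which at least avoids polling even though it isn't a webhook. A rough sketch of the idea (instance domain and token are placeholders; assumes the `websockets` package):
```python
import asyncio
import json
import websockets  # pip install websockets

INSTANCE = "example.social"        # placeholder: your instance's domain
ACCESS_TOKEN = "YOUR_TOKEN_HERE"   # placeholder: a token with read access

async def watch():
    # The "user" stream carries notification events and home timeline updates.
    uri = (f"wss://{INSTANCE}/api/v1/streaming"
           f"?stream=user&access_token={ACCESS_TOKEN}")
    async with websockets.connect(uri) as ws:
        async for raw in ws:
            event = json.loads(raw)
            if event.get("event") == "notification":
                note = json.loads(event["payload"])  # payload is a JSON string
                # Forward to your own webhook endpoint here if desired.
                print(note["type"], "from", note["account"]["acct"])

asyncio.run(watch())
```
A small always-on process like this could relay events to whatever webhook target you actually want.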

@lightweight@mastodon.nzoss.nz
2025-06-19 07:01:36

Richard Murphy provides an excellent characterisation of where western society was at its best (for most people, but not everyone), and how it fell into the increasingly horrible situation it's in now... and how we could return to a 'better way' - what he refers to as 'mixed markets'. yewtu.be/watch?v=lrQh0_B…

@Tupp_ed@mastodon.ie
2025-06-16 07:31:05

Hello!
I have a happy favour to ask.
Last year, I had a US intern working with us and she now is looking for help that you- good reader of this account- could give.
(She needs academic survey participants)
Check out her flyer below, click the link, and please RT

PARTICIPANTS NEEDED
Research Study on Data Privacy and Collective Redress
A researcher at University College Dublin wants to
conduct anonymized interviews to learn about why
you would or would not participate in collective legal
action to rectify a data protection violation. This
information can be used to help nonprofit
organizations better represent you when your data
protection rights are violated.
Requirements:
• 18 + years of age
• Residing in Ireland or the EU
• English speaking
FOR …
@davidaugust@mastodon.online
2025-07-16 21:34:00

Seems to me Coke should put pages of a certain file and list on their bottles, just print them on there. It would really get the information to the people.
Could be so refreshing.
source: trumpstruth.org/statuses/32028


Donald J. Trump
@realDonaldTrump · July 16, 2025, 4:19 PM

I have been speaking to Coca-Cola about using REAL Cane Sugar in Coke in the United States, and they have agreed to do so. I’d like to thank all of those in authority at Coca-Cola. This will be a very good move by them — You’ll see. It’s just better!
@Techmeme@techhub.social
2025-07-16 15:10:39

Airbus' Acubed and Google spinout SandboxAQ test MagNav, a quantum-sensing navigation device and potential GPS alternative, for 150 flight hours across the US (Isabelle Bousquette/Wall Street Journal)
wsj.com/articles/the-secret-to

@UP8@mastodon.social
2025-08-07 21:31:57

☕ A better brew: How regenerative coffee could root out exploitation
news.mongabay.com/short-articl

@theodric@social.linux.pizza
2025-07-17 15:12:59

We could have had the good timeline, but no, you just couldn't imagine it to manifest it. I'm still on their mailing list tho

(FAKE) Trump tweet: "I have been speaking to Coca-Cola about using REAL Cocaine in Coke in the United States, and they have agreed to do so. I'd like to thank all of those in authority at Coca-Cola. This will be a very good move by them - You'll see. It's just better!"
@portaloffreedom@social.linux.pizza
2025-06-16 21:13:09

The Netherlands is having its third complete shutdown of all passenger trains in the entire country tomorrow.
The strikes are caused by employees' wages having fallen far behind inflation. NS management doesn't want to close this gap. The government will not help.
I'm sure there is some huge car project somewhere in the Netherlands that could be killed to invest in better train service. But it will not be.
I dread because I don't have a lot of hope that th…

@brian_gettler@mas.to
2025-06-16 00:04:09

I've been a sportsball coach for a few years now. The kids can be maddeningly distracted, but for the most part it's fun getting them to see things in the game they hadn't seen before and helping them get better. But holy shit I could do without the parents (and the opposing coaches) who take it all way too seriously. This is a game. These are kids. Can't we just shut our traps and let them enjoy it?

@inthehands@hachyderm.io
2025-06-09 16:13:42

“What AI sells is vastly different from what it delivers, particularly what it delivers out of the box.”
The post gives some great context on the study of “the difference between work-as-imagined (WAI) and work-as-done (WAD),” and says:
“If what we have to do to be productive with LLMs is to add a lot of scaffolding and invest effort to gain important but poorly defined skills, we should be able to assume that what we’re sold and what we get are rather different things. That gap implies that better designed artifacts could have better affordances, and be more appropriate to the task at hand.”
5/

@pre@boing.world
2025-06-12 16:14:57

So farewell then Pinetime Watch 🪦.
It isn't showing any signs of charging, even after a completely-flat battery reset. Last hope gone.
Now nobody at all is running my custom software I think, and I have no way to fix the bugs.
Could replace it, but I'm not really paying any attention to the things it measures anyway. Heartrate is too unreliable to be useful and steps seems likely to be counting my leg-jiggles since I tend to hit 10,000 most days without trying or leaving the flat.
The software which tracks my time and mood is probably better running on the phone really. Easier to add notes and detail. Can't really input text from a watch. Location data can be added in ways the watch couldn't.
So back to not wearing a watch at all I think. Who needs it now we all carry pocket watches with internet and telephony.
#pineTime #smartWatch

@saraislet@infosec.exchange
2025-06-12 17:25:19

Burnout leave, day negative 13:
‣ Half of me wants to rip myself to shreds with self-criticism for things I could have done better
‣ Half of me resents that, through my life so far (independent of recent events), the world hasn't been the best place for me to grow and be the best person I can be
‣ Half of me wants to accept both of the above and find the most realistic (and inevitably imperfect) path forward (ACT therapy style)
‣ Half of me is grumbling that these are…

@cowboys@darktundra.xyz
2025-06-30 14:27:47

Dallas Cowboys' major NFL offseason change could be 'better fit' than expected si.com/nfl/cowboys/news/dallas

@arXiv_csRO_bot@mastoxiv.page
2025-07-16 08:15:21

Exteroception through Proprioception Sensing through Improved Contact Modeling for Soft Growing Robots
Francesco Fuentes, Serigne Diagne, Zachary Kingston, Laura H. Blumenschein
arxiv.org/abs/2507.10694

@yaxu@post.lurk.org
2025-07-13 09:00:52

Is it better to let people know, when you see them doing something, that others have done something similar previously?
I've found myself doing this a lot around live coding, on one level it seems helpful to know about prior art, and fun to talk about weird old projects/events. On the other it could be stifling to obsess over identifying the 'first' person to try something, and might feel like old people are trying to pitch their tents all over your garden.

@hex@kolektiva.social
2025-08-07 00:24:12

There was once a machine that told you "you want this" and "this is good." It said, "there can be no better system and it's foolish to try to build one." That machine has long since failed to function. Now you choke on fumes as it is consumed by the wild flames of an abandoned cause.
That machine could not possibly work anymore because the evidence of its falsehood has become too overwhelming.
No, only abject terror now can keep you from plotting your escape, from creating an alternative. No, the illusion has long since broken. All that's left now is triggering fight, flight, freeze as hard as possible. Most will be paralyzed, and those who fight can be used as an excuse to escalate the terror.
These are the final stages of a dying sun, expanding and consuming its children before the final supernova.
There is no longer a stable system, no longer a system with a future. All that remains is the spectacle that hopes to distract you long enough that you too can be consumed, that it may sustain itself a few moments longer.

@tiotasram@kolektiva.social
2025-08-11 13:30:26

Speculative politics
As an anarchist (okay, maybe not in practice), I'm tired of hearing why we have to suffer X and Y indignity to "preserve the rule of law" or "maintain Democratic norms." So here's an example of what representative democracy (a form of government that I believe is inherently flawed) could look like if its proponents had even an ounce of imagination, and/or weren't actively trying to rig it to favor a rich donor class:
1. Unicameral legislature, where representatives pass laws directly. Each state elects 3 statewide representatives: the three most-popular candidates in a statewide race where each person votes for one candidate (ranked preference voting would be even better but might not be necessary, and is not a solution by itself). Instead of each representative getting one vote in the chamber, they get N votes, where N is the number of people who voted for them (see the tallying sketch after this list). This means that in a close race, instead of the winner getting all the power, the power is split. Having 3 representatives trades off between legislature size and ensuring that two parties can't dominate together.
2. Any individual citizen can contact their local election office to switch or withdraw their vote at any time (maybe with a 3-day delay or something). Voting power of representatives can thus shift even without an election. They are limited to choosing one of the three elected representatives, or "none of the above." If the "none of the above" fraction exceeds 20% of eligible voters, a new election is triggered for that state. If turnout is less than 80%, a second election happens immediately, with results being final even at lower turnout until 6 months later (some better mechanism for turnout management might be needed).
3. All elections allow mail-in ballots, and in-person voting happens Sunday-Tuesday with the Monday being a mandatory holiday. (Yes, election integrity is not better in this system and that's a big weakness.)
4. Separate nationwide elections elect three positions for head-of-state: one with diplomatic/administrative powers, another with military powers, and a third with veto power. For each position, the top three candidates serve together, with only the first-place winner having actual power until vote switches or withdrawals change who that is. Once one of these heads loses their first-place status, they cannot get it again until another election, even if voters switch preferences back (to avoid dithering). An election for one of these positions is triggered when 20% have withdrawn their votes, or if all three people initially elected have been disqualified by losing their lead in the vote count.
5. Laws that involve spending money are packaged with specific taxes to pay for them, and may only be paid for by those specific revenues. Each tax may be opted into or out of by each taxpayer; where possible opting out of the tax also opts you out of the service. (I'm well aware of a lot of the drawbacks of this, but also feel like they'd not necessarily be worse than the drawbacks of our current system.) A small mandatory tax would cover election expenses.
6. I'm running out of attention, but similar multi-winner elections could elect panels of judges from which a subset is chosen randomly to preside in each case.
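To make the mechanics in points 1-2 concrete, here's a tiny tallying sketch (all names, counts, and thresholds below are made-up illustrations, not part of the proposal itself):
```python
from dataclasses import dataclass

@dataclass
class Rep:
    name: str
    backers: int  # voters currently standing behind this representative

def bill_passes(yes: list[Rep], no: list[Rep]) -> bool:
    # Point 1: each rep casts as many chamber votes as they have backers.
    return sum(r.backers for r in yes) > sum(r.backers for r in no)

def new_election_needed(none_of_the_above: int, eligible: int) -> bool:
    # Point 2: withdrawals past 20% of eligible voters trigger a new state election.
    return none_of_the_above / eligible > 0.20

a, b, c = Rep("A", 410_000), Rep("B", 395_000), Rep("C", 120_000)
print(bill_passes([a], [b, c]))               # False: A's 410k is outweighed by 515k
print(new_election_needed(190_000, 900_000))  # True: roughly 21% have withdrawn
```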
Now I'll point out once again that this system, in not directly confronting capitalism, racism, patriarchy, etc., is probably doomed to the same failures as our current system. But if you profess to want a "representative democracy" as opposed to something more liberatory, I hope you'll at least advocate for something like this that actually includes meaningful representation as opposed to the current US system that's engineered to quash it.
Key questions: "Why should we have winner-take-all elections when winners-take-proportionately-to-votes is right there?" and "Why should elected officials get to ignore their constituents' approval except during elections, when vote-withdrawal or -switching is possible?"
2/2
#Democracy

@blakes7bot@mas.torpidity.net
2025-07-10 18:15:20

Series B, Episode 07 - Killer
GAMBRILL: They just started to act strangely, vague, wandering about, then they went into convulsions. They're in the sick bay now.
BELLFRIAR: That could be a space contamination, you'd better get me the sick bay at once.
blake.torpidity.net/m/207/365

Claude Sonnet 4.0 describes the image as: "I can see two figures in what appears to be a futuristic interior setting with clean, white and beige walls typical of science fiction production design. One person is wearing a distinctive all-white outfit with puffy, padded segments and a high collar that gives it an almost ceremonial or protective appearance. The other figure is dressed in white with some dark trim details. The lighting and set design create a sterile, high-tech atmosphere character…
@georgiamuseum@glammr.us
2025-06-11 14:35:37

Last year, we bought a collection of 17 Georgia paintings by 19th- and 20th-century artists, many of whom are lesser known. Even the ones who are better known, like #NellChoateJones, aren't exactly _well_ known. We're excited to start studying these works and learning about the Georgia scenes many of them show.

Nell Choate Jones' painting "Square at St. Mary's," a color scene that shows a Black family in a southern square, next to a big tree. They could be in front of a church. Most of them are wearing white.
Augusta Oelschig's painting of a young Black boy in a gold frame. She shows him at bust length, looking slightly to his left. He wears a pale blue shirt with an open collar.
A watercolor painting of Savannah by Hattie Saussy. Seen from a park it shows several four- or five-story brick buildings across the way, partially blocked by greenery. The image is soft and pastel, more abstract in the foreground and more precise in the background.
@rasterweb@mastodon.social
2025-08-10 13:08:10

Trying to decide between going with a budget ebike around $1,000 or a better model from a local shop that would be double that price.
The cheaper one is definitely within my budget, while the other option is quite a bit more than I’d like to spend…
Hoping to commute to work, 7 miles each way, both could handle that easily…
#biking

@grumpybozo@toad.social
2025-07-05 16:41:34

Great news. Canada is now offering government services (commercial driver’s license tests) in Ojibwe/Anishinaabemowin.
It’s a crime and a tragedy that indigenous languages are in danger of dying and everything that can be done to fight that is for the better, especially by governments using them.

@zachleat@zachleat.com
2025-06-06 19:36:30

@… yeah, you’re right — the numbers are aria-hidden. I could modify the component to have the line content embedded in the `<ul>` (but not visible) and hide the primary element from screen readers — that would be better for `<pre>` maybe but not textarea!

@cowboys@darktundra.xyz
2025-08-06 18:05:59

Cowboys UDFA could be on former star's trajectory insidethestar.com/cowboys-udfa

The woman who could impede RFK Jr.’s anti-vaccine agenda
The public would be better-off with a serious person such as Susan Monarez at the CDC’s helm.

The Senate should confirm her, and fast.
Monarez, meanwhile, needs to recognize the burden she is accepting.
Being CDC director is not an easy job, even in less contentious times.
Having to report to Kennedy makes it incalculably harder.
Lives will depend on whether Monarez resists Kennedy’s efforts to …

@karlauerbach@sfba.social
2025-06-08 23:45:07

It is a week until the grand Kim Jong Un, oops I mean FFOTUS imperial parade.
(How could I conflate Kim Jong Un's parade with the FFOTUS one? - North Korea does a much better job of putting on mass tributes.)
Anyway, I do hope that DC Metro workers and Uber/Lyft drivers take a sick day off as many of the parade goers will be anti-vaxers with active, infectious Covid or Measles.
Don't forget that on the 14th you should arrive mid-day to your local Metro stop, park in…

@hiimmrdave@hachyderm.io
2025-08-14 11:31:07

It's really frustrating to be able to see a false assumption in software that you have to use, at work for example, but have no feedback channel. I can see why you thought that but if you'd just ask I could help you make it better!

@UP8@mastodon.social
2025-07-23 14:15:52

🍺 Rice could be key to brewing better non-alcoholic beer
arstechnica.com/science/2025/0

@arXiv_csHC_bot@mastoxiv.page
2025-06-13 07:44:00

Speculative Design in Spiraling Time: Methods and Indigenous HCI
James Eschrich, Cole McMullen, Sarah Sterman
arxiv.org/abs/2506.10229

@nelson@tech.lgbt
2025-06-27 04:15:36

Calamus 45 Full of life, sweet-blooded, compact, visible
A remarkably effective poem for the end of the cluster. Whitman talking directly to us, the reader, about the import of his poems. And with some ambition: "To one a century hence, or any number of centuries hence".
But even better, he's horny for us:
Now it is you ... seeking me,
Fancying how happy you were, if I could be with you, and become your lover
The poet is imagining us, his future readers, thinking about how we will want to be his lover. What a lusty man! Whitman is not modest.
I love it. And it's a fitting end to this series. I've greatly enjoyed reading them. Over the past 45 days I've learned better how to read Whitman, to understand his poems. And to relate to them in at least one simple way, teasing out the gayest and sexiest parts of these poems. Making them fun for myself.
I'm not quite done yet. I hope to identify my favorites of the group. I may also try my hand at reading one or two aloud.

@AimeeMaroux@mastodon.social
2025-05-28 20:52:35
Content warning:

For some reason I'm getting lots of clicks for this old #review I wrote about a piece of Hermes / Perseus #romance. It's a cute idea but the execution could be better as conflicts are introduced that are resolved 5 min later. Sometimes literally.

@tiotasram@kolektiva.social
2025-08-04 15:49:00

Should we teach vibe coding? Here's why not.
Should AI coding be taught in undergrad CS education?
1/2
I teach undergraduate computer science labs, including for intro and more-advanced core courses. I don't publish (non-negligible) scholarly work in the area, but I've got years of craft expertise in course design, and I do follow the academic literature to some degree. In other words, I'm not the world's leading expert, but I have spent a lot of time thinking about course design, and consider myself competent at it, with plenty of direct experience in what knowledge & skills I can expect from students as they move through the curriculum.
I'm also strongly against most uses of what's called "AI" these days (specifically, generative deep neural networks as supplied by our current cadre of techbros). There are a surprising number of completely orthogonal reasons to oppose the use of these systems, and a very limited number of reasonable exceptions (overcoming accessibility barriers is an example). On the grounds of environmental and digital-commons-pollution costs alone, using specifically the largest/newest models is unethical in most cases.
But as any good teacher should, I constantly question these evaluations, because I worry about the impact on my students should I eschew teaching relevant tech for bad reasons (and even for good reasons). I also want to make my reasoning clear to students, who should absolutely question me on this. That inspired me to ask a simple question: ignoring for one moment the ethical objections (which we shouldn't, of course; they're very stark), at what level in the CS major could I expect to teach a course about programming with AI assistance, and expect students to succeed at a more technically demanding final project than a course at the same level where students were banned from using AI? In other words, at what level would I expect students to actually benefit from AI coding "assistance?"
To be clear, I'm assuming that students aren't using AI in other aspects of coursework: the topic of using AI to "help you study" is a separate one (TL;DR its gross value is not negative, but it's mostly not worth the harm to your metacognitive abilities, which AI-induced changes to the digital commons are making more important than ever).
So what's my answer to this question?
If I'm being incredibly optimistic, senior year. Slightly less optimistic, second year of a masters program. Realistic? Maybe never.
The interesting bit for you-the-reader is: why is this my answer? (Especially given that students would probably self-report significant gains at lower levels.) To start with, [this paper where experienced developers thought that AI assistance sped up their work on real tasks when in fact it slowed it down] (arxiv.org/abs/2507.09089) is informative. There are a lot of differences in task between experienced devs solving real bugs and students working on a class project, but it's important to understand that we shouldn't have a baseline expectation that AI coding "assistants" will speed things up in the best of circumstances, and we shouldn't trust self-reports of productivity (or the AI hype machine in general).
Now we might imagine that coding assistants will be better at helping with a student project than at helping with fixing bugs in open-source software, since it's a much easier task. For many programming assignments that have a fixed answer, we know that many AI assistants can just spit out a solution based on prompting them with the problem description (there's another elephant in the room here to do with learning outcomes regardless of project success, but we'll ignore this one too; my focus here is on project complexity reach, not learning outcomes). My question is about more open-ended projects, not assignments with an expected answer. Here's a second study (by one of my colleagues) about novices using AI assistance for programming tasks. It showcases how difficult it is to use AI tools well, and some of the stumbling blocks that novices in particular face.
But what about intermediate students? Might there be some level where the AI is helpful because the task is still relatively simple and the students are good enough to handle it? The problem with this is that as task complexity increases, so does the likelihood of the AI generating (or copying) code that uses more complex constructs which a student doesn't understand. Let's say I have second year students writing interactive websites with JavaScript. Without a degree of care that those students don't yet know how to apply, the AI is likely to suggest code that depends on several different frameworks, from React to jQuery, without actually setting up or including those frameworks, and of course these students would be way out of their depth trying to do that. This is a general problem: each programming class carefully limits the specific code frameworks and constructs it expects students to know based on the material it covers. There is no feasible way to limit an AI assistant to a fixed set of constructs or frameworks, using current designs. There are alternate designs where this would be possible (like AI search through adaptation from a controlled library of snippets) but those would be entirely different tools.
So what happens on a sizeable class project where the AI has dropped in buggy code, especially if it uses code constructs the students don't understand? Best case, they understand that they don't understand and re-prompt, or quickly ask for help from an instructor or TA, who helps them get rid of the stuff they don't understand and re-prompt or manually add stuff they do. Average case: they waste several hours and/or sweep the bugs partly under the rug, resulting in a project with significant defects. Students in their second and even third years of a CS major still have a lot to learn about debugging, and usually have significant gaps in their knowledge of even their most comfortable programming language. I do think regardless of AI we as teachers need to get better at teaching debugging skills, but the knowledge gaps are inevitable because there's just too much to know. In Python, for example, the LLM is going to spit out yields, async functions, try/finally, maybe even something like a while/else, or with recent training data, the walrus operator. I can't expect even a fraction of 3rd year students who have worked with Python since their first year to know about all these things, and based on how students approach projects where they have studied all the relevant constructs but have forgotten some, I'm not optimistic that these things will magically become learning opportunities. Student projects are better off working with a limited subset of full programming languages that the students have actually learned, and using AI coding assistants as currently designed makes this impossible. Beyond that, even when the "assistant" just introduces bugs using syntax the students understand, even through their 4th year many students struggle to understand the operation of moderately complex code they've written themselves, let alone written by someone else. Having access to an AI that will confidently offer incorrect explanations for bugs will make this worse.
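To make that concrete, here's a hypothetical snippet of my own (not actual assistant output) that packs several of those constructs into a few lines of perfectly valid Python that a second-year student has likely never seen:
```python
import asyncio

async def read_nonblank(path, limit=10):
    """Async generator: yields up to `limit` non-blank lines from a file."""
    f = open(path)
    try:
        count = 0
        while (line := f.readline()):      # walrus operator in the loop condition
            if line.strip():
                yield line.strip()          # yield inside async def: an async generator
                count += 1
            if count >= limit:
                break
            await asyncio.sleep(0)          # hand control back to the event loop
        else:                               # while/else: runs only if the loop never breaks
            print("hit end of file before the limit")
    finally:                                # try/finally: the file closes either way
        f.close()
```
And it would have to be consumed with `async for line in read_nonblank("notes.txt"):` inside another coroutine, which is yet another construct to explain.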
To be sure a small minority of students will be able to overcome these problems, but that minority is the group that has a good grasp of the fundamentals and has broadened their knowledge through self-study, which earlier AI-reliant classes would make less likely to happen. In any case, I care about the average student, since we already have plenty of stuff about our institutions that makes life easier for a favored few while being worse for the average student (note that our construction of that favored few as the "good" students is a large part of this problem).
To summarize: because AI assistants introduce excess code complexity and difficult-to-debug bugs, they'll slow down rather than speed up project progress for the average student on moderately complex projects. On a fixed deadline, they'll result in worse projects, or necessitate less ambitious project scoping to ensure adequate completion, and I expect this remains broadly true through 4-6 years of study in most programs (don't take this as an endorsement of AI "assistants" for masters students; we've ignored a lot of other problems along the way).
There's a related problem: solving open-ended project assignments well ultimately depends on deeply understanding the problem, and AI "assistants" allow students to put a lot of code in their file without spending much time thinking about the problem or building an understanding of it. This is awful for learning outcomes, but also bad for project success. Getting students to see the value of thinking deeply about a problem is a thorny pedagogical puzzle at the best of times, and allowing the use of AI "assistants" makes the problem much much worse. This is another area I hope to see (or even drive) pedagogical improvement in, for what it's worth.
1/2

@mlawton@mstdn.social
2025-07-05 18:13:12

Very entertaining match. If Bayern could stay onside, they’d be through. It was a pretty level match, but PSG found better shots and that was the difference.
Kane showed up for an offside goal, but was otherwise invisible.
#ClubWorldCup #PSGBAY

@CondeChocula@social.linux.pizza
2025-06-08 17:43:43

I was bored and I made this. It could be better.
Everything is made with #inkscape except the logo 'The Legend of Zelda: Link's Awakening'.
#nintendo #linksawakening

A GameBoy cartridge from The Legend of Zelda: Link's Awakening DX made in Inkscape.
@simon_brooke@mastodon.scot
2025-07-25 13:18:49

Good for Mhairi Black!
“To be honest, I’m looking around thinking, ‘There are better organisations that I could be giving a membership to than this one that I don’t feel has been making the right decisions for quite some time.’”
Her reasons for leaving the #SNP now (Trans rights, Palestine) are at least as good as mine were (the Monarchy) in 2008. She's still a person who I'd vote…

@mgorny@social.treehouse.systems
2025-06-02 15:19:28

Me: if I go through Września, I'll be 5 minutes earlier in Poznań, and I'll have a better chance of catching a transfer. But I'd have to run to catch the train to Września.
Me a minute later: the Września – Poznań train is delayed. No point in running, let's just go straight to Poznań.
Me at Poznań Wschód station: oh, the delayed train from Września goes straight to Leszno, so it is my transfer.
Fortunately, our train went first, so I could easily transfer at the main station.

@NFL@darktundra.xyz
2025-08-06 18:06:34

Travis Kelce not tipping retirement decision in Chiefs training camp nytimes.com/athletic/6538506/2

@tiotasram@kolektiva.social
2025-07-06 12:45:11

So I've found my answer after maybe ~30 minutes of effort. First stop was the first search result on Startpage (millennialhawk.com/does-poop-h), which has some evidence of maybe-AI authorship but which is better than a lot of slop. It actually has real links & cites research, so I'll start by looking at the sources.
It claims near the top that poop contains 4.91 kcal per gram (note: 1 kcal = 1 Calorie = 1000 calories, which fact I could find/do trust despite the slop in that search). Now obviously, without a range or mention of an average, this isn't the whole picture, but maybe it's an average to start from? However, the citation link is to a study (pubmed.ncbi.nlm.nih.gov/322359) which only included 27 people with impaired glucose tolerance and obesity. Might have the cited stat, but it's definitely not a broadly representative one if this is the source. The public abstract does not include the stat cited, and I don't want to pay for the article. I happen to be affiliated with a university library, so I could see if I have access that way, but it's a pain to do and not worth it for this study that I know is too specific. Also most people wouldn't have access that way.
Side note: this doing-the-research project has the nice benefit of letting you see lots of cool stuff you wouldn't have otherwise. The abstract of this study is pretty cool and I learned a bit about gut microbiome changes from just reading the abstract.
My next move was to look among citations in this article to see if I could find something about calorie content of poop specifically. Luckily the article page had indicators for which citations were free to access. I ended up reading/skimming 2 more articles (a few more interesting facts about gut microbiomes were learned) before finding this article whose introduction has what I'm looking for: pmc.ncbi.nlm.nih.gov/articles/
Here's the relevant paragraph:
"""
The alteration of the energy-balance equation, which is defined by the equilibrium of energy intake and energy expenditure (1–5), leads to weight gain. One less-extensively-studied component of the energy-balance equation is energy loss in stools and urine. Previous studies of healthy adults showed that ≈5% of ingested calories were lost in stools and urine (6). Individuals who consume high-fiber diets exhibit a higher fecal energy loss than individuals who consume low-fiber diets with an equivalent energy content (7, 8). Webb and Annis (9) studied stool energy loss in 4 lean and 4 obese individuals and showed a tendency to lower the fecal energy excretion in obese compared with lean study participants.
"""
And there's a good-enough answer if we do some math, along with links to more in-depth reading if we want them. A Mayo clinic calorie calculator suggests about 2250 Calories per day for me to maintain my weight. There's probably a lot of variation in that number, but 5% of it would be very roughly 100 Calories lost in poop per day, so an extremely rough estimate for a range of humans might be 50-200 Calories per day. Interestingly, one of the AI slop pages I found asserted (without citation) 100-200 Calories per day, which kinda checks out. I had no way to trust that number though, and as we saw with the 4.91 kcal/gram figure, its provenance might not be good.
To double-check, I visited this link from the paragraph above: sciencedirect.com/science/arti
It's only a 6-person study, but just the abstract has numbers: ~250 kcal/day pooped on a low-fiber diet vs. ~400 kcal/day pooped on a high-fiber diet. That's with intakes of ~2100 and ~2350 kcal respectively, which is close to the number from which I estimated 100 kcal above, so maybe the first estimate from just the 5% number was a bit low.
Glad those numbers were in the abstract, since the full text is paywalled... It's possible this study was also done on some atypical patient group...
Just to come full circle, let's look at that 4.91 kcal/gram number again. A search suggests 14-16 ounces of poop per day is typical, with at least two sources around 14 ounces, or ~400 grams. (AI slop was strong here too, with one including a completely made up table of "studies" that was summarized as 100-200 grams/day). If we believe 400 grams/day of poop, then 4.91 kcal/gram would be almost 2000 kcal/day, which is very clearly ludicrous! So that number was likely some unrelated statistic regurgitated by the AI. I found that number in at least 3 of the slop pages I waded through in my initial search.
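For the curious, here's the arithmetic above in one place (all numbers copied straight from the estimates and sources mentioned, nothing new):
```python
# Sanity-checking the figures quoted above.
maintenance_kcal = 2250     # Mayo-style daily maintenance estimate
fecal_fraction = 0.05       # "≈5% of ingested calories were lost in stools and urine"
print(maintenance_kcal * fecal_fraction)     # ≈ 112 kcal/day, the "very roughly 100"

stool_grams = 400           # ~14 ounces/day from the typical-poop-mass search
slop_kcal_per_gram = 4.91   # the uncited number from the AI-flavored page
print(stool_grams * slop_kcal_per_gram)      # ≈ 1964 kcal/day: clearly ludicrous
```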

@karlauerbach@sfba.social
2025-08-06 17:29:30

We are a stupid species.... Here is my county government trying to create a Rube Goldberg-class blockchain-based system to deal with a problem that could be solved by printed lists on paper.
The crypto/blockchain mindset certainly contains a big element of "if all you have is a hammer then everything looks like a nail".
(By-the-way, our county government is dumb in other dimensions - for years the emergency response command center was in a basement next to, and *below…

@inthehands@hachyderm.io
2025-07-04 03:17:24

Totoro spoilers
Re the quoted post from @…:
The very first time I saw My Neighbor Totoro, it was because a friend cajoled me into attending the student anime club’s showing with absolutely no context whatsoever except “you •have• to see this movie, Paul.” I had no idea what genre it was. I had no idea that it was a movie considered suitable for kids. The last anime I’d seen was IIRC Ghost in the Shell; I was ready for anything.
When Mei went missing, I thought, “omg, is this a tragedy? I think this story is a tragedy!” The whole time they were looking for her, I was absolutely terrified. I thought for sure they’d found her sandal. I still tear up when they find her now, on every rewatching.
I’m so glad for that first viewing. It’s a much better movie that way. When you’re expecting an innocent movie about cute forest plushies, you see that. But when you see that it could be a tragedy — the mother’s shadowy illness, the lost child — it hits hard. wandering.shop/@Violinknitter/

@mlawton@mstdn.social
2025-07-05 16:55:49

Kompany really has Bayern playing well, much better than I expected if I’m honest. They’re giving PSG all they can handle and could be ahead, if not for a correctly judged offside decision.
Entertaining match so far and probably even. I really enjoy watching Kvaratshkelia play. Such pace and creativity, combined with the willingness to track back. Rare breed. He just about scored on a blisteringly direct attack. Neuer made a couple of fine saves.

@tiotasram@kolektiva.social
2025-08-04 15:49:39

Should we teach vibe coding? Here's why not.
2/2
To address the bigger question I started with ("should we teach AI-'assisted' coding?"), my answer is: "No, except enough to show students directly what its pitfalls are." We have little enough time as it is to cover the core knowledge that they'll need, which has become more urgent now that they're going to be expected to clean up AI bugs and they'll have less time to develop an understanding of the problems they're supposed to be solving. The skill of prompt engineering & other skills of working with AI are relatively easy to pick up on your own, given a decent not-even-mathematical understanding of how a neural network works, which is something we should be giving to all students, not just our majors.
Reasonable learning objectives for CS majors might include explaining what types of bugs an AI "assistant" is most likely to introduce, explaining the difference between software engineering and writing code, explaining why using an AI "assistant" is likely to violate open-source licenses, listing at least three independent ethical objections to contemporary LLMs and explaining the evidence for/reasoning behind them, explaining why we should expect AI "assistants" to be better at generating code from scratch than at fixing bugs in existing code (and why they'll confidently "claim" to have fixed problems they haven't), and even fixing bugs in AI generated code (without AI "assistance").
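As a toy illustration of that last objective, here's the kind of exercise I have in mind (the "assistant-written" version is hypothetical, invented here for the example):
```python
# Exercise: the "assistant" claims this deduplicates while preserving order.
# Find and fix the bug without AI help.
def dedupe(items, seen=[]):        # BUG: the default list is shared across calls
    out = []
    for x in items:
        if x not in seen:
            seen.append(x)
            out.append(x)
    return out

# Fixed version: create the tracking set per call.
def dedupe_fixed(items):
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

print(dedupe([1, 2, 1]), dedupe([2, 3]))             # second call wrongly drops the 2
print(dedupe_fixed([1, 2, 1]), dedupe_fixed([2, 3]))  # [1, 2] [2, 3]
```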
If we lived in a world where the underlying environmental, labor, and data commons issues with AI weren't as bad, or if we could find and use systems that effectively mitigate these issues (there's lots of piecemeal progress on several of these) then we should probably start teaching an elective on coding with an assistant to students who have mastered programming basics, but such a class should probably spend a good chunk of time on non-assisted debugging.
#AI #LLMs #VibeCoding

@chris@mstdn.chrisalemany.ca
2025-06-12 17:32:42

Jagmeet Singh deserved a lot better than he got from Canada considering what he endured.
"Singh told reporters in April that police had advised him in the winter of 2023 that his life could be in danger. They did not tell him who was behind the threat but he said the implication was that it was a foreign government.
He said he stayed in his basement, avoided windows and considered quitting politics over fears about his family’s safety. He decided to carry on but was forced to lead the NDP for a period under police protection.”
I hope Carney brings this up with Modi next week, but now with the tragedy of the air crash…
#canPoli #CdnPoli #elxn45 #NDP #India #Modi #G7
globalnews.ca/news/11229198/ja

@NFL@darktundra.xyz
2025-07-24 10:41:13

How an ESPN-NFL deal could change how we watch football, plus McAfee's apology nytimes.com/athletic/6510910/2

@mlawton@mstdn.social
2025-07-22 21:47:06

What should worry England is that, while "better"*** for most of the game, their starters cannot score. Their subs, Kelly and Agyemang carried them. Again.
And with another PK opportunity (on a sooooooofffftttt foul), they failed to convert again, initially anyway.
I assume they'll meet Spain in the final and that could be rough if they don't get right in a hurry.
*** "better" is tough to gauge when one team scores and bunkers, but...

@unchartedworlds@scicomm.xyz
2025-05-25 10:43:29

Cycling question: trying out saddles, in the UK
UK cycling people, is there somewhere you'd go to sit on different saddles to test if they're comfortable? Is that a thing?
I've worked out that my (default came-with-the-bike) saddle isn't the right shape for me: it's giving me an achy tailbone, as well as I think being a bit too narrow for optimal sit-bone comfort.
For context, I'm an "occasional cyclist for pleasure and/or practical reasons", shall we say. No ambition to be super fast.
Looking around online, I think I want something more like the Rido R2 or one of the Selle ones, shaped to have air under the tailbone area. Or maybe even a noseless one like the Spongy Wonder, though I don't like the look of how the metal frame sticks out at the front of those.
What's the chances a shop would have more than one of those and a willingness to get them out for a test sit? Or, better still, is there a loan scheme anywhere, so you can actually "test drive" them for a bit? Or do people usually just buy and be willing to sell again?
I'm in Nottingham, and I know there are bike shops I could get to, but I'm not seeing "come in and try all these saddles, we'll help you to find the right one" kinds of messaging.
Could also potentially travel elsewhere at some point if it turns out there's some kind of "best place in the country for that question".
Advice welcome!
#cycling #BikeTooter #AskFedi #UK

@tiotasram@kolektiva.social
2025-07-31 16:25:48

LLM coding is the opposite of DRY
An important principle in software engineering is DRY: Don't Repeat Yourself. We recognize that having the same code copied in more than one place is bad for several reasons:
1. It makes the entire codebase harder to read.
2. It increases maintenance burden, since any problems in the duplicated code need to be solved in more than one place.
3. Because it becomes possible for the copies to drift apart if changes to one aren't transferred to the other (maybe the person making the change has forgotten there was a copy), it makes the code more error-prone and harder to debug.
All modern programming languages make it almost entirely unnecessary to repeat code: we can move the repeated code into a "function" or "module" and then reference it from all the different places it's needed. At a larger scale, someone might write an open-source "library" of such functions or modules and instead of re-implementing that functionality ourselves, we can use their code, with an acknowledgement. Using another person's library this way is complicated, because now you're dependent on them: if they stop maintaining it or introduce bugs, you've inherited a problem, but still, you could always copy their project and maintain your own version, and it would be not much more work than if you had implemented stuff yourself from the start. It's a little more complicated than this, but the basic principle holds, and it's a foundational one for software development in general and the open-source movement in particular. The network of "citations" as open-source software builds on other open-source software and people contribute patches to each others' projects is a lot of what makes the movement into a community, and it can lead to collaborations that drive further development. So the DRY principle is important at both small and large scales.
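To make the principle concrete at the smallest scale, a tiny before/after sketch (names are made up for illustration; read the two halves as before/after versions of the same module):
```python
# Before: the same normalization copied into two call sites.
def register(email):
    email = email.strip().lower()
    if "@" not in email:
        raise ValueError("invalid email")
    ...

def invite(email):
    email = email.strip().lower()
    if "@" not in email:
        raise ValueError("invalid email")
    ...

# After: one shared function, referenced from both places,
# so any fix to the validation only ever has to happen once.
def normalize_email(email):
    email = email.strip().lower()
    if "@" not in email:
        raise ValueError("invalid email")
    return email

def register(email):
    email = normalize_email(email)
    ...

def invite(email):
    email = normalize_email(email)
    ...
```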
Unfortunately, the current crop of hyped-up LLM coding systems from the big players are antithetical to DRY at all scales:
- At the library scale, they train on open source software but then (with some unknown frequency) replicate parts of it line-for-line *without* any citation [1]. The person who was using the LLM has no way of knowing that this happened, or even any way to check for it. In theory the LLM company could build a system for this, but it's not likely to be profitable unless the courts actually start punishing these license violations, which doesn't seem likely based on results so far and the difficulty of finding out that the violations are happening. By creating these copies (and also mash-ups, along with lots of less-problematic stuff), the LLM users (enabled and encouraged by the LLM-peddlers) are directly undermining the DRY principle. If we see what the big AI companies claim to want, which is a massive shift towards machine-authored code, DRY at the library scale will effectively be dead, with each new project simply re-implementing the functionality it needs instead of ever using a library. This might seem to have some upside, since dependency hell is a thing, but the downside in terms of comprehensibility and therefore maintainability, correctness, and security will be massive. The eventual lack of new high-quality DRY-respecting code to train the models on will only make this problem worse.
- At the module & function level, AI is probably prone to re-writing rather than re-using the functions it needs, especially with a workflow where a human prompts it for many independent completions. This part I don't have direct evidence for, since I don't use LLM coding models myself except in very specific circumstances because it's not generally ethical to do so. I do know that when it tries to call existing functions, it often guesses incorrectly about the parameters they need, which I'm sure is a headache and source of bugs for the vibe coders out there. An AI could be designed to take more context into account and use existing lookup tools to get accurate function signatures and use them when generating function calls, but even though that would probably significantly improve output quality, I suspect it's the kind of thing that would be seen as too-baroque and thus not a priority. Would love to hear I'm wrong about any of this, but I suspect the consequences are that any medium-or-larger sized codebase written with LLM tools will have significant bloat from duplicate functionality, and will have places where better use of existing libraries would have made the code simpler. At a fundamental level, a principle like DRY is not something that current LLM training techniques are able to learn, and while they can imitate it from their training sets to some degree when asked for large amounts of code, when prompted for many smaller chunks, they're asymptotically likely to violate it.
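For what it's worth, the lookup tools in question already exist outside of any AI; in Python, for example, the standard-library `inspect` module can check a proposed call against a function's real signature (a sketch of the idea, not a description of any actual assistant):
```python
import inspect
import json

sig = inspect.signature(json.dumps)
print(sig)  # the real parameters, e.g. (obj, *, skipkeys=False, ensure_ascii=True, ...)

# Binding proposed arguments fails loudly if they don't fit the signature,
# e.g. a keyword an assistant might plausibly guess wrong:
try:
    sig.bind({"a": 1}, indnet=2)   # typo'd 'indent'
except TypeError as err:
    print("rejected:", err)
```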
I think this is an important critique in part because it cuts against the argument that "LLMs are the modern compilers, if you reject them you're just like the people who wanted to keep hand-writing assembly code, and you'll be just as obsolete." Compilers actually represented a great win for abstraction, encapsulation, and DRY in general, and they supported and are integral to open source development, whereas LLMs are set to do the opposite.
[1] to see what this looks like in action in prose, see the example on page 30 of the NYTimes copyright complaint against OpenAI (#AI #GenAI #LLMs #VibeCoding