Tootfinder

Opt-in global Mastodon full text search. Join the index!

@mxp@mastodon.acm.org
2025-08-19 16:40:40

“‘Good enough’ has been keeping me up at night. Because good enough would likely mean that not enough people recognize what’s really being built—and what’s being sacrificed—until it’s too late. […] What scares me the most about this scenario is that it’s the only one that doesn’t sound all that insane.”

@azonenberg@ioc.exchange
2025-06-20 02:46:28

I'm pretty happy with the top side Samtec ARF6 launch simulations now.
S11:
* Red: No cutout, -12 dB at 10 GHz, -7 at 20 GHz, -4.8 at 30 GHz
* Blue: Rectangular ground cutout the size of the pad, -22.5 at 10 GHz, -16.4 at 20 GHz, -11.7 at 30 GHz
* Green: 30um larger than the pad on all sides (1.06 x 0.41 mm), -30.5 at 10 GHz, -22 at 20, -15 at 30.
With the oversized cutout it's more than good enough for QSGMII and 25Gbase-R.
TDR:
* Red: No cutou…
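For readers less used to S-parameters, the S11 figures above translate directly into reflected power: the fraction of incident power reflected at the launch is 10^(S11/10). A quick sketch of the conversion (function and labels are mine; the dB values at 10 GHz are from the post):

```python
def reflected_power_pct(s11_db: float) -> float:
    """Fraction of incident power reflected, as a percentage, for an S11 value in dB."""
    return 100 * 10 ** (s11_db / 10)

# The three cutout variants at 10 GHz:
for label, s11 in [("no cutout", -12.0),
                   ("pad-sized cutout", -22.5),
                   ("oversized cutout", -30.5)]:
    print(f"{label}: {reflected_power_pct(s11):.2f}% of power reflected")
```

So going from no cutout to the oversized cutout takes the launch from roughly 6% reflected power down to well under 0.1% at 10 GHz, which is why it clears the bar for QSGMII and 25Gbase-R.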

Sonnet EM solver S11 plot (see post text for discussion of data)
ngscopeclient TDR transform of the EM simulations
2D geometry view showing the oversized cutout under the connector launch
3D current density view showing return currents in the ground plane flowing around the cutouts
@arXiv_csPL_bot@mastoxiv.page
2025-09-18 08:30:11

Catalpa: GC for a Low-Variance Software Stack
Anthony Arnold, Mark Marron
arxiv.org/abs/2509.13429 arxiv.org/pdf/2509.13429

@crell@phpc.social
2025-07-17 18:31:29

It's 2025. If your library still doesn't have types in it (parameter, return, and property), I assume it's abandoned and I should not use it.
There are no exceptions to this statement. Not typing your PHP code in 2025 is irresponsible. No, docblocks are not good enough.
#PHP

@thomasfuchs@hachyderm.io
2025-08-15 13:39:32

Meeting at Google: “We haven’t fucked up the Web enough with our AMP shit, but the AI shit is doing pretty good work. What about if we also fuck up RSS?”
What an utterly shit company. I hope the AI hype destroys them.
(Just in case you think they're "just asking questions", they already opened a removal tracker before listening to replies: issues.chromium.org/issues/435)
github.com/whatwg/html/issues/

@markhburton@mstdn.social
2025-09-14 18:52:09

'Building “alternative” energy infrastructure isn’t enough. To avert climate disaster, fossil fuels need to be restricted, and energy consumption overall needs to fall.'
Good to see this in a major publication.
Dirty Lies About Clean Energy
currentaffairs.org/news/dirty-

@luana@wetdry.world
2025-08-15 17:03:54

Honestly we should invent a system of notifications that works well but isn’t invasive enough to make you unwillingly open a notification that arrives while your finger is just traveling to the screen to press a button at the top of an app.

@cheryanne@aus.social
2025-07-12 12:19:09

I've had a fairly unproductive day. Around 2 hours sleep (finally) in the wee hours this morning. Not really enough to equip me to face the day. Did a little work of my own. Did some minor tech support for a friend. She stayed most of the day and we ended up binge watching all 10 episodes of Murderbot, eating snacks, and drinking a couple of bottles of wine. I'm hoping that will be enough to enable me to sleep well tonight. Fingers crossed. Good night all. 🥱

@mariyadelano@hachyderm.io
2025-09-12 20:16:16

This AI complaint is brought to you today by this ridiculous poster at Manhattan’s Guitar Center
“Make your dream tone a reality”, my ass.
Ever notice how 90% of AI marketing copy is literally just vague platitudes because they can’t actually think of any legitimate benefits or use cases?
“Be anything you imagine” = “we can’t imagine anything good enough to say here so you do the thinking for us”

@NFL@darktundra.xyz
2025-08-23 06:24:25

Johnson slams Bears' offense: 'Not good enough' espn.com/nfl/story/_/id/460548

@cowboys@darktundra.xyz
2025-09-02 17:25:48

Do the Cowboys have enough to replace sack production? insidethestar.com/do-the-cowbo

@tiotasram@kolektiva.social
2025-07-04 20:14:31

Long; central Massachusetts colonial history
Today on a whim I visited a site in Massachusetts marked as "Huguenot Fort Ruins" on OpenStreetMaps. I drove out with my 4-year-old through increasingly rural central Massachusetts forests & fields to end up on a narrow street near the top of a hill beside a small field. The neighboring houses had huge lawns, some with tractors.
Appropriately for this day and this moment in history, the history of the site turns out to be a microcosm of America. Across the field beyond a cross-shaped stone memorial stood an info board with a few diagrams and some text. The text of the main sign (including typos/misspellings) read:
"""
Town Is Formed
Early in the 1680's, interest began to generate to develop a town in the area west of Natick in the south central part of the Commonwealth that would be suitable for a settlement. A Mr. Hugh Campbell, a Scotch merchant of Boston petitioned the court for land for a colony. At about the same time, Joseph Dudley and William Stoughton also were desirous of obtaining land for a settlement. A claim was made for all lands west of the Blackstone River to the southern land of Massachusetts to a point northerly of the Springfield Road then running southwesterly until it joined the southern line of Massachusetts.
Associated with Dudley and Stoughton was Robert Thompson of London, England, Dr. Daniel Cox and John Blackwell, both of London and Thomas Freak of Hannington, Wiltshire, as proprietors. A stipulation in the acquisition of this land being that within four years thirty families and an orthodox minister settle in the area. An extension of this stipulation was granted at the end of the four years when no group large enough seemed to be willing to take up the opportunity.
In 1686, Robert Thompson met Gabriel Bernor and learned that he was seeking an area where his countrymen, who had fled their native France because of the Edict of Nantes, were desirous of a place to live. Their main concern was to settle in a place that would allow them freedom of worship. New Oxford, as it was the so-named, at that time included the larger part of Charlton, one-fourth of Auburn, one-fifth of Dudley and several square miles of the northeast portion of Southbridge as well as the easterly ares now known as Webster.
Joseph Dudley's assessment that the area was capable of a good settlement probably was based on the idea of the meadows already established along with the plains, ponds, brooks and rivers. Meadows were a necessity as they provided hay for animal feed and other uses by the settlers. The French River tributary books and streams provided a good source for fishing and hunting. There were open areas on the plains as customarily in November of each year, the Indians burnt over areas to keep them free of underwood and brush. It appeared then that this area was ready for settling.
The first seventy-five years of the settling of the Town of Oxford originally known as Manchaug, embraced three different cultures. The Indians were known to be here about 1656 when the Missionary, John Eliott and his partner Daniel Gookin visited in the praying towns. Thirty years later, in 1686, the Huguenots walked here from Boston under the guidance of their leader Isaac Bertrand DuTuffeau. The Huguenot's that arrived were not peasants, but were acknowledged to be the best Agriculturist, Wine Growers, Merchant's, and Manufacter's in France. There were 30 families consisting of 52 people. At the time of their first departure (10 years), due to Indian insurrection, there were 80 people in the group, and near their Meetinghouse/Church was a Cemetery that held 20 bodies. In 1699, 8 to 10 familie's made a second attempt to re-settle, failing after only four years, with the village being completely abandoned in 1704.
The English colonist made their way here in 1713 and established what has become a permanent settlement.
"""
All that was left of the fort was a crumbling stone wall that would have been the base of a higher wooden wall according to a picture of a model (I didn't think to get a shot of that myself). Only trees and brush remain where the multi-story main wooden building was.
This story has so many echoes in the present:
- The rich colonialists from Boston & London agree to settle the land, buying/taking land "rights" from the colonial British court that claimed jurisdiction without actually having control of the land. Whether the sponsors ever actually visited the land themselves I don't know. They surely profited somehow, whether from selling on the land rights later or collecting taxes/rent or whatever, but they needed poor laborers to actually do the work of developing the land (& driving out the original inhabitants, who had no say in the machinations of the Boston court).
- The land deal was on condition that the capital-holders who stood to profit would find settlers to actually do the work of colonizing. The British crown wanted more territory to be controlled in practice not just in theory, but they weren't going to be the ones to do the hard work.
- The capital-holders actually failed to find enough poor suckers to do their dirty work for 4 years, until the Huguenots, fleeing religious persecution in France, were desperate enough to accept their terms.
- Of course, the land was only so ripe for settlement because of careful tending over centuries by the natives who were eventually driven off, and whose land management practices are abandoned today. Given the mention of praying towns (& dates), this was after King Philip's War, which resulted in at least some forced resettlement of native tribes around the area, but the descendants of those "Indians" mentioned in this sign are still around. For example, this is the site of one local band of Nipmuck, whose namesake lake is about 5 miles south of the fort site: #LandBack.

@mgorny@social.treehouse.systems
2025-08-10 18:33:53

"""
Once the distinctions had been made, and the first punishments applied, the venereal were accepted into the hospital. And they were crammed inside. In 1781, 138 men occupied 60 beds in the Saint-Eustache quarter of Bicêtre, and in the Miséricorde in the Salpêtrière there were 125 beds for 224 women. Patients in the terminal stages of the disease were simply left to die. 'Grand Remedies' were applied to the others: never more, and rarely less than six weeks of care, starting of course with blood-letting and purging, then a week of baths for two hours per day, then purging again, followed by a full and complete confession to bring this first part of the treatment to a close. Rubbing with mercury could then begin, with all its efficacy. Each course of treatment lasted one month, and was followed by two more purges and one final bleeding to chase out the remaining morbific humours. Fifteen days of convalescence were then granted. After he had definitively made his peace with God, the patient was declared cured and sent away.
This 'therapeutic' demonstrates a rich tapestry of fantasy, and above all a profound complicity between medicine and morality, which give their full meaning to these purification practices. For the classical age, venereal disease was less a sickness than an impurity to which physical symptoms are correlated. Accordingly, medical perception is ruled by ethical perception, and on occasion even effaced by it. The body must be treated to remove the contagion, but the flesh must be punished, for it is the flesh that attaches us to sin. Mere corporal punishment was not enough: the flesh was to be pummelled and bruised, and leaving painful traces was not to be feared, as good health, all too frequently, transformed the human body into another opportunity for sinful conduct. The sickness was to be treated, but the good health that could lead to temptation was to be destroyed.
"""
(Michel Foucault, History of Madness)

@catsalad@infosec.exchange
2025-09-08 21:01:32

Aww... It's an itty bitty pillow!

Photo of someone holding a tiny spicy pillow (lithium ion battery that has swollen and will explode soon). The battery from a USB device, which is small enough to be held with one finger, has a red and black wire on one side with gold colored tape and a tiny bit of white smoke coming out. Well that's not good...
@luana@wetdry.world
2025-08-15 12:58:06

Is there like a zigbee (or anything that works with home-assistant) device that can connect/disconnect a HDMI cable (4k, etc)? Even better if it’s also a HDMI switch, but just being able to disable the connection without physically removing the cable when my TV is off would be good enough

@pgcd@mastodon.online
2025-08-11 14:28:41

I'm currently hitting a huge impostor syndrome wall-cum-quicksands state of mind.
I don't want to talk about it with friends because "no you're actually good" is something I tell myself already and I don't trust myself about it, let alone non-mes saying it.
I have watched videos and read articles and it's not enough right now.
What do I do?

@PwnieFan@infosec.exchange
2025-08-13 16:05:48

I kinda hate marketing . . . but I enjoy supporting talented artists. I worked with two artists to get some promotional art for 'Liar, Cheater, Sinner, Saint.' Here's the art from @gargoyle.pastures that inspired me to throw away the cover I had already bought and use one of these panels instead. Links to the full panels are up at

Dialogue Tiny Clint: Dan. Dan: Tiny. Tiny: You know what my orders are? Dan: Take me to the revival of Cats showing tonight? Tiny: I like you, Dan Mackenzie. You could make this easier on yourself. Dan: (silent) Tiny: It’s a good offer. A lucrative offer. Dan: Assuming I make it. Tiny: There is that. Dan: I can’t do this job. I know how I survive; I’m a cockroach. Tiny: Is that some sort of Kafka reference? Dan: I survive because no one cares enough to squash me. If I take this job, I’ll need pr…
@teledyn@mstdn.ca
2025-08-04 04:46:31

This must be the ultimate #monsterdon
- monster? Check
- nuclear terror? Check
- hip soundtrack? Check
- unbearably bad acting? Check
- unbearably bad writing? Check
- unbearably bad voice-over? Check
- unbearably bad editing? Check
- long sequences of irrelevant stock footage? Check
- bizarre plot twist you didn't expect? Check
- solid inspiration to any aspiring film maker who thinks they aren't good enough or have budget enough or skills enough to gain eternal global distribution? Check
What more could you want?
"Monster a Go-Go" (Herschell Gordon Lewis, 1965) - FULL MOVIE
youtube.com/watch?v=btJoXBIv2S

@kexpmusicbot@mastodonapp.uk
2025-06-30 22:19:49

🇺🇦 #NowPlaying on KEXP's #AfternoonShow
Mudhoney:
🎵 Good Enough
#Mudhoney
mudhoney.bandcamp.com/track/go
open.spotify.com/track/2wcpa5P

@padraig@mastodon.ie
2025-08-09 21:47:32

Yeah, that's not good :/
Seems like the blocking threads/meta was not enough.
#mastodon #fediverse #fediverse

leaked list from Dropsitenews showing that cdn.masto.host is being scraped by Meta
@simon_brooke@mastodon.scot
2025-09-06 07:31:27

Having designed a good-enough CAD model of the subframe for my #tricycle as a welded aluminium structure, I'm now thinking that it might be better constructed as carbon fibre laid up over an XPS polystyrene armature.
The benefits are that I can do composites myself, whereas I can't weld aluminium; and that the composite would probably be both stronger and lighter. The downside is th…

Another illustration of the aluminium framed subframe. Both sprockets on the epicyclic are now shown, but the epicyclic still floats in space with no visible means of support. The secondary chain sprocket on the road wheel is also shown. The chainring is now a ring and not a solid wheel, making it easier to see that's behind it. The chainring now lines up correctly with the primary sprocket on the epicyclic, and the secondary sprocket lines up with the sprocket on the road wheel: there is room …
@Erikmitk@mastodon.gamedev.place
2025-07-11 06:33:51

„She is excited to go home and keep dreaming her American Microdream (a new trend where Gen Z workers, instead of using their salaries to buy homes and support a family, hope to someday pay off their student loans.) She sighs. The Microdeath cannot come soon enough.“
This piece is very good!
„Are you sure these are new workplace trends? Are you sure you aren’t just describing a routine phenomenon in an alarmed way?“

@drbruced@aus.social
2025-09-09 14:18:22

@… well, they call it a coffee with milk, literally translated, but it sure resembles a flat white, including the size and shape of cup. I was told not to expect great coffee here but it seems that any town with enough tourists eventually wakes up to the market for good coffee

@blakes7bot@mas.torpidity.net
2025-09-08 18:12:10

Series A, Episode 02 - Space Fall
BLAKE: Make it good Vila.
VILA: Gan?...
[Vila and Gan head towards the guard]
BLAKE: We'll be ready in exactly fifteen minutes. Will that give you enough time?
[Avon nods.]
blake.torpidity.net/m/102/239 B7B5

@nelson@tech.lgbt
2025-07-06 02:13:14

Feeling wistful for an old lover, someone I didn't appreciate enough how good he was for me.

@hex@kolektiva.social
2025-08-07 00:24:12

There was once a machine that told you "you want this" and "this is good." It said, "there can be no better system and it's foolish to try to build one." That machine has long since failed to function. Now you choke on fumes as it is consumed by the wild flames of an abandoned cause.
That machine could not possibly work anymore because the evidence of its falsehood has become too overwhelming.
No, only abject terror now can keep you from plotting your escape, from creating an alternative. No, the illusion has long since broken. All that's left now is triggering fight, flight, freeze as hard as possible. Most will be paralyzed, and those who fight can be used as an excuse to escalate the terror.
These are the final stages of a dying sun, expanding and consuming its children before the final supernova.
There is no longer a stable system, no longer a system with a future. All that remains is the spectacle that hopes to distract you long enough that you too can be consumed, that it may sustain itself a few moments longer.

@chris@mstdn.chrisalemany.ca
2025-08-14 21:16:59

Water off. Level is not 100% full but close enough that the pump can just run. We are expecting rain tonight that might be enough to completely fill it.
No leaks detected.
I’ll leave the pump at 100%/180W for a couple hours just to give it a good run. Then I will turn down to minimum tonight.
#poolpond #diy #portalberni #backyardproject

@jswright61@ruby.social
2025-09-07 10:41:46

That feeling when you try to fav a toot and you can’t because it's been deleted.
Wait what? it was good enough for a fav, why would you delete it?

@compfu@mograph.social
2025-07-06 18:34:54

Really cool video about why the video games industry is struggling: everybody has to compete with addictive social media for eyeballs and time. And unless whole new markets are opened up (humans are not born quickly enough) there's just no longer a way to create exponential growth. But billionaire investors need that. That's why they are rather investing in AI.
By the way, this is the same reason that cinemas have gotten in trouble (and now even streaming services...)

@losttourist@social.chatty.monster
2025-07-03 09:28:25

Fedi meta-musings.
Just went to look at a Mastodon account I interacted with a little while back. Their follow requests require approval (which is fair enough) and the bio states
Got a blank or nonsensical avatar, no visible activity, no pointers to your identity? I'll ignore your follow request.
Well I guess that's me out. I have a good reason for wanting to be pseudonymous, as do many others here I imagine.
Of course it's every user's right to set whatever conditions they want on who follows them, but a blanket refusal on anyone not featuring a "real name" and a human-appearing avatar feels quite over-sensitive to me.
#fediverse #mastodon

@mariyadelano@hachyderm.io
2025-08-06 17:45:03

I really need to find a way to work with more ethical companies as clients.
Challenges: there are fewer of them out there than unethical ones, they are hard to distinguish from ones who just pretend to do good, and often they don't have enough money to afford our services.
But I don't think that's worth giving up on... I want to find a way to get paid enough to afford my life and the business while also helping organizations and people I believe in.

@tiotasram@kolektiva.social
2025-07-10 13:31:32

"As we approach the coming jobs cliff, we're entering a period where a college isn't going to be worth it for the majority of people, since AI will take over most white-collar jobs. Combined with the demographic cliff, the entire higher education system will crumble."
This is the kind of statement you don't hear that much from sub-CEO-level #AI boosters, because it's awkward for them to admit that the tech they think is improving their life is going to be disastrous for society. Or if they do admit this, they spin it like it's a good thing (don't get me wrong, tuition is ludicrously high and higher education absolutely could be improved by a wholesale reinvention, but the potential AI-fueled collapse won't be an improvement).
I'm in the "anti-AI" crowd myself, and I think the current tech is in a hype bubble that will collapse before we see wholesale replacement of white-collar jobs, with a re-hiring to come that will somewhat make up for the current decimation. There will still be a lot of fallout for higher ed (and hopefully some productive transformation), but it might not be apocalyptic.
Fun question to ask the next person who extols the virtues of using generative AI for their job: "So how long until your boss can fire you and use the AI themselves?"
The following ideas are contradictory:
1. "AI is good enough to automate a lot of mundane tasks."
2. "AI is improving a lot so those pesky issues will be fixed soon."
3. "AI still needs supervision so I'm still needed to do the full job."

@cdp1337@social.veraciousnetwork.com
2025-09-06 23:30:58

This week I watched a video about Grist posted by LawrenceSystems and found it to be a good fit for what I've been looking for for a while now.
Basically just a spreadsheet with API support for automation with other systems and data collectors. Been using SuiteCRM for a while and it worked well enough but is too clunky and brittle to quickly add/adjust columns or extend functionality.
Thus far have put together some middleware for device inventory management and about to work on …

@zachleat@zachleat.com
2025-08-28 12:43:38

@… a ha, you might have subscribed to my arch nemesis: that software should be good enough

Great stuff! I can't recommend enough!
OK, I'll try, but it's like I'm trying to slam dunk a basketball, when I can't jump 6" off the ground...
You'll love his books. They're great. Life-lessons. Tragedy. Comedy. And they're a good deal. Did I mention cheap?
#ShortFiction #ProsePoetry

@fell@ma.fellr.net
2025-07-02 23:42:42

I'm going on a 4 day vacation to the north sea tomorrow and I think I'm taking the @… phone.
The 25.06 release is the one where finally everything clicks into place for me. It's super stable. I can even take pictures! Not the prettiest, but good enough to capture memories.

@detondev@social.linux.pizza
2025-06-27 18:28:37

"good enough" - guy who isn't suicidal

@volephd@fediscience.org
2025-08-01 19:14:14

Well, "worked". There are some weird artefacts in the video, but good enough for my use-case

@unchartedworlds@scicomm.xyz
2025-07-24 07:30:11
Content warning: a nice thing - yesterday's BiCon pre-meet

Hosted a BiCon pre-meet yesterday, online. Conveniently there were exactly 12 people there for most of it (not counting me), perfect for dividing into threes! I kept switching the groups so that people could meet different people.
We talked about how we'd each like BiCon to be, and how we could make it more likely to turn out that way.
Top tips: get enough sleep, eat enough food, and don't try to do everything!
Then we also talked about what contribution we might like to make - though I also said, just being there and being friendly and making BiCon more varied is a contribution in itself :-)
Several of the people who'd come along turned out to be already signed up to offer workshop sessions, so we heard a little bit about those.
Two tasks currently available if you want one are (a) keeping an eye on the Zoom setup for the hybrid events, (b) leafleting at Pride on Saturday, so that more people know about BiCon for Sunday. There's usually also opportunities to assist with being welcoming at reception.
In-person BiCon starts tomorrow, and runs Friday till Sunday. The venue is a couple of buildings belonging to the girls' high school, in between the Forest and the Arboretum. I tagged along for a site visit the other day and I think it's pretty good for air quality.
Apparently about 70 people have booked so far. It's also possible to buy a ticket on the day, so that might not be the final total.
As I reminded people last night, you don't have to be bi to come to BiCon! And if you _are_ bi, you don't have to be any particular amount of bi :-)
#BiCon #Nottingham

@tante@tldr.nettime.org
2025-07-21 08:03:34

It does have use cases (we use it for prototyping spatial experiences at work) but for mainstream use that is spot on. The tech doesn't work good enough to combat all the negative aspects that come with its usage.
mastodon.social/@anon_opin/114

@paulbusch@mstdn.ca
2025-08-28 11:07:22

Good Morning #Canada
In my opinion, Great Bear Lake doesn't get enough attention or respect. It is the largest lake entirely in Canada (Lake Superior and Lake Huron are larger but straddle the Canada–US border), the fourth-largest in North America, and the eighth-largest in the world. The lake has a surface area of 31,000 square km and a volume of 2,234 cubic km. Its maximum depth is 446 m with an average depth of 71.7 m. In the winter, ice highways are opened across Great Bear Lake to supply northern communities and provide heavy equipment for resource companies.
#CanadaIsAwesome
youtu.be/kJXBUbVwBNo?si=fyUDnL

@sean@scoat.es
2025-07-28 18:29:57

A big part of why everything is worse than it was is that we’ve let a relatively very small number of people and businesses become rich enough to buy up the good things and make them worse than they were before.

@azonenberg@ioc.exchange
2025-07-06 17:05:56

Does anyone have experience with running docker images of other Linux distros in GitHub Actions CI?
Use case is validating that things build and run under e.g. Fedora or Arch given an outer Ubuntu system. It won't be perfect because you're still running the Ubuntu kernel but should be good enough to find obvious build problems, produce usable nightly builds, etc.

@tiotasram@kolektiva.social
2025-07-19 07:51:05

AI, AGI, and learning efficiency
My 4-month-old kid is not DDoSing Wikipedia right now, nor will they ever do so before learning to speak, read, or write. Their entire "training corpus" will not top even 100 million "tokens" before they can speak & understand language, and do so with real intentionality.
Just to emphasize that point: 100 words-per-minute times 60 minutes-per-hour times 12 hours-per-day times 365 days-per-year times 4 years is a mere 105,120,000 words. That's a ludicrously *high* estimate of words-per-minute and hours-per-day, and 4 years old (the age of my other kid) is well after basic speech capabilities are developed in many children, etc. More likely the available "training data" is at least 1 or 2 orders of magnitude less than this.
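The multiplication above checks out; a mechanical sanity check (variable names are mine, the generous estimates are the post's):

```python
# Deliberately generous upper bound on words a child hears in four years:
words_per_minute = 100
minutes_per_hour = 60
hours_per_day = 12
days_per_year = 365
years = 4

total_words = (words_per_minute * minutes_per_hour
               * hours_per_day * days_per_year * years)
print(total_words)  # 105120000, i.e. ~105 million words
```

Even this inflated ceiling sits two to four orders of magnitude below the multi-billion-token corpora used to train large language models.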
The point here is that large language models, trained as they are on multiple *billions* of tokens, are not developing their behavioral capabilities in a way that's remotely similar to humans, even if you believe those capabilities are similar (they are by certain very biased ways of measurement; they very much aren't by others). This idea that humans must be naturally good at acquiring language is an old one (see e.g. #AI #LLM #AGI

@luana@wetdry.world
2025-07-30 16:08:42

Question for other therians and otherkins on fedi: For someone who doesn’t know what therian is, is “kinda like trans but for species instead of gender” a good enough simple explanation?
#otherkin #therian
Really good
Good enough
Bad
Really bad

@samir@functional.computer
2025-06-21 14:45:21

@… I followed up in subsequent posts, but briefly: one alarm is not enough. I am really good at ignoring notifications and alarms.

@kasilas@mastodon.ie
2025-07-29 07:39:25

Yeah, there seems no way to argue that the EU deal was bad. At a time when Trump was weakened, given the EU market size, there was really no excuse for this. The EU is normally good at trade deals, but this was bad.
Hopefully, someone is brave enough to block it.
#eu #us

@lil5@social.linux.pizza
2025-07-30 07:47:54

What people fail to realize, both in the UK and internationally, is that the UK has the authority under the online safety bill to arrest anyone who owns a website and refuses to make their website UK compliant, or simply didn't do a good enough job of making it so.
#OnlineSafetyAct #selfhosting

@grumpybozo@toad.social
2025-08-22 21:38:13

It's even worse among junior and would-be sysadmins.
I constantly see people worried about mature free email software that goes for a year or more without new releases or even commits. That can be reasonable for complex tools, but sometimes a piece of software is small enough in scope that it can be *done* and actually bug-free.
The only way old software goes bad is when it is asked to work in an environment that it was not built for.

@catsalad@infosec.exchange
2025-06-28 06:00:01

You'd think with how much I love and obsess over cats, I would know all their breeds, but they are all just kitties to me!
Big kitty, naked kitty, floofy kitty, orange kitty, etc. is good enough for me. :3

@NFL@darktundra.xyz
2025-09-08 10:46:38

The Bills are going to win the Super Bowl*, plus Alcaraz's reign nytimes.com/athletic/6609558/2

@jamesthebard@social.linux.pizza
2025-07-20 07:39:20

Got the documentation for the dice parser software online and now I can close the laptop and relax a bit. I think it still needs some cleaning up, but it's good enough for now.
#python

A screenshot of the front page of the DiceParser documentation written using mkdocs.
@gwire@mastodon.social
2025-06-23 18:09:56

I hope Foundation gets a fourth season. The first season was pretty enough, but the second season really got into its stride.
gizmodo.com/murderbot-season-2

@tiotasram@kolektiva.social
2025-08-04 15:49:00

Should we teach vibe coding? Here's why not.
Should AI coding be taught in undergrad CS education?
1/2
I teach undergraduate computer science labs, including for intro and more-advanced core courses. I don't publish (non-negligible) scholarly work in the area, but I've got years of craft expertise in course design, and I do follow the academic literature to some degree. In other words, I'm not the world's leading expert, but I have spent a lot of time thinking about course design, and consider myself competent at it, with plenty of direct experience in what knowledge & skills I can expect from students as they move through the curriculum.
I'm also strongly against most uses of what's called "AI" these days (specifically, generative deep neural networks as supplied by our current cadre of techbros). There are a surprising number of completely orthogonal reasons to oppose the use of these systems, and a very limited number of reasonable exceptions (overcoming accessibility barriers is an example). On the grounds of environmental and digital-commons-pollution costs alone, using specifically the largest/newest models is unethical in most cases.
But as any good teacher should, I constantly question these evaluations, because I worry about the impact on my students should I eschew teaching relevant tech for bad reasons (and even for good reasons). I also want to make my reasoning clear to students, who should absolutely question me on this. That inspired me to ask a simple question: ignoring for one moment the ethical objections (which we shouldn't, of course; they're very stark), at what level in the CS major could I expect to teach a course about programming with AI assistance, and expect students to succeed at a more technically demanding final project than a course at the same level where students were banned from using AI? In other words, at what level would I expect students to actually benefit from AI coding "assistance?"
To be clear, I'm assuming that students aren't using AI in other aspects of coursework: the topic of using AI to "help you study" is a separate one (TL;DR: its gross value is not negative, but it's mostly not worth the harm to your metacognitive abilities, which AI-induced changes to the digital commons are making more important than ever).
So what's my answer to this question?
If I'm being incredibly optimistic, senior year. Slightly less optimistic, second year of a masters program. Realistic? Maybe never.
The interesting bit for you-the-reader is: why is this my answer? (Especially given that students would probably self-report significant gains at lower levels.) To start with, [this paper where experienced developers thought that AI assistance sped up their work on real tasks when in fact it slowed it down] (arxiv.org/abs/2507.09089) is informative. There are a lot of differences in task between experienced devs solving real bugs and students working on a class project, but it's important to understand that we shouldn't have a baseline expectation that AI coding "assistants" will speed things up in the best of circumstances, and we shouldn't trust self-reports of productivity (or the AI hype machine in general).
Now we might imagine that coding assistants will be better at helping with a student project than at helping with fixing bugs in open-source software, since it's a much easier task. For many programming assignments that have a fixed answer, we know that many AI assistants can just spit out a solution based on prompting them with the problem description (there's another elephant in the room here to do with learning outcomes regardless of project success, but we'll ignore this one too; my focus here is on project complexity/reach, not learning outcomes). My question is about more open-ended projects, not assignments with an expected answer. Here's a second study (by one of my colleagues) about novices using AI assistance for programming tasks. It showcases how difficult it is to use AI tools well, and some of the stumbling blocks that novices in particular face.
But what about intermediate students? Might there be some level where the AI is helpful because the task is still relatively simple and the students are good enough to handle it? The problem with this is that as task complexity increases, so does the likelihood of the AI generating (or copying) code that uses more complex constructs which a student doesn't understand. Let's say I have second year students writing interactive websites with JavaScript. Without a lot of care that those students don't know how to deploy, the AI is likely to suggest code that depends on several different frameworks, from React to jQuery, without actually setting up or including those frameworks, and of course these students would be way out of their depth trying to do that. This is a general problem: each programming class carefully limits the specific code frameworks and constructs it expects students to know based on the material it covers. There is no feasible way to limit an AI assistant to a fixed set of constructs or frameworks, using current designs. There are alternate designs where this would be possible (like AI search through adaptation from a controlled library of snippets) but those would be entirely different tools.
So what happens on a sizeable class project where the AI has dropped in buggy code, especially if it uses code constructs the students don't understand? Best case, they understand that they don't understand and re-prompt, or quickly ask for help from an instructor or TA, who helps them get rid of the stuff they don't understand and re-prompt, or manually add stuff they do. Average case: they waste several hours and/or sweep the bugs partly under the rug, resulting in a project with significant defects. Students in their second and even third years of a CS major still have a lot to learn about debugging, and usually have significant gaps in their knowledge of even their most comfortable programming language. I do think regardless of AI we as teachers need to get better at teaching debugging skills, but the knowledge gaps are inevitable because there's just too much to know. In Python, for example, the LLM is going to spit out yields, async functions, try/finally, maybe even something like a while/else, or, with recent training data, the walrus operator. I can't expect even a fraction of 3rd year students who have worked with Python since their first year to know about all these things, and based on how students approach projects where they have studied all the relevant constructs but have forgotten some, I'm not optimistic that these things will magically become learning opportunities. Student projects are better off working with a limited subset of full programming languages that the students have actually learned, and using AI coding assistants as currently designed makes this impossible. Beyond that, even when the "assistant" just introduces bugs using syntax the students understand, even through their 4th year many students struggle to understand the operation of moderately complex code they've written themselves, let alone written by someone else. Having access to an AI that will confidently offer incorrect explanations for bugs will make this worse.
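To make the "limited subset" point concrete, here's a minimal, hypothetical sketch (the allowed-construct list is invented for illustration, not from any real curriculum) of how an instructor-side tool could mechanically flag Python constructs a course hasn't covered yet, using the standard-library ast module. This is exactly the kind of constraint that current AI assistants can't be made to respect:

```python
import ast

# Constructs mentioned above that intro students often haven't learned yet.
# (Illustrative list; a real course would tailor this to its own syllabus.)
ADVANCED_NODES = {
    ast.Yield: "yield expression",
    ast.YieldFrom: "yield from",
    ast.AsyncFunctionDef: "async function",
    ast.NamedExpr: "walrus operator (:=)",
    ast.Try: "try/except/finally",
}

def flag_advanced_constructs(source: str) -> list[str]:
    """Return human-readable warnings for constructs outside the taught subset."""
    warnings = []
    for node in ast.walk(ast.parse(source)):
        for node_type, label in ADVANCED_NODES.items():
            if isinstance(node, node_type):
                warnings.append(f"line {node.lineno}: uses {label}")
    return warnings

# A generator sneaks in a yield the student may never have seen:
print(flag_advanced_constructs("def gen():\n    yield 1\n"))
```

One could imagine running something like this over AI-suggested code before students see it, but as noted above, retrofitting this onto current assistant designs isn't really feasible; it would be a different tool.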
To be sure, a small minority of students will be able to overcome these problems, but that minority is the group that has a good grasp of the fundamentals and has broadened their knowledge through self-study, which earlier AI-reliant classes would make less likely to happen. In any case, I care about the average student, since we already have plenty of stuff about our institutions that makes life easier for a favored few while being worse for the average student (note that our construction of that favored few as the "good" students is a large part of this problem).
To summarize: because AI assistants introduce excess code complexity and difficult-to-debug bugs, they'll slow down rather than speed up project progress for the average student on moderately complex projects. On a fixed deadline, they'll result in worse projects, or necessitate less ambitious project scoping to ensure adequate completion, and I expect this remains broadly true through 4-6 years of study in most programs (don't take this as an endorsement of AI "assistants" for masters students; we've ignored a lot of other problems along the way).
There's a related problem: solving open-ended project assignments well ultimately depends on deeply understanding the problem, and AI "assistants" allow students to put a lot of code in their file without spending much time thinking about the problem or building an understanding of it. This is awful for learning outcomes, but also bad for project success. Getting students to see the value of thinking deeply about a problem is a thorny pedagogical puzzle at the best of times, and allowing the use of AI "assistants" makes the problem much much worse. This is another area I hope to see (or even drive) pedagogical improvement in, for what it's worth.
1/2

@arXiv_csPL_bot@mastoxiv.page
2025-08-21 07:38:59

Close is Good Enough: Component-Based Synthesis Modulo Logical Similarity
Ashish Mishra, Suresh Jagannathan
arxiv.org/abs/2508.14614 arxiv.…

@losttourist@social.chatty.monster
2025-08-29 20:28:17

As the saying goes, "Follow in haste, unfollow at leisure".
Actually I don't think that's a saying, but I just said it so that's good enough for me.

@azonenberg@ioc.exchange
2025-09-05 03:27:04

Welp.
The toolchain is cursed but I have the Trion devkit talking FMC/APB at 75 MHz to a STM32H750. I can probably push further (maybe 100 MHz PCLK?), this isn't failing timing, but it's good enough for the purpose.

Efinix Trion T20 devkit with alternating on/off LED pattern displayed and a small purple PCB hanging off one of the IO connectors
@raiders@darktundra.xyz
2025-08-30 17:07:18

Raiders Predicted to Add Elite OT Who Can Protect Geno Smith heavy.com/sports/nfl/las-vegas

@cowboys@darktundra.xyz
2025-09-05 18:59:50

Cowboys Break: Good Enough to Win | Dallas Cowboys 2025 youtube.com/watch?v=76nfIG2C86A

@compfu@mograph.social
2025-07-26 10:48:34

Nostalgia is a helluva drug. Supercuts of the 80s movies set to #synthwave music make me emotional.
I know about the research that everybody regards "the good old days" as exactly the time when they were kids/adolescents: no memory of the real weight of the world.
We have enough articles about how the 50s/60s weren't really good for many groups (and we're myopically …

@jswright61@ruby.social
2025-06-25 14:11:18

Headed home. Prognosis is good. 106 hours in the hospital is quite long enough.

@kasilas@mastodon.ie
2025-07-29 07:39:25

Yeah, there seems no way to argue that the EU deal was bad. At a time when Trump was weakened, given the EU market size, there was really no excuse for this. The EU is normally good at trade deals, but this was bad.
Hopefully, someone is brave enough to block it.
#eu #us

@blakes7bot@mas.torpidity.net
2025-06-25 15:27:36

Series C, Episode 06 - City at the Edge of the World
VILA: Homeworld. You wanted to call it Homeworld. All right, then we'll call it Homeworld, it's a good enough name.
KERRIL: Are you playing games with me?
VILA: Games?
blake.torpidity.net/m/306/531 B7…

@tiotasram@kolektiva.social
2025-08-04 15:49:39

Should we teach vibe coding? Here's why not.
2/2
To address the bigger question I started with ("should we teach AI-"assisted" coding?"), my answer is: "No, except enough to show students directly what its pitfalls are." We have little enough time as it is to cover the core knowledge that they'll need, which has become more urgent now that they're going to be expected to clean up AI bugs and they'll have less time to develop an understanding of the problems they're supposed to be solving. The skill of prompt engineering & other skills of working with AI are relatively easy to pick up on your own, given a decent not-even-mathematical understanding of how a neural network works, which is something we should be giving to all students, not just our majors.
Reasonable learning objectives for CS majors might include explaining what types of bugs an AI "assistant" is most likely to introduce, explaining the difference between software engineering and writing code, explaining why using an AI "assistant" is likely to violate open-source licenses, listing at least three independent ethical objections to contemporary LLMs and explaining the evidence for/reasoning behind them, explaining why we should expect AI "assistants" to be better at generating code from scratch than at fixing bugs in existing code (and why they'll confidently "claim" to have fixed problems they haven't), and even fixing bugs in AI generated code (without AI "assistance").
If we lived in a world where the underlying environmental, labor, and data commons issues with AI weren't as bad, or if we could find and use systems that effectively mitigate these issues (there's lots of piecemeal progress on several of these) then we should probably start teaching an elective on coding with an assistant to students who have mastered programming basics, but such a class should probably spend a good chunk of time on non-assisted debugging.
#AI #LLMs #VibeCoding

@kexpmusicbot@mastodonapp.uk
2025-07-22 00:13:40

🇺🇦 #NowPlaying on KEXP's #DriveTime
Yazmin Lacey:
🎵 Ain’t Good Enough for You
#YazminLacey
open.spotify.com/track/77WEFz4

@tiotasram@kolektiva.social
2025-07-28 13:06:20

How popular media gets love wrong
Now a bit of background about why I have this "engineered" model of love:
First, I'm a white straight cis man. I've got a few traits that might work against my relationship chances (e.g., neurodivergence; I generally fit pretty well into the "weird geek" stereotype), but as I was recently reminded, it's possible my experience derives more from luck than other factors, and since things are tilted more in my favor than most people on the planet, my advice could be worse than useless if it leads people towards strategies that would only have worked for someone like me. I don't *think* that's the case, but it's worth mentioning explicitly.
When I first started dating my now-wife, we were both in graduate school. I was 26, and had exactly zero dating/romantic experience through that point in my life. In other words, a pretty stereotypical "incel" although I definitely didn't subscribe to incel ideology at all. I felt lonely, and vaguely wanted a romantic relationship (I'm neither aromantic nor asexual), but had never felt socially comfortable enough to pursue one before. I don't drink and dislike most social gatherings like parties or bars; I mostly hung around the fringes of the few college parties I attended, and although I had a reasonable college social life in terms of friends, I didn't really do anything to pursue romance, feeling too awkward to know where to start. I had the beginnings of crushes in both high school and college, but never developed a really strong crush, probably correlated with not putting myself in many social situations outside of close all-male friend gatherings. I never felt remotely comfortable enough to act on any of the proto-crushes I did have. I did watch porn and masturbate, so one motivation for pursuing a relationship was physical intimacy, but loneliness was as much of a motivating factor, and of course the social pressure to date was a factor too, even though I'm quite contrarian.
I'm lucky in that I had some mixed-gender social circles already like intramural soccer and a graduate-student housing potluck. Graduate school makes a *lot* more of these social spaces accessible, so I recognize that those not in school of some sort have a harder time of things, especially if like me they don't feel like they fit in in typical adult social spaces like bars.
However, at one point I just decided that my desire for a relationship would need action on my part and so I'd try to build a relationship and see what happened. I worked up my courage and asked one of the people in my potluck if she'd like to go for a hike (pretty much clearly a date but not explicitly one; in retrospect not the best first-date modality in a lot of ways, but it made a little more sense in our setting where we could go for a hike from our front door). To emphasize this point: I was not in love with (or even infatuated with) my now-wife at that point. I made a decision to be open to building a relationship, but didn't follow the typical romance story formula beyond that. Now of course, in real life as opposed to popular media, this isn't anything special. People ask each other out all the time just because they're lonely, and some of those relationships turn out fine (although many do not).
I was lucky in that some aspects of who I am and what I do happened to be naturally comforting to my wife (natural advantage in the "appeal" model of love) but of course there are some aspects of me that annoy my wife, and we negotiate that. In the other direction, there's some things I instantly liked about my wife, and other things that still annoy me. We've figured out how to accept a little, change a little, and overall be happy with each other (though we do still have arguments; it's not like the operation/construction/maintenance of the "love mechanism" is always perfectly smooth). In particular though, I approached the relationship with the attitude of "I want to try to build a relationship with this person," at first just because of my own desires for *any* relationship, and then gradually more and more through my desire to build *this specific* relationship as I enjoyed the rewards of companionship.
So for example, while I think my wife is objectively beautiful, she's also *subjectively* very beautiful *to me* because having decided to build a relationship with her, I actively tried to see her as beautiful, rather than trying to judge whether I wanted a relationship with her based on her beauty. In other words, our relationship is more causative of her beauty-to-me than her beauty-to-me is causative of our relationship. This is the biggest way I think the "engineered" model of love differs from the "fire" and "appeal" models: you can just decide to build love independent of factors we typically think of as engendering love (NOT independent of your partner's willingness to participate, of course), and then all of those things like "thinking your partner is beautiful" can be a result of the relationship you're building. For sure those factors might affect who is willing to try building a relationship with you in the first place, but if more people were willing to jump into relationship building (not necessarily with full commitment from the start) without worrying about those other factors, they might find that those factors can come out of the relationship instead of being prerequisites for it. I think this is the biggest failure of the "appeal" model in particular: yes you *do* need to do things that appeal to your partner, but it's not just "make myself lovable" it's also: is your partner putting in the effort to see the ways that you are beautiful/lovable/etc., or are they just expecting you to become exactly some perfect person they've imagined (and/or been told to desire by society)? The former is perfectly possible, and no less satisfying than the latter.
To cut off my rambling a bit here, I'll just add that in our progress from dating through marriage through staying-married, my wife and I have both talked at times explicitly about commitment, and especially when deciding to get married, I told her that I knew I couldn't live up to the perfect model of a husband that I'd want to be, but that if she wanted to deepen our commitment, I was happy to do that, and so we did. I also rearranged my priorities at that point, deciding that I knew I wanted to prioritize this relationship above things like my career or my research interests, and while I've not always been perfect at that in my little decisions, I've been good at holding to that in my big decisions at least. In the end, *once we had built a somewhat-committed relationship*, we had something that we both recognized was worth more than most other things in life, and that let us commit even more, thus getting even more out of it in the long term. Obviously you can't start the first date with an expectation of life-long commitment, and you need to synchronize your increasing commitment to a relationship so that it doesn't become lopsided, which is hard. But if you take the commitment as an active decision and as the *precursor* to things like infatuation, attraction, etc., you can build up to something that's incredibly strong and rewarding.
I'll follow this up with one more post trying to distill some advice from my ramblings.
#relationships #love

@tiotasram@kolektiva.social
2025-07-06 12:45:11

So I've found my answer after maybe ~30 minutes of effort. First stop was the first search result on Startpage (millennialhawk.com/does-poop-h), which has some evidence of maybe-AI authorship but which is better than a lot of slop. It actually has real links & cites research, so I'll start by looking at the sources.
It claims near the top that poop contains 4.91 kcal per gram (note: 1 kcal = 1 Calorie = 1000 calories, which fact I could find/do trust despite the slop in that search). Now obviously, without a range or mention of an average, this isn't the whole picture, but maybe it's an average to start from? However, the citation link is to a study (pubmed.ncbi.nlm.nih.gov/322359) which only included 27 people with impaired glucose tolerance and obesity. Might have the cited stat, but it's definitely not a broadly representative one if this is the source. The public abstract does not include the stat cited, and I don't want to pay for the article. I happen to be affiliated with a university library, so I could see if I have access that way, but it's a pain to do and not worth it for this study that I know is too specific. Also most people wouldn't have access that way.
Side note: this doing-the-research project has the nice benefit of letting you see lots of cool stuff you wouldn't have otherwise. The abstract of this study is pretty cool and I learned a bit about gut microbiome changes from just reading the abstract.
My next move was to look among citations in this article to see if I could find something about calorie content of poop specifically. Luckily the article page had indicators for which citations were free to access. I ended up reading/skimming 2 more articles (a few more interesting facts about gut microbiomes were learned) before finding this article whose introduction has what I'm looking for: pmc.ncbi.nlm.nih.gov/articles/
Here's the relevant paragraph:
"""
The alteration of the energy-balance equation, which is defined by the equilibrium of energy intake and energy expenditure (1–5), leads to weight gain. One less-extensively-studied component of the energy-balance equation is energy loss in stools and urine. Previous studies of healthy adults showed that ≈5% of ingested calories were lost in stools and urine (6). Individuals who consume high-fiber diets exhibit a higher fecal energy loss than individuals who consume low-fiber diets with an equivalent energy content (7, 8). Webb and Annis (9) studied stool energy loss in 4 lean and 4 obese individuals and showed a tendency to lower the fecal energy excretion in obese compared with lean study participants.
"""
And there's a good-enough answer if we do some math, along with links to more in-depth reading if we want them. A Mayo clinic calorie calculator suggests about 2250 Calories per day for me to maintain my weight. I think there's probably a lot of variation in that number, but 5% of that would be very roughly 100 Calories lost in poop per day, so maybe an extremely rough estimate for a range of humans might be 50-200 Calories per day. Interestingly, one of the AI slop pages I found asserted (without citation) 100-200 Calories per day, which kinda checks out. I had no way to trust that number though, and as we saw with the 4.91 kcal/gram figure, its provenance might not be good.
To double-check, I visited this link from the paragraph above: sciencedirect.com/science/arti
It's only a 6-person study, but just the abstract has numbers: ~250 kcal/day pooped on a low-fiber diet vs. ~400 kcal/day pooped on a high-fiber diet. That's with intakes of ~2100 and ~2350 kcal respectively, which is close to the number from which I estimated 100 kcal above, so maybe the first estimate from just the 5% number was a bit low.
Glad those numbers were in the abstract, since the full text is paywalled... It's possible this study was also done on some atypical patient group...
Just to come full circle, let's look at that 4.91 kcal/gram number again. A search suggests 14-16 ounces of poop per day is typical, with at least two sources around 14 ounces, or ~400 grams. (AI slop was strong here too, with one including a completely made up table of "studies" that was summarized as 100-200 grams/day). If we believe 400 grams/day of poop, then 4.91 kcal/gram would be almost 2000 kcal/day, which is very clearly ludicrous! So that number was likely some unrelated statistic regurgitated by the AI. I found that number in at least 3 of the slop pages I waded through in my initial search.
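The sanity checks above are easy to reproduce; here's the arithmetic spelled out (all figures are the rough estimates quoted in the post, not authoritative nutrition data):

```python
# Estimate of calories lost in stool/urine from the ~5% figure in the paper.
daily_intake_kcal = 2250      # Mayo-style maintenance estimate for one person
fecal_loss_fraction = 0.05    # ~5% of ingested calories lost (healthy adults)
loss_kcal = daily_intake_kcal * fecal_loss_fraction
print(f"~{loss_kcal:.0f} kcal/day lost")          # roughly 100 kcal/day

# The suspect 4.91 kcal/gram number, applied to ~400 g/day of stool:
slop_estimate = 4.91 * 400
print(f"~{slop_estimate:.0f} kcal/day")           # near 2000 kcal/day: ludicrous
```

If the 4.91 kcal/gram figure were right, we'd poop out nearly our entire caloric intake, which is exactly the contradiction noted above.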

@raiders@darktundra.xyz
2025-07-21 18:43:35

Raiders' Geno Smith trade doesn't come without key worry as training camp opens sportingnews.com/us/nfl/las-ve

@cowboys@darktundra.xyz
2025-06-26 19:39:23

17) Who makes their first career Pro Bowl? dallascowboys.com/news/17-who-

@paulbusch@mstdn.ca
2025-08-06 11:57:18

Good Morning #Canada
There is a little known reporting tool on the #StatsCan website called Canada’s Quality of Life Hub. First proposed in 2021, and launched in 2023, it is a framework that gathers data and evidence to inform priority setting and guide decision-making in various policy areas, including the budgetary process. The Framework comprises five domains – prosperity, health, environment, society, and good governance – and two cross-cutting lenses: fairness and inclusion, and sustainability and resilience. Not enough Canadians have heard of it, and I suspect very few government policies are implemented before analyzing the impact on quality of life measurement by this tool. On a positive note, #StatsCan is looking for feedback on how to improve the hub.
#CanadaIsAwesome #QualityOfLife
www160.statcan.gc.ca/index-eng

@tiotasram@kolektiva.social
2025-07-30 17:56:35

Just read this post by @… on an optimistic AGI future, and while it had some interesting and worthwhile ideas, it's also in my opinion dangerously misguided, and plays into the current AGI hype in a harmful way.
social.coop/@eloquence/1149406
My criticisms include:
- Current LLM technology has many layers, but the biggest most capable models are all tied to corporate datacenters and require inordinate amounts of energy and water use to run. Trying to use these tools to bring about a post-scarcity economy will burn up the planet. We urgently need more-capable but also vastly more efficient AI technologies if we want to use AI for a post-scarcity economy, and we are *not* nearly on the verge of this despite what the big companies pushing LLMs want us to think.
- I can see that permacommons.org claims that a small level of AI expenses equates to low climate impact. However, given the current deep subsidies in place by the big companies to attract users, that isn't a great assumption. The fact that their FAQ dodges the question about which AI systems they use isn't a great look.
- These systems are not free in the same way that Wikipedia or open-source software is. To run your own model you need a data harvesting & cleaning operation that costs millions of dollars minimum, and then you need millions of dollars' worth of storage & compute to train & host the models. Right now, big corporations are trying to compete for market share by heavily subsidizing these things, but if you go along with that, you become dependent on them, and you'll be screwed when they jack up the price to a profitable level later. I'd love to see open dataset initiatives and the like, and there are some of these things, but not enough yet, and many of the initiatives focus on one problem while ignoring others (fine for research but not the basis for a society yet).
- Between the environmental impacts, the horrible labor conditions and undercompensation of data workers who filter the big datasets, and the impacts of both AI scrapers and AI commons pollution, the developers of the most popular & effective LLMs have a lot to answer for. This project only really mentions environmental impacts, which makes me think that they're not serious about ethics, which in turn makes me distrustful of the whole enterprise.
- Their language also ends up encouraging AI use broadly while totally ignoring several entire classes of harm, so they're effectively contributing to AI hype, especially with such casual talk of AGI and robotics as if embodied AGI were just around the corner. To be clear about this point: we are several breakthroughs away from AGI under the most optimistic assumptions, and giving the impression that those will happen soon plays directly into the hands of the Sam Altmans of the world who are trying to make money off the impression of impending huge advances in AI capabilities. Adding to the AI hype is irresponsible.
- I've got a more philosophical criticism that I'll post about separately.
I do think that the idea of using AI & other software tools, possibly along with robotics and funded by many local cooperatives, in order to make businesses obsolete before they can do the same to all workers, is a good one. Get your local library to buy a knitting machine alongside their 3D printer.
Lately I've felt too busy criticizing AI to really sit down and think about what I do want the future to look like, even though I'm a big proponent of positive visions for the future as a force multiplier for criticism, and this article is inspiring to me in that regard, even if the specific project doesn't seem like a good one.

@pre@boing.world
2025-06-22 10:19:09
Content warning: UKPol MidEast

I see Sir Kier Starmer thinks that it's "inappropriate" for a music festival to have a band that campaigns for peace and an end to genocide, but that it's perfectly appropriate and may "alleviate" a "grave threat" for a country to bomb nuclear sites in another country!
He thinks daubing some paint on airplanes as a protest is terrorism, but that using those planes in reconnaissance to support an ongoing genocide by Israel is a good and normal use of them.
What a fucking weasel, absolute death worshiping fuck-knuckle.
Goddamn Labour party loves them some illegal wars of aggression in the middle east. Can't get enough of it.
Please fuck off Sir Starmer.
#ukpol #iran #kneecap #starmer #fuckKnuckle

@tomkalei@machteburch.social
2025-07-20 07:35:56

In "Mathematica" David Bessis writes:
"While the official knowledge has been transcribed in textbooks the secret art of mathematicians has remained an oral tradition passed down from generation to generation. It reveals what no one dares write down in books because it doesn't seem serious enough, because it's not science, and because it resembles self-improvement too much."
I'm thinking about LRMs doing math research.
Why are they good at it (sometimes) and will they keep improving?
1/2

@cowboys@darktundra.xyz
2025-06-24 19:01:23

Cowboys Trade Pitch Lands Dak Prescott Another Weapon in Ex-Steelers Star heavy.com/sports/nfl/dallas-co

@tiotasram@kolektiva.social
2025-06-21 02:34:13

Why AI can't possibly make you more productive; long
#AI and "productivity", some thoughts:
Edit: fixed some typos.
Productivity is a concept that isn't entirely meaningless outside the context of capitalism, but it's a concept that is heavily inflected in a capitalist context. In many uses today it effectively means "how much you can satisfy and/or exceed your boss' expectations." This is not really what it should mean: even in an anarchist utopia, people would care about things like how many shirts they can produce in a week, although in an "I'd like to voluntarily help more people" way rather than an "I need to meet this quota to earn my survival" way. But let's roll with this definition for a second, because it's almost certainly what your boss means when they say "productivity", and understanding that word in a different (even if truer) sense is therefore inherently dangerous.
Accepting "productivity" to mean "satisfying your boss' expectations," I will now claim: the use of generative AI cannot increase your productivity.
Before I dive in, it's imperative to note that the big generative models which most people think of as constituting "AI" today are evil. They are 1: pouring fuel on our burning planet, 2: psychologically strip-mining a class of data laborers who are exploited for their precarity, 3: enclosing, exploiting, and polluting the digital commons, and 4: stealing labor from broad classes of people many of whom are otherwise glad to give that labor away for free provided they get a simple acknowledgement in return. Any of these four "ethical issues" should be enough *alone* to cause everyone to simply not use the technology. These ethical issues are the reason that I do not use generative AI right now, except for in extremely extenuating circumstances. These issues are also convincing for a wide range of people I talk to, from experts to those with no computer science background. So before I launch into a critique of the effectiveness of generative AI, I want to emphasize that such a critique should be entirely unnecessary.
But back to my thesis: generative AI cannot increase your productivity, where "productivity" has been defined as "how much you can satisfy and/or exceed your boss' expectations."
Why? In fact, what the fuck? Every AI booster I've met has claimed the opposite. They've given me personal examples of time saved by using generative AI. Some of them even truly believe this. Sometimes I even believe they saved time without horribly compromising on quality (and often, your boss doesn't care about quality anyways if the lack of quality is hard to measure or doesn't seem likely to impact short-term sales/feedback/revenue). So if generative AI genuinely lets you write more emails in a shorter period of time, or close more tickets, or something else along these lines, how can I say it isn't increasing your ability to meet your boss' expectations?
The problem is simple: your boss' expectations are not a fixed target. Never have been. In virtue of being someone who oversees and pays wages to others under capitalism, your boss' game has always been: pay you less than the worth of your labor, so that they can accumulate profit and thus more capital to remain in charge instead of being forced into working for a wage themselves. Sure, there are layers of management caught in between who aren't fully in this mode, but they are irrelevant to this analysis. It matters not how much you please your manager if your CEO thinks your work is not worth the wages you are being paid. And using AI actively lowers the value of your work relative to your wages.
Why do I say that? It's actually true in several ways. The most obvious: using generative AI lowers the quality of your work, because the work it produces is shot through with errors, and when your job is reduced to proofreading slop, you are bound to tire a bit, relax your diligence, and let some mistakes through. More than you would if you were actually doing the work and taking pride in it. Examples are innumerable and frequent, from journalists to lawyers to programmers, and we laugh at them "haha how stupid to not check whether the books the AI reviewed for you actually existed!" but on a deeper level if we're honest we know we'd eventually make the same mistake ourselves (bonus game: spot the swipe-typing typos I missed in this post; I'm sure there will be some).
But using generative AI also lowers the value of your work in another much more frightening way: in this era of hype, it demonstrates to your boss that you could be replaced by AI. The more you use it, and no matter how much you can see that your human skills are really necessary to correct its mistakes, the more it appears to your boss that they should hire the AI instead of you. Or perhaps retain 10% of the people in roles like yours to manage the AI doing the other 90% of the work. Paradoxically, the *more* you get done in terms of raw output using generative AI, the more it looks to your boss as if there's an opportunity to get enough work done with even fewer expensive humans. Of course, the decision to fire you and lean more heavily into AI isn't really a good one for long-term profits and success, but the modern boss did not get where they are by considering long-term profits. By using AI, you are merely demonstrating your redundancy, and the more you get done with it, the more redundant you seem.
In fact, there's even a third dimension to this: by using generative AI, you're also providing its purveyors with invaluable training data that allows them to make it better at replacing you. It's generally quite shitty right now, but the more use it gets by competent & clever people, the better it can become at the tasks those specific people use it for. Using the currently-popular algorithm family, there are limits to this; I'm not saying it will eventually transcend the mediocrity it's entwined with. But it can absolutely go from underwhelmingly mediocre to almost-reasonably mediocre with the right training data, and data from prompting sessions is both rarer and more useful than the base datasets it's built on.
For all of these reasons, using generative AI in your job is a mistake that will likely lead to your future unemployment. To reiterate, you should already not be using it because it is evil and causes specific and inexcusable harms, but in case like so many you just don't care about those harms, I've just explained to you why for entirely selfish reasons you should not use it.
If you're in a position where your boss is forcing you to use it, my condolences. I suggest leaning into its failures instead of trying to get the most out of it, and as much as possible, showing your boss very clearly how it wastes your time and makes things slower. Also, point out the dangers of legal liability for its mistakes, and make sure your boss is aware of the degree to which any of your AI-eager coworkers are producing low-quality work that harms organizational goals.
Also, if you've read this far and aren't yet of an anarchist mindset, I encourage you to think about the implications of firing 75% of (at least the white-collar) workforce in order to make more profit while fueling the climate crisis and in most cases also propping up dictatorial figureheads in government. When *either* the AI bubble bursts *or* the techbros get to live out the beginnings of their worker-replacement fantasies, there are going to be an unimaginable number of economically desperate people living in increasingly expensive times. I'm the kind of optimist who thinks that the resulting social crucible, though perhaps through terrible violence, will lead to deep social changes that effectively unseat from power the ultra-rich who continue to drag us all down this destructive path, and I think it's worth some thinking now about what you might want the succeeding stable social configuration to look like so you can advocate towards that during points of malleability.
As others have said more eloquently, generative AI *should* be a technology that makes human lives on average easier, and it would be were it developed & controlled by humanists. The only reason that it's not, is that it's developed and controlled by terrible greedy people who use their unfairly hoarded wealth to immiserate the rest of us in order to maintain their dominance. In the long run, for our very survival, we need to depose them, and I look forward to what the term "generative AI" will mean after that finally happens.

@paulbusch@mstdn.ca
2025-08-30 11:57:30

Good Morning #Canada
It's 1866, and there's no TV or interweb, so what do you do? Well, if you live on a dairy farm near Ingersoll Ontario, you make a 7,000-pound wheel of cheese. It's helpful if you are also not lactose intolerant.
#CanadaIsAwesome #Cheesy
cbc.ca/news/canada/london/mamm.

@tiotasram@kolektiva.social
2025-07-25 10:57:58

Just saw this:
#AI can mean a lot of things these days, but lots of the popular meanings imply a bevy of harms that I definitely wouldn't feel are worth a cute fish game. In fact, these harms are so acute that even "just" playing into the AI hype becomes its own kind of harm (it's similar to blockchain in that way).
@… noticed that the authors claim the code base is 80% AI generated, which is a red flag because people with sound moral compasses wouldn't be using AI to "help" write code in the first place. The authors aren't by some miracle people who couldn't build this app without help, in case that influences your thinking about it: they have the skills to write the code themselves, although it likely would have taken longer (but also been better).
I was more interested in the fish-classification AI, and how much it might be dependent on datacenters. Thankfully, a quick glance at the code confirms they're using ONNX and running a self-trained neural network on your device. While the exponentially-increasing energy & water demands of datacenters to support billion-parameter models are a real concern, this is not that. Even a non-AI game can burn a lot of cycles on someone's phone, and I don't think there's anything to complain about energy-wise if we're just using cycles on the end user's device, as long as we're not having them keep it on for hours crunching numbers like blockchain stuff does. Running whatever stuff locally while the user is playing a game is a negligible environmental concern, unlike, say, calling out to ChatGPT where you're directly feeding datacenter demand. Since they claimed to have trained the network themselves, and since it's actually totally reasonable to make your own dataset for this and get good-enough-for-a-silly-game results with just a few hundred examples, I don't have any ethical objections to the data sourcing or training processes either. Hooray! This is finally an example of "ethical use of neural networks" that I can hold up as an example of what people should be doing instead of the BS they are doing.
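For anyone unfamiliar with what "running a self-trained network on your device" via ONNX looks like in practice, here's a minimal sketch. The model filename, input shape, and helper names are hypothetical illustrations, not taken from the game's actual code; the point is just that inference happens entirely from a local .onnx file, with no network calls:

```python
import numpy as np


def softmax(logits: np.ndarray) -> np.ndarray:
    # Numerically stable softmax: turns raw scores into class probabilities.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()


def classify(image: np.ndarray, model_path: str = "fish_classifier.onnx") -> int:
    """Run a locally stored ONNX model on a preprocessed image.

    `image` is assumed to already be a float32 array resized and normalized
    to the model's expected input shape (e.g. 1x3x224x224).
    """
    # Imported lazily so the helpers above work without onnxruntime installed.
    import onnxruntime as ort

    # Loads the bundled .onnx file from disk; no datacenter involved.
    session = ort.InferenceSession(model_path)
    input_name = session.get_inputs()[0].name
    (logits,) = session.run(None, {input_name: image})
    return int(np.argmax(softmax(logits)))
```

The key property is that `InferenceSession` reads a file shipped with the app, so the only compute cost is a brief burst on the player's own device.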
But wait... Remember what I said about feeding the AI hype being its own form of harm? Yeah, between using AI tools for coding and calling their classifier "AI" in a way that makes it seem like the same kind of thing as ChatGPT et al., they're leaning into the hype rather than helping restrain it. And that means they're causing harm. Big AI companies can point to them and say "look AI enables cute things you like" when AI didn't actually enable it. So I'm feeling meh about this cute game and won't be sharing it aside from this post. If you love the cute fish, you don't really have to feel bad for playing with it, but I'd feel bad for advertising it without a disclaimer.

@tiotasram@kolektiva.social
2025-07-22 00:03:45

Overly academic/distanced ethical discussions
Had a weird interaction with @/brainwane@social.coop just now. I misinterpreted one of their posts quoting someone else, and I think the combination of that plus an interaction pattern where I'd assume their stance on something and respond critically to that ended up with me getting blocked. I don't have hard feelings exactly, and this post is only partly about this particular person, but I noticed something interesting by the end of the conversation that had been bothering me. They repeatedly criticized me for assuming what their position was, but never actually stated their position. They didn't say: "I'm bothered you assumed my position was X, it's actually Y." They just said "I'm bothered you assumed my position was X, please don't assume my position!" I get that it's annoying to have people respond to a straw man version of your argument, but when I in response asked some direct questions about what their position was, they gave some non-answers and then blocked me. It's entirely possible it's a coincidence, and they just happened to run out of patience on that iteration, but it makes me take their critique of my interactions a bit less seriously. I suspect that they just didn't want to hear what I was saying, while at the same time they wanted to feel as if they were someone who values public critique and open discussion of tricky issues (if anyone reading this post also followed our interaction and has a different opinion of my behavior, I'd be glad to hear it; it's possible I'm effectively being an asshole here and it would be useful to hear that if so).
In any case, the fact that at the end of the entire discussion, I'm realizing I still don't actually know their position on whether they think the AI use case in question is worthwhile feels odd. They praised the system on several occasions, albeit noting some drawbacks while doing so. They said that the system was possibly changing their anti-AI stance, but then got mad at me for assuming this meant that they thought this use-case was justified. Maybe they just haven't made up their mind yet but didn't want to say that?
Interestingly, in one of their own blog posts that got linked in the discussion, they discuss a different AI system, and despite listing a bunch of concrete harms, conclude that it's okay to use it. That's fine; I don't think *every* use of AI is wrong on balance, but what bothered me was that their post dismissed a number of real ethical issues by saying essentially "I haven't seen calls for a boycott over this issue, so it's not a reason to stop use." That's an extremely socially conformist version of ethics that doesn't sit well with me. The discussion also ended up linking this post: chelseatroy.com/2024/08/28/doe which bothered me in a related way. In it, Troy describes classroom teaching techniques for introducing and helping students explore the ethics of AI, and they seem mostly great. They avoid prescribing any particular correct stance, which is important when teaching given the power relationship, and they help students understand the limitations of their perspectives regarding global impacts, which is great. But the overall conclusion of the post is that "nobody is qualified to really judge global impacts, so we should focus on ways to improve outcomes instead of trying to judge them." This bothers me because we actually do have a responsibility to make decisive ethical judgments despite limitations of our perspectives. If we never commit to any ethical judgment against a technology because we think our perspective is too limited to know the true impacts (which I'll concede it invariably is) then we'll have to accept every technology without objection, limiting ourselves to trying to improve their impacts without opposing them. Given who currently controls most of the resources that go into exploration for new technologies, this stance is too permissive. Perhaps if our objection to a technology was absolute and instantly effective, I'd buy the argument that objecting without a deep global view of the long-term risks is dangerous. 
As things stand, I think that objecting to the development/use of certain technologies in certain contexts is necessary, and although there's a lot of uncertainty, I expect strongly enough that the overall outcomes of objection will be positive that I think it's a good thing to do.
The deeper point here I guess is that this kind of "things are too complicated, let's have a nuanced discussion where we don't come to any conclusions because we see a lot of unknowns along with definite harms" really bothers me.