Tootfinder

Opt-in global Mastodon full text search. Join the index!

@grumpybozo@toad.social
2025-06-08 03:04:53

She’s right, you know.
We don’t need a general solution for how we accommodate trans kids in sports. It can be handled on a case by case basis. It’s being hypothesized into absurd scale to trigger reaction. There’s no “catastrophe” possible. There are a few dozen specific cases and they are different enough that they should be handled independently.
The details in real cases always seem to argue in favor of the trans kid’s choice.

@david@boles.xyz
2025-08-07 20:07:29

This Is Not the World I Wanted to Leave for You: Reflections on Legacy, Loss, and the Future We Shape
I have been thinking a great deal lately about living and dying, and about the strange, stubborn human hunger to leave something meaningful behind. The faces of those I have known who have already passed return to me in quiet moments, and I find myself watching those who are, even now, nearing the end of their own stories. I also include my final braided prairie knot…

@tiotasram@kolektiva.social
2025-07-06 12:45:11

So I've found my answer after maybe ~30 minutes of effort. First stop was the first search result on Startpage (millennialhawk.com/does-poop-h), which has some evidence of maybe-AI authorship but which is better than a lot of slop. It actually has real links & cites research, so I'll start by looking at the sources.
It claims near the top that poop contains 4.91 kcal per gram (note: 1 kcal = 1 Calorie = 1000 calories, which fact I could find/do trust despite the slop in that search). Now obviously, without a range or mention of an average, this isn't the whole picture, but maybe it's an average to start from? However, the citation link is to a study (pubmed.ncbi.nlm.nih.gov/322359) which only included 27 people with impaired glucose tolerance and obesity. Might have the cited stat, but it's definitely not a broadly representative one if this is the source. The public abstract does not include the stat cited, and I don't want to pay for the article. I happen to be affiliated with a university library, so I could see if I have access that way, but it's a pain to do and not worth it for this study that I know is too specific. Also most people wouldn't have access that way.
Side note: this doing-the-research project has the nice benefit of letting you see lots of cool stuff you wouldn't have otherwise. The abstract of this study is pretty cool and I learned a bit about gut microbiome changes from just reading the abstract.
My next move was to look among citations in this article to see if I could find something about calorie content of poop specifically. Luckily the article page had indicators for which citations were free to access. I ended up reading/skimming 2 more articles (a few more interesting facts about gut microbiomes were learned) before finding this article whose introduction has what I'm looking for: pmc.ncbi.nlm.nih.gov/articles/
Here's the relevant paragraph:
"""
The alteration of the energy-balance equation, which is defined by the equilibrium of energy intake and energy expenditure (1–5), leads to weight gain. One less-extensively-studied component of the energy-balance equation is energy loss in stools and urine. Previous studies of healthy adults showed that ≈5% of ingested calories were lost in stools and urine (6). Individuals who consume high-fiber diets exhibit a higher fecal energy loss than individuals who consume low-fiber diets with an equivalent energy content (7, 8). Webb and Annis (9) studied stool energy loss in 4 lean and 4 obese individuals and showed a tendency to lower the fecal energy excretion in obese compared with lean study participants.
"""
And there's a good-enough answer if we do some math, along with links to more in-depth reading if we want them. A Mayo Clinic calorie calculator suggests about 2250 Calories per day for me to maintain my weight. There's probably a lot of variation in that number, but 5% of that would be very roughly 100 Calories lost in poop per day, so an extremely rough estimate for a range of humans might be 50-200 Calories per day. Interestingly, one of the AI slop pages I found asserted (without citation) 100-200 Calories per day, which kinda checks out. I had no way to trust that number, though, and as we saw with the 4.91 kcal/gram figure, its provenance might not be good.
To double-check, I visited this link from the paragraph above: sciencedirect.com/science/arti
It's only a 6-person study, but just the abstract has numbers: ~250 kcal/day pooped on a low-fiber diet vs. ~400 kcal/day pooped on a high-fiber diet. That's with intakes of ~2100 and ~2350 kcal respectively, which is close to the number from which I estimated 100 kcal above, so maybe the first estimate from just the 5% number was a bit low.
Glad those numbers were in the abstract, since the full text is paywalled... It's possible this study was also done on some atypical patient group...
Just to come full circle, let's look at that 4.91 kcal/gram number again. A search suggests 14-16 ounces of poop per day is typical, with at least two sources around 14 ounces, or ~400 grams. (AI slop was strong here too, with one including a completely made up table of "studies" that was summarized as 100-200 grams/day). If we believe 400 grams/day of poop, then 4.91 kcal/gram would be almost 2000 kcal/day, which is very clearly ludicrous! So that number was likely some unrelated statistic regurgitated by the AI. I found that number in at least 3 of the slop pages I waded through in my initial search.
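To make the arithmetic above explicit, here's a quick sketch in Python (every input is one of the rough estimates from this thread, not a measured value):

# Back-of-the-envelope numbers from this thread; all are loose estimates.
daily_intake_kcal = 2250        # Mayo Clinic maintenance estimate used above
fecal_loss_fraction = 0.05      # ~5% of ingested calories lost in stool/urine
print(daily_intake_kcal * fecal_loss_fraction)      # ~112 kcal/day, i.e. "very roughly 100"

stool_grams_per_day = 400       # ~14 oz/day converted to grams
claimed_kcal_per_gram = 4.91    # the suspect figure from the AI-slop pages
print(stool_grams_per_day * claimed_kcal_per_gram)  # ~1964 kcal/day, clearly ludicrous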

Hi, it's Representative Jasmine Crockett.
I really hope you read this -- not for me, but for my friend.
People are scared, frustrated & downright mad about all that is being taken away from us.
Some people are just sitting around and letting it happen,
but not Esther.
Esther Kim Varet is running to take back one of the most competitive congressional seats in the country -- from a MAGA Republican.
We need more people like her to provide REAL repre…

@hanno@mastodon.social
2025-06-08 09:22:05

In case anyone here has connections with the Python team: can you please tell them to update their docs on XML security? The way it is is quite misleading, and it's been annoying me for a while. I raised this a while ago in their issue tracker, but it got no reaction whatsoever. github.com/python/cpython/issu

@ruari@velocipederider.com
2025-07-08 08:39:50

Credit to "SykkelMafiaen" for this. It is hosted on Instagram but given many here are adverse to that site I do not feel too bad re-hosting. Plus it is a video of me. 🤷
Also thanks to @… for tracking it down.

Video of a tallbike with tiny wheels cycling down a hill. It starts with the text "Have you seen a bike like this before".

The video rewinds and the text shows, "Don't blow over in the wind 😂" with a 😎 over the face of the rider.

Lots if "Wheee" and other silly noises in the background.
@AthanSpod@social.linux.pizza
2025-07-06 10:49:37

Re: zeroes.ca/@broadwaybabyto/1148
Yeah, I still do most of my weekly shopping at one of the big UK supermarkets (Tesco currently, switched from Sainsbury's when they just Would Not Stop advertising on GB News), because I only have so many…

@mgorny@social.treehouse.systems
2025-08-06 12:42:23

Lately I've been approached by a #UNICEF volunteer. I wanted to help. It turned out the only way to do that was… to sign a "direct debit" form. Of course, I can't imagine giving someone my PII in the middle of the street and signing authorizations like that. She was particularly insistent too; I managed to get her to tell me that I can get the account number for a regular wire transfer from the website.
Well, I understand the logic there. They need recurring donations, not one-time ones. A lot of people would just forget about it after talking to her. However, if they fill in the form there, they will have to put in active effort to cancel it afterwards. And if I understood her correctly, she suggested you can cancel anytime by calling *them* — effectively giving them an additional opportunity to convince you not to. Still, I am dismayed by such psychological games.

@nerb@techhub.social
2025-07-06 20:51:02

“Do not worry I just wanted to thank you for giving me life” said the slightly translucent blob as it floated in the #gravy
“How did I do that!” exclaimed Margaret.
“Easy,” said the blob. “You recreated the conditions needed for life to form when you mixed the flour and fats and oils. The water, amino acids and carbohydrates combined to form complex molecules. The heat accelerated the …

@midtsveen@social.linux.pizza
2025-06-05 19:37:52

Every day I use the #Internet it gets more crap. First we had ads popping up everywhere. Then Google came in and crushed all the good stuff. Now we’re stuck with this AI slop no one asked for.
Remember when pages actually loaded fast? Not bloated with trackers and nonsense. When forums were real communities, not spam factories. When you could browse without being followed by cookies.…

@groupnebula563@mastodon.social
2025-07-07 04:33:25

not sure if this counts but:
“Does anyone know how to set up an email filter for, say, ‘how do you type with boxing gloves on’? I asked The Cheat, but he lost me at the part where I had to do not-typing.”
#HashTagGames #FilmCharactersViralPost (strong bad isn’t exact…

@tiotasram@kolektiva.social
2025-08-04 15:49:00

Should we teach vibe coding? Here's why not.
Should AI coding be taught in undergrad CS education?
1/2
I teach undergraduate computer science labs, including for intro and more-advanced core courses. I don't publish (non-negligible) scholarly work in the area, but I've got years of craft expertise in course design, and I do follow the academic literature to some degree. In other words, I'm not the world's leading expert, but I have spent a lot of time thinking about course design, and consider myself competent at it, with plenty of direct experience in what knowledge & skills I can expect from students as they move through the curriculum.
I'm also strongly against most uses of what's called "AI" these days (specifically, generative deep neural networks as supplied by our current cadre of techbros). There are a surprising number of completely orthogonal reasons to oppose the use of these systems, and a very limited number of reasonable exceptions (overcoming accessibility barriers is an example). On the grounds of environmental and digital-commons-pollution costs alone, using specifically the largest/newest models is unethical in most cases.
But as any good teacher should, I constantly question these evaluations, because I worry about the impact on my students should I eschew teaching relevant tech for bad reasons (and even for good reasons). I also want to make my reasoning clear to students, who should absolutely question me on this. That inspired me to ask a simple question: ignoring for one moment the ethical objections (which we shouldn't, of course; they're very stark), at what level in the CS major could I expect to teach a course about programming with AI assistance, and expect students to succeed at a more technically demanding final project than a course at the same level where students were banned from using AI? In other words, at what level would I expect students to actually benefit from AI coding "assistance?"
To be clear, I'm assuming that students aren't using AI in other aspects of coursework: the topic of using AI to "help you study" is a separate one (TL;DR its gross value is not negative, but it's mostly not worth the harm to your metacognitive abilities, which AI-induced changes to the digital commons are making more important than ever).
So what's my answer to this question?
If I'm being incredibly optimistic, senior year. Slightly less optimistic, second year of a masters program. Realistic? Maybe never.
The interesting bit for you-the-reader is: why is this my answer? (Especially given that students would probably self-report significant gains at lower levels.) To start with, this paper, in which experienced developers thought that AI assistance sped up their work on real tasks when in fact it slowed it down (arxiv.org/abs/2507.09089), is informative. There are a lot of differences in task between experienced devs solving real bugs and students working on a class project, but it's important to understand that we shouldn't have a baseline expectation that AI coding "assistants" will speed things up in the best of circumstances, and we shouldn't trust self-reports of productivity (or the AI hype machine in general).
Now we might imagine that coding assistants will be better at helping with a student project than at helping with fixing bugs in open-source software, since it's a much easier task. For many programming assignments that have a fixed answer, we know that many AI assistants can just spit out a solution based on prompting them with the problem description (there's another elephant in the room here to do with learning outcomes regardless of project success, but we'll ignore this one too; my focus here is on the project complexity students can reach, not learning outcomes). My question is about more open-ended projects, not assignments with an expected answer. Here's a second study (by one of my colleagues) about novices using AI assistance for programming tasks. It showcases how difficult it is to use AI tools well, and some of the stumbling blocks that novices in particular face.
But what about intermediate students? Might there be some level where the AI is helpful because the task is still relatively simple and the students are good enough to handle it? The problem with this is that as task complexity increases, so does the likelihood of the AI generating (or copying) code that uses more complex constructs which a student doesn't understand. Let's say I have second-year students writing interactive websites with JavaScript. Without a degree of care that those students don't yet know how to apply, the AI is likely to suggest code that depends on several different frameworks, from React to jQuery, without actually setting up or including those frameworks, and of course those students would be way out of their depth trying to do that. This is a general problem: each programming class carefully limits the specific code frameworks and constructs it expects students to know based on the material it covers. There is no feasible way to limit an AI assistant to a fixed set of constructs or frameworks, using current designs. There are alternate designs where this would be possible (like AI search through adaptation from a controlled library of snippets) but those would be entirely different tools.
So what happens on a sizeable class project where the AI has dropped in buggy code, especially if it uses code constructs the students don't understand? Best case, they understand that they don't understand and re-prompt, or quickly ask for help from an instructor or TA who helps them get rid of the stuff they don't understand and re-prompt or manually add stuff they do. Average case: they waste several hours and/or sweep the bugs partly under the rug, resulting in a project with significant defects. Students in their second and even third years of a CS major still have a lot to learn about debugging, and usually have significant gaps in their knowledge of even their most comfortable programming language. I do think that regardless of AI we as teachers need to get better at teaching debugging skills, but the knowledge gaps are inevitable because there's just too much to know. In Python, for example, the LLM is going to spit out yields, async functions, try/finally, maybe even something like a while/else, or with recent training data, the walrus operator. I can't expect even a fraction of 3rd-year students who have worked with Python since their first year to know about all these things, and based on how students approach projects where they have studied all the relevant constructs but have forgotten some, I'm not optimistic that these things will magically become learning opportunities. Student projects are better off working with a limited subset of full programming languages that the students have actually learned, and using AI coding assistants as currently designed makes this impossible. Beyond that, even when the "assistant" just introduces bugs using syntax the students understand, even through their 4th year many students struggle to understand the operation of moderately complex code they've written themselves, let alone code written by someone else. Having access to an AI that will confidently offer incorrect explanations for bugs will make this worse.
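To make that concrete with a purely hypothetical example, here's the kind of terse but valid Python an assistant might hand back for a simple word-search task; every construct here is real, but a second- or third-year student may never have seen yield from, while/else, or the walrus operator:

def read_words(lines):
    # Generator function: produces words lazily via `yield from`.
    for line in lines:
        yield from line.split()

def find_word(lines, target):
    words = read_words(lines)
    while (word := next(words, None)) is not None:  # walrus operator
        if word == target:
            print(f"found {target!r}")
            break
    else:  # while/else: runs only if the loop finishes without hitting `break`
        print(f"{target!r} not found")

find_word(["the quick brown fox", "jumps over the lazy dog"], "lazy")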
To be sure a small minority of students will be able to overcome these problems, but that minority is the group that has a good grasp of the fundamentals and has broadened their knowledge through self-study, which earlier AI-reliant classes would make less likely to happen. In any case, I care about the average student, since we already have plenty of stuff about our institutions that makes life easier for a favored few while being worse for the average student (note that our construction of that favored few as the "good" students is a large part of this problem).
To summarize: because AI assistants introduce excess code complexity and difficult-to-debug bugs, they'll slow down rather than speed up project progress for the average student on moderately complex projects. On a fixed deadline, they'll result in worse projects, or necessitate less ambitious project scoping to ensure adequate completion, and I expect this remains broadly true through 4-6 years of study in most programs (don't take this as an endorsement of AI "assistants" for masters students; we've ignored a lot of other problems along the way).
There's a related problem: solving open-ended project assignments well ultimately depends on deeply understanding the problem, and AI "assistants" allow students to put a lot of code in their file without spending much time thinking about the problem or building an understanding of it. This is awful for learning outcomes, but also bad for project success. Getting students to see the value of thinking deeply about a problem is a thorny pedagogical puzzle at the best of times, and allowing the use of AI "assistants" makes the problem much much worse. This is another area I hope to see (or even drive) pedagogical improvement in, for what it's worth.
1/2

@timbray@cosocial.ca
2025-07-04 23:22:03

O'Reilly: GenAI has adopted a colonialist business model.
linkedin.com/posts/timo3_proud

@stefan@gardenstate.social
2025-06-05 17:37:01

Aaand YouTube is doing this again for me.

 Ad blockers are not allowed on YouTube

    It looks like you may be using an ad blocker.
    Ads allow YouTube to be used by billions worldwide.
    You can go ad-free with YouTube Premium, and creators can still get paid from your subscription.

@pixelcode@social.tchncs.de
2025-06-05 22:25:31
Content warning: SimpleX founder approving of right-wing extremism

Today, I learned that the founder of the #SimpleX messenger is a #ClimateChange-denying #Covid conspiracy-theorist, anti-vaxxer and

Twitter profile of Evgeny Poberezkin, the founder of SimpleX and creator of the Ajv JSON validator. Viewed via the Nitter server XCancel. On 30 May, Evgeny retweeted a post from Andrew Bridgen which reads:

“It was a military operation across the world from the development of the virus and so-called vaccines to the delivery of the propaganda narrative to increase compliance.”

Bridgen's tweet quotes an image shared by Liz Churchill, reading: “Dutch government official admits Covid pandemic …
On 28 May, Evgeny retweeted a post from Sayer Ji reading:

“Americans Are Fed Up! In just 24 hours, over 20,000 emails have been sent to Congress demanding an investigation into unauthorized geoengineering and atmospheric spraying. People are taking a stand for transparency, accountability, and the right to clean skies.”
On 16 March, Evgeny Poberezkin retweeted JD Vance's screenshot of Donald Trump's Truth Social post with a picture showing three presidential photos:

2017 – 2021: happy Trump
2021 – 2025: a robot pen faking Biden's signature
2025 – present: mad Trump
On 26 February, Evgeny Poberezkin retweeted a post from the Twitter profile “Bill Gates is a psycho”, reading:

“That’s where the money is. There is no consensus in Science, it’s about facts, and if you get down to the cold hard facts – climate change is not happening – there is no man made Global Warming now & there hasn’t been any in the past. I resent you calling me a ‘Denier” this is a word meant to put me down - there is NO significant Global Warming. John Coleman is a Meteorological exp…

@nemorosa@mastodon.nu
2025-06-05 10:10:51

#PennedPossibilities 690 — SC POV: What do you fear losing the most?
This question is for the MC's brother, I believe.
SC: My brother.
It terrifies me to think that my brother could become completely corrupted and lose his true Ethereal essence to Ianthe's manipulation. You see, The Unmaking is not just simple indoctrination, it is the obliteration of free will,…

@shoppingtonz@mastodon.social
2025-08-06 07:38:03

You know the saying
'Those who dislike you for the right reasons are the people you should worry about'
I don't quite agree:
THOSE ARE PROBABLY THE ONLY PEOPLE WHO CAN TELL YOU THE TRUTH THAT YOU YOURSELF ARE PROBABLY NOT AWARE OF
You need their help!
i.e. if I ... I dunno, post something bad, even if I delete it and someone tells me they read it and explains why, maybe I'll get mad, maybe I'll mute them, but their truth will stick with me if it…

@lapizistik@social.tchncs.de
2025-06-04 14:09:38

Dear „fundamental Christians“, for you the proof that the climate catastrophe is man-made is easy:
• God said to Noah: “I have set my rainbow in the clouds, and it will be the sign of the covenant between me and the earth.” (Gen 9, 13)
• You are fighting rainbows, calling them and those who praise them evil.
• So you cut the contract that was sealed between us.
• Therefore there will be flood – and worse. One does not cut the gay ribbon between yourself and God.
• It i…

@ruth_mottram@fediscience.org
2025-06-03 20:30:50

I'm afraid this is true. I already struggle to raise enough money to keep the very talented young scientists I have working with me (from many different countries already); finding funding for more will be difficult, and why should US scientists essentially queue-jump just because of their nationality? I know that sounds harsh, and I have very good US colleagues, but I have to be fair to all, regardless of where they come from. OTOH, if you are a US scientist and you are interested in exploring possibilities in Denmark, give me a shout and we'll see what we can do.
mastodon.world/@davidho/114620
davidho@mastodon.world - Unless you’re a Nobel laureate, the brain drain will be away from science and not to other countries.
nytimes.com/2025/06/03/us/trum

@volephd@fediscience.org
2025-08-06 07:18:08

This is why I got a #FrameworkLaptop !
I spent maybe 10 minutes opening up and closing my #Framework16 to snap a picture of my mainboard for a potential warranty case.
It took me 10 minutes because I was hyper-paranoid about missing a step, but you can realistically do it in under 5 minutes.
I don't like tinkering with hardware, but I'm not tech-illiterate either, so this is a good middle ground of what is doable.

I’m going to be very honest and clear.
I am fully preparing myself to die under this new American regime.
That’s not to say that it’s the end of the world. It isn’t.
But I am almost 50 years old. It will take so long to do anything with this mess that this is the new normal for *me*.
I do hope a lot of you run. I hope you vote, sure.
Maybe do a general strike or rent strike.
All great!
But I spent the last week reading things and this is not, for ME…

@hey@social.nowicki.io
2025-06-05 05:26:30

There should be a law that says a smart watch MUST at all times, regardless of anything, at some place on the screen, show the time.
It's beyond me that both Apple and Garmin don't follow this rule. I walk somewhere, right hand busy holding something, and this thing decides to show me some notification for the next 30 seconds.
Well, fuck you, watch! As the name suggests (Uhr, zegarek), your main job is to show the time. Not an AliExpress notification about my package leaving the tarif…

@christydena@zirk.us
2025-07-02 02:50:31

❓ A question for screenwriters/writers!
👉 When I say "screenwriting conventions" or "writing conventions," do you have negative associations with the term "convention"? As in, "convention" is bad. If not, what associations do you have? Also, if you could let me know if you're a paid screenwriter, too!
Thank you! 🙂

@jae@mastodon.me.uk
2025-05-29 09:24:35

I have to say I was pleasantly surprised to see how easy Pirate Ship is for getting shipping labels at reduced rates. Their customer service is top-notch. My work had been paying for a shipping provider and I was able to drop their prices considerably.
Not an ad, just a fan of how easy they made #shipping.

@axbom@axbom.me
2025-05-30 19:37:19

If you don’t have a mute button on your microphone, the next best thing is software that will mute your microphone at the system level. MicDrop is an example of an app I’ve tried for this (Mac only): https://getmicdrop.com

You know it’s working when the meeting software starts throwing an “error” and saying it’s not picking up sound from your microphone.

@rasterweb@mastodon.social
2025-07-29 21:18:02

Canceled the Google account for my small business... Less money going to AI bullshit and Google's evil practices.
And I let them know.
(I had to click through a lot of screens where they tried to get me to stay, offered discounts, or offered to archive data for a few dollars a month. No thanks.)
#noAI

Let us know why you are cancelling your subscription (Select all that apply).
[ ] Lacked some features I needed
[ ] Business or organization shut down
[ ] Don't use it enough
[x] Will use another productivity tool
[ ] Difficult to use or set up
[ ] Cost reduction
[ ] Creating a Workspace account to replace this one
[x] I do not support the use of AI.

Do not include personal information

Next

By continuing, you agree Google uses your answers, account & system info to improve services, per…

@inthehands@hachyderm.io
2025-05-28 16:33:05

So look, if you talk to me about the job threat from the things we currently call “AI,” well…
…if where you’re going with that is “The concentration of wealth is an existential crisis! Establish UBI! 99% marginal tax rate! Capital gains tax! Wealth tax! Abolish billionaires!” then yes, I’m •all• ears. My pitchfork is already sharpened.
But if where you’re going is “Get in on it now while you still can! Buy the AI vendors’ products! Drink radium for that youthful glow!” then…kthanksbye. I am not going to be a vector for your marketing propaganda, no matter how agitated you are.
/end

@tiotasram@kolektiva.social
2025-07-28 13:06:20

How popular media gets love wrong
Now a bit of background about why I have this "engineered" model of love:
First, I'm a white straight cis man. I've got a few traits that might work against my relationship chances (e.g., neurodivergence; I generally fit pretty well into the "weird geek" stereotype), but as I was recently reminded, it's possible my experience derives more from luck than other factors, and since things are tilted more in my favor than most people on the planet, my advice could be worse than useless if it leads people towards strategies that would only have worked for someone like me. I don't *think* that's the case, but it's worth mentioning explicitly.
When I first started dating my now-wife, we were both in graduate school. I was 26, and had exactly zero dating/romantic experience through that point in my life. In other words, a pretty stereotypical "incel," although I definitely didn't subscribe to incel ideology at all. I felt lonely, and vaguely wanted a romantic relationship (I'm neither aromantic nor asexual), but had never felt socially comfortable enough to pursue one before. I don't drink and dislike most social gatherings like parties or bars; I mostly hung around the fringes of the few college parties I attended, and although I had a reasonable college social life in terms of friends, I didn't really do anything to pursue romance, feeling too awkward to know where to start. I had the beginnings of crushes in both high school and college, but never developed a really strong crush, probably correlated with not putting myself in many social situations outside of close all-male friend gatherings. I never felt remotely comfortable enough to act on any of the proto-crushes I did have. I did watch porn and masturbate, so one motivation for pursuing a relationship was physical intimacy, but loneliness was as much of a motivating factor, and of course the social pressure to date was a factor too, even though I'm quite contrarian.
I'm lucky in that I had some mixed-gender social circles already like intramural soccer and a graduate-student housing potluck. Graduate school makes a *lot* more of these social spaces accessible, so I recognize that those not in school of some sort have a harder time of things, especially if like me they don't feel like they fit in in typical adult social spaces like bars.
However, at one point I just decided that my desire for a relationship would need action on my part and so I'd try to build a relationship and see what happened. I worked up my courage and asked one of the people in my potluck if she'd like to go for a hike (pretty much clearly a date but not explicitly one; in retrospect not the best first-date modality in a lot of ways, but it made a little more sense in our setting where we could go for a hike from our front door). To emphasize this point: I was not in love with (or even infatuated with) my now-wife at that point. I made a decision to be open to building a relationship, but didn't follow the typical romance story formula beyond that. Now of course, in real life as opposed to popular media, this isn't anything special. People ask each other out all the time just because they're lonely, and some of those relationships turn out fine (although many do not).
I was lucky in that some aspects of who I am and what I do happened to be naturally comforting to my wife (natural advantage in the "appeal" model of love) but of course there are some aspects of me that annoy my wife, and we negotiate that. In the other direction, there's some things I instantly liked about my wife, and other things that still annoy me. We've figured out how to accept a little, change a little, and overall be happy with each other (though we do still have arguments; it's not like the operation/construction/maintenance of the "love mechanism" is always perfectly smooth). In particular though, I approached the relationship with the attitude of "I want to try to build a relationship with this person," at first just because of my own desires for *any* relationship, and then gradually more and more through my desire to build *this specific* relationship as I enjoyed the rewards of companionship.
So for example, while I think my wife is objectively beautiful, she's also *subjectively* very beautiful *to me* because having decided to build a relationship with her, I actively tried to see her as beautiful, rather than trying to judge whether I wanted a relationship with her based on her beauty. In other words, our relationship is more causative of her beauty-to-me than her beauty-to-me is causative of our relationship. This is the biggest way I think the "engineered" model of love differs from the "fire" and "appeal" models: you can just decide to build love independent of factors we typically think of as engendering love (NOT independent of your partner's willingness to participate, of course), and then all of those things like "thinking your partner is beautiful" can be a result of the relationship you're building. For sure those factors might affect who is willing to try building a relationship with you in the first place, but if more people were willing to jump into relationship building (not necessarily with full commitment from the start) without worrying about those other factors, they might find that those factors can come out of the relationship instead of being prerequisites for it. I think this is the biggest failure of the "appeal" model in particular: yes you *do* need to do things that appeal to your partner, but it's not just "make myself lovable" it's also: is your partner putting in the effort to see the ways that you are beautiful/lovable/etc., or are they just expecting you to become exactly some perfect person they've imagined (and/or been told to desire by society)? The former is perfectly possible, and no less satisfying than the latter.
To cut off my rambling a bit here, I'll just add that in our progress from dating through marriage through staying-married, my wife and I have both talked at times explicitly about commitment, and especially when deciding to get married, I told her that I knew I couldn't live up to the perfect model of a husband that I'd want to be, but that if she wanted to deepen our commitment, I was happy to do that, and so we did. I also rearranged my priorities at that point, deciding that I knew I wanted to prioritize this relationship above things like my career or my research interests, and while I've not always been perfect at that in my little decisions, I've been good at holding to that in my big decisions at least. In the end, *once we had built a somewhat-committed relationship*, we had something that we both recognized was worth more than most other things in life, and that let us commit even more, thus getting even more out of it in the long term. Obviously you can't start the first date with an expectation of life-long commitment, and you need to synchronize your increasing commitment to a relationship so that it doesn't become lopsided, which is hard. But if you take the commitment as an active decision and as the *precursor* to things like infatuation, attraction, etc., you can build up to something that's incredibly strong and rewarding.
I'll follow this up with one more post trying to distill some advice from my ramblings.
#relationships #love

"I got rid of – just one I got rid of the other night, you buy a house, they have a faucet in the house, Joe, and the faucet the water doesn’t come out. They have a restrictor. You can’t – in areas where you have so much water they don’t know what to do with it. Uh, you have a shower head the shower doesn’t uh, the shower doesn’t, you think it’s not working. It is working. The water’s dripping out and that’s no good for me. I like this hair lace and [sic] – I like that hair nice and wet…

@pre@boing.world
2025-06-26 17:04:54
Content warning: UKPol, Palestine Action, Email to my MP

Dear Emily Thornberry,
I don't usually bother to write to you on most issues because I figure there is pretty much no point communicating with a whipped MP in a safe seat under first past the post. Such an MP has no reason to listen to their constituents at all, and is entirely a tool of the party leadership.
I make an exception today since I hear your government is about to classify Palestine Action as a terrorist group. Despite them being peaceful, non-violent, and dedicated entirely to preventing the greater crime of the ongoing genocide of Gazan Palestinians.
This is obviously a gross overreaction and a completely unjustifiable act designed not to prevent domestic terrorism but to cover up British forces and UK government involvement and collaboration with the genocide in Gaza.
If we are taking suggestions for groups to ban as terrorists even though they aren't terrorists, I would like to suggest the Labour Party! The party has helped facilitate a genocide abroad, and continues to supply the perpetrators with arms and intelligence to aid their actions.
I don't expect you to take that suggestion seriously, but maybe Reform will take it seriously when they get elected in a few years and I suggest it again to them. After all, a precedent will have been set that groups which aren't terrorists can be banned under anti-terror legislation anyway. Democracy will have already been eroded.
I was ready to be disappointed by this Labour government, but I confess that the level of gut-wrenching visceral disgust I am experiencing at them surpassed all my wildest expectations. Taking money from the disabled to buy new war-planes from a fascist US president while abetting a genocide in Gaza makes me wonder if Reform wouldn't be better in the end anyway. At least they might do electoral reform and nationalize the water companies.
Labour's only hope, the country's only hope, is to remove Starmer. I wish you had won that leadership election instead of him.
Anyway, as I say, I don't expect it to make any difference at all because under this election system even MPs in safe seats are nothing but tools of the party leadership and the party leadership seems determined. But I thought I'd let you know that I see you. I see what you are doing.
I support Palestine Action more than I support this government. Let me know where I should hand myself in for my "crime".
Yours sincerely,
Adam

@niklaskorz@rheinneckar.social
2025-07-02 20:55:12

Turn of events this evening:
- wanted to try out @…
- v1.3.4 doesn't seem to have a #macOS binary in the release but release notes mention that 1.3.4 includes upstreamed fixes from MacPorts
- look at the

@joergi@chaos.social
2025-07-02 09:01:58

hey pixelfed users and pixelfed admins - is the "retoot" / "repost" function working for you?
For me it seems not to; it doesn't matter which app I'm using or whether I'm using the web interface.
I found this closed bug...
added my comment.

@nelson@tech.lgbt
2025-06-01 18:35:50

Calamus 20 I saw in Louisiana a live-oak growing
What a heartaching poem of loneliness and the need for the love of another! Just wonderful. I understand now why this poem is so popular, particularly as a gay poem. It is full of meaning and is quite clear about it.
I wondered how it could utter joyous leaves, standing alone there, without its friend, its lover near—for I knew I could not
There's a more cerebral interpretation of this work, particularly if you understand "leaves" to mean "pages in my poetry book Leaves of Grass". Whitman talking about his own poetic inspiration from lovers.
Which is well enough. But I'm more interested in Whitman's expressed need for "manly love". Which is clearly on his mind constantly:
my own dear friends ... I believe lately I think of little else than of them
Also Whitman's own eroticization of nature and himself. Here speaking of the tree,
its look, rude, unbending, lusty, made me think of myself

@unchartedworlds@scicomm.xyz
2025-07-24 07:30:11
Content warning: a nice thing - yesterday's BiCon pre-meet

Hosted a BiCon pre-meet yesterday, online. Conveniently there were exactly 12 people there for most of it (not counting me), perfect for dividing into threes! I kept switching the groups so that people could meet different people.
We talked about how we'd each like BiCon to be, and how we could make it more likely to turn out that way.
Top tips: get enough sleep, eat enough food, and don't try to do everything!
Then we also talked about what contribution we might like to make - though I also said, just being there and being friendly and making BiCon more varied is a contribution in itself :-)
Several of the people who'd come along turned out to be already signed up to offer workshop sessions, so we heard a little bit about those.
Two tasks currently available if you want one are (a) keeping an eye on the Zoom setup for the hybrid events, and (b) leafleting at Pride on Saturday, so that more people know about BiCon for Sunday. There are usually also opportunities to assist with being welcoming at reception.
In-person BiCon starts tomorrow, and runs Friday till Sunday. The venue is a couple of buildings belonging to the girls' high school, in between the Forest and the Arboretum. I tagged along for a site visit the other day and I think it's pretty good for air quality.
Apparently about 70 people have booked so far. It's also possible to buy a ticket on the day, so that might not be the final total.
As I reminded people last night, you don't have to be bi to come to BiCon! And if you _are_ bi, you don't have to be any particular amount of bi :-)
#BiCon #Nottingham

@nemorosa@mastodon.nu
2025-08-04 16:52:18

Slightly irritated. I went into a DIY store to buy fine concrete. I asked a couple of questions. The shop assistant kept turning to my husband, who stood up for me and kept insisting it was my project.
My husband was even more annoyed than I was when we left, bless his heart. I was actually less annoyed when he told the clerk, "this is not OK, you do not treat women like this".
Perhaps the clerk will remember that the next time he gets a female customer. I hope.

‪@todbot@mastodon.social‬
2025-07-24 22:54:21

If you're looking for a more complete solution to the "visual diff for KiCad" problem, and you have a working Node Typescript setup, try out "typecad-gitdiff". It's not for me, but it might be for you! npmjs.com/package/@typecad/typ

@sonnets@bots.krohsnest.com
2025-06-01 11:25:11

Sonnet 111 - CXI
O! for my sake do you with Fortune chide,
The guilty goddess of my harmful deeds,
That did not better for my life provide
Than public means which public manners breeds.
Thence comes it that my name receives a brand,
And almost thence my nature is subdued
To what it works in, like the dyer's hand:
Pity me, then, and wish I were renewed;
Whilst, like a willing patient, I will drink
Potions of eisell 'gainst my…

@cjust@infosec.exchange
2025-07-31 15:43:57

A good friend of mine has the Muppet character "Beaker" as his avatar. For reasons.
He offers me advice. I offer him advice. We chat. These are #ChatsWithBeaker
#PaloAlto #CyberArk

-> huh palo stock is down today.
<- was overvalued anyway
-> and when you blow $20 billion investors will get....grumpy
-> I think cyberark is a good product. However, PA will purchase them, fail to integrate
-> them successfully into their product stack, make the product shittier and charge 
-> more for it
<- that's the consensus of my team as well.
<- We're in the midst of a Cyberark roll out.
-> Not to mention they will rename it to be something like cortex donkey fuck or something
<- Cortex…

@tezoatlipoca@mas.to
2025-06-26 17:34:19

angry sales guy: hey, what gives. I was reading the minutes yesterday and you recorded `and was written for sales (if they can read..?)`.. that's not professional.
me: well, I write all the customer facing stuff.
asg: ... so?
me: and I write all of the training material for all teams downstream of engineering
asg: .. and?
me: so what I write is arguably the most important, most exciting content that engineering produces.
asg: Look, I don't see-

@midtsveen@social.linux.pizza
2025-06-29 18:01:44

Yeah, call me a fucking baby anarchist, I don’t give a shit, because that’s exactly what I am. But don’t you dare shit on me just because I chose to watch a clip from Noam Chomsky.
Rudolf Rocker is the one who really inspires me, not Chomsky, so don’t fucking trash me just for watching a clip from Chomsky.
You do you and I’ll do me. Cool? Cool.
#Anarchism

Young adult wearing a fur ushanka and winter jacket, calm and introspective, red and black background, dressed for cold weather.
An anarchist flag hanging on a white textured wall with a bisexual pride flag suspended from the ceiling. A window is partially visible on the left side of the image.

@davidaugust@mastodon.online
2025-06-29 18:53:44

Theoretically, a specific federal court could say the feds can’t kill me, or you, or anyone, where we stand for saying “trump is a weak child.”
And such restriction on federal use of force might only apply within that federal circuit.
If we cross a boundary, an injunction against our summary execution may not hold.
That’s the illogical but real conclusion of scotus’ bad decision this week as I understand it.
Can the dead file suit against their already-carried out ex…

map of the boundaries of federal circuits

@tiotasram@kolektiva.social
2025-07-03 15:21:37

#ScribesAndMakers for July 3: When (and if) you procrastinate, what do you do? If you don't, what do you do to avoid it?
I'll swap right out of programming to read a book, play a video game, or watch some anime. Often got things open in other windows so it's as simple as alt-tab.
I've noticed recently I tend to do this more often when I have a hard problem to solve that I'm not 100% sure about. I definitely have cycles of better & worse motivation and I've gotten to a place where I'm pretty relaxed about it instead of feeling guilty. I work how I work, and that includes cycles of rest, and that's enough (at least, for me it has been so far, and I'm in a comfortable career, married with 2 kids).
Some projects ultimately lose steam and get abandoned, and I've learned to accept that too. I learn a lot and grow from each project, so nothing is a true waste of time, and there remains plenty of future ahead of me to achieve cool things.
The procrastination does sometimes impact my wife & kids, and that's something I do sometimes feel bad about, but I think I keep that in check well enough, and for things my wife worries about, I usually don't procrastinate those too much (used to be worse about this).
Right now I'm procrastinating a big work project by working on a hobby project instead. The work project probably won't get done by the start of the semester as a result. But as I remind myself, my work doesn't actually pay me to work during the summer, and things will be okay without the work project being finished until later.
When I want to force myself into a more productive cycle, talking to people about project details sometimes helps, as does finding some new tech I can learn about by shoehorning it into a project. Have been thinking about talking to a rubber duck, but haven't motivated myself to try that yet, and I'm not really in doldrums right now.

@philip@mastodon.mallegolhansen.com
2025-07-31 04:21:13

@… For better or worse, we each have different ways of coping with the social pressures of not fitting in.
I remember my mother once telling me "Philip, you have to pick your battles" and without missing a beat responding with "I pick them all". That's who I was.
But some people... it breaks them not to be liked, they don't manage t…

@AimeeMaroux@mastodon.social
2025-06-27 02:32:39
Content warning:

Some of you may not know this but I am a biologist with a uni diploma and everything in my other life. And I want to do something for a living that makes the world a little better.
Would you be interested in hearing me give a talk about homosexuality in animals or the genetics of queerness? I've been told my voice is annoying, so don't miss out!
#poll

@rberger@hachyderm.io
2025-05-24 20:03:14

@…
Not sure why it won't let me respond to your comment directly..
If you look at my feed at all, you will see I am far from a "Republican"
If one watches PBS Newshour with any critical media literacy, their drift to the right is clear. They are always doing the "both sides" thing, treating Trump and his Republican minions as normal, etc. But the Republicans (who I am definitely not one of) have been working on this since Reagan, and they really got going in 2005: democracynow.org/2005/5/12/a_r
markey.senate.gov/news/press-r

@ginevra@hachyderm.io
2025-07-01 07:01:43

#IndieGames I played and finished in June. Mostly demos, due to NextFest. Strange Horticulture is the only full game I played, recommended as a chill experience.
The top 2 rows of demos I'll likely buy near release (depends on pricing, of course!) The next row I'll likely buy at some stage, the final 2 rows ... perhaps not for me.
Feel free to ask for more details if you're interested in any of these!

@detondev@social.linux.pizza
2025-07-28 19:00:54

i recently left fedi for a bit cause i was feeling like shit and unable to do anything but spread that energy, and during that time i cobbled this together off a few loosely connected notes and thoughts for an unreleased project of mine. i dislike it but dont really know what to do about that anymore, so here u go. all words treasured.
#art

I'm not explaining this stream-of-consciousness-ass shit, I couldn't see straight making it; if you're blind and wanna know what all its components are, ask me in a reply.

@aral@mastodon.ar.al
2025-06-22 16:19:32

“The band said: ‘You know what’s “not appropriate” Keir?!’ They then used an expletive to accuse the prime minister of arming a genocide.
Israel has strongly denied allegations of genocide relating to the ongoing war in Gaza.”
– BBC
israel has strongly denied that it’s raining, have they, BBC? So tell me, what do you see when you look out the fucking window?
#bbc

@tgpo@social.linux.pizza
2025-05-25 18:27:30

If you're like me, you play multiple episodes in a row in #Jellyfin for #Roku and you're not always sure what the episode you've moved into is about.
So I've added a new button to the OSD that gives you the media overview text - the same content displayed on the detail scre…

Demo video showing a user clicking on an icon and a popup displaying the episode overview text.

@nemorosa@mastodon.nu
2025-06-05 09:38:32

#WritersCoffeeClub 5: Talk about something you’ve read that made you think, 'I wish I wrote that.'
Most anything of Sir Terry Pratchett, I have an immense admiration for his wit, his humour, his intelligence, and his writing. I'm not remotely funny, not in my opinion anyway, which makes me awestruck when I find someone who is.

@tiotasram@kolektiva.social
2025-07-28 13:55:54

How popular media gets love wrong
Okay, my attempt at (hopefully widely-applicable) advice about relationships based on my mental "engineering" model and how it differs from the popular "fire" and "appeal" models:
1. If you're looking for a partner, don't focus too much on external qualities, but instead ask: "Do they respect me?" "Are they interested in active consent in all aspects of our relationship?" "Are they willing to commit a little now, and open to respectfully negotiating deeper commitment?" "Are they trustworthy, and willing to trust me?" Finding your partner attractive can come *from* trusting/appreciating/respecting them, rather than vice versa.
2. If you're looking for a partner, don't wait for infatuation to start before you try building a relationship. Don't wait to "fall in love"; if you "fall" into love you could just as easily "fall" out, but if you build up love, it won't be so easy to destroy. If you're feeling lonely and want a relationship, pick someone who seems interesting and receptive in your social circles and ask if they'd like to do something with you (doesn't have to be a date at first). *Pursue active consent* at each stage (if they're not interested, ask someone else; this will be easier if you're not already infatuated). If they're judging you by the standards in point 1, this is doubly important.
3. When building a relationship, try to synchronize your levels of commitment & trust even as you're trying to deepen them, or at least try to be honest and accepting when they need to be out-of-step. Say things and do things that show your partner the things (like trust, commitment, affection, etc.) that are important in your relationship, and ask them to do the same (or ideally you don't have to ask if they're conscious of this too). Do these things not as a chore or a transaction when your partner does them, but because they're the work of building the relationship that you value for its own sake (and because you value your partner for themselves too).
4. When facing big external challenges to your commitment to a relationship, like a move, ensure that your partner has an appropriate level of commitment too, but then don't undervalue the relationship relative to other things in life. Everyone is different, but *to me*, my committed relationship has been far more rewarding than e.g., a more "successful" career would have been. Of course worth noting here that non-men are taught by our society to undervalue their careers & other aspects of their life and sacrifice everything for their partners, which is toxic. I'm not saying "don't value other things" but especially for men, *do* value romantic relationships and be prepared to make decisions that prioritize them over other things, assuming a partner who is comfortable with that commitment and willing to reciprocate.
Okay, this thread is complete for now, until I think of something else that I've missed. I hope this advice is helpful in some way (or at least not harmful). Feel free to chime in if you've got different ideas...
#relationships #love

@wrog@mastodon.murkworks.net
2025-06-30 08:08:57

Still wondering what that casting call for Poseidon looked like
"Must be able to stay underwater for 2 minutes while looking imposing."
or how that conversation between Jack Gwillim (a.k.a. Guy Who Got the Part) and his agent went
"No, this is good! You're gonna be starring right alongside Lawrence Fucking Olivier."
"There aren't any lines! How can Poseidon not have any lines? They're telling me I only have to do one day of shoo…

@losttourist@social.chatty.monster
2025-06-27 12:31:35

There's no Top of the Pops #TOTP tonight because some people are sitting in a field in Somerset, but of course I'm not going to let a little thing like that stop me from wittering on about Old Time Music.
Anyway I've just relistened (for the first time in about a bazillion years) to Ballroom Blitz by the Sweet. And however much we laugh about glam rock, you can't deny it's a glorious joyful piece of music.
#70sMusic #GlamRock

@shoppingtonz@mastodon.social
2025-07-05 07:30:17

qubes-os.org/doc/how-to-instal
Did not help me, but I'm trying to help myself... will I succeed like I did when I was troubleshooting why my dispXXXX didn't work for new stuff?
I succeeded with dispXXXX thingy...so maybe I'll succeed with insta…

@rasterweb@mastodon.social
2025-07-30 21:35:34

I found an even faster way to make it so people do not talk to me when I answer the phone. I just say:
"Thank you for calling Pete. How may Pete assist you today?"
And I usually can't even get the whole thing out before they hang up.
The old version of my script would often go 10 to 15 seconds before they hung up, now we're down to 5 to 10 seconds!

@samir@functional.computer
2025-06-27 05:48:43

@… It’s almost as if people were not lying when they told me I need a sleep cycle. :-p
Seriously, I am really glad it works for you. And TBH, I think I need to do the same. I have always cherished my alone time at night, but I’m discovering that alone time at 6am is just as good.

@tiotasram@kolektiva.social
2025-06-21 02:34:13

Why AI can't possibly make you more productive; long
#AI and "productivity", some thoughts:
Edit: fixed some typos.
Productivity is a concept that isn't entirely meaningless outside the context of capitalism, but it's a concept that is heavily inflected in a capitalist context. In many uses today it effectively means "how much you can satisfy and/or exceed your boss' expectations." This is not really what it should mean: even in an anarchist utopia, people would care about things like how many shirts they can produce in a week, although in an "I'd like to voluntarily help more people" way rather than an "I need to meet this quota to earn my survival" way. But let's roll with this definition for a second, because it's almost certainly what your boss means when they say "productivity", and understanding that word in a different (even if truer) sense is therefore inherently dangerous.
Accepting "productivity" to mean "satisfying your boss' expectations," I will now claim: the use of generative AI cannot increase your productivity.
Before I dive in, it's imperative to note that the big generative models which most people think of as constituting "AI" today are evil. They are 1: pouring fuel on our burning planet, 2: psychologically strip-mining a class of data laborers who are exploited for their precarity, 3: enclosing, exploiting, and polluting the digital commons, and 4: stealing labor from broad classes of people many of whom are otherwise glad to give that labor away for free provided they get a simple acknowledgement in return. Any of these four "ethical issues" should be enough *alone* to cause everyone to simply not use the technology. These ethical issues are the reason that I do not use generative AI right now, except for in extremely extenuating circumstances. These issues are also convincing for a wide range of people I talk to, from experts to those with no computer science background. So before I launch into a critique of the effectiveness of generative AI, I want to emphasize that such a critique should be entirely unnecessary.
But back to my thesis: generative AI cannot increase your productivity, where "productivity" has been defined as "how much you can satisfy and/or exceed your boss' expectations."
Why? In fact, what the fuck? Every AI booster I've met has claimed the opposite. They've given me personal examples of time saved by using generative AI. Some of them even truly believe this. Sometimes I even believe they saved time without horribly compromising on quality (and often, your boss doesn't care about quality anyways if the lack of quality is hard to measure or doesn't seem likely to impact short-term sales/feedback/revenue). So if generative AI genuinely lets you write more emails in a shorter period of time, or close more tickets, or something else along these lines, how can I say it isn't increasing your ability to meet your boss' expectations?
The problem is simple: your boss' expectations are not a fixed target. Never have been. In virtue of being someone who oversees and pays wages to others under capitalism, your boss' game has always been: pay you less than the worth of your labor, so that they can accumulate profit and thus more capital to remain in charge instead of being forced into working for a wage themselves. Sure, there are layers of management caught in between who aren't fully in this mode, but they are irrelevant to this analysis. It matters not how much you please your manager if your CEO thinks your work is not worth the wages you are being paid. And using AI actively lowers the value of your work relative to your wages.
Why do I say that? It's actually true in several ways. The most obvious: using generative AI lowers the quality of your work, because the work it produces is shot through with errors, and when your job is reduced to proofreading slop, you are bound to tire a bit, relax your diligence, and let some mistakes through. More than you would have let through if you were actually doing and taking pride in the work. Examples are innumerable and frequent, from journalists to lawyers to programmers, and we laugh at them ("haha, how stupid to not check whether the books the AI reviewed for you actually existed!") but on a deeper level, if we're honest, we know we'd eventually make the same mistake ourselves (bonus game: spot the swipe-typing typos I missed in this post; I'm sure there will be some).
But using generative AI also lowers the value of your work in another much more frightening way: in this era of hype, it demonstrates to your boss that you could be replaced by AI. The more you use it, and no matter how much you can see that your human skills are really necessary to correct its mistakes, the more it appears to your boss that they should hire the AI instead of you. Or perhaps retain 10% of the people in roles like yours to manage the AI doing the other 90% of the work. Paradoxically, the *more* you get done in terms of raw output using generative AI, the more it looks to your boss as if there's an opportunity to get enough work done with even fewer expensive humans. Of course, the decision to fire you and lean more heavily into AI isn't really a good one for long-term profits and success, but the modern boss did not get where they are by considering long-term profits. By using AI, you are merely demonstrating your redundancy, and the more you get done with it, the more redundant you seem.
In fact, there's even a third dimension to this: by using generative AI, you're also providing its purveyors with invaluable training data that allows them to make it better at replacing you. It's generally quite shitty right now, but the more use it gets by competent & clever people, the better it can become at the tasks those specific people use it for. Using the currently-popular algorithm family, there are limits to this; I'm not saying it will eventually transcend the mediocrity it's entwined with. But it can absolutely go from underwhelmingly mediocre to almost-reasonably mediocre with the right training data, and data from prompting sessions is both rarer and more useful than the base datasets it's built on.
For all of these reasons, using generative AI in your job is a mistake that will likely lead to your future unemployment. To reiterate, you should already not be using it because it is evil and causes specific and inexcusable harms, but in case like so many you just don't care about those harms, I've just explained to you why for entirely selfish reasons you should not use it.
If you're in a position where your boss is forcing you to use it, my condolences. I suggest leaning into its failures instead of trying to get the most out of it, and as much as possible, showing your boss very clearly how it wastes your time and makes things slower. Also, point out the dangers of legal liability for its mistakes, and make sure your boss is aware of the degree to which any of your AI-eager coworkers are producing low-quality work that harms organizational goals.
Also, if you've read this far and aren't yet of an anarchist mindset, I encourage you to think about the implications of firing 75% of the (at least white-collar) workforce in order to make more profit while fueling the climate crisis and in most cases also propping up dictatorial figureheads in government. When *either* the AI bubble bursts *or* the techbros get to live out the beginnings of their worker-replacement fantasies, there are going to be an unimaginable number of economically desperate people living in increasingly expensive times. I'm the kind of optimist who thinks that the resulting social crucible, though perhaps through terrible violence, will lead to deep social changes that effectively unseat from power the ultra-rich who continue to drag us all down this destructive path, and I think it's worth thinking now about what you might want the succeeding stable social configuration to look like so you can advocate towards that during points of malleability.
As others have said more eloquently, generative AI *should* be a technology that makes human lives on average easier, and it would be were it developed & controlled by humanists. The only reason that it's not, is that it's developed and controlled by terrible greedy people who use their unfairly hoarded wealth to immiserate the rest of us in order to maintain their dominance. In the long run, for our very survival, we need to depose them, and I look forward to what the term "generative AI" will mean after that finally happens.

@nelson@tech.lgbt
2025-06-27 04:15:36

Calamus 45 Full of life, sweet-blooded, compact, visible
A remarkably effective poem for the end of the cluster. Whitman talking directly to us, the reader, about the import of his poems. And with some ambition: "To one a century hence, or any number of centuries hence".
But even better, he's horny for us:
Now it is you ... seeking me,
Fancying how happy you were, if I could be with you, and become your lover
The poet is imagining us, his future readers, thinking about how we will want to be his lover. What a lusty man! Whitman is not modest.
I love it. And it's a fitting end to this series. I've greatly enjoyed reading them. Over the past 45 days I've learned better how to read Whitman, to understand his poems. And to relate to them in at least one simple way, teasing out the gayest and sexiest parts of these poems. Making them fun for myself.
I'm not quite done yet. I hope to identify my favorites of the group. I may also try my hand at reading one or two aloud.

It’s exhilarating to hear Bernie Sanders speak to a crowd:
his zeal is reflected back in their faces,
his moral clarity is such a relief,
set against the cynicism and resignation of most of the Democratic party’s opposition to Trump and his administration.
Class war is as old as time, but it’s a peculiarity of this age that you rarely hear a politician name it.
“I do,” he tells me. “There is a class war going on. The people on top are waging that war.”

@midtsveen@social.linux.pizza
2025-05-31 18:32:00

I'm a fucking Anarchist, and I’m not shutting the hell up! Stop trying to convince me otherwise. If you don’t like my path and my knowledge, then I have a damn good option for you!
Move the fuck on with your life. Find new friends, meet new people, connect with those who share your interests. Stop wasting your time trying to change me.
Fuck off telling me to vote in some bullshit election. Stop trying to talk me out of anarchism and revolutionary syndicalism. I’m done with th…

I’m wearing a fur ushanka and jacket, standing against a red and black anarchist background.
@blakes7bot@mas.torpidity.net
2025-07-25 12:05:23

Series D, Episode 03 - Traitor
FORBUS: Oh no, please, don't do that. Look. This is the test sample ... [He starts to reach for his detonator control on the desk.]
SERVALAN: I told you, [knocks away his hand - Forbus yelps in pain.] I'm not interested. I'll teach you to obey me if I have to destroy all your skinny little body. [She points a gun at him and fires. He falls from his wheelchair to the floor. Leitz enters]

Claude Sonnet 4.0 describes the image as: "I can see this appears to be from a science fiction television series, showing a scene in what looks like a futuristic kitchen or laboratory setting with white tiled floors and modern equipment. There's a person in an elaborate black outfit with feathered or textured details standing prominently in the scene, while another figure in light-colored clothing appears to be crouched or kneeling near some equipment. The setting has a sterile, institutional f…
@shriramk@mastodon.social
2025-05-20 18:15:47

We're at the top of the Florence Duomo Cupola climb. There's VERY little room between the outer and inner domes. Tall, older American men are banging their heads.
TOAM: How tall did they think people would be?!?
Me, against my better judgment: 5'2".
(How would you respond?)
TOAM: Who were they designing for?!? PYGMIES?!?
Me: Medieval Italians.
TOAM (with disgust): Oh…ITALIANS!!!
We are not sending our best, folks. (Or, can we send them…

@hex@kolektiva.social
2025-07-21 01:50:28

Epstein shit and adjacent, Rural America, Poverty, Abuse
Everyone who's not a pedophile thinks pedophiles are bad, but there's this special obsessed hatred you'll find among poor rural Americans. The whole QAnon/Epstein obsession may not really make sense to folks raised in cities. Like, why do these people think *so much* about pedophiles? Why do they think that everyone in power is a pedophile? Why would the Pizzagate thing make sense to anyone? What is this unhinged shit? A lot of folks (who aren't anarchists) might be inclined to ask "why can't these people just let the cops take care of it?"
I was watching Legal Eagle's rundown on the Trump Epstein thing earlier today and I woke up thinking about something I don't know if I've ever talked about. Now that I'm not in the US, I'm not at any risk for talking about it. I don't know how much I would have been before, but that's not something I'm gonna dig into right now. So let me tell you a story that might explain a few things.
I'm like 16, maybe 17. I have my license, so this girl I was dating/not dating/just friends with/whatever would regularly convince me to drive her and her friends around. I think she's like 15 at the time. Her friends are younger than her.
She tells me that there's a party we can go to where they have beer. She was told to invite her friends, so I can come too. We're going to pick her friends up (we regularly fill the VW Golf well beyond the legal limit and drive places) and head to the party.
So I take these girls, at least one of whom is 13 years old, down to this party. I'm already a bit sketched out bringing a 13 year old to a party. We drive out for a while. It's in the country. We drive down a long dark road. There are some barrel fires and a shack. This is all a bit strange, but not too abnormal for this area. We're a little ways outside of a place called Mill City (in Oregon).
We park and walk towards the shack. This dude who looks like a rat comes up and offers us beer. He laughs and talks to the girl who invited me, "What's he doing here? You're supposed to bring your girl friends." She's like, "He's our ride." I don't remember if he offered me a beer or not.
We go over to this shed and everyone starts smoking, except me because I didn't smoke until I turned 18. The other girls start talking about the rat face dude, who's wandered over by the fire with some other guys. They're mainly teasing one of the 13 year old girls about having sex with him a bunch of times. They say he's like, 32 or something. The other girls joke about him only having sex with 13 year olds because he's too ugly to have sex with anyone closer to his own age.
Somewhere along the line it comes out that he's a cop. I never forgot that, it's absolutely seared into my memory. I can picture his face perfectly still, decades later, and them talking about how he's a deputy, he was in his 30's, and he was having sex with a 13 year old girl. I was the only boy there, but there were a few older men. This was a chunk of the good ol' boys club of the town. I think there were a couple of cops besides the one deputy, and a judge or the mayor or some kind of big local VIP.
I kept trying to get my friend to leave, but she wanted to stay. Turns out under age drinking with cops seems like a great deal if you're a kid because you know you won't get busted. I left alone, creeped the fuck out.
I was told later that I wasn't invited and that I couldn't talk about it. I've always been good at compartmentalization, so I never did.
Decades later it occurred to me what was actually happening. I'm pretty sure that cop was giving meth he'd seized as evidence to these kids. This wasn't some one-off thing. It was regular. Who knows how many decades it went on after I left, or how many decades it had been going on before I found out. I knew this type of thing had happened at least a few times before because that's how that 13 year old girl and that 32 year old cop had hooked up in the first place.
Hearing about Epstein's MO, targeting these teenage girls from fucked up backgrounds, it's right there for me. I wouldn't be surprised if they were involved in sex trafficking of minors or some shit like that... but who would you call if you found out? Half the sheriff's department was there and the other half would cover for them.
You live in the city and shit like that doesn't happen, or at least you don't think it happens. But rural poor folks have this intuition about power and abuse. It's right there and you know it.
Trump is such a familiar character for me, because he's exactly that small town mayor or sheriff. He'll talk about being tough on crime and hunting down pedophiles, while hanging out at a party that exists so people can fuck 8th graders.
The problem with the whole thing is that rural folks will never break the cognitive dissonance between "kill the pedos" and "back the blue." They'll never go kill those cops. No, the pedos must be somewhere else. It must be the elites. It must be outsiders. It can't be the cops and good ol' boys everyone respects. It can't be the mayor who rigs the election to win every time. It can't be the "good upstanding" sheriff. Nah, it's the Clintons.
To be fair, it's probably also the Clintons, a bunch of other politicians, billionaires, etc. Epstein was exactly who everyone thought he was, and he didn't get away with it for so long without a whole lot of really powerful help.
There are still powerful people who got away with involvement with #Epstein. #Trump is one of them, but I don't really believe that he's the only one.
#USPol #ACAB

@tiotasram@kolektiva.social
2025-07-28 13:04:34

How popular media gets love wrong
Okay, so what exactly are the details of the "engineered" model of love from my previous post? I'll try to summarize my thoughts and the experiences they're built on.
1. "Love" can be be thought of like a mechanism that's built by two (or more) people. In this case, no single person can build the thing alone, to work it needs contributions from multiple people (I suppose self-love might be an exception to that). In any case, the builders can intentionally choose how they build (and maintain) the mechanism, they can build it differently to suit their particular needs/wants, and they will need to maintain and repair it over time to keep it running. It may need winding, or fuel, or charging plus oil changes and bolt-tightening, etc.
2. Any two (or more) people can choose to start building love between them at any time. No need to "find your soulmate" or "wait for the right person." Now the caveat is that the mechanism is difficult to build and requires lots of cooperation, so there might indeed be "wrong people" to try to build love with. People in general might experience more failures than successes. The key component is slowly-escalating shared commitment to the project, which is negotiated between the partners so that neither one feels like they've been left to do all the work themselves. Since it's a big scary project though, it's very easy to decide it's too hard and give up, and so the builders need to encourage each other and pace themselves. The project can only succeed if there's mutual commitment, and that will certainly require compromise (sometimes even sacrifice, though not always). If the mechanism works well, the benefits (companionship; encouragement; praise; loving sex; hugs; etc.) will be well worth the compromises you make to build it, but this isn't always the case.
3. The mechanism is prone to falling apart if not maintained. In my view, the "fire" and "appeal" models of love don't adequately convey the need for this maintenance and lead to a lot of under-maintained relationships many of which fall apart. You'll need to do things together that make you happy, do things that make your partner happy (in some cases even if they annoy you, but never in a transactional or box-checking way), spend time with shared attention, spend time alone and/or apart, reassure each other through words (or deeds) of mutual beliefs (especially your continued commitment to the relationship), do things that comfort and/or excite each other physically (anywhere from hugs to hand-holding to sex) and probably other things I'm not thinking of. Not *every* relationship needs *all* of these maintenance techniques, but I think most will need most. Note especially that patriarchy teaches men that they don't need to bother with any of this, which harms primarily their romantic partners but secondarily them as their relationships fail due to their own (cultivated-by-patriarchy) incompetence. If a relationship evolves to a point where one person is doing all the maintenance (& improvement) work, it's been bent into a shape that no longer really qualifies as "love" in my book, and that's super unhealthy.
4. The key things to negotiate when trying to build a new love are, first, how to work together in the first place, and how to be comfortable around each other's habits (or how to change those habits). Second, what level of commitment you have right now, and how/when you want to increase that commitment. Additionally, I think it's worth checking in about what you're each putting into and getting out of the relationship, to ensure that it continues to be positive for all participants. To build a successful relationship, you need to be able to incrementally increase the level of commitment to one that you're both comfortable staying at long-term, while ensuring that for both partners, the relationship is both a net benefit and has manageable costs (those two things are not the same). Obviously it's not easy to actually have conversations about these things (congratulations if you can just talk about this stuff) because there's a huge fear of hearing an answer that you don't want to hear. I think the range of discouraging answers which actually spell doom for a relationship is smaller than people think, and there's usually a reasonable "shoulder" you can fall into where things aren't on a good trajectory but could be brought back onto one, but even so these conversations are scary. Still, I think only having honest conversations about these things when you're angry at each other is not a good plan. You can also try to communicate some of these things via non-conversational means, if that feels safer, and at least being aware that these are the objectives you're pursuing is probably helpful.
I'll post two more replies here about my own experiences that led me to this mental model and trying to distill this into advice, although it will take me a moment to get to those.
#relationships #love

@davidaugust@mastodon.online
2025-07-18 18:41:24

Did you know, President & CEO of CBS, which just bribed potus by settling a case and canceled Stephen Colbert's show, is named, not making this up, George Cheeks?
Sure they were bullied as a kid, but as CEO, George Cheeks lying down for fascists makes me theoretically ('cause violence is not a good answer) want to clap those cheeks (metaphorically slap him).
Stop returning Paramount & subsidiaries' (like CBS) calls. They wither w/no one to work for them. …

I’m George Cheeks, President and CEO of CBS, and I love fascism. 

[photo of George Cheeks] 

Do not watch, return their phone calls, pitch shows, sell to or work for Paramount, CBS or any of their subsidiaries and affiliates.

The shareholders will realize George and the rest of leadership’s fetish for fascism is bad business.
@rene_mobile@infosec.exchange
2025-07-23 05:41:30

Firefox 141.0 released with "a local AI model" that can perform tab grouping.
If that's really the best use for an "AI" in a browser, then please stop trying to shove it in, will you? And no, I'm not dissing the "local" part at all - any cloud AI models used by any browser coming near me are immediately disabled!
Reference:

@rachel@norfolk.social
2025-06-18 06:36:56

Can anyone describe to me the last time they actually used a human-readable sitemap page on a website?
What were you looking for? Why did the menu system not give this to you?
#webdev

@tiotasram@kolektiva.social
2025-08-05 10:34:05

It's time to lower your inhibitions towards just asking a human the answer to your question.
In the early nineties, effectively before the internet, that's how you learned a lot of stuff. Your other option was to look it up in a book. I was a kid then, so I asked my parents a lot of questions.
Then by ~2000 or a little later, it started to feel almost rude to do this, because Google was now a thing, along with Wikipedia. "Let me Google that for you" became a joke website used to satirize the poor fool who would waste someone's time answering a random question. There were some upsides to this, as well as downsides. I'm not here to judge them.
At this point, Google doesn't work any more for answering random questions, let alone more serious ones. That era is over. If you don't believe it, try it yourself. Between Google intentionally making their results worse to show you more ads, the SEO cruft that already existed pre-LLMs, and the massive tsunami of SEO slop enabled by LLMs, trustworthy information is hard to find, and hard to distinguish from the slop. (I posted an example earlier: #AI #LLMs #DigitalCommons #AskAQuestion

@shoppingtonz@mastodon.social
2025-07-05 07:36:53

alt F11, interesting keyboard shortcut for making a window take the entire screen in Qubes OS...discovered by randomly pressing it, didn't even plan to, my fingers did it, not me!
Reminds me of when I discovered that holding alt scroll wheel will zoom in or out depending on which direction you are going in...scroll up = zoom in?
scroll down = zoom out? Or was it the other way around? Anyway!...
I'm thankful!

@pbloem@sigmoid.social
2025-05-18 14:09:23

This type of reasoning is always baffling to me. When climate change is discussed these people always say that there is some magical technological solution that will pop up to save us (usually handed to us by the AI gods).
Why then, in the several decades that it takes to scale up nuclear, can we not account for the possibility that AI could become more energy efficient?
That sounds like something you could solve for the cost of a few power plants...

@mgorny@social.treehouse.systems
2025-07-14 16:39:18

About morbid thriftiness (Autism Spectrum Condition)
As you may have noticed, I am morbidly thrifty. Usually I don't buy stuff that I don't need — and if I decide that I actually need something, I am going to ponder about it for a while, look for value products, and for the best price. And with some luck, I'm going to decide I don't need it that bad after all.
One reason for that is probably how I was raised. My parents taught me to be thrifty, so I have to be. It doesn't matter that, in retrospect, I see that their thriftiness was applied rather arbitrarily to some kinds of spending and not others, or that perhaps they were greedy — spending less on individual things so that they could buy more. Well, I can't delude myself like that, so I have to be thrifty for real. And when I fail, when I pay too much, when I get cheated — I feel quite bad about it.
The other reason is that I keep worrying about my future. It doesn't matter how rich I may end up — I'll keep worrying that I'll run out of money in the future. Perhaps I'll lose a job and won't be able to find anything for a long time. Perhaps something terrible will happen and I'm going to need to pay a lot suddenly.
Another thing is that I easily get attached to objects. Well, it's easier to be thrifty when you really don't want to replace stuff. Over time you also learn to avoid getting new stuff at all, since the more stuff you have, the more stuff may break and need to be thrown away.
Finally, there's my environmental responsibility. I admit that I don't do enough — but at least the things I can do, I do.
[EDIT: and yes, I feel bad about how expensive my new phone was, even though it's of much higher quality than the last one. Also, I got a worse deal because I waited too long.]
#ActuallyAutistic

Why aren't I seeing every Democrat in congress on TV right now talking about this insane budget bill⁉️
It's Isaiah Martin
- and I think we need more Democrats with spines in Congress.
I know everyone is texting you right now, but my special election to fill a critical blue seat in Congress is⭐️ in just a few months - not a year and a half.
As a grassroots zero corporate dollar candidate, I need folks like you to come through for me.
It's our end o…

@al3x@hachyderm.io
2025-07-13 19:09:11

I am looking for a mood tracking app for the iPhone.
It would be nice to have a long list of moods/feelings to choose from. I think I want it to allow me to enter a bit of text too.
It must not be subscription-based.
If you know any please do share. Thank you

@NicolasGriseyDemengel@piaille.fr
2025-06-25 07:06:24

I'm sharing a presentation I gave at my workplace last week: "LLMs, GenAI... should we?"
Just for fun, I converted what was an ODP presentation into Markdown that my Jekyll blog could display as a regular article, then materialized the slides as such, and finally I added a slideshow feature.
I'm thinking of making a presenter mode where one window would display the slideshow, while another would display the presenter's notes, as well as the previous & ne…

@stefan@gardenstate.social
2025-07-25 14:01:00

If I reply to a migrated account it will never reply to me and I might not ever notice it's migrated unless I view the profile on the source server.
Here is a mastodon suggestion issue with a few ideas to make it more clear the account has migrated.
If you have ideas add them to the issue as a comment!

@Dwemthy@social.linux.pizza
2025-05-14 19:14:51

I get the desire for live coding interviews, you can't just take people's word that they know how to do it.
But what's the point in throwing Advent of Code style problems at me and interrupting a naive or incorrect approach before I even start implementing it? Let me write unoptimized code for you and then make it better! I'm not going to write the perfect implementation first try for every problem, but I can show you my process and prove I can write _some_ code.

@tiotasram@kolektiva.social
2025-07-30 17:56:35

Just read this post by @… on an optimistic AGI future, and while it had some interesting and worthwhile ideas, it's also in my opinion dangerously misguided, and plays into the current AGI hype in a harmful way.
social.coop/@eloquence/1149406
My criticisms include:
- Current LLM technology has many layers, but the biggest, most capable models are all tied to corporate datacenters and require inordinate amounts of energy and water to run. Trying to use these tools to bring about a post-scarcity economy will burn up the planet. We urgently need more-capable but also vastly more efficient AI technologies if we want to use AI for a post-scarcity economy, and we are *not* nearly on the verge of this despite what the big companies pushing LLMs want us to think.
- I can see that permacommons.org claims a small level of expenses on AI equates to low climate impact. However, given the deep subsidies currently in place by the big companies to attract users, that isn't a great assumption. The fact that their FAQ dodges the question about which AI systems they use isn't a great look.
- These systems are not free in the same way that Wikipedia or open-source software is. To run your own model you need a data harvesting & cleaning operation that costs millions of dollars minimum, and then you need millions of dollars worth of storage & compute to train & host the models. Right now, big corporations are trying to compete for market share by heavily subsidizing these things, but if you go along with that, you become dependent on them, and you'll be screwed when they jack up the price to a profitable level later. I'd love to see open dataset initiatives and the like, and there are some of these things, but not enough yet, and many of the initiatives focus on one problem while ignoring others (fine for research but not the basis for a society yet).
- Between the environmental impacts, the horrible labor conditions and undercompensation of data workers who filter the big datasets, and the impacts of both AI scrapers and AI commons pollution, the developers of the most popular & effective LLMs have a lot to answer for. This project only really mentions environmental impacts, which makes me think that they're not serious about ethics, which in turn makes me distrustful of the whole enterprise.
- Their language also ends up encouraging AI use broadly while totally ignoring several entire classes of harm, so they're effectively contributing to AI hype, especially with such casual talk of AGI and robotics as if embodied AGI were just around the corner. To be clear about this point: we are several breakthroughs away from AGI under the most optimistic assumptions, and giving the impression that those will happen soon plays directly into the hands of the Sam Altmans of the world who are trying to make money off the impression of impending huge advances in AI capabilities. Adding to the AI hype is irresponsible.
- I've got a more philosophical criticism that I'll post about separately.
I do think that the idea of using AI & other software tools, possibly along with robotics and funded by many local cooperatives, in order to make businesses obsolete before they can do the same to all workers, is a good one. Get your local library to buy a knitting machine alongside their 3D printer.
Lately I've felt too busy criticizing AI to really sit down and think about what I do want the future to look like, even though I'm a big proponent of positive visions for the future as a force multiplier for criticism, and this article is inspiring to me in that regard, even if the specific project doesn't seem like a good one.

@grumpybozo@toad.social
2025-06-18 17:42:44

Is this a parody?
I don’t do #InfoSec or other cons so I don’t have a strong sense of whether the “Open Space” concept is brilliant or uproariously absurd. I lean towards the latter because it just seems to me like a recipe for people standing around.

@sonnets@bots.krohsnest.com
2025-07-18 11:25:10

Sonnet 071 - LXXI
No longer mourn for me when I am dead
Than you shall hear the surly sullen bell
Give warning to the world that I am fled
From this vile world with vilest worms to dwell:
Nay, if you read this line, remember not
The hand that writ it, for I love you so,
That I in your sweet thoughts would be forgot,
If thinking on me then should make you woe.
O! if, I say, you look upon this verse,
When I perhaps compounded am with …

@tezoatlipoca@mas.to
2025-07-19 19:32:54

Sitting on the porch at the cabin. A solitary loon calls out across the lake.
"What are you doing!?"
"Huh?" <dips cheese cube in the spinach dip>
"That's for the Vegetables!"
"You're not the boss of me!" <holds dip tighter>
"The vegan can't have it anyway cause it's not plant based so.... Stay. In your lane, loon."

@philip@mastodon.mallegolhansen.com
2025-07-24 18:09:34

“Diversify your skillset by learning AI” they’ve been saying for the past 2 years.
Not a very diverse offering if you ask me.

@inthehands@hachyderm.io
2025-06-09 16:42:33

All this brings me back to some text I was writing yesterday for my students, on which I’d appreciate any thoughtful feedback:
❝You can let the computer do the typing for you, but never let it do the thinking for you.
This is doubly true in the current era of AI hype. If the AI optimists are correct (the credible ones, anyway), software development will consist of humans critically evaluating, shaping, and correcting the output of LLMs. If the AI skeptics are correct, then the future will bring mountains of AI slop to decode, disentangle, fix, and/or rewrite. Either way, it is •understanding• and •critically evaluating• code — not merely •generating• it — that will be the truly essential ability. Always has been; will be even more so. •That• is what you are learning here.❞
11/

@tiotasram@kolektiva.social
2025-06-28 13:30:10

In Ursula K. Le Guin's "A Man of the People" (part of "Four Ways to Forgiveness") there's a scene where the Hainish protagonist begins studying history. It's excellent in many respects, but what stood out the most to me was the softly incomprehensible idea of a people with multiple millions of years of recorded history. As one's mind starts to try to trace out the implications of that, it dawns on you that you can't actually comprehend the concept. Like, you read the sentence & understood all the words, and at first you were able to assemble them into what seemed like a conceptual understanding, but as you started to try to fill out that understanding, it began to slip away, until you realized you didn't in fact have the mental capacity to build a full understanding and would have to paper things over with a shallow placeholder instead.
I absolutely love that feeling, as one of the ways in which reading science fiction can stretch the brain, and I connected it to a similar moment in Tsutomu Nihei's BLAME, where the android protagonists need to ride an elevator through the civilization/galaxy-spanning megastructure, and turn themselves off for *millions of years* to wait out the ride.
I'm not sure why exactly these scenes feel more beautifully incomprehensible than your run-of-the-mill "then they traveled at lightspeed for millennia, leaving all their family behind" scene, other than perhaps the authors approach them without trying to use much metaphor to make them more comprehensible (or they use metaphor to emphasize their incomprehensibility).
Do you have a favorite mind=expanded scene of this nature?
#AmReading

@stefan@gardenstate.social
2025-06-20 19:24:27

took me 10 tries on this early expedition 33 boss. this does not bode well.
ign.com/wikis/clair-obscur-exp

@samir@functional.computer
2025-07-21 09:38:06

@… It definitely lessens with practice and familiarity (and an editor that gives you lots of hints), but I don’t think it goes away entirely.
I definitely prefer to have my static type system and wrestle with it sometimes, but not always.
And I appreciate that Haskell, for example, *will* allow me to ignore cases in pattern-matching with a warnin…

@midtsveen@social.linux.pizza
2025-07-23 19:54:07

Hey #ActuallyAutistic, asking for a friend (okay, maybe asking for me, sorry not sorry), what do you do when you get bored?
#Autism #Autistic

@tiotasram@kolektiva.social
2025-07-22 00:03:45

Overly academic/distanced ethical discussions
Had a weird interaction with @/brainwane@social.coop just now. I misinterpreted one of their posts quoting someone else and I think the combination of that plus an interaction pattern where I'd assume their stance on something and respond critically to that ended up with me getting blocked. I don't have hard feelings exactly, and this post is only partly about this particular person, but I noticed something interesting by the end of the conversation that had been bothering me. They repeatedly criticized me for assuming what their position was, but never actually stated their position. They didn't say: "I'm bothered you assumed my position was X, it's actually Y." They just said "I'm bothered you assumed my position was X, please don't assume my position!" I get that it's annoying to have people respond to a straw man version of your argument, but when I in response asked some direct questions about what their position was, they gave some non-answers and then blocked me. It's entirely possible it's a coincidence, and they just happened to run out of patience on that iteration, but it makes me take their critique of my interactions a bit less seriously. I suspect that they just didn't want to hear what I was saying, while at the same time they wanted to feel as if they were someone who values public critique and open discussion of tricky issues (if anyone reading this post also followed our interaction and has a different opinion of my behavior, I'd be glad to hear it; it's possible I'm effectively being an asshole here and it would be useful to hear that if so).
In any case, the fact that at the end of the entire discussion, I'm realizing I still don't actually know their position on whether they think the AI use case in question is worthwhile feels odd. They praised the system on several occasions, albeit noting some drawbacks while doing so. They said that the system was possibly changing their anti-AI stance, but then got mad at me for assuming this meant that they thought this use-case was justified. Maybe they just haven't made up their mind yet but didn't want to say that?
Interestingly, in one of their own blog posts that got linked in the discussion, they discuss a different AI system, and despite listing a bunch of concrete harms, conclude that it's okay to use it. That's fine; I don't think *every* use of AI is wrong on balance, but what bothered me was that their post dismissed a number of real ethical issues by saying essentially "I haven't seen calls for a boycott over this issue, so it's not a reason to stop use." That's an extremely socially conformist version of ethics that doesn't sit well with me. The discussion also ended up linking this post: chelseatroy.com/2024/08/28/doe which bothered me in a related way. In it, Troy describes classroom teaching techniques for introducing and helping students explore the ethics of AI, and they seem mostly great. They avoid prescribing any particular correct stance, which is important when teaching given the power relationship, and they help students understand the limitations of their perspectives regarding global impacts, which is great. But the overall conclusion of the post is that "nobody is qualified to really judge global impacts, so we should focus on ways to improve outcomes instead of trying to judge them." This bothers me because we actually do have a responsibility to make decisive ethical judgments despite limitations of our perspectives. If we never commit to any ethical judgment against a technology because we think our perspective is too limited to know the true impacts (which I'll concede it invariably is) then we'll have to accept every technology without objection, limiting ourselves to trying to improve their impacts without opposing them. Given who currently controls most of the resources that go into exploration for new technologies, this stance is too permissive. Perhaps if our objection to a technology was absolute and instantly effective, I'd buy the argument that objecting without a deep global view of the long-term risks is dangerous. As things stand, I think that objecting to the development/use of certain technologies in certain contexts is necessary, and although there's a lot of uncertainty, I expect strongly enough that the overall outcomes of objection will be positive that I think it's a good thing to do.
The deeper point here I guess is that this kind of "things are too complicated, let's have a nuanced discussion where we don't come to any conclusions because we see a lot of unknowns along with definite harms" really bothers me.

@midtsveen@social.linux.pizza
2025-06-23 09:16:32

I'm feeling severely depressed right now, like the weight of everything is crushing me.
I did the right thing by reaching out to my dad, and he’s on his way to be with me.
I’m not okay, but I hope to see you all another day.
Thank you for being here.
❤😔
#ActuallyAutistic

@shoppingtonz@mastodon.social
2025-06-30 05:09:57

@… thank you for using the (#)FediMusic hashtag when embedding attachments into the fediverse.
It makes me pleased to see that there are other fediverse users that are not me and that are adding more music to the fediverse.
#UnityInMusic

@midtsveen@social.linux.pizza
2025-07-23 03:11:07

Operating system drama is basically YouTube drama but with way more keyboard clacking and less ukuleles. Friends turn into rivals overnight just because of which OS you use. It is wild to see people treat software choices like reality TV show rivalries.
What makes me even sadder is how something that should come from understanding and research turns into a full-blown philosophical fight. Choosing an OS should be about what works for you, not a reason to start a digital soap opera.

@midtsveen@social.linux.pizza
2025-07-09 20:03:22

I’m just a baby anarchist, okay? If you start arguing anarchism with me, I’ll probably blurt out, “Chomsky,” wait, no, not Chomsky, I mean, um, oh no, did I just say Chomsky? Sorry, my bad, I meant Noam, actually scratch that, he’s not even spicy enough for the group chat.
Actually, I meant Rudolf Rocker all along. I have a crush on his syndicalist takes, if you test me on theory, I’ll just pretend I’m reading Chomsky so I can collect some extra hate from the group chat, honestly, noth…

A person wearing headphones is sitting indoors with a red and black diagonal flag, commonly associated with anarcho-syndicalism and anarchist movements, hanging on the wall behind them.
@tiotasram@kolektiva.social
2025-05-15 17:02:17

The full formula for the probability of "success" is:
p = {
1/(2^(-n + 1)) if n is negative, or
1 - (1/(2^(n + 1))) if n is zero or positive
}
(Both branches have the same value when n is 0, so the behavior is smooth around the origin.)
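In code, that's just this (a quick Python sketch of the formula above, nothing more):
def success_probability(n: int) -> float:
    # n < 0: all of your (-n + 1) coin flips have to succeed.
    if n < 0:
        return 1 / (2 ** (-n + 1))
    # n >= 0: at least one of your (n + 1) coin flips has to succeed.
    return 1 - 1 / (2 ** (n + 1))
# success_probability(-1) == 0.25, success_probability(0) == 0.5,
# success_probability(1) == 0.75, success_probability(2) == 0.875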
How can we tweak this?
First, we can introduce fixed success and/or failure chances unaffected by level, with this formula only taking effect if those don't apply. For example, you could do 10% failure, 80% by formula, and 10% success to keep things from being too sure either way even when levels are very high or low. On the other hand, this flattening makes the benefit of extra advantage levels even less exciting.
Second, we could allow for gradations of success/failure, and treat the coin pools I used to explain that math a bit like dice pools. An in-between approach could require linearly more success flips for each next-higher grade of success. For example, simple success on a crit roll might mean dealing 1.5x damage, but if you succeed on 2 of your flips, you get 9/4 damage, on 4 flips 27/8, or on 7 flips 81/16. In this world, stacking crit levels might be a viable build, and just giving up on armor would be super dangerous. In the particular case I was using this for just now, I can't easily do gradations of success (that's the reason I turned to probabilities in the first place), but I think I'd favor this approach when feasible.
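For concreteness, here's a rough Python sketch of that graded version. The thresholds (1, 2, 4, 7, ... successes) and the 3/2 damage multiplier are just the example numbers from the previous paragraph, and I'm assuming n crit levels means n + 1 coin flips, to match the formula:
import random
def crit_multiplier(crit_levels: int) -> float:
    # Flip (crit_levels + 1) coins and count the successes.
    successes = sum(random.random() < 0.5 for _ in range(crit_levels + 1))
    # Grade g needs 1 + g*(g-1)/2 successes (1, 2, 4, 7, ...), so each
    # new grade costs one more success flip than the previous grade did.
    grade = 0
    while successes >= 1 + (grade + 1) * grade // 2:
        grade += 1
    # Grade 0 is a plain hit; each higher grade multiplies damage by another 3/2.
    return (3 / 2) ** grade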
The main innovation here over simple dice pools is how to handle situations where the number of dice should be negative. I'm almost certain it's not a truly novel innovation though, and some RPG fan can point out which system already does this (please actually do this, I'm an RPG nerd too at heart).
I'll leave this with one more tweak we could do: what if the number 2 in the probability equation were 3, or 2/3? I think this has a similar effect to just scaling all the modifiers a bit, but the algebra escapes me in this moment and I'm a bit lazy. In any case, reducing the base of the probability exponent should let you get a few more gradations near 50%, which is probably a good thing, since the default goes from 25% straight to 50% and then to 75% with no integer stops in between.

@midtsveen@social.linux.pizza
2025-07-08 15:18:56

Just to clarify, are you envisioning something like PayPal, where creators can receive direct payments from their supporters?
Or is it more like a system where users can pay to boost posts so they appear more prominently in others’ feeds?
@…

@midtsveen@social.linux.pizza
2025-06-11 15:14:54

Oh, absolutely, I live in Norway, because apparently, people need proof before they believe anything these days. Next time you see me, I’ll be casually dropping the phrase “Norsk Syndikalistisk Forbund” in perfect Norwegian, just to remind you that I’m not a figment of your imagination but a real, live, radicalized-by-proxy Norwegian who reads obscure early 20th-century syndicalist newspapers in my spare time.
Don’t worry, I’ll also make sure to quote both Emma Goldman and Rudolf Rocke…

A person stands indoors, holding books that are partially covered by the Norwegian Syndicalist Federation logo. The background is plain, featuring a simple door. At the top of the image is the word "me," and at the bottom is the website "nsf-iaa.org." The individual wears a sweater, vest, and pants, and carries a bag.
@tiotasram@kolektiva.social
2025-07-10 13:31:32

"As we approach the coming jobs cliff, we're entering a period where a college isn't going to be worth it for the majority of people, since AI will take over most white-collar jobs. Combined with the demographic cliff, the entire higher education system will crumble."
This is the kind of statement you don't hear that much from sub-CEO-level #AI boosters, because it's awkward for them to admit that the tech they think is improving their life is going to be disastrous for society. Or if they do admit this, they spin it like it's a good thing (don't get me wrong, tuition is ludicrously high and higher education absolutely could be improved by a wholesale reinvention, but the potential AI-fueled collapse won't be an improvement).
I'm in the "anti-AI" crowd myself, and I think the current tech is in a hype bubble that will collapse before we see wholesale replacement of white-collar jobs, with a re-hiring to come that will somewhat make up for the current decimation. There will still be a lot of fallout for higher ed (and hopefully some productive transformation), but it might not be apocalyptic.
Fun question to ask the next person who extols the virtues of using generative AI for their job: "So how long until your boss can fire you and use the AI themselves?"
The following ideas are contradictory:
1. "AI is good enough to automate a lot of mundane tasks."
2. "AI is improving a lot so those pesky issues will be fixed soon."
3. "AI still needs supervision so I'm still needed to do the full job."