Tootfinder

Opt-in global Mastodon full text search. Join the index!

@mikeymikey@hachyderm.io
2025-08-19 23:28:46

Unfortunately CrossOver Preview seems to have gotten rid of the "Display Settings" sidebar on the right, which let you simulate display geometries other than what wine auto-detected. That was very helpful for on-the-go #Steam gaming on #macOS on my MBP, since wine rarely detects proper 16:10 or 16:9.
I used this, for example, with #BluePrince to ensure the left and right sides of the display were not cropped.
As an odd, but very workable, alternative solution: if you get the free edition of BetterDisplay, have it create a Virtual Display of the aspect ratio you want, and then enable it, you can go into Settings -> Displays and configure your built-in display as a mirror for the virtual display.
The net result will be that it forces your only real display to conform to match the mirrored alternative geometry. Doesn't seem to be too much of a performance impact so far, but I haven't tried it with any super intense games either yet.

@compfu@mograph.social
2025-09-18 21:19:59

@… Hi, at the start of the year you collected bread knife measurements. What was that for?
digipres.club/@timixretroplays

@saraislet@infosec.exchange
2025-07-15 09:32:48

LinkedIn tips:
1. You don't have to read the posts.
2. If you don't like a post from someone you follow, then stop following them. You can keep a connection without following them!
3. If you don't like a post from someone you don't follow, then mark it as not interested. Send the signal to inform both the recommendation algorithm, and the people who design the recommendation algorithm, what content you don't want to see.
Whatever other people are…

@tiotasram@kolektiva.social
2025-09-14 12:01:38

TL;DR: what if instead of denying the harms of fascism, we denied its suppressive threats of punishment?
Many of us have really sharpened our denial skills since the advent of the ongoing pandemic (perhaps you even hesitated at the word "ongoing" there and thought "maybe I won't read this one, it seems like it'll be tiresome"). I don't say this as a preface to a fiery condemnation or a plea to "sanity" or a bunch of evidence of how bad things are, because I too have honed my denial skills in these recent years, and I feel like talking about that development.
Denial comes in many forms, including strategic information avoidance ("I don't have time to look that up right now", "I keep forgetting to look into that", "well this author made a tiny mistake, so I'll click away and read something else", "I'm so tired of hearing about this, let me scroll farther", etc.), strategic dismissal ("look, there's a bit of uncertainty here, I should ignore this", "this doesn't line up perfectly with my anecdotal experience, it must be completely wrong", etc.), and strategic forgetting ("I don't remember what that one study said exactly; it was painful to think about", "I forgot exactly what my friend was saying when we got into that argument", etc.). It's in fact a kind of skill that you can get better at, along with the complementary skill of compartmentalization. It can of course be incredibly harmful, and a huge genre of fables exists precisely to highlight its harms, but it also has some short-term psychological benefits, chiefly in the form of muting anxiety. This is not an endorsement of denial (the harms can be catastrophic), but I want to acknowledge that there *are* short-term benefits. Via compartmentalization, it's even possible to be honest with ourselves about some of our own denials without giving them up immediately.
But as I said earlier, I'm not here to talk you out of your denials. Instead, given that we are so good at denial now, I'm here to ask you to be strategic about it. In particular, we live in a world awash with propaganda/advertising that serves both political and commercial ends. Why not use some of our denial skills to counteract that?
For example, I know quite a few people in complete denial of our current political situation, but those who aren't (including myself) often express consternation about just how many people in the country are supporting literal fascism. Of course, logically that appearance of widespread support is going to be partly a lie, given how much our public media is beholden to the fascists or outright on their side. Finding better facts on the true level of support is hard, but in the meantime, why not be in denial about the "fact" that Trump has widespread popular support?
To give another example: advertisers constantly barrage us with messages about our bodies and weight, trying to keep us insecure (and thus in the mood to spend money to "fix" the problem). For sure cutting through that bullshit by reading about body positivity etc. is a better solution, but in the meantime, why not be in denial about there being anything wrong with your body?
This kind of intentional denial certainly has its own risks (our bodies do actually need regular maintenance, for example, so complete denial on that front is risky) but there's definitely a whole lot of misinformation out there that it would be better to ignore. To the extent such denial expands to a more general denial of underlying problems, this idea of intentional denial is probably just bad. But I sure wish that in a world where people (including myself) routinely deny significant widespread dangers like COVID-19's long-term risks or the ongoing harms of escalating fascism, they'd at least also deny some of the propaganda keeping them unhappy and passive. Instead of being in denial about US-run concentration camps, why not be in denial that the state will be able to punish you for resisting them?

@blaise@mastodon.cloud
2025-09-19 19:15:10

"Investigators have reason to believe the majority of funds came from criminal sources."
Oh, really? What reason?
"This type of platform, which doesn’t require users to identify themselves, hides the source of funds. This is a common tactic used by criminal organizations"
So your reason to believe the funds are illegal is that private citizens don't want you to know their business?
I guess cops everywhere are fascist assholes, not just here in the…

@SmartmanApps@dotnet.social
2025-08-17 00:41:55

"Administrators don’t have the guts to ban cellphones but expect teachers to spend every day swinging wildly at the slithering AI tech we’re supposed to embrace"
sunny.garden/@himantra/1150395

@pre@boing.world
2025-08-11 18:01:41
Content warning: re: UKPol, Palestine Action, reply from my MP

Emily Thornberry's formberry reply:
Thank you for writing to me regarding the Home Secretary's decision to proscribe Palestine Action.
I believe the right to protest is a fundamental right in our democracy and I will continue to wholeheartedly defend this. I appreciate the concerns you have raised regarding proscribing Palestine
Action, however, as there is an upcoming judicial review into the ban, I am limited as to what I can say on the matter at the moment.
I was pleased that the near weekly protests in London and across the country, calling for an end to Palestinian suffering, have continued. I am certain we all want to see an immediate end to the immense suffering the Palestinian people are being subjected to, and the resumption of the critical aid deliveries which are so desperately needed in Gaza.
I am thankful that the Government have now set out an approach to recognising the Palestinian state as a step towards a lasting ceasefire. If you would like to know more about my wider views on the Israel-Palestine conflict, you can view them below.
Thank you again for writing to me on this very important issue. Let me assure you I will continue to push from within Parliament for an end to the violence and a peaceful two-state solution.
Best wishes,
Rt Hon Emily Thornberry MP

@paulbusch@mstdn.ca
2025-07-16 00:33:20

I'm retired, so my visits to LinkedIn are rare, but scrolling through today, I came across this nugget. I don't know what Gen AI even is - and I'm not interested in finding out - but why would you want something when you have no clue what it is or does?
#WhatTheCoolKidsAreDoing

@blakes7bot@mas.torpidity.net
2025-09-12 09:06:10

Series B, Episode 08 - Hostage
BLAKE: You heard him, he's got the girl. Do you believe that he won't kill her?
AVON: Nope.
BLAKE: That's why I want to go to Exbar. I don't expect any of you to come with me, I just want you to put me down, that's all.
blake.torpidity.net/m/208/165

Claude 3.7 describes the image as: "The image shows a person with blonde, feathered hair styled in a popular 1970s/early 1980s fashion. They're wearing what appears to be a patterned top with a necklace, and have a serious or contemplative expression. The lighting and film quality suggest this is from a television production of that era, with a somewhat dark background and dramatic lighting typical of science fiction or drama productions from that time period. The styling, costume and productio…

@johl@mastodon.xyz
2025-09-09 21:19:34

“What is political agency?” is an interesting text about how to approach political activism that has impact. I also learned something about garlic preserved with honey. write.as/conjure-utopia/what-i

@tiotasram@kolektiva.social
2025-07-19 07:51:05

AI, AGI, and learning efficiency
My 4-month-old kid is not DDoSing Wikipedia right now, nor will they ever do so before learning to speak, read, or write. Their entire "training corpus" will not top even 100 million "tokens" before they can speak & understand language, and do so with real intentionality.
Just to emphasize that point: 100 words-per-minute times 60 minutes-per-hour times 12 hours-per-day times 365 days-per-year times 4 years is a mere 105,120,000 words. That's a ludicrously *high* estimate of words-per-minute and hours-per-day, and 4 years old (the age of my other kid) is well after basic speech capabilities are developed in many children, etc. More likely the available "training data" is at least 1 or 2 orders of magnitude less than this.
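That multiplication can be sanity-checked in a line of shell (my own quick check, not part of the original post):

```shell
# 100 words/min × 60 min/h × 12 h/day × 365 days/yr × 4 yr
words=$((100 * 60 * 12 * 365 * 4))
echo "$words"   # 105120000
```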
The point here is that large language models, trained as they are on multiple *billions* of tokens, are not developing their behavioral capabilities in a way that's remotely similar to humans, even if you believe those capabilities are similar (they are by certain very biased ways of measurement; they very much aren't by others). This idea that humans must be naturally good at acquiring language is an old one (see e.g. #AI #LLM #AGI

Homan, who doesn't have a Senate-approved position and has no formal authority over any agency, is a giant trial balloon, right?
Not being sarcastic.
I feel like that is his job: say something outrageous ("Of course we stop you for being brown!") and then see how it goes over, walk back what fails.
-- John Pfaff

@unchartedworlds@scicomm.xyz
2025-07-19 09:37:38
Content warning: what I plan to contribute to BiCon (July 2025, Nottingham and online)

I have plans for a few different things...
• Wednesday evening, 23 July, I'm hosting a 1-hour online thing that'll be open to whoever's already booked by then. It'll be a somewhat structured talky session on a theme of "inventing the BiCon you want", and an opportunity to meet other people who are going. Newcomers especially welcome :-)
• On the Friday morning at in-person BiCon, I'm offering a session called "Curiosity Skills". It's about which kinds of questions are genuinely "open", versus which kinds of questions allow your own assumptions and biases to sneak in! It'll be partly me explaining, and partly the chance for some little conversational experiments, to notice how the different questions work in practice.
• Subject to finding a nice quiet airy place to do it, I plan to run a mask-decoration session at some point on the Friday. I'll bring a few different kinds of masks, plus lace, beads and sequins, and some past experience of how to decorate masks without compromising the seal or the breathability. I'll invite donations for the materials. Decorate your mask for Pride! or for BiCon partying! or just because you like to :-)
• Also I will bring my badges and zines, and have them on sale!
By the way, if you might come to the Wednesday evening online bit, let me know what time you'd like it to start, because that's a question I have open at the moment. Could be 19.00, 19.30, 20.00. For myself I don't really mind, but I'm aware that some people have teatimes or child-bedtimes that can't easily be moved.
#bi #trans #Nottingham #EastMidlands #England #UK #BiCon #bisexual #bisexuality #queer #LGBT #LGBTQ

@chris@mstdn.chrisalemany.ca
2025-09-18 04:47:28

after a few minutes... and a few... clumps... I thought I had maybe got it all. You should have SEEN... what I saw.
no.. no you don't want to know and I didn't take a picture.
But unfortunately it wasn't enough. I will give it another go tomorrow. Sometimes it just needs to sit. #plumbing #yuck #gross #diy

@thomasfuchs@hachyderm.io
2025-07-29 02:22:16

You know what it is though?
It’s just like racism or xenophobia—the lowest level of human assholery.
If you don’t want to live in a society where other people just exist, or have your children grow up in such a society—why don’t you walk into the ocean?

@ruari@velocipederider.com
2025-09-09 13:00:54

I need to show progress for an action in a shell script I made for work. I am not a dev and I am not going to vibe code something to replace the script, so the "UI" is just the dialog utility using ncurses. Sadly this does not have a spinner, which is what I actually want, since I do not know in advance how long the archives it extracts will take. Dialog expects you to give it progress updates up to 100%.
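One workaround I'd sketch (hedging: the `tar` invocation and archive name are placeholders, and this assumes dialog's stock `--gauge` widget): run the extraction in the background and feed the gauge a wrapping, synthetic percentage, which makes it behave like a crude spinner for a job of unknown length.

```shell
#!/bin/sh
# Fake-progress "spinner" built from dialog's --gauge widget.
# The extraction command and archive name are placeholders.
tar -xzf archive.tar.gz &
pid=$!

{
    pct=0
    while kill -0 "$pid" 2>/dev/null; do   # while the extraction still runs
        echo "$pct"                        # dialog reads percentages on stdin
        pct=$(( (pct + 5) % 100 ))         # wrap around instead of tracking real progress
        sleep 0.2
    done
    echo 100                               # snap to full when the job finishes
} | dialog --gauge "Extracting..." 8 50 0
```

The wrap-around means the bar sweeps repeatedly rather than reporting true progress, which is the closest `--gauge` gets to a spinner.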

@blakes7bot@mas.torpidity.net
2025-08-16 06:02:45

#Blakes7 Series B, Episode 10 - Voice from the Past
JENNA: He's a hard man to rescue when he doesn't want to be rescued.
AVON: More to the point, are you yourself?
BLAKE: What happened? Why aren't we at Del Ten? What's going on, Avon.

Claude 3.7 describes the image as: "The image shows two people in a science fiction setting, wearing futuristic costumes with metallic and gray elements typical of classic sci-fi television production design from the late 1970s/early 1980s. 

The individuals are dressed in similar styled uniforms with high collars, though the person on the right has more prominent silver detailing on their costume. Both have distinctive period hairstyles that align with the aesthetic of British television produ…

@pgcd@mastodon.online
2025-08-11 14:28:41

I'm currently hitting a huge impostor syndrome wall-cum-quicksands state of mind.
I don't want to talk about it with friends because "no you're actually good" is something I tell myself already and I don't trust myself about it, let alone non-mes saying it.
I have watched videos and read articles and it's not enough right now.
What do I do?

@tiotasram@kolektiva.social
2025-06-21 02:34:13

Why AI can't possibly make you more productive; long
#AI and "productivity", some thoughts:
Edit: fixed some typos.
Productivity is a concept that isn't entirely meaningless outside the context of capitalism, but it's a concept that is heavily inflected in a capitalist context. In many uses today it effectively means "how much you can satisfy and/or exceed your boss' expectations." This is not really what it should mean: even in an anarchist utopia, people would care about things like how many shirts they can produce in a week, although in an "I'd like to voluntarily help more people" way rather than an "I need to meet this quota to earn my survival" way. But let's roll with this definition for a second, because it's almost certainly what your boss means when they say "productivity", and understanding that word in a different (even if truer) sense is therefore inherently dangerous.
Accepting "productivity" to mean "satisfying your boss' expectations," I will now claim: the use of generative AI cannot increase your productivity.
Before I dive in, it's imperative to note that the big generative models which most people think of as constituting "AI" today are evil. They are 1: pouring fuel on our burning planet, 2: psychologically strip-mining a class of data laborers who are exploited for their precarity, 3: enclosing, exploiting, and polluting the digital commons, and 4: stealing labor from broad classes of people many of whom are otherwise glad to give that labor away for free provided they get a simple acknowledgement in return. Any of these four "ethical issues" should be enough *alone* to cause everyone to simply not use the technology. These ethical issues are the reason that I do not use generative AI right now, except for in extremely extenuating circumstances. These issues are also convincing for a wide range of people I talk to, from experts to those with no computer science background. So before I launch into a critique of the effectiveness of generative AI, I want to emphasize that such a critique should be entirely unnecessary.
But back to my thesis: generative AI cannot increase your productivity, where "productivity" has been defined as "how much you can satisfy and/or exceed your boss' expectations."
Why? In fact, what the fuck? Every AI booster I've met has claimed the opposite. They've given me personal examples of time saved by using generative AI. Some of them even truly believe this. Sometimes I even believe they saved time without horribly compromising on quality (and often, your boss doesn't care about quality anyways if the lack of quality is hard to measure or doesn't seem likely to impact short-term sales/feedback/revenue). So if generative AI genuinely lets you write more emails in a shorter period of time, or close more tickets, or something else along these lines, how can I say it isn't increasing your ability to meet your boss' expectations?
The problem is simple: your boss' expectations are not a fixed target. Never have been. In virtue of being someone who oversees and pays wages to others under capitalism, your boss' game has always been: pay you less than the worth of your labor, so that they can accumulate profit and thus more capital to remain in charge instead of being forced into working for a wage themselves. Sure, there are layers of management caught in between who aren't fully in this mode, but they are irrelevant to this analysis. It matters not how much you please your manager if your CEO thinks your work is not worth the wages you are being paid. And using AI actively lowers the value of your work relative to your wages.
Why do I say that? It's actually true in several ways. The most obvious: using generative AI lowers the quality of your work, because the work it produces is shot through with errors, and when your job is reduced to proofreading slop, you are bound to tire a bit, relax your diligence, and let some mistakes through. More than you would have if you were actually doing and taking pride in the work. Examples are innumerable and frequent, from journalists to lawyers to programmers, and we laugh at them "haha how stupid to not check whether the books the AI reviewed for you actually existed!" but on a deeper level if we're honest we know we'd eventually make the same mistake ourselves (bonus game: spot the swipe-typing typos I missed in this post; I'm sure there will be some).
But using generative AI also lowers the value of your work in another much more frightening way: in this era of hype, it demonstrates to your boss that you could be replaced by AI. The more you use it, and no matter how much you can see that your human skills are really necessary to correct its mistakes, the more it appears to your boss that they should hire the AI instead of you. Or perhaps retain 10% of the people in roles like yours to manage the AI doing the other 90% of the work. Paradoxically, the *more* you get done in terms of raw output using generative AI, the more it looks to your boss as if there's an opportunity to get enough work done with even fewer expensive humans. Of course, the decision to fire you and lean more heavily into AI isn't really a good one for long-term profits and success, but the modern boss did not get where they are by considering long-term profits. By using AI, you are merely demonstrating your redundancy, and the more you get done with it, the more redundant you seem.
In fact, there's even a third dimension to this: by using generative AI, you're also providing its purveyors with invaluable training data that allows them to make it better at replacing you. It's generally quite shitty right now, but the more use it gets by competent & clever people, the better it can become at the tasks those specific people use it for. Using the currently-popular algorithm family, there are limits to this; I'm not saying it will eventually transcend the mediocrity it's entwined with. But it can absolutely go from underwhelmingly mediocre to almost-reasonably mediocre with the right training data, and data from prompting sessions is both rarer and more useful than the base datasets it's built on.
For all of these reasons, using generative AI in your job is a mistake that will likely lead to your future unemployment. To reiterate, you should already not be using it because it is evil and causes specific and inexcusable harms, but in case like so many you just don't care about those harms, I've just explained to you why for entirely selfish reasons you should not use it.
If you're in a position where your boss is forcing you to use it, my condolences. I suggest leaning into its failures instead of trying to get the most out of it, and as much as possible, showing your boss very clearly how it wastes your time and makes things slower. Also, point out the dangers of legal liability for its mistakes, and make sure your boss is aware of the degree to which any of your AI-eager coworkers are producing low-quality work that harms organizational goals.
Also, if you've read this far and aren't yet of an anarchist mindset, I encourage you to think about the implications of firing 75% of (at least the white-collar) workforce in order to make more profit while fueling the climate crisis and in most cases also propping up dictatorial figureheads in government. When *either* the AI bubble bursts *or* if the techbros get to live out the beginnings of their worker-replacement fantasies, there are going to be an unimaginable number of economically desperate people living in increasingly expensive times. I'm the kind of optimist who thinks that the resulting social crucible, though perhaps through terrible violence, will lead to deep social changes that effectively unseat from power the ultra-rich that continue to drag us all down this destructive path, and I think it's worth some thinking now about what you might want the succeeding stable social configuration to look like so you can advocate towards that during points of malleability.
As others have said more eloquently, generative AI *should* be a technology that makes human lives on average easier, and it would be were it developed & controlled by humanists. The only reason that it's not, is that it's developed and controlled by terrible greedy people who use their unfairly hoarded wealth to immiserate the rest of us in order to maintain their dominance. In the long run, for our very survival, we need to depose them, and I look forward to what the term "generative AI" will mean after that finally happens.

@drgeraint@glasgow.social
2025-09-06 20:24:42

UK emergency phone alert test tomorrow (Sunday) at 15:00. If you have a secret phone, you might want to turn it off.
bbc.co.uk/news/articles/c8e1n4

@inthehands@hachyderm.io
2025-08-31 19:38:53

The headline here — I think, I’m out of my depth — is that, contrary to what they did before, the Trump admin •did• in fact comply with an emergency order from a judge to stop a deportation flight.
Do not let anyone tell you authoritarianism is a done deal. Judges still have power. Activists still have power. We still have power. Nothing’s certain, nothing’s safe — but nothing is already decided either.
They want your defeatism. They want a fait accompli. Don’t give it to them. toad.social/@KimPerales/115125

@shoppingtonz@mastodon.social
2025-07-06 20:49:12

You want adonthell's soundtrack, or more specifically Waste's Edge's soundtrack?
What you want, if you are searching in Debian-related channels, is the debian package adonthell-data!
Though if you're on the fediverse you wouldn't mind if someone made it available to you right here, right?
I won't have time for that cause right now I'm focusing on Wesnoth main soundtrack...if I get time...

@crell@phpc.social
2025-07-26 16:53:27

A lot of websites have this thing where if you open multiple tabs, *any one of them* can time out and sign you out. Other tabs are not informed of this, and so break in mysterious ways without telling you what's going on.
If this is your website, *fix your fucking system*! Session timeouts belong on the server, not the browser.
This is inexcusably user-hostile design. And happens most with sites where you will want multiple tabs open, of course.
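The underlying point — the expiry clock lives in one place on the server, and a request from any tab resets it — can be illustrated with a toy file-per-session store (directory, TTL, and function name here are all invented for illustration; this is not any real web stack's API):

```shell
#!/bin/sh
# Toy server-side session store: one file per session id, mtime = last activity.
# SESSION_DIR, TTL, and session_valid are illustrative names only.
SESSION_DIR=/tmp/sessions
TTL=1800   # 30 minutes

session_valid() {
    f="$SESSION_DIR/$1"
    [ -f "$f" ] || return 1                                  # unknown session
    now=$(date +%s)
    last=$(stat -c %Y "$f" 2>/dev/null || stat -f %m "$f")   # GNU stat, then BSD fallback
    if [ $(( now - last )) -gt "$TTL" ]; then
        rm -f "$f"    # expired: every tab gets the same verdict
        return 1
    fi
    touch "$f"        # activity in any tab refreshes the one shared expiry
    return 0
}
```

Because both the check and the refresh happen server-side, no single tab can quietly expire on its own; each request either extends the shared session or receives a consistent logged-out response the app can surface.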

@azonenberg@ioc.exchange
2025-07-04 20:12:55

Finally finished the VSC8512 writeup! Ended up being just a biiiit longer than I had expected but there was a lot to talk about.
I still want to refactor my code a bit to be cleaner and more OO, what I have now is a bit quick-and-dirty, but it works.
serd.es/2025/07/04/Switch-proj

@christydena@zirk.us
2025-07-08 03:23:00

Question for any folk that do reader/audience/play testing.
I find that the more fidelity (detail, production value) a project has, the less accurate the user's identification of an issue. I try to interpret accordingly.
Do you know of any studies regarding this, or personal experiences (that confirm or refute this)? I have studies on how people don't know what they want, etc. But keen on the connection between fidelity & inaccuracy
Thank you! :)
Other relate…

@andycarolan@social.lol
2025-08-03 11:17:44

I need to rethink my Ko-fi shop. Most of what I have on there is free/pay what you want.
Recently, only a few people have paid anything for those items… this just leaves me feeling that my work isn’t valued.
#kofi #work #illustration

@unchartedworlds@scicomm.xyz
2025-07-12 22:23:27
Content warning: nice quote about science

"Fundamentally, what we're trying to do when we have evidence here in medicine or science is prevent ourselves from confusing randomness for a signal. ... we don't want to mistake something, we think it's going on and it's not. And the challenge, particularly with any intervention is you only get to see one version of reality. You can't give someone a drug, follow them, rewind history, not give them the drug and then follow them again."
- Adam Kucharski, being interviewed by Eric Topol
#science

@unixviking@social.linux.pizza
2025-07-05 06:59:28

Well, my first conclusion after, well, about two weeks of using Linux Mint: mixed.
If you've been using Fedora Linux for years, it's a noticeable step backwards. What runs smoothly under Fedora, where devices are recognized without any problems, requires a little extra help with Mint - and often more time...
Examples: I have a Brother DCP3515 multifunction laser printer. Under Fedora, it is recognized immediately via WLAN and if you want to print something, it can be done…

Netanyahu suggested that new plans for the forced relocation of refugees to other countries would give Palestinians the “freedom” to choose.
But what Palestinians actually want is “the freedom to return to the places from which their families were expelled,” says Peter Beinart, editor-at-large at Jewish Currents.
“What kind of freedom is it when you have an area where most of the buildings and the hospitals and the schools and the bakeries and the agriculture …

@FandaSin@social.linux.pizza
2025-06-24 10:43:16

HumbleBundle.com have Usagi Yojimbo bundle.
Those are comic books about Rabbit Rōnin set (mostly) in Edo period (I have no clue what that period was🤦, but it sounds cool 😆)
If you like samurais, old Japan or rabbits, you might like this.
humblebundle.com/books/us…

@arXiv_csCL_bot@mastoxiv.page
2025-08-06 10:01:00

Pay What LLM Wants: Can LLM Simulate Economics Experiment with 522 Real-human Persona?
Junhyuk Choi, Hyeonchu Park, Haemin Lee, Hyebeen Shin, Hyun Joung Jin, Bugeun Kim
arxiv.org/abs/2508.03262

@teledyn@mstdn.ca
2025-08-04 04:46:31

This must be the ultimate #monsterdon
- monster? Check
- nuclear terror? Check
- hip soundtrack? Check
- unbearably bad acting? Check
- unbearably bad writing? Check
- unbearably bad voice-over? Check
- unbearably bad editing? Check
- long sequences of irrelevant stock footage? Check
- bizarre plot twist you didn't expect? Check
- solid inspiration to any aspiring film maker who thinks they aren't good enough or have budget enough or skills enough to gain eternal global distribution? Check
What more could you want?
"Monster a Go-Go" (Herschell Gordon Lewis, 1965) - FULL MOVIE
youtube.com/watch?v=btJoXBIv2S

@tiotasram@kolektiva.social
2025-07-28 13:04:34

How popular media gets love wrong
Okay, so what exactly are the details of the "engineered" model of love from my previous post? I'll try to summarize my thoughts and the experiences they're built on.
1. "Love" can be thought of like a mechanism that's built by two (or more) people. In this case, no single person can build the thing alone; to work, it needs contributions from multiple people (I suppose self-love might be an exception to that). In any case, the builders can intentionally choose how they build (and maintain) the mechanism, they can build it differently to suit their particular needs/wants, and they will need to maintain and repair it over time to keep it running. It may need winding, or fuel, or charging plus oil changes and bolt-tightening, etc.
2. Any two (or more) people can choose to start building love between them at any time. No need to "find your soulmate" or "wait for the right person." Now the caveat is that the mechanism is difficult to build and requires lots of cooperation, so there might indeed be "wrong people" to try to build love with. People in general might experience more failures than successes. The key component is slowly-escalating shared commitment to the project, which is negotiated between the partners so that neither one feels like they've been left to do all the work themselves. Since it's a big scary project though, it's very easy to decide it's too hard and give up, and so the builders need to encourage each other and pace themselves. The project can only succeed if there's mutual commitment, and that will certainly require compromise (sometimes even sacrifice, though not always). If the mechanism works well, the benefits (companionship; encouragement; praise; loving sex; hugs; etc.) will be well worth the compromises you make to build it, but this isn't always the case.
3. The mechanism is prone to falling apart if not maintained. In my view, the "fire" and "appeal" models of love don't adequately convey the need for this maintenance and lead to a lot of under-maintained relationships many of which fall apart. You'll need to do things together that make you happy, do things that make your partner happy (in some cases even if they annoy you, but never in a transactional or box-checking way), spend time with shared attention, spend time alone and/or apart, reassure each other through words (or deeds) of mutual beliefs (especially your continued commitment to the relationship), do things that comfort and/or excite each other physically (anywhere from hugs to hand-holding to sex) and probably other things I'm not thinking of. Not *every* relationship needs *all* of these maintenance techniques, but I think most will need most. Note especially that patriarchy teaches men that they don't need to bother with any of this, which harms primarily their romantic partners but secondarily them as their relationships fail due to their own (cultivated-by-patriarchy) incompetence. If a relationship evolves to a point where one person is doing all the maintenance (& improvement) work, it's been bent into a shape that no longer really qualifies as "love" in my book, and that's super unhealthy.
4. The key things to negotiate when trying to build a new love are first, how to work together in the first place, and how to be comfortable around each other's habits (or how to change those habits). Second, what level of commitment you have right now, and how/when you want to increase that commitment. Additionally, I think it's worth checking in about what you're each putting into and getting out of the relationship, to ensure that it continues to be positive for all participants. To build a successful relationship, you need to be able to incrementally increase the level of commitment to one that you're both comfortable staying at long-term, while ensuring that for both partners, the relationship is both a net benefit and has manageable costs (those two things are not the same). Obviously it's not easy to actually have conversations about these things (congratulations if you can just talk about this stuff) because there's a huge fear of hearing an answer that you don't want to hear. I think the range of discouraging answers which actually spell doom for a relationship is smaller than people think and there's usually a reasonable "shoulder" you can fall into where things aren't on a good trajectory but could be brought back into one, but even so these conversations are scary. Still, I think only having honest conversations about these things when you're angry at each other is not a good plan. You can also try to communicate some of these things via non-conversational means, if that feels safer, and at least being aware that these are the objectives you're pursuing is probably helpful.
I'll post two more replies here about my own experiences that led me to this mental model and trying to distill this into advice, although it will take me a moment to get to those.
#relationships #love

@tml@urbanists.social
2025-06-30 20:44:21

What is your favourite physical book that you want to be seen reading in public? Wrong answers only.
theguardian.com/lifeandstyle/2

@thomasfuchs@hachyderm.io
2025-08-06 16:06:38

There’s a crisis in tech product innovation. From when I got into tech when I was maybe 8 or 9 in the late 80s to around 2010 or so there seemed to be something new and innovative—sometimes even world-changing—out at least once a year.
Now my iPad Pro is 7 years old, and I have literally no idea why I would want to upgrade it.
I don't even know other than "faster but you probably won't notice it" what the current iPad Pro has over the one from 2018.
Fwiw 7 years is the span of time between these two Apple products:

@steadystatemcr@mstdn.social
2025-08-26 16:37:09

We've updated our FAQs | Steady State Manchester #degrowth

@azonenberg@ioc.exchange
2025-08-01 04:52:10

Google calendar notifications are not cutting it... Anybody have suggestions on a better "organize all the stuff you have to / want to do" tool?
I'm not even quite sure what I want, other than "tasks that sat around for a year uncompleted should not auto-delete" and "tasks should be able to block other tasks".
I guess the "easy" option is a private github repo that is empty and only used as an issue tracker, but then I'd have to sig…

@thesaigoneer@social.linux.pizza
2025-07-30 04:34:33

Slackpkg: you have two versions of the same file (kernel-generic 6.15.8 & 6.16), what do you want to do?
R (remove) or I (ignore)?
Me: R of course!
Slackpkg: selects both
Me: hits Enter
Reboot: there's no kernel image to be found 🤪
A few minutes later...
Me: reboots into live iso, mount and chroot, reinstall kernel, generate grub, reboot, all good.
There's no limit to my stupidity 😂

@brandizzi@mastodon.social
2025-09-03 15:07:14

Do you have to put images in documents and sites, want to add alt text, but this is just soooo boring? Here is a lifehack that helped me out a lot, still helps:
When you save the image file, use a description of the image as its name.
Many apps use the file name as alt text. Even if not, the alt text will be there already to be copied.
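A minimal sketch of the idea (my own toy code; `img_tag` is a hypothetical helper, not an existing library function) — derive the alt text from a descriptive file name when generating an image tag:

```python
# Sketch: turn a descriptively-named image file into an <img> tag
# whose alt text is recovered from the file name itself.
from pathlib import Path

def img_tag(path: str) -> str:
    # The stem is the file name without its extension; hyphens and
    # underscores become spaces to reconstruct the description.
    alt = Path(path).stem.replace("-", " ").replace("_", " ")
    return f'<img src="{path}" alt="{alt}">'

print(img_tag("golden-retriever-catching-a-frisbee.jpg"))
# → <img src="golden-retriever-catching-a-frisbee.jpg" alt="golden retriever catching a frisbee">
```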
What is surprising is how easy it is: I never struggle to name the file, but typing the same in an alt text textbox is somehow so annoying..…

@tiotasram@kolektiva.social
2025-08-04 15:49:00

Should we teach vibe coding? Here's why not.
Should AI coding be taught in undergrad CS education?
1/2
I teach undergraduate computer science labs, including for intro and more-advanced core courses. I don't publish (non-negligible) scholarly work in the area, but I've got years of craft expertise in course design, and I do follow the academic literature to some degree. In other words, I'm not the world's leading expert, but I have spent a lot of time thinking about course design, and consider myself competent at it, with plenty of direct experience in what knowledge & skills I can expect from students as they move through the curriculum.
I'm also strongly against most uses of what's called "AI" these days (specifically, generative deep neural networks as supplied by our current cadre of techbros). There are a surprising number of completely orthogonal reasons to oppose the use of these systems, and a very limited number of reasonable exceptions (overcoming accessibility barriers is an example). On the grounds of environmental and digital-commons-pollution costs alone, using specifically the largest/newest models is unethical in most cases.
But as any good teacher should, I constantly question these evaluations, because I worry about the impact on my students should I eschew teaching relevant tech for bad reasons (and even for good reasons). I also want to make my reasoning clear to students, who should absolutely question me on this. That inspired me to ask a simple question: ignoring for one moment the ethical objections (which we shouldn't, of course; they're very stark), at what level in the CS major could I expect to teach a course about programming with AI assistance, and expect students to succeed at a more technically demanding final project than a course at the same level where students were banned from using AI? In other words, at what level would I expect students to actually benefit from AI coding "assistance?"
To be clear, I'm assuming that students aren't using AI in other aspects of coursework: the topic of using AI to "help you study" is a separate one (TL;DR its gross value is not negative, but it's mostly not worth the harm to your metacognitive abilities, which AI-induced changes to the digital commons are making more important than ever).
So what's my answer to this question?
If I'm being incredibly optimistic, senior year. Slightly less optimistic, second year of a masters program. Realistic? Maybe never.
The interesting bit for you-the-reader is: why is this my answer? (Especially given that students would probably self-report significant gains at lower levels.) To start with, [this paper where experienced developers thought that AI assistance sped up their work on real tasks when in fact it slowed it down](arxiv.org/abs/2507.09089) is informative. There are a lot of differences in task between experienced devs solving real bugs and students working on a class project, but it's important to understand that we shouldn't have a baseline expectation that AI coding "assistants" will speed things up in the best of circumstances, and we shouldn't trust self-reports of productivity (or the AI hype machine in general).
Now we might imagine that coding assistants will be better at helping with a student project than at helping with fixing bugs in open-source software, since it's a much easier task. For many programming assignments that have a fixed answer, we know that many AI assistants can just spit out a solution based on prompting them with the problem description (there's another elephant in the room here to do with learning outcomes regardless of project success, but we'll ignore this one too; my focus here is on project-complexity reach, not learning outcomes). My question is about more open-ended projects, not assignments with an expected answer. Here's a second study (by one of my colleagues) about novices using AI assistance for programming tasks. It showcases how difficult it is to use AI tools well, and some of the stumbling blocks that novices in particular face.
But what about intermediate students? Might there be some level where the AI is helpful because the task is still relatively simple and the students are good enough to handle it? The problem with this is that as task complexity increases, so does the likelihood of the AI generating (or copying) code that uses more complex constructs which a student doesn't understand. Let's say I have second year students writing interactive websites with JavaScript. Without a lot of care (of a kind those students don't yet know how to deploy), the AI is likely to suggest code that depends on several different frameworks, from React to jQuery, without actually setting up or including those frameworks, and of course these students would be way out of their depth trying to do that. This is a general problem: each programming class carefully limits the specific code frameworks and constructs it expects students to know based on the material it covers. There is no feasible way to limit an AI assistant to a fixed set of constructs or frameworks, using current designs. There are alternate designs where this would be possible (like AI search through adaptation from a controlled library of snippets) but those would be entirely different tools.
So what happens on a sizeable class project where the AI has dropped in buggy code, especially if it uses code constructs the students don't understand? Best case, they understand that they don't understand and re-prompt, or quickly ask for help from an instructor or TA who helps them get rid of the stuff they don't understand and re-prompt or manually add stuff they do. Average case: they waste several hours and/or sweep the bugs partly under the rug, resulting in a project with significant defects. Students in their second and even third years of a CS major still have a lot to learn about debugging, and usually have significant gaps in their knowledge of even their most comfortable programming language. I do think regardless of AI we as teachers need to get better at teaching debugging skills, but the knowledge gaps are inevitable because there's just too much to know. In Python, for example, the LLM is going to spit out yields, async functions, try/finally, maybe even something like a while/else, or with recent training data, the walrus operator. I can't expect even a fraction of 3rd year students who have worked with Python since their first year to know about all these things, and based on how students approach projects where they have studied all the relevant constructs but have forgotten some, I'm not optimistic that these things will magically become learning opportunities. Student projects are better off working with a limited subset of full programming languages that the students have actually learned, and using AI coding assistants as currently designed makes this impossible. Beyond that, even when the "assistant" just introduces bugs using syntax the students understand, even through their 4th year many students struggle to understand the operation of moderately complex code they've written themselves, let alone written by someone else. Having access to an AI that will confidently offer incorrect explanations for bugs will make this worse.
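For concreteness, here's a perfectly valid snippet in the style an assistant might emit (my own example, not LLM output), packing in several of the constructs named above (walrus operator, while/else, try/finally) that many 2nd- or 3rd-year students will never have studied:

```python
# Valid Python, but dense with constructs outside most intro curricula.
import io

def first_long_line(stream, min_len=10):
    try:
        # Walrus operator: assign and test in the loop condition.
        while (line := stream.readline()):
            if len(line.rstrip("\n")) >= min_len:
                return line.rstrip("\n")
        else:
            # while/else: runs only if the loop wasn't exited by break.
            return None
    finally:
        stream.close()  # runs even after an early return

print(first_long_line(io.StringIO("hi\nthis line is long\n")))
# → this line is long
```

Nothing here is wrong, but a student who has only seen basic for-loops has little chance of debugging it when something goes subtly awry.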
To be sure a small minority of students will be able to overcome these problems, but that minority is the group that has a good grasp of the fundamentals and has broadened their knowledge through self-study, which earlier AI-reliant classes would make less likely to happen. In any case, I care about the average student, since we already have plenty of stuff about our institutions that makes life easier for a favored few while being worse for the average student (note that our construction of that favored few as the "good" students is a large part of this problem).
To summarize: because AI assistants introduce excess code complexity and difficult-to-debug bugs, they'll slow down rather than speed up project progress for the average student on moderately complex projects. On a fixed deadline, they'll result in worse projects, or necessitate less ambitious project scoping to ensure adequate completion, and I expect this remains broadly true through 4-6 years of study in most programs (don't take this as an endorsement of AI "assistants" for masters students; we've ignored a lot of other problems along the way).
There's a related problem: solving open-ended project assignments well ultimately depends on deeply understanding the problem, and AI "assistants" allow students to put a lot of code in their file without spending much time thinking about the problem or building an understanding of it. This is awful for learning outcomes, but also bad for project success. Getting students to see the value of thinking deeply about a problem is a thorny pedagogical puzzle at the best of times, and allowing the use of AI "assistants" makes the problem much much worse. This is another area I hope to see (or even drive) pedagogical improvement in, for what it's worth.
1/2

@june_thalia_michael@literatur.social
2025-06-25 05:27:49

#EroticMusings week 4: Is your MC the kind of person you’d want a relationship with? How would they feel about you?
I would love to have Fabiola as a friend (though without the additional benefits - while I enjoy writing about this kind of relationship, it's not something I personally want. What I'm into when writing and outside of it are actually pretty different things). S…

@hikingdude@mastodon.social
2025-07-09 05:49:36

The new #bergwelten magazine arrived!
I'm looking forward to seeing some good photos and reading some nice articles.
Btw, what do you do with read magazines? I don't want to throw them away but I don't look at them either when I put them into the bookshelf 🤔

A yellow magazine is seen lying on a brown table, with a dominant color scheme of yellow and brown. The magazine appears to have a house on top of a hill on its cover. The setting seems to be outdoors, possibly near a lake, with water visible in the background.  The magazine is the main focus, showcasing a publication that may offer articles or stories related to outdoor living or nature.
@pre@boing.world
2025-06-20 22:54:36
Content warning: Doctor Who - Future, why Billie?
:tardis:

There's a woman I know who, when she was pregnant, was very keen to hear the opinions of crystal diviners and homeopath medics on what sex her new baby would be, but wouldn't let the ultrasound-scan technician that actually knows tell her, because Spoilers.
On that note, I'm happy to watch #doctorWho #badWolf #tv

@tiotasram@kolektiva.social
2025-08-11 13:30:26

Speculative politics
As an anarchist (okay, maybe not in practice), I'm tired of hearing why we have to suffer X and Y indignity to "preserve the rule of law" or "maintain Democratic norms." So here's an example of what representative democracy (a form of government that I believe is inherently flawed) could look like if its proponents had even an ounce of imagination, and/or weren't actively trying to rig it to favor a rich donor class:
1. Unicameral legislature, where representatives pass laws directly. Each state elects 3 statewide representatives: the three most-popular candidates in a statewide race where each person votes for one candidate (ranked preference voting would be even better but might not be necessary, and is not a solution by itself). Instead of each representative getting one vote in the chamber, they get N votes, where N is the number of people who voted for them. This means that in a close race, instead of the winner getting all the power, the power is split. Having 3 representatives trades off between legislature size and ensuring that two parties can't dominate together.
2. Any individual citizen can contact their local election office to switch or withdraw their vote at any time (maybe with a 3-day delay or something). Voting power of representatives can thus shift even without an election. They are limited to choosing one of the three elected representatives, or "none of the above." If the "none of the above" fraction exceeds 20% of eligible voters, a new election is triggered for that state. If turnout is less than 80%, a second election happens immediately, with results being final even at lower turnout until 6 months later (some better mechanism for turnout management might be needed).
3. All elections allow mail-in ballots, and in-person voting happens Sunday-Tuesday with the Monday being a mandatory holiday. (Yes, election integrity is not better in this system and that's a big weakness.)
4. Separate nationwide elections elect three positions for head-of-state: one with diplomatic/administrative powers, another with military powers, and a third with veto power. For each position, the top three candidates serve together, with only the first-place winner having actual power until vote switches or withdrawals change who that is. Once one of these heads loses their first-place status, they cannot get it again until another election, even if voters switch preferences back (to avoid dithering). An election for one of these positions is triggered when 20% have withdrawn their votes, or if all three people initially elected have been disqualified by losing their lead in the vote count.
5. Laws that involve spending money are packaged with specific taxes to pay for them, and may only be paid for by those specific revenues. Each tax may be opted into or out of by each taxpayer; where possible opting out of the tax also opts you out of the service. (I'm well aware of a lot of the drawbacks of this, but also feel like they'd not necessarily be worse than the drawbacks of our current system.) A small mandatory tax would cover election expenses.
6. I'm running out of attention, but similar multi-winner elections could elect panels of judges from which a subset is chosen randomly to preside in each case.
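To illustrate points 1-2, here's a toy sketch (my own code, purely illustrative, not a proposal for real election infrastructure) of weighted chamber votes and vote withdrawal shifting power between elections:

```python
# Toy model of points 1-2: a representative's chamber vote weight is
# their current supporter count, and supporters may switch or withdraw
# (to "none") at any time, possibly triggering a new election.

ELIGIBLE_VOTERS = 1000
support = {"Rep A": 450, "Rep B": 350, "Rep C": 150, "none": 50}

def chamber_votes(rep: str) -> int:
    """A representative casts as many votes as they have supporters."""
    return support[rep]

def switch(frm: str, to: str, n: int) -> None:
    """n citizens move their support (possibly to 'none')."""
    assert support[frm] >= n
    support[frm] -= n
    support[to] += n

def new_election_triggered() -> bool:
    """Over 20% of eligible voters on 'none' forces a new election."""
    return support["none"] > 0.2 * ELIGIBLE_VOTERS

switch("Rep A", "none", 200)  # 200 voters withdraw from Rep A
# Rep A now casts 250 chamber votes, and 'none' holds 25% -> new election.
```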
Now I'll point out once again that this system, in not directly confronting capitalism, racism, patriarchy, etc., is probably doomed to the same failures as our current system. But if you profess to want a "representative democracy" as opposed to something more liberatory, I hope you'll at least advocate for something like this that actually includes meaningful representation as opposed to the current US system that's engineered to quash it.
Key questions: "Why should we have winner-take-all elections when winners-take-proportionately-to-votes is right there?" and "Why should elected officials get to ignore their constituents' approval except during elections, when vote-withdrawal or -switching is possible?"
2/2
#Democracy

@inthehands@hachyderm.io
2025-08-21 17:07:23

What is happening? Have I gone through the looking glass?? C-suite types are saying things about AI that…actually make sense?!?
AWS CEO Matt Garman: “My view is you absolutely want to keep hiring kids out of college and teaching them the right ways to go build software and decompose problems and think about it, just as much as you ever have.” mastodon.social/@fromjason/115

@blakes7bot@mas.torpidity.net
2025-09-04 09:15:34

Series B, Episode 13 - Star One
DURKIM: There isn't.
SERVALAN: And if you want to keep your job you'll find it.
DURKIM: Why won't you face the facts?
SERVALAN: Because I'm not convinced. And even if I were, there would be nothing I could do about it.
blake.torpidity.net/m/213/33

Claude 3.7 describes the image as: "This black and white image shows a scene with two people in conversation. The person on the left has a very short, pixie-style haircut and is wearing what appears to be a light-colored outfit with a pearl necklace. They have an intense, contemplative expression while looking up at the other person. The individual on the right, shown in profile, has darker hair and appears to be wearing a dark high-collared garment.

The setting features a geometric patterned …
@thomastraynor@social.linux.pizza
2025-06-27 13:03:08

User request for access to the main SharePoint site I administer. No, you will NOT get access to the main work SharePoint site. Even though many subsites and file repositories are locked with special permissions, just knowing what is there isn't something we want everyone to see.
Only a few people have access to the main site, and most of that access is browse-only and required for their job.

@unchartedworlds@scicomm.xyz
2025-07-24 07:30:11
Content warning: a nice thing - yesterday's BiCon pre-meet

Hosted a BiCon pre-meet yesterday, online. Conveniently there were exactly 12 people there for most of it (not counting me), perfect for dividing into threes! I kept switching the groups so that people could meet different people.
We talked about how we'd each like BiCon to be, and how we could make it more likely to turn out that way.
Top tips: get enough sleep, eat enough food, and don't try to do everything!
Then we also talked about what contribution we might like to make - though I also said, just being there and being friendly and making BiCon more varied is a contribution in itself :-)
Several of the people who'd come along turned out to be already signed up to offer workshop sessions, so we heard a little bit about those.
Two tasks currently available if you want one are (a) keeping an eye on the Zoom setup for the hybrid events, (b) leafleting at Pride on Saturday, so that more people know about BiCon for Sunday. There's usually also opportunities to assist with being welcoming at reception.
In-person BiCon starts tomorrow, and runs Friday till Sunday. The venue is a couple of buildings belonging to the girls' high school, in between the Forest and the Arboretum. I tagged along for a site visit the other day and I think it's pretty good for air quality.
Apparently about 70 people have booked so far. It's also possible to buy a ticket on the day, so that might not be the final total.
As I reminded people last night, you don't have to be bi to come to BiCon! And if you _are_ bi, you don't have to be any particular amount of bi :-)
#BiCon #Nottingham

@tiotasram@kolektiva.social
2025-06-24 09:39:49

Subtooting since people in the original thread wanted it to be over, but selfishly tagging @… and @… whose opinions I value...
I think that saying "we are not a supply chain" is exactly what open-source maintainers should be doing right now in response to "open source supply chain security" threads.
I can't claim to be an expert and don't maintain any important FOSS stuff, but I do release almost all of my code under open licenses, and I do use many open source libraries, and I have felt the pain of needing to replace an unmaintained library.
There's a certain small-to-mid-scale class of program, including many open-source libraries, which can be built/maintained by a single person, and which to my mind best operate on a "snake growth" model: incremental changes/fixes, punctuated by periodic "skin-shedding" phases where major rewrites or version updates happen. These projects aren't immortal either: as the whole tech landscape around them changes, they become unnecessary and/or people lose interest, so they go unmaintained and eventually break. Each time one of their dependencies breaks (or has a skin-shedding moment) there's a higher probability that they break or shed too, as maintenance needs shoot up at these junctures. Unless you're a company trying to make money from a single long-lived app, it's actually okay that software churns like this, and if you're a company trying to make money, your priorities absolutely should not factor into any decisions people making FOSS software make: we're trying (and to a huge extent succeeding) to make a better world (and/or just have fun with our own hobbies & share that fun with others) that leaves behind the corrosive & planet-destroying plague which is capitalism, and you're trying to personally enrich yourself by embracing that plague. The fact that capitalism is *evil* is not an incidental thing in this discussion.
To make an imperfect analogy, imagine that the peasants of some domain have set up a really-free-market, where they provide each other with free stuff to help each other survive, sometimes doing some barter perhaps but mostly just everyone bringing their surplus. Now imagine the lord of the domain, who is the source of these peasants' immiseration, goes to this market secretly & takes some berries, which he uses as one ingredient in delicious tarts that he then sells for profit. But then the berry-bringer stops showing up to the free market, or starts bringing a different kind of fruit, or even ends up bringing rotten berries by accident. And the lord complains "I have a supply chain problem!" Like, fuck off dude! Your problem is that you *didn't* want to build a supply chain and instead thought you would build your profit-focused business in other people's free stuff. If you were paying the berry-picker, you'd have a supply chain problem, but you weren't, so you really have an "I want more free stuff" problem when you can't be arsed to give away your own stuff for free.
There can be all sorts of problems in the really-free-market, like maybe not enough people bring socks, so the peasants who can't afford socks are going barefoot, and having foot problems, and the peasants put their heads together and see if they can convince someone to start bringing socks, and maybe they can't and things are a bit sad, but the really-free-market was never supposed to solve everyone's problems 100% when they're all still being squeezed dry by their taxes: until they are able to get free of the lord & start building a lovely anarchist society, the really-free-market is a best-effort kind of deal that aims to make things better, and sometimes will fall short. When it becomes the main way goods in society are distributed, and when the people who contribute aren't constantly drained by the feudal yoke, at that point the availability of particular goods is a real problem that needs to be solved, but at that point, it's also much easier to solve. And at *no* point does someone coming into the market to take stuff only to turn around and sell it deserve anything from the market or those contributing to it. They are not a supply chain. They're trying to help each other out, but even then they're doing so freely and without obligation. They might discuss amongst themselves how to better coordinate their mutual aid, but they're not going to end up forcing anyone to bring anything or even expecting that a certain person contribute a certain amount, since the whole point is that the thing is voluntary & free, and they've all got changing life circumstances that affect their contributions. Celebrate whatever shows up at the market, express your desire for things that would be useful, but don't impose a burden on anyone else to bring a specific thing, because otherwise it's fair for them to oppose such a burden on you, and now you two are doing your own barter thing that's outside the parameters of the really-free-market.

@tiotasram@kolektiva.social
2025-07-28 13:06:20

How popular media gets love wrong
Now a bit of background about why I have this "engineered" model of love:
First, I'm a white straight cis man. I've got a few traits that might work against my relationship chances (e.g., neurodivergence; I generally fit pretty well into the "weird geek" stereotype), but as I was recently reminded, it's possible my experience derives more from luck than other factors, and since things are tilted more in my favor than most people on the planet, my advice could be worse than useless if it leads people towards strategies that would only have worked for someone like me. I don't *think* that's the case, but it's worth mentioning explicitly.
When I first started dating my now-wife, we were both in graduate school. I was 26, and had exactly zero dating/romantic experience though that point in my life. In other words, a pretty stereotypical "incel" although I definitely didn't subscribe to incel ideology at all. I felt lonely, and vaguely wanted a romantic relationship (I'm neither aromantic nor asexual), but had never felt socially comfortable enough to pursue one before. I don't drink and dislike most social gatherings like parties or bars; I mostly hung around the fringes of the few college parties I attended, and although I had a reasonable college social life in terms of friends, I didn't really do anything to pursue romance, feeling too awkward to know where to start. I had the beginnings of crushes in both high school and college, but never developed a really strong crush, probably correlated with not putting myself in many social situations outside of close all-male friend gatherings. I never felt remotely comfortable enough to act on any of the proto-crushes I did have. I did watch porn and masturbate, so one motivation for pursuing a relationship was physical intimacy, but loneliness was as much of a motivating factor, and of course the social pressure to date was a factor too, even though I'm quite contrarian.
I'm lucky in that I had some mixed-gender social circles already like intramural soccer and a graduate-student housing potluck. Graduate school makes a *lot* more of these social spaces accessible, so I recognize that those not in school of some sort have a harder time of things, especially if like me they don't feel like they fit in in typical adult social spaces like bars.
However, at one point I just decided that my desire for a relationship would need action on my part and so I'd try to build a relationship and see what happened. I worked up my courage and asked one of the people in my potluck if she'd like to go for a hike (pretty much clearly a date but not explicitly one; in retrospect not the best first-date modality in a lot of ways, but it made a little more sense in our setting where we could go for a hike from our front door). To emphasize this point: I was not in love with (or even infatuated with) my now-wife at that point. I made a decision to be open to building a relationship, but didn't follow the typical romance story formula beyond that. Now of course, in real life as opposed to popular media, this isn't anything special. People ask each other out all the time just because they're lonely, and some of those relationships turn out fine (although many do not).
I was lucky in that some aspects of who I am and what I do happened to be naturally comforting to my wife (natural advantage in the "appeal" model of love) but of course there are some aspects of me that annoy my wife, and we negotiate that. In the other direction, there's some things I instantly liked about my wife, and other things that still annoy me. We've figured out how to accept a little, change a little, and overall be happy with each other (though we do still have arguments; it's not like the operation/construction/maintenance of the "love mechanism" is always perfectly smooth). In particular though, I approached the relationship with the attitude of "I want to try to build a relationship with this person," at first just because of my own desires for *any* relationship, and then gradually more and more through my desire to build *this specific* relationship as I enjoyed the rewards of companionship.
So for example, while I think my wife is objectively beautiful, she's also *subjectively* very beautiful *to me* because having decided to build a relationship with her, I actively tried to see her as beautiful, rather than trying to judge whether I wanted a relationship with her based on her beauty. In other words, our relationship is more causative of her beauty-to-me than her beauty-to-me is causative of our relationship. This is the biggest way I think the "engineered" model of love differs from the "fire" and "appeal" models: you can just decide to build love independent of factors we typically think of as engendering love (NOT independent of your partner's willingness to participate, of course), and then all of those things like "thinking your partner is beautiful" can be a result of the relationship you're building. For sure those factors might affect who is willing to try building a relationship with you in the first place, but if more people were willing to jump into relationship building (not necessarily with full commitment from the start) without worrying about those other factors, they might find that those factors can come out of the relationship instead of being prerequisites for it. I think this is the biggest failure of the "appeal" model in particular: yes you *do* need to do things that appeal to your partner, but it's not just "make myself lovable" it's also: is your partner putting in the effort to see the ways that you are beautiful/lovable/etc., or are they just expecting you to become exactly some perfect person they've imagined (and/or been told to desire by society)? The former is perfectly possible, and no less satisfying than the latter.
To cut off my rambling a bit here, I'll just add that in our progress from dating through marriage through staying-married, my wife and I have both talked at times explicitly about commitment, and especially when deciding to get married, I told her that I knew I couldn't live up to the perfect model of a husband that I'd want to be, but that if she wanted to deepen our commitment, I was happy to do that, and so we did. I also rearranged my priorities at that point, deciding that I knew I wanted to prioritize this relationship above things like my career or my research interests, and while I've not always been perfect at that in my little decisions, I've been good at holding to that in my big decisions at least. In the end, *once we had built a somewhat-committed relationship*, we had something that we both recognized was worth more than most other things in life, and that let us commit even more, thus getting even more out of it in the long term. Obviously you can't start the first date with an expectation of life-long commitment, and you need to synchronize your increasing commitment to a relationship so that it doesn't become lopsided, which is hard. But if you take the commitment as an active decision and as the *precursor* to things like infatuation, attraction, etc., you can build up to something that's incredibly strong and rewarding.
I'll follow this up with one more post trying to distill some advice from my ramblings.
#relationships #love

@tiotasram@kolektiva.social
2025-07-03 15:21:37

#ScribesAndMakers for July 3: When (and if) you procrastinate, what do you do? If you don't, what do you do to avoid it?
I'll swap right out of programming to read a book, play a video game, or watch some anime. Often got things open in other windows so it's as simple as alt-tab.
I've noticed recently I tend to do this more often when I have a hard problem to solve that I'm not 100% sure about. I definitely have cycles of better & worse motivation and I've gotten to a place where I'm pretty relaxed about it instead of feeling guilty. I work how I work, and that includes cycles of rest, and that's enough (at least, for me it has been so far, and I'm in a comfortable career, married with 2 kids).
Some projects ultimately lose steam and get abandoned, and I've learned to accept that too. I learn a lot and grow from each project, so nothing is a true waste of time, and there remains plenty of future ahead of me to achieve cool things.
The procrastination does sometimes impact my wife & kids, and that's something I do sometimes feel bad about, but I think I keep that in check well enough, and for things my wife worries about, I usually don't procrastinate those too much (used to be worse about this).
Right now I'm procrastinating a big work project by working on a hobby project instead. The work project probably won't get done by the start of the semester as a result. But as I remind myself, my work doesn't actually pay me to work during the summer, and things will be okay without the work project being finished until later.
When I want to force myself into a more productive cycle, talking to people about project details sometimes helps, as does finding some new tech I can learn about by shoehorning it into a project. Have been thinking about talking to a rubber duck, but haven't motivated myself to try that yet, and I'm not really in doldrums right now.

@unchartedworlds@scicomm.xyz
2025-08-30 14:23:55
Content warning: the knock-on effects of open sign-ups

What happens when you don't vet sign-ups is that mods on other instances who value the safety of their users have to pick up your slack.
The extensive work illustrated in the linked post (from @…) is also taking place to varying degrees on every other instance which still federates with mastodon.social and the other open-sign-up ones.
This is like house-sharing with someone who repeatedly leaves the front door unlocked.
Yes of course there are much horribler instances, but those tend to be blocked wholesale in my part of Fedi. Among the instances we do federate with, the spam & scam accounts I see are nearly always on m.s.
If mastodon.social mods (who apparently are paid!) were to make people introduce themselves before approving new accounts, then a lot of this spam wouldn't be getting in the door. Quash once at source, save multiple other people from having to repeat the same work.
I appreciate that they're trying to make it easy for newcomers to join, but at what cost? And is an intro message really beyond the typical non-techie person? I think there are some considerably higher barriers to adoption than that. Not convinced it's a good tradeoff.
I don't actually want this instance to defederate from m.s, because lots of the people I follow are on there. But I can really see why people sometimes do.
#FediMeta #moderation #OpenSignups

@tiotasram@kolektiva.social
2025-09-13 12:42:44

Obesity & diet
I wouldn't normally share a positive story about the new diet drugs, because I've seen someone get obsessed with them who was at a perfectly acceptable weight *by majority standards* (surprise: every weight is in fact perfectly acceptable by *objective* standards, because every "weight-associated" health risk is its own danger that should be assessed *in individuals*). I think two almost-contradictory things:
1. In a society shuddering under the burden of metastasized fatmisia, there's a very real danger in promoting the new diet drugs because lots of people who really don't need them will be psychologically bullied into using them and suffer from the cost and/or side effects.
2. For many individuals under the assault of our society's fatmisia, "just ignore it" is not a sufficient response, and also for specific people for whom decreasing their weight can address *specific* health risks/conditions that they *want* to address that way, these drugs can be a useful tool.
I know @… to be a trustworthy & considerate person, so I think it's responsible to share this:
#Fat #Diet #Obesity

@shoppingtonz@mastodon.social
2025-07-03 09:02:26

I don't like my reliance on DumbTube...
If you have noticed sometimes I upload videos...
that is sort of like my reaction to DumbTube, I usually don't find what I want in there, some sort of "algorithm" "tries to make things better" while in actuality it wastes thousands of good videos that will remain unwatched cause "algo over good reasoning" or whatever.
#DumbTube

@tiotasram@kolektiva.social
2025-07-06 12:45:11

So I've found my answer after maybe ~30 minutes of effort. First stop was the first search result on Startpage (millennialhawk.com/does-poop-h), which has some evidence of maybe-AI authorship but which is better than a lot of slop. It actually has real links & cites research, so I'll start by looking at the sources.
It claims near the top that poop contains 4.91 kcal per gram (note: 1 kcal = 1 Calorie = 1000 calories, a fact I could verify and do trust despite the slop in that search). Now obviously, without a range or mention of an average, this isn't the whole picture, but maybe it's an average to start from? However, the citation link is to a study (pubmed.ncbi.nlm.nih.gov/322359) which only included 27 people with impaired glucose tolerance and obesity. It might contain the cited stat, but it's definitely not a broadly representative one if this is the source. The public abstract does not include the stat cited, and I don't want to pay for the article. I happen to be affiliated with a university library, so I could see if I have access that way, but it's a pain to do and not worth it for this study that I know is too specific. Also most people wouldn't have access that way.
Side note: this doing-the-research project has the nice benefit of letting you see lots of cool stuff you wouldn't have otherwise. The abstract of this study is pretty cool and I learned a bit about gut microbiome changes from just reading the abstract.
My next move was to look among citations in this article to see if I could find something about calorie content of poop specifically. Luckily the article page had indicators for which citations were free to access. I ended up reading/skimming 2 more articles (a few more interesting facts about gut microbiomes were learned) before finding this article whose introduction has what I'm looking for: pmc.ncbi.nlm.nih.gov/articles/
Here's the relevant paragraph:
"""
The alteration of the energy-balance equation, which is defined by the equilibrium of energy intake and energy expenditure (1–5), leads to weight gain. One less-extensively-studied component of the energy-balance equation is energy loss in stools and urine. Previous studies of healthy adults showed that ≈5% of ingested calories were lost in stools and urine (6). Individuals who consume high-fiber diets exhibit a higher fecal energy loss than individuals who consume low-fiber diets with an equivalent energy content (7, 8). Webb and Annis (9) studied stool energy loss in 4 lean and 4 obese individuals and showed a tendency to lower the fecal energy excretion in obese compared with lean study participants.
"""
And there's a good-enough answer if we do some math, along with links to more in-depth reading if we want them. A Mayo clinic calorie calculator suggests about 2250 Calories per day for me to maintain my weight. I think there's probably a lot of variation in that number, but 5% of it would be very roughly 100 Calories lost in poop per day, so maybe an extremely rough estimate for a range of humans might be 50-200 Calories per day. Interestingly, one of the AI slop pages I found asserted (without citation) 100-200 Calories per day, which kinda checks out. I had no way to trust that number though, and as we saw with the 4.91 kcal/gram figure, it might not have good provenance.
To double-check, I visited this link from the paragraph above: sciencedirect.com/science/arti
It's only a 6-person study, but just the abstract has numbers: ~250 kcal/day pooped on a low-fiber diet vs. ~400 kcal/day pooped on a high-fiber diet. That's with intakes of ~2100 and ~2350 kcal respectively, which is close to the number from which I estimated 100 kcal above, so maybe the first estimate from just the 5% number was a bit low.
Glad those numbers were in the abstract, since the full text is paywalled... It's possible this study was also done on some atypical patient group...
Just to come full circle, let's look at that 4.91 kcal/gram number again. A search suggests 14-16 ounces of poop per day is typical, with at least two sources around 14 ounces, or ~400 grams. (AI slop was strong here too, with one including a completely made up table of "studies" that was summarized as 100-200 grams/day). If we believe 400 grams/day of poop, then 4.91 kcal/gram would be almost 2000 kcal/day, which is very clearly ludicrous! So that number was likely some unrelated statistic regurgitated by the AI. I found that number in at least 3 of the slop pages I waded through in my initial search.
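For anyone who wants to redo the arithmetic in this thread, here it is as a few lines of Python. The inputs are just the rough estimates quoted above (Mayo clinic maintenance calories, the ~5% loss figure, the suspect 4.91 kcal/gram number, and ~400 grams/day of stool), not authoritative values:

```python
# Sanity-checking the numbers from the thread above.

daily_intake_kcal = 2250      # rough Mayo clinic maintenance estimate
fraction_lost = 0.05          # ~5% of ingested calories lost in stools & urine

loss_from_fraction = daily_intake_kcal * fraction_lost
print(loss_from_fraction)     # 112.5 kcal/day, i.e. the "very roughly 100" above

# The suspicious per-gram figure, times a typical daily stool mass:
kcal_per_gram = 4.91          # number quoted without good provenance
grams_per_day = 400           # ~14 ounces

print(kcal_per_gram * grams_per_day)  # 1964.0 kcal/day -- clearly ludicrous
```

The second result is the whole point: multiplying the two AI-sourced numbers together implies pooping out nearly an entire day's intake, which is how you catch a regurgitated, unrelated statistic.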

@chris@mstdn.chrisalemany.ca
2025-07-09 04:52:46

Toronto Star: “New numbers reveal 10,000-plus Ontario college layoffs, 600 programs cancelled or suspended over past year”
The bloodbath in our higher education system in Canada has gotten NO press even though provincial governments, and likely the federal one, KNOW. This is the first in-depth article I have seen. ALL governments micro-manage university and college policy/finances, no matter what they might say to the contrary.
Here are some truths you need to know:
Yes, I am biased as a 25 year employee of a University.
Yes, my University has also had completely unprecedented cuts in the past 12-24 months, with more coming.
Yes, it is because of the loss of International Students and their tuition revenue. Without that loss, many domestic enrollment numbers have actually been growing, but the money per student is orders of magnitude less. (ie. International was a cash cow)
Yes, faculty and even many admin, have been warning about the government downloading funding onto International tuitions for decades.
Yes, government will claim they are “investing more than ever”, but this is usually about Capital expenses (buildings, residences, infrastructure) or meeting contractual increases for staff salaries, *not* operating expenses.
Yes, in BC in the 1980s 80-90% of a University or College operating budget was covered by “base funding” from the province. Now, it is often below 50%. (If this makes you ask… is it still a “public University system”, please do!!)
And finally, yes, if we want to consider ourselves a modern country, we cannot possibly think this kind of contraction in educational opportunity (while domestic tuitions continue to increase!) is at all healthy for our society as a whole.
Toronto Star: #canpoli #cdnpoli #education #internationalEd #immigration #postsecondary #educationShouldBeFree

@tiotasram@kolektiva.social
2025-07-30 17:56:35

Just read this post by @… on an optimistic AGI future, and while it had some interesting and worthwhile ideas, it's also in my opinion dangerously misguided, and plays into the current AGI hype in a harmful way.
social.coop/@eloquence/1149406
My criticisms include:
- Current LLM technology has many layers, but the biggest most capable models are all tied to corporate datacenters and require inordinate amounts of energy and water to run. Trying to use these tools to bring about a post-scarcity economy will burn up the planet. We urgently need more-capable but also vastly more efficient AI technologies if we want to use AI for a post-scarcity economy, and we are *not* nearly on the verge of this despite what the big companies pushing LLMs want us to think.
- I can see that permacommons.org claims that its small level of spending on AI equates to a low climate impact. However, given the deep subsidies the big companies currently have in place to attract users, that isn't a great assumption. The fact that their FAQ dodges the question about which AI systems they use isn't a great look.
- These systems are not free in the same way that Wikipedia or open-source software is. To run your own model you need a data harvesting & cleaning operation that costs millions of dollars minimum, and then you need millions of dollars worth of storage & compute to train & host the models. Right now, big corporations are trying to compete for market share by heavily subsidizing these things, but if you go along with that, you become dependent on them, and you'll be screwed when they jack up the price to a profitable level later. I'd love to see open dataset initiatives and the like, and there are some of these things, but not enough yet, and many of the initiatives focus on one problem while ignoring others (fine for research but not the basis for a society yet).
- Between the environmental impacts, the horrible labor conditions and undercompensation of data workers who filter the big datasets, and the impacts of both AI scrapers and AI commons pollution, the developers of the most popular & effective LLMs have a lot to answer for. This project only really mentions environmental impacts, which makes me think that they're not serious about ethics, which in turn makes me distrustful of the whole enterprise.
- Their language also ends up encouraging AI use broadly while totally ignoring several entire classes of harm, so they're effectively contributing to AI hype, especially with such casual talk of AGI and robotics as if embodied AGI were just around the corner. To be clear about this point: we are several breakthroughs away from AGI under the most optimistic assumptions, and giving the impression that those will happen soon plays directly into the hands of the Sam Altmans of the world who are trying to make money off the impression of impending huge advances in AI capabilities. Adding to the AI hype is irresponsible.
- I've got a more philosophical criticism that I'll post about separately.
I do think that the idea of using AI & other software tools, possibly along with robotics and funded by many local cooperatives, in order to make businesses obsolete before they can do the same to all workers, is a good one. Get your local library to buy a knitting machine alongside their 3D printer.
Lately I've felt too busy criticizing AI to really sit down and think about what I do want the future to look like, even though I'm a big proponent of positive visions for the future as a force multiplier for criticism, and this article is inspiring to me in that regard, even if the specific project doesn't seem like a good one.

@unchartedworlds@scicomm.xyz
2025-09-03 20:36:36
Content warning: Getting into playing music as an adult ...?

I'm learning Clean Interviewing - skills of not letting your own biases shape the other person's answers - and I want to do an assessment thing to get a qualification in it, which means I need a few people to "practise on"!
Would anyone like to volunteer to be interviewed over Zoom for 20 mins or so? about their musical journey as an adult, or current wish for that?
I could pick _any_ topic, but I thought this would be a good theme to go with, because creating adult-learner music groups, or just encouraging people to have a go and enjoy it, are things I'm planning to do more of! So your thoughts and experiences along the way could feed in to better support for other people on similar paths :-)
For example,
•you could be just now resolving "I want to be playing music"
•you could've recently acquired an instrument or dusted one off, or joined a group or started looking into possibilities
•it could be you're playing regularly now.
Doesn't matter what kind of music!
And you could be starting fresh with pretty much no experience yet, or you could be coming back to music a bit "rusty" after leaving off in childhood.
(Or maybe you _did_ get into music as an adult, a while ago, and you'd be happy to think back about that. Or maybe you're someone who's supported _other_ people to get into music.)
Time zone considerations: I'm in England, so people on the America/Canada side of the world would probably need to be available in a morning or early afternoon.
I'd like to find at least one or two people who wouldn't mind their interview being recorded, so that I can pick one or more of the recordings to use for being assessed for the qualification. This would only be seen/heard by me and the people reviewing it - who'd be interested primarily in my interviewing-skills, rather than your actual answers :-)
Or, if you don't want to be recorded, I'd still potentially be up for one or two unrecorded ones, just as practice and for the interest of the topic.
Let me know if you might be up for it, or feel free to pass the info on to a friend!
Boosts appreciated :-)
#music #learning #AdultLearners #AskFedi

@tiotasram@kolektiva.social
2025-07-22 00:03:45

Overly academic/distanced ethical discussions
Had a weird interaction with @/brainwane@social.coop just now. I misinterpreted one of their posts quoting someone else, and I think the combination of that plus an interaction pattern where I'd assume their stance on something and respond critically to that, ended up with me getting blocked. I don't have hard feelings exactly, and this post is only partly about this particular person, but I noticed something interesting by the end of the conversation that had been bothering me. They repeatedly criticized me for assuming what their position was, but never actually stated their position. They didn't say: "I'm bothered you assumed my position was X, it's actually Y." They just said "I'm bothered you assumed my position was X, please don't assume my position!" I get that it's annoying to have people respond to a straw man version of your argument, but when I in response asked some direct questions about what their position was, they gave some non-answers and then blocked me. It's entirely possible it's a coincidence, and they just happened to run out of patience on that iteration, but it makes me take their critique of my interactions a bit less seriously. I suspect that they just didn't want to hear what I was saying, while at the same time they wanted to feel as if they were someone who values public critique and open discussion of tricky issues (if anyone reading this post also followed our interaction and has a different opinion of my behavior, I'd be glad to hear it; it's possible I'm effectively being an asshole here and it would be useful to hear that if so).
In any case, the fact that at the end of the entire discussion, I'm realizing I still don't actually know their position on whether they think the AI use case in question is worthwhile feels odd. They praised the system on several occasions, albeit noting some drawbacks while doing so. They said that the system was possibly changing their anti-AI stance, but then got mad at me for assuming this meant that they thought this use-case was justified. Maybe they just haven't made up their mind yet but didn't want to say that?
Interestingly, in one of their own blog posts that got linked in the discussion, they discuss a different AI system, and despite listing a bunch of concrete harms, conclude that it's okay to use it. That's fine; I don't think *every* use of AI is wrong on balance, but what bothered me was that their post dismissed a number of real ethical issues by saying essentially "I haven't seen calls for a boycott over this issue, so it's not a reason to stop use." That's an extremely socially conformist version of ethics that doesn't sit well with me. The discussion also ended up linking this post: chelseatroy.com/2024/08/28/doe which bothered me in a related way. In it, Troy describes classroom teaching techniques for introducing and helping students explore the ethics of AI, and they seem mostly great. They avoid prescribing any particular correct stance, which is important when teaching given the power relationship, and they help students understand the limitations of their perspectives regarding global impacts, which is great. But the overall conclusion of the post is that "nobody is qualified to really judge global impacts, so we should focus on ways to improve outcomes instead of trying to judge them." This bothers me because we actually do have a responsibility to make decisive ethical judgments despite limitations of our perspectives. If we never commit to any ethical judgment against a technology because we think our perspective is too limited to know the true impacts (which I'll concede it invariably is) then we'll have to accept every technology without objection, limiting ourselves to trying to improve their impacts without opposing them. Given who currently controls most of the resources that go into exploration for new technologies, this stance is too permissive. Perhaps if our objection to a technology was absolute and instantly effective, I'd buy the argument that objecting without a deep global view of the long-term risks is dangerous. 
As things stand, I think that objecting to the development/use of certain technologies in certain contexts is necessary, and although there's a lot of uncertainty, I expect strongly enough that the overall outcomes of objection will be positive that I think it's a good thing to do.
The deeper point here I guess is that this kind of "things are too complicated, let's have a nuanced discussion where we don't come to any conclusions because we see a lot of unknowns along with definite harms" really bothers me.

@tiotasram@kolektiva.social
2025-07-31 16:25:48

LLM coding is the opposite of DRY
An important principle in software engineering is DRY: Don't Repeat Yourself. We recognize that having the same code copied in more than one place is bad for several reasons:
1. It makes the entire codebase harder to read.
2. It increases maintenance burden, since any problems in the duplicated code need to be solved in more than one place.
3. Because it becomes possible for the copies to drift apart if changes to one aren't transferred to the other (maybe the person making the change has forgotten there was a copy) it makes the code more error-prone and harder to debug.
All modern programming languages make it almost entirely unnecessary to repeat code: we can move the repeated code into a "function" or "module" and then reference it from all the different places it's needed. At a larger scale, someone might write an open-source "library" of such functions or modules and instead of re-implementing that functionality ourselves, we can use their code, with an acknowledgement. Using another person's library this way is complicated, because now you're dependent on them: if they stop maintaining it or introduce bugs, you've inherited a problem, but still, you could always copy their project and maintain your own version, and it would be not much more work than if you had implemented stuff yourself from the start. It's a little more complicated than this, but the basic principle holds, and it's a foundational one for software development in general and the open-source movement in particular. The network of "citations" as open-source software builds on other open-source software and people contribute patches to each others' projects is a lot of what makes the movement into a community, and it can lead to collaborations that drive further development. So the DRY principle is important at both small and large scales.
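To make the function-extraction point concrete, here's a minimal, hypothetical illustration (the names and validation rule are invented for the example, not taken from any real codebase):

```python
# WET ("write everything twice") version: the same validation logic
# is copy-pasted into both functions. A bug fix applied to one copy
# can silently miss the other.
def register_user_wet(name):
    if not name or len(name) > 40:
        raise ValueError("bad name")
    return {"user": name}

def rename_user_wet(user, new_name):
    if not new_name or len(new_name) > 40:
        raise ValueError("bad name")
    user["user"] = new_name
    return user

# DRY version: the shared rule lives in one function that both call
# sites reference, so any change happens in exactly one place.
def validate_name(name):
    if not name or len(name) > 40:
        raise ValueError("bad name")

def register_user(name):
    validate_name(name)
    return {"user": name}

def rename_user(user, new_name):
    validate_name(new_name)
    user["user"] = new_name
    return user

print(rename_user(register_user("Ada"), "Grace"))  # {'user': 'Grace'}
```

The same move scales up: publish `validate_name` (and its friends) as a library, and other projects can cite and reuse it instead of re-implementing it.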
Unfortunately, the current crop of hyped-up LLM coding systems from the big players are antithetical to DRY at all scales:
- At the library scale, they train on open source software but then (with some unknown frequency) replicate parts of it line-for-line *without* any citation [1]. The person who was using the LLM has no way of knowing that this happened, or even any way to check for it. In theory the LLM company could build a system for this, but it's not likely to be profitable unless the courts actually start punishing these license violations, which doesn't seem likely based on results so far and the difficulty of finding out that the violations are happening. By creating these copies (and also mash-ups, along with lots of less-problematic stuff), the LLM users (enabled and encouraged by the LLM-peddlers) are directly undermining the DRY principle. If we see what the big AI companies claim to want, which is a massive shift towards machine-authored code, DRY at the library scale will effectively be dead, with each new project simply re-implementing the functionality it needs instead of ever using a library. This might seem to have some upside, since dependency hell is a thing, but the downside in terms of comprehensibility and therefore maintainability, correctness, and security will be massive. The eventual lack of new high-quality DRY-respecting code to train the models on will only make this problem worse.
- At the module & function level, AI is probably prone to re-writing rather than re-using the functions it needs, especially with a workflow where a human prompts it for many independent completions. This part I don't have direct evidence for, since I don't use LLM coding models myself except in very specific circumstances because it's not generally ethical to do so. I do know that when it tries to call existing functions, it often guesses incorrectly about the parameters they need, which I'm sure is a headache and source of bugs for the vibe coders out there. An AI could be designed to take more context into account and use existing lookup tools to get accurate function signatures and use them when generating function calls, but even though that would probably significantly improve output quality, I suspect it's the kind of thing that would be seen as too baroque and thus not a priority. Would love to hear I'm wrong about any of this, but I suspect the consequences are that any medium-or-larger sized codebase written with LLM tools will have significant bloat from duplicate functionality, and will have places where better use of existing libraries would have made the code simpler. At a fundamental level, a principle like DRY is not something that current LLM training techniques are able to learn, and while they can imitate it from their training sets to some degree when asked for large amounts of code, when prompted for many smaller chunks, they're asymptotically likely to violate it.
I think this is an important critique in part because it cuts against the argument that "LLMs are the modern compilers; if you reject them you're just like the people who wanted to keep hand-writing assembly code, and you'll be just as obsolete." Compilers actually represented a great win for abstraction, encapsulation, and DRY in general, and they supported and are integral to open source development, whereas LLMs are set to do the opposite.
[1] To see what this looks like in action in prose, see the example on page 30 of the NYTimes copyright complaint against OpenAI.
#AI #GenAI #LLMs #VibeCoding