Tootfinder

Opt-in global Mastodon full text search. Join the index!

@ruth_mottram@fediscience.org
2025-09-18 17:31:40

The Danish independent media organisation @… are on a member drive with 10 promises if they make it to 50,000, including that they'll share more on social media. I can dream they'll make it to the #fediverse I suppose?
In any case their journalism is outstanding and they carefully avoid taking an explicit political stance or contributing to polarisation. #UnbreakingNews
In-depth journalism you can read or listen to (try DeepL if you don't speak Danish), often beautifully produced, always fascinating. They also don't report on problems without reporting solutions.
It's by far the most uplifting read/listen of my day.
Free link here, pay what you want and free to first time voters...
zetland.dk/a/rmottram?og=amba2

@yaxu@post.lurk.org
2025-08-07 21:43:27

If you get an invite to this generative art software engineering call, note that if you submit something and it gets accepted, as far as I can tell it would cost you $3000 in open access fees... unless you want it to languish behind a paywall (you'd then only be allowed to share an unedited draft, and even then would have to advertise the paywall on it). They don't seem to want to make this clear in their call.

@tiotasram@kolektiva.social
2025-07-30 17:56:35

Just read this post by @… on an optimistic AGI future, and while it had some interesting and worthwhile ideas, it's also in my opinion dangerously misguided, and plays into the current AGI hype in a harmful way.
social.coop/@eloquence/1149406
My criticisms include:
- Current LLM technology has many layers, but the biggest, most capable models are all tied to corporate datacenters and require inordinate amounts of energy and water to run. Trying to use these tools to bring about a post-scarcity economy will burn up the planet. We urgently need more-capable but also vastly more efficient AI technologies if we want to use AI for a post-scarcity economy, and we are *not* nearly on the verge of this despite what the big companies pushing LLMs want us to think.
- I can see that permacommons.org claims its small level of spending on AI equates to low climate impact. However, given the deep subsidies currently in place from the big companies to attract users, that isn't a great assumption. The fact that their FAQ dodges the question about which AI systems they use isn't a great look.
- These systems are not free in the same way that Wikipedia or open-source software is. To run your own model you need a data harvesting & cleaning operation that costs millions of dollars minimum, and then you need millions of dollars worth of storage & compute to train & host the models. Right now, big corporations are trying to compete for market share by heavily subsidizing these things, but if you go along with that, you become dependent on them, and you'll be screwed when they jack up the price to a profitable level later. I'd love to see open dataset initiatives and the like, and there are some of these things, but not enough yet, and many of the initiatives focus on one problem while ignoring others (fine for research but not the basis for a society yet).
- Between the environmental impacts, the horrible labor conditions and undercompensation of data workers who filter the big datasets, and the impacts of both AI scrapers and AI commons pollution, the developers of the most popular & effective LLMs have a lot to answer for. This project only really mentions environmental impacts, which makes me think that they're not serious about ethics, which in turn makes me distrustful of the whole enterprise.
- Their language also ends up encouraging AI use broadly while totally ignoring several entire classes of harm, so they're effectively contributing to AI hype, especially with such casual talk of AGI and robotics as if embodied AGI were just around the corner. To be clear about this point: we are several breakthroughs away from AGI under the most optimistic assumptions, and giving the impression that those will happen soon plays directly into the hands of the Sam Altmans of the world who are trying to make money off the impression of impending huge advances in AI capabilities. Adding to the AI hype is irresponsible.
- I've got a more philosophical criticism that I'll post about separately.
I do think that the idea of using AI & other software tools, possibly along with robotics and funded by many local cooperatives, in order to make businesses obsolete before they can do the same to all workers, is a good one. Get your local library to buy a knitting machine alongside their 3D printer.
Lately I've felt too busy criticizing AI to really sit down and think about what I do want the future to look like, even though I'm a big proponent of positive visions for the future as a force multiplier for criticism, and this article is inspiring to me in that regard, even if the specific project doesn't seem like a good one.

@Sustainable2050@mastodon.energy
2025-09-04 05:23:58

Great one-man effort (in his spare time) by my former colleague Maarten Staats! He went where no organisation went before ;)
Still quite tough to find the information underneath - crossing language and other barriers - but by far the best overview of Europe's grid capacity issues there is.
Rightfully getting a lot of traction!

@ErikJonker@mastodon.social
2025-10-13 14:15:31

And more to read in your free time, if you are interested in AI, from Bluesky, by Phillip Isola.
"Over the past year, my lab has been working on fleshing out the theory and applications of the Platonic Representation Hypothesis.
Today I want to share two new works on this topic:"
Eliciting higher alignment:
arxiv.org/…

@geant@mstdn.social
2025-10-16 12:06:46

The first step on the road to #TNC26 starts today… with the opening of the Call for Proposals!
Next year, our community will gather in Helsinki 🇫🇮 under the theme Digital Sisu—inspired by the Finnish word “sisu,” which embodies inner strength, tenacity, and determination in the face of adversity.
💡 If you have a story to share, or a vision that can shape the conversation—we want to hear f…

TNC26 Call for Proposals | Helsinki, Finland | 8–12 June 2026 | Hosted by CSC - IT Center for Science

@azonenberg@ioc.exchange
2025-09-11 00:44:18

Periodic reminder to new followers or anyone just swinging by: I have high speed digital design / SI experience and lots of nice lab toys. Most folks can't afford this kind of stuff so I try to share the love.
If your project is open source or generally noncommercial / hobby in nature, there's NO CHARGE for a quick design review or some basic lab measurements. If you send me a board for characterization and want it back, all I ask is that you send me a prepaid label or reimburs…

@ethanwhite@hachyderm.io
2025-10-08 12:52:24

If you're advertising (academic) jobs via email/Slack/Zulip/etc and want broad circulation it's really helpful to also have a link to an associated web page to make it easier for folks to share these positions on socials, in other Slacks/Zulips, etc. It doesn't need to be an official job ad, just a page on your website/blog that includes the same information.

@tiotasram@kolektiva.social
2025-09-13 12:42:44

Obesity & diet
I wouldn't normally share a positive story about the new diet drugs, because I've seen someone get obsessed with them who was at a perfectly acceptable weight *by majority standards* (surprise: every weight is in fact perfectly acceptable by *objective* standards, because every "weight-associated" health risk is its own danger that should be assessed *in individuals*). I think two almost-contradictory things:
1. In a society shuddering under the burden of metastasized fatmisia, there's a very real danger in promoting the new diet drugs because lots of people who really don't need them will be psychologically bullied into using them and suffer from the cost and/or side effects.
2. For many individuals under the assault of our society's fatmisia, "just ignore it" is not a sufficient response, and also for specific people for whom decreasing their weight can address *specific* health risks/conditions that they *want* to address that way, these drugs can be a useful tool.
I know @… to be a trustworthy & considerate person, so I think it's responsible to share this:
#Fat #Diet #Obesity

@avalon@jazztodon.com
2025-08-09 22:21:43

Elizabeth Cotten: Ontario Blues
For the guitarists out there, if you want to play this yourself, remember: you must hold the (otherwise normal) guitar upside down.
Yes, and nail all those ragtime licks and walking bass lines while you do. ☺️
youtube.com/watch?v=5G8CShJZz6

@dav@social.maleo.uk
2025-08-06 01:53:57

It’s opportunistic but I’m drunk so sod it: if you want to buy me a book for my birthday, here’s what I want to read: amazon.co.uk/hz/wishlist/ls/2A

@tiotasram@kolektiva.social
2025-07-28 10:41:42

How popular media gets love wrong
Had some thoughts in response to a post about loneliness on here. As the author emphasized, reassurances from people who got lucky are not terribly comforting to those who didn't, especially when the person who was lucky had structural factors in their favor that made their chances of success much higher than those of their audience. So: these are just my thoughts, and may not have any bearing on your life. I share them because my experience challenged a lot of the things I was taught to believe about love, and I think my current beliefs are both truer and would benefit others seeking companionship.
We're taught in many modern societies from an absurdly young age that love is not something under our control, and that dating should be a process of trying to kindle love with different people until we meet "the one" with whom it takes off. In the slightly-less-fairytale corners of modern popular media, we might find an admission that it's possible to influence love, feeding & tending the fire in better or worse ways. But it's still modeled as an uncontrollable force of nature, to be occasionally influenced but never tamed. I'll call this the "fire" model of love.
We're also taught (and non-boys are taught more stringently) a second contradictory model of love: that in a relationship, we need to both do things and be things in order to make our partner love us, and that if we don't, our partner's love for us will wither, and (especially if you're not a boy) it will be our fault. I'll call this the "appeal" model of love.
Now obviously both of these cannot be totally true at once, and plenty of popular media centers this contradiction, but there are really very few competing models on offer.
In my experience, however, it's possible to have "pre-meditated" love. In other words, to decide you want to love someone (or at least, try loving them), commit to that idea, and then actually wind up in love with them (and them with you, although obviously this second part is not directly under your control). I'll call this the "engineered" model of love.
Now, I don't think that the "fire" and "appeal" models of love are totally wrong, but I do feel their shortcomings often suggest poor & self-destructive relationship strategies. I do think the "fire" model is a decent model for *infatuation*, which is something a lot of popular media blur into love, and which drives many (but not all) of the feelings we normally associate with love (even as those feelings have other possible drivers too). I definitely experienced strong infatuation early on in my engineered relationship (ugh that sounds terrible but I'll stick with it; I promise no deception was involved). I continue to experience mild infatuation years later that waxes and wanes. It's not a stable foundation for a relationship but it can be a useful component of one (this at least popular media depicts often).
I'll continue these thoughts in a reply, but it might take a bit to get to it.
#relationships