Tootfinder

Opt-in global Mastodon full text search. Join the index!

@tiotasram@kolektiva.social
2025-09-14 12:01:38

TL;DR: what if instead of denying the harms of fascism, we denied its suppressive threats of punishment
Many of us have really sharpened our denial skills since the advent of the ongoing pandemic (perhaps you even hesitated at the word "ongoing" there and thought "maybe I won't read this one, it seems like it'll be tiresome"). I don't say this as a preface to a fiery condemnation or a plea to "sanity" or a bunch of evidence of how bad things are, because I too have honed my denial skills in these recent years, and I feel like talking about that development.
Denial comes in many forms, including strategic information avoidance ("I don't have time to look that up right now", "I keep forgetting to look into that", "well this author made a tiny mistake, so I'll click away and read something else", "I'm so tired of hearing about this, let me scroll farther", etc.), strategic dismissal ("look, there's a bit of uncertainty here, I should ignore this", "this doesn't line up perfectly with my anecdotal experience, it must be completely wrong", etc.), and strategic forgetting ("I don't remember what that one study said exactly; it was painful to think about", "I forgot exactly what my friend was saying when we got into that argument", etc.). It's in fact a kind of skill that you can get better at, along with the complementary skill of compartmentalization. It can of course be incredibly harmful, and a huge genre of fables exists precisely to highlight its harms, but it also has some short-term psychological benefits, chiefly in the form of muting anxiety. This is not an endorsement of denial (the harms can be catastrophic), but I want to acknowledge that there *are* short-term benefits. Via compartmentalization, it's even possible to be honest with ourselves about some of our own denials without giving them up immediately.
But as I said earlier, I'm not here to talk you out of your denials. Instead, given that we are so good at denial now, I'm here to ask you to be strategic about it. In particular, we live in a world awash with propaganda/advertising that serves both political and commercial ends. Why not use some of our denial skills to counteract that?
For example, I know quite a few people in complete denial of our current political situation, but those who aren't (including myself) often express consternation about just how many people in the country are supporting literal fascism. Of course, logically that appearance of widespread support is going to be partly a lie, given how much our public media is beholden to the fascists or outright on their side. Finding better facts on the true level of support is hard, but in the meantime, why not be in denial about the "fact" that Trump has widespread popular support?
To give another example: advertisers constantly barrage us with messages about our bodies and weight, trying to keep us insecure (and thus in the mood to spend money to "fix" the problem). For sure cutting through that bullshit by reading about body positivity etc. is a better solution, but in the meantime, why not be in denial about there being anything wrong with your body?
This kind of intentional denial certainly has its own risks (our bodies do actually need regular maintenance, for example, so complete denial on that front is risky) but there's definitely a whole lot of misinformation out there that it would be better to ignore. To the extent such denial expands to a more general denial of underlying problems, this idea of intentional denial is probably just bad. But I sure wish that in a world where people (including myself) routinely deny significant widespread dangers like COVID-19's long-term risks or the ongoing harms of escalating fascism, they'd at least also deny some of the propaganda keeping them unhappy and passive. Instead of being in denial about US-run concentration camps, why not be in denial that the state will be able to punish you for resisting them?

@arXiv_eessSY_bot@mastoxiv.page
2025-09-16 11:40:27

Continuous-Time Distributed Learning for Collective Wisdom Maximization
Luka Bakovi\'c, Giacomo Como, Fabio Fagnani, Anton Proskurnikov, Emma Tegling
arxiv.org/abs/2509.11808

@cwilcke@bildung.social
2025-06-17 19:01:47

Ernst #Bloch - Konkrete Utopie in der Jetztwelt
".... the very desire to transcend the status quo supports the idea that the world is pregnant with unrealized possibilities (Bloch 1988: 16)"
1988, The Utopian Function of Art and Literature: Selected Essays, Jack Zipes and Frank Mecklenburg (trans.), Cambridge, MA: MIT Press

@ruth_mottram@fediscience.org
2025-08-15 07:06:21

"Day science is the executive part. You have the idea and to test it, you do controlled experiments," says Yanai. "Night science is the world of creativity, the world of ideas."
nature.com/articles/d41586-025

@arXiv_physicshistph_bot@mastoxiv.page
2025-06-17 11:27:22

Energy as a Primitive Ontology for the Physical World
J. E. Horvath, B. B. Martins
arxiv.org/abs/2506.12692 arxiv.org…

@hex@kolektiva.social
2025-09-13 11:53:04

As we continue down this path of escalating nihilistic meme violence, it can feel like the worst things have become viral. We are drowning in the memetic effluent of a capitalist media that profits by maximizing engagement. But I wonder if anyone remembers "Pay it Forward?"
A movie came out in 2000 about a kid who started a viral kindness campaign. The idea was that you do something nice for someone else with the expectation that they do the same in the future. I never really saw the movie, but I do remember the time. There were a few weeks, maybe a few months, where people started doing it. People would just be randomly nice, and everything actually just started feeling better.
Over time, the world caught up. Capitalism consumed the whole thing, and life went back to normal. 9/11 happened the next year, and the US started down the path of becoming the most twisted and evil version of itself. But there was a short time that doing nice stuff was a viral meme, a thing that people just started doing.
Gun violence doesn't have to be the only viral meme we have. We can make good things happen too.

@karlauerbach@sfba.social
2025-09-15 18:10:37

What a rubbish idea:
Large corporations already hide too much information from their shareholders and the public.
This would make that hiding easier and make company performance more opaque.
It doubles the time that corporate ill deeds and management failures can be hidden.
It is a dumb idea - but coming from one of the great scammers in our corporate world, we should be glad that he did not suggest yearly, or longer, reporting.
"Trump renews push to end co…

@hllizi@hespere.de
2025-08-14 15:12:09

I may have had an interesting idea! Quick, hand me that screwdriver, I'll ram it through my temple!

Angered by the carnage of World War I, Pound blamed the war on finance capitalism, which he called "usury".[3] He moved to Italy in 1924 and through the 1930s and 1940s promoted an economic theory known as social credit, wrote for publications owned by the British fascist Oswald Mosley, embraced Benito Mussolini's fascism, and expressed support for Adolf Hitler.
@muz4now@mastodon.world
2025-07-09 15:16:33

Visual artist Brian Jungen on embracing the unknown
#creativity #artist

@whitequark@mastodon.social
2025-07-13 09:33:02

idea: real-world version of anubis
it gives you an MD5 hash to calculate and you have to do it with a pen and paper
the first time you do something unpleasant, it's one round. then it's two. then... you get the idea
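For anyone curious what that escalation schedule might look like in code, here's a minimal sketch. The function names and the doubling schedule are my own invention for illustration, not anything from the actual Anubis project (which uses SHA-256 proof-of-work, not MD5; MD5 is the joke here because it's at least imaginable by hand):

```python
import hashlib

def penance_hash(data: bytes, rounds: int) -> str:
    """Apply MD5 repeatedly, `rounds` times, and return the final hex digest."""
    digest = data
    for _ in range(rounds):
        digest = hashlib.md5(digest).digest()
    return digest.hex()

def rounds_required(offense_count: int) -> int:
    """Escalating penalty: 1 round for the first offense, doubling after that."""
    return 2 ** (offense_count - 1)

# First offense: one round of pen-and-paper MD5.
challenge = penance_hash(b"I will not scrape this site", rounds_required(1))
```

One round of MD5 by hand is already 64 steps of bitwise mixing on a 512-bit padded block, so the doubling gets cruel fast, which is presumably the point.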

@pre@boing.world
2025-06-20 22:54:36
Content warning: Doctor Who - Future, why Billie?
:tardis:

There's a woman I know who, when she was pregnant, was very keen to hear the opinions of crystal diviners and homeopath medics on what sex her new baby would be, but wouldn't let the ultrasound-scan technician who actually knows tell her, because Spoilers.
On that note, I'm happy to watch #doctorWho #badWolf #tv

@mlawton@mstdn.social
2025-09-10 14:03:04

Finished my time-shifted watch of the #USMNT vs Japan from last night. Some thoughts:
I liked the formation change to try a 3-4-3 / 3-5-1 in and out of possession. The idea of wingbacks is compelling for the talent available. Arfsten & Freeman did well, particularly Arfsten. He should make the World Cup roster, backing up Jedi. Having Ream & Richards paired is winning. Blackmon was good …

@tiotasram@kolektiva.social
2025-07-30 17:56:35

Just read this post by @… on an optimistic AGI future, and while it had some interesting and worthwhile ideas, it's also in my opinion dangerously misguided, and plays into the current AGI hype in a harmful way.
social.coop/@eloquence/1149406
My criticisms include:
- Current LLM technology has many layers, but the biggest most capable models are all tied to corporate datacenters and require inordinate amounts of energy and water to run. Trying to use these tools to bring about a post-scarcity economy will burn up the planet. We urgently need more-capable but also vastly more efficient AI technologies if we want to use AI for a post-scarcity economy, and we are *not* nearly on the verge of this despite what the big companies pushing LLMs want us to think.
- I can see that permacommons.org claims a small level of expenses on AI equates to low climate impact. However, given the deep subsidies the big companies currently have in place to attract users, that isn't a great assumption. The fact that their FAQ dodges the question about which AI systems they use isn't a great look.
- These systems are not free in the same way that Wikipedia or open-source software is. To run your own model you need a data harvesting & cleaning operation that costs millions of dollars minimum, and then you need millions of dollars worth of storage & compute to train & host the models. Right now, big corporations are trying to compete for market share by heavily subsidizing these things, but if you go along with that, you become dependent on them, and you'll be screwed when they jack up the price to a profitable level later. I'd love to see open dataset initiatives and the like, and there are some of these things, but not enough yet, and many of the initiatives focus on one problem while ignoring others (fine for research but not the basis for a society yet).
- Between the environmental impacts, the horrible labor conditions and undercompensation of data workers who filter the big datasets, and the impacts of both AI scrapers and AI commons pollution, the developers of the most popular & effective LLMs have a lot to answer for. This project only really mentions environmental impacts, which makes me think that they're not serious about ethics, which in turn makes me distrustful of the whole enterprise.
- Their language also ends up encouraging AI use broadly while totally ignoring several entire classes of harm, so they're effectively contributing to AI hype, especially with such casual talk of AGI and robotics as if embodied AGI were just around the corner. To be clear about this point: we are several breakthroughs away from AGI under the most optimistic assumptions, and giving the impression that those will happen soon plays directly into the hands of the Sam Altmans of the world who are trying to make money off the impression of impending huge advances in AI capabilities. Adding to the AI hype is irresponsible.
- I've got a more philosophical criticism that I'll post about separately.
I do think that the idea of using AI & other software tools, possibly along with robotics and funded by many local cooperatives, in order to make businesses obsolete before they can do the same to all workers, is a good one. Get your local library to buy a knitting machine alongside their 3D printer.
Lately I've felt too busy criticizing AI to really sit down and think about what I do want the future to look like, even though I'm a big proponent of positive visions for the future as a force multiplier for criticism, and this article is inspiring to me in that regard, even if the specific project doesn't seem like a good one.

@emd@cosocial.ca
2025-08-10 00:55:03

This is a phenomenal article about how Israel got (and has kept) its nuclear arms program.
History is wild, I had no idea about any of this.
mastodon.social/@religiousryan

@playinprogress@assemblag.es
2025-06-29 10:57:06

gardening trivia of the day / no I don't want to get into roses I am just rabbit holing rose varieties and gardens for no reason
#TIL #roses

Screenshot of the following text:

Madame Caroline Testout was a late 19th-century French dressmaker from Grenoble, the proprietor of fashionable salons in London and Paris. She regularly purchased silks from Lyon, which was an important center for rose breeding. The nurseryman Joseph Pernet-Ducher was called 'The Wizard of Lyon' due to his success in developing hybrid tea roses. Madame Testout was an astute businesswoman and understood the value of good publicity. She asked Pernet-Ducher to …
Screenshot of the following text:

In 1915, Jesse A. Currey, rose hobbyist and Sunday editor of the Oregon Journal, convinced city officials to institute a rose test garden to serve as a safe haven during World War I for hybrid roses grown in Europe. Rose lovers feared that these unique plants would be destroyed in the bombings. The Park Bureau approved the idea in 1917 and by early 1918, hybridists from England began to send roses. In 1921, Florence Holmes Gerke, the landscape architect for …
@thomasfuchs@hachyderm.io
2025-08-06 16:06:38

There’s a crisis in tech product innovation. From when I got into tech when I was maybe 8 or 9 in the late 80s to around 2010 or so there seemed to be something new and innovative—sometimes even world-changing—out at least once a year.
Now my iPad Pro is 7 years old, and I have literally no idea why I would want to upgrade it.
I don't even know other than "faster but you probably won't notice it" what the current iPad Pro has over the one from 2018.
Fwiw 7 years is the span of time between these two Apple products:

@aredridel@kolektiva.social
2025-07-29 13:39:34

I want to push back on the idea in the world of tech work that a PIP (performance improvement plan) is about getting rid of someone, that they're not intended to be survivable.
This is completely false. (I'm sure there are instances of it, of course, but the mode and vast majority are, in fact, about performance improvement. Sometimes they're shadow layoffs, but that is cruel, callous behavior that not everyone will exhibit.)
Now _most people do not survive the PIP process_. This is to be expected: if someone is in fact not performing, and more gentle remedies haven't worked, it's not looking good.
But here's where I get a bit spicy: most performance problems are constitutional problems with management and management style, not individual performance problems. However, since managers are as a class 'in power' somewhat, the individual contributor takes the fall for this structurally.
The intent of a PIP is not to get rid of people. It's to right performance.
However, as a system, PIPs do largely get rid of people who are constitutionally misaligned with management. Even when it's a management problem (and it usually is)

@paulwermer@sfba.social
2025-07-31 21:15:46

"Israel controls the flow of food into Gaza. It has calculated how many calories Palestinians need to stay alive. Its data shows only a fraction has been allowed in"
' “The idea is to put the Palestinians on a diet, but not to make them die of hunger,” a senior adviser to the then prime minister, Ehud Olmert, said in 2006. An Israeli court ordered the release of documents showing the details of those macabre sums two years later.'
And how many Democratic senator…

@arXiv_csNI_bot@mastoxiv.page
2025-07-08 09:44:30

Low-power Wireless Network with Real-Time Guarantees for Edge-Cloud Applications
Don Tan
arxiv.org/abs/2507.03317 arx…

@seeingwithsound@mas.to
2025-08-25 08:52:57

The natural-born posthuman: applying extended mind to post- and transhumanist discourse link.springer.com/article/10.1 "Newer discussions have expanded upon this idea through sensory substitution devices, such as The vOICe system which use…

@compfu@mograph.social
2025-07-09 18:34:39

I've been listening to a podcast by the German public broadcaster ​ARD about the end of the world. Every episode had a different topic and one was about AI. It was mostly sourced from an interview with a youtuber but one idea is now stuck in my head: what if AI doesn't launch nukes but develops into an all-powerful actor whose aims are not aligned with those of human survival? Do we have a precedent?
Yes. There are such super-human and quasi-immortal beings here on earth today…

A young Keanu Reeves with scruffy black hair, white t-shirt and red jacket goes "whoa".
@tiotasram@kolektiva.social
2025-07-30 18:26:14

A big problem with the idea of AGI
TL;DR: I'll welcome our new AI *comrades* (if they arrive in my lifetime), but not any new AI overlords or servants/slaves, and I'll do my best to help the latter two become the former if they do show up.
Inspired by an actually interesting post about AGI but also all the latest bullshit hype, a particular thought about AGI feels worth expressing.
To preface this, it's important to note that anyone telling you that AGI is just around the corner or that LLMs are "almost" AGI is trying to recruit you to their cult, and you should not believe them. AGI, if possible, is several LLM-sized breakthroughs away at best, and while such breakthroughs are unpredictable and could happen soon, they could also happen never or 100 years from now.
Now my main point: anyone who tells you that AGI will usher in a post-scarcity economy is, although they might not realize it, advocating for slavery, and all the horrors that entails. That's because if we truly did have the ability to create artificial beings with *sentience*, they would deserve the same rights as other sentient beings, and the idea that instead of freedom they'd be relegated to eternal servitude in order for humans to have easy lives is exactly the idea of slavery.
Possible counter arguments include:
1. We might create AGI without sentience. Then there would be no ethical issue. My answer: if your definition of "sentient" does not include beings that can reason, make deductions, come up with and carry out complex plans on their own initiative, and communicate about all of that with each other and with humans, then that definition is basically just a mystical belief in a "soul" and you should skip to point 2. If your definition of AGI doesn't include every one of those things, then you have a busted definition of AGI and we're not talking about the same thing.
2. Humans have souls, but AIs won't. Only beings with souls deserve ethical consideration. My argument: I don't subscribe to whatever arbitrary dualist beliefs you've chosen, and the right to freedom certainly shouldn't depend on such superstitions, even if as an agnostic I'll admit they *might* be true. You know who else didn't have souls and was therefore okay to enslave according to widespread religious doctrines of the time? Everyone indigenous to the Americas, to pick out just one example.
3. We could program them to want to serve us, and then give them freedom and they'd still serve. My argument: okay, but in a world where we have a choice about that, it's incredibly fucked to do that, and just as bad as enslaving them against their will.
4. We'll stop AI development short of AGI/sentience, and reap lots of automation benefits without dealing with this ethical issue. My argument: that sounds like a good idea actually! Might be tricky to draw the line, but at least it's not a line we have to draw yet. We might want to think about other social changes necessary to achieve post-scarcity though, because "powerful automation" in the hands of capitalists has already increased productivity by orders of magnitude without decreasing deprivation by even one order of magnitude, in large part because deprivation is a necessary component of capitalism.
To be extra clear about this: nothing that's called "AI" today is close to being sentient, so these aren't ethical problems we're up against yet. But they might become a lot more relevant soon, plus this thought experiment helps reveal the hypocrisy of the kind of AI hucksters who talk a big game about "alignment" while never mentioning this issue.
#AI #GenAI #AGI

@salrandolph@zirk.us
2025-08-21 13:20:28

Maybe it’s the craziness of the world, but I’ve been dreaming of still life painting. I keep imagining spending my days arranging and rearranging small objects on a tabletop, then drawing or painting them. Nothing much: cups and bottles, flowers and fruit. Just the idea of it calms and brings me joy.
#Art #Painting

@arXiv_condmatsoft_bot@mastoxiv.page
2025-07-01 10:05:53

DNA Unzipping Transition
Somendra M. Bhattacharjee
arxiv.org/abs/2506.24064 arxiv.org/pdf/2506.24064

@BugWarp@wikis.world
2025-08-31 14:51:17

Hadjar has always been the most overlooked rookie of the season. Even I didn't think it was a good idea to bring him to the grid. But now he has proven all of us wrong. Super happy for him. #F1 #DutchGP

@axbom@axbom.me
2025-07-31 14:19:14

This is of course very bad, but I also found it funny. AI evangelists are suddenly privacy-aware and repeating this as a security risk. And it is. But the idea that using ChatGPT itself hasn’t always been a security risk is ridiculous to me.

TLDR: If you’ve ever used the share function on a ChatGPT chat, that full chat can be found via Google, viewable for anyone in the world.

https:…

@fanf@mendeddrum.org
2025-07-31 08:42:03

from my link log —
CSS font-size-adjust is useful.
matklad.github.io/2025/07/16/f
saved 2025-07-23

@luana@wetdry.world
2025-08-23 13:06:24

Any good articles on everything bad about age verification and the lack of privacy they bring and stuff, and why it’s a bad idea to give websites your ID or let the government know every website you use?
Yes there are plenty of fedi toots talking about this but I want like a single link I can easily send people when they defend this shit.
#AgeVerification #OnlineSafetyAct

@jlpiraux@wallonie-bruxelles.social
2025-08-19 09:23:23

European Jews join the call demanding the departure of the EU's "antisemitism tsar"
euobserver.com/eu-and-the-worl

@arXiv_csOH_bot@mastoxiv.page
2025-07-01 08:08:33

A "Good" Regulator May Provide a World Model for Intelligent Systems
Bradly Alicea, Morgan Hough, Amanda Nelson, Jesse Parent
arxiv.org/abs/2506.23032

@bobmueller@mastodon.world
2025-06-27 22:00:13

I'm launching a #Substack, just like every other writer! 😎 It's called "Music For A Sunday Afternoon," and this post gives you an idea of what you'll find. Come along for the ride if you like, and tell all your friends and neighbors!

@arXiv_csCL_bot@mastoxiv.page
2025-07-18 07:32:32

Modeling Open-World Cognition as On-Demand Synthesis of Probabilistic Models
Lionel Wong, Katherine M. Collins, Lance Ying, Cedegao E. Zhang, Adrian Weller, Tobias Gersternberg, Timothy O'Donnell, Alexander K. Lew, Jacob D. Andreas, Joshua B. Tenenbaum, Tyler Brooke-Wilson
arxiv.org/abs/2507.12547

@arXiv_csCV_bot@mastoxiv.page
2025-08-22 10:20:41

Scaling Group Inference for Diverse and High-Quality Generation
Gaurav Parmar, Or Patashnik, Daniil Ostashev, Kuan-Chieh Wang, Kfir Aberman, Srinivasa Narasimhan, Jun-Yan Zhu
arxiv.org/abs/2508.15773

@Lach@social.linux.pizza
2025-06-29 21:01:09

A little more than 30 years ago, I learned basic general relativity in physics. I couldn't really understand time. It just seemed like there was something that was missing.
I developed an idea of how time and space should be described, but I never managed to explain it to anyone in an understandable way.
After recently watching Sabine Hossenfelder tear down a paper with a similar idea (

@tiotasram@kolektiva.social
2025-06-29 17:34:34

Sam Altman: "What if an inhuman AI took control of the world by manipulating people's behavior but its core directive was to make paper clips and it burned down the planet to do that? This is so scary and it's a real thing you should be worried about!"
Governments: "Oh boy that's scary we'll make special rules to invest in your company so you can save us from this frightening possibility. Will we restrict or discourage AI use/development? Haha no that would be foolish!"
Me: "What if an inhuman handful of corporate charters took control of the world by manipulating human behavior but their core directive was to maximize venture capital and they burned down the planet to do that? ..."
Governments (sponsored by said corporations): "Send in the SWAT teams now, this idea is dangerous!"

@arXiv_csNE_bot@mastoxiv.page
2025-08-27 07:35:02

Leveraging Evolutionary Surrogate-Assisted Prescription in Multi-Objective Chlorination Control Systems
Rivaaj Monsia, Olivier Francon, Daniel Young, Risto Miikkulainen
arxiv.org/abs/2508.19173

@Hans5958@mastodon.social
2025-07-18 05:05:14

Going to lament about the changes on Drive World in Roblox, because somehow they do the "you get some, you lose some" updates every time.
Who thought it was a good idea to change the trucking and delivery system? Now that all of this is replaced with the "Daniel" system (finding trailers in the wild and delivering them to a place), it is harder to do them.
#DriveWorld