Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@erc_bk@fosstodon.org
2025-05-29 14:02:20

Spark SQL pipe (|>) for Spark 4.0.0?!
issues.apache.org/jira/browse/
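The linked JIRA concerns SQL pipe syntax landing in Spark 4.0. As a sketch of what that chained style looks like (table and column names here are hypothetical, not from the post):

```sql
-- Pipe syntax: each |> applies an operator to the preceding result
FROM orders
|> WHERE amount > 100
|> AGGREGATE SUM(amount) AS total GROUP BY customer_id
|> ORDER BY total DESC
```

Each `|>` step reads top-to-bottom in execution order, in contrast to standard SQL's nested SELECT ... FROM ... WHERE ordering.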

@raiders@darktundra.xyz
2025-05-30 17:53:29

Tyreek Hill trade idea sends Dolphins superstar to new-look AFC squad sportingnews.com/us/nfl/las-ve

@samvarma@fosstodon.org
2025-05-30 00:40:04

This was a fascinating — as well as slightly uncomfortable — listen. Sobering to realize the left has quite a lot to learn before it will be capable of taking power again.
bbc.co.uk/sounds/play/m000y7sq

@soundclamp@mastodon.xyz
2025-07-29 21:18:29

#NowPlaying ❝I really didn't plan on bringing back #Roséwave. (Took a year off for ~reasons~.) But we got enough emails and DMs to make me realize: People really need something silly and E•MO•TION•AL right now. So I hope this gives you 6 hours of something else to feel.❞

“A sparkly sweet treat for a sparkly sweet mix.” A close-up photo of vanilla soft-serve with red, white, and blue sprinkles in a sugar cone. 📸 by Lars Gotrich/NPR
@tiotasram@kolektiva.social
2025-07-30 18:26:14

A big problem with the idea of AGI
TL;DR: I'll welcome our new AI *comrades* (if they arrive in my lifetime), but not any new AI overlords or servants/slaves, and I'll do my best to help the latter two become the former if they do show up.
Inspired by an actually interesting post about AGI but also all the latest bullshit hype, a particular thought about AGI feels worth expressing.
To preface this, it's important to note that anyone telling you that AGI is just around the corner or that LLMs are "almost" AGI is trying to recruit you to their cult, and you should not believe them. AGI, if possible, is several LLM-sized breakthroughs away at best, and while such breakthroughs are unpredictable and could happen soon, they could also happen never or 100 years from now.
Now my main point: anyone who tells you that AGI will usher in a post-scarcity economy is, although they might not realize it, advocating for slavery, and all the horrors that entails. That's because if we truly did have the ability to create artificial beings with *sentience*, they would deserve the same rights as other sentient beings, and the idea that instead of freedom they'd be relegated to eternal servitude in order for humans to have easy lives is exactly the idea of slavery.
Possible counterarguments include:
1. We might create AGI without sentience. Then there would be no ethical issue. My answer: if your definition of "sentient" does not include beings that can reason, make deductions, come up with and carry out complex plans on their own initiative, and communicate about all of that with each other and with humans, then that definition is basically just a mystical belief in a "soul" and you should skip to point 2. If your definition of AGI doesn't include every one of those things, then you have a busted definition of AGI and we're not talking about the same thing.
2. Humans have souls, but AIs won't. Only beings with souls deserve ethical consideration. My argument: I don't subscribe to whatever arbitrary dualist beliefs you've chosen, and the right to freedom certainly shouldn't depend on such superstitions, even if as an agnostic I'll admit they *might* be true. You know who else didn't have souls and was therefore okay to enslave according to widespread religious doctrines of the time? Everyone indigenous to the Americas, to pick out just one example.
3. We could program them to want to serve us, and then give them freedom and they'd still serve. My argument: okay, but in a world where we have a choice about that, it's incredibly fucked to do that, and just as bad as enslaving them against their will.
4. We'll stop AI development short of AGI/sentience, and reap lots of automation benefits without dealing with this ethical issue. My argument: that sounds like a good idea actually! Might be tricky to draw the line, but at least it's not a line we have to draw yet. We might want to think about other social changes necessary to achieve post-scarcity though, because "powerful automation" in the hands of capitalists has already increased productivity by orders of magnitude without decreasing deprivation by even one order of magnitude, in large part because deprivation is a necessary component of capitalism.
To be extra clear about this: nothing that's called "AI" today is close to being sentient, so these aren't ethical problems we're up against yet. But they might become a lot more relevant soon, plus this thought experiment helps reveal the hypocrisy of the kind of AI hucksters who talk a big game about "alignment" while never mentioning this issue.
#AI #GenAI #AGI

@arXiv_condmatsuprcon_bot@mastoxiv.page
2025-05-30 10:07:53

This arxiv.org/abs/2502.08915 has been replaced.
initial toot: mastoxiv.page/@a…

@carbonwoman@norden.social
2025-05-28 12:41:44

Tonight from 7 pm, in the #S4F #Osnabrück lecture series, Dr. Florian Friebelkorn speaks on "Cell culture instead of factory farming: How 'lab steaks' could change our diet".
Anyone who can't be there in person can, as always, follow the livestream Google-free here:

@arXiv_grqc_bot@mastoxiv.page
2025-06-30 08:28:40

The sound of quintessence: analogue Kiselev acoustic black holes
Luis C. N. Santos, H. S. Vieira, Franciele M. da Silva, V. B. Bezerra
arxiv.org/abs/2506.21639

@arXiv_physicssocph_bot@mastoxiv.page
2025-06-30 08:37:50

Harder, shorter, sharper, forward: A comparison of women's and men's elite football gameplay (2020-2025)
Rebecca Carstens, Raj Deshpande, Pau Esteve, Nicolò Fidelibus, Sara Linde Neven, Ramona Ottow, Lokamruth K. R., Paula Rodríguez-Sánchez, Luca Santagata, Javier M. Buldú, Brennan Klein, Maddalena Torricelli

@arXiv_condmatsuprcon_bot@mastoxiv.page
2025-05-29 07:30:50

Prediction and Synthesis of Mg$_4$Pt$_3$H$_6$: A Metallic Complex Transition Metal Hydride Stabilized at Ambient Pressure
Wencheng Lu, Michael J. Hutcheon, Mads F. Hansen, Kapildeb Dolui, Shubham Sinha, Mihir R. Sahoo, Chris J. Pickard, Christoph Heil, Anna Pakhomova, Mohamed Mezouar, Dominik Daisenberger, Stella Chariton, Vitali Prakapenka, Matthew N. Julian, Rohit P. Prasankumar, Timothy A. Strobel