Tootfinder

Opt-in global Mastodon full text search. Join the index!

@tiotasram@kolektiva.social
2025-07-05 15:47:25

You're not being forced to use AI because your boss thinks it will make you more productive. You're being forced to use AI because either your boss is invested in the AI hype and wants to drive usage numbers up, or because your boss needs training data from your specific role so they can eventually replace you with an AI, or both.
Either way, it's not in your interests to actually use it, which is convenient, because using it is also harmful in at least five different ways (briefly: resource overuse, data laborer abuse, commons abuse, psychological hazard, bubble inflation, etc.)
#AI

@tiotasram@kolektiva.social
2025-07-19 08:14:41

AI, AGI, and learning efficiency
An addendum to this: I'm someone who would accurately be called "anti-AI" in the modern age, yet I'm also an "AI researcher" in some ways (though I've only dabbled in neural nets).
I don't like:
- AI systems that are the product of labor abuses towards the data workers who curate their training corpora.
- AI systems that use inordinate amounts of water and energy during an intensifying climate catastrophe.
- AI systems that are fundamentally untrustworthy and which reinforce and amplify human biases, *especially* when those systems are exposed in a way that invites harms.
- AI systems which are designed to "save" my attention or brain bandwidth, but which in doing so cripple my understanding of the things I might use them for, when in fact that understanding was what I was supposed to be spending my time to gain, and where the later lack of that understanding will be costly to me.
- AI systems that are designed by, and whose hype fattens the purses of, people who materially support genocide and the construction of concentration camps (a.k.a. fascists).
In other words, I do not like, and except in very extenuating circumstances I will not use, ChatGPT, Claude, Copilot, Gemini, etc.
On the other hand, I do like:
- AI research as an endeavor to discover new technologies.
- Generative AI as a research topic using a spectrum of different methods.
- Speculating about non-human intelligences, including artificial ones, and including how to behave ethically towards them.
- Large language models as a specific technique, along with autoencoders and other neural networks, assuming they're used responsibly in terms of both resource costs & presentation to end users.
I write this because I think some people (especially folks without CS backgrounds) may feel that opposing AI for all the harms it's causing runs the risk of opposing technological innovation more broadly, and/or may feel there's a risk that they will be "left behind" as everyone else embraces the hype and these technologies inevitably become ubiquitous and essential (I know I feel this way sometimes). Just know that it is entirely possible and logically consistent to oppose many forms of modern AI while also embracing, and even being optimistic about, AI research, and that while LLMs are currently all the rage, they're not the endpoint of what AI will look like in the future, and their downsides are not inherent in AI development.

@arXiv_csRO_bot@mastoxiv.page
2025-06-11 07:53:45

AI Magnetic Levitation (Maglev) Conveyor for Automated Assembly Production
Ray Wai Man Kong
arxiv.org/abs/2506.08039

@arXiv_eessIV_bot@mastoxiv.page
2025-06-12 08:04:01

The RSNA Lumbar Degenerative Imaging Spine Classification (LumbarDISC) Dataset
Tyler J. Richards, Adam E. Flanders, Errol Colak, Luciano M. Prevedello, Robyn L. Ball, Felipe Kitamura, John Mongan, Maryam Vazirabad, Hui-Ming Lin, Anne Kendell, Thanat Kanthawang, Salita Angkurawaranon, Emre Altinmakas, Hakan Dogan, Paulo Eduardo de Aguiar Kuriki, Arjuna Somasundaram, Christopher Ruston, Deniz Bulja, Naida Spahovic, Jennifer Sommer, Sirui Jiang, Eduardo Moreno Judice de Mattos Farina, Edu…