Tootfinder

Opt-in global Mastodon full text search. Join the index!

@cyrevolt@mastodon.social
2025-10-14 11:07:00

TIL Intel has already published a few more FIT spec revisions that I had missed.
edc.intel.com/content/www/de/d

@arXiv_csHC_bot@mastoxiv.page
2025-10-13 09:02:30

Co-Authoring the Self: A Human-AI Interface for Interest Reflection in Recommenders
Ruixuan Sun, Junyuan Wang, Sanjali Roy, Joseph A. Konstan
arxiv.org/abs/2510.08930

@thomasfuchs@hachyderm.io
2025-11-26 15:30:16

The whole thing is optimized for scams, deception and other criminal behavior:
- user interface that deceptively pretends it's a human you're talking to
- claims from companies that wildly exaggerate capabilities
- companies and "experts" constantly hype "AGI", which (funnily enough) serves both to make investors greedier and to spread fear, and distracts from the fact that these algorithms can't actually do what they keep promising
- large-scale accounting and financial fraud (e.g. what Nvidia is doing with circular selling)
- biggest case of copyright infringement in history
Note: I think the underlying technology is really cool, and it definitely has use cases and can be used for genuinely good things. But some technology just has more downsides than upsides, and some should only be used by experts in controlled environments. Leaded gasoline, asbestos and chlorofluorocarbons are also all really cool technologies.
In this case, perhaps the technology itself doesn't do anything inherently bad; however, the people making it are lying about what it can do, the people selling it are motivated purely by greed, and the people using it (often forced to do so) are being deceived.

@stefanlaser@social.tchncs.de
2025-11-24 12:49:43

The chat interface was a marketing bet. Selling #AI as if it were not autocomplete. It still is.
"#ChatGPT shifted the user’s relationship to text, moving the prompt from a ‘piece of writing for the model to finish’ to a ‘question calling for an answer’."