Tootfinder

Opt-in global Mastodon full-text search. Join the index!

@Techmeme@techhub.social
2026-01-10 16:55:47

Morgan Stanley survey of audio habits in the US: 50% to 60% of listeners aged 18-44 reported listening to AI-generated music for 2.5 to 3 hours per week (Luke Kawa/Sherwood News)
sherwood.news/markets/morgan-s

@rene_mobile@infosec.exchange
2025-11-08 23:25:44

The #KeepassXC discussion about GenAI coding tool use seems a bit too simplistic at the moment.
There is room for nuance:
1. Yes, LLM-based code generators consume insane amounts of electricity and cause collateral environmental damage. That's bad, and we should talk much more about energy efficiency and reasonable use of resources.
2. Yes, LLMs generate a lot of bad o…

@scottmiller42@mstdn.social
2026-01-10 09:41:55

We should track down whoever decided streaming TV apps need to block taking a screenshot. They need to know a couple of things:
1) Sharing screenshots generates interest, which means money for streamers & IP holders alike.
2) You know what doesn’t stop me from making a screenshot? Pirated media.
Do you even gain anything from stopping screenshots?
#Netflix

@seeingwithsound@mas.to
2026-01-06 07:03:13

To ChatGPT: By analogy to generative artificial intelligence for images (#GenAI), is there something like generative human intelligence for images (#GenHI)?

@tiotasram@kolektiva.social
2025-11-09 12:09:40

Imagine ChatGPT but instead of predicting text it just linked you to the top 3 documents most influential on the probabilities that would have been used to predict that text.
Could even generate some info about which parts of each would have been combined how.
There would still be issues with how training data is sourced and filtered, but these could be addressed by crawling that respects robots.txt and by paying filterers a fair wage, with a more relaxed work schedule and mental health support.
The energy issues are mainly about wild future investment and wasteful query spam, not optimized present-day per-query usage.
Is this "just search?"
Yes, but it would have some advantages for a lot of use cases, mainly in synthesizing results across multiple documents and in leveraging a language model more fully to find relevant stuff.
When we talk about the harms of current corporate LLMs, the opportunity cost of NOT building things like this is part of that.
The equivalent for art would have been so amazing too! "Here are some artists that can do what you want, with examples pulled from their portfolios."
It would be a really cool coding assistant that I'd actually encourage my students to use (with some guidelines).
#AI #GenAI #LLMs
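
A rough sketch of the "link to the most influential sources instead of generating text" idea from the post above, assuming an off-the-shelf sentence-embedding model as a cheap stand-in for true influence attribution; the corpus, document IDs, and query here are purely illustrative:

# Sketch: return the top-k source documents for a query instead of generated text.
# Embedding similarity stands in for the "influence on the probabilities"
# attribution the post imagines; real attribution would need training-time bookkeeping.
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical in-memory corpus; a real system would index a crawled document set
# (respecting robots.txt, as the post notes).
CORPUS = {
    "doc-a": "Robots.txt tells crawlers which paths they may fetch.",
    "doc-b": "Influence functions estimate how training examples affect a model's predictions.",
    "doc-c": "Retrieval systems rank documents by similarity to a query embedding.",
}

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_ids = list(CORPUS)
doc_vecs = model.encode([CORPUS[d] for d in doc_ids], normalize_embeddings=True)

def top_documents(query: str, k: int = 3):
    """Return the k documents most similar to the query, with similarity scores."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # cosine similarity, since vectors are normalized
    ranked = np.argsort(-scores)[:k]
    return [(doc_ids[i], float(scores[i])) for i in ranked]

if __name__ == "__main__":
    for doc_id, score in top_documents("how does a crawler respect robots.txt?"):
        print(f"{doc_id}\t{score:.3f}")

This embedding-based ranking is the "just search" baseline the post compares against; the advantage it describes would come from layering source attribution and cross-document synthesis on top of results like these.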

@arXiv_csGT_bot@mastoxiv.page
2025-12-09 13:25:19

Crosslisted article(s) found for cs.GT. arxiv.org/list/cs.GT/new
[1/1]:
- AI-Generated Compromises for Coalition Formation: Modeling, Simulation, and a Textual Case Study
Eyal Briman, Ehud Shapiro, Nimrod Talmon
arxiv.org/abs/2512.05983 mastoxiv.page/@arXiv_csMA_bot/
- Going All-In on LLM Accuracy: Fake Prediction Markets, Real Confidence Signals
Michael Todasco
arxiv.org/abs/2512.05998
- Small-Gain Nash: Certified Contraction to Nash Equilibria in Differentiable Games
Vedansh Sharma
arxiv.org/abs/2512.06791 mastoxiv.page/@arXiv_csLG_bot/
- Characterizing Lane-Changing Behavior in Mixed Traffic
Sungyong Chung, Alireza Talebpour, Samer H. Hamdar
arxiv.org/abs/2512.07219 mastoxiv.page/@arXiv_csMA_bot/
- Understanding LLM Agent Behaviours via Game Theory: Strategy Recognition, Biases and Multi-Agent ...
Kiet Huynh, et al.
arxiv.org/abs/2512.07462 mastoxiv.page/@arXiv_csMA_bot/
- Optimal Auction Design under Costly Learning
Kemal Ozbek
arxiv.org/abs/2512.07798 mastoxiv.page/@arXiv_econTH_bo
toXiv_bot_toot

@Techmeme@techhub.social
2025-11-06 13:30:56

Google says Ironwood, its seventh-gen TPU, will launch in the coming weeks and is more than 4x faster than its sixth-gen TPU; it comes in a 9,216-chip config (CNBC)
cnbc.com/2025/11/06/google-unv

@Techmeme@techhub.social
2026-01-06 06:10:34

Nvidia unveils DLSS 4.5 with a new 6x Multi Frame Generation for the RTX 50 series, and a second-generation Super Resolution transformer model for all RTX GPUs (Tom Warren/The Verge)
theverge.com/tech/854610/nvidi