Morgan Stanley survey of audio habits in the US: 50% to 60% of listeners aged 18-44 reported listening to AI-generated music for 2.5 to 3 hours per week (Luke Kawa/Sherwood News)
https://sherwood.news/markets/morgan-stanley-most-gen…
The #KeePassXC discussion about GenAI coding tool use seems a bit too simplistic at the moment.
There is room for nuance:
1. Yes, LLM-based code generators consume insane amounts of electricity and cause collateral environmental damage. That's bad, and we should talk much more about energy efficiency and reasonable use of resources.
2. Yes, LLMs generate a lot of bad o…
We should track down whoever decided streaming TV apps need to block taking a screenshot. They need to know a couple of things:
1) Sharing screenshots generates interest, which means money for streamers & IP holders alike.
2) You know what doesn’t stop me from making a screenshot? Pirated media.
Do you even gain anything from stopping screenshots?
#Netflix
Imagine ChatGPT, but instead of predicting text it just linked you to the top 3 documents most influential on the probabilities that would have been used to predict that text.
Could even generate some info about which parts of each would have been combined how.
There would still be issues with how training data is sourced and filtered, but these could be solved by crawling normally (respecting robots.txt) and by paying filterers a fair wage, with a more relaxed work schedule and mental health support.
The energy issues are mainly about wild future investment and wasteful query spam, not optimized present-day per-query usage.
Is this "just search"?
Yes, but it would have some advantages for a lot of use cases, mainly in synthesizing results across multiple documents and in leveraging a language model more fully to find relevant stuff.
When we talk about the harms of current corporate LLMs, the opportunity cost of NOT building things like this is part of that.
The equivalent for art would have been so amazing too! "Here are some artists that can do what you want, with examples pulled from their portfolios."
It would be a really cool coding assistant that I'd actually encourage my students to use (with some guidelines).
#AI #GenAI #LLMs
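For what it's worth, the "just search" core of the idea can be sketched in a few lines. This is a toy illustration, not a real system: it uses bag-of-words cosine similarity where an actual implementation would lean on a language model's representations, and all names (`top_influences`, the sample corpus) are hypothetical.

```python
# Toy sketch: instead of generating text, rank a corpus by relevance to
# the query and return the top 3 source documents, along with the
# overlapping terms that drove each match (a crude stand-in for "which
# parts of each would have been combined how").
import math
from collections import Counter

def tokenize(text):
    return [w.strip(".,?!").lower() for w in text.split()]

def cosine(a: Counter, b: Counter) -> float:
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def top_influences(query: str, corpus: dict, k: int = 3):
    q = Counter(tokenize(query))
    scored = []
    for doc_id, text in corpus.items():
        d = Counter(tokenize(text))
        # (similarity, document id, terms shared between query and doc)
        scored.append((cosine(q, d), doc_id, sorted(set(q) & set(d))))
    scored.sort(reverse=True)
    return scored[:k]
```

A real version would swap the word-overlap score for something derived from the model's own probabilities, but the output shape is the point: documents plus an explanation of the match, rather than synthesized text.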
Crosslisted article(s) found for cs.GT. https://arxiv.org/list/cs.GT/new
[1/1]:
- AI-Generated Compromises for Coalition Formation: Modeling, Simulation, and a Textual Case Study
Eyal Briman, Ehud Shapiro, Nimrod Talmon
https://arxiv.org/abs/2512.05983 https://mastoxiv.page/@arXiv_csMA_bot/115688474865840195
- Going All-In on LLM Accuracy: Fake Prediction Markets, Real Confidence Signals
Michael Todasco
https://arxiv.org/abs/2512.05998
- Small-Gain Nash: Certified Contraction to Nash Equilibria in Differentiable Games
Vedansh Sharma
https://arxiv.org/abs/2512.06791 https://mastoxiv.page/@arXiv_csLG_bot/115689591150148735
- Characterizing Lane-Changing Behavior in Mixed Traffic
Sungyong Chung, Alireza Talebpour, Samer H. Hamdar
https://arxiv.org/abs/2512.07219 https://mastoxiv.page/@arXiv_csMA_bot/115688571373683355
- Understanding LLM Agent Behaviours via Game Theory: Strategy Recognition, Biases and Multi-Agent ...
Kiet Huynh, et al.
https://arxiv.org/abs/2512.07462 https://mastoxiv.page/@arXiv_csMA_bot/115688610063828863
- Optimal Auction Design under Costly Learning
Kemal Ozbek
https://arxiv.org/abs/2512.07798 https://mastoxiv.page/@arXiv_econTH_bot/115688939067758036
Google says Ironwood, its seventh-gen TPU, will launch in the coming weeks and is more than 4x faster than its sixth-gen TPU; it comes in a 9,216-chip config (CNBC)
https://www.cnbc.com/2025/11/06/google-unveils-ironwood-seventh-gener…
Nvidia unveils DLSS 4.5 with a new 6x Multi Frame Generation for the RTX 50 series, and a second-generation Super Resolution transformer model for all RTX GPUs (Tom Warren/The Verge)
https://www.theverge.com/tech/854610/nvidia-dlss-4-5-announ…