A Simple but Accurate Approximation for Multivariate Gaussian Rate-Distortion Function and Its Application in Maximal Coding Rate Reduction
Zhenglin Huang, Qifa Yan, Bin Dai, Xiaohu Tang
https://arxiv.org/abs/2506.18613
I just gave €30 toward a second season of the show "Argent Magique"
https://www.ulule.com/argentmagique/
Ahhhh! Our #BattleOfTarot prototype demo we're building for Gamescom is coming along so nicely!
Absolutely killer programming work by the one and only @…
and fantastic writing by our narrative designer Len Cunningham as well (
ADHO (adho.org) updates at the opening of #DH2025 from Diane and Michael - the Italian Digital Humanities association officially joins ADHO, and awards are announced, including the Zampolli Prize to the Stylo software, the conference bursary winners, and the Fortier Prize nominees. Also note the Code of Conduct!
CBS News Digital's WGA East members reach a deal with management on their first collective bargaining agreement, a week before Skydance is set to take over (Todd Spangler/Variety)
https://variety.com/2025/digital/news/cbs-news-digital-wga-east-u…
GazeDETR: Gaze Detection using Disentangled Head and Gaze Representations
Ryan Anthony Jalova de Belen, Gelareh Mohammadi, Arcot Sowmya
https://arxiv.org/abs/2508.12966 https://…
To add a single example here (feel free to chime in with your own):
Problem: editing code is sometimes tedious because external APIs require boilerplate.
Solutions:
- Use LLM-generated code. Downsides: energy use, code theft, potential for legal liability, makes mistakes, etc. Upsides: popular among some peers, seems easy to use.
- Pick a better library (not always possible).
- Build internal functions to centralize boilerplate code, then use those (benefits: you get a better understanding of the external API, and a more unit-testable internal code surface; probably less amortized effort). See the sketch after this list.
- Develop a non-LLM system that actually reasons about code at something like the formal semantics level and suggests boilerplate fill-ins based on rules, while foregrounding which rules it's applying so you can see the logic behind the suggestions (needs research).
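In case it helps make the "centralize the boilerplate" option concrete, here is a minimal sketch of what I mean, in Python. Everything in it (the example.com API, the endpoint paths, the token handling) is hypothetical; the point is only the shape: one private helper owns the headers, query encoding, and response decoding, and the rest of the codebase calls small, well-named internal functions instead of repeating that ceremony.

```python
# Minimal sketch of the "centralize the boilerplate" option above.
# The external service, endpoints, and token handling are hypothetical;
# one small internal module owns the repetitive request/decode code,
# and everything else calls plain functions that are easy to unit-test.

import json
import urllib.parse
import urllib.request
from typing import Any

_BASE_URL = "https://api.example.com/v1"  # hypothetical external API
_TOKEN = "..."                            # however your project stores secrets


def _call(path: str, params: dict[str, str] | None = None) -> Any:
    """The one place that knows about headers, encoding, and errors."""
    query = ("?" + urllib.parse.urlencode(params)) if params else ""
    req = urllib.request.Request(
        _BASE_URL + path + query,
        headers={
            "Authorization": f"Bearer {_TOKEN}",
            "Accept": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)


# Thin internal functions built on top of _call; these are what the
# rest of the codebase imports.
def get_user(user_id: str) -> Any:
    return _call(f"/users/{user_id}")


def search_documents(text: str, limit: int = 20) -> Any:
    return _call("/documents/search", {"q": text, "limit": str(limit)})
```

A nice side effect is that tests can fake `_call` once instead of stubbing HTTP everywhere, which is a lot of why this ends up being less effort when amortized.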
Obviously LLM use in coding goes beyond this single issue, but there are similar analyses for each potential use of LLMs in coding. In all cases there are:
1. Existing practical solutions that require more effort (or in many cases only seem to, and turn out to be less effort when amortized).
2. Near-term researchable solutions that directly address the problem and which would be much more desirable in the long term.
Thus in addition to disastrous LLM effects on the climate, on data laborers, and on the digital commons, they tend to suck us into cheap-seeming but ultimately costly design practices while also crowding out better long-term solutions. Next time someone suggests how useful LLMs are for some task, try asking yourself (or them) what an ideal solution for that task would look like, and whether LLM use moves us closer to or farther from a world in which that solution exists.
Strongly agree:
"She says she doubts whether the security of sensitive emails and files of Dutch citizens should be left to the market at all.
'And if you think that's acceptable, should we then leave it to non-European parties we have no grip on? I think we need to think far more strategically about our digital infrastructure and regulate these matters much better, for example by designating encryption services as vital infrastructure.'"
…
Maximising Energy Efficiency in Large-Scale Open RAN: Hybrid xApps and Digital Twin Integration
Ahmed Al-Tahmeesschi, Yi Chu, Gurdeep Singh, Charles Turyagyenda, Dritan Kaleshi, David Grace, Hamed Ahmadi
https://arxiv.org/abs/2509.10097