«LLMs are cliché machines, trained on a resilient human weakness for generating maximum content with minimum effort.»
Bingo.
Unfortunately, this too hits the nail on the head: «Bad art is something human beings love to do, in vast numbers. It’s part of who we are, and when abandoned by inspiration we trust in the same methods we’ve programmed into LLMs.»
'It is 2025, and seemingly everyone wants us in the humanities to do stuff “with AI,” informed not by what the technology avails but by the hopes it encodes.' Sonja Drimmer, on 🔥 in Artforum
https://www.artforum.com/features/generative-ai-st…
‘When you plant something, it dies’: Brazil’s first arid zone is a stark warning for the whole country https://www.theguardian.com/global-development/2025/dec/28/brazil-first-arid-zone-stark-warning-for-country
ProphetKV: User-Query-Driven Selective Recomputation for Efficient KV Cache Reuse in Retrieval-Augmented Generation
Shihao Wang, Jiahao Chen, Yanqi Pan, Hao Huang, Yichen Hao, Xiangyu Zou, Wen Xia, Wentao Zhang, Haitao Wang, Junhong Li, Chongyang Qiu, Pengfei Wang
https://arxiv.org/abs/2602.02579 https://arxiv.org/pdf/2602.02579 https://arxiv.org/html/2602.02579
arXiv:2602.02579v1 Announce Type: new
Abstract: The prefill stage of long-context Retrieval-Augmented Generation (RAG) is severely bottlenecked by computational overhead. To mitigate this, recent methods assemble pre-calculated KV caches of the RAG documents retrieved for a user query and reprocess selected tokens to recover cross-attention between these pre-calculated caches. However, we identify a fundamental "crowding-out effect" in current token selection criteria: globally salient but user-query-irrelevant tokens saturate the limited recomputation budget, displacing the tokens truly essential for answering the user query and degrading inference accuracy.
We propose ProphetKV, a user-query-driven KV Cache reuse method for RAG scenarios. ProphetKV dynamically prioritizes tokens based on their semantic relevance to the user query and employs a dual-stage recomputation pipeline to fuse layer-wise attention metrics into a high-utility set. By ensuring the recomputation budget is dedicated to bridging the informational gap between retrieved context and the user query, ProphetKV achieves high-fidelity attention recovery with minimal overhead. Our extensive evaluation results show that ProphetKV retains 96%-101% of full-prefill accuracy with only a 20% recomputation ratio, while achieving accuracy improvements of 8.8%-24.9% on RULER and 18.6%-50.9% on LongBench over the state-of-the-art approaches (e.g., CacheBlend, EPIC, and KVShare).
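The core idea, as the abstract describes it, is to spend the fixed recomputation budget on tokens relevant to the user query rather than on globally salient ones. A minimal sketch of that selection step, assuming per-token query-relevance scores are already available (this is an illustration, not ProphetKV's actual algorithm or scoring function):

```python
# Hedged sketch: user-query-driven token selection under a fixed
# recomputation budget. The scores and function name are illustrative
# assumptions, not taken from the ProphetKV paper.

def select_recompute_tokens(query_relevance, budget_ratio=0.2):
    """Pick which cached token positions to recompute, ranked by
    relevance to the user query instead of global saliency -- the
    paper's remedy for the "crowding-out effect"."""
    n = len(query_relevance)
    budget = max(1, int(n * budget_ratio))  # e.g. the 20% ratio cited above
    ranked = sorted(range(n), key=lambda i: query_relevance[i], reverse=True)
    return sorted(ranked[:budget])  # token positions chosen for recomputation

# Toy example: 10 cached tokens; positions 3 and 7 matter for this query.
scores = [0.1, 0.2, 0.1, 0.9, 0.1, 0.2, 0.1, 0.8, 0.1, 0.1]
print(select_recompute_tokens(scores))  # -> [3, 7]
```

With a 20% budget over 10 tokens, only the two highest-relevance positions are recomputed; a globally salient but query-irrelevant token would simply not make the cut.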
Pinterest users, especially artists, say the platform has gotten worse in the past year due to AI moderation, AI-generated art, and AI features (Matthew Gault / 404 Media)
https://www.404media.co/pinterest-is-drowning-in-a-sea-of-ai-slop-and-auto-moderat…
Fact Check: FAKE Image Shows U.S. Embassy In Saudi Arabia On Fire After Drone Attack -- It's AI-Generated: https://benborges.xyz/2026/03/03/fact-check-fake-image-shows.html
This could be long. It's hard to express my thoughts on this.
I had a bit of a heart to heart with my son today. My son is a student at Minneapolis College of Art and Design. He's 26. I'm white, his mother is third generation Mexican-American. To me, my kids are just my kids, I see myself and my ex-wife both in them. But he has been pushing me to realize for some time that isn't how the rest of the people around us see them. They don't see me in them at all, they ju…
@axbom@axbom.me