Tootfinder

Opt-in global Mastodon full text search. Join the index!

@matthiasott@mastodon.social
2026-01-16 15:16:07

Oh yesss! 🤩 The 2025 #Web #Almanac has just been released. Can’t wait to read a bit …
almanac.httparchive.org/en/202

@carstingaxion@dewp.space
2026-01-16 20:59:04

“Collectively, these optimizations target common real-world bottlenecks, improving consistency across a wide variety of configurations. HTTP Archive data supports this pattern, showing that WordPress performance variance is driven more by configuration than by core limitations.”
💐 Flowers for the #WordPress performance team, gifted by

@aardrian@toot.cafe
2026-01-16 14:34:55

The 2025 Web Almanac misrepresented me.
I did *not* say LLMs provide better image descriptions. I cited SeeingAI and Be My Eyes as tools for undescribed IRL uses.
I said LLM-generated captions could be better than craptions. I mentioned abstracts / reading-level changes, which could be summaries?
But “better” image descriptions is right out.

Adrian Roselli acknowledges that recent advances in computer vision and LLMs have brought some real benefits, such as machine-generated captions that can beat low-quality auto-captions, along with abstracts and reading-level adjustments. However, he argues these tools still lack context and authorship. They can’t know why content was created, what a joke or meme depends on, or how an interface is meant to work. Their descriptions and code suggestions can easily miss the point or mislead users.