Let's say you find a really cool forum online that has lots of good advice on it. It's even got a very active community that's happy to answer questions very quickly, and the community seems to have a wealth of knowledge about all sorts of subjects.
You end up visiting this community often and trusting the advice you get there to answer all sorts of everyday questions, the kind you might previously have answered with a web search (of course, web search is now so full of SEO spam and other crap that it's become nearly useless).
Then one day, you ask an innocuous question about medicine, and this community gives you the full homeopathy treatment as your answer. It's somewhat believable on the face of it, and it includes lots of citations to reasonable-seeming articles, except that if you know even a tiny bit about chemistry and biology (which thankfully you do), you know the homeopathy answers are completely bogus and horribly dangerous (since they offer non-treatments for real diseases). Your opinion of the entire forum suddenly changes. "Oh my God, if they've been homeopathy believers all this time, what other myths have they fed me as facts?"
You stop using the forum for anything and go back to slogging through SEO crap to answer your everyday questions, because once you realize that this forum is fundamentally untrustworthy, you realize that the value of getting advice from it on any subject is negative: you knew enough to spot the dangerous homeopathy answer, but there might be other such myths you don't know enough to avoid, and any community willing to go all-in on one myth has shown itself capable of going all-in on any number of others.
...
This has been a parable about large language models.
#AI #LLM
“it is a truth of this website that no matter what position you hold on any topic, one of the most terminally online people on earth whose full time job appears to be getting angry on the internet will appear to declare that you are a centrist” https://bsky.app/profile/did:plc:3s5wt
As an OEM, I want to make my customers angry, so that they keep knocking on my RCE vulns.
https://mrbruh.com/asus_p2/
Earlier this month, the Copyright Office issued the third part of its report on AI: this one covering how generative AI may infringe on copyright and whether that's fair use (short answer: maybe, maybe not). I have a new article up summarizing the report.
https://www.…
The challenge of HEPA filters in classrooms.
h/t @…
source: https://xcancel.com/kadamssl/status/1938267359233429683#m
Nearly 80 BBC journalists, including presenter Martine Croxall, call on the NUJ to schedule a vote on a strike over colleagues facing compulsory layoffs (Jake Kanter/Deadline)
https://deadline.com/2025/06/bbc-news-presenters-strike-vote-mar…
Today I recalled an event from my youth.
One day, as I was walking home from school, a driver stopped me. He asked if I was from around here. Naturally, I found that quite inappropriate, since he had no business learning where I lived. But I answered. Then he asked if I knew where the tire repair shop was. I answered that I didn't. So he asked me again: "but are you from around here?" Well, that was too much, so he got a short explanation that just because I live here doesn't mean I need to know every single business in the area, and since I am not a driver (obviously, as I was just a schoolkid), I have never needed a tire repair shop.
I suppose that worked quite well, since he drove away at this point. As he drove away, his car revealed a tire repair shop poster on the fence opposite.
The obvious lesson here: if you need something, ask for it directly instead of going through silly helper questions.
Response Quality Assessment for Retrieval-Augmented Generation via Conditional Conformal Factuality
Naihe Feng, Yi Sui, Shiyi Hou, Jesse C. Cresswell, Ga Wu
https://arxiv.org/abs/2506.20978