
2025-06-16 19:02:46
Wissenschaftler:innen haben herausgefunden: Wer ChatGPT oder andere Bullshit-Generatoren nutzt, verblödet innerhalb kurzer Zeit.
#LLM
Wissenschaftler:innen haben herausgefunden: Wer ChatGPT oder andere Bullshit-Generatoren nutzt, verblödet innerhalb kurzer Zeit.
#LLM
I just saw an all-caps instruction file that someone uses to 'instruct' an LLM to help with coding, and it's just "don't hallucinate", "check your work", "don't say you did something when you didn't" with multiple exclamation marks.
So, basically the whole 'vibe coding,' or having "AI" "help" with coding just devolves into shouting at your computer.
Which reminded me of something, and then it hit me!
#ai #llm #vibecoding
https://www.youtube.com/watch?v=q8SWMAQYQf0
Agentic AI as the enemy's agent.
It is a bad idea to allow an LLM access to internal data and external communication (web pages, APIs, email, …) at the same time.
#AgenticAI #DataLeak #LLM
This should not be surprising for anyone who knows how LLMs work but holy shit is this scary!
The article is about regular people whose conspiracy beliefs were encouraged by #ChatGPT.
I think the fact that humans are lonelier than ever makes it easy to prey on a large amount of vulnerable people, which is why #LLM
Focus and Context and LLMs | Taras' Blog on AI, Perf, Hacks
#AI
20. März, 13:30 - 14:00 - neue Ausgabe unseres #TTT zu #storm ai, das mit #LLM #wikipedia -ähnliche Be…
GenAI is the new Offshoring #ai #llm
https://ardalis.com/genai-is-the-new-offshoring/
Mein Vortrag für die Tübix wurde angenommen:
Wie uns LLMs beim Programmieren helfen
#LLM
I’m sorry, but I cannot help a tiny bit of Schadenfreude. A colleague is an enthusiastic user of ChatGPT and recently told me that one does not need traditional reference managers like Zotero anymore, since you can just ask the LLM to re-format your references according to a given style. Now he got article proofs back with countless comments that the dates in in-text references don’t match the dates in the bibliography. 🙃
Fascinating. Leaked LLM prompt instructions from most of the chat sites.
#AI
The #OpenAI paper by Baker et al, "Monitoring Reasoning Models for Misbehavior and the Risks of Promoting Obfuscation" comes to a troubling conclusion: #LLM s with #reasoning or
I think someone has a lot of spare time, money, and energy.
#AI #LLM
https://youtube.com/watch?v=7fNYj0EXxM
Let's say you find a really cool forum online that has lots of good advice on it. It's even got a very active community that's happy to answer questions very quickly, and the community seems to have a wealth of knowledge about all sorts of subjects.
You end up visiting this community often, and trusting the advice you get to answer all sorts of everyday questions you might have, which before you might have found answers to using a web search (of course web search is now full of SEI spam and other crap so it's become nearly useless).
Then one day, you ask an innocuous question about medicine, and from this community you get the full homeopathy treatment as your answer. Like, somewhat believable on the face of it, includes lots of citations to reasonable-seeming articles, except that if you know even a tiny bit about chemistry and biology (which thankfully you do), you know that the homoeopathy answers are completely bogus and horribly dangerous (since they offer non-treatments for real diseases). Your opinion of this entire forum suddenly changes. "Oh my God, if they've been homeopathy believers all this time, what other myths have they fed me as facts?"
You stop using the forum for anything, and go back to slogging through SEI crap to answer your everyday questions, because one you realize that this forum is a community that's fundamentally untrustworthy, you realize that the value of getting advice from it on any subject is negative: you knew enough to spot the dangerous homeopathy answer, but you know there might be other such myths that you don't know enough to avoid, and any community willing to go all-in on one myth has shown itself to be capable of going all in on any number of other myths.
...
This has been a parable about large language models.
#AI #LLM
Der Begriff „KI Grooming“ meint dasselbe wie logisch-semantische Injektion: https://seagent.de/ki-als-logisch-semantische-cloud-logisch-semantische-souveraenitaet/
Someone in my LinkedIn network posted this, and I have no inkling if it is genuine or sarcasm (see: Poe's Law).
Full text of the post in the image Alt Text.
NOTE: Please do not dogpile this person due to my toot.
#LLMs #WorkerReplacement
After months of coding with an #LLM I'm going back to using my brain
https://simonwillison.net/2025/May/20/after-months-of-coding-with-llms/#ato…
Hiddenlayer came up with a security bypass for all LLMs. Just ask for a script of a Dr. House episode and inject some policy XML. Also, use l337sp33k: #llmsecurity #llm
Every company is undergoing an invisible reorg. You report to your boss but your boss reports to an #AI, offloading the job of management entirely onto a bot and then merely communicating its wishes back to the team.
This is the Nothing Manager, surrounded by #LLM tools to avoid having to interact with…
A post from the archive 📫:
If LLMs Can Code, Why Are We Building More IDEs?
https://www.poppastring.com/blog/if-llms-can-code-why-are-we-building-more-ides
WTF #LLM #LLMs #AI #UK #UKPolitics