Tootfinder

Opt-in global Mastodon full text search. Join the index!

@chpietsch@fedifreu.de
2025-06-16 19:02:46

Researchers have found out: anyone who uses ChatGPT or other bullshit generators turns stupid within a very short time.
#LLM

@gedankenstuecke@scholar.social
2025-06-17 14:18:54

I just saw an all-caps instruction file that someone uses to 'instruct' an LLM to help with coding, and it's just "don't hallucinate", "check your work", "don't say you did something when you didn't" with multiple exclamation marks.
So basically the whole 'vibe coding' thing, or having "AI" "help" with coding, just devolves into shouting at your computer.
Which reminded me of something, and then it hit me!
#ai #llm #vibecoding
youtube.com/watch?v=q8SWMAQYQf

@livia@sciences.social
2025-06-18 08:02:00

We’re at a point where #SEO optimisation is checking if the #LLM "interprets" things right and DAMN it’s awful.

@pavelasamsonov@mastodon.social
2025-06-14 17:00:59

In 300 BC, Zeno proved that it's impossible to code an app using #LLM tools.
Imagine a vibe coder who generates an app. The LLM can only provide working code for half of the features requested.
So he has to ask the #AI to generate the other half. Once again, the AI can only fulfill half of the…

@fell@ma.fellr.net
2025-05-16 16:01:23

Is it just me, or do LLMs have a strong tendency to try and guess what you want to hear and then tell you exactly that, regardless of factual accuracy?
#AI #ML #LLM

@marcel@waldvogel.family
2025-06-12 08:08:57

Agentic AI as the enemy's agent.
It is a bad idea to allow an LLM access to internal data and external communication (web pages, APIs, email, …) at the same time.
#AgenticAI #DataLeak #LLM
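
The separation argued for above can be illustrated with a small sketch. This is a hypothetical guard in Go (the types, names, and capability classes are mine, not from the post): it refuses to hand an agent session both internal-data tools and external-communication tools at the same time.

```go
package main

import (
	"errors"
	"fmt"
)

// Capability classes for tools an agent may call. The names are hypothetical;
// they only illustrate the separation the post argues for.
type Capability int

const (
	InternalData Capability = iota // e.g. file shares, internal APIs, mail archives
	ExternalComm                   // e.g. web requests, outgoing email, third-party APIs
)

// Tool is a hypothetical description of something the agent may invoke.
type Tool struct {
	Name string
	Cap  Capability
}

// ErrLethalCombination is returned when a session would combine private-data
// access with a potential exfiltration channel.
var ErrLethalCombination = errors.New("refusing to grant internal data access and external communication in the same session")

// ValidateToolset rejects any toolset that mixes the two capability classes.
func ValidateToolset(tools []Tool) error {
	var hasInternal, hasExternal bool
	for _, t := range tools {
		switch t.Cap {
		case InternalData:
			hasInternal = true
		case ExternalComm:
			hasExternal = true
		}
	}
	if hasInternal && hasExternal {
		return ErrLethalCombination
	}
	return nil
}

func main() {
	session := []Tool{
		{Name: "read_internal_wiki", Cap: InternalData},
		{Name: "send_email", Cap: ExternalComm},
	}
	if err := ValidateToolset(session); err != nil {
		fmt.Println("blocked:", err)
		return
	}
	fmt.Println("toolset allowed")
}
```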

@michabbb@social.vivaldi.net
2025-06-15 02:02:16

Building a Code-Editing #Agent in 400 Lines of #Go Code 🤖

🔧 Complete agent implementation requires only #LLM integration, loop structure & sufficient token allocation
🧵 👇
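
Taking the post's claim at face value (an agent is just an LLM call, a loop, and enough tokens), here is a minimal Go sketch of that loop. callLLM is a hypothetical placeholder, not the API the linked thread uses; a real code-editing agent would additionally parse tool calls (read file, edit file, run command) out of the model's replies, which is where most of the 400 lines would go.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// callLLM is a hypothetical stand-in for whatever chat-completion API the
// thread's agent actually uses; here it just echoes a canned reply.
func callLLM(history []string) string {
	return "MODEL: (placeholder response to) " + history[len(history)-1]
}

// The "loop structure" from the post: read user input, append it to the
// conversation history, ask the model, print the reply, repeat until EOF.
func main() {
	history := []string{}
	in := bufio.NewScanner(os.Stdin)
	fmt.Print("> ")
	for in.Scan() {
		line := strings.TrimSpace(in.Text())
		if line == "" {
			fmt.Print("> ")
			continue
		}
		history = append(history, "USER: "+line)
		reply := callLLM(history)
		history = append(history, reply)
		fmt.Println(reply)
		fmt.Print("> ")
	}
}
```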

@markhburton@mstdn.social
2025-06-14 10:36:56

I'm appalled to see self-professed environmentalists with a good track record uncritically using ChatGPT or whatever it's called for simple web enquiries. Lazy, ignorant, or do they really not care?
#LLM, '#AI'

@AimeeMaroux@mastodon.social
2025-06-14 19:44:41
Content warning:

This should not be surprising for anyone who knows how LLMs work but holy shit is this scary!
The article is about regular people whose conspiracy beliefs were encouraged by #ChatGPT.
I think the fact that humans are lonelier than ever makes it easy to prey on a large number of vulnerable people, which is why #LLM

@usul@piaille.fr
2025-06-11 11:31:32

Focus and Context and LLMs | Taras' Blog on AI, Perf, Hacks
#AI

@DGIInfo@openbiblio.social
2025-03-18 11:00:23

20 March, 13:30–14:00 – new edition of our #TTT on #storm ai, which uses #LLM to produce #wikipedia-like…

@mia@hcommons.social
2025-06-03 09:43:50

An LLM (Copilot, in this case) made an impressive-looking graph of collection items over time for a colleague, but after a bit of probing he came to the most 2025 realisation possible:
'Oh wait I've just realised it's made the whole thing up....DOH!'
#AI #LLM

@stsquad@mastodon.org.uk
2025-06-03 18:39:30

The #QEMU contribution policy is being updated to make it clear we don't currently accept #llm generated code:

@marcel@waldvogel.family
2025-06-12 14:32:51

Anyone who wants to experiment with an #LLM without handing data over to #KI companies can meanwhile download plenty of "open source" models.
On quite a few machines this even runs surprisingly fast. For instance, @…

@hw@fediscience.org
2025-04-10 06:13:49

Ai2 now has a tool where you can trace the outputs of LLMs to their possible sources in the training materials. It's very interesting.
Obviously only works with fully open models like their OLMo family of models. More info here: #LLM #OLMo2 #AI

@pavelasamsonov@mastodon.social
2025-06-11 04:03:18

There is a lot of conflict between developers who say #LLM tools are making them more productive, and developers who want to quit and move to a cabin in the woods.
Recently I discovered a possible reason why. #AI is just a bad fit for conventional, reality-based models of value creation like

@poppastring@dotnet.social
2025-06-05 23:05:17

GenAI is the new Offshoring #ai #llm
ardalis.com/genai-is-the-new-o

@simon_brooke@mastodon.scot
2025-06-05 09:13:37

If you're (like me) trying to create a Local Place Plan for your locality, and are struggling to analyse data from the Place Standard Tool, I've written a wee #Clojure program, leveraging Google's Gemini #LLM, to do it for you.
If you're just trying to analyse data from some other spre…

@veit@mastodon.social
2025-05-06 09:21:14

My talk for Tübix has been accepted:
How LLMs help us with programming
#LLM

@felwert@fedihum.org
2025-06-11 09:34:10

I’m sorry, but I cannot help feeling a tiny bit of Schadenfreude. A colleague is an enthusiastic user of ChatGPT and recently told me that one does not need traditional reference managers like Zotero anymore, since you can just ask the LLM to re-format your references according to a given style. Now he got article proofs back with countless comments that the dates in in-text references don't match the dates in the bibliography. 🙃

@publicvoit@graz.social
2025-05-30 07:12:47

#LLM tech is about to kill anything where people are able to contribute:
The Future of #Comments is Lies, I Guess

@dichotomiker@dresden.network
2025-06-05 08:20:01

#TIL: When considering the power consumption of #LLM, I had so far not factored in the additional consumption that web scraping causes on web servers,
#KI

@dennisfaucher@infosec.exchange
2025-05-29 23:21:27

Fascinating. Leaked LLM prompt instructions from most of the chat sites.
#AI

@whophd@ioc.exchange
2025-06-09 03:00:58

#AI #LLM technology isn’t like a cal…

@samvarma@fosstodon.org
2025-06-04 15:32:47

This author is invaluable to me because they always have a fresh take that I haven't seen anywhere else. Was a fave follow on the bad place.
In this case, re #LLMs
#AI #LLM

@JGraber@mastodon.social
2025-05-09 11:20:02

#Python Friday #278: Optimise the #LLM Client - #AI

@thomasrenkert@hcommons.social
2025-05-23 08:15:29

The #OpenAI paper by Baker et al., "Monitoring Reasoning Models for Misbehavior and the Risks of Promoting Obfuscation," comes to a troubling conclusion: #LLMs with #reasoning or

If CoT pressures are used to improve agent capabilities or alignment, there may be no alternative approach to yield the same improvements without degrading monitorability. In the worst case, where the agent learns to fully obscure its intent in its CoT, we ultimately revert to the same model safety conditions that existed prior to the emergence of reasoning models and must rely on monitoring activations, monitoring potentially adversarial CoTs and outputs, or improved alignment methods. Model a…
@sjn@chaos.social
2025-05-26 23:04:32

What thought-terminating clichés are used to prevent critiquing #AI and #LLM ?
I can think about one or two, but I'd love to hear if there are more in circulation...

@alsutton@snapp.social
2025-05-20 10:57:15

I think someone has a lot of spare time, money, and energy.
#AI #LLM
youtube.com/watch?v=7fNYj0EXxM

@tiotasram@kolektiva.social
2025-05-26 12:51:54

Let's say you find a really cool forum online that has lots of good advice on it. It's even got a very active community that's happy to answer questions very quickly, and the community seems to have a wealth of knowledge about all sorts of subjects.
You end up visiting this community often, and trusting the advice you get to answer all sorts of everyday questions you might have, which before you might have found answers to using a web search (of course web search is now full of SEO spam and other crap so it's become nearly useless).
Then one day, you ask an innocuous question about medicine, and from this community you get the full homeopathy treatment as your answer. Like, somewhat believable on the face of it, includes lots of citations to reasonable-seeming articles, except that if you know even a tiny bit about chemistry and biology (which thankfully you do), you know that the homeopathy answers are completely bogus and horribly dangerous (since they offer non-treatments for real diseases). Your opinion of this entire forum suddenly changes. "Oh my God, if they've been homeopathy believers all this time, what other myths have they fed me as facts?"
You stop using the forum for anything, and go back to slogging through SEO crap to answer your everyday questions, because once you realize that this forum is a community that's fundamentally untrustworthy, you realize that the value of getting advice from it on any subject is negative: you knew enough to spot the dangerous homeopathy answer, but you know there might be other such myths that you don't know enough to avoid, and any community willing to go all-in on one myth has shown itself to be capable of going all-in on any number of other myths.
...
This has been a parable about large language models.
#AI #LLM

@gratianriter@bildung.social
2025-05-29 05:21:32

The term "KI Grooming" (AI grooming) means the same thing as logical-semantic injection: seagent.de/ki-als-logisch-sema

@scottmiller42@mstdn.social
2025-06-02 10:43:54

Someone in my LinkedIn network posted this, and I have no inkling if it is genuine or sarcasm (see: Poe's Law).
Full text of the post in the image Alt Text.
NOTE: Please do not dogpile this person due to my toot.
#LLMs #WorkerReplacement

This is a screenshot of a LinkedIn post that has the following text.
FINALLY AI WILL KILL HR 🤖 💼

I’m so excited that AI will free us from the broken and inefficient HR paradigm of nearly worthless interviews, performance reviews, and promotion decisions.

When an LLM can gain access to your accounts (with proper permissions established), we’ll be managing Human Resources like we do cash flow and other resources: plugging the right people to the right places at the right time. The synthesis of …
@poppastring@dotnet.social
2025-05-22 01:51:55

After months of coding with an #LLM I'm going back to using my brain
simonwillison.net/2025/May/20/

@hw@fediscience.org
2025-04-26 04:39:50

HiddenLayer came up with a security bypass for all LLMs. Just ask for a script of a Dr. House episode and inject some policy XML. Also, use l337sp33k: #llmsecurity #llm

@pavelasamsonov@mastodon.social
2025-05-23 15:15:40

Every company is undergoing an invisible reorg. You report to your boss but your boss reports to an #AI, offloading the job of management entirely onto a bot and then merely communicating its wishes back to the team.
This is the Nothing Manager, surrounded by #LLM tools to avoid having to interact with…

@kirenida@social.linux.pizza
2025-04-13 08:05:00

Fuck this #meta #whatsapp #llama #ai

Screenshot of the WhatsApp AI chat
Screenshot of the Messenger AI chat
@poppastring@dotnet.social
2025-05-31 19:35:26

A post from the archive 📫:
If LLMs Can Code, Why Are We Building More IDEs?
poppastring.com/blog/if-llms-c

@smurthys@hachyderm.io
2025-05-29 22:16:40

#CNN: Trump administration’s MAHA report on children’s health filled with flawed references, including some studies that don’t exist
Easy. They had AI draft parts of the report. 100 bucks if they didn't.
#uspol #health #LLM #AI

@henrikmillinge@fikaverse.club
2025-05-27 13:10:39

It's probably much the same in Denmark. #KI #AI #LLM #DKMedier

@castarco@hachyderm.io
2025-03-19 09:45:48

WTF #LLM #LLMs #AI #UK #UKPolitics

@pavelasamsonov@mastodon.social
2025-05-29 21:13:07

Any second now... #LLM #AGI #GenAI #AI

r/agi
2 yr. ago
AGI 2 years away says CEO of leading AGI lab Anthropic
@michabbb@social.vivaldi.net
2025-05-29 20:43:09

My first impression of #google #gemini #diffusion
110 tokens/s is "okay".... but using any #LLM