Modern use of LLMs often involves giving them access to the local system: to read and write your project files, and to execute arbitrary commands, often unsupervised. So aren't people worried about a harness just doing what a remote #LLM tells it to do?
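To be concrete about what such a harness boils down to, here's a minimal sketch of the loop (everything here is made up for illustration — the stub stands in for whatever remote model API the harness actually calls — but the trust model is the shape that matters):

```python
import subprocess

def remote_llm_complete(history: list[str]) -> str:
    """Stand-in for a call to a remote model. Returns the next shell
    command the model wants run. Purely illustrative."""
    raise NotImplementedError("call your model provider here")

def agent_loop(task: str) -> None:
    history = [task]
    while True:
        command = remote_llm_complete(history)
        if command.strip() == "DONE":
            break
        # Whatever string came back from the remote model gets executed
        # locally, with the user's full privileges: file reads, file
        # writes, arbitrary commands. No human in the loop.
        result = subprocess.run(command, shell=True,
                                capture_output=True, text=True)
        history.append(result.stdout + result.stderr)
```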
I think a statement I've heard lately summarizes the mindset well. It went something along the lines of: "I can't give you a 100% guarantee, but I've noticed that LLMs are very good at following instructions, and they're getting better and better, so I don't worry about that anymore."
Like, it is completely fine to introduce a humongous security hole, because the probability that a model will *accidentally* do something horrible is decreasing. Never mind that "very good at following instructions" is precisely the property a deliberately injected instruction exploits.
#AI #NoAI #NoLLM #security