Show HN: ChatGPT Plugins are a Security Nightmare
github.comAmazing post, thank you.
I really can't see how security can be solved within a probabilistic model, which is what we'd need to happen here, and that in turn effectively puts a huge limit on the scale at which we can use LLMs.
Lots of food for thought.
Soo.. Expect your personal GPT to be persistently compromised/hacked, remote-controlled and used to exfiltrate all your data. Security of LLMs is in a bad state right now.