AI
CaMeL offers a promising new direction for mitigating prompt injection attacks
[Simon Willison] Prompt injection attacks have been one of the bugbears of modern AI models: an unsolved problem that has made it quite dangerous to expose LLMs to direct user input, among other things. A lot of people have worked on the problem, but