Reader

DeepMind Researchers Propose Defense Against LLM Prompt Injection

| InfoQ | Default

To prevent prompt injection attacks when working with untrusted sources, Google DeepMind researchers have proposed CaMeL, a defense layer around LLMs that blocks malicious inputs by extracting the control and data flows from the query. According to their results, CaMeL can neutralize 67% of attacks in the AgentDojo security benchmark.

By Sergio De Simone