Hijacking AI Agents with ChatML Role Injection

Large-language-model wrappers increasingly rely on the “ChatML” format to segregate system, assistant, and user roles, yet those delimiters introduce a critical appsec flaw: there is a role hierarchy but no ChatML/server-side RBAC or parameter-level trust boundary built in to ChatML or its chat-completions JSON wrapper. Any client that can speak ChatML can also impersonate privilege, similar to the logical flaws of early-2000s webapps. To make it worse: everybody and their mother forked this thing with roles/privileges but no built-in RBAC pioneered by leading model providers.

In twenty minutes we will walk through the anatomy of that oversight and unveil three vendor-agnostic role-injection techniques that bypass guardrails, trigger unbounded consumption, and hijack function calls in under 50 tokens. We then pivot to parameter pollution, showing how key overrides (temperature, system, tools) can be further used to abuse agents.

OWASP AAI001: Agent Authorization and Control Hijacking