Agents based on Large Language Models (LLMs) are increasingly susceptible to vulnerabilities reminiscent of early‑2000s software bugs. One such emerging technique is Special Token Injection (STI). This attack chains manipulations of reserved token sequences (e.g., role‑separators in structured prompts) to alter how the model interprets, parses, and generates responses, leading to agent hijacking.
In this talk, we’ll demystify STI: what it is, where it lurks, and why it matters. Through a penetration tester's perspective, we’ll walk through real‑world examples, explore its broader implications in AI security.
Based on the latest research by Armend Gashi, Robert Shala, and Anit Hajdari and presented at AppSec Village @ DEF CON 33, BSides Kraków 2025, and BSides Tirana 2025, this presentation will arm attendees with essential insights into a novel threat vector and practical tactics to anticipate and mitigate it.