2026-03-23 – TALKS
AI is not just changing the systems we build, but the kinds of issues that show up in a bug bounty queue. As someone who triages submissions for a large public bug bounty program, I've seen how AI-related findings introduce new gray areas. These issues do not always look like traditional vulnerabilities; they often sit at the intersection of model behavior, product design, and real security impact.
In this workshop, I'll walk through how AI reports enter our bug bounty program, how policy boundaries are applied in practice, and how we evaluate whether a finding represents meaningful risk.
In the second half, we'll get hands-on with a vulnerable MCP-style server adapted from the open-source Vulnerable MCP Servers Lab. We'll reproduce a trust boundary failure, analyze its impact, and walk through how a report like this would be classified and triaged inside a real bug bounty program.
This session offers a practical look at how AI vulnerabilities are evaluated from the triage side and how architectural decisions determine whether an AI issue stays theoretical or becomes infrastructure risk.
Ani Turner is a Senior Security Engineer at Adobe, where she leads the bug bounty program and works closely with ethical hackers to help strengthen product security. She sits at the intersection of research and engineering, triaging vulnerability reports, assessing real-world impact, and guiding findings from submission to remediation. With a background in full-stack development and psychology, Ani brings a unique, practical, and collaborative approach to building scalable security programs.