Stephen Brennan
Stephen Brennan is a mathematician who researches the internals of neural network models with the goal of making AI more explainable. His hobbies include hiking, camping, and deck-building games. He has contributed significantly to R&D on in-depth neural network analysis, identifying vulnerabilities, weaknesses, and inefficiencies to improve the robustness and security of AI systems.
Session
AI models are increasingly delivered as compiled artifacts inside firmware images and native binaries, particularly in IoT, OT, and embedded environments. While these deployment models improve performance and reduce operational dependencies, they also create security blind spots that are poorly understood.
This session examines how AI models can be discovered and analyzed once deployed in embedded systems. The talk focuses on practical reverse engineering techniques used to identify model components, recover structural and behavioral information, and understand the risks introduced by different model packaging and compilation approaches. Attendees will leave with a clearer view of how embedded AI expands the attack surface and why it matters for both offensive and defensive security work.
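As a small illustration of the kind of discovery technique the session covers (this sketch is not taken from the speaker's material), TensorFlow Lite models are FlatBuffers whose file identifier "TFL3" sits at byte offset 4 of the serialized buffer, so a first pass over a firmware image can simply search for that identifier and back up four bytes to find candidate model starts:

```python
#!/usr/bin/env python3
"""Scan a firmware image for embedded TensorFlow Lite models.

A minimal sketch, assuming the models are stored as uncompressed
FlatBuffers: the TFLite schema's file identifier "TFL3" appears at
byte offset 4 of every serialized model, immediately after the
4-byte root-table offset.
"""
import sys

MAGIC = b"TFL3"     # FlatBuffers file identifier for TFLite models
MAGIC_OFFSET = 4    # identifier follows the 4-byte root-table offset


def find_tflite_candidates(blob: bytes):
    """Yield offsets where an embedded TFLite model may begin."""
    pos = blob.find(MAGIC)
    while pos != -1:
        if pos >= MAGIC_OFFSET:
            # Candidate start of the flatbuffer, 4 bytes before "TFL3".
            yield pos - MAGIC_OFFSET
        pos = blob.find(MAGIC, pos + 1)


if __name__ == "__main__":
    data = open(sys.argv[1], "rb").read()
    for off in find_tflite_candidates(data):
        print(f"possible TFLite model at offset {off:#x}")
```

Offsets found this way are only candidates; a carved region can be validated by attempting to load it with the TFLite interpreter or by parsing it against the published schema, and models packaged in other formats (ONNX, compiled kernels) require different signatures entirely.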