HackTheBay 3.0

Reverse Engineering Embedded AI Models in Firmware and Binaries
2026-03-23, TALKS

AI models are increasingly delivered as compiled artifacts inside firmware images and native binaries, particularly in IoT, OT, and embedded environments. While this deployment style improves performance and reduces operational dependencies, it also creates security blind spots that remain poorly understood.

This session examines how AI models can be discovered and analyzed once deployed in embedded systems. The talk focuses on practical reverse engineering techniques used to identify model components, recover structural and behavioral information, and understand the risks introduced by different model packaging and compilation approaches. Attendees will leave with a clearer view of how embedded AI expands the attack surface and why it matters for both offensive and defensive security work.


This presentation takes a technical, hands-on look at how reverse engineers encounter AI models once they are deployed inside firmware images and compiled binaries. Rather than treating AI as a black box, the session walks through concrete analysis workflows that expose how models are packaged, optimized, and executed at the binary level.

The talk covers multiple deployment patterns, including serialized model formats and fully compiled inference pipelines produced by modern AI toolchains. Attendees will see how common reverse engineering tools can be used to locate model artifacts, distinguish inference logic from surrounding code, and reason about model structure and behavior even when traditional metadata is absent.
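As a flavor of what "locating model artifacts" can look like in practice, the sketch below scans a firmware blob for byte signatures of common serialized model formats. This is an illustrative assumption about technique, not the speaker's tooling; the signatures shown (TensorFlow Lite's `TFL3` FlatBuffer file identifier at byte offset 4, the `GGUF` magic and NumPy's `.npy` header at offset 0) are real, but many formats — raw ONNX protobufs, for instance — have no reliable magic and require structural heuristics instead.

```python
# Sketch: scan a firmware blob for byte signatures of common model formats.
# TFLite FlatBuffers carry the file identifier "TFL3" at byte offset 4;
# GGUF and .npy have magic bytes at offset 0. Formats without a magic
# (e.g. raw ONNX protobuf) need structural heuristics and are not covered.

SIGNATURES = [
    (b"TFL3", 4),       # TensorFlow Lite FlatBuffer file identifier
    (b"GGUF", 0),       # GGUF (llama.cpp) model container
    (b"\x93NUMPY", 0),  # NumPy .npy array header (common for raw weights)
]

def find_model_artifacts(blob: bytes):
    """Return sorted (candidate_file_offset, signature) pairs."""
    hits = []
    for sig, sig_offset in SIGNATURES:
        start = 0
        while True:
            idx = blob.find(sig, start)
            if idx == -1:
                break
            # The signature sits sig_offset bytes into the file, so the
            # candidate file begins sig_offset bytes earlier in the blob.
            if idx >= sig_offset:
                hits.append((idx - sig_offset, sig))
            start = idx + 1
    return sorted(hits)

# Example: a synthetic blob with a TFLite-style header embedded at offset 100.
blob = b"\x00" * 100 + b"\x1c\x00\x00\x00TFL3" + b"\x00" * 64
print(find_model_artifacts(blob))  # → [(100, b'TFL3')]
```

In real firmware, each hit would then be carved out and validated by parsing the candidate file with the format's own tooling, since short magic strings produce false positives in large blobs.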

Practical demonstrations illustrate how recovered information can be used to reconstruct portions of a model, validate assumptions about its architecture, and assess downstream risks such as unauthorized reuse, tampering, and adversarial manipulation. The session concludes by discussing defensive implications and what these findings mean for teams responsible for deploying or securing AI-enabled systems.
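One way to "validate assumptions about architecture" of the kind the talk describes is a simple consistency check: does the size of a recovered weight region match the parameter count implied by a hypothesized layer structure? The sketch below is a minimal, hypothetical example of that idea, assuming contiguously stored float32 weights and dense layers; the layer shapes would in practice come from disassembly, strings, or tensor metadata recovered earlier.

```python
# Sketch (hypothetical helper, not from the talk): check whether a recovered
# weight region's byte size is consistent with a guessed architecture.
# Assumes float32 weights stored contiguously, and dense layers that each
# carry a weight matrix plus a bias vector.

FLOAT32 = 4  # bytes per element; the dtype itself is an assumption

def params_for(layers):
    """Parameter count for dense layers given (in_dim, out_dim) pairs:
    in*out weights plus out biases per layer."""
    return sum(i * o + o for i, o in layers)

def consistent(region_size_bytes, layers, dtype_size=FLOAT32):
    """True if the recovered region is exactly the hypothesized size."""
    return region_size_bytes == params_for(layers) * dtype_size

# Hypothesis: a small MLP, 64 -> 32 -> 10.
layers = [(64, 32), (32, 10)]
print(params_for(layers))            # → 2410 parameters
print(consistent(2410 * 4, layers))  # → True: region size matches the guess
```

A match does not prove the hypothesis, but a mismatch cheaply rules one out, which is often how architecture recovery proceeds when traditional metadata is absent.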