Most privacy-preserving AI systems are built on good intentions and untested assumptions. What happens when you actually measure what you are protecting and discover the gap is catastrophic? In production IoT deployments, sensitive data exposure is rarely a policy failure; it is an architecture failure, and standard approaches to differential privacy and post-hoc anonymisation consistently underperform when applied to real-time sensor environments at scale. Through peer-reviewed research published across IEEE and ACM conferences, I developed a privacy-preserving sensor network architecture that achieved a 99.2% reduction in data exposure, a 45% improvement in energy efficiency, and an 87% increase in AI decision transparency, all without compromising model performance.

This talk unpacks the specific engineering decisions behind those outcomes: how federated inference was combined with edge-level anonymisation, where differential privacy mechanisms broke down in practice and what replaced them, and how decision transparency was built into the inference layer rather than retrofitted after deployment.

Attendees will leave with a concrete architecture pattern they can apply to their own systems, a clear-eyed understanding of where standard privacy tooling fails in constrained environments, and a practical method for instrumenting AI transparency so that it is measurable rather than merely claimed. This session is for data scientists, ML engineers, and technical leads who are responsible for systems where privacy is not optional and who need solutions that hold up beyond the whitepaper.

Technical Level of Session: Technical practitioner