Unlocking Backdoor AI Poisoning with Dmitrijs Trizna
Dmitrijs Trizna, Security Researcher at Microsoft, joins Nic Fillingham on this week's episode of The BlueHat Podcast. Dmitrijs explains his role at Microsoft, focusing on AI-based cyber threat detection for Kubernetes and Linux platforms, and explores the complex landscape of securing AI systems, including the emerging challenges of Trustworthy AI. He delves into how threat actors exploit vulnerabilities through techniques like backdoor poisoning, using gradual, seemingly benign inputs to deceive AI models. Dmitrijs highlights the multidisciplinary approach required for effective AI security, combining AI expertise with rigorous security practices. He also discusses the resilience of gradient-boosted decision trees against such attacks and shares insights from his recent presentation at BlueHat India, where he noted a strong interest in AI security.
In This Episode You Will Learn:
- The concept of Trustworthy AI and its importance in today's technology landscape
- How threat actors exploit AI vulnerabilities using backdoor poisoning techniques
- The role of frequency and unusual inputs in compromising AI model integrity
Some Questions We Ask:
- Could you elaborate on the resilience of gradient-boosted decision trees in AI security?
- What interdisciplinary approaches are necessary for effective AI security?
- How do we determine acceptable thresholds for AI model degradation in security contexts?
Resources:
View Dmitrijs Trizna on LinkedIn
View Nic Fillingham on LinkedIn
Related Microsoft Podcasts:
Discover and follow other Microsoft podcasts at microsoft.com/podcasts
The BlueHat Podcast is produced by Microsoft and distributed as part of the N2K media network.