Security Unlocked
Ep 7 | 12.9.20

Threat Modeling for Adversarial ML

Show Notes

How ready is your corporate security team to handle AI and ML threats? Many simply don’t have the bandwidth or don’t see it as a priority. That’s where security engineers like Microsoft’s Andrew Marshall step in. In this episode, hosts Nic Fillingham and Natalia Godyla speak with Andrew about what his team is doing to teach security professionals and policymakers about the dangers of AI and ML attacks, and walk through some of the documentation, available for free online, that can help guide the response. Plus, why he really, really doesn’t want to talk about Windows Vista.

Nic and Natalia then explore what it’s like to hunt down threats with Sam Schwartz, a program manager with Microsoft Threat Experts. She came to Microsoft straight out of college, not even knowing what malware was. Now she’s helping coordinate a team of threat hunters on the cutting edge of attack prevention.

In This Episode, You Will Learn:   

  • Why data science and security engineering skills don’t necessarily overlap 
  • How attackers are using ML to change decision making 
  • What security teams are doing to protect AI and ML systems 
  • How threat hunters are tracking down the newest security risks 
  • Why Microsoft Threat Experts are focused on human adversaries, not malware 

Some Questions We Ask:   

  • What does the ML landscape look like at Microsoft? 
  • How are ML attacks evolving? 
  • What is ‘data poisoning’? 
  • Why do threat hunters need to limit the scope of their work? 
  • What skills do you need to be a security program manager? 

Resources:

Threat Modeling AI Systems and Dependencies 

Andrew’s LinkedIn

Sam’s LinkedIn

Microsoft Security Blog

Related:

Listen to: Afternoon Cyber Tea with Ann Johnson

Listen to: Security Unlocked: CISO Series with Bret Arsenault 


Security Unlocked is produced by Microsoft and distributed as part of The CyberWire Network.