Threat Vector 7.27.23
Ep 1 | 7.27.23

AI & Cybersecurity with Michael "Siko" Sikorski

Transcript

Michael "Siko" Sikorski: Yeah, I think the biggest concern when it comes to ChatGPT, the LLM, everybody having access to this technology almost suddenly is where's it gonna impact and benefit the attacker the most.

David Moulton: Welcome to "Threat Vector," a segment where Unit 42 shares unique threat intelligence insights, new threat actor TTPs, and real-world case studies. Unit 42 is a global team of threat intelligence experts, incident responders, and proactive security consultants, dedicated to safeguarding our digital world.

David Moulton: In today's episode, I'm going to talk with Mike "Siko" Sikorski. Siko is a best-selling author and expert in reverse engineering and the CTO and Vice President of Engineering and Threat Intelligence for Unit 42. Siko, you got that name in college when there were, what, nearly a dozen Mikes on your track team?

Michael "Siko" Sikorski: Yeah, that's right. There was a lot of us and we needed ways to differentiate. Luckily, I had a pretty cool name 'cause my last name's Sikorski and Siko is kind of natural. And then kind of just ran with it into the- I guess that was a little bit of a pun. Ran with it into the hacking culture, right, and having a nickname like Siko is definitely a good one for- to build your street cred.

David Moulton: Well, it definitely works and it caught my attention when we first met. Before the show, I asked you what was top of mind or what should be top of mind for our audience right now. And you immediately jumped right to AI. And there are stories about AI everywhere right now, no matter where I look. What should our audience think and care about right now when it comes to AI?

Michael "Siko" Sikorski: Yeah, I think the biggest concern when it comes to ChatGPT, the LLM, everybody having access to this technology almost suddenly is where is it gonna impact and benefit the attacker the most. And that's with social engineering. We've all seen this technology used for, hey, write a song in the style of this artist, and, you know, with the lyrics to my friend or family member, and it comes out perfectly sounding like them. You can imagine now the attacker has the ability to do that same thing but say, "Hey, write an e-mail and sound like this person." And if you think about it, we respond to upwards of 1,000 instant response engagements a year in Unit 42 and the number one way that the attacker gets in is still through phishing. And now we've just lowered the bar for them to be able to craft better phishing attacks. So the days of them being caught due to broken English or unable to communicate properly to someone is gone, so they won't be getting caught as much. Which means phishing attacks is probably gonna go up.

David Moulton: So, Mike, you talked about lowering the bar for social engineering. Let's flip it around. A lot of people are using ChatGPT or different AI tools, and I'm wondering, does that create a security vulnerability for enterprises today?

Michael "Siko" Sikorski: Yeah, I think companies need to be hyper aware of how their users and employees are using this technology. Do they understand that whatever they type in that- it's not a private conversation and there's a huge risk to data leakage, right? If you're having it rewrite sensitive e-mails for you so you sound more clear, yes, the LLM's gonna do a great job of rephrasing. But if you have information in there, it can create huge risks to an entity. And so corporations need to quickly roll out policies surrounding this technology.

David Moulton: So in about a month, Black Hat's gonna happen, and I'm wondering what would you tell our listeners to look for when they're at Black Hat?

Michael "Siko" Sikorski: I think it's one of those things where I think pretty much every vendor is probably gonna say the term AI when you're out there, so you're going to be getting a hit with a lot of that, a lot of talk of that. I think it's about realizing what are science projects that these- some of these physicists have rolled out, technologies being rolled out, that don't really provide a ton of benefit. Instead, I would look to say who's been on the AI journey for a long time and actually have other things outside of the LLM more recent wave to show for, right? For example, here at Palo Alto, we've been on a journey of AI for a really long time. Early days of malware detection, malware family identification using AI, and then more recently is how do you automate the SOC, right? You're getting flooded with tremendous amounts of alerts. And we've been investing for a long period of time of how to use AI to go from a whole pile of alerts just to a set of incidents that you could actually make it through. So I think it's about trying to maybe peel things back a little bit and figure out, you know, which one- which technologies are maybe implemented. And, you know, just using an LM really quickly and to get something out for Black Hat. Versus, you know, which ones have actually, you know, are gonna have an impact in your life in a larger scale.

David Moulton: So, Mike, thanks for joining me today on "Threat Vector" and sharing your insights about how AI is changing cybersecurity. We will be back in two weeks with a look at the top threats and trends seen by the Unit 42 threat intelligence team. In the meantime, stay secure, stay vigilant, and goodbye for now.