Threat Vector
Ep 13 | 1.11.24

Cybersecurity in the AI Era: Insights from Unit 42's Kyle Wilhoit, Director of Threat Research


David Moulton: Welcome to Threat Vector, a podcast where Unit 42 shares unique threat intelligence insights, new threat actor TTPs, and real-world case studies. Unit 42 has a global team of threat intelligence experts, incident responders, and proactive security consultants dedicated to safeguarding our digital world. I'm your host, David Moulton, director of Thought Leadership for Unit 42. In today's episode, I'm going to talk with Kyle Wilhoit about AI. Kyle is the director of threat research at Unit 42. He's also an author and a Black Hat USA Review Board member. In our conversation, we'll discuss the evolving role of AI in cyberattacks. Kyle will talk about the current state of AI in cyberthreats and shed light on the future trends and areas of focus for cybersecurity professionals. Kyle, thanks for joining me on Threat Vector today. Can you give our audience a quick snapshot of what you do at Unit 42?

Kyle Wilhoit: Yeah. I help run research efforts into cybercrime and cybercrime-related elements, as well as nation-state espionage groups performing targeted attacks.

David Moulton: And briefly, where does your role intersect with AI?

Kyle Wilhoit: Yeah, so quite a bit. You know, artificial intelligence -- generative AI in particular -- has been in use for quite some time at this point. But within Unit 42 specifically, my focal point is looking into the threat landscape and trying to understand how generative AI is being leveraged by criminals, by threat actors, et cetera.

David Moulton: Kyle, how has the role of artificial intelligence evolved in recent years in the context of cyberattacks, and what are the key ways attackers are utilizing AI to their advantage?

Kyle Wilhoit: So from that perspective, first, I want to kind of start out by saying I haven't really -- and we haven't really -- seen a dramatic shift in the threat landscape due to, quote-unquote, "generative AI." There's a lot of fear, uncertainty, and doubt circulating about the threat of AI, and we're just not seeing the needle shift significantly in terms of the threat landscape due to this technology. We're not seeing jailbroken LLMs being used for the wholesale creation of malware, as a simple example. In terms of impact, however, we are observing some restricted effects, particularly in the domain of jailbroken LLMs. So jailbroken LLMs recently have gotten a lot of attention, specifically WormGPT, FraudGPT, EvilGPT, and several others. And all of these fall into a category of language models crafted to enhance, basically, an attacker's arsenal -- basically, trying to simplify the attacks that they're conducting. Through our testing, however, we've identified only marginal scenarios where these tools might actually prove to be advantageous, and their functionality remains largely controlled or gatekept. So, for instance, direct generation of malicious code is off the table, but these LLMs can likely produce generic or rudimentary code for straightforward tasks such as utilizing SMB to transfer files between hosts. But ultimately, from our perspective, these jailbroken LLMs really aren't pushing that needle like I mentioned before. We're just not seeing them really impact the threat landscape in its entirety, based on their limited functionality.
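To make the "rudimentary code" point concrete: the kind of SMB file-transfer snippet a model can emit is typically little more than a standard-library copy to a mounted share path. A minimal sketch -- the function name and paths here are hypothetical, not output from any actual jailbroken model:

```python
import shutil
from pathlib import Path

def copy_to_share(src: str, share_dir: str) -> str:
    """Copy a file into a (mounted) SMB share directory and return the destination path.

    Assumes the share is already mounted/mapped at share_dir, e.g. a UNC path on
    Windows or a mount point on Linux.
    """
    dest = Path(share_dir) / Path(src).name
    shutil.copy2(src, dest)  # copies contents and timestamps; raises on failure
    return str(dest)
```

Nothing here bypasses a security control; this is exactly the "generic or rudimentary" output level described above, which is why such models only incrementally help an attacker.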

David Moulton: Most people have heard of WormGPT, FraudGPT. What are these? And what threats do they pose?

Kyle Wilhoit: Yeah. So kind of what I mentioned before. These are examples of what we call jailbroken models. In the context of LLMs, jailbreaking refers to the engineering of prompts to exploit model biases and ultimately generate outputs that don't align with the model's intended purpose. One popular jailbreak that we've researched over time here within Unit 42 is something called Do Anything Now, or DAN, a fictional AI chatbot persona. And many of the jailbroken models that we witness being leveraged by criminals are really just using a modified version of that Do Anything Now chatbot. DAN uses a method to convince the LLM that it's basically operating as an alter ego, forcing it to give back otherwise limited and/or, in some cases, sensitive information. From our perspective, we've analyzed over 11 different jailbroken models. And out of those, it seems like almost all of them are leveraging this type of mechanism to some degree -- DAN, or something similar to DAN -- to actually jailbreak the model. It's important to mention that a single jailbreak prompt may not work for all AI models. So we're seeing a lot of jailbreak enthusiasts constantly experimenting with new prompts to push the limits of these models and see if they can bypass them. But as I mentioned before, our analysis is that these models are really only incrementally supporting a threat actor's toolbox -- the benefit is really rudimentary code generation and, realistically, more accurate social engineering text generation, those types of things.

David Moulton: How do you foresee generative AI being used by attackers in the future?

Kyle Wilhoit: I think the industry is going to evolve both near-term and long-term, and I think those are distinctly different. But specifically around cyberattacks and generative AI, I see this more as, like, an enabler of sorts, at least in the short term. There are areas that are going to directly improve from an attacker's perspective with continued implementation of this technology. From my perspective, I think of four different areas where there's going to be efficiencies or enablement for attackers leveraging generative AI. The first would be around automation and efficiency. Attackers can leverage AI in the future, as some are doing now on a limited basis, to automate various aspects of their attacks -- automated vulnerability scanning, launching phishing campaigns, delivering malware, those types of things. And I can foresee that accelerating, meaning the automation and efficiency of those types of attacks is going to improve over time. I think social engineering is going to improve -- specifically, attackers using LLMs and natural language processing to create convincing spearphishing messages and social engineering attacks catered to regional language variances. I think automated reconnaissance is also going to improve. And by reconnaissance, I'm referring to the act of an attacker going out and trying to fingerprint or profile a victim -- understanding what assets are deployed on the internet, as an example. And as we're somewhat seeing already, I think generative AI is going to increase that automated reconnaissance and change the ways in which that reconnaissance is performed. And then the final area I would think through is code enhancements.
And this is basically using jailbroken LLMs to generate rudimentary code that can help bypass traditional security controls. And I say that because LLMs are decent at generating code, but it's mostly rudimentary in nature, so it still requires a human to go in, fold that code into their code base, and make the incremental edits that way.

David Moulton: Kyle, how does the use of generative AI impact the attribution of cyberattacks? And what challenges does this present for cybersecurity professionals like yourself?

Kyle Wilhoit: Yeah. So attribution during the best of times is a difficult and complex topic that requires its own dissertation, frankly. But from our perspective, there are a few areas that I think are going to continue to be problematic when we look at attribution through the lens of generative AI-enabled attacks, right? I think, first, attribution complexity is only going to continue to increase. It's oftentimes difficult to determine if, quote-unquote, AI was used in an attack, right? The same attribution challenges that exist today without considering, quote-unquote, AI also exist when attackers use AI. So those complexities still exist, and I think it's only going to become more complex. I also think false flags and misdirection can definitely increase. So as an example, adversaries could possibly in the future leverage generative AI to introduce false flags and misdirection, ultimately trying to lead investigators and threat intelligence analysts away from the true source of the attack. So I think that deliberate obfuscation can prolong the attribution process, possibly misidentifying responsible parties, which has a litany of different problems and issues. I think attribution timeframe is going to be impacted, meaning the speed at which generative AI types of attacks can be executed may ultimately outpace the speed at which we can do attribution. Attribution is a time-consuming process and oftentimes requires a lot of input. And from our perspective, anything that increases that timeframe for attribution slows down the context that we can provide from an intelligence perspective. And then, finally, I think leveraging shared tools and techniques is going to become more and more commonplace, making attribution more difficult. And what I mean by that is generative AI tools and techniques can be easily shared among different threat actors, non-state actors, or even just script kiddies.
And the sharing of resources can ultimately result in a bigger potential pool of attackers performing similar methods across the board, making it harder for us to attribute who is actually behind those attacks. So those are four key areas where attribution is only going to continue to get more difficult, even considering today's circumstances, with attribution already being one of the hardest subjects in threat intelligence.

David Moulton: What are some of the notable use cases where adversarial AI techniques have been employed in cyberattacks?

Kyle Wilhoit: Oh, okay. Yeah, so there's a couple of recent news articles I've been reading about where some of these techniques have been leveraged. The first was a deepfake example leveraged by one criminal specifically. This is from a recent news article I saw where it started with an attacker texting employees of an organization a link to a malicious URL. Pretty standard, and something that we see commonly across the industry. But one employee, unaware of the attack, followed the URL in the message, leading them to a fake company portal for logging in. Once inside that fake portal, they encountered a multifactor authentication form. Again, nothing really out of the ordinary. But while that was happening, the attacker made a phone call to the employee utilizing a deepfake powered by, quote-unquote, generative AI to mimic the voice of one of the IT team members at the organization. During the call, the imposter convincingly claimed to be a legitimate IT member and even replicated that team member's actual voice, ultimately getting the MFA token to log in and causing a compromise. So that's one concrete example from recent news articles where we're seeing this technology stretched into very unique areas. Some additional research I recently read isn't necessarily directly related to an attack, but it could be leveraged by an attacker, and it revolves around password cracking. This research, if I remember right, was published by a group called Home Security Heroes earlier this year, where researchers used different algorithms -- specifically, quote-unquote, generative AI -- to crack passwords more efficiently. And some of their findings were interesting.
If I remember right, passwords consisting of 11 numbers could be cracked instantly. Eight-character passwords mixing uppercase, lowercase, and numbers could be cracked within 48 minutes. Eight-character passwords containing uppercase letters, lowercase letters, symbols, et cetera, could be cracked, if I remember correctly, in seven hours. So ultimately, what we're seeing is efficiency in the cracking of passwords, which is obviously big from a threat landscape perspective. And performing this cracking is easy; they're using tools such as PassGAN, et cetera, to basically become more proficient at password cracking -- not through traditional manual methods, but by analyzing genuine passwords obtained through real data breaches. So going out and doing this cracking is not only becoming easier, but we're now seeing studies proving that cracking is actually faster now.
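The recalled figures aside, the underlying brute-force arithmetic is easy to sanity-check. A back-of-the-envelope sketch -- the guesses-per-second rate below is an assumed figure for illustration, not one from the Home Security Heroes study:

```python
def keyspace(length: int, charset_size: int) -> int:
    """Total number of candidate passwords for a fixed length and character set."""
    return charset_size ** length

def crack_time_seconds(length: int, charset_size: int, guesses_per_sec: float) -> float:
    """Worst-case brute-force time at a given guessing rate."""
    return keyspace(length, charset_size) / guesses_per_sec

RATE = 1e11  # assumed guesses/second for a large GPU rig; real rates vary by hash algorithm

digits_11 = crack_time_seconds(11, 10, RATE)           # 11 digits only
mixed_8 = crack_time_seconds(8, 26 + 26 + 10, RATE)    # 8 chars: upper + lower + digits
```

With these assumed numbers, an 11-digit keyspace is exhausted in about a second and eight mixed-case alphanumeric characters in roughly half an hour -- the same order of magnitude as the figures recalled above. GAN-based tools like PassGAN improve on this further by guessing likely human-chosen passwords first rather than enumerating the whole keyspace.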

David Moulton: Are generative AI-driven attacks typically more targeted, or do they also involve non-targeted, widespread attacks?

Kyle Wilhoit: That's a really good question. And ultimately, in my opinion, the choice between targeted and non-targeted attacks depends on the goals and motivations of the actual threat actors themselves. So some attackers may prioritize, like, stealth and precision. Whereas, other attackers may cause or may seek to cause widespread disruption or extract value from a larger pool of victims. So from my perspective, as generative AI continues to evolve and impact the threat landscape, it's likely that we'll see an increase in the diversity and sophistication of both targeted and non-targeted cyberthreats. So I wouldn't necessarily say that this type of technology or this type of a threat is directly applied to either targeted or non-targeted, I think it's just dependent still on the attacker's motivation and their ability to access this technology and leverage it.

David Moulton: With the increasing sophistication of AI-driven attacks, what challenges do cybersecurity professionals face in defending against such attacks? And what strategies are being developed to counteract them effectively?

Kyle Wilhoit: Yeah, so I'll divide it up into two different areas. The first is looking at specific challenges, and I think there are really two key areas, at least in the near term, that I was thinking through. The first would be attack automation. And what I mean is attacks leveraging generative AI, et cetera, will enable attackers to basically automate and scale their attacks to a level that we likely haven't seen before. And this ultimately puts pressure on defenders to respond in real time to counter those types of threats in an automated way. And I think attack automation is only going to increase in the future, and that's a distinct challenge. I think a secondary distinct challenge is data and log overload, specifically when you look at this from, like, a SOC's perspective. And what I mean is, in the future, it's likely that AI-driven styles of attacks will generate vast amounts of data, both from a log and data perspective, and that can ultimately overwhelm some of those operational security teams. I think that's a distinct challenge that's likely going to need to be overcome as well, just given the simple fact that attack automation and the overall volume of attacks is likely going to increase with this type of technology. Regarding strategies to counteract some of these challenges, I would tend to focus on what I would typically call bread-and-butter security. What I mean by that is the traditional security controls that you've heard about -- and these are all near-term types of recommendations. Employ robust secure email gateways. Use DDoS protection on external assets and infrastructure. Secure training datasets if you're working on machine learning yourself. Make sure you perform regular patching.
I can't believe it's 2023 and I'm still making this recommendation, but I recommend constant, regular patching. In that same vein, make sure you employ robust multifactor authentication across the board. So really focusing in the near term on those bread-and-butter security controls, and actually deploying them, is going to help offset some of this, at least in the near term.

David Moulton: How can the cybersecurity workforce be better educated and trained to deal with the evolving landscape of generative AI threats?

Kyle Wilhoit: That's a good question and something that, from a threat intelligence perspective, we're constantly asking ourselves. And we're asking ourselves that because, frankly, a lot of folks in the threat intelligence industry aren't necessarily AI or ML experts. So from our perspective, there are a few things that I've really honed in on from my team's perspective. The first is providing and facilitating ongoing training. And what I mean by that is continuously providing the opportunity to train in this technology is key, and it's been key from my team's perspective to make sure that we're staying on top of what's happening in the threat landscape. So providing that ongoing training is big. I think raising awareness across the organization itself is another piece of that. And that comes down to general public awareness about AI-driven threats and how, ultimately, those can impact customers, et cetera. I think collaboration and sharing of information is extremely important, specifically around threat intelligence. And I think encouraging the sharing of that threat research, the research findings that we have on a daily basis, and best practices in this area across industries is important. And then, finally, it's just participating in workshops and training. I know I've already mentioned providing ongoing training, but there's a lot of free and open training available, and I think attending those workshops and conferences is going to keep you up to date with the trends in this particular area.

David Moulton: So, Kyle, we're coming up on 2024. What are we going to see in the new year?

Kyle Wilhoit: That's a good question. I think we'll probably see a scale of attacks that we haven't seen in the past; maybe some of that will be leveraged and furthered by generative AI. But I think we're going to see scale and scope to probably a level that we haven't seen in the past. I think we'll likely see the evolution of cybercriminal tactics. I think we'll likely continue to see the evolution of nation-state-based attackers leveraging cybercriminal tactics. I think we're in for a wild ride, ultimately, David. I think it's only going to continue to get more interesting, specifically as we continue to march on.

David Moulton: And for the listeners, what's the most important thing you want them to take away from this conversation?

Kyle Wilhoit: I think the first is to really try to understand what the threat landscape looks like. As I mentioned at the beginning of our talk, David, there's a lot of fear, uncertainty, and doubt, or FUD, around generative AI and its impact on the threat landscape. And from my perspective, I'm really looking at it from the other side of the coin and basically saying, you know, keep a rational head about this. We're not seeing this be the cornerstone of a cybercriminal's arsenal or anything like that, but we're still doing the research because, ultimately, this will likely change the threat landscape in the future. In the near term, though, the focus should be on the fact that we just haven't seen that impact quite yet -- though we likely will in the future.

David Moulton: Kyle, thanks for sharing with us your insights into the impact that AI is having on the threat landscape. In spite of a lot of hype about how attackers are supposedly using AI, it's intriguing that the impact so far remains somewhat limited. I think your prediction that AI will be used to automate and streamline attacks, making them more efficient and harder to detect, rings true. And I think your other recommendation -- more, quote-unquote, bread-and-butter security, along with ongoing training and awareness -- is really practical and the type of thing that every single organization can get started on right now.

Kyle Wilhoit: I couldn't have said it better myself, David.

David Moulton: Now let's shift away from artificial intelligence and into a conversation about ransomware. Doel Santos, principal threat researcher, and Anthony Galiette, a senior malware reverse engineer, have recently published research on Medusa ransomware. Let's get into a couple of the highlights of that research. So Doel, let me start with you. Why has Medusa ransomware become notably prominent in the cyberthreat landscape in 2023?

Doel Santos: I would put it down to two particular points. One, they've been operating pretty much under the radar for a year now, which has benefited them a lot because they're not in the eye of law enforcement, they're not in the eye of many cybersecurity researchers. And then in 2023, once they felt comfortable with the structure of their ransomware service, they started to impact different organizations, started assessing weak spots. And they have no particular code of conduct, right? Everything is a target. Everything can be compromised by these particular individuals in a way for them to make a profit.

David Moulton: Who were the primary targets of Medusa ransomware and how are they chosen?

Doel Santos: I don't think they're particularly chosen. I think Medusa as a whole, even though it hits a huge array of industries, is industry-agnostic and opportunistic by nature. Because they leverage a lot of exploitation of vulnerabilities, they will just compromise whatever is available.

David Moulton: Anthony, let me kick it over to you. Tell us a little bit about the unique characteristics and tactics that set Medusa ransomware apart from, say, other ransomware threats.

Anthony Galiette: Sure. So for Medusa, as far as initially accessing a victim, we've seen them using exploits as well as initial access brokers. In this instance, we've seen them exploiting Exchange [inaudible 00:21:28] related back in late 2022. They're also known to use living-off-the-land techniques -- things such as using BITSAdmin and other native tools built into the Windows operating system to start staging their tools and further deploying them into an environment. We've also seen them extending the capabilities of tools like NetScan with custom scripting and automation for ransomware deployment, as well as PsExec and other tools for lateral movement. They're also known to directly target endpoint protection with bring-your-own-driver techniques, and to protect those particular binaries with packing and other types of obfuscation. And they're known to use a multi-extortion strategy, such as exfiltrating data before they deploy the ransomware, so that they're more likely to get the ransom money out of their victim.
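Living-off-the-land tradecraft like the BITSAdmin and PsExec usage described here is often hunted by matching process command lines against natively installed binaries that attackers commonly abuse. A minimal, hypothetical detection sketch -- the pattern list is illustrative, not a complete or production ruleset:

```python
import re

# Illustrative patterns for native/admin tools commonly abused by ransomware operators.
SUSPICIOUS_PATTERNS = [
    re.compile(r"bitsadmin(\.exe)?\s+/transfer", re.IGNORECASE),   # payload staging via BITS
    re.compile(r"psexec(\.exe)?\s", re.IGNORECASE),                # lateral movement
    re.compile(r"certutil(\.exe)?\s+-urlcache", re.IGNORECASE),    # download cradle
]

def flag_command_line(cmdline: str) -> bool:
    """Return True if a process command line matches a known-abused LOTL pattern."""
    return any(p.search(cmdline) for p in SUSPICIOUS_PATTERNS)
```

In practice this kind of matching runs over process-creation telemetry in an EDR or SIEM, and legitimate administrative use of these tools means matches are leads for investigation, not verdicts.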

David Moulton: Tell our listeners how organizations should best defend against threats using techniques like Medusa.

Anthony Galiette: So patching public-facing assets is one of the first things that comes to mind -- making sure those are sealed up so that exploits fall by the wayside. Logging to a SIEM is something to consider as well, along with an endpoint protection tool for introspection, and then making sure that you're monitoring your service accounts and other highly privileged user accounts.

David Moulton: Anthony, beyond going to the Threat Research Center on the Unit 42 site, what is the most important thing that you want listeners to do or take away from this conversation?

Anthony Galiette: Understand your attack surface. What are your assets, your applications, and your user accounts? And understand which adversaries are most likely to target you.

David Moulton: Doel, Anthony, thank you for sharing on the Medusa ransomware. If you're interested in learning more about Medusa ransomware, visit the Unit 42 Threat Research Center for the full brief. That link will be on our show notes. We'll be back on the CyberWire in two weeks. In the meantime, stay secure, stay vigilant. Goodbye for now.