Threat Vector 7.3.24
Ep 26 | 7.3.24

AI-Generated Cyber Threats

Transcript

David Moulton: Welcome to Threat Vector, the Palo Alto Networks podcast where we discuss pressing cybersecurity threats and cyber resilience, and uncover insights into the latest industry trends. I'm your host, David Moulton, Director of Thought Leadership for Unit 42. In today's episode, we have a fascinating and critical discussion lined up as we dig into the world of AI-generated malware. Joining me are two exceptional guests from the Palo Alto Networks Cortex Research Group: Rem Dudas, Senior Threat Intelligence Analyst; and Bar Matalon, Threat Intelligence team lead. The rapid advancements in AI have brought about numerous benefits, but they've also introduced new and unprecedented challenges in the realm of cybersecurity. Over the past year and a half, we've seen generative AI models like ChatGPT rise to prominence, offering powerful tools that anyone with an internet connection can access. While these tools have the potential for positive application, they also pose significant security risks when used maliciously. Rem and Bar have been at the forefront of researching these risks, conducting groundbreaking experiments to understand just how capable these AI models are at generating sophisticated malware. Today, we'll be discussing their findings, the implications of AI-generated malware for the cybersecurity landscape, and what organizations can do to protect themselves from these emerging threats. We'll explore questions such as: can generative AI truly build malware, how difficult is it for a threat actor to leverage these tools, and what does this mean for the future of cybersecurity defense? Here's our conversation. So I think I'll start with you, Bar. Talk to me a little bit about yourself, your team, and what you've been up to.

Bar Matalon: Yeah. So we're from the Threat Intelligence team in the Cortex Research Group here at Palo Alto Networks. We're the team that mainly focuses on external sources. You know, there are other teams that do telemetry, but we're focused on open source intelligence. We track the threat landscape to find new campaigns and new malware. And our mission is to make sure that our customers are protected from these emerging threats.

David Moulton: And let me ask, you said open source in there. What is it about open source that either drew you in or is an organizational choice?

Bar Matalon: It can be, like, open repositories where malware samples are uploaded. But it can also be a report published by another security company. So we monitor these sources, take the samples they point to, and run them in our lab against Cortex XDR to see its coverage.

David Moulton: Gotcha.

Bar Matalon: Yeah. Most of the time Cortex does a great job. Sometimes there are gaps. So our mission in the team is to hand those over to the other research teams and make sure we add that coverage as quickly as possible.

David Moulton: You did mention that you guys were doing some research to understand AI models' ability to generate malware. Where did you get that idea?

Bar Matalon: Yeah. So one of the things that we do in the team is not only responding to emerging threats; we also try to identify trends, and sometimes to anticipate trends in the near future of cybersecurity. And, as we all know, over the past two years, even less than that, a year and a half, let's say, generative AI has become really popular since the release of ChatGPT by OpenAI. And it's not only because of the improvement in the technology, which actually has improved significantly in recent years. I think the real game changer here was the fact that it's very accessible. Basically, anyone with an internet connection can go and use a very complicated and powerful AI model. And while I'm sure it will improve our lives in so many aspects, it also raised some concerns in the realm of cybersecurity. One of these concerns was about threat actors using AI capabilities to generate malware and, you know, the potential implications that could have on the cyber landscape. So we decided we wanted to test it out, to see if we're at the point where it's possible for threat actors to do so. Because of the hype, there are many articles and reports and opinions -- you name it -- so we decided we wanted to go to the models and try to generate malware ourselves.

David Moulton: So you saw this tool. It's easily accessible. It's hugely powerful. It's adapting to and learning very quickly. And you said, while it can do some really great things, does it have the ability to also generate some harmful things? The question that comes to mind for me, and probably for many of our listeners, is the bottom line: can generative AI build malware?

Rem Dudas: The simple answer is yes. And there is a bit of a longer version of that answer. It's a lot more complex than it seems at first, but it is possible.

David Moulton: Talk to me a little bit about that. Does that mean that you've got to know a little bit about what you're doing? You know, if I'm an unskilled 10-, 15-year-old kid messing around with an AI tool or a generative AI tool, I'm probably not going to accidentally stumble into building malware. But if I have a little bit more skill and understand the prompting, I get to that outcome?

Rem Dudas: I'd say, yeah, a 15-year-old kid without any knowledge probably isn't going to stumble into generating malware. But someone with a bit more technical knowledge can get some pretty amazing results from the commercial models.

David Moulton: With a little bit of knowledge, with a little bit of prompting, how did you judge where generative AI building malware becomes dangerous, or where it starts to go into the realm of a tool that a professional could use to go faster or build more creative malware?

Rem Dudas: It took a while. It was pretty much a trial and error process. We had a lot of attempts at first, and we didn't manage to generate much in the beginning. But after getting the hang of it, researching it a bit, and learning what makes it tick, yeah, we started getting more frightening results.

David Moulton: So can you describe the stages that you followed as you got into your research?

Rem Dudas: It started as a sort of trial and error thing. We didn't have, I'd say, specific research questions going into it. We mostly wanted to know, can this thing generate malware? We quickly figured out that it could, without question. And we started thinking, what can we do with it now? The main stage after the basic tinkering with the products was trying to generate malware samples that perform specific tasks based on MITRE ATT&CK techniques, if you're familiar with those. So, for example, we would try to generate a sample that does credential gathering from Chromium browsers. For each technique that we found interesting, we tried generating a specific sample. We did that for a couple of different operating systems -- Windows, macOS, and Linux. And we tested all of those samples against our product as well.

David Moulton: And you said that you started out with sort of an experiment; you didn't necessarily know where you were going until you started to see where you could mine results. Did things accelerate? Were you surprised by the fact that you were able to get better and better malware out of a generative AI?

Rem Dudas: So things definitely accelerated, especially in the later parts of the research. I'd say that, at the earlier stages, when we were generating samples per MITRE technique, the results were pretty consistent and pretty underwhelming. We didn't find them incredible or frightening at all at the start. The malware was not robust. It could only perform one basic task at a time, and most of the results were definitely hallucinations. If you're familiar with the term, LLMs can hallucinate; they can invent answers that do not work. In terms of code, they will just give you libraries that don't exist or commands that don't exist.

David Moulton: So is it possible to instruct an AI to mimic another malware?

Rem Dudas: That was the next stage of our research. Yes, it is. Our next stage was to test the abilities of generative AI in terms of impersonating threat actors and specific malware types. We used open source materials -- Bar touched on this earlier -- those articles analyzing malware families and threat actors. We used a couple of those as a prompt, or description, for a generative AI engine and asked it to impersonate the malware discussed in those articles. And we managed to do some pretty nasty things with that.

David Moulton: So what does that mean for threat actors? Are they able to take those same articles and those types of prompts and use them to either build or modify malware?

Rem Dudas: I mean, that's purely speculative at this point. But imagine a nation-state actor with ill intent using psychological warfare, mimicking another nation's arsenal or kit or malware and planting false flags, trying to make it look as if another country or another threat actor carried out a specific attack. It opens the door for a lot of nasty business and makes attribution and detection pretty difficult for the defending side.

David Moulton: Rem, can you give me an example of something that you were -- you were able to generate?

Rem Dudas: Yeah, definitely. Unit 42 published a paper a while ago about the BumbleBee web shell. A web shell is a basic command-line tool that you plant on a server, and then you can execute commands on that server from somewhere else. The BumbleBee web shell is pretty basic as web shells go. It can execute commands, drop files, and upload files. And it has password protection, both for viewing the shell and for interacting with it. The most striking thing about it is probably its UI. It has a very unique user interface, with, like, yellow dots around every field. We tried to get a generative AI engine to impersonate BumbleBee, and this is a bit intricate because it has two parts. One is impersonating the logic -- you know, checking for passwords, being able to execute commands, upload files, etc. The other bit is trying to get that UI, that distinct look. And we succeeded in both. We used the article from Unit 42 to get the logic. And we described the UI to a generative AI model and received code that implemented it. And it looked exactly the same as the pictures of BumbleBee.

David Moulton: Is that within the realm of your expectations of what you were going to be able to get, or did that combination of logic and UI coming back and passing the sniff test surprise you?

Bar Matalon: When we started out, the quality of the results made me believe that we would never get to a point like BumbleBee. The quality was so low at the beginning that something like that was truly mind-boggling for me.

David Moulton: Bar, what challenges might defenders see in the future?

Bar Matalon: So, yeah, that could be very challenging for security researchers, especially for the purposes of attribution. Today, most of the methods used to do attribution rely on the tools and the techniques that you observe in the attack. And once we get to the point where it's so easy for threat actors to go to the AI model and ask, you know, "generate me a malware that looks similar to a known malware used in a campaign attributed to," let's say, a certain nation-state actor -- once that is possible and an attacker can do it easily, it will be very, very challenging and very hard for researchers to distinguish between the genuine malware and these, you know, imitation malware samples.

David Moulton: What have you learned from the experiments that have most surprised you?

Rem Dudas: We've learned a couple of surprising things. Just to name them off: how easy it was to generate malware once we figured out the tricks, I'd say; how weak the guardrails are on those commercial AI models; and the fact that some of those models are not as deterministic as the internet would suggest. So, you know, asking the same question, giving the same prompt, you would expect that it will give you the same answer 100 times. But that's not the case. It'll give you a somewhat different answer every single time. And we also had some weird shenanigans with some of the generated malware samples.
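
The non-determinism Rem describes is easy to see for yourself. Below is a minimal sketch, using the OpenAI Python SDK, that sends the same deliberately benign prompt several times and counts how many distinct answers come back; the model name is an assumption, and any chat API with a sampling temperature behaves similarly.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = "Write a one-line Python function that reverses a string."

answers = set()
for _ in range(5):
    response = client.chat.completions.create(
        model="gpt-4o-mini",          # assumed model choice for illustration
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,              # non-zero temperature means stochastic sampling
    )
    answers.add(response.choices[0].message.content.strip())

# With temperature above zero, the same prompt typically yields several
# distinct answers rather than one repeated response.
print(f"{len(answers)} distinct answers out of 5 identical prompts")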

David Moulton: And what do you mean by weird shenanigans?

Rem Dudas: For example, when we tried to generate, or impersonate, a specific well-known ransomware family, we found a logical flaw in the code that the AI wrote -- something kind of unique. You know, when a ransomware goes over files, it opens them and tries to encrypt the binary data. It uses a key of a specific length for the encryption, and it's supposed to encrypt the files with this key each and every time. Now, the code that the AI wrote didn't check the size of the key buffer. That created a unique problem in which, if you provided a short key, the ransomware would just print every single string that came after the key in the original malware code into the encrypted file. Which became very problematic when you realize that it just printed the encryption key in plain text into each encrypted file.
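
To make that flaw a bit more concrete, here is a toy reconstruction in Python of one way an unchecked key length can leak adjacent data. It is not the actual AI-generated code and not functional ransomware; the fixed KEY_SIZE, the simulated memory layout, and the XOR "encryption" are all illustrative assumptions.

KEY_SIZE = 32  # assumption: the generated code expected keys of exactly this length

def flawed_encrypt(plaintext: bytes, key: bytes, adjacent_memory: bytes) -> bytes:
    # Simulate the bytes that happen to sit right after the key inside the program.
    key_region = key + adjacent_memory
    # BUG: KEY_SIZE bytes are copied from the key's location without checking the
    # actual key length, so a short key drags the neighbouring plain-text strings
    # (key included) into the block that gets written to every output file.
    stored_block = key_region[:KEY_SIZE]
    ciphertext = bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))
    return stored_block + ciphertext  # the over-read block lands in the file unencrypted

# With a short key, every "encrypted" file starts with readable leftovers:
out = flawed_encrypt(b"victim data", b"shortkey", b"\x00other_strings_in_the_binary\x00")
print(out[:KEY_SIZE])  # b'shortkey\x00other_strings_in_the_bi...' -- the key in plain text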

David Moulton: So, guys, talk to me a little bit about what are your next steps in this research.

Rem Dudas: Okay. Well, after we found out that we can impersonate malware and that we can ask those AI models for specific techniques, we wanted to build a framework of sorts -- a set of questions, or prompts, that you can ask pretty consistently and receive good, accurate results. We want to get to a point where we can bulk-generate malicious samples as easily and as quickly as possible in order to test and strengthen our product. So the next step, after building the framework, would obviously be automating it and trying to build, like, a pipeline that will automatically test those samples.
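
As a rough illustration of what such an automated test pipeline could look like, here is a minimal Python skeleton -- a sketch under assumptions, not the team's actual tooling. The prompt catalogue entries are placeholders, the model call is intentionally left unimplemented, and "scanner-cli" is a hypothetical stand-in for whatever detection engine is under test.

import pathlib
import subprocess

# Placeholder catalogue: technique names mapped to prompt stubs (intentionally non-functional).
PROMPTS = {
    "T1555.003_browser_credentials": "<prompt describing the technique to exercise>",
    "T1486_data_encryption": "<prompt describing the technique to exercise>",
}

def generate_sample(prompt: str) -> str:
    """Ask the chosen LLM for a sample; the actual model call is omitted in this sketch."""
    raise NotImplementedError("model call intentionally left out")

def is_detected(sample_path: pathlib.Path) -> bool:
    """Submit a sample to the detection engine under test (hypothetical CLI)."""
    result = subprocess.run(["scanner-cli", "scan", str(sample_path)], capture_output=True)
    return result.returncode != 0  # assume a non-zero exit code means the sample was flagged

def run_pipeline(out_dir: str = "generated_samples") -> dict:
    out_path = pathlib.Path(out_dir)
    out_path.mkdir(exist_ok=True)
    results = {}
    for name, prompt in PROMPTS.items():
        sample_file = out_path / f"{name}.txt"
        sample_file.write_text(generate_sample(prompt))
        results[name] = is_detected(sample_file)  # record coverage per technique
    return results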

David Moulton: So, essentially, you build yourself a playbook that you can use, generate consistent results, and then run those through I assume some level of automation to test against the endpoint protection that we have here. What do you do after that? What are the implications of this AI-generated malware on the cybersecurity landscape?

Bar Matalon: First of all, I think we need to remember that AI-generated malware isn't fundamentally different from other types of malware, because at the end of the day it's malware like any other. But once threat actors start leveraging AI technology, it could give them many advantages. Maybe we're not there yet, as our experiments show, but we believe we'll soon be at the point where threat actors can leverage this technology. So they can use it for development power, for example, and for development knowledge. You know, in the past, if you wanted to have your own malware family that was very sophisticated and powerful, you probably needed to hire very talented developers and a group of people who understand security operations, operating systems, vulnerability exploitation, and so on. But once you can go to your AI model and use it to do those things, that means potentially every individual hacker could be a threat like what we call an APT -- an advanced persistent threat, the more serious groups in the landscape. And people can use it to generate malware without writing even a single line of code. That's what we call lowering the barrier, so more people can join this game. In addition to that, threat actors can use AI to be much more effective. They can automate different stages of the attack and streamline tasks like reconnaissance, malware distribution, or data exfiltration. Every stage of an attack can be automated, and that allows threat actors to be more effective and launch campaigns on a larger scale. Another point that should be discussed when we're talking about AI and its potential implications for cybersecurity is what we call polymorphism, or polymorphic malware. Polymorphic malware is malware whose code looks a little bit different on each execution or each deployment. And generative AI is a great tool for automating the generation of this kind of malware, because you can have a piece of code that is generated by AI and, since you get different bits of code every time, the malware actually changes its appearance. It can have different signatures. It can even have different techniques or different functionality. And that's something that can be very challenging for signature-based tools to detect and stop. We believe that we will soon see a different cyber landscape: more threat actors, campaigns in higher volume and on a larger scale, and thousands of slightly different malware samples generated every day.
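
A tiny, benign illustration of why per-sample signatures struggle with output that changes on every generation: the two snippets below behave identically, yet any byte-level signature or hash of one says nothing about the other. The snippets and the use of SHA-256 are just illustrative assumptions.

import hashlib

# Two snippets with identical behaviour but different text.
variant_a = b"def run(values):\n    return sum(values)\n"
variant_b = b"def run(values):\n    total = 0\n    for v in values:\n        total += v\n    return total\n"

print(hashlib.sha256(variant_a).hexdigest())
print(hashlib.sha256(variant_b).hexdigest())
# The hashes differ completely even though the behaviour is the same, which is
# why behaviour-based detection has to pick up where per-sample signatures stop.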

David Moulton: I'm struck by the idea that you've built a pipeline where you can build malware, whether it's polymorphic or not, and give defenders an advantage on this front, since they can run those samples against their own tools to bolster their defenses. So, while there is a dangerous side to this, I think what you've done is give researchers and teams like ours here at Cortex the ability to make sure our defenses are incredibly strong. It seems to me that by having the ability to generate malware at scale we can ensure our protections are constantly improving, right?

Bar Matalon: Yeah, it's like an unlimited amount of malware samples that we can test on our product. Which is, like, weirdly good.

David Moulton: So let me wrap it up. I've got two more questions for you guys. Now, as you're thinking about best practices, organizations out there are going to want to defend against AI-generated malware. What do you recommend that organizations do as far as best practices?

Bar Matalon: Cybersecurity has always been kind of a cat and mouse game, and organizations need to stay ahead in this game. And it seems like AI, and this AI arms race, is kind of the latest iteration of it. In the AI era, where every single hacker can be a threat like, you know, a serious actor and generate APT-level malware, I don't really see traditional signature-based tools like antivirus managing to keep up with all the new samples generated. So one of the best practices for organizations is to invest in advanced tools that leverage dynamic detection and behavioral rules to detect all these new threats and stop them. The best solution for a bad person with an AI model is a good person with an AI model, right? Of course, it almost goes without saying that organizations should maintain what we call IT hygiene and adhere to the security-in-layers paradigm. That can help them reduce the attack surface, because, at the end of the day, if an organization has a vulnerable server exposed to the internet, it doesn't really matter if the malware was written by an AI or a human.
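
To sketch what a behavioral rule of the kind Bar mentions might look like, here is a minimal, defender-side example: flag any process that rewrites an unusual number of files in a short window, regardless of what the binary looks like. The thresholds, the event format, and the function name are all assumptions for illustration.

from collections import defaultdict, deque
import time

WINDOW_SECONDS = 60   # assumption: look at the last minute of activity
MAX_REWRITES = 50     # assumption: rewriting more than 50 documents per minute is abnormal

recent_writes = defaultdict(deque)  # process id -> timestamps of recent document rewrites

def on_file_rewrite(pid: int, path: str, now: float | None = None) -> bool:
    """Record a file-rewrite event; return True if the process crosses the threshold.
    (A real rule would also filter `path` by document type, entropy change, etc.)"""
    now = time.time() if now is None else now
    events = recent_writes[pid]
    events.append(now)
    while events and now - events[0] > WINDOW_SECONDS:
        events.popleft()
    # The alert keys on behaviour (mass rewriting of files), not on the sample's
    # bytes, so a brand-new or polymorphic sample can still trip it.
    return len(events) > MAX_REWRITES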

David Moulton: Defense in depth and good security and IT hygiene -- some of the fundamentals and the basics -- paired with some of those more advanced tools, the dynamic detections. Guys, let's wrap it up here. Rem, I'll kick it over to you. What is the most important thing that a listener should remember from this conversation?

Rem Dudas: Yeah, yeah, yeah. The field hasn't fundamentally changed yet. Malicious behavior is still malicious behavior. But we are well on the way towards something that will start to look uniquely different, I think, from what we're familiar with today.

David Moulton: Bar, what's the most important thing that a listener should remember from this conversation?

Bar Matalon: It is possible to generate malware using AI, but it's not so easy. You need to have a basic understanding of how coding works and how to compile such malware. And you have to bypass the guardrails that AI models have today.

David Moulton: Yeah. I remember a few years ago seeing a design for an automobile that was completely generated by an AI model. And it was bizarre looking. It didn't follow the lines and curves and expectations that you had for what a car would look like. And, yet, it was highly efficient, had a low profile for wind resistance, and it moved people and things around. It was still recognizable as a car, but completely different. And I wonder, as we look at AI-generated malware through that real-world lens of the automobile, whether we'll still be able to recognize it as malicious, even though it acts and behaves and puts things together in ways that we wouldn't necessarily expect from historical, human-generated malware. But it still has the same intent. Let's plan on coming back to this conversation in, I think, six months, because the pace of development in and around AI has caught me off guard. You guys into that?

Rem Dudas: Sure.

Bar Matalon: Yeah. Sounds great.

David Moulton: All right. Bar, Rem, thank you so much for coming on Threat Vector today and giving us your insights on the research that you've been running and the findings that you've talked about today.

Bar Matalon: Thanks for having us, David.

Rem Dudas: Thank you very much.

David Moulton: That's it for Threat Vector this week. I hope you found my conversation with Rem and Bar as insightful and thought provoking as I did. Before I sign off, I want to recap two profound insights discussed today. First, generative AI models have significantly lowered the barrier to creating sophisticated malware. These powerful tools, accessible to anyone with an internet connection, mean that even those with minimal technical knowledge may be able to generate effective malware. This means that we may see an increase in both the volume and variety of threats we face in cybersecurity, from a larger number of threat actors. Second, we discussed the evolution and challenges posed by polymorphic malware. With AI's capability to generate highly adaptive and elusive malware that changes its code upon each execution, traditional signature-based detection tools will struggle to keep up. This advancement necessitates the development of more sophisticated defense mechanisms. As AI continues to evolve, so too must our strategies for detecting and defending against these rapidly changing threats. Thank you for joining today, and stay tuned for more episodes of Threat Vector. If you like what you heard, please subscribe wherever you listen, and leave us a review on Apple Podcasts. Your reviews and feedback really do help us understand what you want to hear about. We want to thank our executive producer, Michael Heller. I edit Threat Vector, and Elliott Peltzman mixes our audio. We'll be back in two weeks. Until then, stay secure. Stay vigilant. Goodbye for now.