Research Saturday 2.4.23
Ep 267 | 2.4.23

Can ransomware turn machines against us?

Transcript

Unidentified Person: You're listening to the CyberWire network, powered by N2K.

Rick Howard: Hey, everybody. Rick Howard here. This is exciting. Check out our new special edition episode, where Brandon Karpf, our executive director of new markets, interviews a very special guest, ChatGPT, the AI chatbot launched by OpenAI that a lot of us have been playing with here at the CyberWire. Tune in to hear us try to turn ChatGPT into a security pundit, or at least a SOC analyst. Brandon asked the language model questions, and then he and I analyzed the answers. I'll give you a hint - it's pretty good. Come join us and see what you think. 

Dave Bittner: Hello, everyone, and welcome to the CyberWire's "Research Saturday." I'm Dave Bittner, and this is our weekly conversation with researchers and analysts tracking down the threats and vulnerabilities, solving some of the hard problems of protecting ourselves in a rapidly evolving cyberspace. Thanks for joining us. 

Tom Bonner: So we've been investigating the sort of machine learning attack surface for some time now. And we realized quite early on that there was some low-hanging fruit in terms of being able to execute code through machine learning models. 

Dave Bittner: Joining us this week are Tom Bonner and Eoin Wickens. They're from HiddenLayer's SAI team, the Synaptic Adversarial Intelligence team. The research we're discussing today is titled "Weaponizing Machine Learning Models with Ransomware." 

Tom Bonner: As part of that, we were looking at ways in which malware could be deployed through models. 

Dave Bittner: That's Tom Bonner. 

Eoin Wickens: So I think it's probably good to preface this with the fact that ML is being used in nearly every vertical these days. 

Dave Bittner: That's Eoin Wickens. 

Eoin Wickens: I think there was a recent survey of CEOs in which about 86% said that ML is part of their critical business functions. And with that, we were thinking, well, how much of an attack vector is this? So we've been researching this now for, I suppose, the last six, seven, eight months. And we've been finding that there are particular weak points within machine learning that people just haven't really considered or looked at yet, because, I suppose, with any new technology, it tends to race on ahead of security considerations. 

Eoin Wickens: So with models themselves, training is a huge cost, right? I mean, financially, as well as in time and processing capability. And to solve this, people use pre-trained models. Pre-trained models are essentially the result of this massive computation, and they can be shared freely and easily. There's actually a huge community built around the open-source sharing of models, very similar to open-source software. But with that has come, I suppose, a bit of lax scrutiny, in that models can be hijacked and code can be inserted into them. And I suppose that's where we are with the research that we put out, in that we're trying to shine a light on the fact that these models can be abused so readily and have been able to be abused for so long. 

Dave Bittner: Well, Tom, let's go through this together, then. Take us through step by step. I mean, how did you go about this exploration here? 

Tom Bonner: So the first thing we looked at was a very popular machine learning library called PyTorch. It's used for quite a lot of text generation models, image classifiers, things like that. And under the hood, it's storing its data using a format called pickle. This is part of the Python standard library for serializing data. Now, unfortunately, there's been a big, red warning box in the pickle documentation for probably about the last 10 years, saying, do not use this if you do not trust the source of the data, because you can embed executable code in a pickle file. Now, I think we've known about this for quite some time, but we just wanted to take it to its logical extreme, if you will. So we looked at ways of abusing the pickle file format to execute arbitrary code. And we also looked at ways in which we could then embed and hide the malware in a model, as well. 
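
For illustration, here is a minimal sketch - assuming Python 3, with a placeholder file name and a harmless echo command - of the pickle behavior Tom describes: an object's __reduce__ method tells pickle which callable to run when the file is deserialized, so simply loading the file executes attacker-chosen code.

```python
import os
import pickle

class PoisonedObject:
    # pickle consults __reduce__ to learn how to rebuild this object; whatever
    # callable it returns is invoked at load time, before any model weights
    # are ever touched.
    def __reduce__(self):
        return (os.system, ("echo code executed during pickle.load",))

# An attacker would serialize something like this into a model artifact.
with open("innocent_looking.pkl", "wb") as fh:
    pickle.dump(PoisonedObject(), fh)

# The victim only has to load the file - no method call on the object is needed.
with open("innocent_looking.pkl", "rb") as fh:
    pickle.load(fh)  # the echo command runs here
```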

Tom Bonner: So we ended up using an old technique called steganography - originally, the art of embedding secret messages within other, innocuous-looking messages. Now, in this case, we actually targeted what I'd call the weights and biases in a model, perhaps more colloquially known as neurons. And by targeting the neurons in the model, we were able to change them very slightly and embed a malicious payload in there, in such a way that it wouldn't really affect the efficacy of the machine learning model at all. And then, using pickle to execute arbitrary code when the model is loaded, we can reconstruct the malware payload from the neurons and execute it. So what this means is that the model itself looks normal, really. It loads and runs as normal. But when it is loaded up on a data scientist's system or up in the cloud - wherever you're deploying this pre-trained model - it's going to automatically execute malware upon load. 
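
One simplified way to picture the weight-embedding step - an illustrative least-significant-bit scheme using NumPy, not necessarily HiddenLayer's exact method - is to hide the payload in the lowest bit of each float32 weight, which perturbs the values by only about one part in 2^23:

```python
import numpy as np

def embed_payload(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide the payload bits in the least significant bit of each float32 weight."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = weights.astype(np.float32).ravel().copy()
    assert bits.size <= flat.size, "tensor too small for payload"
    raw = flat.view(np.uint32)                                   # reinterpret floats as raw 32-bit ints
    raw[:bits.size] = (raw[:bits.size] & ~np.uint32(1)) | bits   # overwrite the lowest mantissa bit
    return raw.view(np.float32).reshape(weights.shape)

def extract_payload(weights: np.ndarray, length: int) -> bytes:
    """Recover `length` bytes from the LSBs of the float32 weights."""
    raw = weights.astype(np.float32).ravel().view(np.uint32)
    bits = (raw[: length * 8] & np.uint32(1)).astype(np.uint8)
    return np.packbits(bits).tobytes()

# Round-trip demo on a random "neuron" tensor.
w = np.random.randn(4, 256).astype(np.float32)
secret = b"stand-in for a real malware payload"
w2 = embed_payload(w, secret)
assert extract_payload(w2, len(secret)) == secret
print("max weight perturbation:", np.abs(w2 - w).max())          # negligible
```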

Dave Bittner: Now, what is the normal amount of scrutiny that a model like this would get from a security point of view? I mean, if someone is using this, is deploying it, to what degree do they trust it out of the box? 

Tom Bonner: That's a very good question. And really, the crux of the problem is that most security software is not really looking too deeply into machine learning models these days. There are a lot of what are called model zoos, which are online repositories where people can share their pre-trained models - places like Hugging Face or TensorFlow Hub. And I think data scientists are quite used to just downloading a model and loading it up on their machine, or on a cloud or AWS instance, without really doing any security checks to see if it's been tampered with or subverted in a malicious manner. So, yeah, really, this is why we took things to such an extreme - to highlight that malicious code can quite easily be embedded in these things and automatically executed when you load them. 

Dave Bittner: Eoin, we've seen this sort of thing on GitHub as a supply chain issue where, you know, somebody can have a repository there. Something gets changed, people are relying on it, and the change, the malicious change, makes its way into people's production pipeline. Is this the same sort of thing you're imagining here where someone would surreptitiously insert something into one of these models and it goes undetected? 

Eoin Wickens: Yeah, absolutely. I think it's very similar to your traditional supply chain attack. And I suppose the limitations of such an attack are really up to the imagination of the attacker. This could serve as an initial access point. It could be a source of lateral movement. You could deploy other malware, or have a remote backdoor for access into the environment. And I think what makes these attacks, if not more dangerous than traditional attacks, then just as dangerous, is that often with models, you'll have a lot of access to training data, and that training data may contain personally identifiable information. You'll also have access to the model binaries themselves, or to other models that have been trained within the environment. And in that instance, if you've been training a model for the last couple of months and have had a lot of sensitive data go into it, and it gets stolen, that could be a huge financial cost, as well as quite a large loss of advantage, really, against other companies. And obviously there's the potential for things to be ransomed back as well, basically following the more traditional cybersecurity attack format that we've seen in the past. 

Tom Bonner: And I would just add to that as well that there's very little in terms of integrity checking or signing around models. So, yeah, from a supply chain perspective, it's pretty scary. It would be very easy for an attacker to subvert a model, and a reputable vendor could end up distributing it downstream to their clients, and nobody would really be able to know or tell at the moment. 

Dave Bittner: And what degree of technical proficiency or sophistication would be required to have the skills to be able to do something like what you all have outlined here in your research? 

Tom Bonner: It's actually quite low. In terms of the technical skills required, I would say right now this is pretty much in the domain of script kiddies to pull off. We released some tooling to do this, but we're by no means the first. There are others who've released tools for targeting the pickle file format and for performing steganography on neural networks and ML models. So really, it's just a case of stringing together the right commands these days and inserting your malicious payload. It doesn't take an awful lot of skill for an attacker. 

Dave Bittner: And Eoin, in terms of detecting this, are there tools that will do this or techniques that you all can recommend? 

Eoin Wickens: There are. There's been research into securing the pickle file format in the past, because it's inherently vulnerable. One of those efforts is Fickling by Trail of Bits. They've put very good work into detecting abuse of pickle files. But there's also a whole host of other ways that pickles can be abused, and that can potentially be another major pitfall. So I suppose it would be silly of me to pass up the opportunity to say that this is something we do look at inside HiddenLayer - a way of scanning models and verifying their integrity to ensure that they're not housing malicious payloads and such. But beyond that, it's not been extensively explored within the industry as far as automated approaches go, outside of tools such as Fickling. 
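
As a rough illustration of what such scanning involves - a toy heuristic, not Fickling or HiddenLayer's scanner, and the file name is a placeholder - Python's own pickletools can enumerate a pickle's opcodes without executing anything, so you can flag imports of modules a benign model has no business touching. (A modern PyTorch .pt file is a zip archive, so you'd first extract its embedded data.pkl.)

```python
import pickletools

# Modules a benign model pickle has little reason to import. Illustrative
# list only - real scanners (e.g., Trail of Bits' Fickling) go much further.
SUSPICIOUS_MODULES = {"os", "posix", "nt", "subprocess", "sys", "socket",
                      "builtins", "runpy", "importlib"}

def audit_pickle(path: str) -> list[str]:
    """List globals the pickle would import, flagging suspicious ones,
    without ever unpickling (and therefore executing) the file."""
    findings, recent_strings = [], []
    with open(path, "rb") as fh:
        data = fh.read()
    for opcode, arg, _pos in pickletools.genops(data):
        if isinstance(arg, str):
            recent_strings.append(arg)
        if opcode.name == "GLOBAL":            # arg looks like "module name"
            module = arg.split(" ", 1)[0]
        elif opcode.name == "STACK_GLOBAL":    # module/name strings were pushed just before
            module = recent_strings[-2] if len(recent_strings) >= 2 else "?"
        else:
            continue
        if module.split(".")[0] in SUSPICIOUS_MODULES:
            findings.append(module)
    return findings

print(audit_pickle("downloaded_model.pkl"))  # hypothetical file name
```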

Dave Bittner: Tom, are you aware of this sort of thing being exploited in the wild? Have we seen any examples of this? 

Tom Bonner: We are just starting to uncover in-the-wild attacks using these techniques. Just recently, we started to see common tools, things like Cobalt Strike and Metasploit, leveraging the pickle file format to execute code - again, going back to the fact that a lot of antivirus and EDR solutions aren't really monitoring pickle, Python and things like that very closely. We've seen a new framework recently as well, called Mythic, that allows you to craft pickle payloads that will automatically execute, say, shellcode or a known binary. And from there, you can load up a C2 or some sort of initial access or initial compromise malware. 

Dave Bittner: So what are your recommendations then? I mean, for folks who may be concerned about this, what sort of things can they put in place to protect themselves? 

Tom Bonner: Well, first and foremost, do not load untrusted models - in fact, don't really load any machine learning models you've downloaded from the internet on your corporate machine or in your very expensive cloud environment, where they could potentially be hijacked for coin mining or things like that. Aside from that, apply careful scrutiny to models: scan them for malware and payloads, and evaluate their behavior as well. We can use sandboxes, for example, to check the behavior of a model when it's loaded and make sure it's not doing things like spawning cmd.exe to create a reverse shell. And also, for suppliers of models, look into designing models so that we can verify their integrity and even ensure they're not corrupt in any way - we're lacking basic mechanisms like that for models at the moment. 
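
On the integrity point, even something as simple as comparing a downloaded model file against a digest the publisher distributes out of band goes a long way. A minimal sketch - the file name and expected digest below are hypothetical:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large model files need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        while chunk := fh.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# The publisher would supply the expected digest through a trusted channel
# (signed release notes, package metadata, etc.). Value truncated for illustration.
EXPECTED = "0c2fa56..."
MODEL_PATH = "resnet50_pretrained.pt"

if sha256_of(MODEL_PATH) != EXPECTED:
    raise SystemExit(f"{MODEL_PATH} does not match the published checksum - do not load it")
```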

Dave Bittner: Eoin, any final thoughts? 

Eoin Wickens: It's probably also worth mentioning that we did release a YARA rule for public consumption to detect a lot of different types of malicious pickle files. So that is something we've tried to provide people with today, so that they can scan their models. Tom also touched on a really interesting point there - the use of coin miners within production cloud computing environments. I mean, if there's one thing those have access to, it's vast amounts of GPU computational power. With a lot of traditional attacks, you'd see coin miners ending up as an initial stage in victim environments. And you can imagine now, if one of those happened to get into a massive SageMaker instance or something like that, how much illicit fortune could be made. 
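
The rule to deploy is the one HiddenLayer published; purely to show the mechanics of scanning model files this way, here is a toy rule run through the yara-python bindings (the rule body and file name are hypothetical, and this one only catches a protocol-0 GLOBAL import of os.system or posix.system):

```python
import yara  # pip install yara-python

# Toy rule for illustration only - the published HiddenLayer rule is far more
# complete. This matches the pickle byte sequences for GLOBAL "os system"
# and GLOBAL "posix system".
RULE = r"""
rule suspicious_pickle_os_system
{
    strings:
        $global_os    = "cos\nsystem\n"
        $global_posix = "cposix\nsystem\n"
    condition:
        any of them
}
"""

rules = yara.compile(source=RULE)
matches = rules.match("downloaded_model.pkl")  # hypothetical file name
print(matches or "no matches")
```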

Dave Bittner: Our thanks to Tom Bonner and Eoin Wickens from HiddenLayer for joining us today. The research is titled "Weaponizing Machine Learning Models with Ransomware." We'll have a link in the show notes. 

Rick Howard: Arm your team with the latest news and trends in the evolving cybersecurity landscape with CyberWire Pro enterprise. Our unlimited pro content will allow you and your team to accelerate and sustain cybersecurity awareness and knowledge. Get access to searchable and accessible news in cybersecurity, business, policy, privacy, disinformation and more. Find out about our education and military discounts by inquiring at thecyberwire.com/pro. 

Dave Bittner: The CyberWire "Research Saturday" podcast is a production of N2K Networks, proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. This episode was produced by Liz Irvin and senior producer Jennifer Eiben. Our mixer is Elliott Peltzman. Our executive editor is Peter Kilpe. And I'm Dave Bittner. Thanks for listening.