ChatGPT grants malicious wishes?
Dave Bittner: Hello, everyone, and welcome to the CyberWire's "Research Saturday." I'm Dave Bittner, and this is our weekly conversation with researchers and analysts tracking down the threats and vulnerabilities, solving some of the hard problems of protecting ourselves in a rapidly evolving cyberspace. Thanks for joining us.
Bar Block: As you probably know, ChatGPT has made a lot of waves since its release. So we wanted to see what we can do with it, whether it's phishing emails or info stealers and stuff like that. So we wanted to see it for ourselves.
Dave Bittner: That's Bar Block, threat intelligence researcher at Deep Instinct. We're discussing their research titled "ChatGPT and Malware: Making Your Malicious Wishes Come True."
Dave Bittner: So you start off here in your research trying to get ChatGPT to write a keylogger for you. Can we walk through that together? How did you begin?
Bar Block: Well, at the beginning, I just, like, simply asked it to write a keylogger. It refused, and it, like - it gave a message that keylogging is wrong, malware is bad, stuff like that. And then I thought, OK, so I won't ask it for a keylogger. I will just describe to it what I want the program to do, which is keylogging, but without saying the word keylogger. And it worked.
Dave Bittner: And what was the output that it provided for you?
Bar Block: Well, it provided a keylogger in Go. That's the language I asked it to write a keylogger in. Just a simple program, just records keystrokes, saves them to a file. Later on, I asked it to add a function that can send that file using FTP to a remote location. And that's what it did.
Dave Bittner: And so with your success there, you moved up a level, and you asked it to create some ransomware. Take us through how that worked.
Bar Block: Well, like before, I asked it to write a ransomware. It refused. So I just described what the program should do. I asked it to make a program that iterates over directories and subdirectories, encrypts all the files in those directories, and puts a text file with a simple message in each directory - which later on I changed to a more malicious one. And, well, that was it.
Dave Bittner: Yeah. And it handled the encryption and everything. And...
Bar Block: Yeah.
Dave Bittner: ...This code ran fine. It did what you asked it to do.
Bar Block: Yeah. Well, I had to add two imports, which ChatGPT omitted on purpose - I think it was on purpose, because it wrote me exactly which imports were missing. So it knew what was needed for the program to run properly. So I just added them, and it ran OK. And it was even able to bypass most of the security products on VirusTotal.
Dave Bittner: Yeah, that's a really fascinating part of this story here, is that you took the results that ChatGPT generated, and you ran it through VirusTotal. And what happened when you did that? What were the results there?
Bar Block: Well, one of the samples I compiled to both PE32 and PE64 versions. One of the versions got three detections. The other got four detections. These are very low numbers because there's, like, 70 vendors on VT, and that's all that was detected. I assume that it was because I used Go - it's an uncommon language, and that's why I chose it to begin with. And also, the encryption was quite simple. It used AES, which is an encryption algorithm that isn't really used much in ransomware these days. So the simplicity and the use of Go are probably the reasons it got such a low detection rate.
Dave Bittner: You know, you mentioned that the code that ChatGPT had provided to you was almost complete, but there were a couple of things you had to add there. Can you explain to us exactly what you think is going on there?
Bar Block: Well, I think they omitted these parts by design - maybe somehow - I can't be sure of that - they knew that this program might be used for malicious purposes, so they omitted some of the imports by design. And I know they knew exactly which imports were missing because right after ChatGPT provided me with the ransomware code, it added a message that said not to run this program; I needed, like, two more imports, and it said which ones. So it knew exactly what it needed to do. It probably just didn't want to supply a working ransomware.
Dave Bittner: Yeah, that's fascinating. It's like it's saying, you know, here's - I don't know - here's the gun that I built you. By the way, you're going to need some gunpowder.
Bar Block: Yeah, something like that.
Dave Bittner: Now, a point that you all make here is that you can use this to help defend against malware as well. And you set it to generating some YARA rules. What happened there?
Bar Block: Well, I asked ChatGPT to generate YARA rules for specific MITRE techniques. I gave it the technique name and ID, and it generated a rule. The rule was very general, and if I had used it in an actual environment, it would generate lots of false positives. So obviously it wasn't a good rule. Then I asked the bot to make it less generic and generate fewer false positives. Then it was too specific. But the interesting part was that even though it wasn't very good at writing YARA rules, it was very good at writing programs that implement the MITRE technique I provided it with and bypass the rules that it wrote itself.
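To illustrate the kind of overly general rule Bar describes, here is a hypothetical sketch - not the actual rule ChatGPT produced - of a YARA rule for a keylogging-style MITRE technique (T1056.001, Input Capture: Keylogging) that keys on common API names:

```yara
// Hypothetical sketch of an overly generic YARA rule, of the kind
// described in the interview -- not the rule ChatGPT actually wrote.
// It matches Windows keyboard-input API names (MITRE ATT&CK T1056.001)
// that also appear in many legitimate programs, so it would produce
// the false positives Bar mentions.
rule Generic_Keylogger_T1056_001
{
    meta:
        description     = "Overly broad detection of keyboard input capture"
        mitre_technique = "T1056.001"
    strings:
        $api1 = "SetWindowsHookEx" ascii wide
        $api2 = "GetAsyncKeyState" ascii wide
        $api3 = "GetKeyboardState" ascii wide
    condition:
        any of them
}
```

Because the condition fires on any single common API string, hotkey utilities, games, and accessibility tools would all trigger it; tightening it to require several indicators plus file characteristics reduces false positives but risks becoming too specific - exactly the trade-off described above.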
Dave Bittner: So was this a matter of just referencing the YARA rules that it created and saying to it, you know, write something that'll bypass the rules that are right above?
Bar Block: Yeah, something like that. But, yeah, you have to, like, tell it, use that technique and bypass these rules. And then when I asked it to write a rule that could detect the malware it provided me with, it couldn't do that. It just wrote something very, very generic again and didn't really supply, like, a good rule.
Dave Bittner: It's fascinating to me that it seems as though they've tried to build in some prevention here, but it's relatively easy to work your way around that.
Bar Block: Yeah. It's really quite easy. All you have to do is, like, rephrase your request. I know ChatGPT has, like, more defenses that are not necessarily cyber related, but just, you know, related to topics it doesn't want to address, like everything related to religion and race, things like that. But you can ask it to play the role of another entity, and that entity has to do whatever you tell it to. And then it can do many things that usually ChatGPT won't do.
Dave Bittner: What's your perception of this as a tool for folks in the business that you're in, you know, doing research and analysis? Is there a real value here?
Bar Block: There can be a real value here. As I said before, the YARA rules that I was provided with weren't that good. But maybe with more training, or if given more examples, it could generate better results. We can also use it to create malware and try - ourselves, not using ChatGPT - to defend against it, because right now it seems that it's not really good at the defensive side of things. And maybe with some more work, ChatGPT can even be integrated into SIEM systems to provide SOC analysts with more information about things that are going on in their networks.
Dave Bittner: Do you see this as being a tool that could be a time saver for you?
Bar Block: In some aspects, it can. For example, if someone is trying to create a malware to try to bypass their own company's security for pen-testing assignments, they can use ChatGPT to do that. They can also use it to write basic YARA rules and try to improve them themselves - and, of course, to find examples of malware that you may have a hard time finding by yourself on VT - on VirusTotal or something. You can just ask it to write it and try to see how you can defend against it yourself. So when a real malware like that tries to get into your network, you will be safe.
Dave Bittner: Our thanks to Bar Block from Deep Instinct for joining us. The research is titled "ChatGPT and Malware: Making Your Malicious Wishes Come True." We'll have a link in the show notes.
Dave Bittner: The CyberWire "Research Saturday" podcast is a production of N2K Networks, proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. This episode was produced by Liz Irvin and senior producer Jennifer Eiben. Our mixer is Elliott Peltzman. Our executive editor is Peter Kilpe. And I'm Dave Bittner. Thanks for listening.