The Microsoft Threat Intelligence Podcast 8.28.24
Ep 26 | 8.28.24

Black Basta and the Use of LLMs by Threat Actors

Transcript

Sherrod DeGrippo: Welcome to "the Microsoft Threat Intelligence podcast." I'm Sherrod DeGrippo. Ever wanted to step into the shadowy realm of digital espionage, cybercrime, social engineering, fraud? Well, each week, dive deep with us into the underground. Come hear from Microsoft's elite threat intelligence researchers. Join us as we decode mysteries, expose hidden adversaries, and shape the future of cybersecurity. It might get a little weird, but don't worry, I'm your guide to the back alleys of the threat landscape. [ Music ] Thank you for joining us on another episode of "the Microsoft Threat Intelligence podcast." I am joined today by two security researchers from Microsoft, Daria Pop and Anna Seitz. Daria and Anna, thank you for joining me.

Daria Pop: Thank you for having us.

Anna Seitz: Thanks so much for having us.

Sherrod DeGrippo: So we have some cool stuff going on on the landscape. I know those of us who are kind of ransomware chasers have kept tabs on Black Basta for the past several years. So Daria, Black Basta is topping the charts lately when it comes to ransomware. What are you seeing there?

Daria Pop: Yeah, sure. So I think over the years, we've seen a lot of changes, especially in the initial access vectors leading to Black Basta. By this, I mean changes in both the methods and the malware that they use. So I'll give you a quick overview and a few examples. Early on, their main initial access method was phishing, right? The access brokers would send phishing emails with malicious URLs or malicious documents to distribute Qakbot. And then in early 2023, we started seeing some of these access brokers switch to Pikabot, in addition to Qakbot. Towards the end of the year, they moved to DarkGate and IcedID, which was right after the Qakbot takedown operation was announced. Around the same time, we saw the access brokers using a tool called TeamsPhisher to distribute DarkGate via Microsoft Teams. So this was another new element. But after that, we started seeing less reliance on email as they tried new social engineering techniques. For example, we saw them distributing BatLoader via SEO poisoning, which led to Storm-0506, one of the Black Basta ransomware operators. Another example would be ZLoader, which we hadn't seen in a while, and that was distributed in malvertising campaigns, leading to Storm-1811 and other Black Basta operators. And then most recently, there was an interesting campaign that we saw in April this year. They used a combination of phishing, voice phishing through phone calls and Teams calls, and also Teams messages. They used impersonation and RMM tools, right, remote monitoring and management tools like Quick Assist. And then later on, we saw AnyDesk, which was all part of that initial access step. So quite interesting. A lot of changes. These techniques are not novel, right? They've been around for years, but I think it's relevant to keep track of them and understand the context behind these changes.

Sherrod DeGrippo: They're not novel techniques, but they've become tried and true for threat actors, who have this catalog of TTPs that they use in combination to get initial access, and then to either distribute or keep that initial access to do further attacks. So I want to talk a little bit about malvertising for people that aren't familiar with it. It really is sort of a throwback thing that we've continued to see for years and years, where threat actors will buy traffic as malicious ads on sites and search engines, and they will deliver their payloads through an ad. Part of the reason that's so attractive to threat actors is that ad traffic is sliced up and carved up and brokered and sold through all of these different subsidiaries. So actually finding out who directly sold ad traffic and who bought that ad traffic is really hard, making it easy for threat actors to hide. So, Daria, anything else we need to know about Black Basta? Do you think there's any reason that they made all these changes?

Daria Pop: I think there are a lot of factors here. Definitely the Qakbot takedown operation had an impact and slowed things down for a bit. I think there are also a lot of malware offerings, right? If one malware family becomes unavailable, they will move on to the next one, which probably has similar capabilities or is even an upgrade. And then I think they are trying to get around detections all the time, so they will use whatever works and pivot from one method to another to just do their job, right, complete their task.

Sherrod DeGrippo: And we also talk a lot about how ransomware isn't really a single actor, but this ecosystem of all these different toolkits and groups and scripts and platforms and software as a service, and all these different things. As we look at the ecosystem for ransomware, do you think there'll be any changes coming up?

Daria Pop: I think it really depends on the partnerships. For example, with Black Basta, one of the most well-known and almost exclusive partnerships was with some of the Qakbot distributors, right, Storm-0464 and Storm-0450. That's still present, but it changed a lot, and then we saw a lot of new access brokers, probably due to the Qakbot takedown and different opportunities for actors to collaborate.

Sherrod DeGrippo: Got it. So something else that is involved in this Black Basta story is a piece that we put out a couple months ago about Quick Assist being used for social engineering. Can you kind of tell me what exactly is going on with that?

Daria Pop: Yeah, so that was a pretty interesting campaign. We saw threat actors impersonating IT support, pretending to conduct fixes on devices. A unique approach here was that they signed targets' email addresses up for different subscription services to flood the inbox with random content, and after that, they would call the target, pretend to be IT support, and say, "Hey, we can help you with that spam issue." During the call, they would try to convince the user to grant them access to their device through Quick Assist, and later on, yeah, we saw AnyDesk as well. And once the actor had that access, they would run cURL commands to download batch files or ZIP files to deliver malicious payloads. We saw Qakbot again, we saw additional RMM tools like ScreenConnect and NetSupport, and then also Cobalt Strike, SystemBC, and other tools that they were able to use for persistence, lateral movement, and further hands-on-keyboard activity. And in several cases, we saw Storm-1811 deploying Black Basta. So absolutely an interesting campaign to watch and document.

Sherrod DeGrippo: That is so fascinating. These RMM tools are generally part of what we would consider living-off-the-land attacks. When people talk about living off the land, the RMM tools, the remote monitoring and management tools, tend to be big favorites for threat actors, because they can do so much and they're already resident; they are set up with the permissions they need. So in terms of this Quick Assist social engineering campaign that we observed, what's the fix here? What can organizations do if they've got these kinds of tools installed? How can they protect themselves?

Daria Pop: Make sure that they're actually being used, right? If they're not being used, just make sure that they're disabled. And also, talking about the tech support scams here, we need to know that, for example, Microsoft would never call you out of nowhere and ask for access to your device to fix something. I know it sounds very basic, but focusing on this initial vector could stop something really bad, right? A potential major intrusion.

Sherrod DeGrippo: So, Daria, I know you've been in this world a long time. I've seen you out there for years. Do you have a favorite financially motivated threat actor?

Daria Pop: That's a good question, Sherrod. If I'm not wrong, I know your favorite one is Strawberry Tempest, so I will not steal that from you, and-

Sherrod DeGrippo: That's a good one, right?

Daria Pop: It is. I love that. I will say Vanilla Tempest, because-

Sherrod DeGrippo: Oh, okay.

Daria Pop: Because it just sounds like a comfy, warm coffee, and then my second choice would be Sangria Tempest because why not?

Sherrod DeGrippo: Sangria Tempest and Vanilla Tempest. For those listening, Microsoft has threat actor naming conventions where the second word relates to, sort of, their origin or motivation. So Typhoon is China, Sandstorm is Iran, Blizzard is Russia, Sleet is North Korea, and Tempest is crime, and we have a variety of others for other countries and things like that. Daria and I tend to go after Tempest actors; those are the ones that we tend to spend the most time looking at. So yeah, crime time. Speaking of threats and threat actors, let's talk a little bit with Anna about nation-state sponsored threat actors using LLMs and what we're seeing there. We talk about AI a lot. I think there's an AI hype cycle, and I am in the hype cycle occasionally, but threat actors are there too, right? If they're hearing all these things in the news- we've always known threat actors watch current events and leverage social engineering via current events. They know what's going on in the world. Anna, tell me, what are threat actors doing with AI?

Anna Seitz: Great question. So just as security researchers are beginning to leverage LLMs to streamline our research best practices, so are adversaries; they're using LLMs to try to operationalize their goals and targets. To back it up a little bit, LLMs, or large language models, are deep learning algorithms that use tons and tons of data to understand and predict text, and they can perform all kinds of things, but language processing tasks are obviously extremely attractive to threat actors right now. So Microsoft collaborated with OpenAI and was actually able to disrupt five state-affiliated malicious actors, and all of the identified OpenAI accounts associated with these threat actors were terminated. What we were seeing was a lot of querying for open-source information that adversaries were looking for, lots of translations, finding coding errors, running basic coding tasks. So overall, all of the use cases of threat actors using these LLMs are still very consistent with the current behaviors of these threat actors. They're just using it as a platform for greater optimization of their tasking and targeting.

Sherrod DeGrippo: So let's start with Forest Blizzard. And as we just learned, Blizzard is a Russia-based state actor naming convention. So what is Forest Blizzard doing with LLMs?

Anna Seitz: Yep, Forest Blizzard, a Russian state-sponsored threat actor. They primarily target government, NGOs, energy, and transportation in the United States, Europe, and the Middle East. Their use of LLMs involved research into satellite and radar technologies that most likely pertains to military operations in Ukraine. They were looking for things like assistance with scripting tasks, including file manipulation. The observed activity that Microsoft has seen is indicative of a threat actor using and exploring a new technology. So it's definitely something to watch, but it's not being leveraged in a way that is any different from their current objectives and targets at the moment.

Sherrod DeGrippo: And so that's Forest Blizzard. Let's talk Emerald Sleet, which is North Korea. North Korea, as we have talked about on the podcast in the past, and we've done quite a lot of work on North Korean threat actors, North Korea's wild. They don't really fit the typical mold of a cyber espionage group. What is Emerald Sleet doing with LLMs?

Anna Seitz: So they've been extremely active throughout 2023, and a lot of their recent operations rely on spear phishing emails. They're looking for intelligence on specifically prominent individuals that have expertise on North Korea. They have been impersonating academic institutions, trying to lure targets into replying to them about foreign policies related to North Korea. They were using LLMs to understand publicly known vulnerabilities, to help with their operations, and also to assist with language support for their spear phishing campaigns. I think a very interesting common theme we've seen with these threat actors that incorporate a lot of spear phishing and phishing campaigns into their TTPs is that LLMs are perfect for them. I mean, what a great opportunity to enhance your language and translation skills very rapidly. And it's only going to continue to get more and more difficult to differentiate between a real and a fake, I assume.

Sherrod DeGrippo: So those are sort of the things we used to talk about, right? And we still do, to an extent: looking at credential phishing emails and looking for bad grammar. So what do you think LLMs are going to do in terms of changing that game?

Anna Seitz: It's going to be tough. You even see students using LLMs to submit papers that they're trying to get better grades on; I think we're going to see the same thing with phishing emails. It's going to become very difficult to differentiate what's a phishing email and what's not. It's already very difficult. I mean, obviously phishing is a huge threat vector for a lot of organizations and companies right now, so it's going to get worse, I assume. But I think it still goes back to those security best practices within organizations. It's not getting exponentially more targeted; it's just becoming a much easier, low-hanging-fruit option for a threat actor trying to conduct a spear phishing campaign.

Sherrod DeGrippo: So the final actor that I thought we should touch on is Crimson Sandstorm. Sandstorm is an Iran-aligned actor. What are we seeing from them using LLMs?

Anna Seitz: Crimson Sandstorm is assessed to be connected to the Islamic Revolutionary Guard Corps. They've actually been around since 2017, and they've been using LLMs in ways that still reflect the broader behaviors the security community has observed from this threat actor. That includes things like social engineering assistance, troubleshooting errors, their .NET development, and researching ways an attacker might evade detection when they're on a compromised device. So they're really using it for that app and web development, and then, once again, the spear phishing campaigns, which is that common theme.

Sherrod DeGrippo: That's deeply concerning. So because of the concerns that we have around LLMs, can you tell us what's going on in the detection world? What's going on from the posture of people that are in defense or intelligence- defensive positions, defenders- to try to get ahead of some of this, or to try to stop some of this stuff?

Anna Seitz: So everybody is obviously very concerned about this. I like to say the collective alarm has been tripped. Right now, there's an executive order on AI that I know a lot of companies, including Microsoft, have aligned themselves with to protect Americans. There's also something called the Bletchley Declaration that came out of the AI Safety Summit back in November, which is all about identifying AI safety risks and building AI policies. And you can even see MITRE created their new MITRE ATLAS, developing technique codes to help identify behavior so defenders can better prepare and arm themselves against these new and emerging threats. So everybody's thinking about it, and those are just a small sample of the efforts people are aligning to, to combat bad guys using AI for bad things. I think it's only going to continue to become more refined over time, and we'll obviously get in line as well as defenders. And yeah, with great power there comes great responsibility, and AI definitely falls into that bucket.

Sherrod DeGrippo: Absolutely. I think the MITRE piece is the one that makes me honestly feel the best, because there's no other organization really that's 100% focused on understanding and putting a framework around threat actor TTPs. And I've seen some of the work that they've done with AI and LLMs in terms of looking at those as attacks. For example, Microsoft released a blog a couple months ago about the Crescendo attack. We had Mark Russinovich on to talk about that; I found it absolutely fascinating. But one of my first thoughts as someone who works in this industry was, wow, we already have a name for this attack style that leverages AI. I think we're going to have to start coming up with our own taxonomy, coming up with language and creating an easy way to talk about some of these things, unfortunately, exactly the way that we have had to come up with threat actor group names. So look forward to more controversy around naming attacks, as we have with naming threat actor groups. So, Anna, tell me, what can organizations really do about threat actors that are leveraging AI, or if they have concerns around AI safety?

Anna Seitz: So I would say, just keep doing what you're doing. Security hygiene needs to be a pillar of every organization's structure. You can enact further measures like multi-factor authentication policies and incorporating zero trust defenses. Just as a refresher, zero trust is: verify explicitly, use least privilege, assume breach. And for any actor that's using an AI-based tool for a phishing or social engineering attack, training staff to be able to identify phishing and social engineering type attacks helps. So I think education is key. But like we said before, there are no novel or unique campaigns that threat actors are using LLM information and LLM technology for at the moment, so just stay true and hold those defenses.

Sherrod DeGrippo: Got it. And I think that's something really important. As I've come to work more with AI at Microsoft, it's just another tool, and how threat actors use the tools available to them has been a perennial discussion that we've always had to think about and work with. There's just a new tool out there that we as defenders can use for ourselves, and we can see that threat actors are using it as well; we just have to figure out how to navigate that as defenders. I want to ask both of you, Anna and Daria, something I ask a lot of people that come on the podcast: are you using any AI tools in your daily life?

Anna Seitz: Yeah, absolutely. Copilot. I use Copilot all the time, and Copilot for Security as well.

Daria Pop: Yeah, that works for me too.

Sherrod DeGrippo: You use Copilot too, Daria?

Daria Pop: Yeah. I write a lot, so.

Sherrod DeGrippo: You use it in like Word?

Daria Pop: Yeah.

Sherrod DeGrippo: I use it a lot in Outlook for email. I do the thing that says, make that email sound nicer, because I can get a little- I'm not mean in email, but I'm a little maybe- I don't have all those, like, hello, thank you. I'm just like, hey, can you do this? And so I'll have Copilot fix that for me. I use ChatGPT a ton. ChatGPT is like tied to my hip. I need it all the time for everything. I'm just constantly asking it questions and asking it to do things. One of the things that I've really enjoyed is that people post code snippets, and they'll say, "Here's code for blah, blah, blah, blah." And I'm like, "I don't know that I trust you." So I'll put those code snippets into ChatGPT and say, "Can you tell me what this actually does, what this really means?" And most of the time, it's legit. But sometimes people post code snippets and scripts that are not actually doing what they say they're doing, or they're doing it badly, and the person who wrote it doesn't know that, and I'm not a developer, so I get a little help from the AI. Daria and Anna, thank you so much for joining me on "the Microsoft Threat Intelligence podcast" to learn about Black Basta and how threat actors are using LLMs. Hope to talk to you both again soon. Thank you.

Daria Pop: Thank you, Sherrod. [ Music ]

Sherrod DeGrippo: Thanks for listening to "the Microsoft Threat Intelligence podcast." We'd love to hear from you; email us with your ideas at tipodcast@microsoft.com. Every episode, we'll decode the threat landscape and arm you with the intelligence you need to take on threat actors. Check us out at msthreatintelpodcast.com for more, and subscribe on your favorite podcast app. [ Music ]