The CyberWire Daily Podcast 9.11.23
Ep 1903 | 9.11.23

UK's NCA and NCSC release a study of the cybercriminal underworld. HijackLoader's growing share of the C2C market. Russia's hacker diaspora in Turkey. Cyber diplomacy, free and frank.

Transcript

Dave Bittner: UK's NCA and NCSC release a study of the cybercriminal underworld. HijackLoader's growing share of the C2C market. Russia's hacker diaspora in Turkey. My interview with author David Hunt discussing his new book, “Irreducibly Complex Systems: An Introduction to Continuous Security Testing.” In our Industry Voices segment, Mike Anderson from Netskope outlines the challenges of managing Generative AI tools. And a senior Russian cyber diplomat warns against US escalation in cyberspace.

UK's NCA and NCSC release a study of the cybercriminal underworld.

Dave Bittner: The UK’s National Cyber Security Centre (NCSC) and National Crime Agency (NCA) this morning published a report looking at ransomware’s place in the cybercrime ecosystem, and outlining the attack chain used by ransomware actors. The agencies think that a broad view of the ransomware landscape is necessary to address the problem more effectively. 

Dave Bittner: In some ways, the report argues, attribution is superficial. “While on the surface, an attack can be attributed to a piece of ransomware (such as Lockbit), the reality is more nuanced, with a number of cyber criminal actors involved throughout the process. Tackling individual ransomware variants – something which the NCSC and NCA are frequently challenged on – is akin to treating the symptoms of an illness, and is of limited use unless the underlying disease is addressed. Taking a more holistic view by understanding the elements of the wider ecosystem allows us to better target the threat actors further upstream, in addition to playing ‘whack-a-mole’ with the ransomware groups.”

Dave Bittner: So no whack-a-mole, say NCSC and NCA. Why is this? It’s because the criminals aren’t stupid, or at least not in a way that would tend to make them run afoul of the usual sanctions, indictments, and prosecutions. They rebrand, they modify code, and they distance themselves from the details of the original attacks. These simple measures are sometimes enough to keep them in business.

Dave Bittner: The criminal-to-criminal markets facilitate this kind of dodging. As the report notes, “each function can be conducted by a different threat actor and sold to each other as a service.” It’s also possible for gangs to vary their tools, to use, in the report’s language, “different functions.” And indeed some functions are merely optional, useful in some cases but not in others.

Dave Bittner: The report recommends that organizations concentrate on the high-level attack paths, and especially the methods by which the crooks gain initial access, as opposed to the specific gonif at the keyboard. Leave that to the people with badges.

HijackLoader's growing share of the C2C market.

Dave Bittner: Of course, what’s for sale in the C2C markets remains interesting. Researchers at Zscaler, for example, are warning about a new malware loader that’s gained market share in the underground market. HijackLoader, as it’s known, has spiked in popularity over the past few months. The loader first emerged in July 2023, and is being used to deliver several malware families, including Danabot, SystemBC, and RedLine Stealer. Zscaler notes, “Even though HijackLoader does not contain advanced features, it is capable of using a variety of modules for code injection and execution since it uses a modular architecture, a feature that most loaders do not have.” The researchers add, “We expect code improvements and further usage from more threat actors, especially to fill the void left by Emotet and Qakbot.”

Dave Bittner: So while the rest of us look to closing off those attack paths, go get ‘em, officer, and slap the cuffs on ‘em. Virtually, sure, but we hope physically, too.

Spyware in malicious Telegram apps.

Dave Bittner: Kaspersky discovered several malicious Telegram clones in the Google Play Store that appear to be designed to target Chinese-speaking users, particularly China’s Uighur population. The apps purport to be faster versions of the legitimate Telegram app, and are “capable of stealing the victim’s entire correspondence, personal data, and contacts.” BleepingComputer notes that the apps have been downloaded more than 60,000 times. Google has since removed the apps from its Play Store.

Russia's hacker diaspora in Turkey.

Dave Bittner: The Financial Times reports that among the many thousands of young, military-aged men who fled Russia last fall to evade increased conscription, including the recall of former conscripts who'd finished their military service, were a large number of hackers, IT workers, and, most significantly, cybercriminals. Turkey received several thousand such emigrants, and many of them have either connected with local Turkish gangs or formed small criminal groups themselves. Conditions for cybercriminals in Turkey are not as easy as they are in Russia, where cyber gangs operate with the connivance of the government. They enjoy no such official protection in Turkey, but hope to stay at large by keeping their crimes petty, by avoiding targets in Turkey (where victims are likely to complain to the authorities), and by keeping their trade as unobtrusive and evasive as possible.

Dave Bittner: The expatriate criminals' preferred tool is Redline, commodity malware that nonetheless seems to evade widely used defensive software. It's "most often downloaded inadvertently by people using illegal websites to play video games or pirated versions of popular software." The criminal take is retail-level stuff: passwords and other login credentials as well as credit card data. It also includes stolen cookies, possession of which makes it easier to use the other data the thieves hold. The information is traded in an underground market researchers call "the Underground Cloud of Logs."

Dave Bittner: The newly arrived Russians are said to have taught the existing Turkish cybercriminals how to make better use of their tools, and in particular how to organize their stolen data in ways that render them more attractive in the C2C markets.

Russian cyber diplomat warns against US escalation in cyberspace.

Dave Bittner: In an interview with Newsweek, Artur Lyukmanov, director of the Russian Foreign Ministry's International Information Security Department and special representative to President Vladimir Putin on international cooperation on information security, reiterated familiar Russian non-denial denials of Moscow's offensive cyber operations--US allegations are accompanied by a "lack of hard evidence," he said. Thus, it's not so much "we didn't do it" as "where's your evidence?" and "besides, you're the guilty ones here." He described the US National Cybersecurity Strategy as an inherently escalatory document that deeply implicates the US government and US corporations in "preparations for 'cognitive warfare.'" He said, "We want to halt further deterioration. A mistake in the use of ICTs may lead to a direct conflict, an all-out war, especially as that the White House is aware that Russia has all the necessary capabilities to defend itself. A devastative computer attack against our critical information infrastructure will not be left without response."

Dave Bittner: One of the principal lessons the US has drawn from Russia's war is that effective cyber defense depends upon international cooperation, and specifically upon cooperation among the public and private sectors of democracies. Breaking Defense reports that Ambassador-at-Large Nate Fick told the Billington Cybersecurity Summit last week that a new strategy for promoting such cooperation was under preparation, and that it would be circulated this fall.

Remember 9/11.

Dave Bittner: Finally, we’d be remiss if we didn’t close with a brief remembrance of the terrorism of 9/11, now twenty-two years in the past. Join us in sparing a thought for those who suffered and died in the attacks and their aftermath. And also, when you can, reach out to those who mourn or care for them. Sometimes the best thing you can do for grief is simply listen.

Dave Bittner: Coming up after the break, my interview with author David Hunt, discussing his new book "Irreducibly Complex Systems: An Introduction to Continuous Security Testing." In our Industry Voices segment, Mike Anderson from Netskope outlines the challenges of managing generative AI tools. Stay with us. [ Music ] Mike Anderson is Chief Digital and Information Officer at Netskope, with over 25 years of experience in the industry. In this sponsored Industry Voices segment, I asked Mike Anderson about the proliferation of generative AI tools, and how organizations can balance the utility of these tools against the potential security risks they present.

Mike Anderson: There's a lot of conversation from the boardroom down around how is gen AI going to impact how we operate as an organization? What skills is it going to require? What skills and what positions may be impacted, and which ones may not? And there's a lot of conversation because we can't block people from using it in our organization. In fact, that's a very daunting task for most companies, because every week we're seeing three to five new startups coming out building on top of, you know, the existing platforms, like OpenAI's ChatGPT, Bard, and others. And so you've got that aspect of it. But at the same time, there's lots of concern around our people, you know, uploading sensitive information into public models. How do we make sure we distinguish between a public model and a private model? And so there's a lot of questions and governance-type things that people are talking about today, because they definitely want to say, how do we safely enable, you know, generative AI in our organizations, but at the same time, you know, stay on top of the changes that are going on globally as well.

Dave Bittner: And how do you suppose an organization can come at striking that balance between the usefulness of these tools, but those legitimate concerns as well?

Mike Anderson: Yeah. So what I'm seeing a lot of my peers doing in the industry is they're paying to open up some of the new paid models from, you know, providers, whether it's a Google or a Microsoft. They're buying licenses now for their employees to give them a safe place to go innovate, versus, you know, some of the free models. You know, if we think about ChatGPT, we have the free model, where things get uploaded into the public large language models, and then we've got our private ones, where the data is contained within our environment. That creates a good framework. But the challenge is there are so many of these new applications popping up. Grammarly is a great example. It's very difficult to distinguish between a paid Grammarly subscription and a free subscription. And so because we can't distinguish, we block our users from using that platform until we get to the point where we can distinguish. Because what we don't want is effectively a keylogger logging all the interactions our users are having, and having that information go into a public large language model. So a lot of it is give people a place to go and experiment in a safe way, versus outright blocking.

Dave Bittner: So where do organizations stand when it comes to addressing things, like, data governance and consent management?

Mike Anderson: That's a great question. What I see people doing today is, one, we're looking at the lineage of data. For example, there was a case that came up recently where an attorney had basically gone through and searched for a brief or a precedent to support a claim they were making, and they used generative AI. You know, so from a data governance standpoint, one of the things we're seeing is people trying to make sure there's a good lineage of where the data came from. What's the source? The attribution of the data is key, because we can't just rely on things in a public large language model, because it's sourcing data from the entire Internet. It's scanning everything. And so there was a good example recently in a courtroom where an attorney basically used information from ChatGPT to support the claim they were trying to make, but the data was actually from something that was fictitious, not something that was real. And so that starts to bring concern. We're actually seeing in courtrooms where people have to cite their evidence. They have to attribute where that information came from. They have to actually say whether they used generative AI in any form, in anything they do from a legal standpoint, to make sure that it stands up. And so when we think about data governance, it's that lineage: where did that data come from? How is it attributed, so that when decisions are being made, especially even on private models, I can make sure that I trust the information that's coming from it to make a business decision? And so oftentimes, you know, to help temper expectations today around kind of where we're at, what you see is some of my peers are giving questions to their board members and their C-suite to say, go to ChatGPT or some of these public models and ask the following questions, and look at the answers you get. These are questions that all the board members and all the C-suite would know the answers to, so they can compare whether the answers are accurate or not. And it's a good way to level-set expectations when you really think about the governance of data that's used to make these decisions. And so, you know, I find that to be a very good place to start. But I feel like, you know, we're at the beginning. This is a truly transformational moment in technology. I correlate it to, you know, when we saw the iPhone introduced in 2007. We're at that point now with generative AI, where we're just at the beginning, and everyone's really trying to put the structures around it in real time.

Dave Bittner: What about the communications channels themselves? You know, securing that pathway between the user and these large language models.

Mike Anderson: Yeah. So the ones where you're going directly to the tool, the ChatGPT, those are the easier ones to address. Where it becomes more complicated is this world of third-party plug-ins we see, whether it's Microsoft or Google or Salesforce, any of the key SaaS applications that we leverage today. We have the ability to plug in various, you know, add-ons. We see it in the browser world: if we look at Google Chrome, I can download add-ons for my Google Chrome browser. And so it's those types of plug-ins where I feel like we have more heartburn, because they're harder to detect. And so it really comes into this whole conversation around third-party risk. And that's another area where we're also using some of our own technology. We just announced here recently, from a SaaS security posture management standpoint, the ability to identify all the different plug-ins that people are trying to use and assess risk against those. We've catalogued over 70,000 applications, each with their own individual risk scores. And so then we can apply that same risk scoring to those third-party plug-ins that people are trying to use, whether it's in a browser, or it's something that plugs directly into a Teams or a Slack or a collaboration-type tool we're using today within our organizations.
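
By way of illustration, here is a minimal sketch in Python of the kind of catalog-driven policy decision Anderson describes. The plug-in names, scores, and threshold below are invented for illustration; they are not Netskope's catalog or API.

    # Hypothetical catalog of plug-ins with risk scores on a 0-100 scale;
    # real products maintain far larger catalogs (Anderson cites 70,000+ apps).
    RISK_CATALOG = {
        "grammarly-free": 85,
        "grammarly-business": 30,
        "unknown-chrome-addon": 90,
    }
    BLOCK_THRESHOLD = 60  # invented policy cutoff

    def policy_decision(plugin_id: str) -> str:
        # Unrecognized plug-ins get the most conservative score, mirroring
        # the "block until we can distinguish" approach described above.
        score = RISK_CATALOG.get(plugin_id, 100)
        return "block" if score >= BLOCK_THRESHOLD else "allow"

    print(policy_decision("grammarly-business"))  # allow
    print(policy_decision("grammarly-free"))      # block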

Dave Bittner: What are your recommendations for organizations who are just getting started on this journey? They realize and recognize the power of these tools, but perhaps they're feeling a little overwhelmed at getting a handle on securing them. You have a suggestion for where to begin and what pathway to take?

Mike Anderson: Well, selfishly, you know, we want everyone to take a look at Netskope, because we use our own technology and feel pretty good about how we're managing these things internally. What I always recommend to people is: realize that you're not going to block it. I mean, if I go back to the late '90s, before we had email that could work outside of our organizations, we saw the consumerization of IT. When the free email platforms came out, like Yahoo Mail, what we saw is people would forward their work email to their personal email so they could get access to it at home. And that was a forcing function for organizations to open up email so people could access it from outside the four walls of their organization. We're seeing the same thing happen today when we think about generative AI. What we need to do is give people a safe place to go experiment. Outright blocking is not a good strategy. So how do we get people that safe sandbox and educate them? You know, I always say give people a license to go fishing, right, but make sure they're fishing in the right place with the right equipment, so when they get something on the line and they reel it in, we have a positive outcome versus perhaps a negative outcome. And so, you know, put the right guardrails in place and give people the license to experiment, but help them understand the right place to experiment. And then use tools that are out there in the market today to police those third-party components we spoke about, those third-party plug-ins, but then also to make sure we're protecting and guiding our users, giving them that GPS or that compass, so they know where to go, where not to go, what to do and what not to do in real time. And don't just rely on someone reading something or attending a webinar internally. We know people have to hear things 27 times before they remember it. [ Music ] So let's make sure and remind them every time, so it starts to become, you know, brainstem for all of our users.

Dave Bittner: That's Mike Anderson from Netskope. [ Music ] David Hunt is co-founder and CTO at Prelude Security, and author of the new book "Irreducibly Complex Systems: An Introduction to Continuous Security Testing." David Hunt has worked at organizations like MITRE, Mandiant, John Deere, and the US government. While at MITRE, he designed and built the Caldera framework, an open-source tool for conducting semi-autonomous purple team assessments. Our conversation begins with him describing his motivation for writing the book.

David Hunt: Yeah. I've been in the security space for, I guess, about 17 years now, and I've done a lot of writing on the topic. I've kind of bounced between public and private sector in terms of red teaming and offensive security. And I've seen a shift in the last, I don't know, 6, 12, 18 months in how security testing is happening across different organizations. Watching that trend happen, and then really feeling it through my daily work, I wanted to get that down on paper. I think it's pushing against the grain in a lot of ways in terms of what has been done in security testing, this idea of continuously testing your security. And I wanted to give an explanation of where I see that trend going and some of the technical reasoning as to how we got there.

Dave Bittner: Well, can you help us with a definition here? How do you describe continuous security testing?

David Hunt: The way I like to describe it is repeatedly testing whether your defenses are capable of defending against emerging threats. Maybe a more understandable way of saying that is: as we read the news and we see different attacks occurring, we've talked a lot about the MOVEit vulnerability over the last couple of months, the question always comes down to, could this happen to me? Am I vulnerable to this actual attack? And the idea behind continuous security testing is, around the clock, to be able to test each one of your security controls for that particular vulnerability. So even if you don't have it today, if it popped up tomorrow, you would understand how your defense reacted to it.

Dave Bittner: And what are the advantages of adopting this kind of system?

David Hunt: It's really information and intelligence early on. When we look at what we've done in red teaming in the past, we are able to create intelligence, but it's point-in-time. Taking the MOVEit example, we might see that we have the MOVEit vulnerability. We understand that, at this point in time, we have that vulnerability, but we lose sight of it next month, the month after that, and so forth. When you're running tests continuously, what you start to realize is you're able to regression test an entire production infrastructure. So it doesn't matter when the vulnerability comes into your environment, or if it goes away and comes back; you actually have a heartbeat the entire time.

Dave Bittner: Can you give us some examples here of how this actually works in practice?

David Hunt: So like a lot of security testing, it's two parts. What you want in continuous security testing is one part that's what's called a probe, or an agent. You deploy those out on your endpoints, so things like computers, servers, containers, and so forth, and those things create a persistent connection back to what you would refer to as your command and control center. That command and control center is basically an automated scheduler. So the behavior that you want in the real world is to set your command and control center up where it can schedule out tests on a repeated basis to all of your endpoints. And as these endpoints retrieve tests, they execute them and spit the results back to the command and control center, where those results can be aggregated.
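
To make that two-part architecture concrete, here is a minimal sketch in Python of a probe polling its command and control center for scheduled tests, executing them, and reporting results back. The control center address, the /tests and /results endpoints, and the field names are all hypothetical, not Prelude's actual API.

    # Minimal probe-and-scheduler sketch: the endpoint agent polls for
    # queued tests, runs each one, and reports the result code back.
    import subprocess
    import time

    import requests

    CONTROL_CENTER = "https://control.example.internal"  # hypothetical address

    def poll_and_run():
        # Ask the scheduler which tests are queued for this endpoint.
        tests = requests.get(f"{CONTROL_CENTER}/tests", timeout=5).json()
        for test in tests:
            # Execute the test; continuous testing cares about the exit
            # code, not the terminal output (more on that below).
            completed = subprocess.run(test["command"], shell=True,
                                       capture_output=True)
            requests.post(f"{CONTROL_CENTER}/results", timeout=5,
                          json={"test_id": test["id"],
                                "code": completed.returncode})

    if __name__ == "__main__":
        while True:          # the "persistent connection" back to the
            poll_and_run()   # scheduler, approximated as a polling loop
            time.sleep(60)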

Dave Bittner: And how do you ensure that in this process, you're going to do no harm?

David Hunt: That's one of the biggest tenets that I go into in the book: continuous security testing needs to do no harm. That harm, I think, is most obviously represented in the tests themselves, making sure that a test cannot actually create a negative effect on the host. Because continuous security testing is designed to run in production and across all your devices, it introduces that as a potential risk. So the way that I describe this in the book is each one of the tests should have guardrails built in. The tests themselves, for example, can be limited based on the amount of runtime that you give them. I like 10 seconds, so you try to accomplish everything that you need to accomplish in the test within 10 seconds. Another guardrail that's pretty popular is verification of where the test comes from. Each one of these endpoint probes that you deploy inside of your environment should have the ability to verify the test is coming from a location that you approve. That avoids any sort of man-in-the-middle attacks, which would be one of the biggest threat vectors to a system like this.
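
Here is a minimal sketch in Python of those two guardrails, the 10-second runtime ceiling and origin verification. The HMAC shared-key scheme is our own stand-in; a real system might verify asymmetric signatures instead.

    import hashlib
    import hmac
    import subprocess

    SHARED_KEY = b"provisioned-out-of-band"  # hypothetical provisioning step

    def verify_origin(payload: bytes, signature: str) -> bool:
        # Reject any test whose signature does not match the approved
        # source's key, blocking man-in-the-middle test injection.
        expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, signature)

    def run_guarded(command: str, payload: bytes, signature: str) -> int:
        if not verify_origin(payload, signature):
            return -1  # refused: unverified origin
        try:
            # The 10-second ceiling limits how much harm a runaway test
            # can do on a production host.
            return subprocess.run(command, shell=True, timeout=10).returncode
        except subprocess.TimeoutExpired:
            return -2  # killed: test exceeded its runtime guardrail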

Dave Bittner: Well, and then how do organizations take the information that they've gathered here and turn that into some sort of actionable strategy?

David Hunt: That is a great question, because this is also one of the biggest changes in continuous security testing that I go into in the book. I like to describe it from where we're coming from with security testing. Where we're coming from is a world where we run security tests, and then we have a security engineer or a red teamer contextualize what those results are in order to determine what to do remediation-wise. And so, for example, you would run a test from the terminal, you would look at the terminal output, and you would say, hey, these IP addresses have specific ports open that have a vulnerability; therefore, based on my knowledge and ability to contextualize the terminal output, here's what I would do. Now, that doesn't scale well beyond a couple of people inside of a smaller environment. So continuous security testing takes a much more production-ready type of approach. What continuous security testing emphasizes is a simple result code, an exit code, be returned for every test. So when you run an actual test, the terminal output is disregarded, and a particular exit code is sent off of the endpoint to your command and control center. Now it's the aggregate of those exit codes that paints the picture and does the contextualizing for you in a very automated way. So, for example, one exit code might be 105. 105 might be "quarantined test," which would indicate that a defensive control, say an EDR, quarantined the security test while it was running. That'd be a good thing; you want the defense to quarantine bad things. And so at scale, you're able to collect all of those codes for all of these tests and build basically a giant heat map of what your environment looks like at any time.
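
Here is a minimal sketch in Python of that aggregation step. Only exit code 105, "quarantined test," comes from the conversation; the other code, the field names, and the sample results are invented for illustration.

    from collections import Counter

    CODE_MEANINGS = {
        0: "test executed unimpeded",    # the technique would have worked
        105: "test quarantined by EDR",  # the defense caught it: good
    }

    def summarize(results: list[dict]) -> Counter:
        # Each result looks like {"host": ..., "test_id": ..., "code": 105};
        # counting codes across all hosts yields the heat-map view.
        return Counter(CODE_MEANINGS.get(r["code"], f"code {r['code']}")
                       for r in results)

    sample = [
        {"host": "web-01", "test_id": "moveit-check", "code": 105},
        {"host": "db-01", "test_id": "moveit-check", "code": 0},
    ]
    print(summarize(sample))
    # Counter({'test quarantined by EDR': 1, 'test executed unimpeded': 1})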

Dave Bittner: That's David Hunt from Prelude Security. The book is titled "Irreducibly Complex Systems: An Introduction to Continuous Security Testing." And that's the CyberWire. For links to all of today's stories, check out our daily briefing at thecyberwire.com. Don't forget to check out the Grumpy Old Geeks podcast, where I join Jason and Brian on their show for a lively discussion of the latest news every week. You can find Grumpy Old Geeks where all the fine podcasts are listed. We'd love to know what you think of this podcast. You can email us at CyberWire@n2k.com. Your feedback helps us ensure we're delivering the information and insights that help keep you a step ahead in the rapidly changing world of cybersecurity. We're privileged that N2K and podcasts like the CyberWire are part of the daily intelligence routine of many of the most influential leaders and operators in the public and private sector, as well as the critical security teams supporting the Fortune 500 and many of the world's preeminent intelligence and law enforcement agencies. N2K's strategic workforce intelligence optimizes the value of your biggest investment, your people. We make you smarter about your team while making your team smarter. Learn more at n2k.com. This episode was produced by Liz Irvin and Senior Producer Jennifer Eiben. Our mixer is Trey Hester, with original music by Elliott Peltzman. The show was written by our editorial staff. Our executive editor is Peter Kilpe, and I'm Dave Bittner. Thanks for listening. We'll see you back here tomorrow.