CSO Perspectives (Pro) 5.26.20
Ep 8 | 5.26.20

Intrusion kill chains: a first principle of cybersecurity.

Transcript

Rick Howard: [00:00:00] During the first Gulf War in 1991, Iraq's mobile Scud missiles gave the United States Air Force and Navy pilots trouble. The Iraqi soldiers were able to fire them long before the U.S. planes could find their location and blow them up. After the war, Gen. John Jumper changed air combat doctrine to address that issue. He formalized the techniques necessary to compress the time it takes to find and kill the enemy on the battlefield. The Air Force's target acquisition model is called Find, Fix, Track, Target, Engage and Assess. Also known as F2T2EA because, you know, military acronyms. Or simply, the Air Force calls it the kill chain. Jumper's mandate to the Air Force was to compress the kill chain from hours or days to under 10 minutes. And now fast forward to 2010. The Lockheed Martin research team took that idea and applied it to cyberdefense. 

Rick Howard: [00:01:11]  My name is Rick Howard. You are listening to "CSO Perspectives," my podcast about the ideas, strategies and technologies that senior security executives wrestle with on a daily basis. This is the third show in a planned series that we are doing on network defender first principles. In the first episode, I described the construction of a metaphorical infosec wall based on first principles and presented an argument about what the ultimate cybersecurity first principle is. And that will be the foundation of our infosec program. In the second episode, I laid the first block on that foundation. It is a passive, defensive strategy that involves systematically closing all the windows and doors in your digital environments. You know it as zero trust. In this episode, I cover a more active strategy with a sexier, military-sounding name, a complementary strategy to zero trust. It's called the intrusion kill chain. If you have somehow landed here without hearing the first two episodes, you should really go back and listen. You can listen to all of these as standalone episodes if you want, but then you won't get the benefit of understanding how my cool wall metaphor links everything together. And I know you're all about the metaphors. 

Rick Howard: [00:02:32]  The idea of cyber intrusion kill chains is so obvious to me that I am astonished that many organizations do not have a robust strategy already in place. It began with the famous white paper from the Lockheed Martin research team back in 2010. That paper introduced the network defender world to the concept and revolutionized how we all thought about digital protection. The previous strategy was something called defense in depth. And most network defenders were pursuing some version of it as far back as the late 1980s. The main characteristic of defense in depth - and you can say this about zero trust, too - is that it's passive. Network defenders would install overlapping digital defensive controls and hope that the bad guys would run into them. The in-depth part of the strategy was the idea that if the bad guys somehow got past the first control, they would probably run into the second. And if they got past that, they would most likely run into the third, et cetera, et cetera. 

(SOUNDBITE OF FILM, "THE KING AND I") 

Yul Brynner: [00:03:29]  (As King Mongkut of Siam) When I shall sit, you shall sit. When I shall kneel, you shall kneel, et cetera, et cetera, et cetera. 

Rick Howard: [00:03:36]  The reason the strategy is passive is that it's not based on how specific cyber adversaries attack their victims. It is a general purpose protection scheme. It is similar to an infantry platoon setting up a defensive perimeter. The soldiers have no specific knowledge of when and where the enemy will attack. So they install a defense-in-depth posture. They dig fighting positions so that they're not exposed to enemy fire. They put overhead cover on the fighting positions so that they're protected from hand grenades. They coordinate overlapping fields of fire with the positions on their left and right so that there are no gaps in the coverage area to their front. You get the idea. Defense-in-depth tactics are not based on intelligence. They are general purpose practices designed to defend against any kind of attack. They are necessary but not sufficient. The genius of the intrusion kill chain strategy is that it provides a framework for deploying defenses where we know the enemy must travel. In my infantry platoon scenario, it's like discovering the exact avenue of approach the enemy will take to attack our position. Any platoon leader who knew the approaching enemy would most likely cross the river in front of the position at a specific point and then walk up a slight rise would have something special planned for when the enemy got there. It's the same thing for digital defense. 

Rick Howard: [00:04:58]  The Lockheed Martin team realized that all cyber adversaries, regardless of their motivations - like crime, espionage, hacktivism, low-level cyber conflict and just general mischief - and regardless of the tools that they use to accomplish their mission, must traverse the same digital ground to complete their task. In other words, all cyber adversaries have to negotiate the same attack milestones to be successful. Lockheed Martin called these milestones the intrusion kill chain, taking the name from what the U.S. Air Force called its process for quickly tracking down targets on the battlefield. Since the original Lockheed Martin publication, many network defenders have tweaked the concept by adding their own special sauce to it. But from the original, these are the seven attacker milestones. 

Rick Howard: [00:05:47]  Recon - the bad guys recon their victim's network, looking for potential defensive weaknesses. Weaponization - they take that intelligence, adjust their existing tool sets and create new ones as needed to leverage those discovered weaknesses. Delivery - they deliver some of their tools to the potential victim. Exploitation - they either trick their victims into running one of their tools that gives them access to the victim's machine or trick them into giving up their credentials so that the bad guys can run those tools as if they were the victims themselves. Installation - the bad guys install their tools on the victim's computer. Command and control - then the bad guys establish a communications channel back out to the internet somewhere so that they can report status and download additional tools they might need for the campaign. Actions on the objective - they begin to move laterally within the victim's network, looking for the data they have come to steal or to destroy. Once they find it, they exfiltrate it out through the command and control channel. 
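
To make that sequence concrete, here is a minimal sketch, in Python, of how a defender might represent the seven milestones and track which ones already have a control behind them. The phase names come from the Lockheed Martin paper; the example controls are hypothetical placeholders, not a recommended security stack.

from enum import Enum

class KillChainPhase(Enum):
    RECONNAISSANCE = 1
    WEAPONIZATION = 2
    DELIVERY = 3
    EXPLOITATION = 4
    INSTALLATION = 5
    COMMAND_AND_CONTROL = 6
    ACTIONS_ON_OBJECTIVES = 7

# Hypothetical controls mapped to the phases they cover; names are placeholders.
controls_by_phase = {
    KillChainPhase.DELIVERY: ["email attachment sandboxing"],
    KillChainPhase.EXPLOITATION: ["block macros in documents from the internet"],
    KillChainPhase.COMMAND_AND_CONTROL: ["DNS tunneling detection"],
}

def uncovered_phases(coverage):
    """Return the kill chain milestones with no control deployed yet."""
    return [phase for phase in KillChainPhase if not coverage.get(phase)]

for phase in uncovered_phases(controls_by_phase):
    print("No control yet for:", phase.name)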

Rick Howard: [00:07:04]  Think about each milestone in the attack sequence, each link in the chain as an opportunity to disrupt the hacking campaign. Instead of the defense in depth idea of general purpose controls, network defenders deploy controls at every phase of the intrusion kill chain designed specifically for every known adversary campaign. For example, if we want to stop a Fancy Bear campaign, we design and deploy specific controls to counter how Fancy Bear recons for victim weaknesses, for the malware they deploy, for the techniques they use to deliver their malware to their victims, for the exploitation code they use to compromise victim zero, for the process they use to download and install additional tools to help them in their mission, for the interdiction of their communications channel and, finally, for how they move laterally within the victim's network, looking for the data they've come to steal or to destroy. With this model, we know exactly where Fancy Bear is going to cross the river in the digital space. So let's make it difficult for the bear every step of the way. 

Rick Howard: [00:08:13]  But I'm out of my depth here. It is one thing to talk about Fancy Bear in the abstract. It is quite another to track these adversary groups and the campaigns that they run on a daily basis. I needed to talk to an expert, someone who has been in the trenches, tracking adversary campaigns from the very start, when Lockheed Martin first published its paper. I needed to talk to my good friend Ryan Olson. 

Ryan Olson: [00:08:37]  I am Ryan Olson. I'm the vice president for intelligence for Palo Alto Networks, and I run our global threat intelligence team that we call Unit 42. 

Rick Howard: [00:08:45]  We have been friends for well over a decade, and we worked together at two different companies doing commercial cyber intelligence, iDefense and Palo Alto Networks. So I was going back through how long we've worked together, and I couldn't pin down the exact numbers. How long have we been doing cyber intelligence together? 

Ryan Olson: [00:09:03]  2006 was the - when I did my internship at iDefense, when I first met Rick Howard. And he said the magic words to me that I still remember - iDefense is a magazine (laughter). I still remember. 

Rick Howard: [00:09:21]  (Laughter) I still believe that. 

Ryan Olson: [00:09:22]  I know. I don't think you were wrong, either. I just - I remember that was, like, the very first meeting that we had. 

Rick Howard: [00:09:27]  You've been tracking one adversary group called OilRig for a while now. So I'm interested in the attack sequences that OilRig has launched against its victims, right? So let's start with weaponization. Can you tell me the tools that OilRig has used in the past that we can look for? 

Ryan Olson: [00:09:47]  Yeah, the tools are some of the most interesting things that we've seen from OilRig. Typically, when we think about an adversary in that weaponization phase, we're thinking about, how do they build these tools? How do they select them? And if the tools are commodity, off-the-shelf things, that's interesting because they're using things that they can freely access. But when they're custom tools, they give us an opportunity to actually trace the evolution of the tool over time and make it easier for us to attach multiple attacks back to the same group. So if they're building the tools and they're using them themselves, it makes it easier for us to connect the dots on those. And OilRig loves custom tools - tons and tons of custom tools, occasionally off-the-shelf tools, especially for post-exploitation stuff, but most of the time, completely custom. 

Rick Howard: [00:10:32]  So then the next phase is delivery. How do they deliver some of those tools to get them to victim zero on their target list? 

Ryan Olson: [00:10:39]  So the vast majority of the attacks we've seen from OilRig for delivery of the tool has been over email. And typically, it's an email that has a file attached to it. The files are oftentimes Office documents, either Excel or Microsoft Word. And normally, they contain some really compelling information in the email itself to convince the victim to open that file. It'll be something related to their business, something that's topical, something that might be very relevant to the individual who receives it. But most of the time, it's that attachment that they want them to open. And once they open it, that's when their system's going to get infected. They do have custom... 

Rick Howard: [00:11:16]  So... 

Ryan Olson: [00:11:16]  ...Tools for this as well for those actual delivery documents, as we refer to them. 

Rick Howard: [00:11:20]  So for - in those files they send over, are they trying to do exploits where they're actually breaking in using a software exploit? Are they tricking them - tricking their victims into giving up their credentials somehow? 

Ryan Olson: [00:11:33]  So I look back across the delivery tools that we've seen used by OilRig in the past, and it's actually very rare that they exploit a vulnerability. In the majority of cases, they attach a file, the user opens it and then they have to enable macros in some way. They have to enable that active content, as Microsoft calls it, which will then run a PowerShell script or a VBScript, something else which actually infects the computer with the malware. We did see them exploit vulnerabilities back in 2017 - one Office vulnerability in particular, CVE-2017-0199. But that was the exception to the rule. Typically, these are just social engineering - convince the person to click on it so that they actually are saying, hey, I'll just go ahead and run this malware on your system. 

Rick Howard: [00:12:15]  So I'm confused a little bit, Ryan, about how OilRig gets access to the system. Are they tricking the victim into revealing credentials somehow? How is that happening? 

Ryan Olson: [00:12:25]  Yeah, so their typical goal is to install some malware on the system, something that's going to give the attacker access to the machine so that they can run commands on it. They've got different tools for this - custom ones. A couple names that we've given to the malware - one was called QUADAGENT. Another one was called Helminth, another called ISMAgent. We've given them lots of great names over time. 

Ryan Olson: [00:12:44]  And they're typically pretty simple. One of the reasons we think that OilRig has built so many tools is they don't spend a ton of time building these really in-depth custom tools that have GUIs and things like that, although they do occasionally. They build a tool that's relatively simple. And if it gets burned, they can move on and create another one. But it doesn't really take a lot of sophistication on that tool to actually be able to exfiltrate credentials. We've seen OilRig use Mimikatz and other sorts of credential dumping tools after the fact - after they've infected the computer with that initial implant - dump the credentials and then upload them back to the attacker. So that installation phase is really about - get the malware on the box. Get access to that host. 

Rick Howard: [00:13:24]  So now they're on the victim's machine. Do they do a custom command and control channel, or do they just use regular stuff that everybody else uses? 

Ryan Olson: [00:13:31]  OilRig has - their tools have the most interesting command and control channels. So typically - and this isn't true for all of them - but the common pattern we've seen is they have an HTTP-based command and control channel. So it talks to a web server - just makes a request out to it - sends some information and it gets some information back. But if that is blocked for some reason - so maybe there's an IPS in between those two hosts and it's blocking, or the domain for the website is blocked - instead, what the malware will do is fall back to a DNS tunnel, where it creates a custom tunnel just using DNS requests. It packs the data into the actual name of, you know, abcd.badguy.com, and it gets back an IP address, where it converts that from - you know, four 8-bit numbers - into actual data it can use. And they've built different command and control channels and different DNS tunnels over time - all with slightly different protocols, but all very interesting and good at evading detection. 
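
To picture how a tunnel like that works in principle, here is a minimal Python sketch of the encoding idea Ryan describes: outbound data packed into a subdomain label, inbound data recovered from the four octets of an A-record answer. It illustrates the concept only; it is not OilRig's actual protocol, and badguy.com is just the placeholder domain from the conversation.

import ipaddress

def encode_query(data: bytes, domain: str = "badguy.com") -> str:
    """Pack outbound bytes into a DNS query name, e.g. 61626364.badguy.com."""
    return data.hex() + "." + domain

def decode_answer(ip: str) -> bytes:
    """Recover four bytes of inbound data from an A-record answer."""
    return ipaddress.IPv4Address(ip).packed

print(encode_query(b"abcd"))              # 61626364.badguy.com
print(decode_answer("104.101.108.112"))   # b'help'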

Rick Howard: [00:14:28]  So then we're finally at the last stage - lateral movement and actions on the objective. Anything special here for OilRig? 

Ryan Olson: [00:14:34]  So in this case, OilRig once again comes back to interesting custom tools that we've seen. So the typical goal in an OilRig intrusion is to maintain persistence inside that network through access to credentials and by not relying just on that implant that they originally installed. So what we've seen them use, beyond just sort of stealing these credentials and, like I said before, you know, getting access to Active Directory domain controllers and dumping credentials out of them, is a series of custom web shells that they've created, where they typically target an Outlook Web Access server - so the email server the company's using. They infect it with some malware by getting access to the server, and that is actually accessible to the outside world via the internet, where they can actually issue commands on that host. 

Ryan Olson: [00:15:20]  So a web shell typically is in the form of, you know, a PHP or an ASP file. But in OilRig's case, they've made some really complicated ones that are built as compiled DLLs that are very, very stealthy and hard for someone to actually identify on that OWA server after it's been infected. And that allows them to keep access to that network, even if the defenders scrub all the other hosts. And it's an access point that's easily accessible because everyone wants to be able to access their OWA server from the internet. 

Rick Howard: [00:15:50]  I love Ryan. He makes it all sound so easy, but that is absolutely how you track an adversary campaign. You learn every detail of what adversaries do as they traverse the intrusion kill chain. 

Rick Howard: [00:16:06]  You might be saying that this is all well and good for government intelligence agencies and Fortune 500 companies. They can track adversary activity across the intrusion kill chain, but you have a small staff. You don't have the resources to do this kind of thing. How do small and medium-sized businesses compete against the seemingly limitless army of hackers out there trying to cause their organizations harm? Here's a nonintuitive thought. There are not that many of them. 

Rick Howard: [00:16:36]  It turns out that the number of adversary groups like Fancy Bear and OilRig that are active on the internet on any given day is not that big. Nobody knows for sure how many there are, but the Cyber Threat Alliance, a cyberthreat intelligence-sharing group made up of about 28 cybersecurity vendors - they estimate that the number is between 50 and a hundred. The number of attack campaigns they run collectively is also not known for sure, but we do know that many groups run multiple campaigns. The Cyber Threat Alliance estimates that the total number of campaigns from all of the adversary groups could be as high as 500 on any particular day. 

Rick Howard: [00:17:14]  The thing about multiple campaigns run by a specific adversary group like Fancy Bear or OilRig is that they are not a hundred percent unique. Fancy Bear doesn't string one set of techniques across the intrusion kill chain for one campaign and then string another completely different set of techniques for another. Instead, Fancy Bear tweaks. The group uses many of the same elements of campaign one in campaign two but might use a different malware version or a different communications protocol or change some other bit of the attack sequence. 

Rick Howard: [00:17:45]  In reality, adversary groups don't run 500 unique campaigns on any given day. They run variations of a smaller number of campaigns, and that puts the advantage with the defender. If you already have prevention controls in place for campaign one, when campaign two emerges, your networks are already protected for the bulk of the attack sequence, except for the new pieces. The trick is to get the intelligence on the new bits quickly, design prevention controls for your already deployed security stack and then distribute those controls to the security stack in all of its permutations, like behind the traditional perimeter, in your data centers, on your employee mobile devices, in your SaaS services and in your multi-cloud IaaS workloads. How we do that will take up two additional blocks on our first principle infosec wall: intelligence operations and DevSecOps. I will talk about those in later episodes in this series. 
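
As a rough illustration of that defender's advantage, here is a hypothetical Python sketch: if you record the techniques you already have controls for from campaign one, the remaining work for campaign two reduces to the difference between the two sets. The technique names are invented for the example.

# Hypothetical campaign records: kill chain phase -> set of observed techniques.
campaign_one = {
    "delivery": {"spear-phishing attachment"},
    "command_and_control": {"HTTP beacon"},
}
campaign_two = {
    "delivery": {"spear-phishing attachment"},
    "command_and_control": {"HTTP beacon", "DNS tunnel"},
}

def new_work(old, new):
    """Return only the techniques in the new campaign that are not already covered."""
    return {phase: techniques - old.get(phase, set())
            for phase, techniques in new.items()
            if techniques - old.get(phase, set())}

print(new_work(campaign_one, campaign_two))   # {'command_and_control': {'DNS tunnel'}}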

Rick Howard: [00:18:48]  My biggest pet peeve with the network defender community is that we have become comfortable reacting to the latest technical threats and don't stop to think that there are humans behind these technical threats trying to accomplish some task. We like to talk about the latest zero-day exploit or the newest piece of malware or the recent ransomware. We put all of our resources into blocking these one-off technical tools and spend little time trying to stop the humans that use them to accomplish some mission. 

Rick Howard: [00:19:16]  Our foundational first principle is to reduce the probability of material impact to our company due to a cyber event. We can play whack-a-mole by blocking technical tools all day long and will probably have some effect. But if we decide to utterly defeat the humans that are using those tools, our impact can be so much larger. We shouldn't just be blocking a random tool with no relation to the hackers behind it. We should be blocking every single tool, every possible technique at every phase of the intrusion kill chain that the hackers use. We want to give Fancy Bear no place to hide. We want to force OilRig to spend resources designing new attack tools, and then we will block those, too. 

Rick Howard: [00:19:59]  And I am not talking about attribution here, either. For the most part, it doesn't really matter if Fancy Bear is Russian, Chinese, Klingon or, for that matter, working for Hydra. What matters is the digital path Fancy Bear takes to harm my organization. Just like our infantry platoon leader, I want to have something ready at every digital river crossing, every digital bridge and at every digital mountain ridge that Fancy Bear crosses. I want OilRig so frustrated with the obstacles I put in its path that it decides I'm no longer worth the effort. 

Rick Howard: [00:20:29]  You can't do that if you only focus on random tools with no context about which group is using them. You can only do that if you design defensive plans targeting specific adversary campaigns. And it's not like there are a million campaigns running today. There are only 500. If you chose to, you could probably manage it all on a spreadsheet. I don't recommend it, but you could probably do it. 
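
And if you really did want to start with nothing fancier than a spreadsheet, a minimal sketch might look like this: one row per campaign, one column per kill chain phase, and a quick scan for the cells with no control yet. The file name and entries are hypothetical.

import csv

# One hypothetical row per campaign; an empty cell means no control deployed yet.
rows = [
    {"campaign": "OilRig-example-1", "delivery": "covered",
     "exploitation": "covered", "command_and_control": ""},
]

with open("campaign_coverage.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)

with open("campaign_coverage.csv", newline="") as f:
    for row in csv.DictReader(f):
        gaps = [col for col, value in row.items() if col != "campaign" and not value]
        if gaps:
            print(row["campaign"], "is missing controls for:", ", ".join(gaps))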

Rick Howard: [00:20:56]  Intrusion kill chains are a key part of the atomic thinking behind our first principle cybersecurity wall. It's as important as the zero-trust strategy. You can't just pursue one strategy and not do the other. You have to do both. In fact, you'll also have to follow a third strategy - resilience. But we'll talk about that in the next episode. 

Rick Howard: [00:21:15]  For this building block, though, we add a key element into our infosec wall. We're no longer just implementing a passive strategy for a general-purpose defense. Intrusion kill chains allow us to pursue an active defensive strategy tailored for how the adversary will specifically attack us, and it gives us another lever to pull to reduce the probability of material impact from a cyber event. 

Rick Howard: [00:21:40]  Well, that's a wrap. If you agree or disagree with anything I've said, hit me up on LinkedIn or Twitter and we can continue the conversation there. The CyberWire's "CSO Perspectives" is edited by John Petrik and executive produced by Peter Kilpe. Engineering, sound design and original music by the insanely talented Elliott Peltzman. And I am Rick Howard. Thanks for listening.