CSO Perspectives (Pro) 5.18.20
Ep 7 | 5.18.20

Zero trust: a first principle of cybersecurity.

Transcript

Rick Howard: [00:00:00] Sometime before 2013, Edward Snowden purchased a web crawler from the dark web for about a hundred dollars, and he turned it loose on the United States intelligence community's JWICS network. JWICS stands for the Joint Worldwide Intelligence Communications System, and it is where American spies store their super-secret information. Snowden collected over a million highly classified documents, walked out the door with them and, well, let's just say created quite an international incident. The crazy thing is that once he legitimately logged in to JWICS, he had authorized access to almost everything stored there. He didn't run a Mark Zuckerberg-level hack like we saw in the movie "The Social Network." 

0:00:42:(SOUNDBITE OF FILM, "THE SOCIAL NETWORK") 

Jesse Eisenberg: [00:00:42]  (As Mark Zuckerberg) Let the hacking begin. Lowell has some security. They require a username-password combo. And I'm going to go ahead and say they don't have access to the main FAS user database, so they have no way of detecting an intrusion. Adams has no security but limits the number of results to 20 a page. All I need to do is break out the same script I used on Lowell and we're set. Done. 

Rick Howard: [00:01:00]  Snowden didn't have to do that because he was authorized to be there. He basically web surfed the JWICS network to see what he could find. I guess it didn't hurt either that he had system administrator credentials for many of those systems. Edward Snowden is the poster child for why we should all be deploying "zero trust" networks. 

Rick Howard: [00:01:31]  My name is Rick Howard. You are listening to "CSO Perspectives," my podcast about the ideas, strategies and technologies that senior security executives wrestle with on a daily basis. On this episode, we are going to take a look at zero trust and why I think it is the cornerstone building block of our first-principle infosec program. And here's the key takeaway. This is not as hard to do as you think. 

Rick Howard: [00:02:05]  This is the second show in a planned series that we are doing on network defender first principles. In the first episode, to set the series up, I explained what first principle thinking is and presented an argument about what the ultimate cybersecurity first principle should be. If you have somehow landed here without hearing that first episode, you should really go back and listen. No, no. I don't mind. I'll wait. 

Rick Howard: [00:02:32]  So welcome back. After walking through that analysis, it's clear to me that our foundational first principle, our cybersecurity cornerstone is this, and I quote, "reduce the probability of material impact to my organization due to a cyber event," end quote. That's it. Nothing else matters. That simple statement is the pillar we can build an entire infosec program on. So if you're buying any of this, this wall metaphor that I'm pushing, you might be thinking, well, there's going to be a number of blocks on the wall. What's the next one? I'm glad you asked. 

Rick Howard: [00:03:11]  Let's start with the simple things first. How about if we just make it harder for some hacker group to cause material impact? Why should it be easy to get into my network? Think of it like trying to protect your house from common thieves. You could spend a lot of money and time installing and maintaining and monitoring expensive surveillance equipment and physical security systems, but if you forget to close and lock the doors and windows when you go out for the evening, the bad guys can easily slip into your house. So it's the same idea for protecting your digital assets, but just what is the equivalent of locking your doors and windows in a digital environment? 

Rick Howard: [00:03:48]  The example that easily comes to mind is the common problem of misconfigured S3 storage buckets in the Amazon ecosystem. Amazon started the service back in 2006, and since then, we've witnessed a steady stream of S3 bucket exposures. It's hard to pin down the actual number, but one guy I follow on social media put it in the thousands. Who knows? But it's clearly not a small number. The thing is, these hackers didn't break into the S3 buckets using some clever hacker technique. They mostly climbed through the open digital windows and doors because the responsible administrators didn't configure the S3 buckets correctly. S3 bucket misconfigurations are just one example of failing to lock our digital windows and doors. Depending on the size of your organization, you could have hundreds, if not thousands, of potential and unintentional electronic doors and windows left wide open during your day-to-day operations. 
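To make the door-locking idea concrete, here is a minimal sketch, not taken from the episode, of what closing one of those digital windows can look like in practice. It uses the AWS boto3 SDK to switch on S3 Block Public Access for a single bucket; the bucket name is hypothetical.

```python
# Minimal sketch (not from the episode): lock down one S3 bucket's public
# access with the AWS boto3 SDK. The bucket name is hypothetical.
import boto3

s3 = boto3.client("s3")

def lock_bucket(bucket_name: str) -> None:
    """Enable S3 Block Public Access on a single bucket - the digital
    equivalent of closing and locking a window."""
    s3.put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,        # reject new public ACLs
            "IgnorePublicAcls": True,       # ignore any existing public ACLs
            "BlockPublicPolicy": True,      # reject public bucket policies
            "RestrictPublicBuckets": True,  # limit access to authorized accounts
        },
    )

if __name__ == "__main__":
    lock_bucket("example-marketing-assets")  # hypothetical bucket name
```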

Rick Howard: [00:04:41]  Now, people like us, network defenders, we like to lump these kinds of activities, this closing of the digital doors and windows, under the heading of cyber hygiene. The original internet founding father Vint Cerf, the guy who helped build the original TCP/IP stack back in the day, coined the phrase in 2000 when he testified before Congress. But the word hygiene doesn't really convey enough importance to what we're trying to do here. And when I hear hygiene, it makes me think that employees should do this, not the company. It's kind of like it's my job to prevent tooth decay by brushing my teeth every day. But in the current age of continuous low-level cyber conflict between North Korea, China, Russia, the U.S. and others, it doesn't seem quite fair to blame Luigi back in the cafeteria for a material breach because he fell for the same clickbait that everybody else does. 

Rick Howard: [00:05:31]  Cyber hygiene is definitely a building block in our metaphorical first-principle cybersecurity wall, but it is not the foundational one. We're not putting this one at the base to hold everything up. It'll be somewhere near the top. But the block we need here needs to be much more comprehensive, something solid, weighty, something that will be hard to knock over. This is where the zero trust strategy comes into play. Now, the ideas behind zero trust have been bouncing around the industry since the early 2000s, but John Kindervag published the essential paper that solidified the concept back in 2010. 

Rick Howard: [00:06:07]   He based his thesis on how the military and the intelligence communities think about protecting secrets. Essentially, treat all information as need-to-know. In other words, if you don't require the information to do your job, you shouldn't have access to it. To achieve a zero trust posture then, network architects make the assumption that their digital environments are already compromised, and they design them to reduce the probability of material impact to the company if it turns out to be true. 

Rick Howard: [00:06:35]  That's a powerful concept, and completely at odds with the prevailing idea at the time, which was perimeter defense. With perimeter defense, we built a strong outer protection barrier, but once the attackers got in, they had access to everything. We called this the hard-and-crunchy-on-the-outside, soft-and-gooey-on-the-inside network design. My own name for it is the M&M network, hard candy shell on the outside, soft chocolate on the inside - so soft that the inner network melts inside the hackers' mouths as they consume your digital assets. How about that for a metaphor? 

Rick Howard: [00:07:12]  The U.S. government maintains a handful of restricted networks, like the NIPRNet - essentially, the U.S. government's own internet for unclassified traffic - the SIPRNet, a place where the government can store, share and communicate secret information, and JWICS. When Snowden did what he did, the JWICS network engineers had no concept of a zero trust network. Now, the irony doesn't escape me that John Kindervag based his entire zero trust thesis on how the intelligence community typically compartmentalizes its secrets and then we discover that Snowden was successful, at least in part, because the NSA didn't compartmentalize its secrets on its most secure network. But let's be fair. Back in 2013, nobody anticipated that a highly vetted contractor like Snowden would do such a thing on a super-secret network. In hindsight, it seems obvious that somebody would try. But back then, the controls that the NSA had in place to vet these workers seemed adequate. 

Rick Howard: [00:08:08]  The Snowden incident caused the NSA and many network defenders elsewhere to rethink their network designs. For the infosec community, it moved Kindervag's theoretical paper from an interesting idea to a key design principle that we all should be following. Zero trust was how we were going to build networks moving forward. And then, nothing happened. The bulk of us didn't build them. It turns out that even though Kindervag's thesis is brilliant, the practical how-to section is kind of sparse. I have talked to many network defenders over the years about zero trust architecture. My takeaway is that most miss the point. They don't seem to understand that zero trust is not a destination. It is not a set of technologies that you buy or build, install and then tell your boss, well, that's it; we have zero trust. 

0:08:56:(APPLAUSE) 

Rick Howard: [00:08:58]  It doesn't work like that. Zero trust is a philosophy, a strategy, a way of thinking. There are a million things you can do technically and process-wise that will improve your zero trust posture, to lock those digital doors and windows to make sure that somebody roaming around the digital hallways of our networks will not find a door ajar and wander in to find something they should not have access to. Or even if they do, what they discover through that open door will not significantly impact the company. The pursuit of zero trust is a journey, not a destination. You will never reach the end. The good news is that it is pretty easy to get far enough down the path to make a difference, to be able to say that your zero trust program has reduced the probability of material impact to your organization due to a cyber event. 

Rick Howard: [00:09:53]  The reason many of us have not even begun this journey, this conversion of our M&M networks into zero trust networks, is that we have set ourselves a giant task. We are convinced that in order to achieve zero trust, we have to boil the ocean. That's right. We have to throw out everything that we have already built and start over. Ouch. No wonder not many people have done this yet. If you don't believe me, take a look at the NIST draft document called "Zero Trust Architecture" that they published in February of 2020. NIST stands for the National Institute of Standards and Technology. And, you know, it's been around since 1901. Their government mission is to enable innovation through standards. 

0:10:33:(SOUNDBITE OF ARCHIVED RECORDING) 

Unidentified Person: [00:10:34]  Today, the technologies that are most important to us are not single standalone technologies. They are actually systems of technologies that need to work with each other. 

Rick Howard: [00:10:43]  Now, don't misunderstand me. The NIST "Zero Trust Architecture" document is absolutely correct in how it organizes the zero trust ideas and the technical things you have to have in place in order for zero trust to work. That said, NIST puts forward a proposed system of systems, like an architecture of black boxes that, when I looked at it the first time, seemed to be something that none of us have, that isn't available from the commercial sector to buy and that is too big to build ourselves. And then I was thinking, that's OK. That's what NIST does. They give us direction for how to build big and innovative things. 

Rick Howard: [00:11:17]  But thinking about it for just a minute, I realized that I was wrong about the community not having the things needed to do zero trust. In fact, you most likely already have the technical tools deployed in your networks that will allow you to get a long way down the path of the zero trust journey right now. They are called next-generation firewalls, and they became commercially available in 2007. All the major firewall vendor products do next-generation things, and if you're a medium- to large-scale business, you probably have a boatload of them deployed in your networks. The firewall has been a staple of the generic security stack since the early 1990s. But when I say firewall, most of us are thinking about the old stateful inspection firewalls invented around the same time. They were basically fancy routers that allowed us to block incoming and outgoing traffic based on ports, protocols and IP addresses, and we deployed them at the boundary between our digital organizations and the internet. 

Rick Howard: [00:12:17]  The next-generation firewall, as compared to the stateful inspection firewall, is a paradigm shift. You block network traffic based on applications tied to the authenticated user. Let that sink in for a second. Instead of a layer-3 firewall that operates on ports, protocols and IP addresses, it's a layer-7 firewall that operates on applications. If you're concerned about your employees visiting Facebook during the workday, you could try to block their traffic at layer 3 by not allowing them access to a raft of IP addresses that Facebook manages and continuously changes. That's a never-ending task, by the way. Or you could write a next-generation firewall rule, a layer-7 firewall rule, that says something like, the marketing department can go to Facebook, but nobody else can. Done. And you never have to touch it again. 
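To illustrate the shift Rick is describing, here is a small, vendor-neutral Python sketch - not any particular firewall's API - of what an application-aware, user-group-based rule looks like. The application and group names are made up for the example.

```python
# Vendor-neutral sketch of a layer-7, application-aware rule, as opposed to a
# layer-3 port/protocol/IP rule. Not a real firewall API; names are made up.
from dataclasses import dataclass
from typing import Set

@dataclass
class AppRule:
    application: str          # identified by the firewall's application signatures
    allowed_groups: Set[str]  # authenticated user groups permitted to use it

RULES = [
    AppRule(application="facebook", allowed_groups={"marketing"}),
]

def is_allowed(application: str, user_group: str) -> bool:
    """Default deny: traffic passes only if some rule permits this
    application for this user's group."""
    return any(
        rule.application == application and user_group in rule.allowed_groups
        for rule in RULES
    )

# The marketing department can go to Facebook; nobody else can.
assert is_allowed("facebook", "marketing")
assert not is_allowed("facebook", "engineering")
```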

Rick Howard: [00:13:08]  In the next-generation firewall world, everything is an application. Using Salesforce? That's an application. Using an internally deployed Exchange server? That's an application. Accessing the dev code library - application. Pinging a host in your network - application. Reading The Washington Post? You guessed it. That's an application. Being able to block applications based on the employee groups that use them provides the infosec team a means to start down the zero trust journey without having to completely redesign their network. They may have to supplement it a bit. But they don't have to start from scratch. 

Rick Howard: [00:13:49]  There are two approaches we can take - logical segmentation and microsegmentation. Logical segmentation is the easier of the two. And I have to say I love it when people tell me that things will be easy when I know that they're not going to be. 

Rick Howard: [00:14:07]  I had an old Army boss who loved this one Latin phrase. And he put it on all the plaques that we gave to departing soldiers - nihil facile est. His translation - nothing is easy. Words to live by. Anyway, logical segmentation is creating layer-7 firewall rules for the big muscle movement functions in your company, like marketing, legal, finance, software development. You get the idea. And this is where a lot of network defenders get tripped up. Since we can create next-generation firewall rules by tying applications to authenticated users, it's very tempting to create rules for individuals in the company, things like, Kevin can go to Facebook, but Luigi can't. In any sizeable organization, that quickly becomes a management nightmare. Trying to administer the constant churn of individual employees moving around the organization over time will quickly cause your system to crumble under its own weight. 

Rick Howard: [00:15:04]  Instead, focus on the 10 to 15 big, functional areas. Create rules for what applications they can use and which ones they can't. And that is how you start your zero trust journey. You still have to manage employee movement. But their access permissions are not specific to each employee. They're based on a handful of important company functions. The other, more difficult approach is microsegmentation. This uses the same idea of building functional groups and writing rules for them, but it focuses on the devices used by those functional groups. Like, the marketing team can access the internal cafeteria website from their iPhones to order lunch. But the group does not have access to the financial department's M&A database server. The reason this is harder is that the infosec team has to do the additional work of installing some sort of public key infrastructure on every device in the organization that the next-generation firewall can interrogate. For small- to medium-sized companies, this is probably a bridge too far. But larger organizations most likely already have this deployed. They just need to decide to utilize it. 
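Here is one way to picture the difference between the two approaches, as a Python sketch of my own rather than any vendor's policy format. Logical segmentation rules match on functional group and application; microsegmentation rules additionally match on the device in use, which a real deployment would identify with a certificate issued through the organization's PKI. All the group, application and device names are hypothetical.

```python
# Illustrative sketch only, not a vendor policy format. Group, application and
# device names are hypothetical; a real deployment would identify devices by
# certificates issued through the organization's PKI.
from dataclasses import dataclass
from typing import Optional, Set

@dataclass
class SegmentRule:
    group: str                        # functional area, e.g. "marketing"
    application: str                  # e.g. "cafeteria-menu"
    device_types: Optional[Set[str]]  # None = any device (logical segmentation)

RULES = [
    # Logical segmentation: marketing can use the CRM from any device.
    SegmentRule("marketing", "crm", None),
    # Microsegmentation: the cafeteria menu is reachable from managed iPhones only.
    SegmentRule("marketing", "cafeteria-menu", {"managed-iphone"}),
]

def is_allowed(group: str, application: str, device_type: str) -> bool:
    """Default deny: allow only if a rule matches the group, the application
    and (when the rule specifies them) the device type."""
    for rule in RULES:
        if rule.group == group and rule.application == application:
            return rule.device_types is None or device_type in rule.device_types
    return False

assert is_allowed("marketing", "cafeteria-menu", "managed-iphone")
assert not is_allowed("marketing", "cafeteria-menu", "personal-laptop")
assert not is_allowed("marketing", "finance-ma-database", "managed-iphone")
```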

Rick Howard: [00:16:13]  The Google service BeyondCorp is the newest zero trust alternative to next-generation firewalls. It is a SaaS offering - or Software as a Service, if you like. Or you could call it a SASE service - Secure Access Service Edge, delivered from the cloud. Hey, don't look at me. I don't create these names. And if you're not sure about what SASE is, check out an earlier show I did called "Your Security Stack is Moving: SASE is Coming" to see why SASE might just be the next thing we are all buying in the near future. BeyondCorp presents a new approach to the zero trust strategy and microsegmentation. It's not complete yet. Today, it only secures access for your remote employees' devices to approved web applications. But you can see the direction the service is going. Out of all the data islands where we store our company's data - behind the perimeter, in data centers, on mobile devices, in SaaS applications and in cloud deployments - BeyondCorp covers just some of them. The road map is clear, though, to cover all of them in the future. 

Rick Howard: [00:17:15]  BeyondCorp resulted from Google's internal response to a Chinese government cyberattack against their networks and other prominent U.S. companies back in 2010. McAfee named the attack Operation Aurora. And it was significant in two big ways. First, the announcement made the general public aware of the advanced persistent threat, or APT, for the first time. This is a cyber-adversary that didn't pull a hit-and-run to steal credit cards or other valuable personally identifiable information, or PII. These guys took their time. They burrowed in slowly with stealth. This was an espionage operation, not a criminal operation. The intelligence community and other serious network defenders were aware that this kind of thing had been going on since the late 1990s. But this was the first time that the general public really became aware of it. 

Rick Howard: [00:18:05]  And second, it marked the first time that a commercial entity, Google in this case, went public with breach information. Before that milestone, no commercial company would ever admit that they had been breached. They feared that the stock price would spiral down to the cellar if they did. When Google admitted it, it kind of gave everybody permission to do it, too. And since then we've learned that the public admission of a breach, if communicated properly, will not tank the company. Today, announcing breaches is commonplace, whether we communicate them properly or not. Oh, and I love this next part. 

Rick Howard: [00:18:42]  Once the dust from the Operation Aurora investigation had settled down, we learned that there wasn't just one Chinese government entity operating inside the Google networks; there were three. The Chinese equivalents of the FBI, the Department of Defense and the CIA all had a toehold inside the Google network. And in a nod to government bureaucracies everywhere, they each didn't know the other two were in there until Google went public. How great is that? It's like they were cyber-espionage moles popping up and down in their holes, saying, what are you doing here? No, what are you doing here? If that's not the classic case of the right hand not knowing what the left hand is doing, I'm your mother's uncle. 

Rick Howard: [00:19:20]  The best part for me, though, is that even though the Aurora campaign demonstrated the Chinese government's advanced capabilities in cyber-espionage operations, it also demonstrated that the Chinese government was hampered by the same kind of debilitating information silos we can find in any government bureaucracy. To me, this somehow makes them less scary. They are not a group of "Jason Bourne" spies who never make mistakes. They're just humans who are particularly good at their craft. But also, they put their pants on one leg at a time, just like we all do. 

Rick Howard: [00:19:56]  In response to the Aurora attacks, Google engineers redesigned their internal security architecture by adopting the zero trust strategy using microsegmentation - apps to devices. Ten years later, Google product managers took what they learned from that experience and built the BeyondCorp product, which brings us back to next-generation firewalls. If you want to cover all of your data islands right now with technology that you most likely already have, these next-generation firewalls are your best bet. The simple approach is to use logical segmentation - rules based on applications tied to authenticated users. Next-generation firewalls are really good at that. If you want to get fancy, add microsegmentation - rules based on authenticated users tied to the devices they are using. And keep an eye on BeyondCorp and the copycat SASE services that are likely to pop up in the next few years. That is a promising idea. 

Rick Howard: [00:20:54]  Here's the thing, though. Zero trust initiatives do not fail because the technology to implement them doesn't exist. Next-generation firewalls have been around since 2007 and were designed to do that very thing. Zero trust initiatives fail because network defenders don't install the proper people and process to manage them. At worst, some of us think that we can flip a switch, and the system will manage itself. Let me count how many times that strategy has worked in my lifetime. That would be zero. 

Rick Howard: [00:21:22]  At best, we use the two-guys-and-a-dog management approach. This team of crack IT management experts operates our routers, our security stack, our printers and faxes. And they get coffee for the CEO in the morning. Now we want them to manage the zero trust strategy inside our next-generation firewalls. They barely have time to check their email in the morning. And now we add this task to their plate? That is a train wreck in the making. That just adds to the technical debt pile that we are already not addressing. And besides, deciding which employees get access to which company resources is not a decision we want sitting with that vaunted two-guys-and-a-dog team. That is a decision that should be addressed in policy at the senior levels of our organization. 

Rick Howard: [00:22:04]  If zero trust is the next first-principle building block that we are going to install on our reduce-the-probability-of-a-material-cyber-event foundation, surely it is important enough to build a team to manage it. We need a team to create the processes for bringing on new employees and deciding which zero trust functional buckets they will belong to initially. The team will also decide how to change employee access when they move laterally within the organization to new jobs and new responsibilities. The team will further design the processes for when employees leave the organization, removing their access from the system. 

Rick Howard: [00:22:37]  And finally, we'll need an entirely different team focused on automating these procedures so that the team managing them doesn't fat-finger the configuration changes and cause an Amazon-S3-bucket-type error by leaving the digital windows and doors open for some bad guy to find. 
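As a sketch of the kind of guardrail that automation team might build - my own illustration, with a made-up rule format - the check below rejects a proposed rule change that would quietly open a digital door before it ever reaches the firewall fleet.

```python
# Hypothetical pre-deployment check for proposed firewall rule changes.
# The rule format is made up for illustration; the point is that automation,
# not a tired human, catches the fat-fingered change.
from typing import Dict, List

def validate_rules(proposed_rules: List[Dict]) -> List[str]:
    """Return a list of human-readable problems; an empty list means the
    change is safe to push to the firewall fleet."""
    problems = []
    for i, rule in enumerate(proposed_rules):
        if rule.get("application", "any") == "any":
            problems.append(f"rule {i}: allows ANY application - too broad")
        if rule.get("group", "any") == "any":
            problems.append(f"rule {i}: applies to ANY user group - too broad")
        if not rule.get("ticket"):
            problems.append(f"rule {i}: no change ticket recorded - who approved this?")
    return problems

# A fat-fingered change: someone forgot to scope the application.
proposed = [{"application": "any", "group": "marketing"}]
assert validate_rules(proposed)  # non-empty, so the pipeline blocks the push
```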

Rick Howard: [00:22:57]  Seven years after the fact, it's easy for armchair network defenders to criticize the NSA for failing to install a zero trust network designed to reduce the impact of an Edward Snowden-type insider threat attack. The startling truth is that most of us didn't have that kind of network installed, either. The sadder reality is that most of us still don't, and we should. At first glance, the prospect of converting our M&M networks into zero trust networks appears daunting and expensive. 

Rick Howard: [00:23:25]  But instead of thinking of zero trust as a task we have to finish, something to put in our done pile, consider it a journey on the never-ending path of improvement. And as I said, there are probably a million things we can do on that zero trust journey. But there are things we can do right now, with the technology that we likely already own, that will allow us to start closing those digital doors and windows. And even if we do leave one ajar by mistake, the limited data that an intruder finds through it will not significantly impact the organization. And that is why zero trust is the next building block we would install on our reduce-the-probability-of-material-impact strategy wall. 

Rick Howard: [00:24:08]  That's a wrap. If you agree or disagree with anything I've said, hit me up on LinkedIn or Twitter. And we can continue the conversation there. The CyberWire's "CSO Perspectives" is edited by John Petrik and executive produced by Peter Kilpe. Engineering, original music and music design by the insanely talented Elliot Peltzman. And I am Rick Howard. Thanks for listening.