zero trust (noun)
Rick Howard: The word is: zero trust.
Rick Howard: Spelled: zero for none, and trust for unfettered access.
Rick Howard: Definition: A security philosophy that assumes adversaries have already penetrated the digital environment and tries to reduce the potential impact by limiting access by people, devices, and software to only the resources essential to perform their function and nothing more.
Rick Howard: Example sentence: In zero trust, someone will assert their identity and then we will allow them access to a particular resource based upon that assertion.
Rick Howard: Origin and context: The ideas around zero trust have been orbiting the industry since the early 2000s, but John Kindervag published the essential paper that solidified the concept in 2010. He wrote it when he was working for Forrester, and he called it "No More Chewy Centers: Introducing the Zero Trust Model of Information Security." He based his thesis on how the military and intelligence communities think about protecting secrets. Essentially, treat all information as need-to-know. In other words, if you don't require the information to do your job, you shouldn't have access to it.
Rick Howard: To achieve a zero trust posture then, network architects make the assumption that their digital environments are already compromised and design them to reduce the probability of material impact if that turns out to be true. That's a powerful concept, and a radical departure from the prevailing idea at the time, perimeter defense. With perimeter defense, we built a strong outer protection barrier, but once the attackers got in, they had access to everything. All transactions on the inside were automatically trusted. In the original paper, John calls that idea ludicrous.
Rick Howard: More than a decade later, organizational assets are scattered across multiple data islands: mobile devices, traditional data centers, SaaS services, and various cloud services. If there ever was such a thing as a trusted network, it for sure doesn't exist today. In the early 2000s, an international group of security practitioners known as the Jericho Forum started advocating the idea of deperimeterization. The idea was to decouple the identification and authorization functions from the workload. In other words, you don't connect to a sensitive workload and then try to log in. You connect to a separate system that verifies your identity and validates that you are authorized to connect to the sensitive workload. If you are, it then establishes the connection to the workload and nothing else.
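That broker-first sequence can be sketched in a few lines of code. This is a minimal illustration of the idea, not a real software-defined perimeter product; all of the names, tokens, and the authorization table here are hypothetical.

```python
# Sketch of deperimeterization: a broker verifies identity and
# authorization *before* any connection to the workload is made.
# Every name and credential below is illustrative, not a real SDP API.

AUTHORIZATIONS = {
    # user -> set of workloads that user needs to know about
    "alice": {"payroll-db"},
    "bob": set(),
}

def verify_identity(user: str, credential: str) -> bool:
    """Stand-in for MFA, certificates, or another strong authenticator."""
    return credential == f"valid-token-for-{user}"

def broker_connect(user: str, credential: str, workload: str) -> str:
    """Authenticate, then authorize, then (and only then) connect."""
    if not verify_identity(user, credential):            # step 1: who are you?
        return "denied: identity not verified"
    if workload not in AUTHORIZATIONS.get(user, set()):  # step 2: need to know?
        return "denied: not authorized for this workload"
    return f"connected: {user} -> {workload}"            # step 3: broker opens the path
```

The point of the design is that the sensitive workload never sees an unauthenticated packet; the broker either establishes the one permitted connection or nothing at all.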
Rick Howard: The same year that Kindervag published his paper, Google got hit by a massive Chinese cyber espionage attack called Operation Aurora. In the weeks that followed, we learned that there wasn't just one Chinese government entity operating inside the Google network, there were three: the Chinese equivalents of the FBI, the Department of Defense, and the CIA. And in a nod to government bureaucracies everywhere, they each didn't know the other two were in there until Google went public with the information. In response to the Aurora attack, Google engineers redesigned their internal security architecture from the ground up using the concepts of deperimeterization and the zero trust philosophy. A few years later, they released a commercial product called BeyondCorp that incorporated many of the ideas they developed internally. Today, deperimeterization is known in the industry as software-defined perimeter.
Rick Howard: It's important to note, as Kindervag originally explained, zero trust is not a product. It's a philosophy, a strategy. A way to think about security, and it can always be improved. In that way, it's not about the destination. You're never going to get to the end. It's more about the journey. You can buy products to help, but zero trust is a mindset and you can start with the systems you already have on your network. In order to have a mature zero trust environment, organizations must have complete visibility of all people, devices, and applications that access material, data, or systems. Once that's accomplished, organizations must then have the ability to restrict access to resources based on need to know. Key to all of that is a robust identity and authorization system.
Rick Howard: Nerd reference: Over the years, Kindervag has traveled around the world explaining his zero trust philosophy, and he uses a Kipling poem called "I Keep Six Honest Serving-Men" to help people understand the basic concepts. The poem is about Kipling's young daughter's endless curiosity, and how, as we all get older, we tend to lose that sense of wonder and stop asking questions about who, what, when, where, and why. Here's John Kindervag, from a CyberWire-X episode we published in May 2021, explaining the poem.
John Kindervag: And so this is my personal homage to him: who, what, when, where, why, and how. I'm trying to determine who should be allowed to access a resource. Here's a way to write the policy, because ultimately zero trust is a Layer 7 policy statement when it's implemented.
John Kindervag: Who should be accessing a resource? That's the asserted user identity. That's been validated by something like multi-factor authentication or some other authenticator. So it's highly validated.
John Kindervag: The where statement is, where is the resource located?
John Kindervag: The when statement is, when does this rule need to be turned on? There are a lot of rules that should be turned off at various times, because no one typically uses them. We need a lot more time-delimited rules.
John Kindervag: The why statement is, because this is mission-critical data, or it's highly classified, top secret. That's where we can tie classification levels into the policy.
John Kindervag: We have a how statement: what kind of processes are we going to apply to the packet?
John Kindervag: The what statement is typically the application that you're accessing. By what applications should they have access to that protect surface? The protect surface, of course, is the shrinking down of the attack surface by orders of magnitude to something that is small and knowable. We put a data type, a single application, a single asset, or a single service inside of a protect surface, break it down into a very small chunk so that we can solve that one problem and move on to another.
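The six Kipling questions can be read as the fields of a single access rule. The sketch below is one possible way to express such a rule in code; it is an illustration of the idea, not Kindervag's actual tooling, and every field value is hypothetical.

```python
# Illustrative sketch: a zero trust policy rule written as the
# Kipling method's six questions. All values are made up for the example.
from dataclasses import dataclass

@dataclass
class KiplingRule:
    who: str     # asserted user identity, validated by MFA or similar
    what: str    # application allowed to reach the protect surface
    when: range  # hours of day the rule is active (time-delimited)
    where: str   # the resource inside the protect surface
    why: str     # data classification / business justification
    how: str     # processing applied to the packet

def allows(rule: KiplingRule, who: str, what: str, hour: int, where: str) -> bool:
    """Grant access only when every contextual question checks out."""
    return (who == rule.who and what == rule.what
            and hour in rule.when and where == rule.where)

rule = KiplingRule(who="alice", what="hr-portal", when=range(8, 18),
                   where="payroll-db", why="mission-critical PII",
                   how="decrypt and inspect")
```

Note that the why and how fields don't gate the decision in this toy version; they document the justification and the packet processing, which is where classification levels tie into the policy.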
Rick Howard: Word notes is written by Nyla Gennaoui, executive produced by Peter Kilpe, and edited by John Petrik and me, Rick Howard. The mix, sound design, and original music have all been crafted by the ridiculously talented Elliott Peltzman. Thanks for listening.