CSO Perspectives (Pro) 2.22.21
Ep 39 | 2.22.21

Google Cloud Platform and cybersecurity first principles.

Transcript

Rick Howard: On this show, we're taking a look at the Google Cloud Platform, or GCP, through a first principle lens. We've already done this for Microsoft Azure and Amazon AWS. Google didn't roll out GCP until 2012, a good six years after Amazon released AWS and two years after Microsoft released Azure, and it shows. While Azure and AWS are similar in how their customers use their infrastructure-as-a-service, platform-as-a-service and software-as-a-service cloud offerings, it's clear that Google studied the other two competitors and made some design changes. The most obvious have come in the form of how Google views their virtual private clouds, or VPCs, and how they have placed zero trust as a cornerstone of the entire experience.

Rick Howard: My name is Rick Howard. You are listening to "CSO Perspectives," my podcast about the ideas, strategies and technologies that senior security executives wrestle with on a daily basis. 

Rick Howard: Let's start with some basic GCP Networking 101. 

Rick Howard: Google has abstracted some of the tactical networking components that are the meat and potatoes of Azure and AWS into a hierarchical construct. When you buy GCP services, your organization can create multiple folders - say, one for the finance team, one for the IT team and one for the security team. Each folder owner can also create subfolders for various and distinct tasks, like an employee salary folder and an employee vacation folder, all under the parent finance team folder. And each folder owner can create one or more projects. 

Rick Howard: And this is the key. The GCP concept of a project is the fundamental organizing service entity of the Google Cloud offering and contains all the access, permissions and settings, as well as resources like compute, storage and networking. 
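
To make that hierarchy concrete, here is a minimal sketch in Python, assuming the google-cloud-resource-manager (v3) client library; the organization ID, folder name and project ID are placeholders, not anything Google prescribes. It creates a finance folder under the organization and an employee-salary project inside that folder.

```python
# Sketch only: assumes the google-cloud-resource-manager v3 client library
# (pip install google-cloud-resource-manager) and placeholder resource IDs.
from google.cloud import resourcemanager_v3

ORG = "organizations/123456789012"  # hypothetical organization ID

folders = resourcemanager_v3.FoldersClient()
projects = resourcemanager_v3.ProjectsClient()

# Create the parent "finance" folder under the organization.
finance_folder = folders.create_folder(
    folder=resourcemanager_v3.Folder(parent=ORG, display_name="finance")
).result()  # create_folder returns a long-running operation

# Create the employee-salary project inside the finance folder.
salary_project = projects.create_project(
    project=resourcemanager_v3.Project(
        project_id="employee-salary-prod",  # hypothetical project ID
        display_name="employee salary",
        parent=finance_folder.name,         # e.g. "folders/987654321"
    )
).result()

print(salary_project.name)  # "projects/<project-number>"
```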

Rick Howard: Now, a project can't access another project's resources unless the owner explicitly shares them - for networking, GCP calls that a shared VPC - or establishes some sort of VPC peering between the two projects' networks. With those sharing mechanisms, GCP customers can design their environments so that individual project owners don't have to worry about networking and security stacks. The IT team and the security team can share their projects with the finance team, and the finance team can get busy doing, you know, whatever finance people do. 
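
As a sketch of one of those mechanisms, here is what establishing VPC peering between two projects' networks might look like in Python with the google-cloud-compute client library; the project and network names are hypothetical, and the peering has to be created from both sides before traffic flows.

```python
# Sketch only: assumes the google-cloud-compute client library and
# hypothetical project/network names. Peering must be configured from
# both networks before it becomes active.
from google.cloud import compute_v1

def peer(project: str, network: str, peer_project: str, peer_network: str) -> None:
    client = compute_v1.NetworksClient()
    request = compute_v1.NetworksAddPeeringRequest(
        network_peering=compute_v1.NetworkPeering(
            name=f"peer-{peer_project}",
            network=f"projects/{peer_project}/global/networks/{peer_network}",
            exchange_subnet_routes=True,
        )
    )
    # Recent google-cloud-compute releases return an operation-like object;
    # .result() blocks until the peering change is applied.
    client.add_peering(
        project=project,
        network=network,
        networks_add_peering_request_resource=request,
    ).result()

# One call from each side of the peering.
peer("it-team-project", "it-vpc", "finance-team-project", "finance-vpc")
peer("finance-team-project", "finance-vpc", "it-team-project", "it-vpc")
```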

Rick Howard: For resiliency considerations, the IT team can create subnets in multiple regions, similar to the multiregion capability in Azure and AWS. With Google, though, the VPC itself is global, and the routing between those regional subnets is implicit, handled under the covers by the GCP infrastructure. 
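
Here is a minimal sketch of that pattern with the google-cloud-compute Python client - a single custom-mode VPC with subnets in two regions, and no routes to configure between them. The project name, network name and CIDR ranges are placeholders.

```python
# Sketch only: one global custom-mode VPC with subnets in two regions.
# Assumes the google-cloud-compute client library; names and ranges are placeholders.
from google.cloud import compute_v1

PROJECT = "it-team-project"

# Create the VPC network with automatic subnet creation turned off.
networks = compute_v1.NetworksClient()
vpc = compute_v1.Network(name="corp-vpc", auto_create_subnetworks=False)
networks.insert(project=PROJECT, network_resource=vpc).result()

# Add one subnet per region; GCP routes between them implicitly.
subnets = compute_v1.SubnetworksClient()
for region, cidr in [("us-central1", "10.10.0.0/20"), ("europe-west1", "10.20.0.0/20")]:
    subnet = compute_v1.Subnetwork(
        name=f"corp-{region}",
        ip_cidr_range=cidr,
        network=f"projects/{PROJECT}/global/networks/corp-vpc",
    )
    subnets.insert(project=PROJECT, region=region, subnetwork_resource=subnet).result()
```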

Rick Howard: With a nod toward zero trust, GCP also has this notion of host projects versus service projects. Service projects can't create infrastructure. They share the infrastructure from a host project. In our example, the IT team and the security team maintain host projects - you know, things like subnetting and firewalls, just to name a couple - and share that infrastructure with the employee salary service project. And much to their joy, the owners of the employee salary service project don't have to worry about all that infrastructure stuff that generally slows them down anyway. They can just focus on making a better employee salary service. 
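
Under stated assumptions - that the google-cloud-compute client exposes the Shared VPC methods (internally called "XPN") and that the caller holds the Shared VPC admin role - a sketch of wiring up a host project and attaching a service project might look like this. The project IDs are hypothetical.

```python
# Sketch only: enable Shared VPC on the IT host project and attach the
# employee-salary service project so it consumes the host's subnets and
# firewall rules instead of building its own. Project IDs are placeholders.
from google.cloud import compute_v1

HOST_PROJECT = "it-team-project"
SERVICE_PROJECT = "employee-salary-prod"

projects = compute_v1.ProjectsClient()

# Step 1: mark the IT project as a Shared VPC host project.
projects.enable_xpn_host(project=HOST_PROJECT).result()

# Step 2: attach the salary project as a service project of that host.
# (Assumption: the resource type enum value for a project is "PROJECT".)
projects.enable_xpn_resource(
    project=HOST_PROJECT,
    projects_enable_xpn_resource_request_resource=compute_v1.ProjectsEnableXpnResourceRequest(
        xpn_resource=compute_v1.XpnResourceId(id=SERVICE_PROJECT, type_="PROJECT")
    ),
).result()
```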

Rick Howard: Now, you can build this kind of model inside of AWS and Azure, but that's the thing; you have to build it. Within GCP, it's the way the infrastructure works. 

Rick Howard: So that's GCP Networking 101. Let's talk about security. 

Rick Howard: GCP offers three layers of security controls and services - within the VPC, between VPCs and between VPCs and the internet. Within VPC projects, you have microsegmentation capability in the form of identity management, third-party tools, key management and hardened virtual images. Between VPC projects, designers have ways to connect things securely with VPC firewall rules and VPC Service Controls, VPN connections back to your on-prem data centers, network address translation for internet-facing workloads and packet mirroring for network management and incident response. 
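
As one small, concrete example of those controls, here is a hedged sketch of a VPC firewall rule, written with the google-cloud-compute Python client, that only lets the finance subnet reach instances tagged as the salary service on TCP 443 and nothing else. All names, tags and ranges are placeholders.

```python
# Sketch only: an ingress firewall rule allowing HTTPS from the finance
# subnet to instances tagged "salary-api". Assumes the google-cloud-compute
# client library; every name, tag and range here is a placeholder.
from google.cloud import compute_v1

PROJECT = "it-team-project"

rule = compute_v1.Firewall(
    name="allow-finance-to-salary-https",
    network=f"projects/{PROJECT}/global/networks/corp-vpc",
    direction="INGRESS",
    source_ranges=["10.10.0.0/20"],   # the finance subnet
    target_tags=["salary-api"],       # only instances carrying this tag
    allowed=[compute_v1.Allowed(I_p_protocol="tcp", ports=["443"])],
)

compute_v1.FirewallsClient().insert(project=PROJECT, firewall_resource=rule).result()
```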

Rick Howard: For internet-facing VPCs and employee access to VPCs, this is where Google is fundamentally different from Azure and AWS, and they call it BeyondCorp. BeyondCorp is Google's implementation of the zero trust model. But before they could get there, three things had to happen - a transition to DevOps, a famous zero trust white paper and a massive Chinese cyber-espionage attack. 

Rick Howard: In Season 1, Episode 10 of "CSO Perspectives," I did a deep dive on how the Google leadership team transitioned to a DevOps philosophy. As far back as 2004, instead of the traditional IT teams performing the standard network management tasks, the Google leadership team gave that set of jobs to the development team roughly five years before the IT community even came up with the DevOps label to classify the work. And that was step one. 

Rick Howard: Google couldn't have implemented their version of zero trust unless they had a way to deploy infrastructure as code at scale. In 2004, they weren't thinking about zero trust yet because, you know, it hadn't been invented yet. But Google's site reliability engineers had started to master the day-to-day practice of DevOps operations. 

Rick Howard: I also did a mini history lesson of how we got to zero trust philosophy back in Season 1, Episode 7. Although the idea had been kicking around various places during the 2000s, it wasn't until John Kindervag published his famous paper on the concept in 2010 that it started to get legs. You can even make an argument that zero trust as a legitimate cybersecurity best practice didn't really gain traction from the majority of network defenders until the last five years or so. But the paper was step two from the Google perspective. 

Rick Howard: Now, I'm not saying that Kindervag's paper influenced the Google decision. I'm saying that it generally influenced the community about the goodness of zero trust. And I expect that general sentiment rubbed off on Google engineers in some way. 

Rick Howard: But the real catalyst was step three. Multiple Chinese cyber-espionage groups broke into the Google networks, as well as many other Silicon Valley companies, in 2009 in an adversary campaign called Operation Aurora. 

Rick Howard: At one point, there were at least three different Chinese government organizations conducting cyber-espionage operations within the Google networks - the Chinese equivalents of the U.S. FBI, the Department of Defense and the CIA. And here's my favorite part of the story. They each didn't know that the other two were in there until Google went public with the intelligence in 2010. And you all thought that the American government didn't like to share information. For shame. 

Rick Howard: But that's what did it. Google leadership decided they needed a redesign of their own internal network security. And shortly after, they rolled out BeyondCorp. And the ideas and infrastructure that the Google engineers created to support that effort eventually found their way into GCP. 

Rick Howard: So why is BeyondCorp such an important network design component? The aha moment for the Google engineers came when they realized that we really shouldn't authenticate and authorize users and API calls on the actual workload that we're trying to protect. Instead, we should be doing those operations before any user or machine actually gets into the network at all. And that way, we have a chance to keep bad guys out of our networks before they can get in the front door and snoop around. 

Rick Howard: That's brilliant. Why didn't I think of that? 

Rick Howard: This is similar to what Jerry Archer, the Sallie Mae CSO, talked about in our last episode about AWS security. He said that he deployed a third-party tool in his AWS instance that provides a software-defined perimeter, or SDP, that did a similar thing. 

Rick Howard: The basic concept of SDP came out of the U.S. government - the Defense Information Systems Agency, or DISA, to be precise - and was eventually codified by the Cloud Security Alliance as a general best practice for cloud deployments. 

Rick Howard: GCP does this with something called an Identity-Aware Proxy, or IAP, paired with Google Cloud's Identity and Access Management system, or IAM. Google's take on SDP adds a little GCP sweetener by also monitoring the endpoints used by customers and employees trying to connect to various workloads. They're looking for things like current operating system versions, patch levels and whether or not they have ever seen the device before. 

Rick Howard: Once the system authenticates the user or the API call and checks that they have permission to access the workload, then GCP facilitates the connection to the desired resource and nothing else. Just because the requester has permission to access a workload doesn't give them permission to access all workloads. If the NSA had had this kind of thing back in the day, there might not have been an Edward Snowden problem. 

Rick Howard: When you hear Google say that the perimeter is dead, this is what they're talking about. With the BeyondCorp model, you no longer require VPNs to tunnel into a perimeter. You authenticate at an access point - the Identity-Aware Proxy and access management system combo. The system then checks whether you have permission to access the resources you want to connect to and then provides a direct connection to the resource. 
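
A workload sitting behind the Identity-Aware Proxy can double-check that a request really came through IAP by validating the signed header Google attaches to proxied traffic. Here is a minimal sketch in Python using the google-auth library; the audience string is a placeholder you would pull from your own IAP configuration.

```python
# Sketch only: verify the JWT that Cloud IAP attaches to proxied requests
# in the x-goog-iap-jwt-assertion header. Assumes the google-auth library;
# EXPECTED_AUDIENCE is a placeholder taken from your own IAP settings.
from google.auth.transport import requests as google_requests
from google.oauth2 import id_token

EXPECTED_AUDIENCE = "/projects/123456789012/global/backendServices/987654321"

def identity_from_iap(iap_jwt: str) -> str:
    """Return the authenticated user's email, or raise if the token is invalid."""
    claims = id_token.verify_token(
        iap_jwt,
        google_requests.Request(),
        audience=EXPECTED_AUDIENCE,
        certs_url="https://www.gstatic.com/iap/verify/public_key",
    )
    return claims["email"]

# In a web handler, you would call something like:
#   email = identity_from_iap(request.headers["x-goog-iap-jwt-assertion"])
```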

Rick Howard: This is how you do zero trust. SDP is not just a good idea; it's probably the idea for how to do zero trust in the cloud. GCP's version is light-years ahead of the other two cloud providers we have looked at in this series. 

Rick Howard: In terms of first principle thinking, all three cloud providers we have talked about in this series are about the same in terms of offered capability. If all other things were equal, Google would probably get the nod for its rethinking of SDP and how it could provide this zero trust service to its customers. 

Rick Howard: Having said that, I realize that I haven't considered any operational issues - in other words, how easy is each to deploy? - or any financial issues - like how much money does it cost to run your operation in these environments? - and probably a trove of other issues that might contribute to the decision of which cloud provider to use. But that's OK. That wasn't my purpose here. Before I make any decision to adopt one cloud provider or another, I would want to understand whether they could at least meet my security needs before I figured out operational or cost issues. 

Rick Howard: So here's the scorecard so far. Resilience - good across the board for all three cloud platforms. As I have said across the entire series, cloud platforms do resilience well, or at least make it easy for you to build those resilient systems yourselves. Zero trust - again, good for Azure and AWS, excellent for GCP. We have to give GCP the nod here for their BeyondCorp design. Intrusion kill chain prevention - poor across the board. You're going to need third-party tools to get this done in all three cloud provider networks. And finally, risk assessment - again, poor across the board, but you will have lots of telemetry that you can use to build your own risk models. 

Rick Howard: Having said all of that, your own on-prem deployment scorecard is likely not much better than this. It might be a tad worse. So if you're looking for an excuse to go to the cloud, this might be the reason. Cloud platforms can help you get closer to your first principle design goals. 

Rick Howard: And that's a wrap. I have also written a more detailed essay about this topic that has an extensive reading list. If you're looking for more information written by smarter people than me, check out that essay on the CyberWire Pro website. And if you agreed or disagreed with anything I have said here about Google GCP, Amazon AWS or Microsoft Azure, hit me up on LinkedIn, and we can continue the conversation there. 

Rick Howard: Next week, we will invite the experts to the CyberWire Hash Table to see what I got wrong on this episode. You don't want to miss that. 

Rick Howard: The CyberWire's "CSO Perspectives" is edited by John Petrik and executive produced by Peter Kilpe. Our theme song is by Blue Dot Sessions. And the mix of the episode and the remix of the theme song was done by the insanely talented Elliott Peltzman. And I am Rick Howard. Thanks for listening.