CSO Perspectives (Pro) 4.5.20
Ep 1 | 4.5.20

Your security stack is moving: SASE is coming.

Transcript

Rick Howard: [00:00:12] Hello, everyone. Rick Howard here. As you may have heard, I am the newest employee of the CyberWire podcast team. I signed on as their new chief analyst, chief security officer and senior fellow. And that is indeed a mouthful. I've been busy these last few weeks working on new content, including this podcast you are listening to right now called "CSO Perspectives" and that you can subscribe to for future episodes from the CyberWire Pro+ platform. In this first episode, though, I wanted to talk about a new set of security technologies. I've been tracking these things since, oh, last fall or so, and I believe they're going to fundamentally disrupt how we all consume security services in the very near future. 

Rick Howard: [00:00:59]  Now, I'm not talking about a security product here. This is not a commercial. I'm talking about a set of technologies that I believe is going to flip on its head how we all consume security services. Now, some of the folks over at Gartner - Neil MacDonald, Lawrence Orans and Joe Skorupa; I believe that's how you say it - they formalized what many of us have known for a while now: that perimeter defense is dead, that it no longer makes any sense to try to pull all of our digital assets behind a single security stack or a series of many internal security stacks deployed all over, and that it is crazy to backhaul our network traffic in order to accomplish the same. 

Rick Howard: [00:01:34]  The world has moved on. Our employees still connect to our resources and our centralized data centers, but they also interact with our organization's data back at headquarters, on their personal and company-provided mobile devices, in SaaS applications and, more and more, within applications running in the cloud and, increasingly, across multiple cloud providers like Google, Amazon and Microsoft. I like to refer to these information repositories as data islands. 

Rick Howard: [00:02:01]  Now, since about 2010, network defenders like me have been generally trying to protect these data islands with two grand strategies - intrusion kill chain prevention and Zero Trust. The intrusion kill chain prevention strategy involves deploying defensive campaigns designed to defeat specific cyber adversaries. Think of it like installing interdiction controls in your security stack at every phase of the intrusion kill chain for the sole purpose of preventing the success of adversaries like Fancy Bear, Refined Kitten and Lazarus, just to name three. The Zero Trust strategy involves reducing our network's attack surface by limiting employee and customer access to network resources based on need-to-know. 
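
To make that need-to-know idea a little more concrete, here is a minimal sketch in Python of a Zero Trust access decision. It is not from any real product; the resource names, roles and the device posture check are all hypothetical, and a real implementation would live in an identity-aware proxy or policy engine rather than a snippet like this.

    # Hypothetical sketch of a Zero Trust, need-to-know access decision.
    # Resource names, roles and the posture check are illustrative only.

    ACCESS_POLICY = {
        # resource           -> roles explicitly allowed to reach it
        "hr-payroll-db":     {"hr-analyst"},
        "source-code-repo":  {"developer", "build-system"},
        "finance-reports":   {"finance", "cfo"},
    }

    def is_access_allowed(user_role, resource, device_compliant):
        """Default deny: allow only when the role is explicitly authorized
        for the resource and the device passes a posture check."""
        allowed_roles = ACCESS_POLICY.get(resource, set())
        return device_compliant and user_role in allowed_roles

    # A developer on a compliant laptop can reach the code repo...
    print(is_access_allowed("developer", "source-code-repo", True))   # True
    # ...but not the payroll database, even from the same device.
    print(is_access_allowed("developer", "hr-payroll-db", True))      # False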

Rick Howard: [00:02:43]  When we just had perimeter defense to worry about, implementing these two grand strategies was hard enough, but now that our digital resources are scattered to and fro across all of our data islands, pursuing these two strategies has become a bridge too far. It turns out that the service providers we use today on each of these data islands have their own product sets to sell for intrusion kill chain prevention and Zero Trust. In order to pursue our two grand strategies, network defenders and network operators alike have to deploy different tools that have the same functionality but operate in different environments and don't easily integrate. Consequently, the complexity of orchestrating the security of those data islands with all of those product sets has grown exponentially, and the chances of declaring success with our two grand strategies have drastically diminished. 

Rick Howard: [00:03:31]  Enter something called SASE, or Secure Access Service Edge cloud delivery. This is the name that Gartner gave it, and I think we're stuck with it. And when I hear the name, I don't want to call it SASE; I want to call it sassy or something like that. But, you know, I digress. The Gartner team published their SASE paper in August 2019. Here's a quote from their essay. Now bear with me here; there's a lot of networking jargon in this description, and it sounds kind of intimidating. All right, here it goes. 

Rick Howard: [00:04:02]  (Reading) The digital inversion of usage patterns will expand further with a growing enterprise need for edge computing capabilities that are distributed and closer to the systems and devices that require low-latency access to local storage and compute by connecting a worldwide fabric of points of presence and peering relationships. 

Rick Howard: [00:04:22]  Whew. Let me see if I can't break that down. In other words, you no longer have to install, maintain and operate your own security stack within one or more of your internal and central locations within your own networks and then trombone your network traffic back to it in order to get the benefit from it. Instead, the first hop from your user's device or your organization's servers, regardless of which data island they sit on, will be to a cloud provider's SASE service. The SASE service will provide your security stack for all of your users and devices and will also provide efficient peer routing to the destination. 

Rick Howard: [00:04:59]  This accomplishes two things. First, it simplifies the orchestration of your two security strategies. Instead of managing multiple vendor security products, some of which perform the same function only in different environments, all traffic goes to a copy of the same security stack with the same policy. If your SASE vendor allows your own DevSecOps teams to update the security platform through automation, your ability to deploy defensive campaigns designed for specific adversaries will become easier. If your SASE provider uses a security platform that already automatically updates its own intrusion kill chain prevention controls for all known adversaries, then the chance that your intrusion kill chain strategy will succeed will also significantly improve. If your SASE provider uses a security platform that facilitates Zero Trust rules through automation, then your chances of successfully implementing your Zero Trust strategy likewise will greatly improve. 
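
A rough way to picture that orchestration benefit: instead of translating one security intent into several vendor-specific rule formats, the team defines a single policy and pushes the same copy to every SASE enforcement point. The Python sketch below is hypothetical; the point-of-presence names, policy fields and the push mechanism are assumptions for illustration, not any vendor's actual API.

    # Hypothetical sketch: one policy definition delivered to every SASE
    # point of presence, instead of separate rule sets per data island.
    # PoP names and policy structure are illustrative assumptions.

    import json

    security_policy = {
        "version": "2020-04-01",
        "zero_trust_rules": [
            {"role": "developer", "allow": ["source-code-repo"]},
            {"role": "hr-analyst", "allow": ["hr-payroll-db"]},
        ],
        "kill_chain_controls": {
            "block_known_delivery_domains": True,
            "block_known_c2_infrastructure": True,
        },
    }

    sase_enforcement_points = ["pop-us-east", "pop-eu-west", "pop-ap-south"]

    def push_policy(policy, pops):
        """Serialize one policy and (conceptually) send the identical copy
        to every point of presence, so all data islands see the same rules."""
        payload = json.dumps(policy)
        for pop in pops:
            # In practice this would be an authenticated call to the vendor's
            # management plane; here we only show that each PoP receives the
            # same payload.
            print("pushing", len(payload), "bytes of policy to", pop)

    push_policy(security_policy, sase_enforcement_points)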

Rick Howard: [00:05:55]  Second, by choosing the right SASE vendor, one who has established the essential peering relationships with the key content providers that you most likely will use, your network latency will be drastically reduced, too. In Andrew Blum's book "Tubes: A Journey to the Center of the Internet," he describes the evolution of the internet - how it began and how it has changed since. He would probably hate how I'm going to reduce his ideas. But basically, the internet has gone through several phases. In 1969, UCLA and the Stanford Research Institute established the first internet connection. As Blum would say, the internet took its first breath. I love that. In the 1970s, Vint Cerf and Robert Kahn invented TCP/IP, and by 1983, TCP/IP became the standard internet communications protocol. At this point, the internet was just one large network. There were other private networks, but they couldn't talk to each other. 

Rick Howard: [00:06:49]  In 1989, Yakov Rekhter and Kirk Lougheed invented BGP - on three cocktail napkins, by the way, at an internet conference. It was meant to be just a temporary measure to connect the internet to the other private networks, but as you all know, BGP became the de facto standard that we use today. In 1995, the National Science Foundation let a contract to establish four main hubs of internet traffic and converted the original mesh network idea into a hub-and-spoke network. They were Sprint out of New Jersey, Ameritech in Chicago, Pacific Bell in San Francisco and - the one I was familiar with - MAE-East in Virginia. 

Rick Howard: [00:07:26]  By the late 1990s, a bandwidth problem emerged called the Chicago problem. If my business lived in Minneapolis, Minnesota, and I wanted to send an email to another business in Minneapolis, that traffic would have to go all the way to Chicago before it would get delivered. The solution that emerged was something called internet exchanges, located in the regional areas. They would provide the local connectivity and would only send traffic to the big hubs when needed. 

Rick Howard: [00:07:53]  All right, so fast-forward now to the early 2010s or so. Content providers like Google, Netflix, Akamai and others decided it was in their best interests to build their own high-speed networks to support their own customers. Companies like Netflix didn't want to rely on the big service providers to deliver their content for them. They started laying their own fiber across the globe. And according to Adam Satariano of The New York Times, these content providers own over 50% of the fiber deployed across the world, compared to the traditional network providers like AT&T and others that you might have expected to own it. 

Rick Howard: [00:08:30]  And here's the kicker - the content providers would plug their networks straight into the internet exchanges, a process called peering. So if I'm a G Suite customer, instead of my traffic going to my internet service provider, to the local internet exchange, to one or more of the big internet hubs in order to reach the front door of the Google network, because of the peering network, it goes to the internet service provider, to the internet exchange and right into the back door of the Google network. Think of this as sort of a short-circuit option for content providers, which brings us, finally - I know this is a long-winded way to get here - to what SASE is. 
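
To see why that short circuit matters, here is a toy Python comparison of the two paths just described. The hop names and per-hop latencies are invented for illustration only; the takeaway is simply that peering at the exchange removes the trip through the big hubs.

    # Toy comparison of the two traffic paths described above.
    # Hop names and per-hop latencies (in milliseconds) are invented.

    path_without_peering = [
        ("internet service provider", 5),
        ("regional internet exchange", 5),
        ("big internet hub", 20),
        ("second big hub", 20),
        ("Google network front door", 10),
    ]

    path_with_peering = [
        ("internet service provider", 5),
        ("regional internet exchange", 5),
        ("Google network via peering port", 5),  # the "back door"
    ]

    def total_latency(path):
        """Add up the made-up per-hop latencies along a path."""
        return sum(ms for _, ms in path)

    print("without peering:", total_latency(path_without_peering), "ms")  # 60 ms
    print("with peering:   ", total_latency(path_with_peering), "ms")     # 15 ms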

Rick Howard: [00:09:07]  Like I said, Gartner coined the SASE phrase in August of 2019, but companies like OPAQ and Cato and even my old employer, Palo Alto Networks, have a version of this service that they've been running for a few years. The SASE vendors install a security stack into the same data centers as the internet exchanges. In fact, they become kind of souped-up internet exchanges. And then they peer with the same content providers like Google and Netflix. The customer's first hop, regardless of the data island they are sitting on, is through one of these SASE vendors' nodes. The SASE vendor uses the same shared responsibility model as cloud providers do. They maintain and secure the physical facilities and keep the blinking lights running on the network and security gear. The customer keeps the security policy up to date on the security stack. 

Rick Howard: [00:09:52]  So for example, let's say that your organization is a Google Suite shop. Your employees use all of the Google apps to get their work done - Gmail, Google Drive, Google Calendar, Google Docs, Google everything. By using a SASE vendor that peers with the Google network, your employees will go directly to whatever data and workload source they require without having to traverse the entire internet to get to Google's front door. Your second hop is not to the internet; it is to the Google network. 

Rick Howard: [00:10:20]  The key is that no matter where the data originates from - the traditional perimeter, the data center, employees' mobile devices or workloads in cloud environments like IaaS, PaaS and SaaS - the data is traversing the same security stack with the same policy that you maintain. The benefits are amazing. Customers move their complex network management off their premises and let their SASE vendor manage it for them. They move their complex security orchestration to their SASE vendor, too. It is the perfect solution for small- and medium-sized businesses that do not have the resources to manage complex environments. And today, it's a pretty good solution for Fortune 500 companies for their nonessential applications. And I predict, within five years, SASE will be good enough for them, too. 

Rick Howard: [00:11:08]  Secure Access Service Edge, cloud delivered, is a fundamental shift in internet data flow on the same level of significance as standardizing on TCP/IP, installing BGP routing and instantiating content provider peering relationships. The interesting part is that, for the first time in the internet's evolution, the security solution is built in as the main feature for customers. You all know that usually security features are bolted on at the end. Security is the main reason you will deploy a SASE service. And if done correctly, the job of orchestrating your internal security stacks becomes less complex. We might actually have a chance to achieve our two security objectives - deploying defensive campaigns automatically for every phase of the intrusion kill chain and deploying a realistic and useful Zero Trust policy. 

Rick Howard: [00:11:57]  So thanks for letting me ramble on about this exciting new way of consuming security services. If you agree or disagree with anything I have said, hit me up on LinkedIn or Twitter. We can continue the conversation there. "CSO Perspectives" is edited by John Petrik and Tim Nodar and executive produced by Peter Kilpe. Sound design and mixing by the insanely talented Elliott Peltzman. And I am Rick Howard. Thanks for listening to "CSO Perspectives." And be sure to look for more Pro+ content at the thecyberwire.com/pro website.