Securing containers and serverless functions: around the Hash Table.
Rick Howard: Hey, everybody. Rick here. Some of you may know that my last CSO gig was with Palo Alto Networks. One of my responsibilities was to manage the in-house cyberthreat intelligence team, Unit 42. In 2013, containers and serverless functions were just starting to gain popularity. The Unit 42 team fully anticipated that cyber-adversaries would try to leverage this new client-server architecture. The problem for Unit 42, though, was that we didn't have any telemetry around that space. It wasn't that we weren't seeing bad guy activity. We just didn't have the radar in place to even collect it. Around 2016, customers started asking Palo Alto Networks about how the security platform was going to protect these containers in the cloud. Because of that, the company went on an acquisition hunt to fill that gap in the platform portfolio. The leaders eventually settled on two container security companies, RedLock and Twistlock. Unit 42 was ecstatic. Integrating those two services into the platform was going to provide all the telemetry they needed. It was just a matter of time before we started to get some visibility into bad guy activity in this space.
Rick Howard: Now, I left Palo Alto Networks at the end of 2019, and I'm still waiting to see any reports on bad guy activity around cloud containers, not just from Unit 42 but from any of the cloud security providers, like Cisco, Fortinet, Check Point and many others. Even the MITRE ATT&CK framework, which I'm a huge fan of, by the way - it is the most comprehensive open-source collection of adversary tactics, techniques and procedures in the world right now, and if you are not using it to establish your intrusion kill chain first-principle prevention strategy, you are probably failing; for more info on that topic, see Season 1, Episode 8, but I digress - even the ATT&CK framework is silent about any container-related tactics, techniques and procedures. Doesn't that seem odd?
Rick Howard: My name is Rick Howard. You are listening to "CSO Perspectives," my podcast about the ideas, strategies and technologies that senior security executives wrestle with on a daily basis. Today, we are talking to two Hash Table experts on the subject of containers and serverless functions, the absence of adversary activity in a space and whether or not CISOs and CIOs should be prioritizing container and serverless function security over other risks.
Rick Howard: Roselle Safran is the CEO and founder of a small startup called KeyCaliber. I have known Roselle for a number of years. She has a first-class cybersecurity mind and, in a former life, worked as a government cyber operator in multiple functions. Now as a CEO, she brings a unique perspective to the Hash Table compared to most of the other Hash Table members, who are generally holding CSO positions. I wanted to talk to her about containers and serverless functions as a technology strategy to deliver her product services. We are broadcasting this episode right in the middle of the COVID pandemic. Before we started talking about containers and how to secure them, I asked her how much harder it was to grow her business during this weird time. To use a sports metaphor from the world of NASCAR, the commercial world is currently operating in white-flag conditions. Keep driving, but slow down.
(SOUNDBITE OF ARCHIVED RECORDING)
Unidentified Person #1: White flag, one lap to go.
Unidentified Person #2: That's it right there. You're in the rhythm.
Unidentified Person #1: White flag.
Unidentified Person #2: Pull into the white, man. Pull into the white.
Rick Howard: As expected, Roselle is taking advantage of that slowdown. Instead of waiting the virus out, she has doubled down on development investment.
Roselle Safran: The beauty of it is that we've been heads down with development, and it's been a good time for development. Realistically, our competitors weren't getting further ahead of us, really, in the last six months, especially when so many organizations kind of put a pause on spending. And so it's given us, in some ways, the opportunity to kind of catch up with our competitors. And we've been able to build really fast. So we had a demo ready in three months - because I wanted it ready for RSA, of course - and we - and I had lined up a bunch of meetings to get feedback and then, over the next few months, continued getting lots of excellent feedback. And in some ways, I think it was easier to get those calls on the calendar because people weren't traveling as much. And so we had about a hundred conversations with potential customers and other industry experts so that we had a very solid understanding of what other capabilities we needed to add and what we needed to have for MVP. And so with all of that, we were able to launch our MVP eight months after we started the company, which is insanely fast.
Rick Howard: I asked Roselle why containers were attractive to her as a business strategy compared to other more established client-server technology, like virtual machines deployed either in the customers' data center or in the cloud running companies' applications.
Roselle Safran: So we're not a straight SaaS product where you have one very large instance, and all of the customers are just logging into that same environment. We give each customer its own separate instance, and that is primarily based on security and privacy concerns. And so the deployment is all theirs, and so with that said, we can deploy in any which way. We can deploy in our cloud, in their cloud or even on-prem. So by having the whole deployment in containers, we can do any of the three types of deployments, whether it's on-prem, our cloud or their cloud, and it's the same code base. And that just gives us a ton of flexibility with how we build. And beyond that, building in containers has just been a good fit for the type of capabilities that we have. We have several different modules, so it makes sense to make the product more modular and keep it in separate containers. And that also allows for us to essentially contain complexity because there are certainly some of our containers, especially when we get to the analytical side of it, that are quite complex. And by using a container structure, we can keep that complexity constrained to that one environment, and it doesn't spill over into other areas of the application.
Rick Howard: In the previous episode, I talked about the evolution of client-server architecture and how containers and serverless functions are the current next step in that journey. To refresh at a high level, containers are hermetically sealed boxes of software that run on a bare-bones kernel of an operating system. Because of that technical arrangement, CIOs can run several containers - a lot, actually - on a single bare-bones virtual machine. This is different from the previous client-server strategy of running one complete operating system, not bare-bones at all, for every application you need. Serverless functions take that idea to the extreme. Instead of the CIOs having to manage virtual machine infrastructure themselves, either in their data centers or in the cloud, they tell their DevOps teams to store their code in a cloud provider service like Amazon's Lambda functions, Google's Cloud Functions or Microsoft's Azure Functions. I was talking to Bob Turner about this. He's the CISO for the University of Wisconsin at Madison and, at this point in the "CSO Perspectives" podcast, has become a regular at the Hash Table. He thinks that serverless functions will be the ultimate winner over containers.
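To make the serverless idea concrete: a function-as-a-service workload really is just a single function handed to the provider, which runs it on demand. Here is a minimal Python sketch in the shape of an AWS Lambda handler - the event payload and the temperature-alert logic are illustrative assumptions, not any real service's contract:

```python
import json

def handler(event, context=None):
    """Lambda-style entry point: the provider invokes it on demand,
    then the runtime can disappear. `event` carries the trigger
    payload - here, a hypothetical warehouse sensor reading."""
    reading = event.get("temperature_c")
    status = "alert" if reading is not None and reading > 30 else "ok"
    return {"statusCode": 200, "body": json.dumps({"status": status})}

# Simulating one invocation locally:
print(handler({"temperature_c": 35}))
```

The developer ships only the function body; the cloud provider supplies and scales everything underneath it, which is exactly the infrastructure the CIO no longer manages.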
Bob Turner: I think that we are - as we are maturing, we're going to see that serverless FaaS (ph) container is going to be the place to go. I don't know if we're at a point where we can even put a pin in the timeline on when that could be. We have a lot more to learn about, you know, using the containers appropriately, using the repositories appropriately and understanding the code well enough to be able to push from building inside of, you know, a perfectly secured container to building inside of a serverless environment.
Rick Howard: Bob may well be right about that. Me, though - I'm not quite as sure. Serverless functions definitely have their place for a specific kind of automation, like, for example, monitoring IoT devices in a warehouse. Here's Bob again.
Bob Turner: Think about it in, say, the concept of IoT integration. You know, you think of an application you're building, needing to go and check on door status somewhere. So at a specific point, it kind of opens itself up and allows data input from all of the designated sources, right? You're using the server as the mail collector, you know? They're collecting, you know, sensor data, sensor status, but it's also in that serverless environment. It's a piece of code that will work and always collect data at a certain time, but it's not always going to make sure that the data is valid and will run inside of the container.
Rick Howard: But as I mentioned in the last episode, serverless functions aren't good at maintaining state. They are designed to start up, do a task, like collect telemetry on equipment in the warehouse, deliver it somewhere and then disappear. We may still need containers to make sense of that telemetry collection.
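That start-up, do-a-task, disappear lifecycle can be sketched as follows. Because a serverless function can't rely on local variables surviving between invocations, anything that must persist has to go to an external store. In this sketch a plain dict stands in for a real service like DynamoDB, and the door-sensor events are made up for illustration:

```python
# Stand-in for an external store (e.g., DynamoDB); a real function
# would call the provider's SDK here instead of a module-level dict.
STORE = {}

def collect(event, context=None):
    """Stateless collector: read one sensor event, persist it, exit.

    A counter kept in a local variable would vanish when the function
    finishes, so the running total lives in the external store."""
    device = event["device_id"]
    record = STORE.setdefault(device, {"count": 0, "last": None})
    record["count"] += 1
    record["last"] = event["value"]
    return record["count"]

collect({"device_id": "door-1", "value": "open"})
collect({"device_id": "door-1", "value": "closed"})
print(STORE["door-1"])  # {'count': 2, 'last': 'closed'}
```

Making sense of everything that accumulates in that store - correlation, analytics, validation - is the longer-running, stateful work where a container still fits better.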
Bob Turner: So the bridge we need to make in between the two concepts is some sort of a way to, you know, validate that the data is expected.
Rick Howard: To secure these client-server programming techniques, the first thing to remember is that regardless of whether it is a container or a serverless function, it is all just code. Whatever your developers were doing before they started building containers and serverless functions, in terms of the security development life cycle, they should continue to do that.
Bob Turner: And I think that in general, the way to accomplish security for products being created in a containerized environment is not that much different than securing applications that are just simply built on the desktop. But understand that applications have to navigate firewalls - they have to navigate monitoring.
Rick Howard: The second thing to consider is to determine if you have complete visibility over the container apps and serverless functions running within your organization. These things are so easy to set up with your local cloud provider, your own team may be deploying them without the knowledge of the CIO and the CISO. This sounds very much like another version of shadow IT taking over, so at the very least, you might want to get visibility under control.
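One lightweight way to get that visibility under control is to continuously reconcile what is actually running against an approved inventory. This Python sketch uses hypothetical image names; in practice the deployed list would come from something like `docker ps` output or a cloud provider's API rather than a hard-coded list:

```python
def find_shadow_workloads(deployed, approved):
    """Return deployed container images or functions that have no
    owner of record in the approved inventory - i.e., shadow IT."""
    return sorted(set(deployed) - set(approved))

# Hypothetical inventories for illustration:
deployed = ["billing-api:1.4", "etl-job:0.9", "side-project:latest"]
approved = ["billing-api:1.4", "etl-job:0.9"]
print(find_shadow_workloads(deployed, approved))  # ['side-project:latest']
```

The point isn't the set arithmetic; it's that the CIO and CISO need a feed of what's deployed before any comparison like this is even possible.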
Bob Turner: Well, so this goes back to the CIS critical security controls, right? If you know your hardware, know your software and know your common configurations, that's - you know, that's a good percentage of the battle. That's over 10% of the war is won by having that in your back pocket. And I think what we don't know here is, you know, reviewing the results of what goes on inside of, you know, a container environment or comes as part of a GitLab registry - understanding how to understand from the security perspective - in other words, scanning, understanding what the security scan results are telling us and then moving that from dev to production, moving that understanding and that scanning ability so that we now have a consistent picture of the application - as it grows from useful code to actual applications, it's doing useful work.
Rick Howard: The good news here is that if you are looking for ways to mature your DevSecOps team, this might be one way to do it. Since this is developing code and securing code and deploying code as infrastructure, that is the very definition of what DevSecOps is.
Bob Turner: The sec part of that is literally a bolt-on to the original concept, and that's why I think that as we're continuing to work in that arena, we have the opportunities to - you know, to refine that to where it, you know - there's no slash between the dev and the sec. There's - it's just DevSec - happens at the same time, and then it's - the ops part of it is just feeding how exactly we do that.
Rick Howard: But we are not there yet. The development community and the security community haven't quite figured out how to get together on this. In many organizations, the two groups are separated by a large margin. Bob says that the big problem to overcome is to get both sides on the same systems - in other words, interoperability.
Bob Turner: Interoperability is really the big thing, and we have a project in place right now that is addressing all forms of interoperability, from identity and access management all the way to, you know, common code, common use of GitHub-like services on campus. And I really think it's a level of maturity - is going to be the jump.
Rick Howard: One way to make that merger happen more quickly might be to centralize operations. Most organizations of any size still run separate IT shops and separate security shops. There are exceptions, for sure, but different teams running almost independently are more the norm. This emphasizes the uniqueness of both groups and causes friction and disagreements when coordination should be happening. This is the complete opposite of what we are trying to achieve with the DevSecOps philosophy. In Bob's view, the management of infrastructure should not be a distributed exercise across multiple teams within an organization. He says that it should be at the center of the organization's information universe.
Bob Turner: I don't know if you remember Jerry Tuttle, Gulf War, the Copernicus Theory - right? - that the operator was central to the information universe.
Rick Howard: He is referring to the famous Vice Admiral Jerry O. Tuttle, who almost singlehandedly digitally transformed the U.S. Navy from pencil and paper back in the early 1990s to a modern-day warfighting machine. From his Washington Post obituary, quote, "Before Admiral Tuttle, the Navy used manual Morse, teletypes and paper charts to drop dumb bombs from airplanes over short distances. When he finished, the Navy had the Global Positioning System, Aegis, modeling and simulation, the full use of 3D communications, satellite constellations, digital workstations and altogether new weaponry that flew missiles up and down mountainsides and landed a thousand miles inland with accuracy that astounded the world," end quote. In the early 1990s, the U.S. Navy's communications and intelligence systems were a hodgepodge of stovepipe relics from the 1970s that didn't talk to each other and were designed to facilitate communications between machines, not inform operators on the ground. Admiral Tuttle changed all of that. He got his idea from the famed Renaissance-era mathematician and astronomer Nicolaus Copernicus, who overturned the world's view that the sun and planets revolve around the Earth - in fact, the planets, including the Earth, revolve around the sun. Along that line of thought, Tuttle advocated placing the operator and operations center in the middle of the information universe with all communications and computer systems supporting the needs of the warrior. Tuttle's Copernicus moment simplified the approach to developing technology programs and forced a paradigm change that impacted the world.
Bob Turner: In our case, our Copernicus is the academy and the researchers. They have to be central to what we do. But in the IT part of the business, it has to be net ops, system operations and security operations as center to the IT organization and providing the services that go out to our customers.
Rick Howard: Regardless of all of that, the question remains as to whether there exists a high risk of material impact to your organization because you use containers or serverless functions. In other words, should you drop everything in order to focus resources on securing these digital assets? The answer, at least for today, is probably not. The reason that the MITRE ATT&CK framework doesn't list any tactics, techniques or procedures that leverage containers is because right now, it is too hard to do - not impossible, probably, but hard. Adversaries have many other ways to destroy or steal data that are not nearly as complicated.
Roselle Safran: Well, I mean, some of it is just the infrastructure. By its nature, it implicitly has some defenses in place. And maybe that's just because it's newer technology, and so that was more built into it than with some older technology. For example, from the perspective of the memory and making sure that the memory is protected - the NX bit, so an attacker can't execute from the stack, and ASLR, where the stack is in random locations. It forces the attacker to have to go to, you know, return-oriented programming attacks, so they can't even get to softball attacks. And so you have that type of infrastructure that's already in place with it, and so that helps.
Rick Howard: This doesn't mean that hackers will never try to leverage this new client-server architecture. It just means that they aren't right now. If your organization has limited cyberdefense resources and you still have work to do preventing all the things we already know hackers do - the things currently listed in the MITRE ATT&CK framework - then diverting security resources away from that to containers and serverless functions is probably not a smart move. Instead, just keep your eye on the situation. Encourage your container developers to follow the same software development life cycle best practices as the rest of the developer team, and make sure that you are monitoring who and what is accessing all the container applications and serverless functions. That way, when the inevitable happens and the hacker community turns its attention to this software infrastructure, you won't have to start from scratch. I will let Roselle have the last word on this.
Roselle Safran: Down the line, are there going to be container attacks? Yeah, probably. I mean, attackers find a way to hack into everything.
Rick Howard: And that's a wrap. If you agreed or disagreed with anything I have said about containers, serverless functions or really anything, hit me up on LinkedIn, and we can continue the conversation there. Next week, we will be talking about how SOCs are thinking about SOAR - or Security Orchestration, Automation and Response. You don't want to miss that. The CyberWire's "CSO Perspectives" is edited by John Petrik and executive produced by Peter Kilpe. Our theme song is by Blue Dot Sessions, and the mix of the episode and the remix of the theme song was done by the insanely talented Elliott Peltzman. And I am Rick Howard. Thanks for listening.