Securing containers and serverless functions.
Rick Howard: Hey, everyone. Rick Howard here. The last security conference I attended in person was RSA 2019. If I stood up in one of the sessions and swung a cat by its tail in a circle around me...
(SOUNDBITE OF CAT SCREECHING)
Rick Howard: ...The chances were pretty good that I would hit at least two or three dev ops tribe members who were almost drooling with excitement about the possibilities of containers and serverless functions. As a security guy, I knew these grand ideas grew out of the Unix world back in the late 1970s and got a power-up boost in the early 2010s as cloud technologies became popular. But I never took the time to learn why they are important to the software development world. And it didn't occur to me until just recently that there were any significant security issues that I needed to consider.
Rick Howard: My name is Rick Howard. You are listening to "CSO Perspectives," my podcast about the ideas, strategies and technologies that senior security executives wrestle with on a daily basis. On this show, we are talking about why containers and serverless functions are the next evolutionary step in the client-server paradigm, why the dev ops community loves them and why the network defender community needs to secure them.
Rick Howard: Now, I'm not a coder by any means. If I am anything, I am a hack. Oh, I taught programming at the college level in the ‘90s, but the programs I wrote for class weren't what anybody would call elegant. I believed in the power of the CPU. And if I ever got a program to work, it was mostly because I brute forced it. I'm just saying that the nuances of why containers and serverless functions were important were a bit lost on me. I put them in the bucket of interesting programming techniques and moved on with my day. But by 2015 or so, with the dev ops movement in full swing and the journey to the cloud significantly begun by many organizations, the security community started to make noises that these containers and serverless functions weren't merely programming tools. They essentially add more attack surface for a potential adversary to leverage and require the same first-principle cybersecurity protections that we would apply to any other digital asset within our organization.
Rick Howard: That caught me by surprise. It wasn't until I went back through the history of these dev ops tools that it started to make sense. I then noticed what was really going on. Containers and serverless functions are the current evolutionary step in our collective efforts to make it easy for one computer to connect to another in order to perform some task. Now, when I put it that way, it sounds so boring. But almost from the beginning of the computing era - I'm talking about the 1960s here - the newly self-described computer scientists began a quest to build machines that made it easier for multiple users to connect to other machines from wherever they were in order to get some work done. It is the evolution of the client-server architecture. In those early days, computers were batch-oriented. That means that only one person at a time could give instructions for the computer to execute. And the various designs back then, like the PDP-1 and the IBM System/360, weren't compatible with each other. If you wrote code for the PDP-1, it wouldn't run on the IBM System/360 and vice versa.
Rick Howard: Now, many approaches to this problem have been tried over the years. And some are still in use. You can remotely log into a distant computer and execute commands as if you were directly connected to the computer itself. You can run virtual machines on your computer that run a different operating system than your base system - like, for example, running a Windows virtual machine on your Mac or running a Debian Linux system within your Windows system. You can partition your existing machine to run several different mini machines, each with its own IP address and each appearing to the outside world as a separate machine. Each of those mini machines can run their own virtual machines, too, in an ephemeral M.C. Escher kind of way. Now, that is the cyberspace equivalent of going down the "Alice In Wonderland" rabbit hole. You can even install a runtime environment - say, Java - that interprets code written in one environment so that you can run it in a different environment. You can use these various configurations on your personal workstation, on machines in the data center and now on virtual machines in one or more of the big cloud providers' networks. But the next step has been the advent of containers and serverless functions. They represent the current thinking on computer compatibility within the model of the client-server paradigm.
Rick Howard: From a CIO perspective, the attraction to containers and serverless functions comes from a need for efficiency and compatibility. In the 1980s, if CIOs wanted to deploy a new application, they would have to purchase a big iron server. And by big iron, I mean a beefy hardware computer with a fast CPU, lots of RAM and plenty of disk space. Install an operating system on it. Install the application on it. Deploy it to the data center. And then provide the system access to the network. For every application required, CIOs would have to rinse and repeat that process, which gets expensive quickly, takes a lot of time to execute and creates a lot of resources to manage. And over time, you start to run into compatibility issues, too. As you upgrade the operating system with newer versions and patches for bugs and security holes, the original deployed application becomes more and more brittle. The deployed application starts crashing into incompatibilities with the new upgrades. It is the reason you still sometimes see the infamous Windows XP blue screen of death as you walk around airport terminals. The application developers found it easier to just keep running the extremely old operating system rather than try to keep their applications up to date.
Rick Howard: In the 1990s and the advent of virtual machines, CIOs gained some efficiency. They still had to deploy one operating system for every application, but at least they could deploy many virtual operating systems onto one set of big iron hardware. CIOs didn't have to manage as many hardware platforms as they used to. Let's call that an incremental improvement in efficiency.
Rick Howard: When cloud solutions became viable in the mid-2000s or so, the solution became even more efficient because they could now deploy completely virtual systems. They transferred the burden of managing the big iron hardware to the cloud provider. And since it was all software at this point, the organization could reap the benefits of a dev ops philosophy, infrastructure as code. CIOs still had the same compatibility issues, though. Upgrades and patch fixes even in virtual environments caused application brittleness. Containers and serverless functions became methods to decrease that brittleness.
Rick Howard: So after saying all that, what is a container? Well, containers are a brilliant evolution of operating system functionality. If the CIOs are only deploying one application on a server, why do they need the complete functionality of a modern-day operating system? If they are deploying a financial application, let's say, why do they care if Nvidia upgraded its own video drivers to improve the gaming experience of 7-year-olds playing "Fortnite?" The answer is they don't. With containers, you build a virtual standalone box of software that contains only the application, plus the software libraries and other binaries it requires, plus the operating system pieces it depends on and then any configuration files needed to run it. That's it. The box is hermetically sealed against any future operating system upgrades or patches. Every container you build this way shares the base operating system or the kernel but none of the other flotsam and jetsam features that always come with the operating system package. This makes containers small, typically tens of megabytes in size, compared to virtual machines that run closer to several gigabytes in size. Since they share the same operating system kernel, they are quicker to boot, too. You don't have to wait for the entire operating system to start in order to run your application.
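Rick Howard: To make that "sealed box" idea concrete, here is a rough sketch of what a container image definition might look like - a hypothetical Dockerfile for a small Python service. The file names (app.py, config.yaml, requirements.txt) are made up for illustration. Notice that it declares only a slim base layer, the libraries the application needs and the application plus its configuration - none of the rest of the operating system ships in the box:

```dockerfile
# Minimal base layer: just enough userland for Python, sharing the host's kernel
FROM python:3.12-slim

# Only the software libraries the application requires
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# The application itself plus its configuration file
COPY app.py config.yaml ./

# One container, one application
CMD ["python", "app.py"]
```

The resulting image is hermetically sealed in the sense Rick describes: upgrades to the host operating system don't reach inside the box, although the libraries and binaries packaged in the image still need their own patching.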
Rick Howard: So if that's a container, what's a serverless function? The serverless function name causes a bit of confusion in the industry. Of course, there are servers in this evolution of client-server architecture. They don't disappear. They have to be running somewhere. The point is they are serverless for the customer. The customer doesn't have to manage the server at all. The cloud provider does. Serverless functions take the idea of containers to the extreme. If reduced-size, hermetically sealed software boxes are a good idea compared to, say, virtual machines, why not eliminate them altogether? Instead of maintaining an operating system and building your own containers, why not just let the cloud provider handle all the admin? Developers write the code, the functions, in other words, and deploy them into the cloud provider system for future execution. This is a similar idea to how programmers write code in the first place. They don't write one long program that does everything. They break the program into smaller pieces of functionality. Each piece is a function call.
Rick Howard: For example, if I need a program to read some data, sort it and then write it to disk, I don't write one program to do all these tasks. I write three functions - read, sort and write - and then use the main program to call each function in turn. With serverless functions, this takes a basic programming technique and moves it to the cloud. Most of the major cloud providers have some version of this functionality. Amazon calls theirs Lambda functions. Google calls theirs Cloud Functions. And Microsoft calls theirs Azure Functions. One limitation, though, is that you wouldn't want to use serverless functions in applications that need to preserve state. What I mean is that if I need to keep track of all the changes to the data in my read, sort and write program, serverless functions are probably not the tool for this. These things are designed to be ephemeral. They start up, do a task and disappear. That is their beauty. For the Unix graybeards out there, they are similar to the Unix system daemons in that way, except that the system call isn't to the operating system. It is to the cloud provider somewhere.
Rick Howard: Since the 1980s, the options presented to the CIOs for client-server architecture have rapidly evolved. In the 1980s, we got the first client-server solutions - one big iron server, one operating system and one application. In the 1990s, virtual machines started to appear - one big iron server, multiple virtual operating systems, one application per operating system. In the mid-2000s, we got infrastructure as a service and platform as a service - one virtual machine in the cloud, multiple virtual operating systems, one application per operating system, all in the cloud. By 2013, we got containers - one virtual machine in the cloud, one virtual operating system, multiple hermetically sealed containers running one application each. And on a side note, you can also run containers and operating systems in the data center, too. We've been doing that since the mid-2000s. By 2015, we got serverless functions - no virtual machine in the cloud, no virtual operating systems, multiple functions stored in the cloud for future use.
Rick Howard: Up to this point, I've only been talking about the CIO. Let's bring the CISOs into it. I've tried to make the case that containers and serverless functions aren't simply the most recent dev ops technique. Since they exist as code on the Internet, they represent an attack surface that wasn't as visible when the applications were just stored in the data center. Common sense dictates that you have two areas to consider when securing these new digital assets - code at rest and zero trust. When I say code at rest, I'm saying that all the things you did to write secure code before you used containers and serverless functions still apply here. For containers, even though you are working in a hermetically sealed software box protected from the rest of the operating system, those software libraries, binaries, essential operating system pieces and configuration files that are in the box still have to be patched from time to time. The bigger bang for your buck, though, will be to limit the number of computers, people and networks that can access your containers and serverless functions to only the bare minimum needed to provide the service. This is zero trust for client-server architecture. It wouldn't hurt to have some kind of always-on monitoring service, too, for your container and serverless function environments just to ensure you know exactly what entities are accessing this new infrastructure.
Rick Howard: As I said at the top, containers and serverless functions are the latest evolutionary step in the digital world's pursuit of client-server architecture. The big beneficiary of this innovation is the dev ops and dev sec ops movement. Infrastructure as code is in all of our futures, whether you are pursuing it at 100 miles an hour today or still just thinking about it for some future. Regardless, the world is moving in that direction. Because virtual machines, containers and serverless functions are all just software programs most likely running on somebody else's computers, they have been the tools of choice for the early dev ops adopters trying to automate their infrastructure. They will be yours, too, once you decide to move down the dev ops path.
Rick Howard: And that's a wrap. If you agree or disagree with anything I have said, hit me up on LinkedIn or Twitter. And we can continue the conversation there. Next week, I have invited our pool of CyberWire experts to sit around the Hash Table and discuss how they secure their containers and serverless functions. So don't miss that. The CyberWire's "CSO Perspectives" is edited by John Petrik and executive produced by Peter Kilpe. Our theme song is by Blue Dot Sessions, remixed by the insanely talented Elliott Peltzman, who also does the show's mixing, sound design and original score. And I am Rick Howard. Thanks for listening.