Securing containers and serverless functions.
By Rick Howard
Nov 2, 2020

CSO Perspectives is a weekly column and podcast where Rick Howard discusses the ideas, strategies and technologies that senior cybersecurity executives wrestle with on a daily basis.

The last security conference I attended in person was RSAC 2019. If I’d stood up in one of the sessions and swung a cat by its tail in a circle around me, the chances were pretty good that I would hit at least two or three DevOps tribe members who were almost drooling with excitement about the possibilities of containers and serverless functions. As a security guy, I knew these grand ideas grew out of the Unix world back in the late 1970s and got a power-up in the early 2010s as cloud technologies became popular. But I never took the time to learn why they were important to the software development world, and it didn’t occur to me that there were any significant security issues that I needed to consider.

Importance to the network defender.

I’m not a coder by any means. If anything, I’m a hack. Oh, I taught programming at the college level in the 1990s, but the programs I wrote for class weren’t what anybody would call elegant. I believed in the power of the CPU, and if I ever got a program to work, it was mostly because I brute-forced it. I’m just saying that the nuances of why containers and serverless functions were important were a bit lost on me. I put them in the bucket of “interesting programming technique” and moved on with my day.

By 2015 or so, with the DevOps movement in full swing and the journey to the cloud significantly begun by many organizations, the security community started to make noises that these containers and serverless functions weren’t merely programming tools. They essentially add more attack surface for a potential adversary to leverage and require the same first principle cybersecurity protections that we would apply to any other digital asset within our organization. That caught me by surprise. It wasn’t until I went back through the history and evolution of these DevOps tools that it started to make sense. 

What are these things anyway?

I then noticed what was really going on. Containers and serverless functions are the current evolutionary step in our collective efforts to make it easy for one computer to connect to another in order to perform some task. When I put it that way, it sounds so boring. But almost from the beginning of the computing era—I am talking about the 1960s here—the newly self-described computer scientists began a quest to build machines that made it easier for multiple users to connect to other machines from wherever they were in order to get some work done. It is the evolution of the client-server architecture.

In those early days, computers were batch oriented. That meant that only one person at a time could give instructions for the computer to execute, and the various designs, like the PDP-1 and the IBM System/360, weren’t compatible with each other. If you wrote code for the PDP-1, it wouldn't run on the IBM System/360 and vice versa. 

Many approaches to this problem have been tried over the years and some are still in use. You can remotely log in to a distant computer and execute commands as if you were directly connected to the computer itself. You can run virtual machines on your computer that run a different operating system than your base system, for example, running a Windows virtual machine on your Mac or a Debian Linux system within your Windows system. You can partition your existing machine to run several different mini-machines, each with a different IP address, which appear to the outside world as separate machines. Each of those mini-virtual-machines can run their own virtual machines in an ephemeral M.C. Escher kind of way. That is the cyberspace equivalent of going down the Alice in Wonderland rabbit hole. You can even install a runtime environment, say Java, that interprets code written in one environment so that it can run in a different environment.

You can use these various configurations on your personal workstation, on machines in the datacenter, and now on virtual machines in one or more of the big cloud providers’ networks. Containers and serverless functions represent the digital world’s next step in computer compatibility within the client-server paradigm.

Solve for efficiency and compatibility.

From a CIO perspective, the attraction to containers and serverless functions comes from a need for efficiency and compatibility. In the 1980s, if CIOs wanted to deploy a new application, they would have to purchase a big iron server—by big iron, I mean a beefy hardware computer with a fast CPU, lots of RAM, and plenty of disk space—install an operating system on it, install the application on it, deploy it to the datacenter, and then provide the system access to the network. For every application required, CIOs would have to rinse and repeat. That process gets expensive quickly, takes a lot of time to execute, and leaves a lot of resources to manage. Over time, you start to run into compatibility issues, too. As you upgrade the operating system with newer versions and patches for bugs and security holes, the originally deployed application becomes more and more brittle, crashing into incompatibilities with each new upgrade. It is the reason you still sometimes see the infamous Windows XP “blue screen of death” as you walk around airport terminals. The application developers found it easier to keep running the extremely old operating system rather than try to keep their applications up to date.

In the 1990s and with the advent of virtual machines, CIOs gained some efficiency. They still had to deploy one operating system for every application, but at least they could deploy many virtual operating systems onto one set of big iron hardware. CIOs didn’t have to manage as many hardware platforms as they used to. Let’s call that an incremental improvement in efficiency. 

When cloud solutions became viable in the mid-2000s, CIOs gained even more efficiency because they could deploy completely virtual systems and transfer the burden of managing the big iron hardware to the cloud provider. And since it was all software at this point, the organization could reap the benefits of the DevOps philosophy: infrastructure as code. CIOs still had the same compatibility issues, though. Upgrades and patch fixes, even in virtual environments, caused application brittleness. Containers and serverless functions became methods to decrease that brittleness.

What is a container?

Containers are a brilliant evolution of operating system functionality. If the CIOs are only deploying one application, why do they need the complete functionality of a modern-day operating system? If they are deploying a financial application, let’s say, why do they care if Nvidia upgraded its own video drivers to improve the gaming experience of seven-year-olds playing Fortnite? The answer is, they don’t. 

With containers, you build a virtual stand-alone box of software that contains only the application, plus the software libraries and other binaries it requires, plus the operating system pieces it depends on, and then any configuration files needed to run it. That’s it. The box is hermetically sealed against any future operating system upgrades or patches. 

Every container you build this way shares the base operating system kernel, but none of the other flotsam and jetsam “features” that always come along in the operating system package. This makes containers small—typically tens of megabytes in size—compared to virtual machines that run closer to several gigabytes in size. Since they share the operating system kernel, they are quicker to boot too. You don’t have to wait for the entire operating system to start in order to run your application.
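As a sketch of the idea, the “sealed box” recipe above can be written out as a simple build manifest. This is purely illustrative Python, not the real Docker or OCI image format; every name in it is hypothetical.

```python
# Illustrative sketch only: a hypothetical manifest for a container image,
# mirroring the "sealed box" recipe above. Not the real Docker/OCI format.

def build_manifest(app, libraries, os_pieces, config_files):
    """Describe the sealed box: the app plus everything it depends on."""
    return {
        "application": app,                # the one application being shipped
        "libraries": sorted(libraries),    # required libraries and binaries
        "base_layers": sorted(os_pieces),  # only the OS pieces the app needs
        "config": sorted(config_files),    # configuration files for runtime
    }

manifest = build_manifest(
    app="financial-app:1.0",
    libraries=["libssl", "libc"],
    os_pieces=["minimal-base"],
    config_files=["app.conf"],
)
print(manifest["application"])  # financial-app:1.0
```

The point of the structure is that nothing the application needs lives outside the box, so future operating system upgrades can’t break it.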

What is a serverless function?

The name “serverless” has caused a bit of confusion in the industry. Of course there are servers in this evolution of client-server architecture. They don’t disappear; they have to be running somewhere. The point is that they are serverless for the customer. The customer doesn’t have to manage the server. The cloud provider does.

Serverless functions take the idea of containers to the extreme. If reduced-size, hermetically sealed software boxes are a good idea compared to virtual machines, why not go a step further and eliminate even the box? Instead of maintaining an operating system and building your own containers, why not just let the cloud provider handle all the admin? Developers write the code, the functions in other words, and deploy them in the cloud provider’s system for future execution.

This is a similar idea to how programmers write code in the first place. They don’t write one long program that does everything. They break the program into smaller pieces of functionality. Each piece is a function call. For example, if I need a program to read some data, sort it, and then write it to disk, I don’t write one program to do all three tasks. I write three functions—read, sort, and write—and then use the main program to call each function in turn. With serverless functions, this takes a basic programming technique and moves it to the cloud.
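The read-sort-write example can be sketched in a few lines of Python; the file paths and function names here are illustrative, not any particular provider’s API.

```python
import json

def read_data(path):
    """Read a list of records from a JSON file on disk."""
    with open(path) as f:
        return json.load(f)

def sort_data(records):
    """Sort the records in ascending order."""
    return sorted(records)

def write_data(records, path):
    """Write the sorted records back to disk."""
    with open(path, "w") as f:
        json.dump(records, f)

def main():
    # The main program just calls each function in turn.
    write_data(sort_data(read_data("input.json")), "output.json")

print(sort_data([3, 1, 2]))  # [1, 2, 3]
```

A serverless platform takes the same decomposition but deploys each function to the cloud provider, which then plays the role of the main program’s caller.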

Most of the major cloud providers have some version of this functionality:

  • Amazon: Lambda Functions
  • Google: Cloud Functions
  • Microsoft: Azure Functions

One limitation though is that you wouldn't want to use serverless functions in applications that need to preserve state. What I mean is that if I need to keep track of all the changes to the data in my read-sort-write program, serverless functions are probably not the tool for this. These things are designed to be ephemeral. They start up, do a task, and then disappear. That is their beauty. For the Unix greybeards out there, they are similar to the Unix system daemons in that way except that the system call isn't to the operating system, it’s to a cloud provider somewhere.
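As a sketch, here is what one of these ephemeral functions might look like. The handler-taking-an-event shape follows the common AWS-Lambda-style convention, but the details are illustrative, and locally you can exercise the handler just by calling it.

```python
# A minimal serverless-style handler sketch. Each invocation starts fresh:
# anything not persisted to external storage (a database, an object store)
# is gone when the function exits, which is why these functions are a poor
# fit for applications that must preserve state between calls.

def handler(event, context=None):
    """Sort the numbers passed in the event and return the result."""
    numbers = event.get("numbers", [])
    return {"sorted": sorted(numbers)}

print(handler({"numbers": [3, 1, 2]}))  # {'sorted': [1, 2, 3]}
```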

The trajectory of client-server architecture. 

Since the 1980s, the options presented to CIOs for client-server architecture have rapidly evolved. 

1980s: First client-server solutions

  • one big iron server
  • one operating system
  • one application

1990s: Virtual machines in the data center

  • one big iron server
  • multiple virtual operating systems
  • one application per operating system

Mid-2000s: IaaS/PaaS

  • one virtual machine in the cloud
  • multiple virtual operating systems
  • one application per operating system in the cloud

2010s: Containers

  • one virtual machine in the cloud
  • one virtual operating system
  • multiple hermetically sealed containers running one application each

(Note: You can run containers in operating systems in the data center too.)

Mid-2010s: Serverless functions

  • no virtual machine in the cloud
  • no virtual operating system
  • multiple functions stored in the cloud for future use

How do you secure these things? 

Up to this point, I have only been talking about the CIO. Let’s bring the CISO into it. I have tried to make the case that containers and serverless functions aren’t simply the most recent DevOps technique. Since they exist as code on the internet, they represent an attack surface that wasn’t as visible when the applications were just stored in the datacenter. Common sense dictates that you have two areas to think about: code at rest and zero trust.

When I say “code at rest,” I am saying that all the things you did to write secure code before you used containers and serverless functions still apply here. For containers, even though you are working in a hermetically sealed software box protected from the rest of the operating system, those software libraries, binaries, essential operating system pieces, and configuration files that are in the box still have to be patched from time to time.

The bigger bang for your buck though will be to limit the number of computers, people, and networks that can access your containers and serverless functions to only the bare minimum to provide the service. This is zero trust for client-server architecture. It wouldn't hurt to have some kind of always-on monitoring service for your container and serverless function environments just to ensure you know exactly what entities are accessing this new infrastructure.
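The deny-by-default idea can be sketched in a few lines of Python; the caller identities and the logging here are hypothetical placeholders for whatever identity and monitoring systems you actually run.

```python
# Zero-trust sketch: deny by default, allow only an explicit, minimal set
# of callers, and record every decision for always-on monitoring.
# The identities below are hypothetical placeholders.

ALLOWED_CALLERS = {"billing-service", "report-generator"}

access_log = []  # every access decision gets recorded for review

def authorize(caller_id):
    """Permit only callers on the explicit allowlist; deny everyone else."""
    allowed = caller_id in ALLOWED_CALLERS
    access_log.append((caller_id, "allowed" if allowed else "denied"))
    return allowed

print(authorize("billing-service"))     # True
print(authorize("random-workstation"))  # False
```

The monitoring half is just as important as the allowlist: the log is what tells you exactly which entities are touching your container and serverless infrastructure.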

DevOps is the beneficiary. 

As I said, containers and serverless functions are the latest evolutionary step in the digital world’s pursuit of client-server architecture. The big beneficiaries of this innovation are the DevOps and DevSecOps movements. Infrastructure as code is in all of our futures, whether you are pursuing it at a hundred miles an hour today or still just thinking about it for some future date. Regardless, the world is moving in that direction. Because virtual machines, containers, and serverless functions are all just software programs most likely running on somebody else’s computers, they have been the tools of choice for the early adopters trying to automate their infrastructure. They will be yours too once you decide to move down the DevOps path.

Container Timeline

1963–1972: First Virtual Machine

IBM designs and builds the first commercial mainframe to support virtualization, running CP/CMS (Control Program/Cambridge Monitor System).

1969: Telnet

Telnet is developed for the ARPANET, allowing users to connect to remote machines and run applications.

1979: chroot

The chroot system call is introduced in Unix V7, advancing the idea of process isolation by segregating file access for each process.

1980s: Client Server

Client-server systems begin to emerge in the United States in the early 1980s as computing transitions from large mainframes to distributed processing using multiple workstations or personal computers. Corporations quickly adopt client-server systems, which become the backbones of their office automation and communication infrastructure.

1983: rlogin, rsh and rcp

BSD Unix 4.2 releases rlogin, rsh, and rcp for remote login, remote command execution, and remote file copy.

1987–1994: SoftPC

Insignia Solutions demonstrates SoftPC, a software emulator that allows Unix workstations to run DOS applications. By 1989, the company releases a Mac version. By 1994, it sells SoftPC packaged with Windows and OS/2 pre-loaded.

1994–1996: Java

Java allows developers to write an application once, then run the application on any computer with the Java Runtime Environment (JRE) installed. 

1995: SSH

Finland native Tatu Ylönen creates SSH in response to a password-sniffing attack at his university. 

1997: Virtual PC

Connectix releases Virtual PC, which allows Mac users to run Windows.

1999: VMware Workstation

VMware releases VMware Workstation, allowing users to run operating systems such as Linux in virtual machines on Windows machines.

2000: FreeBSD Jails

The introduction of FreeBSD jails achieves a clear-cut separation between a provider’s services and those of its customers. Administrators can partition a FreeBSD system into several independent, smaller systems—called “jails”—assigning an IP address and configuration to each. Linux adds VServer in 2001. Solaris adds containers in 2004.

2001: ESX Server and GSX Server

VMware releases GSX Server, which runs virtual machines on top of an existing operating system (a Type-2 hypervisor), and ESX Server, which does not require a host operating system to run virtual machines (a Type-1 hypervisor).

2006: Google’s Process Containers

Google designs Process Containers for limiting, accounting, and isolating resource usage (e.g., CPU, memory, disk I/O, network) of a collection of processes, renamed “Control Groups (cgroups)” in 2007 and eventually merged to Linux kernel 2.6.24.

2008: Linux Containers (LXC)

Linux introduces LXC, the first complete container implementation, built on cgroups and Linux namespaces.

2011: Warden

Cloud Foundry introduces Warden, which uses LXC in its early stages and later replaces it with its own implementation. Warden can isolate environments on any operating system, running as a daemon and providing an API for container management. It uses a client-server model to manage a collection of containers across multiple hosts and includes a service to manage cgroups, namespaces, and the process life cycle.

2013: LMCTFY

Let Me Contain That For You (LMCTFY) kicks off in 2013 as an open source version of Google's container stack, providing Linux application containers. Applications can be made “container aware,” creating and managing their own subcontainers. Active development of LMCTFY stops in 2015 after Google starts contributing core LMCTFY concepts to libcontainer, which is now part of the Open Container Initiative.

2013: Docker

When Docker emerges in 2013, containers explode in popularity. Docker also uses LXC in its initial stages and later replaces that container manager with its own library, libcontainer, building an entire ecosystem for container management.

2016: Container Security Weaknesses Revealed

Vulnerabilities like Dirty COW demonstrate container security weaknesses.

2018: Kubernetes Becomes the Gold Standard

Kubernetes is used for most enterprise container projects. 

2019: Docker loses steam, Cloud container management

New runtime engines start replacing the Docker runtime engine, most notably containerd, an open source container runtime, and CRI-O, a lightweight runtime for Kubernetes. Docker Enterprise is acquired and split off, and Docker Swarm is put on a two-year end-of-life horizon. The rkt container engine, while officially still part of the CNCF stable, declines in popularity.

VMware, IBM, Google, Amazon, and Microsoft provide solutions to manage cloud and on-prem containers.

Reading List

“5 ways to secure your containers,” by Steven Vaughan-Nichols, CEO, Vaughan-Nichols & Associates, 23 April 2019.

“8 technologies that will disrupt business in 2020,” by Paul Heltzel, CIO, 26 August 2019.

“A Brief History of Containers: From the 1970s Till Now,” by Rani Osnat, Aqua, 10 January 2020.

“A brief history of SSH and remote access,” by Jeff Geerling, an excerpt from Chapter 11: Server Security and Ansible, in Ansible for DevOps, 15 April 2014.

“Amazon Launches Lambda, An Event-Driven Compute Service,” by Ron Miller, TechCrunch, 13 November 2014.

“Application Container Security Guide: NIST Special Publication 800-190,” by Murugiah Souppaya, John Morello, and Karen Scarfone, NIST, September 2017.

“Container Explainer,” IDG.TV, 19 August 2015.

“Container Network Security - Kubernetes Network Policies in Action with Cilium (Cloud Native),” by Fernando, GitLab, 16 July 2020.

“Container Security,” by Snyk.

“Google has quietly launched its answer to AWS Lambda,” by Jordan Novet, VentureBeat, 9 February 2016.

“Historical Computers in Japan: Unix Servers,” IPSJ Computer Museum.

“M.C. Escher Collection,” Maurits Cornelis (MC) Escher (1898–1972).

“Serverless Architectures,” by Martin Fowler, martinfowler.com, 22 May 2018.

“Serverless vs Microservices — Which Architecture to Choose in 2020?” TechMagic, 1 July 2020.

“The Benefits of Containers,” by Ben Corrie, VMware, 16 May 2017.

“The essential guide to software containers for application development,” by David Linthicum, Chief Cloud Strategy Officer, Deloitte Consulting.

“The Invention of the Virtual Machine,” by Sean Concroy, IDKRTM, 25 January 2018.

“What are containers and why do you need them?” by Paul Rubens, CIO, 27 June 2017.

“What even is a container: namespaces and cgroups,” by Julia Evans, Julia Evans Blog.

“What is a Container?” by Ben Corrie, VMware, 16 May 2017.

“What is a Container?” by VMware.