CSO Perspectives (Pro) 2.13.23
Ep 99 | 2.13.23

Prior research on cybersecurity first principles.

Transcript

Rick Howard: For the past three years, I've been going on and on about cybersecurity first principle thinking as if the concept was somehow unique, like it was cut out of virgin cloth and walked down from Mount Olympus by Athena herself, the goddess of wisdom, courage, inspiration and the arts. Well, that just isn't true. In the cybersecurity thought leadership space, there have been plenty of big-brain researchers, from the early days until the present, trying to find the edges of just what exactly cybersecurity is. As Sir Isaac Newton said, if I have seen further than others, it is by standing on the shoulders of giants. I figured it was time to give some of those amazing scientists some recognition and, in the process, document the evolution of the security community's thinking on the subject.

Steve Winterfeld: Oh, my God. Here we go - more history. 

Rick Howard: That was Steve Winterfeld, the Akamai advisory CISO, regular visitor here at the CyberWire Hash Table and - what can I say? - the Al Borland to my Rick the Tool Man. And yes, Al, we're doing more history. So hold onto your butts. 

(SOUNDBITE OF FILM, "JURASSIC PARK") 

Samuel L. Jackson: (As Arnold) Hold onto your butts. 

Rick Howard: My name is Rick Howard, and I'm broadcasting from the CyberWire's Secret Sanctum Sanctorum Studios, located underwater somewhere along the Patapsco River near Baltimore Harbor, Md., in the good old U.S. of A. And you're listening to "CSO Perspectives," my podcast about the ideas, strategies and technologies that senior security executives wrestle with on a daily basis. 

Rick Howard: I've said in the past that I study InfoSec history because I'm a student of the cybersecurity game. I want to understand the trends in order to make my own assessments of their validity, to see what went right and what went wrong, to determine why some ideas worked, why some failed and why some may have just been ahead of their time. I want to learn from the failures of others so that I don't have to repeat those mistakes myself. I want to steal the best ideas from the giants that came before me so that I can benefit from their wisdom. From my viewpoint, I can't understand the current state of the InfoSec community unless I have some understanding of what has happened in the past. 

Rick Howard: When I think about our relatively short 50-plus years of InfoSec history, I can make the case that it roughly coalesces around four phases. Phase One is the mainframe years, like 1960 through about 1981. Phase Two is the personal computer years, say 1981 to, like, 1995. Phase Three was the internet years, 1995 to 2006. And Phase Four was the cloud, like 2006 to present. And with all the discussion about AI and ChatGPT this year, we might be moving into a new phase, the AI phase. But it's too soon to tell. These phases are not a perfect representation of the history, but each one represents a major change that disrupted how people use computers and consequently changed how security practitioners thought about securing those computers. 

(SOUNDBITE OF THE BERLIN PHILHARMONIC ORCHESTRA PERFORMANCE OF STRAUSS' "ALSO SPRACH ZARATHUSTRA") 

Rick Howard: You're listening to the opening of "Also Sprach Zarathustra," composed by Richard Strauss - the title theme from the soundtrack of the 1968 science fiction film "2001: A Space Odyssey," a soundtrack that also features music by Gyorgy Ligeti and Johann Strauss II. And I'm using that theme to kick off the first phase of research around the ideas of cybersecurity first principles. 

Rick Howard: In the modern world, the computer era started in earnest in this first phase, when the mainframe computer became useful to governments, universities and the commercial world. It took about a decade, though, before the mainframe community realized that it might have a computer security problem, and it started with the U.S. military. Willis Ware's paper, "Security Controls for Computer Systems," published in 1970, when Ware was working for the RAND Corporation, started the process. His paper is not so much a definition of cybersecurity or a statement about cybersecurity first principles as it is a listing and description of all the ways computers were going to be a problem in the future when they started sharing resources across networks. I would put this in the category of - the first step in solving any problem is recognizing that you have a problem. 

Rick Howard: The paper hints at the idea that the security community needs to determine how to build secure systems. That means designing a computer architecture that is mathematically proven to be impenetrable. That idea would be the focus of researchers through the 1990s. In the Cybersecurity Canon Hall of Fame book "A Vulnerable System: The History of Information Security in the Computer Age," published in 2021, the author, Andrew Stewart, laments the fact that since the beginning of the digital age, nobody has been able to build this secure system. And that's true. Today, that idea has largely been abandoned. 

(SOUNDBITE OF SONG, "AMERICAN PIE") 

Don McLean: (Singing) A long, long time ago, I can still remember how that music used to make me smile. And I knew if I had... 

Rick Howard: That is the warm and rich vibrato of Don McLean singing his classic 1971 hit "American Pie," which sets the mood for a slew of papers that came out in the 1970s and 1980s that tried to find the edges of what cybersecurity really meant. The first on our list is the paper "Computer Security Technology Planning Study," published by James Anderson for the U.S. Air Force in 1972. It feels like a continuation of thought from the Willis Ware paper. It's an early expression, maybe the first expression, of the idea that security shouldn't be added on after the system is built, something that security professionals still talk about today when you hear them discuss shifting left or security by design. It mirrors the idea that building a secure system is the ultimate goal but proposes that any secure system will require a way to monitor it for defects and intrusions. 

Rick Howard: The next year, in 1973, David Bell and Len LaPadula, then working for MITRE, published their paper called "Secure Computer Systems: Mathematical Foundations." In it, they provide the mathematical proof that would guarantee that a computer system is secure. Unfortunately, they admit up front that even if you could design a system that adhered to their proof, there's no way for system builders to guarantee that they implemented everything correctly. Theoretically, you could do it, but practically, how would you vouch for its security? This is the problem that plagued this kind of research for 20 years. 

Rick Howard: In 1975, Jerome Saltzer and Michael Schroeder published their paper, "The Protection of Information in Computer Systems," in the Proceedings of the IEEE. Jen Jenn, a former CSO now working as an AWS principal and a regular guest here at the CyberWire Hash Table, told me in a LinkedIn conversation last year, 2022, that the Saltzer and Schroeder paper may be the first paper to describe the CIA triad. They didn't call it that, but they referred to three types of violation: unauthorized information release (confidentiality), unauthorized information modification (integrity) and unauthorized denial of use (availability). What's interesting is that Saltzer and Schroeder, as well as other researchers during this period, talk about elements of the CIA triad, but they never grouped them together as a coalesced concept. These early papers refer to those elements as things you might do and should do in a checklist or things that can go wrong if you don't do them. But they never lump all three characteristics into one cybersecurity first principle as in, if you just get these three things done, then you'll solve cybersecurity. 

Rick Howard: Saltzer and Schroeder also likely make the first case that user-ID-password combinations are a weak form of authentication and that two-factor authentication would be required. But we as a community didn't really listen to them until many years later. Further, they might be the first to champion reducing complexity in all things related to security design, and to insist that whatever the design becomes, it shouldn't be hidden in secrecy. In other words, this may be the first public record of researchers making the argument against security through obscurity. Finally, they promote an idea called failsafe defaults - deny everything first and allow by exception. This idea is possibly the first inkling of perimeter defense, building an outer barrier to the network that could control access. This was about a decade before we had the technology to do it. We call that technology firewalls today, and we started seeing them in the marketplace about 20 years after Saltzer and Schroeder published this paper. 
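Rick Howard: If you want to see the shape of that failsafe-defaults idea, here's a minimal sketch in Python - and to be clear, every rule and name in it is a hypothetical illustration, not anything from the Saltzer and Schroeder paper - of an access check that fails closed: deny everything unless an explicit allow rule matches. 

# A minimal sketch of the "failsafe defaults" idea: deny everything
# first and allow by exception. All rules here are hypothetical.

ALLOW_RULES = [
    # (source address prefix, destination port)
    ("10.1.2.", 443),  # the internal subnet may reach the HTTPS service
    ("10.1.2.", 25),   # ... and the mail service
]

def is_allowed(source_ip: str, dest_port: int) -> bool:
    # There is no deny list to maintain. Anything we forgot to
    # enumerate is denied by default, so mistakes fail closed
    # instead of failing open.
    return any(
        source_ip.startswith(prefix) and dest_port == port
        for prefix, port in ALLOW_RULES
    )

print(is_allowed("10.1.2.7", 443))     # True: explicitly allowed
print(is_allowed("203.0.113.9", 443))  # False: denied by default

Rick Howard: Notice that the interesting design choice is the code that isn't there: nobody has to anticipate every bad guy, only enumerate the good guys. 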

Rick Howard: The U.S. Department of Defense published "Trusted Computer System Evaluation Criteria" in 1985, more commonly known as the Orange Book. This was an effort to establish standards for how secure a computer system should be, depending on the classification level of the data that will reside on the system. The standards have changed over time, but the idea is still in practice today, not just by the U.S. government, but by many governments worldwide when they have to think about security requirements for various levels of security classification like in the U.S. - top secret, secret, confidential and sensitive but unclassified, or SBU. 

Rick Howard: After the break, we'll move to phase three of our infosec history, the internet years. Come right back. 

(SOUNDBITE OF SONG, "SMELLS LIKE TEEN SPIRIT") 

Nirvana: (Singing) Hello, hello, hello. With the lights out, it's less dangerous. Here we are now. Entertain us. 

Rick Howard: That is the anguished vocal performance of Kurt Cobain singing "Smells Like Teen Spirit" with his band Nirvana, released in 1991, which brings us to Dr. Fred Cohen and the papers he published in the early 1990s. He was the first to describe defense in depth as a common cybersecurity model that the network defender community was following. About seven years ago, I discovered his papers, but I couldn't tell if he originated the idea of defense in depth. He referred to the concept in the papers, but he didn't really take credit for it. So I called him on the phone and asked him. I said, hey, Fred, are you the guy who invented defense in depth? And he said that, no, he didn't invent the idea, but he was probably the first one to document it in a research paper. So there you go. Let's give it to Fred. Since scientific tradition typically gives credit to the person who publishes first, Fred is our man. 

Rick Howard: Defense in depth is the idea that network architects would erect an electronic barrier that sits between the internet and an organization's digital assets. In order to get on the inside of the barrier from the internet, you had to go through a control point, usually a firewall, but sometimes in the early days, you did it with a router. From the 1990s until today, the common practice has been to add additional control tools behind the firewall to provide more granular functions. In the early days, we added intrusion detection systems and antivirus systems. All of those tools together form something called the security stack, and the idea was that if one of the tools in the stack failed to block an adversary, then the next tool in line would succeed. If that one failed, then the next one would take over. That's defense in depth. 
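Rick Howard: And if it helps to see that chaining logic spelled out, here's a minimal Python sketch - the three checks are hypothetical placeholders, not any vendor's actual product or API - where each tool in the stack gets a chance to block an event, and anything that slips past one tool falls through to the next. 

# A minimal sketch of the defense-in-depth security stack: each
# control inspects the event in turn, and if any one of them blocks
# it, the chain stops there. All checks are hypothetical stand-ins.

def firewall(event):
    # Block everything except traffic to the allowed ports.
    return event["dest_port"] not in (25, 443)

def intrusion_detection(event):
    # Block traffic from known bad-guy IP addresses.
    return event["source_ip"] in {"198.51.100.66"}

def antivirus(event):
    # Block payloads whose hashes match known malicious code.
    return event["payload_hash"] in {"e3b0c44298fc1c14"}

SECURITY_STACK = [firewall, intrusion_detection, antivirus]

def defense_in_depth(event) -> bool:
    # Returns True if any layer blocks the event. If every layer
    # fails, the adversary gets through - the model's weakness.
    for tool in SECURITY_STACK:
        if tool(event):
            print(f"blocked by {tool.__name__}")
            return True
    return False

# The firewall passes this event (port 443 is allowed), but the
# intrusion detection layer catches the known-bad source address.
defense_in_depth({"dest_port": 443,
                  "source_ip": "198.51.100.66",
                  "payload_hash": "abc123"})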

Rick Howard: If you asked cybersecurity practitioners today to describe their security model, many would say they follow the defense-in-depth model. But you may have noticed that I don't list defense in depth as one of my first-principle strategies. 

(SOUNDBITE OF ARCHIVED RECORDING) 

Tim Allen: Oh, no. 

(LAUGHTER) 

Rick Howard: It's not even a tactic we've talked about because when you say that you are strategically trying to reduce the probability of material impact to your organization due to a cyber event and you say that you're deploying a defense-in-depth strategy, how does that work exactly? In the old days, say the late 1990s, we were just trying to block bad technical things from happening. Call it cyber hygiene. We would block all network traffic and only allow by exception with our firewall. We would disallow even more network traffic based on known bad-guy IP addresses with our intrusion detection systems. And we would block malicious code with our antivirus systems. Three systems - defense in depth. As time went on, we added more and more tools to the stack. Back then, cyber hygiene was our first and best guess about how to stop malicious behavior coming onto our networks. I used it back in the day, like everybody else, because we didn't have an alternative. But today, defense in depth doesn't have anything to say about the more modern notions of probability risk reduction and materiality. Defense in depth is an idea whose time has come to an end. 

(SOUNDBITE OF ARCHIVED RECORDING) 

Tim Allen: Oh, no. 

(LAUGHTER) 

Unidentified Person: Rick, can we cut that? I don't think we should have said that one. And based on that comment, I want to make sure everybody knows to send your comments and complaints to csop@thecyberwire.com. 

Rick Howard: In 1998, Donn Parker published his book "Fighting Computer Crime: A New Framework for Protecting Information," where he strongly condemns the elements of the CIA triad as inadequate. He also never mentions the phrase CIA triad, by the way, just the three elements - confidentiality, integrity and availability. However, he does propose making the list more complete by adding three elements: possession or control, authenticity and utility. Instead of a triad, a hexad. That eventually became known as the Parkerian Hexad. 

(SOUNDBITE OF ARCHIVED RECORDING) 

Tim Allen: Oh, yeah. 

(LAUGHTER) 

Rick Howard: Now, that is a great name. But the idea never really caught on for reasons probably only a marketing expert could explain. It's unclear when it happened or even who is responsible. But sometime after the Parker book, the infosec community started referring to the three elements - confidentiality, integrity and availability - as a single concept, an inextricably linked triad. During this period, though, most security practitioners spent time improving the security stack in one form or another. As cloud environments emerged around 2006, the number of digital environments we had to protect exploded. 

(SOUNDBITE OF ARCHIVED RECORDING) 

Tim Allen: Oh, no. 

(LAUGHTER) 

Rick Howard: Organizations started storing and processing data in multiple locations - what you've all heard me call data islands: traditional data centers, mobile devices, cloud environments and SaaS applications. The security stack idea became more abstract. It wasn't one set of tools physically deployed behind the firewall any longer. It was now a series of security stacks deployed for each data island. The security stack became the set of all tools deployed that improved the organization's defensive posture regardless of where they were located - defense in depth applied abstractly to all of the environments. Most of the research in this period focused on improving CIA triad capability by building better tools for the security stack - like application firewalls, identity and access management systems, XDR and the like - and better models for stopping adversary activity, like John Kindervag's "No More Chewy Centers" zero trust paper in 2010, Lockheed Martin's intrusion kill chain model also in 2010, the U.S. Department of Defense's diamond model in 2011, and the MITRE ATT&CK framework in 2013. 

(SOUNDBITE OF SONG, "LEVITATING") 

Dua Lipa: (Singing) If you want to run away with me, I know a galaxy, and I can take you for a ride. I had a premonition... 

Rick Howard: That soulful and dynamic voice is Dua Lipa singing her 2020 hit "Levitating," which means we have reached the modern era. Sometime in the 2010s, the academic community started some preliminary thinking about how to apply the first-principle idea to cybersecurity. The State University of New York at Buffalo's Charles Arbutina and Sarbani Banerjee tied what they called foundational propositions to the U.S. National Security Agency's, or the NSA's, checklist for what makes up a secure system. But their work assumes, without any discussion, that building a secure system is the absolute cybersecurity first principle. It's the right idea, pursuing cybersecurity first principles, but it's not atomic enough. It doesn't get to what the actual first principle is. Some of their proposed tasks, like domain separation, process isolation and information hiding, might be and should be used as tactics, but the authors don't illustrate exactly what it is they're trying to do. They don't get to the essence of the problem. 

Rick Howard: In 2017, Dr. Matthew Hale, Dr. Robin Gandhi and Dr. Briana Morrison covered similar ground using the NSA checklist in their "Introduction to Cybersecurity First Principles," designed for K-through-12 students. And in 2021, Dr. John Sands, Susan Sands and Jaime Mahoney from Brookdale Community College - my wife's alma mater, by the way - covered the same material in more detail but, again, don't offer any argument about why these are first principles, just that they are. Shouhuai Xu published his paper, "The Cybersecurity Dynamics Way of Thinking and Landscape," at the Seventh ACM Workshop on Moving Target Defense in 2020. Xu proposes three axes: first-principle modeling and analysis (assumption driven), data analytics (experiment driven) and metrics (application and semantics driven). But again, there's no discussion of why his first principles are elemental. 

Rick Howard: Nicholas Seeley published his master's thesis at the University of Idaho in 2021, "Finding the Beginning to Discover the End: Power System Protection as a Means to Find the First Principles of Cybersecurity." Out of all the papers reviewed here, this is the most complete in terms of first-principle thinking. In his thesis, Seeley reviews most of the papers I've talked about on this show before drawing any conclusions, and he makes the case that the main ideas that emerge from those papers revolve around the issue of trust. He then questions whether or not the idea of trust is fundamental enough to be a first principle. He quotes James Coleman, whose book "Foundations of Social Theory" says that situations that involve trust are a subset of situations that involve risk - or, as Seeley says, without risk, there is no need for trust. Seeley says that risk is a function of probability, a measure of uncertainty. He believes that uncertainty is more fundamental than the CIA triad or any of the other analytical checklists that the previous authors came up with. 

Rick Howard: Interestingly, the father of decision analysis theory, Dr. Ron Howard - no relation - says the same thing in "The Foundations of Decision Analysis Revisited." Seeley takes an idea from Niklas Luhmann's book "Trust and Power" - the edition edited by King and Morgner - that trust allows us to reduce complexity in our lives. He then proposes a set of assumptions, in the style of Euclid from back in the day, that form his set of cybersecurity first principles. No. 1, complete knowledge of a system is unobtainable; therefore, uncertainty will always exist in our understanding of that system. No. 2, the principal of a system must invest trust in one or more agents. No. 3, known risk can be mitigated using controls, transference and avoidance; else, the risk must be accepted. And finally, No. 4, unknown risk manifests through complexity. I was so taken with Seeley's thought process about first principles that I called him on the phone to ask him about it. He is currently the vice president of Infrastructure Defense at Schweitzer Engineering Laboratories. And I asked him about his Euclid-like postulates. 

Nicholas Seeley: OK, so there's the Euclid, and that's purely a deductive logic, right? 

Rick Howard: Right. Right. 

Nicholas Seeley: So he's creating a framework, and within that framework, he is articulating all of these things that are definitively true, right? I mean, they are truth because they are encompassed within a particular framework. When we talk about first principles, the deductive side of logic doesn't necessarily hold because the framework within which something is created basically presupposes that as truth, right? So the framework itself isn't the first principle, but there's something underneath it that presupposes all of that. But what that means is, like, you can prove something, right? Like, you can formally... 

Rick Howard: Right. 

Nicholas Seeley: This to this to this - dah (ph) - QED, right? I mean, it's - there it is. Like, you've proved it... 

Rick Howard: For 23 centuries, by the way. You know, Euclid's... 

Nicholas Seeley: (Laughter) Yeah. 

Rick Howard: You know, it looks like it's been working pretty well. Yeah (laughter). 

Nicholas Seeley: Right. Yeah, exactly. Exactly. And then when we get on the more inductive side of things - right? - it's all empirical. I mean, it's just so - it's observation, right? So we've got all of these observations that basically told us that if you spin a magnet inside a coil of wire, like, you're going to produce a voltage, and here's how that works. Does that mean that that's going to work tomorrow? Like, philosophically, no, it doesn't. Like, there's this huge, like, kind of philosophical chasm that says just because something works today does not - that it's too much of a philosophical leap to say that it will definitely work tomorrow. But it does work, and I'm going to rely on it, right? My cellphone's probably going to work in the morning, right? My computer's going to work. 

Rick Howard: Yeah. 

Nicholas Seeley: So it doesn't - the philosophy side of things doesn't necessarily preclude us from using things. It does put us back into this idea where we have to have these inductive statements to get to first principles, right? And it's like, it really is a statement that is the foundation. The thing that really got me thinking was that if you go back to the very, very first papers that are written on information security or computer security - and I'm talking like, you know, in the mid-'60s, there was the RAND report. 

Rick Howard: Yeah. 

Nicholas Seeley: A little bit later there was James Anderson, I think... 

Rick Howard: Anderson. 

Nicholas Seeley: ...Wrote some work for The Anderson Report. 

Rick Howard: Yeah. 

Nicholas Seeley: Willis Ware was part of - the writer at the RAND Report. Anyways, these guys spent - and I'm not - and please don't take this as me thinking they did something wrong or weren't rigorous enough. But really, there's - the statement of the problem is, like, four paragraphs in a hundred-page report, right? 

Rick Howard: Yeah. 

Nicholas Seeley: And I was thinking - I'm not a huge fan of, like, you know, inspirational quotes, but I do like the one by Einstein that says something along the lines of, like, if I have to solve a - if I'm given an hour to solve a problem, I'm going to spend 45 minutes understanding - or 55 minutes understanding what the problem is and then five minutes solving it. And it just seemed like it was completely backwards, right? Everybody jumped into a solution without really understanding - at least in my opinion, without really understanding what the actual problem was. 

Rick Howard: The actual and probably apocryphal quote from Einstein is, "if I had an hour to solve a problem and my life depended on the solution, I would use the first 55 minutes determining the proper question to ask, for once I know the proper question, I could solve the problem in less than five minutes." Isn't that the perfect description of what a first principle is? In my conversation with Nicholas, I asked him about his paper's conclusion. He basically stopped short of identifying the absolute cybersecurity first principle, and then he uses his axioms to design a better proof than Bell and LaPadula's for deciding whether one system design is more secure than another, using something he calls eigenvalue analysis of the associated graphs. Now, I never got to eigenvalues in my linear algebra class in college, so I'm going to defer to Nicholas here. But instead of identifying the absolute cybersecurity first principle, he went back to the traditional well of trying to design secure systems. In an email exchange later, he said that was a fair synopsis and that he wished he had said something along the lines of, we are probably not going to prove our way to better security. That said, if you're looking for some fantastic analysis of cybersecurity first principles, check out Nicholas' paper, his master's thesis. The link is in the show notes. It's amazing. 

(SOUNDBITE OF SONG, "AMERICAN PIE") 

Don McLean: (Singing) So bye, bye Miss American Pie. Drove my Chevy to the levy, but the levy was dry. 

Rick Howard: I started thinking and writing about cybersecurity first principles as early as 2016. My thoughts weren't fully formed yet. But even then, I knew that the security practitioner community was going in the wrong direction. We had somehow chosen, in a groupthink kind of way, that securing individual systems with mathematical proofs was the way to go or, minus that, that a simple checklist in the form of the CIA triad was going to suffice in this very complex world. It didn't matter that, even when most of us were following those best practices, the number of breaches reported in the public square alone continued to grow. As Parker suggested, I knew that the CIA triad wasn't elemental enough. And as Bell and LaPadula said in their original 1973 report, the theoretical proof that a system is secure is possible, but in a practical, real-world scenario, it might be impossible to build one. As Seeley said, even the giants in the field back in the day couldn't figure that out. Besides, in hindsight, it was the wrong focus. We didn't need to protect individual computer systems as a first principle. We might do that for special niche cases as a tactic, where the security requirements are extreme, like government classified systems. But for the rest of us, the normal use cases, we needed to prevent material impact to our organizations. 

Rick Howard: For the past three years, this entire podcast series has been a discussion of what cybersecurity first principles are and how we might achieve them. If you're still listening to this stuff after all this time, you at least don't think I'm completely crazy. You possibly think that there might be some merit to it all. Thank you for that. The one downside to this is that we publish this content in pieces scattered across the CyberWire's website. Well, if you're one of those people who like to have all of this material in a tight little container - like, I don't know, a book - well, your prayers have been answered. The folks at the CyberWire have spent the last six months compiling that very book. It's called "Cybersecurity First Principles: A Reboot of Strategy and Tactics," and we're rolling it out in time for the big RSA Conference shindig in April of this year. You can pre-order your copy now at Amazon. The link is below in the show notes. Or just go to the Amazon web page and search for the title. You can't miss it. And if you're traveling to the great state of California for the conference this year, I will be signing copies at the bookstore. I would love to see you there. 

Rick Howard: And that's a wrap. Next week, we're going to get down in the mud and stop talking about theory and start talking about practicality. We're going to bring executive experts to the CyberWire Hash Table to discuss what they're actually using to pursue zero-trust. You don't want to miss that. The CyberWire's "CSO Perspectives" is edited by John Petrik and executive produced by Peter Kilpe. Our theme song is by Blue Dot Sessions, remixed by the insanely talented Elliott Peltzman, who also does the show's mixing, sound design and original score. And I am Rick Howard. Thanks for listening.