Prior research on cybersecurity first principles.
Feb 13, 2023

CSO Perspectives is a weekly column and podcast where Rick Howard discusses the ideas, strategies and technologies that senior cybersecurity executives wrestle with on a daily basis.



For the past three years, I have been going on and on about cybersecurity first-principle thinking as if the concept was somehow unique; as if it were cut from virgin cloth and walked down from Mount Olympus by Athena herself, the goddess of wisdom, courage, inspiration, and the arts. Well, that just isn’t true. In the cybersecurity thought-leadership space, plenty of big-brained researchers, from the early days until the present, have been trying to find the edges of what, exactly, cybersecurity is. As Sir Isaac Newton said, “If I have seen further than others, it is by standing on the shoulders of giants.” I figured it was time to give some of those amazing scientists some recognition and, in the process, document the evolution of the security community’s thinking on the subject.

The four (maybe five) phases of infosec history.

I have said in the past that I study infosec history because I’m a student of the cybersecurity game. I want to understand the trends in order to make my own assessments of their validity: to see what went right and what went wrong, to determine why some ideas worked, why some failed, and why some may have just been ahead of their time. I want to learn from the failures of others so that I don’t have to repeat those mistakes myself. I want to steal the best ideas from the giants that came before me so that I can benefit from their wisdom. From my viewpoint, I can’t understand the current state of the infosec community unless I have some understanding of what has happened in the past. 

When I think about our relatively short 50+ year infosec history, I can make the case that it roughly coalesces around four phases: 

Phase 1: The mainframe (1960 - 1981)

Phase 2: The personal computer (1981 - 1995)

Phase 3: The Internet (1995 - 2006)

Phase 4: The Cloud (2006 - Present)

And, with all the discussion about AI and ChatGPT this year, we might be moving into a new phase, the AI phase, but it’s too soon to tell.

These phases are not a perfect representation of the history, but each one represents a major change that disrupted how people used computers and, consequently, changed how security practitioners thought about securing those computers.

The beginning.

In the modern world, the computer era started in earnest in Phase 1, when the mainframe computer became useful to governments, universities, and the commercial world. It took about a decade before the mainframe community realized that they might have a computer security problem, and it started with the U.S. military. Willis Ware’s paper “Security Controls For Computer Systems,” published in 1970 when Ware was working for the Rand Corporation, started the process.1 His paper is not so much a definition of cybersecurity or a statement about cybersecurity first principles as it is a listing and description of all the ways computers were going to be a problem in the future when they started sharing resources across networks. I would put this in the category of, “the first step in solving any problem is recognizing that you have a problem.” The paper hints at the idea that the security community needs to determine how to build secure systems. That means designing a computer architecture that is mathematically proven to be impenetrable. That idea would be the focus of researchers through the 1990s. In the Cybersecurity Canon Hall of Fame book, "A Vulnerable System: The History of Information Security in the Computer Age," published in 2021, the author, Andrew Stewart, laments the fact that since the beginning of the digital age, nobody has been able to build a secure system.2 That’s true. Today, that idea has largely been abandoned.

The 70s and 80s.

The paper, “Computer Security Technology Planning Study,” published by James Anderson for the U.S. Air Force in 1972, feels like a continuation of thought from the Willis Ware paper.3 It’s an early expression, maybe the first, of the idea that security shouldn’t be added on after the system is built, something security professionals still talk about today when they discuss shifting left or security-by-design. It mirrors the idea that building a secure system is the ultimate goal but proposes that any secure system will require a way to monitor that system for defects and intrusions.

The next year, David Bell and Len LaPadula, then working for MITRE, published their paper called “Secure Computer Systems: Mathematical Foundations.”4 In it, they provide a mathematical proof that would guarantee that a computer system is secure. Unfortunately, they admit up front that even if you could build a system that adhered to the proof, there was no way for system builders to guarantee that they had implemented everything correctly. Theoretically, you could do it, but practically, how would you vouch for its security? That is the problem that plagued this kind of research for 20 years.

In 1975, Jerome Saltzer and Michael Schroeder published their paper, "The Protection of Information in Computer Systems," in the Proceedings of the IEEE.5 Jen Reed, a former CISO who now works as a principal at AWS and is a regular guest here at the CyberWire Hash Table, told me in a LinkedIn conversation last year (2022) that Saltzer and Schroeder’s paper may be the first paper to describe the CIA triad. They didn’t call it that, but they refer to three types of invasion: unauthorized information release (confidentiality), unauthorized information modification (integrity), and unauthorized denial of use (availability). 

What’s interesting is that Saltzer and Schroeder, as well as other researchers during this period, talk about the elements of the CIA Triad but they never group them together as a coalesced concept. These early papers refer to those elements as things you might do and should do in a checklist, or things that can go wrong if you don’t do them. But they never lump all three characteristics into one cybersecurity first principle as in, if you just get these three things done, then you will have solved cybersecurity.

Saltzer and Schroeder also likely make the first case that userid/password combinations are a weak form of authentication and that two-factor authentication will be required. Further, they might be the first to champion reducing complexity in all things related to security design and, whatever the design becomes, not hiding it in secrecy. In other words, this may be the first public record of researchers arguing against security through obscurity. Finally, they promote an idea called “fail-safe defaults”: deny everything first and allow by exception. This idea is possibly the first inkling of perimeter defense, building an outer barrier to the network that could control access. That was about a decade before we had the technology to do it; we call that technology “firewalls.” 
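Saltzer and Schroeder’s fail-safe-defaults principle is easy to sketch in code. The rule set, user names, and request format below are hypothetical, invented purely for illustration; the point is only the shape of the logic: refuse everything that is not explicitly allowed.

```python
# Minimal sketch of "fail-safe defaults": deny every request unless an
# explicit rule allows it. The allowlist entries here are made up.
ALLOW_RULES = {
    ("alice", "payroll.db", "read"),
    ("bob", "webserver.log", "read"),
}

def is_allowed(user: str, resource: str, action: str) -> bool:
    """Default deny: a request passes only if it matches an allow rule."""
    return (user, resource, action) in ALLOW_RULES

# Anything not explicitly granted is refused.
print(is_allowed("alice", "payroll.db", "read"))    # True
print(is_allowed("alice", "payroll.db", "write"))   # False: no rule grants it
print(is_allowed("mallory", "payroll.db", "read"))  # False: unknown principal
```

The design choice worth noticing is that a forgotten rule fails closed (access denied) rather than open, which is the whole point of the principle.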

In 1983, the U.S. Department of Defense published “Trusted Computer System Evaluation Criteria,” more commonly known as the Orange Book. This was an effort to establish standards for how secure a computer system should be depending on the classification level of the data that would reside on the system. The standards have changed over time, but the idea is still in practice today, not just by the U.S. government but by many governments worldwide, when they think about security requirements for various levels of security classification like Top Secret, Secret, Confidential, and Sensitive But Unclassified (SBU).

Defense in depth.

Dr. Fred Cohen published the first papers in the early 1990s that used defense-in-depth to describe a common cybersecurity model in the network defender community.6 7 Back in 2016, I discovered his papers but couldn’t tell if he originated the idea of defense-in-depth. He referred to the concept but didn’t take credit for it. So, I called him on the phone and asked him. I said “Fred, are you the guy that invented defense in depth?” He said that, no, he didn’t invent the idea but he was probably the first one to document it in a research paper.8 So, there you go. Let’s give it to Fred. Since scientific tradition typically gives credit to the person who publishes first, Fred is our man. 

Defense-in-depth is the idea that network architects would erect an electronic barrier that sits between the internet and an organization's digital assets. In order to get on the inside of the barrier from the internet, you had to go through a control point (usually a firewall, though in the early days sometimes a router). From the 1990s until today, the common practice has been to add additional control tools behind the firewall to provide more granular functions. In the early days, we added intrusion-detection systems and anti-virus systems. All of those tools together formed something called the security stack, and the idea was that if one of the tools in the stack failed to block an adversary, then the next tool in line would. If that one failed, then the next would take over. That’s defense-in-depth. If you asked cybersecurity practitioners today to describe their security model, many would say they follow the defense-in-depth model. 
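The security stack described above can be modeled as a chain of checks, where traffic is admitted only if every layer passes it. This is a toy sketch, not any real product’s logic; the firewall, IDS, and anti-virus layers are stand-ins with made-up block lists.

```python
# Toy model of defense-in-depth: each layer in the security stack gets a
# chance to block traffic; a packet is admitted only if every layer passes it.
BLOCKED_PORTS = {23, 445}              # hypothetical firewall policy
KNOWN_BAD_IPS = {"203.0.113.9"}        # hypothetical IDS blocklist
MALWARE_SIGNATURES = {b"EVIL"}         # hypothetical AV signature set

def firewall(pkt):   return pkt["port"] not in BLOCKED_PORTS
def ids(pkt):        return pkt["src_ip"] not in KNOWN_BAD_IPS
def antivirus(pkt):  return not any(sig in pkt["payload"] for sig in MALWARE_SIGNATURES)

SECURITY_STACK = [firewall, ids, antivirus]

def admit(pkt) -> bool:
    # all() short-circuits: the first layer that blocks the packet ends the chain.
    return all(layer(pkt) for layer in SECURITY_STACK)

pkt_ok  = {"src_ip": "198.51.100.7", "port": 443, "payload": b"hello"}
pkt_bad = {"src_ip": "203.0.113.9",  "port": 443, "payload": b"hello"}
print(admit(pkt_ok))   # True: passes every layer
print(admit(pkt_bad))  # False: the IDS layer blocks it
```

The sketch also shows the model’s weakness as discussed below: each layer blocks a known-bad technical thing, but nothing in the chain says anything about probability or materiality.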

But you will notice that I don’t have defense-in-depth as one of my first principle strategies. When you say that you are strategically trying to reduce the probability of material impact to your organization due to a cyber event, and you say that you are deploying a defense-in-depth strategy, how does that work exactly? In the old days (say the late 1990s), we were just trying to block bad technical things from happening. Call it cyber hygiene. We would block all network traffic and only allow by exception with our firewall. We would disallow even more network traffic based on known bad guy IP addresses with our intrusion detection systems. And we would block malicious code with our anti-virus systems; three systems, defense in depth. As time went on, we added more and more tools to the stack.

Back then, cyber hygiene was our first and best guess about how to stop malicious behavior coming onto our networks. I used it back in the day like everybody else because we didn’t have an alternative. But today, defense-in-depth doesn't have anything to say about the more modern notions of probability and materiality. Defense-in-depth is an idea whose time has come to an end.

The CIA triad.

In 1998, Donn Parker published his book “Fighting Computer Crime: A New Framework for Protecting Information,” where he strongly condemns the elements of the CIA Triad as inadequate, though he never actually uses the phrase “CIA Triad.”9 He proposed adding three other elements (possession or control, authenticity, and utility) that eventually became known as the Parkerian Hexad, but the idea never really caught on for reasons probably only a marketing expert could explain.

It’s unclear when it happened, or who was responsible, but sometime after the Parker book, the infosec community started referring to the three elements (confidentiality, integrity, availability) as a single concept, an inextricably linked triad. During this period, most security practitioners spent their time improving the security stack in one form or another. 

As cloud environments emerged around 2006, however, the number of digital environments we had to protect exploded. Organizations started storing and processing data in multiple locations that I like to call data islands (traditional data centers, mobile devices, cloud environments, and SaaS applications). The security stack idea became more abstract. It wasn’t one set of tools physically deployed behind the firewall any longer. It was now a series of security stacks, one deployed for each data island. The security stack became the set of all deployed tools that improved the organization’s defensive posture, regardless of where they were located: defense-in-depth applied abstractly to all of the environments. 

Most of the research in this period focused on improving CIA Triad capability by building better tools for the security stack (like application firewalls, identity and access management systems, XDR, etc.) and better models for stopping adversary activity: Kindervag’s zero trust “No More Chewy Centers” paper (2010),10 Lockheed Martin’s intrusion kill chain model (also 2010),11 the U.S. Department of Defense’s Diamond model (2011),12 and the MITRE ATT&CK framework (2013).13

Modern day.

About the same time, the academic community started some preliminary thinking about how to apply the first principle idea to cybersecurity. The State University of New York at Buffalo’s Charles Arbutina and Sarbani Banerjee tied what they called “foundational propositions” to the U.S. National Security Agency (NSA) checklist of what makes up a secure system.14 But their work assumes, without any discussion, that building a secure system is the absolute cybersecurity first principle. It’s the right idea, pursuing cybersecurity first principles, but it’s not atomic enough: it doesn’t get to what the actual first principle is. Some of their proposed tasks, like domain separation, process isolation, and information hiding, might, and should, be used as tactics, but the authors don’t illustrate exactly what it is they are trying to do. They don’t get to the essence of the problem. 

In 2017, Dr. Matthew Hale, Dr. Robin Gandhi, and Dr. Briana Morrison covered similar ground using the NSA checklist in their “Introduction to Cybersecurity First Principles,” designed for K-12 students.15 And in 2021, Dr. John Sands, Susan Sands, and Jaime Mahoney, from Brookdale Community College, covered the same material in more detail but again offered no argument about why these are first principles, just that they are.16 

Shouhuai Xu published his paper, "The Cybersecurity Dynamics Way of Thinking and Landscape," at the 7th ACM Workshop on Moving Target Defense in 2020.17 Xu proposes three axes: first-principles modeling (assumption driven), data analytics (experiment driven), and metrics (application and semantics driven). But again, there is no discussion of why his first principles are elemental.

Nicholas Seeley published his master’s thesis at the University of Idaho in 2021: “Finding the Beginning to Discover the End: Power System Protection as a Means to Find the First Principles of Cybersecurity.”18 Of all the papers reviewed here, this is the most complete in terms of first-principle thinking. In his thesis, Seeley reviewed most of the papers I’ve listed in this essay before drawing any conclusions, and makes the case that the main ideas that emerge from those papers revolve around the issue of trust. He then questions whether the idea of trust is fundamental enough to be a first principle. He quotes James Coleman’s book, “Foundations of Social Theory,”19 which says that “situations that involve trust are a subset of situations that involve risk.” Or, as Seeley says, “without risk there is no need for trust.” Seeley says that risk is a function of probability, a measure of uncertainty. He believes that uncertainty is more fundamental than the CIA Triad or any of the other analytical checklists that the previous authors came up with. Interestingly, the father of decision analysis theory, Dr. Ron Howard, says the same thing in his book, “The Foundations of Decision Analysis Revisited.”20

Seeley takes an idea from the Luhmann/King/Morgner book, “Trust and Power,” that trust allows us to reduce complexity in our lives.21 He then proposes a set of assumptions (postulates or axioms in the style of Euclid) that form his set of cybersecurity first principles.

1. Complete knowledge of a system is unobtainable; therefore, uncertainty will always exist in our understanding of that system.

2. The principal of a system must invest trust in one or more agents.

3. Known risks can be mitigated using controls, transference, and avoidance, else the risks must be accepted.

4. Unknown risks manifest through complexity.

But then he stops short of identifying the absolute cybersecurity first principle. Instead, he uses his axioms to design a better proof than Bell and LaPadula’s, using eigenvalue analysis of the associated graphs to decide whether one system design is more secure than another. In other words, he went back to the traditional well of trying to design secure systems. 
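To give a flavor of what eigenvalue analysis of a system graph can look like (this is an illustration in the spirit of Seeley's approach, not his actual method), one common move is to compare the spectral radius, the largest eigenvalue magnitude of each design's adjacency matrix, as a rough proxy for how quickly compromise can propagate through the trust relationships. The two designs below are hypothetical.

```python
# Compare two hypothetical system designs by the spectral radius of their
# trust graphs. A larger radius means denser connectivity, i.e. more paths
# along which a compromise can spread.
import numpy as np

def spectral_radius(adjacency) -> float:
    """Largest eigenvalue magnitude of the adjacency matrix."""
    return max(abs(np.linalg.eigvals(np.array(adjacency, dtype=float))))

# Design A: fully meshed, every component trusts every other (complete graph K4).
design_a = [[0, 1, 1, 1],
            [1, 0, 1, 1],
            [1, 1, 0, 1],
            [1, 1, 1, 0]]

# Design B: all trust routed through one broker component (star graph).
design_b = [[0, 1, 1, 1],
            [1, 0, 0, 0],
            [1, 0, 0, 0],
            [1, 0, 0, 0]]

print(round(spectral_radius(design_a), 3))  # 3.0   (K4: radius n - 1)
print(round(spectral_radius(design_b), 3))  # 1.732 (star: radius sqrt(3))
```

By this one metric the star design looks "more secure" than the mesh, which also hints at the limitation Seeley runs into: the analysis ranks designs against each other but still says nothing about the absolute first principle.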

The beginning of cybersecurity first principles.

I started thinking and writing about cybersecurity first principles as early as 2016. My thoughts weren’t fully formed yet, but even then, I knew that the security practitioner community was going in the wrong direction. We had somehow decided, in a groupthink kind of way, that securing individual systems with the CIA Triad was the way to go. And yet, while most were following that best practice, the number of publicly reported breaches continued to grow. As Parker suggested, the CIA Triad wasn’t elemental enough. The elements were good tactics, but they didn’t represent the essence of what we were trying to do.

And I knew it was probably hopeless to design a secure computer system, even with Seeley’s eigenvalue analysis. Even the giants in the field couldn’t figure that out in the 1970s and 1980s. Besides, in hindsight, it was the wrong focus. We didn’t need to protect individual computer systems as a first principle. We might do that as a tactic for special niche cases where the security requirements are extreme (like government classified systems). But for the rest of the normal use cases, we needed to prevent material impact to our organizations. 

For the past three years, this entire essay and podcast series has been a discussion of what cybersecurity first principles are and how we might achieve them. If you’re still reading this stuff after all this time, you at least don’t think I'm completely crazy. You possibly think that there might be some merit to it all. Thank you for that.

The one downside to this is that we publish this content in pieces scattered across the CyberWire’s website. Well, if you’re one of those people who likes to have all of this material in a tight little container, like, I don’t know, a book, your prayers have been answered. The folks at the CyberWire have spent the last six months compiling that very book. It’s called “Cybersecurity First Principles: A Reboot of Strategy and Tactics,”22 and we are rolling it out in time for the big RSA Conference shindig in April of this year. You can pre-order your copy now at Amazon. The link is below. Or just go to the Amazon web page and search for the title. You can’t miss it. And, if you’re traveling to the great state of California for the conference this year, I will be signing copies at the bookstore. I would love to see you there.


1 Ware, W.H., 1970. Security Controls for Computer Systems (U): Report of Defense Science Board Task Force on Computer Security. The Rand Corporation. 

2 Stewart, A.J., 2021. A Vulnerable System: The History of Information Security in the Computer Age. Cornell University Press. 

3 Anderson, J.P., 1972. Computer Security Technology Planning Study (Volume I). Electronics System Division 1. 

4 Bell, D., LaPadula, L., 1973. Secure Computer Systems: Mathematical Foundations. Mitre. 

5 Saltzer, J., Schroeder, M., 1975. The Protection of Information in Computer Systems. Proceedings of the IEEE 63, 1278–1308. 

6 Cohen, F., 1989. Models of practical defenses against computer viruses. Computers & Security 8, 149–160. 

7 Cohen, F., 1992. Defense-in-depth against computer viruses. Computers and Security 11, 563–579. 

8 Cohen, F., 2016. Defense in Depth phone conversation with Rick Howard. 

9 Parker, D.B., 1998. Fighting Computer Crime: A New Framework for Protecting Information. Wiley.

10 Kindervag, J., 2010. No More Chewy Centers: Introducing The Zero Trust Model Of Information Security. Forrester.

11 Hutchins, E., 2010. Intelligence-Driven Computer Network Defense Informed by Analysis of Adversary Campaigns and Intrusion Kill Chains. Lockheed Martin. 

12 Caltagirone, S., Pendergast, A., Betz, C., 2011. The Diamond Model of Intrusion Analysis. Center for Cyber Threat Intelligence and Threat Research. 

13 Strom, B., Applebaum, A., Miller, D., Nickel, K., Pennington, A., Thomas, C., 2020. MITRE ATT&CK: Design and Philosophy. Mitre. 

14 Banerjee, S., Arbutina, C., n.d. Cybersecurity First Principles. 

15 Hale, M., 2017. Introduction to Cybersecurity First Principles · nebraska-gencyber-modules [WWW Document]. Nebraska-Gencyber-Modules. URL (accessed 10.29.22). 

16 Sands, J., Sands, S., Mahoney, J., n.d. Cybersecurity Principles [WWW Document]. NCyTE, WA. URL 

17 Xu, S., 2020. The Cybersecurity Dynamics Way of Thinking and Landscape, in: Proceedings of the 7th ACM Workshop on Moving Target Defense. ACM, New York, NY, USA. 

18 Seeley, N., 2021. Finding the Beginning to Discover the End: Power System Protection as a Means to Find the First Principles of Cybersecurity (Degree of Master of Science). University of Idaho. 

19  Coleman, J.S., 1994. Foundations of Social Theory [Book]. URL (accessed 2.7.23).

20 Howard, R.A., Abbas, A.E., 2015. Foundations of Decision Analysis [WWW Document]. URL (accessed 2.7.23).

21 Luhmann, N., 2018. Trust and Power. John Wiley & Sons.

22 Howard, R., April 2023. Cybersecurity First Principles: A Reboot of Strategy and Tactics [Book]. URL (accessed 2.7.23). 

  Billings, R., 2018. Another favorite quote: “Standing on shoulders” - Newton [WWW Document]. Acellus Learning System. URL (accessed 2.8.23).