History of Infosec: a primer.
By Rick Howard
Apr 25, 2022

CSO Perspectives is a weekly column and podcast where Rick Howard discusses the ideas, strategies and technologies that senior cybersecurity executives wrestle with on a daily basis.


"We study history not to be clever in another time, but to be wise always." ―Marcus Tullius Cicero

Cicero was a famous Roman statesman and orator, a contemporary of Julius Caesar, Pompey, Marc Antony and Octavian. His writings on classical rhetoric and philosophy influenced the great thinkers of the Renaissance and Enlightenment many years later. And he’s absolutely right about history. 

I don’t study infosec history so that I can win at Nerd-Trivial-Pursuit tournaments at security conferences. I study it so that I can understand the day-by-day changes going on in the industry. I believe you can’t understand the current state of the infosec community unless you have some understanding of what has happened in the past. For example, you can’t really have any detailed understanding of what’s going on, and what’s not going on, in the Ukraine war in cyberspace without a background on Russian cyber operations from the beginning:

  • 1988: The Russians sponsored the first ever public cyber espionage campaign, using East German hacker mercenaries to target U.S. government agencies. The campaign was made famous by Dr. Clifford Stoll’s paper “Stalking the Wily Hacker” and his subsequent book, “The Cuckoo's Egg: Tracking a Spy Through the Maze of Computer Espionage.”
  • 1991: The collapse of the Soviet Union and the subsequent liberation of Ukraine.
  • 1996: Moonlight Maze: A series of Russian probes and attacks against the Pentagon, NASA (the National Aeronautics and Space Administration), and affiliated academic and laboratory facilities. 
  • 2007: Russia launched DDoS attacks against Estonia. 
  • 2008: Russia launched cyber attacks against the country of Georgia and penetrated the Pentagon’s classified networks. 
  • 2013: General Valery Gerasimov, the Chief of the General Staff of the Russian Federation, articulated what became known as the unofficial Gerasimov Doctrine, which advocates attacking asymmetric targets (physical and virtual critical infrastructure, including outer space) across the entire spectrum of conflict.  
  • 2014: Russia annexed Crimea and attempted to interfere with Ukraine’s presidential election.
  • 2014: The U.S. discovered that Russian cyber forces had penetrated the electrical grid, the White House, and the State Department.
  • 2015: Russia stole classified NSA documents, reportedly by leveraging Kaspersky antivirus software, and also penetrated the German lower house of parliament.
  • 2015: Russians attacked Ukraine’s power grid and caused the country’s first cyber-induced power outage.  
  • 2016: Russians penetrated Secretary Clinton’s campaign networks, and offensive cyber weapons stolen from the NSA’s TAO (Tailored Access Operations) office were released to the public via the Shadow Brokers.
  • 2016: Russians hit Ukraine’s power grid a second time and penetrated the World Anti-Doping Agency.
  • 2017: Russia launched the NotPetya attack against Ukraine, which destroyed pieces of the country’s critical infrastructure and caused collateral damage to organizations around the world.
  • 2017: Russians penetrated the Wolf Creek Nuclear Operating Corporation, Petro Rabigh (a Saudi petrochemical plant and oil refinery), and the German Defense Ministry.

Without knowing that history, pundits have expressed confusion that the Russians haven't launched (as of this writing) some massively crippling cyber attack against Ukraine along the same lines as the 2017 NotPetya attacks. But as you look at their history of conducting espionage, influence operations, and low-level cyber conflict, you can see that the Russians are absolutely following their own unofficial Gerasimov Doctrine across the spectrum of physical and virtual attacks. Offensive cyber was never going to be the only lever the Russian war-fighting machine would pull. It was most likely only going to support the main, physical, tank offensive. That might change if the Ukrainians keep having success and the West pushes Russia further into a corner with sanctions. President Biden and CISA (the Cybersecurity and Infrastructure Security Agency) have warned us to be ready for that. So far though, we aren’t there yet.  

As an aside, I do get a thrill when discovering how cybersecurity things are connected to the nerd community. Like, I just learned that Eric Corley founded the famous hacker magazine “2600” in 1984 and chose as his hacker name Emmanuel Goldstein, the shadowy resistance leader in George Orwell’s novel “1984.” Now that’s some cyber-nerd-trivia-symmetry that I can get behind. And, for the bonus round, the name of the magazine is a reference to the 2600-hertz tone that formerly controlled AT&T’s long-distance switching system. John Draper and other phone phreakers in the late 1960s and early 1970s became famous for using toy whistles found in Cap’n Crunch cereal boxes, and other homemade devices, that emitted a tone at that exact frequency. Playing it into a handset could seize control of an AT&T trunk line and allowed phreakers to make free phone calls.
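
For fun, it’s easy to hear what the phone company’s switches were listening for. Here is a minimal sketch in Python (the sample rate and duration are my own arbitrary choices, purely for illustration) that synthesizes the 2600 Hz sine tone:

```python
import math

SAMPLE_RATE = 8000   # samples per second; telephone-grade audio
FREQ_HZ = 2600       # the in-band supervision tone AT&T trunks listened for
DURATION_S = 1.0

def tone_samples(freq_hz=FREQ_HZ, duration_s=DURATION_S, sample_rate=SAMPLE_RATE):
    """Return one channel of a pure sine tone as floats in [-1.0, 1.0]."""
    n = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate)
            for i in range(n)]

samples = tone_samples()  # feed these to any audio library to hear the tone
```

Modern phone networks moved call supervision out of band decades ago, so today the tone is only a piece of history.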

OK, that’s kind of cool and makes my nerd meter peg. But, it's not the reason I study infosec history. Really. 

How to think about infosec history

When I think about our relatively short 50-year infosec history, I can make the case that it roughly coalesces around four phases: 

  • Phase 1: The mainframe (1960 - 1981)
  • Phase 2: The personal computer (1981 - 1995)
  • Phase 3: The Internet (1995 - 2006)
  • Phase 4: The Cloud (2006 - Present)

It’s not a perfect representation, but each phase represents a major disruption in how people used computers and, consequently, in how security practitioners thought about securing them.

As we look at the history, certain recurring elements show up at each point.

  • Adversary Playbook Names: Code names assigned to hacker attack sequences across the intrusion kill chain that researchers have noticed repeatedly in the wild, like BlackByte, an infamous ransomware group.
  • Entities: Government, commercial, and academic organizations that instigated some new idea, program, or research, like how Gartner coined the term CASB (Cloud Access Security Broker) in 2011 for security technology that protects SaaS applications.
  • Firsts: The initial time something happens, like when Citicorp hired Steve Katz in 1995 to be the first ever Chief Information Security Officer (CISO).
  • Papers and Books: Written research that invented new things like how Dr. Dorothy Denning published her paper, "An Intrusion Detection Model," in 1986 leading the way for the first commercial Intrusion Detection tools. 
  • People: The humans behind the great infosec ideas like how Dr. Fred Cohen published the first papers in the early 1990s that used Defense-in-Depth to describe a common cyber defense architecture model.
  • Law: The legislation that governments passed to control activity in cyberspace like the European Parliament’s General Data Protection Regulation (GDPR), a legal framework that requires businesses to protect the personal data and privacy of European Union (EU) citizens.
  • Technologies: A term of art referring to an application of knowledge for practical ends, like passwords or two-factor authentication. 
  • Tools: A hardware or software device that accomplishes some cybersecurity function or functions, like a firewall.
  • Strategy and Tactics: Strategy is the action plan that takes you where you want to go, like zero trust, and tactics are the individual steps that will get you there, like identity and access management (IAM) systems.

Phase 1: The mainframe (1960 - 1981)


In this early phase, as you would expect, there were a number of firsts that launched the infosec community. During the early 1960s, one of computer science’s founding fathers, Dr. Fernando Corbató (who invented time sharing, among his many other accomplishments), introduced the idea of using passwords to keep users on the same mainframe out of each other’s files and to limit each user’s time. (The initial max was four hours.) Who knew that this first security tool would still be the most prominent means of identification and authentication fifty years later? 

In 1969, the internet came to life as UCLA and the Stanford Research Institute established the first internet connection over a telephone line. This little fact wouldn’t really affect the security community until much later, probably the 1990s, but this was the start that would change everything forever. As Andrew Blum said in “Tubes: A Journey to the Center of the Internet,” “The internet took in its first breath.”

In 1978, Gary Thuerk, a marketing manager, sent the first unsolicited bulk email (spam) to roughly 400 prospects via ARPANET (the Advanced Research Projects Agency Network), a forerunner to the modern internet, and reaped $13 million in sales for his company. So, we have him to blame for that. 

In the mid-1970s, Wulf, Cohen, Corwin, Jones, Levin, Pierson, and Pollack introduced the idea of virtual machines (virtual sandboxes) for their Carnegie Mellon University Hydra system. This idea would eventually turn into cloud computing 30 years later. 

And we also have Ward Christensen and Randy Suess to thank for the first dial-up bulletin board system. They built it in Chicago during the blizzard of 1978 because they wanted a way to keep up with their computer club without having to gather in person. The bulletin board systems that sprang up afterward in the 1980s are where many of the first hackers learned their craft before the internet.


The aforementioned John Draper was active during this phase. The phone phreaker movement that he was a part of became instrumental in establishing the early hacker culture.


Dr. Willis Ware published "The Ware Report" to the Defense Science Board for ARPA (the Advanced Research Projects Agency) in 1967. This eventually led to the first formal penetration testing efforts in the U.S. government and to the publication of the "Rainbow Series," the first formal documents describing what is required in computer security.

James P. Anderson, in a 1972 report to the Electronic Systems Division of the U.S. Air Force, outlined a series of definitive steps that tiger teams (the first penetration testers) could take to test systems for their ability to prevent computer compromise. Two years later, the U.S. Air Force conducted probably the first penetration test of its Multics operating system.

By the end of this phase (1980), our guy, James Anderson, published his second influential paper, "Computer Security Threat Monitoring and Surveillance," some of the first research on intrusion detection.

Phase 2: The personal computer (1981 - 1995)


This second phase kicked off a series of first-time events that have greatly impacted the security community ever since. It all started with the release of the IBM Personal Computer in 1981. Other companies, like Apple and Tandy, were making personal computers at the same time, but IBM had the marketing and distribution clout to convince the public to buy these new machines for their homes. Computers were no longer isolated to government and academic researchers working on giant mainframes in underground bunkers somewhere. Now anybody with some extra cash lying around could have one in their living room. This led to a lot of hackers tinkering around with what they could do.

When I was a second lieutenant stationed at Fort Polk, Louisiana, I wandered up to the brigade headquarters to deliver some paperwork one morning. I noticed that the colonel’s admin, Abigail Thibodeaux, a Cajun native who some claimed had been working at Fort Polk all the way back to when General Patton trained there, was busily cranking out memos on her Wang word processor. But over in the corner of the office, unplugged, was a brand new IBM Personal Computer, fully loaded. This machine probably cost the Army over $3,000 back in the 1980s. When I had the temerity to suggest to Miss Abigail that she should try to learn the new machine, she about took my head off (colonels’ admins had back then, and still have, the power of three-star generals). She told me that she had no use for that newfangled contraption. The bottom line is that, after I made a coffee run for her, I convinced her and the colonel to let me take the PC home for a while to learn how to use it. So, I had that going for me.

In 1983, Steve Capps created the first fuzzer program (although he didn’t coin the term) by repurposing another tool called “The Monkey,” which let a Macintosh computer demo itself by playing back recorded actions, to generate random mouse clicks and keyboard input in order to test the MacWrite and MacPaint applications. Slinging random input into software programs to see what would crash them gave hackers a place to start looking for exploit possibilities. Think of it like throwing radar signals into the air looking for stealthy fighter planes. The signals that bounce back give clues as to where the planes are. It’s the same idea with fuzzing software. In 1988, University of Wisconsin professor Barton Miller coined the term “fuzz” in a class project that led to his paper “An Empirical Study of the Reliability of UNIX Utilities.” When I ran a commercial cyber intelligence service called iDefense back in the 2000s, we ran racks of server farms dedicated to fuzzing software.
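
The core loop of a fuzzer is almost embarrassingly simple. A minimal sketch in Python (the target function and its planted bug are entirely made up for illustration): hurl random byte strings at a program and keep every input that crashes it.

```python
import random

def toy_target(data: bytes) -> int:
    """A stand-in 'application' with a planted bug (hypothetical):
    it crashes whenever the input starts with a zero byte."""
    if data[0] == 0:
        raise RuntimeError("parser crash")
    return len(data)

def fuzz(target, trials=10_000, max_len=8, seed=42):
    """Throw random byte strings at `target`; collect every input that crashes it."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        # random length 1..max_len-1, random byte values
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, max_len)))
        try:
            target(data)
        except Exception:
            crashes.append(data)  # each crashing input is a lead worth triaging
    return crashes

crashes = fuzz(toy_target)  # the 'radar returns': inputs that bounced back
```

Real fuzzing farms like the ones we ran at iDefense add coverage feedback and input mutation, but the crash-collection loop is the same idea.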

With the number of personal computers in the world escalating, the first computer viruses started to appear. In 1987, Bernd Fix wrote a program to neutralize the Vienna virus, the first documented piece of antivirus software. In that same year, Omni magazine first coined the word “cyberwar” and defined it in terms of giant robots and autonomous weapons.

At the INTEROP conference in 1989, John Romkey created the first Internet of Things (IoT) device: a toaster that could be turned on and off over the internet. We didn’t get the name (Internet of Things) until the next phase (1999), when Kevin Ashton coined the term in a presentation at Procter & Gamble.

Entities

In terms of entities starting big ideas, there were two in this period. Between 1990 and 1991, the Chinese government trained a group of North Korean hackers and gave them the idea they could use the Internet to steal secrets and attack the enemies of their government. 

In 1994, Amazon began work on an e-commerce service called Merchant.com to help third-party merchants like Target or Marks & Spencer build online shopping sites on top of Amazon’s e-commerce engine. This effort eventually led to AWS.

Papers and Books

In 1983, the U.S. government published the first book in the Rainbow Series, “The Orange Book: DoD Trusted Computer System Evaluation Criteria,” which gave the first guidance on how to secure government computers.

I referenced at the top of this essay Eric Corley’s (AKA Emmanuel Goldstein) launch of “2600: The Hacker Quarterly” in 1984, an American magazine (sometimes called “the hacker’s bible”) that discussed the legal, ethical, and technical debates around hacking. In other words, that magazine cultivated early hacker culture.

In 1986, Dr. Dorothy Denning published her paper, "An Intrusion Detection Model," in the proceedings of the Seventh IEEE Symposium on Security and Privacy leading the way for the first commercial Intrusion Detection tools. Her paper is the basis for most of the work in IDS technology that followed.

Dr. Clifford Stoll, while working as a system administrator for the Lawrence Berkeley Lab in California, detected the first ever public cyber espionage campaign, sponsored by Russia, using East German hacker mercenaries, and targeting U.S. government agencies. In 1988, he published “Stalking the Wily Hacker,” documenting his investigation, and the next year published “The Cuckoo's Egg: Tracking a Spy Through the Maze of Computer Espionage,” which covered the same material in more detail. That book influenced many technically oriented people, like myself, to choose cybersecurity as a profession. The U.S. Army had sent me to grad school to become an IT guy when the book came out, and after I read it over a weekend (when I should have been working on my thesis), it changed the trajectory of my military career. Back then, the internet was small and authors put their personal email addresses in their books. When I finished it, I immediately sent a fan-girl note to Dr. Stoll telling him how much I loved the book. He answered me in 15 minutes. I have been a fan ever since. 

Dr. Fred Cohen published the first papers in 1991 and 1992 that used Defense-in-Depth to describe a common cybersecurity model in the network defender industry. That model is still used by many today. I couldn’t find any published research claiming that Dr. Cohen coined the phrase, so, many years ago, I called him on the phone and asked him. Dr. Cohen said that he probably didn’t invent the phrase, but he was most likely the first to use it in a research paper.

In 1993, John Arquilla and David Ronfeldt, working for the RAND Corporation, refined the term cyberwar when they published “Cyberwar Is Coming!”, introducing the idea that cyber attacks could be used in traditional warfare.

The next year, 1994, William Cheswick and Steven Bellovin published “Firewalls and Internet Security: Repelling the Wily Hacker,” the first book on firewalls as a technology. They called it a circuit-level gateway and packet filtering technology. Interestingly, their ideas came from the desire not to keep intruders out of their networks but to keep employees from going to bad places on the internet. Many years later (the late 2000s), I got to meet Bill Cheswick. The NSA had invited me, Bill, and a host of subject matter experts from a diverse set of disciplines to a retreat in New Mexico to see if a cross-pollination effort could help the NSA think differently about cybersecurity. Nothing really came of it that I know of, but Bill was trapped with me on a long bus ride from the airport to the retreat. It was a fabulous conversation.

People

In terms of iconic people in the security community, three got their start in this phase. In 1988, Robert Tappan Morris, then a first-year computer science graduate student at Cornell, created and launched the “Morris Worm” onto the internet, the first worm to cause widespread damage (it affected an estimated 10% of the machines on the existing internet). It also resulted in the first felony conviction in the U.S. under the 1986 Computer Fraud and Abuse Act and prompted DARPA to fund the establishment of the CERT/CC at Carnegie Mellon University. In 1993, Jeff Moss (AKA Dark Tangent) organized the first DEFCON security conference, which caters to the hacker ethos. And finally, in 1994, Vladimir Levin successfully hacked Citibank to the tune of $10 million, making him likely the instigator of the first significant cyber crime.

Law

The U.S. Congress passed two significant pieces of legislation in this period. The first was the aforementioned Computer Fraud and Abuse Act (CFAA) in 1986, an amendment to the first federal computer fraud law that levied harsh penalties on hackers who intentionally access a computer without authorization. The second was the Electronic Communications Privacy Act (ECPA), also passed in 1986, to promote “the privacy expectations of citizens and the legitimate needs of law enforcement.”

Technologies

In terms of new technologies, in 1988, the Kerberos v4 protocol was first publicly described in a USENIX conference paper. Kerberos is a network security protocol that authenticates service requests between two or more trusted hosts across an untrusted network and is the underlying technology in Microsoft’s Active Directory today. In 1993, Tim Howes, Steve Kille, and Wengyik Yeong developed the Lightweight Directory Access Protocol (LDAP), an open, vendor-neutral application protocol for managing authentication access to usernames, passwords, email addresses, printer connections, and other static data within directories. This protocol is also an important piece of Microsoft’s Active Directory today.
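
Kerberos itself is a ticket-based protocol, but the principle underneath it is that two hosts sharing a secret key can prove their identities to each other over an untrusted network without ever sending the key. A toy challenge-response sketch of that shared-secret idea (this is not the Kerberos protocol, just the core principle; the function names are my own):

```python
import hashlib
import hmac
import os

SHARED_KEY = os.urandom(32)  # secret provisioned to both hosts out of band

def prove(key: bytes, challenge: bytes) -> bytes:
    """Answer a challenge with an HMAC only a holder of the key can compute."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    """Check the response using a constant-time comparison."""
    return hmac.compare_digest(prove(key, challenge), response)

# Host A sends a random challenge; host B proves it knows the key.
challenge = os.urandom(16)
response = prove(SHARED_KEY, challenge)
authenticated = verify(SHARED_KEY, challenge, response)  # True
```

Kerberos layers tickets, session keys, and a trusted key distribution center on top of this idea so that every client-service pair doesn’t need its own pre-shared key.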

Tools

Firewalls emerged on the scene during this phase as the security tool to deploy in your defense-in-depth architecture. In 1988, Jeff Mogul, Brian Reid, and Paul Vixie, working for Digital Equipment Corp, conducted the first research on firewall technology with tools like the gatekeeper.dec.com gateway and “screend.” This was the first generation of firewall architectures. Between 1989 and 1990, Dave Presotto and Howard Trickey of AT&T Bell Laboratories pioneered the second generation of firewall architectures with their research in circuit relays. They also implemented the first working model of the third generation of firewall architectures, known as application layer firewalls, but they neither published any papers describing the architecture nor released a product based upon their work.

Between 1990 and 1991, Gene Spafford of Purdue University, Bill Cheswick of AT&T Bell Laboratories, and Marcus Ranum independently researched application layer firewalls, which eventually evolved into next generation firewalls many years later. Ranum's work received the most attention and took the form of bastion hosts running proxy services. In 1992, Digital Equipment Corp shipped DEC SEAL, the first commercial firewall, which included proxies developed by Ranum. In 1994, Check Point Software released the first stateful inspection commercial firewall.
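
To make the generations concrete: a first-generation packet filter is nothing more than stateless rules matched against packet header fields, with no memory of prior packets. A toy sketch (the rule set and field names are invented here for illustration):

```python
# A toy first-generation (stateless) packet filter in the spirit of the
# late-1980s screening work; real filters match on IPs, ports, and protocol.
RULES = [
    {"dst_port": 25, "action": "allow"},   # permit inbound SMTP
    {"dst_port": 23, "action": "deny"},    # block telnet
]
DEFAULT_ACTION = "deny"                    # default-deny posture

def filter_packet(packet: dict, rules=RULES) -> str:
    """Return the action of the first rule whose fields all match the packet."""
    for rule in rules:
        if all(packet.get(field) == value
               for field, value in rule.items() if field != "action"):
            return rule["action"]
    return DEFAULT_ACTION  # no rule matched

verdict = filter_packet({"src_ip": "10.0.0.7", "dst_port": 23})  # "deny"
```

Later generations added exactly what this sketch lacks: connection state (stateful inspection) and visibility into the application payload (proxies and application layer firewalls).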

Phase 3: The Internet (1995 - 2006)

Comparing the history of infosec to the life of a human, the first phase would be the toddler years, just learning about a new environment. The second phase would be the elementary school years, where the human starts learning how to interact with the rules of the world. This third phase would be the teenage years, where the human starts rebelling against all the things the parents did in their generation. 

Firsts

For firsts, we have the internet kicking into high gear with the mainstream adoption of the world wide web around 1995. In the same year, as I mentioned above, Citicorp hired Steve Katz to be the first ever Chief Information Security Officer (CISO). In 1997, U.S. Deputy Secretary of Defense John Hamre, during a congressional hearing, coined the phrase “electronic Pearl Harbor” to describe a calamitous surprise cyberattack designed not just to take out military command-and-control communications but to physically devastate American infrastructure. Government leaders have been concerned with that idea ever since. That same year, the NSA Red Team conducted a no-notice vulnerability assessment and penetration test (code name: Eligible Receiver) of critical government networks, including the DoD’s. The report showed the networks were so poorly protected that leadership quickly classified the results. In 2003, Dave Wichers and Jeff Williams, working for the software consultancy Aspect Security, published an educational piece on the top software security coding issues of the day. That eventually turned into the OWASP (Open Web Application Security Project) Top 10, a reference document describing the most critical security concerns for web applications.

Adversary playbook names

For adversary playbook names, we got the first cool one (Moonlight Maze) in 1998 when the U.S. Defense Information Systems Agency discovered Russian hacker activity directed against the Pentagon, NASA (the National Aeronautics and Space Administration), and some affiliated academic and laboratory facilities. I was the network manager for the Army’s Operations Center in the Pentagon at the time and remember the exact moment when the Warrant Officer from DISA (the Defense Information Systems Agency) knocked on my door and informed me that he would be taking control of my network for the next few hours while he did the investigation. He didn’t find the Russians on my network, but you could tell he was spooked. The hackers stole unclassified information on contracts, general research, military data, troop data, and maps of military installations. 

By 2003, the U.S. Department of Defense discovered the first Chinese cyber espionage operations conducted against military targets. Eventually, the public learned the code name: “Titan Rain.” I was the commander of the Army CERT during that time. Before Titan Rain, we were mostly concerned with low-level cyber crime against Army networks. After, we had to elevate our game to combat nation-state cyber espionage.

Entities

In terms of entities, in 2003, Amazon began building infrastructure-as-code projects internally (the beginnings of DevOps): a set of common infrastructure services everyone could access without reinventing the wheel every time. Amazon business leaders realized that they could build the operating system for the internet from these services. This eventually led to AWS. By 2004, Google followed suit and invented Site Reliability Engineering (SRE), a fight against the manual toil involved in maintaining networks. That same year, VoIP (Voice over IP) service provider BroadVoice introduced the idea of Bring Your Own Device (BYOD) to work. Up to this point, organizations supplied all computing equipment to their employees, so this was a first step in making it OK for employees to use their own computing systems for work. In 2005, Concur became the first company to offer a SaaS cloud platform, and Gartner security analysts Mark Nicolett and Amrit Williams coined the term SIEM (Security Information and Event Management) for an improvement to traditional log collection systems offering long-term storage and combined log analytics with a focus on security events. 

Papers and Books

Researchers published two important papers during this period. In 1996, Aleph One published “Smashing The Stack For Fun And Profit,” the first widely read public document about the practice of using buffer overflow attacks to exploit software. As I mentioned above, I ran a team at iDefense that tried to find zero-day exploits in common software. We had a handful of guys who were really good at fuzzing software to find vulnerabilities and then creating zero-day exploits that leveraged those weaknesses using the buffer overflow technique. I have to say, buffer overflows have always been a mystery to me. I mean, I could explain how buffer overflows work conceptually, but there is some magic involved in getting them to work consistently.  
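
For readers who share that conceptual-but-magic feeling, here is the concept in miniature: a Python simulation of a stack frame (illustration only; a real overflow corrupts machine memory, which Python’s own bounds checking prevents). A local buffer sits next to the saved return address, and a copy routine with no length check, like C’s strcpy(), writes straight past the buffer into it.

```python
BUF_LEN = 8      # the vulnerable local buffer occupies slots 0..7
RET_SLOT = 8     # the saved return address lives just past the buffer

def make_stack():
    """A simulated stack frame: a buffer plus a saved return address."""
    stack = [0] * (BUF_LEN + 4)
    stack[RET_SLOT] = 0x4000   # the legitimate address to return to
    return stack

def unsafe_copy(stack, data):
    """Copy input into the buffer with no bounds check, like strcpy()."""
    for i, word in enumerate(data):
        stack[i] = word        # nothing stops i from passing BUF_LEN...

stack = make_stack()
payload = [0x41] * BUF_LEN + [0x1337]  # filler bytes, then an attacker-chosen address
unsafe_copy(stack, payload)
hijacked_return = stack[RET_SLOT]      # now 0x1337: control flow hijacked
```

The “magic” in real exploits lives in the details this model hides: finding the exact offset to the return address, defeating stack protections, and landing execution in attacker-supplied shellcode.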

In 1999, David Baker, Steven Christey, William Hill, and David Mann, working for MITRE, published "The Development of a Common Enumeration of Vulnerabilities and Exposures," which established the first public Common Vulnerabilities and Exposures (CVE) database. 

People

In terms of deep thinkers and influencers in this phase, in 1999, Qiao Liang and Wang Xiangsui, two Chinese colonels, published “Unrestricted Warfare: China’s Master Plan to Destroy America,” which proposes the strategy of what would eventually be known as asymmetric warfare, designed to level the playing field against U.S. military might. When I was at the Army CERT, we consumed this book looking for clues about how to defend against Titan Rain.

In 2002, Bill Gates turned Microsoft on a dime to implement “Trustworthy Computing.” He shut down Windows development for the first time ever to get a handle on the security issues the products were facing, and that resulted in the Microsoft Security Development Lifecycle (SDL). 

Law

For legislation, the U.S. Congress passed four new laws, and industry agreed to one set of standards, that have shaped the cybersecurity landscape ever since. In 1996, Congress passed the Health Insurance Portability and Accountability Act (HIPAA) to require the adoption of national standards for electronic health care transactions, code sets, and unique health identifiers for providers, health insurance plans, and employers. In 1999, it passed the Gramm-Leach-Bliley Act (GLBA) to protect consumers' personal financial information held by financial institutions. In 2002, Congress passed the Federal Information Security Management Act (FISMA), which requires federal agencies to implement a program to provide security for their information systems. That same year, it passed the Sarbanes-Oxley Act to protect investors and the public by increasing the accuracy and reliability of corporate disclosures, holding companies liable for bad identity and access management. And finally, in 2004, the major payment card brands established the Payment Card Industry Data Security Standard (PCI DSS), the cybersecurity controls and business practices that any company accepting credit card payments must implement.

Tools

For emerging tools, in 1998, the hacktivist group “Cult of the Dead Cow” released the first version of Back Orifice, authored by Sir Dystic, at DEFCON 6 to demonstrate the lack of security in Microsoft's Windows 9x series of operating systems. In 2000, Poul-Henning Kamp introduced “jails,” which allowed administrators to partition a FreeBSD Unix computer system into several independent, smaller systems, each with its own IP address and configuration. This was the next step in virtual machines. In 2002, OASIS, a non-profit standards body, approved the Security Assertion Markup Language (SAML) v1.0 standard, which allows identity providers to pass authorization credentials to service providers. And finally, in 2005, Brad Fitzpatrick developed the first-generation OpenID authentication protocol, which eventually became an authentication layer for OAuth, an open-standard authorization protocol that provides applications the ability for “secure designated access.”

Strategies and Tactics

For new strategies and tactics, two emerged during this phase. In 2000, internet founding father Vint Cerf coined the phrase “cyber hygiene” when he testified before the United States Congress Joint Economic Committee. Infosec practitioners had been executing this best practice for at least a decade prior, but Mr. Cerf gave it a name. 

In 2001, 17 software developers published the “Agile Manifesto,” a rejection of the Waterfall model and an embrace of the idea of producing real, working code as a milestone of progress. This is the start of the Agile software development movement and the precursor to DevOps and DevSecOps.

Phase 4: The Cloud (2006 - Present)


In this phase, the infosec community is transitioning from the teenage years to the young adult years. In terms of firsts, in 2006, Amazon became the first company to offer an IaaS cloud platform (Amazon Elastic Compute Cloud, part of AWS), and the infosec industry started seeing managed identity services for the first time. Between 2008 and 2009, the idea of the Internet of Things (IoT) became real when Cisco reported that more “things” were connected to the internet than people. 

In 2008, Dr. Gary McGraw published the first Building Security In Maturity Model (BSIMM) report, a survey of some 30+ companies that collated initiatives and activities around software security. This wasn't a prescriptive maturity model. It was a survey that captured what companies were actually doing to write secure code. I spoke at an FS-ISAC conference back in the late 2000s. The speaker who preceded me was Dr. McGraw, and thus we were seated together for the conference dinner. After a long discussion well into the night about all manner of things, we realized that our two offices back home were in the same building. Now that is some cosmic kismet. The next week, he walked over with his book, “Software Security: Building Security In,” and gave me a personally signed copy.

But the next year, 2009, Pravir Chandra published the first Software Assurance Maturity Model (SAMM), a prescriptive software security model that gave practitioners a way to measure how well they’re doing against a set of prescribed best practices. Also in 2009, Intel became probably the first commercial company to approve a formal Bring Your Own Device (BYOD) policy when the company realized that many of its employees were already bringing their own devices into work and connecting them to the corporate network. And finally, Robert Gates, President Obama’s secretary of defense, decided, after the Russian penetration of the Pentagon’s classified networks the previous year, to create U.S. Cyber Command. 

Adversary playbook names

In terms of adversary playbook activity, the big five cyber powers (the United States, China, Russia, Iran, and North Korea) all stepped up their game around cyber espionage and continuous low-level cyber conflict operations. In 2007, Russia launched DDOS attacks against Estonia in the first real example of what cyber warfare might look like. The following year, 2008, Russian hackers (AKA Turla, AKA Snake) penetrated the Pentagon’s classified networks. The Pentagon’s cleanup effort (code name: Operation Buckshot Yankee) followed, and the event led to the creation of what has become U.S. Cyber Command. 

Also in 2008, The Chinese People’s Liberation Army (PLA) penetrated Lockheed Martin’s networks and stole the plans related to the F-35, the world’s most sophisticated, and certainly most expensive, fighter jet. 

In 2010, the U.S. and Israeli governments launched “Olympic Games,” the first public cyber attack (Stuxnet) designed to destroy another country’s critical infrastructure; in this case, the Iranian uranium enrichment plant at Natanz. This might be the first public cyber attack to cross over from cyber espionage to cyber warfare, an escalation from the Russian DDOS attacks in Estonia to actually physically sabotaging equipment. 

When I worked at iDefense, we had contracts with several U.S. intelligence agencies. We had done some of the initial reporting on Stuxnet, and one agency in particular asked me to come over and brief its analysts on what we knew. This was several years after I retired from the Army, and I no longer had a government clearance. So, I sat in a room by myself at the agency waiting for my turn to brief (apparently, they had many groups coming in to do the same thing). They had a red bubble police light flashing on the ceiling, indicating to everybody around that I wasn’t cleared. When it came my turn to brief, I walked into a room with about 30 analysts. Throughout the entire hour-long presentation, they didn’t say a thing. I mean, I was cracking jokes left and right (some of my best material) and they all sat stone-faced, I guess in fear that they would give away some state secret. At the end, one brave analyst raised her hand to ask a question. She said, “Who do you think was behind the attack?” I was gobsmacked. After a few seconds, I said, “We think you did it.” 

They never asked me back to brief again.

In 2011, the Chinese People’s Liberation Army hacked RSA and stole the secret seed values underpinning its SecurID token product line, which many organizations used for two-factor authentication. It was one of the first public supply chain attacks and led to the compromise of Lockheed Martin, Northrop Grumman, and L-3. It was also the first time that a pure-play commercial company (not a government contractor) noticed adversary lateral movement as a step in the hacking sequence, a step that had been captured in Lockheed Martin’s intrusion kill chain strategy a year before. 
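SecurID’s exact token algorithm is proprietary, but the open TOTP standard (RFC 6238) illustrates the general idea, and why the theft of seed material was so damaging: each one-time code is derived entirely from a shared secret seed and the current time, so anyone who holds the seed can compute every code a token will ever display. Here is a minimal sketch in Python (the seed value is made up for illustration):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238). A hardware token and the
    server compute the same code because both hold the shared seed."""
    counter = timestamp // step                       # 30-second time window
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Whoever holds the seed can compute every code the token will ever show,
# which is why the theft of RSA's seed records undermined SecurID 2FA.
seed = b"example-seed-not-a-real-one"  # made-up seed for illustration only
assert totp(seed, 1_000_000) == totp(seed, 1_000_000)
```

The server and the attacker who stole the seed are computationally indistinguishable here; that is what forced RSA’s customers to replace their tokens wholesale.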

That same year, responding to Olympic Games, Iranian hackers began DDOSing roughly four dozen American financial institutions, including JPMorgan Chase, Bank of America, Capital One, PNC Bank, and the New York Stock Exchange. The next year, 2012, they crippled Saudi Aramco, the world’s largest oil producer, destroying 30,000 computers and 10,000 servers. And in 2013, they breached the command-and-control system of New York State’s Bowman Avenue Dam, causing ripples of panic in governments everywhere. 

The Bowman Avenue Dam is in Rye, near Long Island Sound. As my editor, John Petrik, says, it's a dinky little flood control dam that’s been described as “designed to keep a babbling brook from babbling and flooding a little league field.” There's a big hydroelectric dam, the Bowman Dam, in Oregon, and some think that was the real target. Others think the hackers really were after the dam in Rye as a proof of concept. Regardless, many government leaders took the intrusion as a warning: even though Iran was a small power, it could do tremendous damage to a nation’s critical infrastructure via cyber attack if it wanted to. 

In 2014, Iranian hackers crippled the Sands Casino in Las Vegas because of negative public comments its owner had made against Iran. Also in 2014, U.S. intelligence agencies discovered that Russian hackers had penetrated the U.S. electrical grid in many locations using malware called “BlackEnergy.”

Back in 2013, Deep Panda (a Chinese hacking group) compromised OPM’s database containing personally identifiable information (PII) on U.S. government clearance holders in what might be the largest and most impactful cyber espionage campaign against any country known to the public. The vast amount of data collected, plus its longevity (over 50 years, since that’s how long it will take for all individuals caught in the net to age out of government service), will be useful for many years to come.

I wrote another complete history of the OPM breach and accompanying podcast. If you want to hear me rant for over 30 minutes on how badly OPM handled that situation, you should check it out. 

Not to be outdone, in 2014, North Korean hackers (Guardians of Peace) crippled Sony because of a movie that depicted the North Korean Great Successor, Respected Comrade General Secretary Kim Jong-un, in an unfavorable light. It marked the first time that a U.S. president, President Obama, confirmed a cyber attribution on national television. In 2016, North Koreans stole $81 million from the Bangladesh Central Bank. This marked the first public discovery of a new trend, nation states using government assets to conduct cyber crime for two reasons: an APT side hustle to fund their nation-state missions, and state-sanctioned organized cyber crime to bring revenue into the country. In 2017, they launched a ransomware attack (code name: WannaCry) using the “EternalBlue” exploit in the attack sequence, a tool that had been stolen from the NSA and made public by the Shadow Brokers hacktivist group. 

A couple of years after the attack, when I was working for Palo Alto Networks, I visited the Sony CISO. In the aftermath, Sony had become a customer, and I was checking in with him. My nerd meter pegged again because, after you walk through the iconic Sony Pictures archway and hang a left, you end up on an outdoor soundstage that replicates New York City in the 1940s. Our guide walked us into the Verdi Square Bar, down some stairs, and there we were, right in the middle of the Sony Pictures IT department. How cool is that?


In terms of government, commercial, and academic organizations, in 2010 the Industrial Control Systems CERT (ICS-CERT) started tracking industrial control system vulnerabilities. That same year, the Internet Engineering Task Force (IETF) released OAuth as an open-standard authorization protocol that describes how unrelated servers and services can safely delegate authenticated access to their assets without actually sharing credentials. 
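That delegation idea can be sketched without OAuth’s wire-level details: the user proves their identity to an authorization server once, the server mints a narrowly scoped token, and the resource server honors the token rather than a password. The class and method names below are illustrative, not part of any OAuth specification; this is a toy sketch of the concept, not the real protocol:

```python
import secrets

class AuthorizationServer:
    """Toy stand-in for an OAuth-style authorization server. The user
    authenticates here once and approves a scoped access token; the
    third-party app never learns the user's password."""

    def __init__(self):
        self._passwords = {}  # username -> password (toy storage)
        self._tokens = {}     # token -> (username, scope set)

    def register(self, user: str, password: str) -> None:
        self._passwords[user] = password

    def grant(self, user: str, password: str, scope: set) -> str:
        """Issue a scoped bearer token after verifying the user."""
        if self._passwords.get(user) != password:
            raise PermissionError("bad credentials")
        token = secrets.token_hex(16)
        self._tokens[token] = (user, scope)
        return token

    def introspect(self, token: str):
        """Return (user, scope) for a valid token, else None."""
        return self._tokens.get(token)

class ResourceServer:
    """Holds the user's data; it accepts tokens, never passwords."""

    def __init__(self, auth: AuthorizationServer, data: dict):
        self.auth, self.data = auth, data

    def read(self, user: str, token: str):
        claim = self.auth.introspect(token)
        if claim is None or claim[0] != user or "read" not in claim[1]:
            raise PermissionError("invalid or insufficiently scoped token")
        return self.data[user]

# Alice lets a printing app read her photos without ever giving the app
# her password; the app only ever holds the narrowly scoped token.
auth = AuthorizationServer()
auth.register("alice", "hunter2")
photos = ResourceServer(auth, {"alice": ["img1.jpg", "img2.jpg"]})
token = auth.grant("alice", "hunter2", {"read"})
assert photos.read("alice", token) == ["img1.jpg", "img2.jpg"]
```

The key property is that the resource server never sees a password at all, and a leaked token exposes only the scope it was granted, not the account itself.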

The Iranian government announced the creation of a cyber corps, its answer to U.S. Cyber Command.

Google publicly announced that it had been hacked by the Chinese government in what came to be known as Operation Aurora. Before that, no commercial company would admit such a breach for fear of the reputational damage it might suffer. After Google’s announcement, and aided by public disclosure laws, more and more companies followed suit. The event also led Google’s site reliability engineers to rebuild the company’s internal network from the ground up using software-defined perimeter and zero trust as their main strategies.

In 2011, Gartner coined the term CASB (cloud access security broker) for security technology that protects SaaS applications. The World Economic Forum began to use the term “resilience” for “… the ability of systems and organizations to withstand cyber events …” Also in 2011, the U.S. Office of Management and Budget (OMB) established the Federal Risk and Authorization Management Program (FedRAMP) to empower federal agencies to use modern cloud technologies while still protecting federal information.

In 2013, the company dotCloud released an open source container management platform called Docker and established a partnership with Red Hat. The idea of containers had been around for a while, but this started the momentum to make them standard practice. That same year, MITRE established the ATT&CK framework, an extension of the intrusion kill chain model that operationalized the Lockheed Martin strategy with adversary tactics, techniques, and procedures.

In 2015, Google released Kubernetes 1.0, an open-source container orchestration system, and gave it to the Cloud Native Computing Foundation (CNCF) to manage. In 2017, Gartner coined the phrase Security Orchestration, Automation and Response (SOAR) for tools that orchestrate the security stack.

In 2018, Palo Alto Networks founder and CTO Nir Zuk coined the phrase XDR (extended detection and response) for a tool that would collect telemetry from endpoints and the network across the intrusion kill chain and use machine learning algorithms to detect malicious behavior. I was actually sitting in the audience of our customer conference when he made the announcement, and I thought, of course, that’s exactly what we need to do.

Finally, Gartner coined the phrase “Secure Access Service Edge” (SASE) in 2019, reimagining traditional security architectures to take advantage of the cloud. 

Papers and books

This phase has been an extraordinary period in which researchers published new ideas that resonated with most infosec practitioners. In 2010, Lockheed Martin’s Hutchins, Cloppert, and Amin published “Intelligence-Driven Computer Network Defense Informed by Analysis of Adversary Campaigns and Intrusion Kill Chains,” the origin of the intrusion kill chain strategy. That same year, John Kindervag, working for Forrester, published “No More Chewy Centers: Introducing The Zero Trust Model Of Information Security.” The idea of zero trust had been around for a number of years, but this paper solidified the concept. As you know, both papers figure heavily in thinking about cybersecurity first principles.

The next year, 2011, Sergio Caltagirone, Andrew Pendergast, and Christopher Betz, working for the U.S. Department of Defense, published "The Diamond Model of Intrusion Analysis,” written around the same time that the Lockheed Martin research team published their intrusion kill chain model. The authors designed the Diamond model specifically for intelligence analysts to track adversary groups across the intrusion kill chain.

In 2013, Mandiant published “APT1: Exposing One of China’s Cyber Espionage Units,” the first public document that outlined Chinese government cyber attack campaigns across the intrusion kill chain. It was also the first time the general public began to notice cyber threat intelligence as something infosec professionals do.

That same year, Gene Kim, Kevin Behr, and George Spafford published “The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win,” introducing the idea of DevOps to the general business world.

In 2014, the National Institute of Standards and Technology (NIST) published the “Framework for Improving Critical Infrastructure Cybersecurity,” which became a cybersecurity best-practice maturity model for the community, organized around the ideas of Identify, Protect, Detect, Respond, and Recover.

And finally, in 2020, my colleague Ryan Olson and I published “Implementing Intrusion Kill Chain Strategies by Creating Defensive Campaign Adversary Playbooks,” the next extension to the intrusion kill chain, Diamond, and MITRE ATT&CK framework models. 


In 2013, Gartner’s Anton Chuvakin coined the term “Endpoint Threat Detection and Response” (ETDR), now commonly referred to as EDR (Endpoint Detection and Response).

That same year, General Valery Gerasimov, the chief of the general staff of the Russian Federation, articulated what has come to be called the Gerasimov doctrine, which pursues asymmetric targets (physical and virtual critical infrastructure, including outer space) across the entire spectrum of conflict. It is the Russian counterpart to the Chinese asymmetric warfare plan.


Law

In 2016, the European Parliament adopted the General Data Protection Regulation (GDPR), a legal framework that requires businesses to protect the personal data and privacy of European Union (EU) citizens for transactions that occur within EU member states. 


Technologies and tools

Palo Alto Networks launched the first next-generation firewall in 2007, a firewall that not only does stateful inspection at layer 3 but, more importantly, allows rules at the application layer, layer 7. Today, all firewall vendors offer next-generation firewalls.

In 2010, the infosec community started seeing the first identity-as-a-service offerings in the cloud. By 2014, Amazon became the first company to offer serverless functions (AWS Lambda).

Strategies and Tactics

In 2015, security orchestration emerged as an idea to manage the complexity of the security stack. In phase one, you could count the number of security tools deployed in a typical network environment on one hand. By this phase, the number of tools infosec practitioners managed ranged anywhere from 15 to 300, depending on the size of the organization. Supervising that complexity became too hard, and security orchestration was the strategy that emerged to solve the problem. This led to the introduction of XDR (extended detection and response) tools, orchestration platforms, and SASE architectures.

By 2016, six out of every ten companies had a Bring-Your-Own-Device (BYOD)-friendly policy in place.

In 2020, I introduced the idea of cybersecurity first principles in the CSO Perspectives podcast as a re-imagining of the ultimate goal for what infosec practitioners were actually trying to accomplish. 

Take Away

With a nod toward Cicero, I couldn’t have conceived of the idea of cybersecurity first principles without understanding the backstory, the path for how we all got here. Studying that path, I learned that many of these ideas coalesced around four phases over the last few decades. In each phase, those ideas aligned along recurring themes: adversary playbook names, entities, firsts, papers and books, people, law, technologies, tools, and strategies and tactics. Using those themes, you can draw a straight line of coherency through each time period around the notions of secure software development, infrastructure-as-code, security architectures, identity and authorization management, complexity management, zero trust, intrusion kill chain prevention, and resilience.


“A Network Security Monitor,” by Todd Heberlein, Gihan Dias, Karl Levitt, Biswanath Mukherjee, Jeff Wood, and David Wolber, IEEE Computer Society Symposium on Research in Security and Privacy, May 7-9, 1990.

“Marcus Tullius Cicero,” HISTORY, 16 December 2009. 

“The Firewall, a Brief History of Network Security,” by Alana Devich, Illumio, 16 February 2015.

“The Importance of History, Why Do We Study History, Why Is History Important? Quotations,” Age-of-the-sage.org, 2022. 

Infosec Timeline

Here is a PDF that captures all the elements in the essay and podcast in one infographic.