CSO Perspectives (Pro) 4.25.22
Ep 74 | 4.25.22

History of Infosec: a primer.


Rick Howard: Hey, everybody.


Unidentified Musical Group: (Singing) We're back - we're back - in our own backyard. We're gonna... 

Rick Howard: Welcome back to the "CSO Perspectives" podcast. We have locked and loaded some fantastic topics for you this season across a wide range of cybersecurity themes. On technology, we're going to talk about SBOMs, single sign-on, two-factor authentication and software-defined perimeter. On strategy, we're going to get into the whole intelligence-sharing thing. And we're going to finish things off with a couple of case studies around the Colonial Pipeline attack of 2021 and the Netflix Chaos Monkey system. But for today's show, we're going to go back to school because you can't really understand the current state of the cybersecurity landscape unless you have a clear idea about how we got here in the first place. So let's get started. 


Unidentified Musical Group: (Singing) Baby, we're back. 

Rick Howard: My name is Rick Howard, and I'm broadcasting from the CyberWire's Secret Sanctum Sanctorum Studios, located underwater somewhere along the Patapsco River near Baltimore Harbor, located in the state of Maryland in the good, old U.S.A. And you're listening to "CSO Perspectives" - my podcast about the ideas, strategies and technologies that senior security executives wrestle with on a daily basis. 

Rick Howard: Regular listeners to this show know that my friend Steve Winterfeld, the Akamai Advisory CISO and CyberWire Hash Table subject matter expert, not to mention the Al to my Rick The Tool Man persona, gets annoyed with me every time I talk a little about our collective cybersecurity history. Studying things from the past is just not his thing. 


Richard Karn: (As Al Borland) I don't think so, Tim. 


Rick Howard: So in past seasons, I would find reasons to insert little nuggets of history in these shows on a regular basis just so that I could annoy him. Well, this show is entirely about history, and I can't wait to see his reaction when he hears this. 


Richard Karn: (As Al Borland) Well, I'm up to the challenge, Tim. 

Rick Howard: And the best part is that I tricked him into commenting on pieces of this episode, so you'll hear from him in a bit. But from now on, whenever Steve rolls his eyes at my history lessons, I get to throw this back in his face. And that is best-friend gold. 

Rick Howard: Let me start it off with a quote from a really old Roman by the name of Marcus Tullius Cicero, someone you've probably heard of because he has regularly shown up in popular entertainment over the centuries, from Dante's "Divine Comedy" to Shakespeare's "Julius Caesar" to the 2005 HBO series "Rome." Now, fun fact - British actor Alan Napier played Cicero in the 1953 movie of "Julius Caesar." But Napier is probably most famous to the American audience as the guy who played Alfred Pennyworth, Batman's butler in the 1960s TV show. Sometimes I out-nerd myself. But Cicero, the real guy, was a famous Roman statesman and orator, a contemporary of Julius Caesar, Pompey, Mark Antony and Octavian. His writings on classical rhetoric and philosophy influenced the great thinkers of the Renaissance and Enlightenment many years later. Here's his quote: "We study history not to be clever in another time, but to be wise always." And he's absolutely right. For me personally, I don't study infosec history so that I can win at nerd Trivial Pursuit tournaments at security conferences, although the Batman factoid might come in handy someday. No, I study infosec history so that I can understand the day-to-day changes going on in the industry. I believe you can't understand the current state of the infosec community unless you have some understanding of what has happened in the past. 

Rick Howard: As an aside, I get a thrill when discovering how cybersecurity things are connected to the nerd community. Like, I just learned that Eric Corley, who founded the famous hacker magazine 2600 in 1984, chose his lead hacker name to be Emmanuel Goldstein, the shadowy resistance leader in George Orwell's novel "1984." Now, that's some cyber nerd trivia symmetry that I can get behind. And for the bonus round, the magazine's name is a reference to the 2600 hertz tone that formerly controlled AT&T's switching system. John Draper and other phone phreakers in the 1960s and '70s became famous for using toy whistles found in Cap'n Crunch cereal boxes and other homemade devices that emitted a sound at that exact frequency, which could seize a dial tone from an AT&T payphone and allowed phreakers to make free phone calls. And for you youngsters out there who don't know what a payphone is, check out my favorite hacker movie, "WarGames," starring Matthew Broderick, and you'll know what I mean. OK. OK. That's kind of cool, and it makes my nerd meter peg, but it's not the reason I study infosec history. Really, it's not. 

Rick Howard: When I think about our relatively short 50-year infosec history, I can make the case that it roughly coalesces around four phases - the mainframe years between 1960 and 1981, the personal computer years between 1981 and 1995, the internet explosion years between 1995 and 2006 and, finally, the cloud explosion years between 2006 and the present. It's not a perfect representation, but each phase represents a major disruption in how people use computers and, consequently, changed how security practitioners thought about securing those computers, too. As we look at the history, certain recurring elements show up in each phase over and over again. We have security firsts - the initial time something happens, like when Citicorp hired Steve Katz in 1995 to be the first-ever chief information security officer. We have adversary playbook names - code names assigned to hacker attack sequences across the intrusion kill chain that researchers have noticed repeatedly in the wild, like BlackByte, an infamous ransomware group. We have entities - government, commercial and academic organizations that instigated some new idea or program or research, like how Gartner coined the term CASB, for cloud access security broker - security technology that protects SaaS applications - back in 2011. We have important papers and books - written research that invented new things, like how Dr. Dorothy Denning published her paper "An Intrusion-Detection Model" in 1986, leading the way for the first commercial intrusion detection tools. 

Rick Howard: We have people - the humans behind the great infosec ideas, like how Dr. Fred Cohen published the first papers in the early 1990s that used defense in depth to describe a common cyberdefense architecture model. We have laws - the legislation that governments pass to control activity in cyberspace, like the European Parliament's General Data Protection Regulation, or GDPR, a legal framework that requires businesses to protect the personal data and privacy of European Union citizens. We have technologies - a term of art referring to an application of knowledge for practical ends, like passwords or two-factor authentication. We have specific tools - a hardware or software device that accomplishes some cybersecurity function or functions, like a firewall. And finally, we have developing strategies and tactics - strategy is the action plan that takes you where you want to go, like zero trust, and tactics are the individual steps that will get you there, like identity and authorization management systems. As we go through the four phases of infosec history, we will see these threads pop up time and again. 


Neil Diamond: (Singing) Hot August night and the leaves hanging down and the grass on the ground smelling sweet. 

Rick Howard: That great 1969 song you're listening to is by Neil Diamond, called "Brother Love's Traveling Salvation Show." It marks the beginning of our infosec history, the mainframe years between 1960 and 1981. In this early phase, as you would expect, there were a number of firsts that launched the infosec community. During the early 1960s, one of computer science's founding fathers, Dr. Fernando Corbato, who invented time-sharing among his many other accomplishments, also introduced the idea of using passwords to keep users on the same mainframe out of each other's files and to enforce limits on each user's time. Remember, back then, computer time was scarce and strictly rationed - the initial allotment was just 4 hours. Who knew that this first security tool would still be the most prominent means to accomplish identity and authorization 50 years later? Dr. Corbato didn't, I can tell you. In 1969, the internet came to life as UCLA and the Stanford Research Institute established the first internet connection over a telephone line. This little fact wouldn't really affect the security community until much later, probably the 1990s, but this was a start that would change everything forever. As Andrew Blum said in his book "Tubes: A Journey to the Center of the Internet," the internet took its first breath that day. 

Rick Howard: In 1978, Gary Thuerk, a marketing manager, sent the first unsolicited bulk email, or spam, to roughly 400 prospects via ARPANET - the Advanced Research Projects Agency Network, a forerunner to the modern internet - and reaped $13 million in sales for his company. So we have him to blame for that. By the mid-1970s, Wulf, Cohen, Corwin, Jones, Levin, Pierson and Pollack had introduced the idea of virtual machines, or virtual sandboxes, in their Carnegie Mellon University Hydra system. This first idea would eventually turn into cloud computing 30 years later. And we also have Ward Christensen and Randy Suess to thank for the first dial-up bulletin board system. They built it during a Chicago blizzard because they wanted a way to keep up with their computer club without having to gather in person. The bulletin board systems that sprang up afterward in the 1980s are where many of the first hackers learned their craft before the internet. 

Rick Howard: In terms of people, the aforementioned John Draper was active during this phase. The phone phreaker movement that he was part of became instrumental in establishing the early hacker culture. For important papers, Dr. Willis Ware led a task force for the Defense Science Board and ARPA, the Advanced Research Projects Agency, starting in 1967. The resulting Ware report eventually led to the first formal penetration testing efforts in the U.S. government and to the publication of the Rainbow Series - the first formal documents that described what is required in computer security. James P. Anderson, in a 1972 report for the Electronic Systems Division of the U.S. Air Force, outlined a series of definitive steps that tiger teams, the first penetration testers, could take to test systems for their ability to prevent computer compromise. Two years later, the U.S. Air Force conducted probably the first penetration test of its Multics operating system. By the end of this phase, in 1980, our guy James Anderson published his second influential paper, "Computer Security Threat Monitoring and Surveillance." This was some of the first research done on intrusion detection. 


Soft Cell: (Singing) Sometimes I feel like... 

Rick Howard: You're listening to the great 1981 song "Tainted Love" by Soft Cell, which means we're talking about Phase 2, the personal computer years, from 1981 to 1995. This second phase kicked off a series of first-time events that have greatly impacted the security community ever since. It all started with the release of the IBM personal computer in 1981. 


Unidentified Person: The very first computers seemed as big as houses and so mysterious that, for most of us, the computer was behind a closed door. But IBM was thinking how to make the computer more useful. And as one good idea led to another, it began getting smaller, faster, less expensive and easier to use. Today, a new IBM computer has reached a personal scale - the IBM personal computer, now in selected stores across the country. 

Rick Howard: There were other companies making personal computers at the same time, like Apple and Tandy. But IBM had the marketing and distribution clout to convince the public to buy these new machines for their homes. This meant that computers were no longer isolated to government and academic researchers working on giant mainframes in underground bunkers somewhere. Now anybody with some extra cash lying around could have one in their living room. This led to a lot of hackers tinkering around with what they could do. 

Rick Howard: Hey, do we have time for an old Army war story? Of course we do. When I was a second lieutenant stationed at Fort Polk, La., I wandered up to the brigade headquarters to deliver some paperwork one morning. I noticed that the colonel's admin, Abigail Thibodeaux (ph), a Cajun native who some claimed had been working at Fort Polk all the way back to when General Patton trained there, was busily cranking out memos on her Wang word processor. But over in the corner of the office, unplugged, was a brand-new IBM personal computer, fully loaded. This machine probably cost the Army over $3,000 back in the 1980s. When I had the temerity to suggest to Ms. Abigail that she should try to learn the new machine, she about took off my head. Colonels' admins had the power of three-star generals back then - and probably still do - just so you know. She told me that she had no use for that newfangled contraption. The bottom line is that after I made a coffee run for her, I convinced her and the colonel to let me take the PC home for a while to learn how to use it. So I had that going for me. 

Rick Howard: In 1983, Steve Capps created the first fuzzer program, although he didn't coin the term, by repurposing another tool called The Monkey, which let a Macintosh computer demo itself by playing back recorded actions to create random mouse clicks and keyboard input, in order to test the MacWrite and MacPaint applications. Slinging random input into software programs to see what would crash them gave hackers a place to start looking for exploit code possibilities. Think of it like throwing radar signals into the air, looking for stealthy fighter planes. The signals that bounce back give clues as to where the planes are. It's the same idea with fuzzing software. 

Rick Howard: In 1988, University of Wisconsin professor Barton Miller coined the term fuzz testing, work he and his students later documented in the paper "An Empirical Study of the Reliability of UNIX Utilities." And when I ran a commercial cyber intelligence service called iDefense back in the 2000s, we ran racks of server farms dedicated to fuzzing software. We would sell newly discovered vulnerabilities to our customer base, and we would even build and sell exploit code to our government clients based on those vulnerabilities. If you want an in-depth history lesson on the exploit market, read Nicole Perlroth's "This Is How They Tell Me the World Ends." There's an entire chapter on how iDefense invented the market of selling exploit code. That happened well before I got there, but I ran that part of the business for years after. 

Rick Howard: With the number of personal computers in the world escalating, the first computer viruses started to appear. In 1987, Bernd Fix created a method to neutralize the Vienna virus, writing what is considered the first documented antivirus software. And that same year, Omni magazine coined the word cyberwar, defining it in terms of giant robots and autonomous weapons. At the INTEROP conference in 1989, John Romkey created the first Internet of Things device, a toaster that could be turned on and off over the internet. We didn't get the name Internet of Things until the next phase, in 1999, when Kevin Ashton coined the term at a Procter & Gamble conference. 

Rick Howard: In terms of entities starting big ideas, there were two in this period. Between 1990 and 1991, the Chinese government trained a group of North Korean hackers and gave them the idea that they could use the internet to steal secrets and attack the enemies of their government. And in 1994, Amazon launched as an online bookstore; its later work on an e-commerce service called merchant.com - built to help third-party merchants like Target and Marks & Spencer run online shopping sites on top of Amazon's e-commerce engine - eventually led to AWS. 

Rick Howard: In terms of important papers and books, in 1983, the U.S. government published the first volume in the series of Rainbow Books, called the Orange Book - the DoD "Trusted Computer System Evaluation Criteria" - which gave the first guidance on how to secure government computers. I referenced at the top of the show Eric Corley's launch of "2600: The Hacker Quarterly" in 1984, an American magazine sometimes called the hackers' bible that discussed the legal, ethical and tactical debates around hacking. In other words, the magazine cultivated early hacker culture. In 1986, Dr. Dorothy Denning published her paper "An Intrusion-Detection Model" in the proceedings of the seventh IEEE Symposium on Security and Privacy, leading the way for the first commercial intrusion-detection tools. Her paper is the basis for most of the work in IDS technology that followed. 

Rick Howard: I already talked about Dr. Clifford Stoll, who, while working as a system administrator at the Lawrence Berkeley lab in California, detected the first-ever public cyber-espionage campaign, sponsored by the Russians. In 1988, he published "Stalking the Wily Hacker," documenting his investigation, and then, the next year, published "The Cuckoo's Egg," which covered the same material in more detail. That book influenced many technically oriented people like me to choose cybersecurity as a profession. The U.S. Army had sent me to grad school to become an IT guy when the book came out. But after I read it over a weekend when I should have been working on my thesis, it changed the trajectory of my military career. Back then, the internet was small, and authors put their personal email addresses in their books. When I finished it, I immediately sent a fangirl note to Dr. Stoll telling him how much I loved the book. He answered me in 15 minutes, and I've been a fan ever since. Dr. Fred Cohen published the first papers, in 1991 and 1992, that used defense in depth to describe a common cyberdefense model in the network defender industry. That model is still used by many today. I couldn't find any published research claiming that Dr. Cohen coined the phrase, so many years ago, I called him on the phone and asked him. Dr. Cohen said that he probably didn't invent the phrase, but he was most likely the first one to use it in a research paper. So we're going to give him the credit. In 1993, John Arquilla and David Ronfeldt, working for the RAND Corporation, refined the term cyberwar when they published "Cyberwar Is Coming!", introducing the idea that cyberattacks could be used for traditional warfare. Now, Winterfeld and I cut our teeth in the U.S. Army around this idea of cyberwarfare, and both of us have observed the massive evolution of the concept over the years. Here's Steve. 

Steve Winterfeld: Originally, back in the '80s, early '90s, we were always hearing about the electronic Pearl Harbor. And all the senior leaders were talking about the Maginot Line and comparing all of this current information warfare to the warfare they understood. And a lot of the different examples, obviously, just didn't work very well. I remember at one point talking about information guerrilla warfare, you know? And at the time, we were talking about low-intensity conflict. And then low-intensity conflict became military operations other than war. And now we're talking about asymmetric warfare and how information warfare fits into the instruments of national power - diplomatic, information, military and economic. And so there's just been so much maturity. Now we talk about weapons of mass destruction. And there is doctrine, you know? Information warfare is part of most national strategies. When you think back to '94, that's when the first book came out - Winn Schwartau put out his book on information warfare, "Chaos on the Electronic Superhighway." And it was a great foundational book. But now, just the maturity and thought process - and the flip side of that is the operational excellence, the amount of weapons, you know? We talk about weapons of cyber that have a kinetic effect. And so it's just really been a long evolution along that side that's kind of fascinating. 

Rick Howard: In 1994, William Cheswick and Steven Bellovin published "Firewalls and Internet Security: Repelling the Wily Hacker," the first book on firewalls as a technology. They called it a circuit-level gateway and packet-filtering technology. Interestingly, their ideas came from the desire not to keep intruders out of their networks, but to keep employees from going to bad places on the internet. Many years later, in the late 2000s, I got to meet Bill Cheswick. The NSA had invited me, Bill and a host of subject matter experts across a diversified set of disciplines to a retreat in New Mexico to see if a cross-pollination effort could help the NSA think differently about cybersecurity. Nothing really came of it that I know of, but Bill was trapped with me on a long bus ride from the airport to the retreat. It was a fabulous conversation. In terms of iconic people in the security community, three got their start in this phase. In 1988, Robert Tappan Morris, then a first-year computer science graduate student at Cornell, created and launched the Morris worm onto the internet, the first of its kind to cause as much damage as it did. When all was said and done, 10% of the existing internet was impacted. His actions also resulted in the first felony conviction in the U.S. under the 1986 Computer Fraud and Abuse Act and prompted DARPA to fund the establishment of the CERT/CC at Carnegie Mellon University. In 1993, Jeff Moss - aka Dark Tangent - organized the first DEF CON, the security conference that caters to the hacker ethos. And finally, in 1994, Vladimir Levin successfully hacked Citibank to the tune of $10 million, making him likely the instigator of the first significant cybercrime. As with cyberwarfare, we have seen massive changes in the cybercrime landscape since it all started. Here's Steve again. 

Steve Winterfeld: So one of the things I find interesting looking at kind of the evolution of criminal activity is the amount of innovation they've shown. As we continue to build out more and more capabilities that are driving businesses, that have money involved with them, the criminals, as always, are following the money. Why do criminals rob banks? That's where the money is. And so when they started to see money online at a retail organization - you know, they saw credit cards, so they went and went after the credit cards. So we protected the credit cards. And then they said, OK, well, we can go after the gift cards. And so then we protected the gift cards. And then they went after the reward points because they could monetize those. Then we protected the reward points. They constantly innovate in different ways to figure out, wherever we have something of value, how they can monetize it. And that has been going on and will continue. As we now switch from, you know, typical webpages back to APIs, you see the criminals following us back into this new infrastructure. And again, we have another generation and evolution of people that were protecting servers. Now we have developers. And the developers have to be able to protect that API infrastructure. So it's this constant evolution that I find fascinating. 

Rick Howard: For those of you keeping score at home, I managed to get Steve Winterfeld, a man who hates all things related to InfoSec history, to contribute not one but two pieces to my omnibus InfoSec history episode. And I never want to see him rolling his eyes at me again when it comes to InfoSec history. Enough said. 

Rick Howard: In terms of new law, the U.S. Congress passed two significant pieces of legislation in this period. The first was the aforementioned Computer Fraud and Abuse Act of 1986, an amendment to the first federal computer fraud law. It levied harsh penalties on hackers intentionally accessing a computer without authorization. The second was the Electronic Communications Privacy Act, also passed in 1986, designed to balance the privacy expectations of citizens with the legitimate needs of law enforcement. 

Rick Howard: In terms of new technologies, in 1988, the Kerberos Version 4 protocol was first publicly described in a USENIX conference paper. Kerberos is a network security protocol that authenticates service requests between two or more trusted hosts across an untrusted network, and it is the underlying technology in Microsoft's Active Directory today. In 1993, Tim Howes, Steve Kille and Wengyik Yeong developed the Lightweight Directory Access Protocol, or LDAP, an open, vendor-neutral application protocol to manage authentication access to usernames, passwords, email addresses, printer connections and other static data within directories. This protocol is also an important piece of Microsoft's Active Directory. 

Rick Howard: In terms of tools, firewalls emerged on the scene during this phase as the security tool to deploy in your defense-in-depth architecture. In 1988, Jeff Mogul, Brian Reid and Paul Vixie, working for Digital Equipment Corp., conducted the first research on firewall technology with tools like the gatekeeper.dec.com gateway and screend. This was the first generation of firewall architectures. Between '89 and '90, Dave Presotto and Howard Trickey of AT&T Bell Laboratories pioneered the second generation of firewall architectures with their research in circuit relays. They also implemented the first working model of the third generation, known as application-layer firewalls. However, they neither published any papers describing this architecture nor released a product based on their work. 

Rick Howard: Between 1990 and '91, Gene Spafford of Purdue University, my guy Bill Cheswick of AT&T Bell Laboratories and of wily hacker fame, and Marcus Ranum independently researched application-layer firewalls. These eventually evolved into the next generation of firewalls many years later. Marcus Ranum's firewall work received the most attention and took the form of bastion hosts running proxy services. In 1992, Digital Equipment Corp. shipped DEC SEAL, the first commercial firewall, which included proxies developed by Marcus Ranum. In 1994, Check Point Software released the first stateful inspection commercial firewall. 


The Rembrandts: (Singing) So no one told you that was gonna be this way. 

Rick Howard: You're listening to the iconic 1995 song and theme from the "Friends" TV show called "I'll Be There for You" by The Rembrandts, which means we're talking about phase three - the internet years from 1995 through 2006. Comparing InfoSec history to the life of a human, the first phase would be the toddler years - just learning about the existing new environment. The second phase would be the elementary school years, where the human starts learning about how to interact with the rules of the world. This third phase would be the teenage years, where the human starts rebelling against all the things the parents did in their generation. 

Rick Howard: For our firsts, we have the internet kicking into high gear with the mainstream adoption of the World Wide Web sometime around 1995. In the same year, as I mentioned at the top of the show, Citicorp hired Steve Katz to be the first-ever CISO. And in 1997, U.S. Deputy Secretary of Defense John Hamre, during a congressional hearing, popularized the phrase electronic Pearl Harbor - a calamitous surprise cyberattack designed not just to take out military command and control communications, but to physically devastate American infrastructure. Government leaders have been concerned with that idea ever since. And that same year, the NSA Red Team conducted a no-notice vulnerability assessment and penetration test - code name Eligible Receiver - of critical government networks, to include the Department of Defense. The report showed the networks were so poorly protected that leadership quickly classified the results. 

Rick Howard: In 2003, Dave Wichers and Jeff Williams, working for Aspect Security, a software consultancy, published an educational piece on the top software security coding issues of the day. That eventually turned into the OWASP Top 10, from the Open Web Application Security Project - a reference document describing the most critical security concerns for web applications. 

Rick Howard: For adversary playbook names, we got the first cool one - Moonlight Maze - in 1998, when the U.S. Defense Information Systems Agency, or DISA, discovered Russian hacker activity directed against the Pentagon, NASA - the National Aeronautics and Space Administration - and some affiliated laboratories. I was the network manager for the Army's operations center located in the Pentagon at the time and remember exactly the moment when Warrant Officer Stephens (ph) from DISA knocked on my door and informed me that he would be taking control of my network for the next few hours while he did his investigation. He didn't find the Russians on my network, but you could tell he was spooked. The hackers stole unclassified information on contracts, general research, military data, troop data and maps of military installations. 

Rick Howard: By 2003, the U.S. Department of Defense discovered the first Chinese cyber-espionage operations conducted against military targets. Eventually, the public learned the code name - Titan Rain. I was the commander of the Army's CERT during that time. Before Titan Rain, we were mostly concerned with low-level cybercrime against Army networks; after it, we had to elevate our game to combat nation-state cyber-espionage. 

Rick Howard: In terms of entities, in 2003, Amazon began building infrastructure-as-code projects internally - the beginnings of DevOps - a set of common infrastructure services everyone in the company could access without reinventing the wheel every time. Amazon business leaders eventually realized that they could build the operating system of the internet from these services, and that realization eventually led to AWS. By 2004, Google followed suit and invented site reliability engineering, a fight against the manual toil involved in maintaining networks. That same year, a voice-over-IP service provider, BroadVoice, introduced the idea of bring your own device to work. Up to this point, organizations had supplied all computing equipment to their employees. This was a first step in making it OK for employees to use their own computing systems to do work for the organization. In 2005, Concur became the first company to offer a SaaS cloud platform. And Gartner security analysts Mark Nicolett and Amrit Williams coined the term SIEM, for security information and event management, as an improvement to traditional log collection systems, offering long-term storage and combining log analytics with a focus on security events. 

Rick Howard: Researchers published two important papers during this period. In 1996, Aleph One published "Smashing the Stack for Fun and Profit," the first public document about the practice of using buffer overflow attacks to exploit software. As I mentioned at the top of the show, I ran a team at iDefense that tried to find zero-day exploits in common software. 
We had a handful of guys who were really good at fuzzing software, looking for vulnerabilities and then creating zero-day exploits to leverage those weaknesses using the buffer overflow technique. I have to say, buffer overflows have always been a mystery to me. I mean, I could explain how buffer overflows work conceptually, but there is some magic involved in getting them to work consistently every time. In 1999, David Baker, Steven Christey, William Hill and David Mann, working for MITRE at the time, published "The Development of a Common Enumeration of Vulnerabilities and Exposures," establishing the first public common vulnerability enumeration database - the CVE. 

Rick Howard: In terms of deep thinkers and influencers in this phase, in 1999, Qiao Liang and Wang Xiangsui, two Chinese colonels, published "Unrestricted Warfare: China's Master Plan to Destroy America," which proposed the strategy of what would eventually be known as asymmetric warfare, designed to level the playing field against U.S. military might. When I was at the Army CERT, we consumed this book, looking for clues about how to defend against Titan Rain. In 2002, Bill Gates turned Microsoft on a dime to implement trustworthy computing. He shut down Windows development for the first time ever to get a handle on the security issues the products were facing, and that resulted in the Microsoft Security Development Lifecycle. 

Rick Howard: For legislation, the U.S. Congress passed four new laws - and an industry association agreed on one set of standards - that have shaped the cybersecurity landscape ever since. In 1996, the U.S. Congress passed the Health Insurance Portability and Accountability Act, or HIPAA, to require the adoption of national standards for electronic health care transactions, code sets and unique health identifiers for providers, health insurance plans and employers. In 1999, it passed the Gramm-Leach-Bliley Act to protect consumers' personal financial information held by financial institutions. 
In 2001, the Payment Card Industry Security Standards Council established the Payment Card Industry Data Security Standard - or PCI DSS - cybersecurity controls and business practices that any company that accepts credit card payments must implement. In 2002, the U.S. Congress passed the Federal Information Security Management Act, or FISMA, which requires federal agencies to implement a program to provide security for their information systems. And finally, the same year, they passed the Sarbanes-Oxley Act to protect investors and the public by increasing the accuracy and reliability of corporate disclosures and to hold companies liable for bad identity and access management systems. For emerging tools, in 1998, the hacktivist group Cult of the Dead Cow released the first version of "Back Orifice," authored by Sir Dystic, at DEF CON 6 to demonstrate the lack of security in Microsoft's Windows 9x series of operating systems. In 2000, Poul-Henning Kamp introduced jails, which allowed administrators to partition a FreeBSD Unix computer system into several independent smaller systems, with the ability to assign an IP address and configuration to each system. This was the next step in virtual machines. In 2002, OASIS, a nonprofit standards body, approved the Security Assertion Markup Language - or SAML - version 1.0, a standard that allows identity providers to pass authorization credentials to service providers. And finally, in 2005, Brad Fitzpatrick developed the first-generation OpenID authentication protocol. This eventually became the authentication layer for OAuth, an open standard authorization protocol that gives applications the ability to get secure, designated access. For new strategies and tactics, two emerged during this phase. In 2000, internet founding father Vint Cerf coined the phrase cyber-hygiene when he testified to the United States Congress Joint Economic Committee. 
InfoSec practitioners had been executing this best practice for at least a decade prior, but Mr. Cerf gave it a name. In 2001, 17 software developers published the Agile Manifesto, a rejection of the waterfall model and an embrace of the idea of producing real working code as a milestone of progress. This was the start of the agile software development movement and the precursor to DevOps and DevSecOps. 
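The buffer overflow technique Aleph One described, and that Rick admits has some magic to it, can be illustrated with a toy simulation. This is a conceptual sketch only, not real exploitation: a fixed-size stack buffer that, when overfilled with no bounds check, clobbers the saved return address stored next to it.

```python
# Toy model of a stack frame: a fixed-size buffer sitting below a saved
# return address. Writing past the buffer's end overwrites the address,
# redirecting execution -- the essence of a classic buffer overflow.

BUF_SIZE = 8

def make_frame():
    """A simplified stack frame: 8 buffer slots plus a saved return address."""
    return {"buffer": [0] * BUF_SIZE, "return_address": 0x401000}

def unsafe_copy(frame, data):
    """Copies input into the buffer with no bounds check (like C's strcpy)."""
    for i, value in enumerate(data):
        if i < BUF_SIZE:
            frame["buffer"][i] = value
        else:
            # Input past the buffer spills into the adjacent saved
            # return address slot -- attacker-controlled control flow.
            frame["return_address"] = value

frame = make_frame()
unsafe_copy(frame, [0x41] * 8)                 # fits: return address untouched
print(hex(frame["return_address"]))            # 0x401000

frame = make_frame()
unsafe_copy(frame, [0x41] * 8 + [0xdeadbeef])  # one value too many
print(hex(frame["return_address"]))            # 0xdeadbeef -- hijacked
```

Real exploits are messier, of course - stack layouts, compiler protections and address randomization are the "magic" Rick alludes to - but the core mechanism is exactly this: unchecked writes adjacent to control data.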


Gnarls Barkley: (Singing) I remember when - I remember, I remember when I lost my mind. 

Rick Howard: You're listening to the 2006 song "Crazy" by Gnarls Barkley, which means we're talking about Phase 4, the cloud explosion, from 2006 to the present. In this phase, the InfoSec community is transitioning from the teenage years to the young adult years. In terms of firsts, in 2006, Amazon became the first company to offer an infrastructure-as-a-service cloud platform, Amazon Elastic Compute Cloud, or EC2. And the InfoSec industry started seeing managed identity services for the first time. 

Rick Howard: Between 2008 and 2009, the idea of the Internet of Things became real when Cisco reported that more things were connected to the internet than people. In 2008, Dr. Gary McGraw published the first Building Security In Maturity Model, or BSIMM, report, a survey of some 30-plus companies that collated initiatives and activities around software security. This wasn't a prescriptive maturity model. It was a survey that captured what companies were actually doing to write secure code. I spoke at an FS-ISAC conference back in the late 2000s. The speaker who preceded me was Dr. McGraw, and thus we were seated together at the conference dinner. After a long discussion well into the night about all manner of things, we realized that our two offices back home were in the same building. Who knew? Now, that is some cosmic kismet. 

Rick Howard: The next week he walked his book, "Software Security: Building Security In," over to my office and gave me a personally signed copy. The next year, 2009, Pravir Chandra published the first SAMM, the Software Assurance Maturity Model, a prescriptive software security model that gave practitioners a way to measure how well they're doing against a set of prescribed best practices. Also in 2009, Intel became probably the first commercial company to approve a formal bring-your-own-device policy, when the company realized that many of its employees were bringing their own devices into work and connecting to the corporate network. And finally, Robert Gates, President Obama's secretary of defense, decided, after the Russian penetration of the Pentagon's classified networks the previous year, to create U.S. Cyber Command. 

Rick Howard: In terms of adversary playbook activity, the big five cyber powers - the United States, China, Russia, Iran and North Korea - all stepped up their game around cyber-espionage and continuous low-level cyber conflict operations. In 2007, Russia launched DDoS attacks against Estonia in the first real example of what cyberwarfare might look like. The following year, 2008, Russian hackers - the group known as Turla, aka Snake - penetrated the Pentagon's classified networks. The Pentagon's remediation effort was code-named Operation Buckshot Yankee. This is the event that led to Robert Gates' decision to create Cyber Command. 

Rick Howard: Also in 2008, the Chinese People's Liberation Army, or PLA, penetrated Lockheed Martin's networks and stole the plans for the F-35, the world's most sophisticated and certainly most expensive fighter jet. In 2010, the U.S. and Israeli governments launched Olympic Games - the Stuxnet attack - the first public cyberattack intended to destroy another country's critical infrastructure, in this case, the Iranian uranium enrichment plant at Natanz. This might be the first public cyberattack to cross over from cyber-espionage to cyberwarfare, an escalation from the Russian DDoS attacks on Estonia to actually physically sabotaging equipment. 

Rick Howard: When I worked at iDefense, I had contracts with several U.S. intelligence agencies. We had done some of the initial reporting on Stuxnet. And one agency in particular asked me to come over to give their analysts a brief on what we knew. This was several years after I retired from the Army, and I no longer had a government clearance. So I sat in a room by myself at the agency waiting for my turn to brief. Apparently, they had many groups coming in to do the same thing. They had a red-bubble police light on the ceiling flashing, indicating to everybody around that I wasn't cleared. 

Rick Howard: When it came to my turn to brief, I walked into a room with about 30 analysts. Throughout the entire hourlong presentation, they didn't say one thing. I mean, I was cracking jokes left and right - some of my best material. And they all sat there stone-faced, I guess in fear that they would give away some state secret. At the end, one brave analyst raised her hand to ask a question. She said, who do you think was behind the attack? I was gobsmacked. After a few seconds, I said, we think you did it. They never asked me back to brief again. I wonder why. 


Tim Allen: Oh, no. 

Rick Howard: In 2011, the Chinese PLA hacked RSA - the company, not the conference - and stole the secret cryptographic keys responsible for the encryption function of its SecurID token product line. Many organizations used SecurID tokens for two-factor authentication. It was one of the first public supply chain attacks and led to the compromise of Lockheed Martin, Northrop Grumman and L3. It was also the first time that a pure-play commercial company, not a government contractor, noticed adversary lateral movement as a step in the hacking sequence, a step that had been captured by Lockheed Martin's intrusion kill chain strategy a year before. That same year, responding to Olympic Games, Iranian hackers began DDoSing roughly four dozen American financial institutions, including JPMorgan Chase, Bank of America, Capital One, PNC Bank and the New York Stock Exchange. The next year, 2012, they crippled Saudi Aramco, the world's largest oil producer, destroying 30,000 computers and 10,000 servers. 

Rick Howard: And in 2013, they breached New York state's Bowman Avenue Dam's command-and-control system, causing ripples of panic around governments everywhere. The Bowman Avenue Dam is in Rye, near Long Island Sound, and as my editor John Petrik says, it's a dinky little flood control dam that's been described as designed to keep a babbling brook from babbling and flooding a Little League field. There's a big hydroelectric dam, the Bowman Dam, in Oregon, and some think that was the real target. Others think they really were after the little dam in Rye as a proof of concept. Regardless, many government leaders received the information as a threat: even though Iran was small, it could do tremendous damage to a nation's critical infrastructure via a cyberattack if it wanted. 

Rick Howard: In 2014, Iranian hackers crippled the Sands Casino in Las Vegas because of negative public comments made by the owner against Iran. Also in 2014, U.S. intelligence agencies discovered that Russian hackers had penetrated the U.S. electrical grid in many locations using malware called BlackEnergy. Back in 2013, though, Deep Panda, a Chinese hacking group, compromised OPM's database containing PII, or personally identifiable information, on U.S. government clearance holders in what might be the largest and most impactful cyber-espionage campaign known to the public against any country. The vast amounts of data collected, plus the longevity of it all - over 50 years, since that's how long it will take for all individuals caught in the net to age out of government service - will be useful for many years to come. I did an entire episode on the OPM breach last season. If you want to hear me rant for over 30 minutes about how badly OPM handled that situation, you should check it out. 

Rick Howard: Not to be outdone, in 2014, North Korean hackers Guardians of the Peace crippled Sony because of a movie that depicted the North Korean great successor, respected comrade and General-Secretary Kim Jong Un, in an unfavorable light. It marked the first time that a U.S. president, President Obama, confirmed a cyber attribution on national television. In 2016, North Koreans stole $81 million from the Bangladesh central bank, and this marked the first public discovery of a new trend - nation-states using government assets to conduct cybercrime for two potential reasons, an APT side hustle to fund their nation-state missions and state-sanctioned organized cybercrime to bring revenue into the country. In 2017, they launched a ransomware attack codenamed WannaCry using the EternalBlue exploit tool in the attack sequence, a tool that was stolen from the NSA by The Shadow Brokers hacktivist group and made public. A couple of years after this, when I was working for Palo Alto Networks, I visited the Sony CISO. In the aftermath, they became a customer, and I was checking in with them. My nerd meter pegged again because after you walk through the iconic Sony Pictures archway and hang a left, you end up on an outdoor soundstage that replicates New York City in the 1940s. Our guide walked us into the Forty Square Bar, down some stairs, and there we were, right in the middle of the Sony Pictures IT department. How cool is that? 

Rick Howard: In terms of government, commercial and academic organizations, in 2010, the Industrial Control Systems CERT, or ICS-CERT, started tracking industrial control system vulnerabilities. The Internet Engineering Task Force, or IETF, released OAuth as an open standard authorization protocol that describes how unrelated servers and services can safely delegate authenticated access to their assets without actually sharing credentials. The Iranian government announced the creation of the Cyber Corps, its answer to U.S. Cyber Command. Google publicly announced it had been hacked by the Chinese government in what came to be known as Operation Aurora. Before that, no commercial company would ever admit such a breach for fear of the reputational damage it might suffer. After Google's announcement, and aided by public disclosure laws, more and more companies followed the practice. The event also led Google's site reliability engineers to rebuild the Google internal network from the ground up using software-defined perimeter and zero trust as their main strategies. 
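The OAuth idea described here - delegating access without sharing credentials - can be sketched in a few lines. This is a simplified illustration of the concept, not the full protocol; the class names and the `photos:read` scope are hypothetical, standing in for a real authorization server and resource server.

```python
import secrets

# Toy OAuth-style delegation: the authorization server mints a scoped,
# opaque bearer token; the resource server honors the token, so the
# client never handles the user's actual credentials.

class AuthorizationServer:
    def __init__(self):
        self._tokens = {}  # token -> (user, scope)

    def issue_token(self, user, scope):
        """After user consent, mint an opaque bearer token for one scope."""
        token = secrets.token_hex(16)
        self._tokens[token] = (user, scope)
        return token

    def validate(self, token, required_scope):
        entry = self._tokens.get(token)
        return entry is not None and entry[1] == required_scope

class ResourceServer:
    def __init__(self, auth_server):
        self.auth = auth_server
        self.photos = {"alice": ["beach.jpg", "dog.jpg"]}

    def get_photos(self, user, token):
        # Access is granted on the token's scope, never on a password.
        if not self.auth.validate(token, "photos:read"):
            raise PermissionError("invalid or out-of-scope token")
        return self.photos[user]

auth = AuthorizationServer()
server = ResourceServer(auth)

token = auth.issue_token("alice", "photos:read")  # the delegation step
print(server.get_photos("alice", token))          # ['beach.jpg', 'dog.jpg']
```

The design choice is the whole point: the printing service, the calendar app, whatever the client is, only ever holds a revocable token limited to one scope, which is exactly the "safely delegate without sharing credentials" property the IETF standardized.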

Rick Howard: In 2011, Gartner coined the term CASB, cloud access security broker, for security technology that protects SaaS applications. The World Economic Forum began to use the term resilience for the ability of systems and organizations to withstand cyber events. And also in 2011, the U.S. Office of Management and Budget, OMB, established the Federal Risk and Authorization Management Program, FedRAMP, to empower federal agencies to use modern cloud technologies while still protecting federal information. In 2013, dotCloud released Docker, an open-source container management platform, and established a partnership with Red Hat. The idea of containers had been around for a while, but this started the momentum to make them standard practice. That same year, MITRE established the ATT&CK framework, an extension of the intrusion kill chain model that operationalized the Lockheed Martin strategy document with adversary tactics, techniques and procedures. Google released Kubernetes version 1.0 in 2015, an open-source container orchestration system, and gave it to the Cloud Native Computing Foundation to manage. In 2017, Gartner coined the phrase security orchestration, automation and response, or SOAR, for tools that orchestrate the security stack. In 2018, Palo Alto Networks founder and CTO Nir Zuk coined the phrase XDR, extended detection and response, a tool that would collect telemetry from endpoints and the network across the intrusion kill chain and use machine learning algorithms to detect malicious behavior. I was actually sitting in the audience of our customer conference when he made the announcement, and I was like, of course, that's exactly what we need to do. Finally, Gartner coined the phrase secure access service edge, or SASE, in 2019, reimagining traditional security architectures to take advantage of the cloud. 

Rick Howard: For important papers, this phase has been an extraordinary period where researchers published new ideas that resonated with most infosec practitioners. In 2010, Lockheed Martin's Hutchins, Cloppert and Amin published "Intelligence-Driven Computer Network Defense Informed by Analysis of Adversary Campaigns and Intrusion Kill Chains." This is the origination of the intrusion kill chain strategy. That same year, John Kindervag, working for Forrester, published "No More Chewy Centers," introducing the zero trust model of information security. The idea of zero trust had been around for a number of years, but this paper solidified the concept. And as you know, both papers figured heavily into the thinking about cybersecurity first principles in this podcast. 

Rick Howard: The next year, 2011, Sergio Caltagirone, Andrew Pendergast and Christopher Betz, working for the U.S. Department of Defense, published "The Diamond Model of Intrusion Analysis," written around the same time that the Lockheed Martin research team published their intrusion kill chain model. The authors designed the diamond model specifically for intelligence analysts to track adversary groups across the intrusion kill chain. In 2013, Mandiant published "APT1: Exposing One of China's Cyber Espionage Units," the first public document that outlined the Chinese government's cyberattack campaigns across the intrusion kill chain. This is also the first time the general public started to notice cyberthreat intelligence as something infosec professionals do. 

Rick Howard: That same year, Gene Kim, Kevin Behr and George Spafford published "The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win," introducing the idea of DevOps to the general business world. By 2014, the National Institute of Standards and Technology, NIST, published the "Framework for Improving Critical Infrastructure Cybersecurity," which became a cybersecurity best-practice maturity model for the community around the ideas of identify, protect, detect, respond and recover. And finally, in 2020, my colleague Ryan Olson and I published "Implementing Intrusion Kill Chain Strategies by Creating Defensive Campaign Adversary Playbooks," the next extension to the intrusion kill chain, diamond model and MITRE ATT&CK frameworks. 

Rick Howard: In terms of people, in 2013, Gartner's Anton Chuvakin coined the term endpoint threat detection and response, now commonly referred to as EDR, endpoint detection and response. That same year, General Valery Gerasimov, chief of the General Staff of the Russian Federation, established the unofficial Gerasimov doctrine, which seeks asymmetric targets - physical and virtual critical infrastructure, including outer space - across the spectrum of war. This is the Russian version of the Chinese asymmetric warfare plan and what pundits have been expecting in the Ukraine war. 

Rick Howard: In terms of law, in 2016, the European Parliament adopted the General Data Protection Regulation, GDPR, a legal framework that requires businesses to protect the personal data and privacy of European Union citizens for transactions that occur within EU member states. For tools, Palo Alto Networks launched the first next-generation firewall in 2007, a firewall that not only does stateful inspection at layer three but, more importantly, allows rules at the application layer, layer seven. Today, all firewall vendors offer next-generation firewalls. 
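The distinction Rick draws between stateful inspection at layer 3 and application-layer rules at layer 7 can be sketched with two toy matching functions. The rule structures here are hypothetical illustrations, not any vendor's actual rule syntax.

```python
# Toy comparison: a layer-3 rule matches only on addresses and ports,
# while a next-generation (layer-7) rule can also match the application
# identified inside the traffic, regardless of which port it rides on.

def l3_match(rule, packet):
    """Classic stateful-firewall match: destination IP and port only."""
    return (packet["dst_ip"] == rule["dst_ip"]
            and packet["dst_port"] == rule["dst_port"])

def l7_match(rule, packet):
    """Next-gen match: the identified application, independent of port."""
    return packet["app"] == rule["app"]

# BitTorrent traffic tunneled over port 443 to look like HTTPS.
packet = {"dst_ip": "10.0.0.5", "dst_port": 443, "app": "bittorrent"}

l3_rule = {"dst_ip": "10.0.0.5", "dst_port": 443, "action": "allow"}
l7_rule = {"app": "bittorrent", "action": "deny"}

# The port-based rule happily allows the disguised traffic; the
# application-aware rule catches and denies it.
print(l3_match(l3_rule, packet))  # True -> allowed at layer 3
print(l7_match(l7_rule, packet))  # True -> denied at layer 7
```

That gap - traffic that looks legitimate at the port level but isn't the application the port implies - is exactly why application-layer rules became the defining feature of the next-generation firewall.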

Rick Howard: In 2010, the infosec community started seeing the first identity as a service in the cloud. By 2014, Amazon became the first company to offer serverless functions, AWS Lambda. And for strategy and tactics, in 2015, security orchestration emerged as an idea to manage the complexity of the security stack. In phase one, you could count the number of security tools deployed in a typical network environment on one hand. 

Rick Howard: By this phase, the number of tools infosec practitioners managed ranged anywhere from 15 to 300 depending how big the organization was. Supervising that complexity became too hard, and security orchestration was the strategy that emerged to solve the problem. This led to the introduction of XDR, extended detection and response tools, orchestration platforms and SASSY architectures. And by 2016, 6 out of every 10 companies had a bring-your-own-device-friendly policy in place. Finally, in 2020, I introduced the idea of cybersecurity first principles in the "CSO Perspectives" podcast as a reimagining of the ultimate goal of what infosec practitioners were actually trying to accomplish. 

Rick Howard: With a nod toward my man Cicero, I couldn't have conceived of the idea of cybersecurity first principles without understanding the backstory, the path of how we all got here. Studying that path, I learned that many of these ideas coalesced around four phases over the last several decades. In each phase, those ideas aligned along recurring themes - firsts, adversary playbook names, entities, important papers and books, people, law, technologies, tools, and strategies and tactics. Using those themes, you can draw a straight line of coherency through each time period around the notions of secure software development, infrastructure as code, security architectures, identity and authorization management, complexity management, zero trust, intrusion kill chain prevention and resilience. 

Rick Howard: And that's a wrap. Thanks for joining me on this season of "CSO Perspectives." And a special thanks to Steve Winterfeld, the Akamai advisory CISO, for helping me out. Steve, don't forget - I will never let you live this down. For those interested in a printed or online timeline of the material I just covered, my colleague Brendan Karpf built a beautiful PDF document that contains all of the information. You can find the link in the show notes. 

Rick Howard: And as always, if you agree or disagree with anything I've said, or if I left something out of the timeline, hit me up on LinkedIn or Twitter, and we can continue the conversation there. Or, if you prefer, drop a line to csop@thecyberwire.com. That's CSOP, the at sign, thecyberwire - all one word - .com. And if you have any questions you would like us to answer here at "CSO Perspectives," send a note to the same email address. We will try to address them in the show. Next week, we will be doing a Rick the Tool Man episode on software bill of materials, or SBOMs. You don't want to miss that. 

Rick Howard: The CyberWire's "CSO Perspectives" is edited by John Petrik and executive produced by Peter Kilpe. Our theme song is by Blue Dot Sessions, remixed by the insanely talented Elliott Peltzman, who also does the show's mixing, sound design and original score. And I am Rick Howard. Thanks for listening.