Incident response and cybersecurity first principles.
Rick Howard: On September 9, 2014, the United States Office of Personnel Management, or OPM, fired one of its third-party contractors, a company called USIS, or U.S. Investigations Services. Now, back in 1996, almost 20 years before, Vice President Al Gore was looking for ways to reduce the civilian workforce in the U.S. government. At the time, the Federal Investigative Service conducted all background checks on government employees. Under VP Gore's leadership, the U.S. government decided to privatize that service and spin it off as a private company.
Rick Howard: So fast-forward to 2012. USIS receives about $253 million a year for the contract that they got from OPM, and that is about 67% of OPM's contract spending budget for that fiscal year. But even with that substantial contract, USIS couldn't get that work done, which was about 21,000 background checks a month. USIS leadership decided to cut some corners, and OPM accused them of fabricating or not finishing some 650,000 investigations.
Rick Howard: Now, USIS is the same company that re-cleared Edward Snowden in 2011 for his classified work at the NSA and cleared the Navy Yard shooter who murdered 12 people in 2013, so OPM leadership fired the company. Little did they know that a Chinese hacker group called Deep Panda, aka Shell Crew, aka Deputy Dog, aka X2, had already penetrated this OPM third-party supplier and used them for the greatest public cyber-espionage operation in history, the OPM data breach, and caused the world to witness the worst public incident response operation in modern times.
Rick Howard: My name is Rick Howard. You are listening to "CSO Perspectives," my podcast about the ideas, strategies and technologies that senior security executives wrestle with on a daily basis. This is the ninth show in our series that discusses the development of a general-purpose cybersecurity strategy using the concept of first principles to build a strong and robust infosec program. The shows so far have covered what cybersecurity first-principle thinking is, zero trust, intrusion kill chains, resilience, DevSecOps, risk, cyberthreat intelligence and security operations centers. Today, we are talking about incident response.
Rick Howard: My quick take on NIST's "Computer Security Incident Handling Guide" that they published in August 2012 is that the idea of incident response is not rocket science. It can be complicated, for sure, because you have to coordinate things across the entire organization, but the basic idea is simple. You devise a plan on how to respond to cyber issues. You make sure you can detect an attack. Once discovered, you don't let the adversaries move somewhere else in your network. In other words, you destroy their capability to burrow in undetected somewhere else and to connect back out. Once all that's done, you recover the systems that were affected. And then, finally, you do a post-mortem review to improve the plan for the next time.
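That lifecycle from the NIST guide can be sketched as a simple state machine. This is a minimal illustration, not anything from the NIST document itself; the phase names and transition rules are my own shorthand for the steps just described.

```python
from enum import Enum, auto

class Phase(Enum):
    """Phases loosely following the NIST SP 800-61 incident handling lifecycle."""
    PREPARATION = auto()   # devise the response plan
    DETECTION = auto()     # make sure you can detect an attack
    CONTAINMENT = auto()   # don't let the adversary move elsewhere in the network
    ERADICATION = auto()   # destroy their ability to burrow in and connect back out
    RECOVERY = auto()      # restore the affected systems
    POST_MORTEM = auto()   # review and improve the plan for next time

# Allowed transitions: the cycle runs preparation -> detection ->
# containment -> eradication -> recovery -> post-mortem, and the
# post-mortem feeds the next iteration of the plan.
TRANSITIONS = {
    Phase.PREPARATION: {Phase.DETECTION},
    Phase.DETECTION: {Phase.CONTAINMENT},
    Phase.CONTAINMENT: {Phase.ERADICATION},
    Phase.ERADICATION: {Phase.RECOVERY},
    Phase.RECOVERY: {Phase.POST_MORTEM},
    Phase.POST_MORTEM: {Phase.PREPARATION},
}

def advance(current: Phase, proposed: Phase) -> Phase:
    """Move to the proposed phase only if the plan allows that transition."""
    if proposed not in TRANSITIONS[current]:
        raise ValueError(f"cannot jump from {current.name} to {proposed.name}")
    return proposed
```

The point of modeling it this way is the one the guide makes: you can't skip straight from detection to recovery without first containing and eradicating the intruder.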
Rick Howard: And according to another NIST document, the "Framework for Improving Critical Infrastructure Cybersecurity," your incident response program should have all those things, plus a communications plan about how you will convey the right information to your employees internally and to your customers and stockholders around the world externally.
Rick Howard: From a first-principle angle, though, detection, containment and eradication, and communications are the three key pieces. If we are to reduce the probability of material impact due to a cyber event, we have to accept the fact that sometimes the adversaries will come after us. For those instances, we have to detect the intruder's behavior soon enough so as not to give the adversary time to succeed in their task before we can determine the most advantageous plan to thwart them, and then we need to execute that eradication plan flawlessly.
Rick Howard: But as important as the first two pieces are, the communications plan can make or break the event at the end. Even if you execute the first two perfectly, how you communicate what happened and what you did can materially affect the value of a company in the commercial space and can also potentially severely affect an institution's reputation in the government and academic worlds. For the first two pieces, the bulk of the work is done by the technical teams. For this third piece, though, you need a task force that cuts across the entire organization, from the risk office to the legal office to the marketing and PR office and to the senior leadership team. By the time the event reaches the senior leadership team, you most likely have an outside public relations firm consulting on the communications plan as well. If the event is serious enough, you might have an outside incident response contracting team come in to help, too. And somebody has to manage all of those pieces.
Rick Howard: As network defenders develop the plan, they immediately start thinking in terms of stages of escalation. When something suspicious pops up in the SOC, the escalation team is small, mostly within the infosec team. As they collect more evidence of potential bad news, the escalation team expands to the IT teams and to security leadership. If more bad news comes in, other nontechnical teams start warming up in the bullpen just in case. The crisis task force forms.
Rick Howard: If evidence emerges that an actual intruder is operating in the network or has operated in the network in the past, it is time to warn the senior leadership team and call in the potential outside contracting teams. If the adversary is successful, the senior leadership team needs to decide how and when to execute the communications plan.
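The escalation stages described above can be sketched as a severity ladder, where each level pulls a wider circle of the organization into the response. This is a hypothetical illustration; the level numbers and team names are mine, not from any standard.

```python
# Each severity level widens the circle of teams involved in the response.
SEVERITY_LEVELS = {
    1: ["infosec team"],                      # something suspicious pops up in the SOC
    2: ["infosec team", "IT teams",
        "security leadership"],               # more evidence of potential bad news
    3: ["infosec team", "IT teams",
        "security leadership",
        "crisis task force"],                 # nontechnical teams warm up in the bullpen
    4: ["infosec team", "IT teams",
        "security leadership",
        "crisis task force",
        "senior leadership",
        "outside contractors"],               # an actual intruder is confirmed
}

def teams_to_notify(severity: int) -> list[str]:
    """Return everyone who should be involved at the given severity level."""
    if severity not in SEVERITY_LEVELS:
        raise ValueError(f"unknown severity: {severity}")
    return SEVERITY_LEVELS[severity]
```

Writing the ladder down ahead of time, in whatever form, is the point: during a crisis, nobody should be improvising who gets the next phone call.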
Rick Howard: Even though incident response is not rocket science in concept, executing the incident response plan can get messy quickly. There are a lot of moving parts. With new people coming into and leaving the organization or changing jobs all the time, the chances that not everybody will be on the same page are high. What you don't want is decision-makers at each escalation stage weighing options they've never considered before during a crisis, when stress is high and they have no time for reflection. The best way around that is to conduct crisis exercises a couple of times each year, and they don't have to be that complicated.
Rick Howard: Once you develop the plan, bring the stakeholders into a lunch-and-learn conference room. Offer food. This is very important. That provides the incentive to get people to the meeting who might not want to spend time on a drill. It also helps that the CEO is sponsoring the exercise, too - just a little added incentive. Once you get everybody there, drop your favorite cybersecurity worst-case scenario on the table and walk the group through the plan over lunch.
Rick Howard: Now, I've done many of these kinds of exercises in my career, and every time I thought I knew how the senior executive team was going to react to a particular twist in the scenario, I was wrong, which is what you want. You adjust a plan based on the exercise and plan for the next exercise down the road. And when something happens in the real world, you are ready.
Rick Howard: I'm not saying that you will execute the plan as practiced. As the famous Prussian military commander Helmuth von Moltke said back in the 1800s, no plan of operations reaches with any certainty beyond the first encounter with the enemy's main force, or, if you like, as Mike Tyson more eloquently said, everyone has a plan until they get punched in the mouth. I like that guy. I'm just saying that practicing what you might do gives decision-makers the experience to improvise when the actual event happens.
Rick Howard: For a good example of how to handle the communications plan well, I point to Zoom. When the pandemic began, everybody on the planet started to use the Zoom video conferencing application to host all of their online meetings. The network defender community expressed serious concerns about the newly discovered security issues in the Zoom product. The CEO took immediate steps and told everybody what he was doing. That was a success story. What seemed like a potential disaster at the beginning of the pandemic is a nonstory today. There are still lingering security issues in the Zoom product, but network defenders are, for the most part, giving Zoom a pass because they know or believe that Zoom is working on them. That's how you roll out a crisis to the public.
Rick Howard: For an example of how not to do it, I point to the OPM data breach. From 2012 to 2016, the Chinese government used their own Unit 61398 - aka the Axiom group, aka X1 - and another group called Deep Panda - aka Shell Crew, aka Deputy Dog, aka X2 - to pull off one of the most valuable cyber-espionage campaigns in modern times.
Rick Howard: These two groups successfully exfiltrated 5.6 million electronic fingerprint records as well as personnel files of 4.2 million former and current government employees and security clearance background investigation information on 21.5 million individuals. And this cache wasn't just names and Social Security numbers, either. Besides the fingerprints, the Chinese government got their hands on the SF-86 forms. These are the forms that government employees fill out to get their security clearances. They are required to record everything about their personal lives for the past 10 years - where they lived, who their friends and neighbors were, who they worked for, the citizenships of all their relatives and housemates, foreign contacts and financial interests, foreign travel, psychological and emotional health, illegal drug use and many other matters.
Rick Howard: The impact is that the Chinese government has some kind of leverage on every single U.S. government employee and will have it until those employees age out of government service some 50 to 75 years from now. Hence, the U.S. House of Representatives Committee on Oversight and Government Reform report on the OPM data breach quotes former CIA Director Michael Hayden saying this - quote, "OPM data remains a treasure trove of information that is available to the Chinese until the people represented by the information age off. There is no fixing it," end quote. If you are looking to get your blood moving this weekend, take an hour and thumb through the congressional oversight report on the OPM breach. It made me mad.
Rick Howard: In terms of first-principle cybersecurity thinking before the incident, OPM failed at every philosophical point. They had no concept of reducing the probability of material impact to their organization and to the government at large. What I mean by that is the OPM leadership was in charge of protecting the crown jewels of all government employees, their very sensitive personally identifiable information, or PII.
Rick Howard: Now, stolen government employee PII is even more impactful than stolen commercial or academic PII because it could potentially be used by foreign entities as a weapon to influence the political landscape at a global scale. By all accounts, OPM leadership didn't accept that responsibility, didn't treat that information any differently than anything else on their network and didn't know that they should.
Rick Howard: After constant urging from the inspector general as far back as 2005 - that is seven years before the first Chinese penetration - OPM had deployed no zero trust measures. In fact, they had no security stack deployed at all for the most part. Like other security organizations in government entities, they applied few resources to improving their security posture over the years, let alone attempt to track known adversaries across the intrusion kill chain.
Rick Howard: And then when they finally noticed the penetration two years after the Chinese had successfully broken in, OPM had no incident response game plan to execute. OPM leadership up and down the chain, from the director of IT security operations to the CIO to the OPM director herself, decided it was better to conceal information or downplay its importance to other key players like the inspector general and the House Oversight Committee.
Rick Howard: When the OPM security team finally discovered the evidence that X1 might be in their network, OPM leadership made the classic mistake of choosing to collect more intelligence - in other words, to watch the adversary rather than kicking them out of the network. Let me say it again. With an infosec program that at best could be described as immature, the leadership decided that the next course of action was to leave the intruders in place and just watch them instead of - let me see; how did NIST put it? - oh, yes - eradicate the intruder from the network.
Rick Howard: Now, just to be sure, I looked up what eradicate means on Dictionary.com. It says that eradicate means to remove or destroy utterly, to extirpate, to exterminate. Those are some pretty strong words. And let me check that definition again. Yes, there is no mention of observation in the definition. I am gobsmacked.
Rick Howard: OPM leadership assumed that X1 was a single point of entry when in reality, X2 was already inside another part of the network undetected. And also, the Chinese had infiltrated not one but two of their third-party supply chain contractors. One was USIS, who I mentioned at the top of the show, and the other compromised company was KeyPoint, whom OPM used for government background checks after they fired USIS. Yikes. While OPM was gathering intel, the Chinese were scooping up every bit of PII in the U.S. government.
Rick Howard: In the end, the OPM director, the CIO and the director of IT security operations all were fired or forced to retire. The congressional report on the breach had many suggestions for improvement, and I don't disagree with any of them, but from my perspective, everything OPM leadership did wrong before the breach and during can be boiled down to the atomic fact that they weren't thinking in terms of cybersecurity first principles.
Rick Howard: As I've said many times in this series of podcasts, our goal as network defenders is to reduce the probability of material impact to our organization due to a cyber event using a combination of these eight strategies: zero trust, intrusion kill chains, resilience, DevSecOps, risk, cyberthreat intelligence, security operations centers and now incident response. Reading through the congressional report on the breach, it is clear that OPM's leadership not only didn't implement any of them before the breach, but during the breach, most of their decisions devolved to protecting their jobs and not protecting their organization. This is not how to do incident response.
Rick Howard: You could make an argument that the same precipitating event that caused the creation of the first modern-day security operation centers, the Morris Worm, also caused the need to build incident response teams. It was the early days of the Internet - no AOL, no World Wide Web, no always-on internet connection at your house. If you wanted to connect, you most likely drove into the office at your university or your military base. If you connected from home, you used a dial-up modem over your existing phone line to make the connection to one of the only 60,000 computers on the internet at the time. And just for contrast, some experts estimate that the number of internet-connected devices will reach 75 billion - that's billion with a B - by 2025. In other words, the internet wasn't a thing yet for the masses, but it was vitally important for government and research institutions.
Rick Howard: At the witching hour on 3 November, 1988, I was working late in my Navy housing apartment trying to get a program working for my data structures class at the Naval Postgraduate School in Monterey, Calif. The deadline for the assignment was just three hours away, but I couldn't get my 2,400-baud modem to connect to the university's modem bank, and I was starting to panic.
Rick Howard: Little did I know that just after midnight, a 23-year-old Cornell University graduate student named Robert Tappan Morris would bring the internet to its knees. He had launched the first-ever internet worm. And for days afterward, the internet ceased to function as UNIX wizards of all stripes across the globe worked to eradicate - hey, there's that word again - eradicate the worm from their systems.
Rick Howard: As I mentioned in the security operations center episode, the Morris Worm caused DARPA, or the Defense Advanced Research Projects Agency, which is a science and technology organization of the U.S. Department of Defense, to sponsor Carnegie Mellon University to establish the first CERT/CC, or Computer Emergency Response Team Coordination Center, to manage future cybersecurity emergencies, but it also sparked a discussion in the newly forming network defender space about how to respond to a cyber incident within your organization.
Rick Howard: At the Naval Postgraduate School, where I was during the event, the response consisted of faculty members who could spell UNIX correctly three times out of five running around the hallways with their hair on fire shouting esoteric computer slang at each other like Sendmail, rsh attacks, Telnet and Finger. Perhaps there might be a better way.
Rick Howard: Enter my all-time computer science hero, Dr. Clifford Stoll. Really, if there were baseball cards for computer science giants, my collection would include Grace Hopper, Alan Turing and multiple copies of Dr. Stoll. His book "The Cuckoo's Egg" was one of the first and still is one of the most influential cybersecurity books ever published. One of the reasons his book has remained influential over the last 30 years is that he almost single-handedly invented incident response, and the techniques he developed haven't changed much since.
Rick Howard: Dr. Stoll was an astronomer at the University of California at Berkeley in 1986 - not a security guy by any means, but he was asked to help out in a UNIX lab on campus and to track down an accounting error in the student body computer records. Back then, universities charged their students for computer time. And each month, the sum of the accounting records for all the Berkeley student computer users was off by 75 cents, and nobody could figure out why.
Rick Howard: His investigation to fix the error led to the discovery of the first-ever public cyber-espionage campaign run by the Russians using East German hacker mercenaries to break into U.S. university systems in order to break into U.S. military systems because back then, we didn't really have any security, per se. The internet was basically connected with strings and cans.
Rick Howard: Because of his astronomer background, he treated the entire exercise like a science project. He developed hypotheses, built experiments to test those hypotheses and wrote everything down in a logbook. He published a paper from his log, "Stalking the Wily Hacker," in the journal Communications of the ACM in 1988, which eventually turned into the book he published in 1989.
Rick Howard: If you haven't read this book yet, stop what you're doing right now and get it done. Dr. Stoll is - how would you say it? - eccentric. His kookiness pervades the entire book, and his joy for life is palpable. If you are not a techie, you will love it. I know my wife did. I promise you will be delighted, too. And in the process, you will witness the birth of incident response as a network defender best practice.
Rick Howard: And that's a wrap. If you agree or disagree with anything I've said, hit me up on LinkedIn or Twitter, and we can continue the conversation there. Next week, I've invited our pool of CyberWire's experts to sit around the Hash Table with me to discuss incident response. It should be a blast. You will not want to miss it.
Rick Howard: The CyberWire's "CSO Perspectives" is edited by John Petrik and executive produced by Peter Kilpe. Mix, sound design and original music all done by the insanely talented Elliott Peltzman. And I am Rick Howard. Thanks for listening.