CSO Perspectives (Pro) 9.19.22
Ep 88 | 9.19.22

Risk Forecasting with Bayes Rule: A practical example.

Transcript

Rick Howard: Hey, everybody. Rick here. This is the third and final podcast in my series about how we should reframe the models we use to calculate cybersecurity risk. In the first show, I talked about the book "Superforecasting" by Dr. Tetlock and how it changed my mind that it's possible to forecast the probability of complex questions using some basic techniques like Fermi estimates. Since the beginning of this podcast almost two years ago, I've said that the absolute cybersecurity first principle is reducing the probability of material impact due to a cyber event. If that's true - and I think it is - then all of us have to be able to calculate our organization's current probability of material impact and what it will be when some new thing happens in people, process and technology. I made the case that from the beginning, we've all been too overwhelmed by the problem. The complexity seemed beyond our ability to calculate. And because of that complexity, we punted. We waved our hands at the problem and said it couldn't be done, that there were too many variables, that we'd just have to be satisfied with qualitative estimates like high, medium and low, build our color-coded heatmaps for the board and be done with it. After studying Tetlock's book, I changed my mind. I realized that rough estimates that are in the same ballpark as a detailed calculation are probably good enough to make resource decisions with. In the second show, I explained the mathematical foundation as to why superforecasting techniques work. It comes from Bayes' theorem, developed by Thomas Bayes in the 1700s. I explained his billiard table thought experiment, where an assistant would roll a cue ball onto a billiard table, and a guesser would estimate the cue ball's location based on new evidence. The assistant would roll subsequent balls onto the billiard table and inform the guesser of their locations relative to the cue ball. 
The guesser would revise her prior cue ball estimates based on this new evidence. Essentially, Bayes allows superforecasters to estimate an initial answer, the prior, by whatever means - staff-informed guesses, basic outside-in stats about the industry, etc. - and then adjust the estimate over time as new evidence comes in. In the cybersecurity case, that new evidence comes in the form of how well we're doing in implementing our first-principle strategies. In other words, if the outside-in estimate this year, the prior, that any U.S. company will be materially impacted by a cyberattack is 32%, what would that estimate be for your organization if you have a fully deployed zero-trust program or a robust intrusion kill chain prevention program or a highly efficient and practiced resiliency operation or some combination of all of them? Let's figure it out.
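That updating loop can be shown as a tiny simulation of the billiard-table experiment. The unit-length table, the 1,001-point grid and the 50 rolls are my illustrative choices, not anything from the episode:

```python
import random

random.seed(1)
cue = random.random()                      # true cue-ball position, unknown to the guesser
grid = [i / 1000 for i in range(1001)]     # candidate positions along a unit-length table
posterior = [1.0] * len(grid)              # uniform prior: no idea where the cue ball is

for _ in range(50):                        # roll 50 balls; learn only left/right of the cue
    ball = random.random()
    left = ball < cue
    for i, p in enumerate(grid):
        # P(a uniformly rolled ball lands left of the cue | cue is at p) = p
        posterior[i] *= p if left else (1 - p)

# normalize and take the posterior mean as the guesser's current estimate
total = sum(posterior)
estimate = sum(p * w for p, w in zip(grid, posterior)) / total
print(f"true position {cue:.3f}, estimate {estimate:.3f}")
```

Each roll nudges the estimate, just as each new piece of evidence nudges the forecast in the rest of the episode.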

(SOUNDBITE OF FILM, "JURASSIC PARK") 

Samuel L. Jackson: (As Arnold) Hold on to your butts. 

Rick Howard: My name is Rick Howard, and I'm broadcasting from the CyberWire's secret sanctum sanctorum studios, located underwater somewhere along the Patapsco River near Baltimore Harbor, Md., in the good old US of A. And you're listening to "CSO Perspectives," my podcast about the ideas, strategies and technologies that senior security executives wrestle with on a daily basis. 

(SOUNDBITE OF JAMES HORNER'S "TEACHING MATHEMATICS AGAIN") 

Rick Howard: You're listening to one of the songs from the movie soundtrack "A Beautiful Mind." The movie is about John Forbes Nash, an American mathematician who made fundamental contributions to game theory and the factors that govern chance and decision-making inside complex systems found in everyday life. The song is called "Teaching Mathematics Again," and I thought it was appropriate because today, we're going to do some math. 

(SOUNDBITE OF JAMES HORNER'S "TEACHING MATHEMATICS AGAIN") 

Rick Howard: In order to calculate our first estimate of the probability of material impact to our organization this year, the first question, the prior, we should probably answer is, what is the probability that any company would get hit with a material-impact cyberattack? In this analysis, I'm going to restrict my calculations to U.S. organizations, mainly because U.S. data on breaches is relatively abundant compared to other countries'. You recall from the last show that Enrico Fermi, the Italian American physicist, was famous for his rough estimates of outcomes that were remarkably close to the real answers generated by physical experiments. That's what we're going to do here. Let's start with the FBI's Internet Crime Report of 2021. In that study, the FBI's Internet Crime Complaint Center, the IC3, said that it received just under a million complaints. Now, let's assume that all of those represent material losses. That's probably not true, but let's assume it for now. But the IC3 also estimated that only 15% of organizations actually report their attacks. So how many total should there be? Doing the math, that means that over 5 1/2 million U.S. organizations got hit with a cyberattack that year. That said, my assumption is that there are many reasons why organizations don't report their cyber incidents to the FBI. And the main one might be that the incident didn't turn out to be material. So as a conservative estimate, then, let's assume that only 25% of the potential unreported incidents were material. That number is probably way smaller, too, but it's good enough for now. The number of unreported complaints, then, is equal to what the IC3 expected, that 5 1/2 million, minus the just under 1 million known reported complaints. So doing the subtraction, that number is just over 4 1/2 million. With my assumption that only 25% of the unreported complaints were material, 25% of just over 4 1/2 million is an estimated 1.2 million. 
So the total number of material complaints is the known reported complaints, just under 1 million, plus the estimated unreported complaints, 1.2 million, for a total of just over 2 million. Hold that number in your head for a second. I know listening to math problems on the radio can be mind-numbingly dull and hard to keep track of. If I was listening to this, I'd be snoring by now. 

Unidentified Person: (Snoring) Mi-mi-mi-mi-mi-mi-mi-mi-mi-mi. 

Rick Howard: To help, the companion essay for this podcast that you can find on the CyberWire website has all these basic math calculations, so you can follow along if you want to. You've been warned because we got more math to do. 
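For readers following along, the arithmetic so far fits in a few lines. The 847,376 figure is the published 2021 IC3 complaint count ("just under a million"); the 15% reporting rate and the 25% materiality rate are the episode's assumptions:

```python
reported = 847_376                # IC3 complaints received in 2021
report_rate = 0.15                # IC3 estimate: only 15% of victims report
material_unreported_rate = 0.25   # assumption: 25% of unreported incidents were material

total_incidents = reported / report_rate             # over 5 1/2 million
unreported = total_incidents - reported              # just over 4 1/2 million
material = reported + material_unreported_rate * unreported
print(f"{material:,.0f} estimated material incidents")  # just over 2 million
```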

Rick Howard: I'm assuming in this analysis that no organization is getting hit twice in the same year. That's probably not true either, but for now, let's roll with it. Let's also assume that any nation-state attacks that caused material damage will be included in the IC3 stats. The question then arises about how many organizations exist in the United States that could potentially report to the IC3. We know from stats published by the U.S. Census Bureau in 2019 that the United States has 6.1 million registered companies. Employee sizes for that group ranged from five to over 500. For the moment, we'll assume that employee size doesn't matter in our forecast. We know that's probably not true either, but we'll list it as an assumption and look for data later that will inform that assumption one way or the other. We'll also assume that the number includes NGOs, or non-governmental organizations. 

Rick Howard: Further, according to the National Center for Education Statistics, in 2020, there were 129,000 total schools across public and private pre-kindergarten, elementary, middle, secondary, post-secondary and other categories. For the post-secondary schools, that's a mix of four-year and two-year programs of various student sizes. The elementary schools are a mix of student sizes, too. We will also assume that student body size doesn't matter for this forecast. Interestingly, we don't really have an official number of sanctioned federal government entities. According to Clyde Wayne Crews at Forbes in 2021, there is no official, authoritative list maintained by anyone. In other words, no single U.S. federal government entity is officially tasked with keeping track of all the other federal agencies. 

Unidentified Person: Oh, no. 

Rick Howard: I know. That sounds crazy, but apparently it's so. He lists eight different reports, from the Administrative Conference of the United States to the Federal Register's agency list, that estimate the number of government agencies at anywhere from 61 to 443, depending on how they count. So let's take the average - 274 - as a starting point. Finally, from the U.S. Census Bureau, in 2017, 90,000 local governments existed in the United States. Assume that the size of the local government doesn't matter for this forecast either. To summarize, then, within the United States, there are 6.1 million registered companies, 129,000 schools, 274 federal government agencies and 90,000 local government organizations - state, city, county, et cetera - for a total of 6.3 million U.S. organizations, all of which could register a material report to the FBI's IC3. With our assumption that 2 million organizations should have reported to the IC3 in 2021, that's roughly - drum roll, please... 

(SOUNDBITE OF DRUMROLL, FANFARE) 

Rick Howard: A 32% chance that any officially recognized organization in the United States could have had a material cyberattack that year. That's the estimated 2 million material incidents divided by 6.3 million total organizations - 32%, or almost a one-in-three chance. But before we call that our official Bayesian prior, let's check our assumptions. 
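The division behind that 32%, using the unrounded material-incident count and the organization totals cited in the episode:

```python
material_incidents = 2_047_825   # from the IC3 Fermi estimate ("just over 2 million")
companies = 6_100_000            # U.S. Census Bureau, 2019
schools = 129_000                # National Center for Education Statistics, 2020
federal_agencies = 274           # average across the eight reports cited
local_governments = 90_000       # U.S. Census Bureau, 2017

organizations = companies + schools + federal_agencies + local_governments
prior = material_incidents / organizations
print(f"{prior:.0%} chance of a material cyberattack")  # 32%
```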

Rick Howard: No. 1, all of the just under a million complaints to the IC3 were material. No. 2, only 25% of the estimated unreported incidents were material. No. 3, any nation-state attacks that caused material damage would be included in the IC3 stats. No. 4, no company got hit more than once in any given year. No. 5, the number of employees or students of an organization doesn't matter for the forecast. No. 6, the total number of companies listed by the U.S. Census Bureau includes NGOs. And No. 7, the average - 274 - of existing federal organizations, taken from eight different reports, is close enough. 

Rick Howard: Those are some big assumptions. But I would argue that for this first estimate, this first Bayesian prior, it's probably good enough. This is us rolling the cue ball onto the billiard table and making a first guess as to where it is. Using Fermi's outside-in forecast, a technique used by Dr. Tetlock's superforecasters and described in his book, for any organization in the United States, the probability of material impact due to a cyber incident this year is 32%, almost a one-in-three chance. Let me say that again. In the general case, for any United States organization, there is roughly a one-in-three chance of experiencing a material cyber event this year. 

Rick Howard: I hear what you're saying. That's all great and fine, Rick, but I'm special. I work for a small startup making concrete. There is no way that there's a 32% chance that the company will be materially impacted by a cyber event this year. It must be way lower than that. Or I work for a Fortune 1000 company. There is no way that there is only a 32% chance. It has to be much bigger than that. Your 32% chance has no meaning to me. It doesn't help me at all. Well, OK. But remember, that first prior is just the assistant rolling the cue ball under the table and asking us to make a first estimate about its placement. 

Rick Howard: The next thing we're going to do is check our assumptions. We'll be looking to collect new evidence about those assumptions and adjust our 32% forecast up or down, depending on where the evidence leads us. For example, if we found some time in the future that my assumption about unreported material events in the IC3 report was 10% versus 25%, we would adjust the probability down. On the other hand, if we found that the actual number of federal organizations was really 80 versus the average 274, then we would adjust the probability up, just like Tetlock superforecasters do on a regular basis. So keep your eye on your assumptions. 

Rick Howard: The next step is to continue to collect new evidence. In other words, we're going to roll more balls under the billiard table. Two research reports published by the Cyentia Institute will help us in this round. They're called "Information Risk Insights Study: A Clearer Vision for Assessing the Risk of Cyber Incidents" and "IRIS Risk Retina - Data for Cyber Risk Quantification." Cyentia is spelled C-Y-E-N-T-I-A. I talked to Wade Baker, one of the co-founders, and he admits that the name of the company is hard to pronounce. 

Wade Baker: Cyentia. It's a little bit weird. Yeah. We have a shirt that has various pronunciations in it, and then at the bottom of it just says, however you say it, it means good research. So we just go with it. 

Rick Howard: Fun fact - you may remember Wade. He was one of the original founders of the Verizon Data Breach Investigations Report, or DBIR, a highly respected annual survey that Verizon has been publishing for some 15 years. From their latest 2022 report, quote, "the DBIR was created to provide a place for security practitioners to look for data-driven, real-world views of what commonly befalls companies with regard to cybercrime," end quote. Now, I've been thinking about how to calculate cyber risk for a while, and these two Cyentia reports are the closest thing I have found that matches my thinking around superforecasters, Bayesian philosophy and Fermi estimates. In the first paper, Cyentia partnered with Advisen, a Zywave company, which provided the breach data set for Fortune 1000 companies over the past decade. Here is David Severski, one of the paper's authors, and Wade, explaining how Advisen gets its data. 

David Severski: So we were delighted to partner with Advisen, now part of Zywave, for their commercial cyber data loss breach feed, which is, in my experience, the largest, most comprehensive source of verifiable, publicly identifiable breaches. They're going out and they're collecting information from Freedom of Information Act requests, through news queries, through looking at state departments, attorneys general, et cetera, to say, how much public information can we find about breaches, and how much of that information can we verify? As I mentioned, it's not complete, but it is much more complete than anything else I've seen out there. 

Wade Baker: And used heavily in the insurance community. You know, that's where they have most exposure and that sort of actuarial data set, if you will. 

Rick Howard: So I have high confidence in the data set, since it's public knowledge who all the Fortune 1000 companies are, and because of compliance requirements, the data breach reporting is robust. The first finding that is important to our study is that for the past five years, just under one in four Fortune 1000 companies has been hit each year by a material cyber event. That number is slightly lower than our first Bayesian prior of one in three. But Cyentia pulled their analysis apart by looking at the odds of ranked quartiles. In other words, they looked at the odds for the top 250 firms, then the next 250, et cetera. 

Rick Howard: It turns out that if your company is in the Fortune 250, you are five times as likely to have a material breach as a company in the bottom 250. 

Unidentified Person: Oh, no. 

Rick Howard: From their report, a Fortune 250 company has a one-in-two chance; between 251 and 500, a one-in-three chance; between 501 and 750, a one-in-five chance; and between 751 and 1,000, a one-in-10 chance. They also calculated different probabilities for different loss scenarios. They used a graph called a loss exceedance curve, which, according to Bryan Smith at the FAIR Institute, quote, "is a way to visualize the probability of the loss exceeding a certain amount. The X axis plots the annualized loss exposure for the given risk scenario considered in the analysis. The Y axis plots the probability of a loss being greater than the intersection with the X axis, from 0 to 100%," end quote. What that means is that there is a different probability for each value of loss. From the Cyentia report - a 25% chance of any loss whatsoever, a 14% chance of losing 10 million or more, and a 6% chance of losing 100 million or more. This is important when it comes to risk tolerance. For some Fortune 1000 companies, a 7 in 50 chance of losing 10 million is an acceptable risk. For a handful of them, that's just couch cushion money. For others, though, that 14% chance of losing 10 million might be too much to bear compared to all the other risks their leadership team is trying to avoid. The reason to use loss exceedance curves is to give the leadership the option to choose. When we were using qualitative heatmaps with our high, medium and low assessments, there was no way for company leadership to evaluate whether or not the risk was within their tolerance. Loss exceedance curves give them a visual reference of where their tolerance falls. 
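To make the loss exceedance curve concrete, here's a minimal sketch that estimates exceedance probabilities from simulated annual losses. The lognormal shape and its parameters are illustrative assumptions on my part, not figures from the Cyentia report:

```python
import random

random.seed(0)
# hypothetical annual loss outcomes for one risk scenario
losses = [random.lognormvariate(15, 2.5) for _ in range(100_000)]

def exceedance_prob(losses, threshold):
    """One point on the loss exceedance curve: P(annual loss >= threshold)."""
    return sum(1 for x in losses if x >= threshold) / len(losses)

for threshold in (10_000_000, 100_000_000):
    print(f"P(loss >= ${threshold:,}) = {exceedance_prob(losses, threshold):.1%}")
```

Sweeping the threshold across the X axis traces the whole curve; leadership then reads off the probability at whatever loss amount they consider material.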

Rick Howard: Cyentia then combined three data sets - from Advisen, Dun & Bradstreet and the U.S. Census Bureau - covering breaches reported for all companies in the United States, not just the Fortune 1000. But Wade and David admit in the report that, compared to the Fortune 1000 data set, this data set is not as robust. Still, they have high confidence in it being the best available. The report has a section where they forecast the probability of a material breach for each commercial sector, like construction, agriculture, trade, etc. They conclude that there is a less than 1 in 100 chance for any company, regardless of sector, to have a material breach this year. But they caveat it greatly. In an email conversation with Wade, he said that, quote, "Since each sector is composed of mostly smaller firms, it pulls the typical probability down dramatically," end quote. I'll say. The contrast between Cyentia's 1% and my IC3 forecast of 32% is quite large. 

Rick Howard: But Wade says that the more accurate forecast comes from the size of the organization, not the sector. In the report, they show quite the large probability gap among revenue groupings. When a company has less than 1 billion in annual revenues, where most of us live, it has a less than 2% chance of having a material breach; between 1 billion and 10 billion, a 10% chance; between 10 billion and 100 billion, a 23% chance; and finally, greater than 100 billion in revenue, a whopping 75% chance. But Wade and David also point out that larger organizations are more likely to report a breach - over 1,000 times more likely, compared to small - less than $10 million in revenue - businesses. So the probabilities are probably skewed in the bigger-company direction, which raises the question, how do you incorporate this data into your forecast? How do you reconcile the prior IC3 forecast of 32% with this report? 
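The revenue groupings just quoted can be captured in a small lookup. The probabilities are the ones read out in the episode; the exact dollar cut points are my interpretation of the tiers:

```python
def breach_probability_by_revenue(annual_revenue_usd):
    """Rough outside-in breach probability by revenue tier, per the episode's
    reading of the Cyentia figures (treat the cut points as approximate)."""
    if annual_revenue_usd < 1e9:      # under $1 billion, "where most of us live"
        return 0.02
    if annual_revenue_usd < 1e10:     # $1 billion to $10 billion
        return 0.10
    if annual_revenue_usd < 1e11:     # $10 billion to $100 billion
        return 0.23
    return 0.75                       # greater than $100 billion

print(breach_probability_by_revenue(116_000_000))  # Marvel Studios' tier
```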

Rick Howard: Well, first things first - if you're working for a Fortune 1000 company, I would throw out the generic forecast that I just did from the FBI's reporting. Cyentia's report for Fortune 1000 companies is way more precise, and the data set is so robust that I feel confident that those forecasts are more accurate for Fortune 1000 companies than my generic forecast for any and all companies using FBI data. Also, the second Cyentia report I mentioned called "IRIS Risk Retina - Data for Cyber Risk Quantification" is all about nonprofits. If I was working for a nonprofit, I would use that report to establish my prior. 

Rick Howard: But if you don't work for a Fortune 1000 company or a nonprofit - say you're Marvel Studios, for example - how do you absorb this new data about revenue size into your forecast? We could throw all of this into a Bayesian computer program and do the math, but we're doing Fermi estimates here. They will likely be good enough. According to Zippia, a company that tracks analytics about companies, Marvel Studios made almost 116 million in revenue in 2021. That puts it in the less than 1 billion in annual revenues group, where most of us live. According to Cyentia, that type of company has a less than 2% chance of having a material breach. 

Rick Howard: Now, that's a big gap compared to my IC3 prior of 32%. So does that make you want to reduce the prior or increase it? Since the Cyentia forecast is lower than my IC3 forecast, logic says that I would lower it, but by how much? Do you lower it all the way down to 2%? Well, you could. If you feel that the Cyentia report is so strong that it overwhelms the IC3 analysis, like it did for the Fortune 1000 companies or the nonprofits, you could absolutely do that. But the authors of that analysis say in the report that the data is not as robust as the Fortune 1000 data. And I like my IC3 analysis. I feel confident in it. Remember, the concept behind Bayes is that it's a measure of your belief, your personal confidence. For me, then, it's not a complete replacement. I would just adjust the IC3 prior of 32% down some - say to 15% - and start looking for more evidence to help support the change. One technique used by Tetlock's superforecasters when making these adjustments is asking themselves how confident they are in the change. In their minds, they want to be at least 95% confident that the adjustment is correct - not 100%, but mostly. Now, I know that's an abstract way to think about it. How can you be 95% confident about something? How would you rate the difference between 95% and, say, 85%? I know I can't do that. One trick they use is to ask themselves to make a bet. Would they bet $100 that this adjustment was correct? A bet implies some risk. You may be mostly sure about something when you make a bet, but you're not 100% sure. So if you're so positive about your adjustment that you're willing to bet $100 on it, that's a good approximation for being 95% confident. If you're not, back the adjustment off a point or two. With my new prior of 15%, I wouldn't bet $100 of my own money that 15% is the correct number. So what about 17%? OK. I'd bet $100 on that. To recap, then, I used two different frequentist data sets. 
I used the FBI IC3 data and some Fermi estimations to find the initial prior. I then used the Cyentia report to make an adjustment to the initial forecast. The bottom line is that for Marvel Studios, I'm forecasting the probability of material impact this year as 17%, or just under a 1 in 5 chance. Now, it's a gut call. But remember also that this is still outside-in analysis, a Fermi prediction. This forecast has nothing to do with Marvel Studios' actual defensive posture, inside out. In other words, it doesn't take into consideration any defensive measures that Marvel Studios has deployed to strengthen its posture in terms of cybersecurity first principles. We'll look at that next. 
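One way to make that gut-call adjustment explicit - my framing, not a method the episode prescribes - is a confidence-weighted average of the two outside-in estimates:

```python
def blend(prior, evidence, weight_on_prior):
    """Confidence-weighted average of two probability estimates (both 0..1)."""
    return weight_on_prior * prior + (1 - weight_on_prior) * evidence

# trusting the IC3 prior (32%) and the Cyentia small-firm estimate (2%)
# equally happens to land right at the episode's final 17%
adjusted = blend(0.32, 0.02, weight_on_prior=0.5)
print(f"{adjusted:.0%}")  # 17%
```

The $100-bet test is then a way of sanity-checking whatever weight you chose.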

Rick Howard: With outside-in analysis, I have demonstrated how network defenders can take an initial estimate and adjust it as new evidence comes in. We took the IC3 prior and adjusted it with the Cyentia data. We can repeat that process now with the inside-out analysis. In other words, we can use our outside-in forecast as the new prior, estimate how well we have deployed each of our first-principle strategies in turn, and adjust the prior up or down based on that new evidence. That means we have to assume some things. Let's assume that if we fully deploy each of our first-principle strategies, then the impact is a reduction of risk probability to our organization by some amount. For the sake of argument, let's assume these values - for zero trust, about 10%; for intrusion kill chain prevention, another 10%; for resilience, about 15%; and for automation, just 5%. Now, these are best guesses - Fermi guesses on my part - and that's why they're assumptions. You might use different numbers, and that's perfectly fine. Over time, the superforecaster in me will look for new evidence that will validate or invalidate those values. But for now, the Fermi analyst in me says they're close enough. And remember, in this model, you only get the full probability reduction if you have completely deployed the strategy. Most network defenders, even those that work for robust security organizations, don't have any of these strategies fully deployed. 

Rick Howard: To see how this works, let's analyze a company through this first-principle lens. My good friend Todd Inskeep, a veteran security executive, has been a fan of "CSO Perspectives" and providing me feedback since we started some two years ago. He suggested we use the Contoso Corporation as the case study. For those that don't know, the Contoso Corporation is an imaginary company that Microsoft uses to explain to potential customers about how to deploy its set of products. They explain that the company is, quote, "a fictional but representative global manufacturing conglomerate with its headquarters in Paris." Think Fujitsu but French. Since Microsoft analysts have put a lot of work into the backstory of how the Contoso Corporation is architecturally deployed, I don't have to make up one myself that has enough detail to be useful. Further, I don't have to pick on a real company like Marvel Studios for this analysis. So let me hit the high points for how the Contoso Corporation operates. There's a link in the show notes for anybody that wants to look into the details later. The Paris office has 25,000 employees. And each regional office, of which there are many, has about 2,000 employees each. It has a large sales and support organization for more than 100,000 products. It has an annual revenue of 35 billion, similar to Fujitsu, but is not a Fortune 1000 company or a nonprofit organization. For network architecture, Contoso uses Microsoft 365 for office applications - you know, email, word processing, spreadsheets, etc. It is currently transitioning from data center operations to cloud-based operations but is probably years away from completing the transition. Customers use their Microsoft, Facebook or Google email accounts to sign in to the company's public website. And vendors and partners use their LinkedIn, Salesforce and Google email accounts to sign in to the company's partners' extranet. 
It has deployed an SD-WAN to optimize its connectivity to Microsoft services in the cloud, and it has deployed regional application servers that synchronize with the centralized Paris campus data centers. For zero trust, Contoso uses an on-premises Active Directory forest for authentication to Microsoft 365 cloud resources with password hash synchronization, but it also uses third-party tools in the cloud for federation services. It has deployed special authorization rules for senior leadership, executive staff and specific users in the finance, legal and research departments who have access to highly regulated data. It collects system, application and driver data from devices for analysis and can automatically block access or patch with suggested fixes. It requires multifactor authentication for its sensitive data and categorizes data into three levels of access. It deploys data loss protection services for Exchange Online, SharePoint and OneDrive. It designates specific people to execute global system administrator changes, and those administrators only receive time-based temporary passwords through its Active Directory Privileged Identity Management system. For resilience, Contoso's data is encrypted at rest and available only to authenticated users. 

Rick Howard: And finally, for intrusion kill chain prevention, Contoso uses Microsoft Defender Antivirus on the endpoint. Since the Contoso Corporation is a global manufacturing conglomerate and not an entertainment company like Marvel, we need to start over with our outside-in Fermi estimate using the FBI's IC3 data. Our first prior, then, is 32%. But according to Cyentia, there is a 23% chance that Contoso, with an annual revenue of 35 billion, will be impacted by a material breach this year, just over a 1 in 5 chance. The question, then, is how far down do you adjust the 32% prior with this new information? I still have high confidence in my own IC3 outside-in analysis, but I have less confidence in the Cyentia data, with all the caveats I have already explained. But it's still a good forecast. I would bet $100 of my own money that the actual probability of material impact is a good five points below my generic prior. 

Rick Howard: So let's set the prior to 27%. Using 27% as our current prior, the next step of incorporating new evidence - more balls on the billiard table - is to assess how well the Contoso Corporation is doing in implementing our cybersecurity first-principle strategies. Based on how well or poorly each strategy is deployed, we'll adjust our forecast up or down. For zero trust, I'm giving them an 8% reduction out of a possible 10%. The Contoso Corporation as described has a strong identity and access management program that consists of identity governance and administration, privileged identity management and privileged access management. They provide their customers, contractors and employees with single sign-on capability and multifactor authentication for sensitive data. For vulnerability management, they have a strong program for Microsoft products, but it's a lot less strong for any third-party applications. And there is no mention of a software bill of materials program anywhere. But they do track devices, applications and operating system patch levels for Microsoft products. 

Rick Howard: And finally, there is no discussion of a software-defined perimeter. With all of that, the Contoso Corporation is well along its zero-trust journey. They still have a ways to go, but it's mature. Good for them. That's not the case for their intrusion kill chain prevention program. I'm giving them a 1% reduction out of a possible 10%. The Contoso Corporation doesn't really think about specific adversary tactics. It has a security stack of mostly Microsoft security products, and it has the capability to deliver telemetry from the stack to a security operations center, but there is no specific mention that Contoso has a SOC, an intelligence group, a red-blue-purple team or a desire to share adversary playbook intelligence with its peers. I'm giving them a 1% reduction only because Contoso uses Microsoft Defender Antivirus for automatic endpoint protection from malware. But really, they have no intrusion kill chain prevention program. 

Rick Howard: For resilience, the story is not much better. I'm giving them a 1% reduction out of a possible 15%. The Contoso Corporation does have a healthy encryption program that works with its multilevel zero trust program. That said, I found no mention of any crisis planning, backup programs, incident response capability or even the beginnings of a chaos engineering capability. The Contoso Corporation might well deflect an inexperienced ransomware crew, but any attack from a professional crew will likely materially impact it. Lastly, for automation, I'm giving them a 0% reduction out of a possible 5%. The Contoso Corporation doesn't mention anything about its site reliability engineering practices, its DevSecOps program or even its agile development program. It mentions nothing about securing its own code, or even trying to track the components it's using from open source. In other words, the security team is getting no benefit from automation that I can see. 

Rick Howard: So adding it all up, they get a significant reduction because of their robust zero trust program but hardly any reduction at all for their intrusion kill chain prevention, resilience and automation programs. The total reduction, then, is 10 points off the 27% Bayesian prior. In other words, I would bet $100 that the Contoso Corporation has a 17% chance of being materially impacted by a cyberattack this year, just under a 1 in 5 chance. 
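The Contoso tally, using the episode's numbers in percentage points (reduction earned out of the possible maximum for each strategy):

```python
prior = 27  # outside-in prior for Contoso after the Cyentia adjustment

# (points earned, points possible) per first-principle strategy
reductions = {
    "zero trust":                      (8, 10),
    "intrusion kill chain prevention": (1, 10),
    "resilience":                      (1, 15),
    "automation":                      (0, 5),
}

total_reduction = sum(earned for earned, _possible in reductions.values())
forecast = prior - total_reduction
print(f"{forecast}% chance of material impact this year")  # 17%
```

Swapping in your own assessed values per strategy gives the same calculation for your organization.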

Rick Howard: That's it. That's my recommendation about how to forecast cyber risk for your organization. Using superforecasting techniques and specifically Fermi estimates, you forecast an outside-in estimate for the general case. Then you adjust that Bayesian prior up or down using inside-out analysis, based on how well you deploy our first-principles strategies. Because of that work for the Contoso Corporation, I forecast that it has just under a 1 in 5 chance of being materially impacted this year by a cyberattack. So what do you do with that information now that we have it? Well, if I were the Contoso CSO, there are several next steps to consider, assumptions to validate. The first thing to do is to confirm the dollar amount of what is material for the company. With annual revenues of $35 billion, is a $10 million loss material? $100 million? Something bigger or maybe something smaller? And how do you determine that number? Well, that would be several one-on-one conversations with the CFO, the CEO and perhaps key members of the board. And by the way, that number will likely change over time as the fortunes of the company go up and down. Make sure you're checking in with senior leadership annually to confirm the number. I would definitely take the Cyentia loss exceedance curve for Fortune 1000 companies as a baseline, as another prior - find the value in the curve that corresponds to a material event and adjust my forecast up or down accordingly. For example, Cyentia says that for Fortune 1000 companies, there is a 14% chance of losing $10 million or more. If $10 million is the Contoso indicator for materiality, I would adjust our current prior of 17% down one or two points to 15%, a 3 in 20 chance. 
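The materiality check just described can be sketched the same way. Note that only the $10 million / 14% pair comes from the Cyentia figure cited in the episode; the other points on the curve below are made-up placeholders, there only to show what a loss exceedance lookup looks like.

```python
# Using a loss exceedance curve as a second prior. Only the $10M/14%
# point is from the Cyentia data cited in the episode; the other curve
# points are illustrative placeholders.
loss_exceedance = {
    1_000_000: 0.25,     # placeholder
    10_000_000: 0.14,    # Cyentia: Fortune 1000 chance of a $10M+ loss
    100_000_000: 0.06,   # placeholder
}

materiality = 10_000_000              # confirmed with the CFO, CEO and board
baseline = loss_exceedance[materiality]

inside_out_forecast = 0.17
# The episode nudges the forecast a couple of points toward the baseline.
adjusted = inside_out_forecast - 0.02  # 0.15, a 3 in 20 chance
print(f"Adjusted forecast: {adjusted:.0%}")   # 15%
```

If the materiality number changes after the annual check-in with senior leadership, the lookup changes with it, and the forecast gets re-anchored against the new baseline.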

Rick Howard: The next step is to determine if the current forecast is within the risk tolerance of the leadership chain. If it is, if they think that a 3 in 20 chance is an acceptable risk to the business, then nothing needs to be done here in terms of significant new investment in people, process and technology. The infosec team needs to maintain and perhaps become more efficient in executing its zero-trust, intrusion kill chain, resilience and automation tactics. But we're not going to roll out some big, new initiative here. 

Rick Howard: On the other hand, if senior leadership is uncomfortable with the 3 in 20 chance and demands that they get it under 10%, or a 1 in 10 chance, I have some planning to do. I would look at resilience first. Contoso's resilience plan is weak, and some improvements in basic, meat-and-potatoes IT functionality like automated backups, restoration practice, crisis planning and incident response could significantly reduce their risk compared to the other first principle strategies that might cost a lot more to implement. After all, getting good at intrusion kill chain prevention is not cheap. That said, let's not forget to keep track of the cost for reducing risk to below 10%. If the spend to accomplish that task is greater than the $10 million loss we were trying to prevent, perhaps we should go back to the drawing board and come up with a cheaper plan. 
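That final sanity check - don't spend more reducing the risk than the loss you're trying to prevent - is a one-line comparison. The $10 million figure is the episode's materiality threshold; the plan cost below is a hypothetical number for illustration.

```python
# Back-of-the-envelope guardrail from the discussion above: compare the
# cost of the risk-reduction plan to the material loss it is meant to
# prevent. The plan cost is a hypothetical figure.
material_loss = 10_000_000          # the loss we are trying to prevent
resilience_plan_cost = 3_000_000    # assumed: backups, IR, crisis planning

plan_is_sensible = resilience_plan_cost < material_loss
print("Proceed with the plan" if plan_is_sensible
      else "Back to the drawing board: the fix costs more than the loss")
```

A cheap resilience plan that clears this bar easily is exactly why the episode recommends looking there first, before pricier intrusion kill chain investments.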

Rick Howard: And there you have it. When we come back, we'll wrap up this three-episode series on risk forecasting. 

Rick Howard: I've been thinking about finding a better way to convey cyber risk to the board for a long time, almost a decade. I kept struggling with my lack of knowledge about statistics and kept trying to rely on the frequentist view that I needed more data, that I needed to count all of the things. But I knew deep down that this wasn't the path, that there had to be a better way. Dr. Tetlock's book on superforecasting opened my mind to the idea that infosec professionals didn't need precision answers to make resource decisions about security improvement. We can make good enough estimates - Fermi estimates, back-of-the-envelope estimates - that would take less time, and the answers would be close enough to be impactful. 

Rick Howard: And then I learned that Bayes' rule was the mathematical foundation that explained why superforecasting techniques worked. Working through the examples in this show for Marvel Studios and the Contoso company, you may feel queasy that I'm basing cyber risk forecasts for multimillion-dollar companies on Kentucky windage. I get it. It's tough to let go of the frequentist mindset. But I would just remind you that way smarter people than you and I - like Alan Turing, for example - used these techniques to solve more complex problems than calculating cyber risk. Maybe you should try it. Besides, the old way of collecting all the data and using qualitative heatmaps hasn't really worked since we started doing it some 20 years ago. Perhaps it's time to consider a change. 

Rick Howard: And that's a wrap. As always, if you agree or disagree with anything I have said, hit me up on LinkedIn or Twitter, and we can continue the conversation there. Or if you prefer, email, drop a line to csop@thecyberwire.com. That's CSOP, the at sign, thecyberwire - all one word - .com. And if you have any questions you would like us to answer here at "CSO Perspectives," send a note to the same email address, and we will try to address them in the show. For next week's show, I'm going to interview Wade Baker and David Severski about the two papers that I mentioned in this episode. You don't want to miss that. 

William Dozier: Same bat-time, same bat-channel. 

Rick Howard: The CyberWire's "CSO Perspectives" is edited by John Petrik and executive produced by Peter Kilpe. Our theme song is by Blue Dot Sessions, remixed by the insanely talented Elliott Peltzman, who also does the show's mixing, sound design and original score. And I am Rick Howard. Thanks for listening.