CSO Perspectives is a weekly column and podcast where Rick Howard discusses the ideas, strategies and technologies that senior cybersecurity executives wrestle with on a daily basis.
Infosec teams and risk assessment.
Out of all the capabilities in the infosec community that have improved over the years, the one essential skill that hasn’t moved forward is calculating risk. Specifically, how do we convey risk to senior leadership and to the board?
In my early network defender days, whenever somebody asked me to do a risk assessment, I would punt. I would roll out my "qualitative heat map" (a fancy name for a color-coded spreadsheet where all the risks are listed on the x-axis and my three levels of potential impact, high, medium, and low, are plotted on the y-axis) and call it a day. Along with many of my peers, I would tell myself that predicting cyber risk with any more precision was impossible; that there were too many variables; that cybersecurity was somehow different from every other discipline in the world and it couldn't be done.
We were wrong of course.
The Cybersecurity Canon Project is full of Hall-of-Fame and candidate books that talk about how to calculate cyber risk with precision:
- "How to Measure Anything in Cybersecurity Risk," by Douglas W. Hubbard and Richard Seiersen
- "Measuring and Managing Information Risk: A Fair Approach," by Jack Freund and Jack Jones
- "Security Metrics: A Beginner’s Guide," by Caroline Wong
- "Security Metrics: Replacing Fear, Uncertainty, and Doubt," by Andrew Jaquith
These are all great primers on how to think differently about precision probability forecasting, and I highly recommend them. If this subject is new to you, they will all change your current view of the world. But my problem with all of them is that I kept waiting for the chapter at the end entitled "And Here's How to Do It," or better, "Building the Risk Chart That You Take to the Board." None had it or anything close. That part was always left as an exercise for the reader.
The book that finally convinced me it could be done, though, was "Superforecasting: The Art and Science of Prediction," by Philip Tetlock and Dan Gardner, another Cybersecurity Canon Project Hall-of-Fame candidate book. Dr. Tetlock is quite the character. He's one of those scream-and-shake-your-raised-fist-at-the-TV-because-they-have-no-idea-what-they-are-talking-about people. He would watch news networks like CNN, FOX, and MSNBC, where the hosts would roll out famous pundits to give their opinion on some topic because, once in their lives, they'd predicted something correctly. It didn't matter that all the predictions they'd made since were wrong. The networks would still bring them on as if they were Moses coming down from Mount Sinai to present the tablets. Dr. Tetlock thought that they should have to keep score. I always thought that when pundits came on, the viewer should see their batting average rolling across the chyron on the bottom of the screen: "These pundits have made 3 correct predictions out of 100 tries in the last year. Maybe you shouldn't listen too closely to what they have to say."
And then Dr. Tetlock decided to test his idea. Working with IARPA (the Intelligence Advanced Research Projects Activity), he devised a forecasting tournament involving three groups: the intelligence community, the academic community, and a group I call the Geezers-on-the-Go. Now, the Geezers-on-the-Go were not all old people; they were just regular people with time on their hands who liked to solve puzzles. According to the Washington Post, Tetlock had them forecast answers to over 500 really hard questions like:
- Will the Syrian President, Bashar Hafez al-Assad, still be in power in six months' time?
- Will there be a military exchange in the South China Sea in the next year?
- Will the number of terrorist attacks sponsored by Iran increase within one year of the removal of sanctions?
Out of the three communities, the Geezers-on-the-Go outperformed the control group by 60%. They beat the academic teams by 30% to 70% depending on the school (MIT and the University of Michigan were two), and they outperformed the intelligence groups that had access to classified information. But Tetlock also discovered a subset of the Geezers-on-the-Go: the superforecasters. By the end of the four-year tournament, these superforecasters had outperformed the rest of the Geezers-on-the-Go by another 60% and could also see further out than the control group. "Superforecasters looking out three hundred days were more accurate than regular forecasters looking out one hundred days."
Superforecaster superpowers.
And these superforecasters don't have extreme mutant abilities, either. They are intelligent for sure, but not overly so. This isn't a collection of Professor Xs from the X-Men comic books. They aren't all card-carrying members of Mensa, and they aren't math nerds, either. Most of them perform only rudimentary math calculations when they make their forecasts. But, by following a few guidelines, they can outperform the random Kentucky-windage guesses of normal people like me.
1: Forecast in terms of quantitative probabilities, not qualitative high-medium-lows: Get rid of the heat maps. Embrace the idea that probabilities are nothing more than a measure of uncertainty. But also understand that just because the probability that something will happen is 70% doesn't mean it's a lock (see Secretary Clinton in the 2016 U.S. Presidential campaign).
2: Practice: Do a lot of forecasts and keep score using something called the Brier score (invented by Glenn W. Brier in 1950). The score captures two things: calibration and resolution. Calibration is how closely your stated probabilities track reality over many forecasts (when you say 70%, it should happen about 70% of the time; are you overconfident or underconfident?). Resolution rewards decisive forecasts, calls closer to 0% or 100%, that turn out to be right. (A minimal scoring sketch in code follows this list.)
3: Embrace Fermi estimates (outside-in first, then inside-out forecasts).
Outside-in means looking at the general case before you look at the specific situation. In cybersecurity terms, the outside-in view considers the probability that any organization would get hit by, say, a ransomware attack. The inside-out view considers the probability that ransomware criminals will cause a material impact to your organization specifically. Both have merit, but Tetlock says to start with the outside-in forecast and then adjust up or down from there with the inside-out forecast. For example, if your outside-in forecast says that there is a 20% chance of material impact due to a ransomware attack this year for all U.S. companies, that's the baseline. Then, when you do the inside-out assessment by looking at how well your organization has deployed our first principle strategies, you might move the forecast up or down accordingly.
The Italian American physicist Enrico Fermi was a central figure in the invention of the atomic bomb, and he was renowned for his back-of-the-envelope estimates. With little or no information at his disposal, he would often calculate a number that subsequent measurement revealed to be impressively accurate. He would famously ask his students things like "estimate the number of square inches of pizza consumed by all the students at the University of Maryland during one semester," and he forbade them from looking up any information. He encouraged them to make back-of-the-envelope assumptions first. He understood that by breaking down the big intractable question (how many square inches of pizza consumed) into a series of much simpler answerable questions (how many students, how many pizza joints, how many square inches in a slice, and so on), we can better separate the knowable from the unknowable. The surprise is how often good probability estimates arise from a remarkably crude series of assumptions and guesstimates (more on this in a bit; a worked version of the pizza estimate also follows this list).
Frederick Mosteller, a groundbreaking statistician from the 1950s through the 1970s, said, "It is the experience of statisticians that when fairly 'crude' measurements are refined, the change more often than not turns out to be small. Statisticians would wholeheartedly say make better measurements, but they would often give a low probability to the prospect that finer measures would lead to different policy."
4: Check your assumptions: Adjust, tweak, abandon, seek new ones, and adjust your forecast from there.
5: Dragonfly eyes: Consume evidence from multiple sources. Construct a unified vision of it. Describe your judgment about it as clearly and concisely as you can, being as granular as you can.
6: Forecast at a 90% confidence level. As you adjust your forecast, remember that you want to be 90% confident about it. If you’re not, then you need to adjust up or down until you are.
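To make guideline 2 concrete, here is a minimal scoring sketch in Python. The three forecasts and their outcomes are invented for illustration, and the formula shown is the common binary form of the Brier score (Brier's original 1950 version sums over every outcome category and runs from 0 to 2, so its numbers come out doubled).

```python
# Minimal Brier-score sketch: keep a list of the probabilities you forecast
# and a matching list of what actually happened, then score yourself.
# 0.0 is a perfect record, 0.25 is what always guessing 50/50 earns,
# and 1.0 means you were confidently wrong every time.

def brier_score(forecasts, outcomes):
    """forecasts: probabilities between 0.0 and 1.0;
    outcomes: 1 if the event happened, 0 if it did not."""
    if len(forecasts) != len(outcomes):
        raise ValueError("need exactly one outcome per forecast")
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Three hypothetical forecasts (illustrative numbers only):
#   70% chance of a material ransomware impact this year -> it happened
#   20% chance a key vendor gets breached this quarter   -> it did not
#   90% chance the audit finds a critical gap            -> it happened
forecasts = [0.70, 0.20, 0.90]
outcomes = [1, 0, 1]
print(round(brier_score(forecasts, outcomes), 3))  # 0.047 (a strong record so far)
```

Run this over dozens of forecasts instead of three and the score starts to separate calibration problems (systematic over- or under-confidence) from resolution problems (timidly hedging everything near 50%).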
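And here is Fermi's pizza question from guideline 3 worked as a back-of-the-envelope decomposition. Every input below is a deliberately crude guess of mine, not a researched figure; the point is the method of breaking one intractable question into a handful of answerable ones and multiplying them together.

```python
# Crude Fermi decomposition: square inches of pizza eaten by University of
# Maryland students in one semester. All inputs are guesstimates.

students            = 40_000  # rough headcount for a big state school
share_who_eat_pizza = 0.75    # most students, but not all
slices_per_week     = 3       # a couple of late nights
weeks_per_semester  = 15
sq_inches_per_slice = 19      # one eighth of a 14-inch pie: pi * 7**2 / 8

slices = students * share_who_eat_pizza * slices_per_week * weeks_per_semester
total_sq_inches = slices * sq_inches_per_slice

print(f"~{total_sq_inches:,.0f} square inches")  # ~25,650,000
```

The individual guesses are certainly wrong, but the errors tend to cancel, which is why the final answer usually lands within the right order of magnitude.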
The point of all of this is that it's possible to forecast the probability of some future and mind-numbingly complex event with enough precision to base decisions on. If the Geezers-on-the-Go can accurately predict the future of the Syrian President, surely a bunch of no-math CISOs, like me, can forecast the probability of a material impact due to a cyber event for their organizations. That's cybersecurity risk forecasting.
People don’t think in terms of probabilities, but should.
Tetlock spends time talking about how the U.S. government has failed to do this kind of thinking in the past, with consequences that you and I would call massive intelligence failures:
- WMD in Iraq: 20 years of war on the "slam dunk" CIA assertion that these weapons existed in Iraq when they didn't.
- Vietnam War: 10 years of war on the widely held belief that if South Vietnam fell, the entire world would fall to communism like dominoes. Leaders didn't just think there was a chance this would happen. They thought it was a sure thing.
- Bay of Pigs: President Kennedy's political disaster, when the planners didn't consider the probability of success after the plan changed at the last minute.
- Is Osama bin Laden in the bunker?
Tetlock describes a scene in one of my favorite movies, 2012's "Zero Dark Thirty," starring Jessica Chastain. The CIA director, Leon Panetta (played by the late, great James Gandolfini), is in a conference room asking his staff for a recommendation on whether or not Osama bin Laden is in the bunker. He's looking for a yes or no answer. One of his guys says that he fronted the bad recommendation about WMD in Iraq, and because of that failure, they don't deal in certainties anymore. They deal in probabilities. Which is the right answer, by the way, just not a very satisfying one. They go around the room and get a range of probabilities from 60% to 80%. Chastain breaks into the conversation and says that the probability is 100%. "OK fine, 95%," she says, "because I know certainty freaks you out. But it's 100%." Which is the wrong answer, by the way. The probability was never 100%, no matter how sure she was of her evidence.
It's clear that as humans in our everyday lives, we don't really understand probabilities. And even if we do, they are not satisfying. We'd much prefer a yes or no answer. Will the company have a material breach this year? Telling the CEO yes or no is much more palatable to her than saying there is a 15% chance. What does she do with a 15% chance anyway? That answer is harder to deal with, demands an effort to parse, and requires thinking, strategy, and flexibility. A yes/no answer, on the other hand, is nothing more than an if-then-else clause in a programming language. If we're going to get breached this year, then spend resources to mitigate the damage; else, spend that money on making the product better. Easy.
Unfortunately, no matter how much we desire to live in a fantasy world full of binary answers (yes/no), the real world doesn't work that way. In Neal Stephenson's science fiction novel "Seveneves," his Neil deGrasse Tyson-like character, Doc Dubois, explains how he calculates rocket trajectories through a debris field: "It is a statistical problem. On about [day 1] it stopped being a Newtonian mechanics problem and turned into statistics. It has been statistics ever since."
Exactly. Calculating cyber risk has never been Newtonian either. It's always been stochastic, no matter how much we want to simplify the calculation into easy-to-read heat maps. We just didn't treat it that way. And, by the way, heat maps are just bad science (see the research summary at the end of this essay). So don't use them.
It might be more useful to reframe how we think about probabilities. If you're like me, your own statistics experience came from guessing what color marble will fall out of an urn in that Probability and Stats 101 course we all had to take in college. And yes, that's a great introduction to the concept, but that coursework represents only a small sliver of what probabilities really are.
A more useful and broader description in the cybersecurity context comes from Dr. Ron Howard, the father of decision analysis theory. His entire field of study is based on the idea that probabilities represent uncertainty when making a decision, not the number of marbles in our urn collection. Probability is not necessarily found in the data, meaning that you don't have to count all-the-things in order to make an uncertainty forecast using probability. He says that "Only a person can assign a probability, taking into account any data or other knowledge available." Counting marbles tumbling out of urns is one way to take account of data, but Howard's great insight is that "A probability reflects a person's knowledge (or equivalently ignorance) about some uncertain distinction." He says, "Don't think of probability or uncertainties as the lack of knowledge. Think of them instead as a very detailed description of exactly what you know."
Tetlock interviewed the real Leon Panetta about that internal CIA meeting and the subsequent meeting Panetta had with President Obama about the decision to send special forces into Pakistan to get Osama bin Laden. When the President went around the room with his staff, he also got a range of probabilities. His conclusion, though, after reviewing those recommendations, was that his staff didn't know for sure. Therefore, it was simply a fifty-fifty chance, a toss-up, on whether or not Osama bin Laden was in the bunker. Which is the wrong conclusion, by the way. The chances were probably much better than that. He ultimately made the right call, but he could just as easily have erred on the side of caution.
Black swans and resilience.
Tetlock also describes criticism of his superforecasting approach from his colleague Nassim Taleb, the author of "The Black Swan: The Impact of the Highly Improbable," published in 2007. Taleb says that forecasting is impossible because history is controlled by "the tyranny of the singular, the accidental, the unseen and the unpredicted." According to New York Times journalist Gregg Easterbrook, Taleb argues that "Experts are charlatans who believe in bell curves, in which most distribution is toward the center — ordinary and knowable. Far more powerful, Taleb argues, are the wild outcomes of fractal geometry, in which anything can happen overnight." Taleb says that "What matters can’t be forecast and what can be forecast doesn’t matter. Believing otherwise lulls us into a false sense of security." Acknowledging the argument, Tetlock says that "The black swan is therefore a brilliant metaphor for an event so far outside experience we can’t even imagine it until it happens."
Case in point, if we do some first-order, back-of-the-envelope calculations, some Fermi estimates, we know that in 2021 the press reported some 5,000 successful cyber attacks against U.S. companies. We also know that there are roughly six million commercial companies in the United States. Doing the outside-in forecast, there was a 5,000-in-6,000,000 chance of any given U.S. company getting breached in 2021, about 0.0008. That's a really small number. (I'm going to refine that forecast later, but for now, just go with me on it.) By definition, though, the experiences of those 5,000 companies were black swan events: significant, impactful events that were not very likely to happen at all.
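Here is the back-of-the-envelope arithmetic for that outside-in number, a crude sketch using only the two figures from the paragraph above:

```python
# Outside-in base rate: publicly reported successful attacks on U.S. companies
# in 2021 divided by the rough count of U.S. commercial companies.
reported_successful_attacks_2021 = 5_000
us_commercial_companies = 6_000_000

base_rate = reported_successful_attacks_2021 / us_commercial_companies
print(f"{base_rate:.4f}")   # 0.0008, roughly 1 company in 1,200
print(f"{base_rate:.2%}")   # about 0.08% in a given year, before any refinement
```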
Tetlock's response to Taleb is that there is probably a set of estimation problems that are too hard to forecast, but he says that is largely because the forecasting horizon is too long. For example, it's tough to forecast who will win the U.S. Presidential election in 2028 (six years from the time of this writing), but you could do well with the U.S. Congressional elections in 2022 (three months out).
That said, Taleb's solution to black swan events is not to attempt to prevent them but to try to survive them. He says resilience is the key. For example, instead of trying to prevent a giant meteor from hitting the earth, the question is how you would survive one. In the cybersecurity context, instead of trying to prevent Panda Bear from breaching your organization, what would you do to ensure that your organization continues to deliver its services during and after the attack? And that sounds an awful lot like our cybersecurity first principle strategy: resilience.
Changing my mind.
I've been trying to get my arms around how to do risk assessment with more precision for over five years now. I've read the books, written book reviews for the Canon Project, interviewed many of the associated authors, published a couple of papers, and even presented those papers in consecutive years at the same security conference, one with Richard Seiersen, an author of one of the books (see references).
My initial thought when I started all of this was that the main reason calculating risk was so hard for the infosec community was that it involved some high-order math, a skill beyond most senior security practitioners. I became convinced that, in order to have enough precision to persuade senior leadership that my risk calculation was valid, I was going to have to demonstrate my prowess with things like Monte Carlo simulations and Bayesian algorithms. And then I was going to have to explain what Monte Carlo simulations and Bayesian algorithms were to those same senior leaders who were having a hard enough time understanding why our annual firewall subscription was so expensive. That seemed like a bridge too far.
So, after five years of looking into how to do that, I've become a fan of Fermi and Mosteller. According to Nagesh Belludi of the Right Attitudes website, "Fermi believed that the ability to guesstimate was an essential skill for physicists." I would say that the skill applies to any decision maker, but especially to decision makers in the tech and security worlds, where the scale of the problems we encounter is so enormous. Getting a precise estimate is hard and time-consuming, but getting an estimate that's in the right ballpark in terms of order of magnitude is relatively easy and will probably be sufficient for most decisions. And even if it's not, you can always decide to do the more precise estimate later.
Case in point, here at the CyberWire, we did an inside-out evaluation of our internal first principle cybersecurity posture in 2022. We evaluated our defenses in terms of zero trust, intrusion kill chain prevention, resilience, automation, and compliance. Once complete, we briefed the boss on our findings and gave him our estimated probability of material impact due to some cyber event in the next year. I then asked him for permission to do a deeper dive on the issue in order to get a more precise answer. His answer to me was spot on.
He looked at the level of effort this deeper dive was going to take, not only for the internal security team but for the entire company and especially for him. Frankly, it was going to be high. He then asked this question: "What do you think the difference is going to be between this initial inside-out estimate and the deeper dive?" I had to admit that I didn't think the deeper-dive estimate was going to be that far away from the inside-out estimate, maybe a couple of percentage points up or down. He then said that if that was the case, he didn't need the deeper dive in order to make decisions about any future resource investment in the CyberWire's defensive posture. The initial estimate was good enough.
Quite so.
Next steps.
In the next couple of essays, I am going to cover how to do an outside-in estimate for the cybersecurity community and discuss how to adjust it for your specific situation. In other words, we're going to start with a general outside-in estimate and adjust it based on the size of your organization (small, medium, Fortune 500) and the type of organization (government, academic, commercial). I will then discuss how to get an inside-out estimate based on how well your organization has deployed our first principle strategies.
Research on why heat maps are poor vehicles for conveying risk.
2005: Surveyed NATO officers believed that "highly likely" could mean anywhere between 40% and 100% likely. (Ronald Howard, "The Foundations of Decision Analysis Revisited.")
2006: Studies find that experts choose "1" more often on a scale of, say, "1" to "10," regardless of the subject matter the number is supposed to represent. (Kelly See, Craig Fox, and Yuval Rottenstreich, "Between ignorance and truth.")
2008: Ordinal scales inadvertently create range compression, a kind of extreme rounding error. (Louis Anthony Cox Jr., "What's Wrong with Risk Matrices?")
2009: Surveyed students and faculty believed that "very likely" could mean anywhere between 43% and 99% likely. (Budescu, Broomell, and Por, "Improving Communication of Uncertainty in the Reports of the Intergovernmental Panel on Climate Change.")
2016: Cybersecurity scoring systems like OWASP (the Open Web Application Security Project), CVSS (the Common Vulnerability Scoring System), CWSS (the Common Weakness Scoring System), and CCSS (the Common Configuration Scoring System) perform improper math on non-mathematical, ordinal objects to aggregate a risk score. (Hubbard and Seiersen, "How to Measure Anything in Cybersecurity Risk.")
2016: The idea of "risk tolerance" is not represented; just because risk officers rate an event as highly likely does not mean that leadership is unwilling to accept that risk. (Hubbard and Seiersen, "How to Measure Anything in Cybersecurity Risk.")
2016: Heat maps convey no information about when the event might happen (e.g., next year, the next three years, the next decade). (Hubbard and Seiersen, "How to Measure Anything in Cybersecurity Risk.")
2016: Some risk officers rate events as more likely simply because they would be more impactful. (Hubbard and Seiersen, "How to Measure Anything in Cybersecurity Risk.")
2016: Even when the percentage ranges are explicitly defined (for example, "highly likely" means 90% to 99%), survey participants violated those definitions over half the time. (Hubbard and Seiersen, "How to Measure Anything in Cybersecurity Risk.")
2016: Most surveyed experts using ordinal scales from "1" to "5" chose the values "3" or "4," effectively reducing the 5x5 matrix to a 2x2 matrix. (Hubbard and Seiersen, "How to Measure Anything in Cybersecurity Risk.")
References.
"Author Interview: 'Security Metrics: A Beginner’s Guide’ Review'," by Rick Howard, The Cyberwire, the Cybersecurity Canon Project, Ohio State University, 2021.
“Between ignorance and truth: Partition dependence and learning in judgment under uncertainty,” by Kelly See, Craig Fox, and Yuval Rottenstreich, Journal of Experimental Psychology, 4 December 2006.
“Book Review: 'How to Measure Anything in Cybersecurity Risk'," by Steve Winterfeld, the Cybersecurity Canon Project, Ohio State University, 2021.
"Book Review: 'How to Measure Anything: Finding the Value of Intangibles in Business'," by Rick Howard, the Cybersecurity Canon Project, Palo Alto Networks, 19 July 2017.
“Book Review: 'Measuring and Managing Information Risk: A FAIR Approach',” by Ben Rothke, the Cybersecurity Canon Project, Ohio State University, 2021.
“Book Review: 'Security Metrics: A Beginner’s Guide'," by Ben Smith, the Cybersecurity Canon Project, Ohio State University, 2021.
"Book Review: 'Security Metrics: Replacing Fear, Uncertainty and Doubt'," by Rick Howard, the Cybersecurity Canon Project, Ohio State University, 2021.
"Book Review: 'Superforecasting'," by Scott Alexander, Slate Star Codex, 4 February 2016.
“Fermi Estimations,” by Bryan Braun, 4 December 2011.
“Fermi Problems: Estimation,” by TheProblemSite.com, 2022.
“Founder of Harvard’s Statistics Department, Frederick Mosteller, Dies,” by Alvin Powell, Harvard Gazette, 25 July 2006.
"How to Measure Anything in Cybersecurity Risk," by Douglas W. Hubbard, Richard Seiersen, Published by Wiley, 25 April 2016.
“How Superforecasting Can Help Improve Cyber-Security Risk Assessment,” By Sean Michael Kerner, eWeek, 6 March 2019.
"How to predict the future better than anyone else,” By Ana Swanson, 4 January 2016.
"Improving Communication of Uncertainty in the Reports of Intergovernmental Panel on Climate Change," by David Budescu, Stephen Broomell, and Han-Hui Por, Psychological Science 20, no. 3, 2009,
“Materiality in a nutshell,” by datamaran.
"Measuring and Managing Information Risk: A Fair Approach," by Jack Freund and Jack Jones, Published by Butterworth-Heinemann, 22 August 2014.
“Measuring Reputation Damage Due to Successful Cyber Attack,” by Rick Howard, LinkedIn, 29 June 2018.
“Metrics and risk: All models are wrong, some are useful,” By Rick Howard, CSO Perspectives, the CyberWire, 30 March 2020, Last Visited 30 June 2020.
"‘Mindware’ and ‘Superforecasting’," By Leonard Mlodinow, 15 October 2015.
“On Equality of Educational Opportunity: PAPERS DERIVING FROM THE HARVARD UNIVERSITY FACULTY SEMINAR ON THE COLEMAN REPORT,” EDITED BY Frederick Mosteller and Daniel P. Moynihan, Published by Random House New York, 1972.
"Pundits are regularly outpredicted by people you’ve never heard of. Here’s how to change that,” By Sam Winter-Levy and Jacob Trefethen, The Washington Post, 30 September 2015.
"Security Metrics: A Beginner’s Guide," by Caroline Wong, Published by McGraw-Hill Companies, 10 November 2011.
"Security Metrics: Replacing Fear, Uncertainty, and Doubt," by Andrew Jaquith, Published by Addison-Wesley Professional, 1 March 2007.
"Seeing with Fresh Eyes: Meaning, Space, Data, Truth," by Edward R. Tufte, Published by Graphic Press, 2020.
"Seveneves," by Neal Stephenson, narrated by Robinette Kowal and Will Damon, Published by William Morrow, 19 May 2015
“Superforecasting II: Risk Assessment Prognostication in the 21st Century [Paper], [Presentation],” by Rick Howard and Dave Caswell, RSA Conference, 5 March 2019.
"Superforecasting: The Art and Science of Prediction,” by Philip E. Tetlock and Dan Gardner, 29 September 2015, Crown.
“Superforecasting: Summary and Review," by HowDo, 16 June 2021.
“The Foundations of Decision Analysis Revisited,” by Ronald Howard, Chapter 3, last visited 17 January 2019.
“Talking Risk to the Board is Two Different Tasks, not One,” by Rick Howard, LinkedIn, 27 August 2018.
“The Art of Approximation in Science and Engineering,” by Sanjoy Mahajan, Electrical Engineering and Computer Science, MIT OpenCourseWare, 2022.
"The Black Swan: The Impact of the Highly Improbable," by Nassim Nicholas Taleb, Published by Random House, 17 April 2007.
“The Fermi Rule: Better Be Approximately Right than Precisely Wrong,” by Nagesh Belludi, Right Attitudes, 28 August 2017.
"What's Wrong with Risk Matrices?" by Louise Cox, Risk Analysis 28, no. 2, Society for Risk Analysis, April 2008.
“Zero Dark Thirty Meeting Scene,” YouTube, 1 July 2019.