CSO Perspectives (Pro) 6.15.20
Ep 11 | 6.15.20

Risk assessment: a first principle of cybersecurity.

Transcript

Rick Howard: Probability and stats - ugh. When I was in college, that course was my nemesis. The word probability still sends shivers down my spine almost 40 years later, invoking the nightmares I still have about barely getting through that course by the skin of my teeth. And I have talked to a lot of CSOs over the years who feel the same way that I do. There's a reason we're IT people and not math people. Math is hard. And that probability and stats class still holds a special place in my heart because it makes me feel like Ron, Hermione and Harry studying Defense Against the Dark Arts at Hogwarts in the "Harry Potter" series.

(SOUNDBITE OF FILM, "HARRY POTTER AND THE GOBLET OF FIRE") 

Brendan Gleeson: (As Alastor Moody) Alastor Moody. Ex-Auror, Ministry malcontent and your new Defense Against the Dark Arts teacher. I am here because Dumbledore asked me - end of story - goodbye - the end. 

Rick Howard: I am still convinced today that solving even simple probability and stats problems involves a mix of a little voodoo and probably the sacrifice of a small woodland creature or two. If somebody mentions probability and stats to me, there is a good chance you'll catch me running at full speed in the opposite direction. In my early network defender days, whenever somebody asked me to do a risk assessment, I would punt. I would roll out my qualitative heat map risk assessments with their three levels of precision - high, medium and low - and I'd call it a day. Along with my peers, I would tell myself that predicting risk with any more precision was impossible, that there were too many variables, that cybersecurity was somehow different from all the other disciplines in the world and it couldn't be done. We were wrong, of course. 

Rick Howard: My name is Rick Howard. You are listening to "CSO Perspectives," my podcast about the ideas, strategies and technologies that senior security executives wrestle with on a daily basis. This is the sixth show in a series that discusses the development of a general purpose cybersecurity strategy using the concept of first principles. So far, I've explained what first principles are and made an argument about what the very first principles should be. Since then, I have covered zero trust, intrusion kill chains, resilience and DevSecOps. In this show, we are talking all things risk. 

Rick Howard: The book that changed my mind on the subject of conducting precision risk assessments was "Superforecasting: The Art and Science of Prediction" by Philip Tetlock and Dan Gardner. Dr. Tetlock is quite the character. He's one of those guys who will scream and shake a raised fist at the TV because those people have no idea what they're talking about - kind of like my grandpa every night of the week. Tetlock would watch news programs like CNN, Fox and MSNBC, where the hosts would roll out famous pundits to give their opinions on some topic because, you know, once in their lives they had predicted something correctly. It didn't matter that all the predictions they'd made since then were wrong. The news programs would still bring them on as if they were Moses coming down from the mountain to present their wisdom. 

Rick Howard: Dr. Tetlock wanted to keep score. I always thought that when pundits came on, the viewer should see their batting average rolling across the chyron at the bottom of the screen - this pundit has made 73 correct predictions out of 1,000 tries in the last year - so maybe you shouldn't listen too closely to what they have to say. And then Dr. Tetlock decided to test his idea. Working with IARPA - that stands for the Intelligence Advanced Research Projects Activity - he devised a test using three groups: the intelligence community, the academic community and a group he called the soccer moms. The soccer moms weren't really soccer moms; they were just regular people with time on their hands who liked to solve puzzles. According to The Washington Post, he then had them forecast answers to 500 really hard questions like, will the Syrian president still be in power in six months' time? Or will there be a military exchange in the South China Sea in the next year? Or will the number of terrorist attacks sponsored by Iran increase within one year of the removal of sanctions? Out of the three communities, the soccer moms outperformed the academics and the intelligence community by as much as 30%, and there were a few analysts who outperformed them all by large margins, whom Tetlock called the superforecasters. 

Rick Howard: There are many reasons for the success of both the soccer moms and the superforecasters. According to Tetlock, it mostly came down to ruthlessly admitting their own cognitive biases and not being afraid to change their minds as new evidence came in. The point of all this is that it is possible to forecast the probability of future, mind-numbingly complex events with precision. If the soccer moms can accurately predict the future of the Syrian president, surely a bunch of no-math CSOs - like me - can forecast the probability of a material impact due to a cyber event for their organizations. 
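(An aside for readers following along at home: keeping score on forecasters the way Tetlock did is simple to demonstrate. His research graded forecasters with the Brier score - the mean squared error between a probability forecast and what actually happened. The sketch below is a minimal Python illustration of that scoring idea; the forecasts, outcomes and personas in it are hypothetical, not data from the study.)

```python
# Keeping score on forecasters, in the spirit of Tetlock's work.
# The Brier score is the mean squared error between a probability
# forecast and the outcome: 0.0 is perfect, 0.25 is what always
# guessing 50/50 earns, and higher is worse. All numbers below are
# hypothetical.

def brier_score(forecasts, outcomes):
    """Average squared gap between forecasts (probabilities, 0..1)
    and outcomes (1 if the event happened, 0 if it did not)."""
    pairs = list(zip(forecasts, outcomes))
    return sum((f - o) ** 2 for f, o in pairs) / len(pairs)

# A hypothetical TV pundit: loud, confident and frequently wrong.
pundit = brier_score([0.90, 0.80, 0.95, 0.85], [0, 1, 0, 0])

# A hypothetical soccer mom: hedged, but usually on the right side.
soccer_mom = brier_score([0.30, 0.70, 0.20, 0.35], [0, 1, 0, 0])

print(f"Pundit Brier score:     {pundit:.2f}")      # ~0.62 (bad)
print(f"Soccer mom Brier score: {soccer_mom:.2f}")  # ~0.09 (good)
```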

Rick Howard: For cybersecurity specifically, I recommend two Cybersecurity Canon Hall of Fame books - "Measuring and Managing Information Risk: A FAIR Approach" by Jack Freund and Jack Jones and "How to Measure Anything in Cybersecurity Risk" by Douglas Hubbard and Richard Seiersen. One key takeaway from both of these books is that the network defender community has a limited understanding of what probability is. My guess is that this understanding comes from that one intro to probability and stats course we all took in college. We remember that to get a probability, we have to count things. We have to count the number of times something happens and then divide it by the number of times it could've happened. And that is one reason why most network defenders say that evaluating cybersecurity risk is impossible. What things do you count? How do you know how many times those things could've happened? And there are so many things to count in cybersecurity. Which things do you choose? While all these questions are valid to a point, probability is so much more than that. For a more useful description of probability, we should turn to the father of decision analysis theory, Dr. Ron Howard - no relation, by the way - and definitely not this guy. 

(SOUNDBITE OF TV SHOW, "THE ANDY GRIFFITH SHOW") 

Ron Howard: (As Opie Taylor) Now, Sheriff Taylor, did you not say that Post Toasties had the corn flakes crackling with fresh corn flavor? 

Andy Griffith: (As Andy Taylor) Well... 

Rick Howard: Decision analysis theory is a formalized approach to making optimal choices under conditions of uncertainty. Dr. Howard defined the discipline in 1964 and has played a major role in the research ever since. He would say that if the CSO forecasts that there is a 20% chance of some bad cyber thing happening next year, that 20% is the current and most precise mathematical representation of what the CSO knows. He says, and I quote, "don't think of probability or uncertainties as the lack of knowledge. Think of them, instead, as a very detailed description of exactly what you know," end quote. In other words, you don't have to count things to make that assessment. From my own experience, I've learned that to assess the risk of any organization, you have to ask the right question, and that question must have three components. First, there needs to be a precise, quantitative probability - not some mushy, qualitative guess like high, medium or low. 

Rick Howard: But a probability of what? You can't just tell the boss that there is a 20% chance that some bad cyber thing is going to happen. Even though you are using probabilities, that's still not precise enough. What exactly is bad? We know that not everything in cyber land is important, and we also know that we don't have enough resources to protect everything. We should be focusing, then, on what is material to the organization. In the first podcast of this first principles series, I took a definition of material - and I'm using air quotes here - from the team at Datamaran - quote, "a material issue can have a major impact on the financial, economic, reputational and legal aspects of a company as well as on the system of internal and external stakeholders of that company," end quote. But that still isn't enough. You can't just tell the boss that there is a 20% chance that some material bad thing is going to happen sometime in the future. Of course that's true. The future is a long time. If you wait long enough, some material bad thing is going to happen. So to complete the triad of a good risk question, it has to be time-bound. 

Rick Howard: In the essay "Metrics and Risk: All Models Are Wrong, Some Are Useful" that I published in March, I said this - quote, "the question we need to answer for the board, then, is what is the probability that a cyber event will materially impact the company in the next three years? Answer that question and then board members can decide if they're comfortable with that level of risk or if they need to invest in people, process or technology to reduce it," end quote. 

Rick Howard: So how do you answer that seemingly impossible question? Let's start with a simple but useful model. Let's poll the infosec team. Using techniques described in the books mentioned in this podcast, have the team build consensus on a probability range with 95% confidence. In other words, no matter what the upper and lower limits are, they are 95% confident that the actual probability will fall within that range. And don't be concerned if the range is wide. This is just the first step. Like I said, it is a simple but useful model. 
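(For readers who want to see that polling step concretely, here is a minimal sketch in Python. Everything in it is an assumption made for illustration: the team size, each analyst's interval and the simple rule of averaging the lower and upper bounds. The books cited above describe more rigorous elicitation and aggregation methods; this just shows the shape of the first step.)

```python
# A minimal sketch of polling the infosec team. Each analyst gives
# a (low, high) range they are 95% confident contains the true
# probability of a material cyber event in the next three years.
# The numbers and the averaging rule are illustrative assumptions.

def consensus_range(estimates):
    """Combine individual 95% confidence ranges into one team range
    by averaging the lower bounds and averaging the upper bounds."""
    lows = [low for low, _ in estimates]
    highs = [high for _, high in estimates]
    return sum(lows) / len(lows), sum(highs) / len(highs)

# A hypothetical five-person team's estimates.
team_estimates = [
    (0.05, 0.40),
    (0.10, 0.35),
    (0.15, 0.50),
    (0.05, 0.30),
    (0.10, 0.45),
]

low, high = consensus_range(team_estimates)
print(f"The team is ~95% confident the probability of a material "
      f"impact falls between {low:.0%} and {high:.0%}.")
# -> ... falls between 9% and 40%.
```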

Rick Howard: In future podcasts in this first principles series, I'll discuss how we can add complexity to our forecasting models by including Monte Carlo simulations, loss exceedance curves and other improvement techniques designed to narrow that range with confidence. But by taking this first step, you have a probability you can take to the board. Our atomic first principle is this - reduce the probability of material impact to my organization due to a cyber event. It goes without saying, then, that if this simple statement about probability is the basis for everything we are doing in our infosec program, then we must have a way to calculate it. Conducting risk assessments, or forecasting probability, becomes an essential strategy on the same level of importance as zero trust, intrusion kill chains, resilience and DevSecOps. We are placing all of those bricks on that first pillar to give it strength. What I am advocating for is an expansion of your understanding of what probability is, beyond what you learned in that probability and stats course back in college. 
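(And as a preview of where that modeling goes, here is one minimal Monte Carlo sketch under assumed inputs. It treats the team's consensus range from the polling step as uniform uncertainty about the annual probability of a material event, then simulates many three-year windows. The uniform assumption, the range and the trial count are all illustrative choices, not a prescribed method.)

```python
import random

# A minimal Monte Carlo sketch using assumed inputs. Draw an annual
# probability of a material cyber event uniformly from the team's
# hypothetical consensus range, simulate a three-year window, and
# count how often at least one material event occurs.

LOW, HIGH = 0.09, 0.40   # hypothetical consensus range from the poll
YEARS = 3
TRIALS = 100_000

hits = 0
for _ in range(TRIALS):
    p_annual = random.uniform(LOW, HIGH)  # our uncertainty about p
    if any(random.random() < p_annual for _ in range(YEARS)):
        hits += 1

print(f"Estimated probability of a material impact in the next "
      f"{YEARS} years: {hits / TRIALS:.0%}")
# With these assumed inputs, this lands around 55%.
```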

Rick Howard: Take the lead from Dr. Ron Howard and determine a metric - a probability with which you can precisely explain to the board what you do know about the security posture of your organization. That is something far more meaningful than a squishy high, medium or low guess. Consider that risk assessments are nothing more than forecasting the probability of something material happening to your organization, and start developing your own superforecasters now. If soccer moms can do it for the Gordian knot of international geopolitics, I think it is possible for us network defenders to do it to support our infosec first principle wall. Whether we like it or not, we are all going to have to face our fear of probability and stats and figure this out, so strap in. And you might want to collect a few small woodland animals just in case, and maybe dust off those "Harry Potter" books. 

Rick Howard: That's a wrap. If you agree or disagree with anything I've said, hit me up on LinkedIn or Twitter, and we can continue the conversation there. The CyberWire's "CSO Perspectives" is edited by John Petrik and executive produced by Peter Kilpe. Mix, sound design and original music by the insanely talented Elliott Peltzman. And I am Rick Howard. Thanks for listening.