8th Layer Insights
Ep 8 | 8.31.21

The Risk Episode: Black Swans, Grey Rhinos, Angels & Demons

Transcript

Perry Carpenter: Hi, I'm Perry Carpenter, and you're listening to "8th Layer Insights." Have you ever wondered how, even as kids, we seem to master extremely complex skills that are far beyond any school-based learning we've had? Here's an example. Think about the math and physics involved in playing a simple game of catch. You've got someone throwing a ball and someone catching a ball. And their minds are constantly performing calculations related to distance and force and mass and velocity and trajectory and arc and things that I can't even think of right now. And honestly, if you handed me some graph paper and a pencil, there's no way I could write out the equations involved, much less solve them.

Perry Carpenter: But somehow, our minds engage in this kind of problem-solving naturally and automatically. And our minds do this type of automatic problem-solving or automatic analysis or background processing or whatever you might choose to call it in tons of areas that can save us both time and effort, but which can also sometimes lead us in a completely wrong direction, so wrong that if we were that kid playing catch, those incorrect calculations would mean that there are times when we miss the ball entirely or, even worse, get smacked in the face with it. Yeah. Ouch. But here's the thing - getting smacked in the face may actually be a kindness compared with the long-term effects of our mental miscalculations and oversights. 

Perry Carpenter: That smack in the face is embarrassing, and it might hurt for a few days. But what about those lapses in judgment or miscalculations that can last for years or decades or a lifetime? And what about the types of decisions that we make one day but whose consequences are delayed - decisions that lie dormant, waiting like a ticking time bomb? Today, we'll hear from four risk experts: Michele Wucker, Christian Hunt, Arun Vishwanath and Matt Stamper. They'll explain how and why we're so bad at understanding risk and how we can begin to defuse that time bomb before it explodes in our faces, before it's too late. 

Michele Wucker: We've seen so much great work in behavioral science over the last couple of decades and people pay a lot of attention to that. But that storyline tends to go, well, you know, humans have loss aversion. Humans have optimism bias. But not every human has the same amount of that cognitive bias. 

Arun Vishwanath: The problem with risk assessment is that the entire technology is built to make us think that things are not risky. Remember, there was this movement - they used to call it skeuomorphism. We came up with fancy words for it - right? - where they would make things appear like their real-world counterparts. Well, what's the object of that? The object of that is to reduce your perception of risk. 

Michele Wucker: What is it that makes each one of us choose the things that we do, the risks that we take and the risks that we don't take? 

Christian Hunt: If I'm going out, going shopping, I am taking risks in the things I buy. But I'm also taking risks in the things I'm not buying because I'm missing opportunities. Crossing the road is a risky activity. So we are brought up in the world to take risk and manage it and handle it. And there are no activities that we can engage with that are 100% safe and secure. So whatever we're doing in the world, we are trying to manage that. 

Michele Wucker: And that led me to this concept of the risk fingerprint. 

Christian Hunt: We very often don't recognize the risks that we're running. We don't think intelligently about how to manage them. 

Matt Stamper: One of the most critical things that I think CISOs need to do is take all of that fact pattern that they're gathering up and put it into an enterprise risk register. And that's where you can take digital risk, you can take technology risk, you can take any other forms of risk, if you will, and translate it into, what are the impacts to our organization's operations? Interrogate reality. Try to find areas where certain types of risk just haven't been thought about or contemplated. 

Perry Carpenter: Today, we explore the concept of risk, why we're so bad at understanding it and the steps we can take to improve. Stay with us. 

Perry Carpenter: Hi there. My name is Perry Carpenter. Join me for a deep dive into what cybersecurity professionals refer to as the eighth layer of security - humans. This podcast is a multidisciplinary exploration into the complexities of human nature and how those complexities impact everything from why we think the things that we think to why we do the things that we do and how we can all make better decisions every day. Welcome to "8th Layer Insights." I'm your host, Perry Carpenter. We'll be right back after this message. 

Perry Carpenter: Welcome back. Let's think for a minute about how we as humans process risk. The way I tend to imagine it is like one of those TV shows or movies where you get a character who's trying to make a decision. And then in that moment, two other characters enter the picture - an angel and a devil, and these characters usually pop up on the opposite shoulders of the person making the decision, whispering in their ears to subtly influence them or sometimes even loudly argue their cause. Yeah, I think the way that we process risk is kind of like that. It's like I wake up and start going through the day, and all the while, my mind is scanning for various choices. 

Unidentified Person #1: Scanning environment. 

Perry Carpenter: And then lots of times, there's another part of my mind that's trying to rebel against any kind of real assessment of risk. 

Unidentified Person #2: Scanning environment - yeah, right. You don't have to be so uppy (ph) about things. 

Perry Carpenter: And that's how I or maybe all of us go throughout our day - with this virtual risk angel on one shoulder watching out for us... 

Unidentified Person #1: Risk detected. 

Perry Carpenter: ...And a risk demon on the other shoulder, arguing for laziness or comfort or pleasure or shortsightedness. 

Unidentified Person #2: Yeah, just ignore that. 

Perry Carpenter: And those inbuilt arguing perspectives on risk and life are constantly evaluating everything that we do, from the breakfast that we choose... 

Unidentified Person #1: Pizza again? 

Unidentified Person #2: Hey; pizza. That don't look too old. 

Perry Carpenter: ...Or the snacks that we eat to how we spend our money to our web browsing habits and the emails that we engage with. 

Unidentified Person #2: Oh, you got mail. 

Unidentified Person #1: Something seems off with that email. 

Unidentified Person #2: Click on that link now, buddy. You don't want to miss that. 

Unidentified Person #1: Evaluate further before clicking or downloading anything. 

Unidentified Person #2: It's nothing. 

Perry Carpenter: And this is a constant struggle that, frankly, we're not very good at. This comes back to the simple fact that our minds typically don't enjoy doing the hard work of analysis that it takes to evaluate long-term consequences. And as humans, we really don't like to do more work than we have to at a given time. Add to that things like cognitive biases, blind spots and a love for short-term pleasures over our long-term good, and you've got the perfect recipe for a risk crisis. And let's face it. Sometimes it just doesn't feel good to put off the short-term thing that could be really fun for some long-term priority that we probably won't feel any benefit from for months or years or maybe even decades. 

Unidentified Person #2: You, sir, need to get out. You're missing out. Come on. Spend a little. Live a little. We can always catch up later. 

Perry Carpenter: So even when we know that we're doing what's technically the right thing risk-wise, we can still feel a bit salty about it. 

Unidentified Person #1: That gas station sushi looks a bit sketchy. Think before proceeding. 

Unidentified Person #2: You're missing out. 

Unidentified Person #1: Thank you for considering the risk. 

Unidentified Person #2: Thank you for considering the risk? How about thank you for considering my ass? 

Perry Carpenter: But that's just the way that I tend to think about risk. And, well, there's a risk that I might be overlooking something. So let's quickly cover a few key concepts and then bring in some folks to help. 

Perry Carpenter: First up, let's define risk just so that we've got a base-level understanding of what we're all talking about. And I'm going to use the Merriam-Webster definition. The No. 1 definition of risk is the possibility of loss or injury. Then it goes on to define risk as someone or something that creates or suggests a hazard, or the chance of loss or the peril to the subject matter of an insurance contract, or the degree of probability of such loss, or a person or thing that is a specified hazard to an insurer. A hurricane might be a risk in that last example. Risk is also defined as the chance that an investment, like a stock or commodity, will lose value. You can also use risk as a verb when you're exposing someone or something to a danger or a hazard. And here's an example of that: Carl risked his life and his digestive system on that gas station sushi. 

Perry Carpenter: And so risk is the danger that some person, place or thing can be put in peril. And that peril relates to some sort of pain or loss. And that pain could be bodily, as in literally in or of your body, or structural, like your house or your place of business, or even emotional. And loss could be similar - loss of life or limb or house or business or loss of control of something, financial loss or even opportunity loss. Everything that we do in life has some type of inherent risk in it. And with that happy thought, let me bring in our first guest. 

Christian Hunt: Hello. My name is Christian Hunt, and I'm the founder of Human Risk. 

Perry Carpenter: Christian's company, Human Risk, specializes in leveraging techniques from behavioral science to address problems related to ethics and compliance. 

Perry Carpenter: So, Christian, can you tell us a bit about what risk means in your context and what makes the study of risk so interesting to you? 

Christian Hunt: We all intuitively understand the idea of taking risk. If I'm going shopping, I am taking risks in the things I buy, but I'm also taking risks in the things I'm not buying 'cause I'm missing opportunities. Crossing the road is a risky activity. So we are brought up in the world to take risk and manage it and handle it because if we weren't capable of doing that, we wouldn't survive for very long. 

Christian Hunt: So whatever we're doing in the world, we are trying to manage that. The flipside is that we very often don't think in those terms. We very often don't recognize the risks that we're running. We don't think intelligently about how to manage them. And so I'm fascinated by this disconnect between something that we naturally do and are, on some level, good at and something that we actually don't think about that often and, on some level, are appalling at. And when I look at what we are doing as a species, I think there's plenty of opportunity if we got our risk management right - in other words, if every single one of us understood more, if organizations were better at thinking about how they did that, if societies were better at thinking about how we did that - we would get much better outcomes. And I'm fascinated by the tension between those two things - of sort of saying, how do we get the best out of people, mitigate the risk, get them to be the best they possibly can using the thing that we find naturally in ourselves but also recognizing that, in many cases, we get it badly wrong? 

Perry Carpenter: Christian mentioned that risk is inherent to humanity. It's in everything that we do, and each of us has certain risks that we manage well and some that we're not so good at managing. In other words, risk isn't the same for everyone or for every organization. It's individual, which brings us to our next guest, Michele Wucker. Michele is the author of two books on risk - "The Gray Rhino: How to Recognize and Act on the Obvious Dangers That We Ignore" and "You Are What You Risk: The New Art and Science of Navigating an Uncertain World." We'll get to the gray rhino concept in just a bit. But first, I want to explore this concept of the unique, individualistic nature of risk that Christian alluded to. Michele explains this in her book, "You Are What You Risk." And she calls it the risk fingerprint. 

Michele Wucker: I coined the term gray rhino for the big, obvious things that are coming at us that give us a challenge to respond or not. And I've just come out with the sequel, "You Are What You Risk," which builds on some of the ideas in "The Gray Rhino." I got down to a much more granular level and asked, what is it that makes each one of us choose the things that we do, the risks that we take and the risks that we don't take? 

Michele Wucker: And that led me to this concept of the risk fingerprint. And, of course, like a fingerprint on a wine glass at a crime scene, the risk choices that you make leave an imprint that the world can see. They tell the world who you are just like that fingerprint imprint at a crime scene tells the detective. But there also are all sorts of components that come together that make you quite unique. There is the innate personality, the sort of genetic element that you might want to think of as the arches and the whorls and the - you know, the shapes on your fingerprint. There's the habits that you take, the mindfulness, the things that you do, which - you might, in metaphorical terms, think about whether you've got calluses as a manual worker or use nice, soft, sweet-smelling lotion. 

Michele Wucker: There's the environment around you. Your fingerprints actually sweat, which we don't think about, but they do. Or you know when you're in the bathtub and they get all wrinkly when you're cold? Your physical environment affects your risk decisions much more than you might think. I was just blown away by some of that. But also your experiences - like, a scar would alter your real fingerprint. And all of those things come together. 

Michele Wucker: And I found that there are some people who would have a shock, you know, a giant, unexpected - you know, a relative dies or whatever in their life. Some of them would respond to that shock by becoming much more comfortable with risk. They're like, well, if I got through that, I can get through anything. And then other people pulled back, and they became very, very hesitant to take risks. And so all of these things work together to create your risk fingerprint, which is really the set of influences behind the risk decisions that you make. My point is that by understanding your risk fingerprint, you can get crazy powerful insights into where you want to go in life. 

Perry Carpenter: So that's on the individual level. But, Michele, how does this work in collectives like organizations or different regions around the world? 

Michele Wucker: Every organization, every society is made up of, you know, tens, hundreds, thousands, millions of people. When you look at all of these different risk fingerprints out there and how they interact with each other, that gives you a hugely powerful tool for teamwork, for negotiation, for managing employees, for dealing with your clients. Or, you know, if you're a policymaker, a world I spent a lot of time in, you know, what are the kind of policies that you can come up with to encourage people to take, quote, unquote, "good risks," you know, to pursue opportunities and to avoid the, quote, unquote, "bad risks" - you know, the leaping before you look, the dangerous kinds of decisions that can bring down your company and sometimes bring down a lot more than just your company? 

Perry Carpenter: So we understand that we all have these subtle differences in the way that we personally and even collectively evaluate risks and respond to risks. But let's take a step back and ask a more fundamental question. Why is it that we seem to be so bad at evaluating risk? 

Christian Hunt: If you look at many of the 21st century decisions that we have to make, the environments that we find ourselves in don't necessarily lend themselves particularly well to the decision-making processes that we've used in the past. So the risk management machine, if you like - you know, our brains - are very much geared up to a totally different world. And I'll give you a few examples of where that manifests itself. Think about the volume of information that we're all exposed to today. And I'm thinking about what we get through the internet, the stimuli that we see. There's a ton of stuff that we have to process. 

Christian Hunt: Now, we are not very good at doing that, and I don't just mean multitasking. I literally mean, think about how many emails you read, how many different decisions you make in a day. It's something like 30,000 decisions, they estimate, which includes everything. Which socks am I going to wear? What am I going to eat for lunch? Am I going to stand up? Am I going to sit down? Am I going to open the window? Am I going to close it? If you add up all those decisions, there's a ton of other ones that we're taking, and we're just not built for that particular world, and so we get information overload. Sometimes we have too much information. Sometimes we don't have enough information, so we plug the gaps, and we make decisions that perhaps we shouldn't take. 

Christian Hunt: And I think what we've got nowadays is a set of circumstances where the environment that we're in, the sorts of decisions that we're able to take through technology and through some of the other societal changes that we've had, are things that we're not equipped to handle and deal with. And so when I look at what we're not good at, it is things that we're not built to deal with. And, you know, the power that you have at your fingertips when you - your keyboard - you can unleash a whole load of things. We're just not able to handle uptaking (ph). 

Christian Hunt: Social media is a really good example of something where on the face of it, that's just people. It's person-to-person communication. But it's not. You're allowing people to broadcast in ways they've never done before. We can talk about other people behind their back. That's something that we can do in the human environment. But you do that on social media, you're broadcasting it. And we think maybe if we're making some negative comments about a famous person - if we did that conversation amongst ourselves, they'd never see it. Social media - they can see it. So we get people being incredibly unpleasant on social media because clearly our brains aren't wired for that. We're not really thinking through. We're not all nasty people, but it brings out the worst in us. 

Christian Hunt: And so lots of the things in our environment nowadays, we're not equipped to deal with. Jet lag would be a really good example. It illustrates the fact that our brains are not made for getting on planes and flying to a different time zone - we were never built for that, and so we get tired as a result. There are plenty of other examples where our cognitive processes let us down because they're not built for that sort of decision-making. 

Perry Carpenter: That's an interesting thought. There are so many things in our world today that are just new. Technology and innovation seem to be accelerating at such a rapid pace that it's hard to keep up, and that opens up new risks: the risks associated with not adapting and being woefully behind the times, and the risk of not understanding how to use a system or implement a new product. And those small mistakes that we make because we fail to adapt can have ripple effects, hardening into new habits - bad habits, that is. We can also create new security problems or make existing problems worse. We can miscommunicate and more. I contacted Arun Vishwanath to briefly explore those issues. 

Arun Vishwanath: How do we assess risk? Well, the problem with risk assessment is that the entire technology is built to make us think that things are not risky. 

Perry Carpenter: Arun is an expert in human cybervulnerability and has been researching it, and writing about it as an academic, for close to two decades. Add to that, for about the past decade, Arun has been involved in the intersection of technology, security and policy. In my discussion with Arun, he mentioned something very interesting about why we overlook so much risk as it relates to technology. It's because those inherent risks are increasingly being hidden from us, both intentionally and unintentionally. And that hiding deepens as computing interfaces continue to streamline and simplify. Arun spoke about the evolution of computing and user interfaces, and he introduced me to a new word: skeuomorphism. 

Arun Vishwanath: They call it skeuomorphism. We came up with fancy words for it, right? 

Perry Carpenter: Which is where you take a user interface element and design it to look like its real-world counterpart. A good example of that would be using an envelope to represent email or a trash can to represent deleting something. You get the idea. Now, the problem with all of that is that any time technology is simplified, certain things are hidden - typically to hide complexity, but that hides risk, too. 

Arun Vishwanath: We use these visuals to essentially tell that lay user, and even the sophisticated user, that, hey, what you're doing here is no different than doing this. 

Perry Carpenter: If we think about skeuomorphism as it relates to email, when you delete something from your inbox, it may go to your trash. And then when you empty your trash, it shows you that your trash can is empty. But what's going on on the server? Well, in all likelihood, that message persists. You've not actually gotten rid of it. So there's a risk that any information in that email, sensitive or otherwise, can still be accessed by someone or something. 
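
To see how far the trash-can metaphor sits from the underlying protocol, here's a minimal sketch using Python's standard imaplib. The server hostname, account and search subject are placeholder values, and the comments describe general IMAP behavior rather than any one provider's retention policy:

```python
import imaplib

# Connect to the mail server and open the inbox read-write.
# (Hostname and credentials are placeholders for illustration.)
conn = imaplib.IMAP4_SSL("imap.example.com")
conn.login("user@example.com", "app-password")
conn.select("INBOX")

# "Deleting" a message at the protocol level just sets a \Deleted flag;
# the message is still sitting on the server, flag and all.
typ, data = conn.search(None, '(SUBJECT "password reset")')
for num in data[0].split():
    conn.store(num, "+FLAGS", "\\Deleted")

# Only an explicit EXPUNGE asks the server to actually remove the
# flagged messages from this mailbox - and what happens to copies
# beyond this mailbox is entirely up to the server.
conn.expunge()
conn.logout()
```

Even after the expunge, many providers keep copies in server-side backups or archives for some period - exactly the kind of reality that the empty trash can icon papers over.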

Arun Vishwanath: So your risk perception is actually reduced. Your perceptions of risk are reduced. It is disarming, whether you like it or not, and this disarming is routinized. In other words, it's consistently played over, let's say, a whole generation of us. 

Perry Carpenter: The fact that this simplification and reduction of our perception of risk has become routinized creates problems, because with that consistency, the thinking gets codified into the symbol. And nobody even thinks about the risk anymore. It's just built in. And when nobody considers the risk, well, then they can become even sloppier and engage in even riskier behaviors because their assumptions are faulty. This is a big danger for our generation and for the next generation and for the next. 

Arun Vishwanath: What you're doing, mathematically, is exalting - and I'm using the word exalting to mean geometrically increasing - the probability of trust with each layer, right? So skeuomorphism is one of these things, but there are other things, right? There's the inability to estimate risk. Our inability to estimate risk is something we never look at. Technology risk is so varied - computing risk, for instance - that it's virtually impossible for anyone to know what all the vulnerabilities are. It's just too complicated because there are layers upon layers upon layers of technology built on top of each other. So this, again, forces us to go back and rely on the interface or areas of the interface. Everything is done indirectly. And so this creates another layer of problems, which is you really don't know what could happen because you really can't know what could happen. Nobody can, really. 

Arun Vishwanath: If you remember early email programs, there used to be these buttons that would say file, send, edit. We still see it in a lot of programs on computers. Go to mobile devices. Look at the Yahoo Mail app. There are just buttons that indicate what should be in those places because the presumption is most of us get it. We don't even need to know what it is that has to be done. Because everybody's trying to compress more information into a very small space, there are just a few buttons here, a few buttons there, and the rest is all something that you routinely understand. You made the cognitive connection as far as they're concerned. If you look at how Gmail works in the Gmail app, for instance - and today, you know, you can have a 12-, 13-inch tablet - somebody said, hey, you know, the primary function here is to read the email. It's basically, oh, you got this email; respond to it, because that's what you want to do. 

Arun Vishwanath: Now, notice that some programmer sits within an institution - and let's put those three things together. You've got the institution, the interface and the individual. The institution decided that this is the purpose of my service (ph). The interface was programmed by routinizing and taking advantage of these routines. The individual is basically just reacting to each of them, not thinking it through. Now you have the complete picture. You and I, as, you know, security evangelists, technologists and so on, we're attempting to interject and say, wait a minute. Stop. Think before you do this. Stop. Stop. You're not supposed to just accept that link. Oh, wait a minute; you're not supposed to just trust. What if this is not Microsoft or Gmail or Google or what have you? And that's the problem right there. 

Perry Carpenter: We'll be right back after the break. 

Perry Carpenter: Welcome back. OK, so we've covered what risk is, that each of us perceives various risks somewhat differently, that risk enters the picture when we are overwhelmed by complexity, but also that new risk can be created when we try to streamline and remove complexity. So what's next? Well, we could spend ages talking about how cognitive bias influences our individual and collective risk profiles, but I'm working on a future episode that takes a deep dive into cognitive bias. So for now, let's just stick with the simple statement that cognitive biases do, in fact, influence our individual and collective risk profiles. And those biases can, of course, make us more risk averse, risk tolerant or risk blind. 

Perry Carpenter: Oh, I know what we should cover next. I think it's time to talk about how we can understand risk, model various risks and improve our risk postures. And that brings us to a couple of key risk concepts and frameworks. I'll start with some terms that you may hear tossed around on TV and at conferences where geeks talk all things geeky related to risk. And the first concept is the black swan, or what's known as a black swan event. The phrase black swan was popularized by Nassim Nicholas Taleb - a writer, statistician, options trader and risk analyst - and it refers to events that arrive suddenly, take us by surprise and have a catastrophic impact. To qualify as a black swan, the event must be something completely unexpected, even if that unexpectedness is because everyone was blind to the possibility due to a collective cognitive bias. 

Perry Carpenter: The origin of the phrase black swan actually dates all the way back to a Latin expression from the second century, when a Roman satirist of the time metaphorically referred to something as being, quote, "a rare bird in the land and very much like a black swan," unquote. And here's the key. At that time, the common understanding and the understanding of that writer was that black swans didn't exist. They had only ever seen white swans. And so the phrase black swan was used to imply something that was impossible. And it was used in this way all the way through the 16th century. 

Perry Carpenter: But - and here's where our new use of the phrase comes in - all of those assumptions about the nonexistence of black swans were shattered in 1697, when a group of Dutch explorers in Western Australia stumbled across - you guessed it - black swans. Minds were blown. Expectations were reset. And now we have this cool phrase, black swan, that people use as a way of labeling unexpected events that fly in the face of what people believed was possible. And yeah, there will always be another group of people who can look through the rearview mirror of history and put together a narrative about how everyone should have known better. But those people are usually just know-it-alls who didn't see it coming either. 

Perry Carpenter: So now, let's move from black swans to gray rhinos. The gray rhino is a concept coined by our guest Michele Wucker. And the difference between a black swan and a gray rhino is that you can really only talk about black swans as something in the rearview mirror because, by definition, you shouldn't be able to see them until you've smacked straight into that event or are past it. The gray rhino, on the other hand, is something that you can see right in front of you looming. You can acknowledge it, prepare for it and deal with it if you're willing. Here's Michele. 

Michele Wucker: One of the challenges I found in talking about gray rhinos is that people want to talk about it like the black swan, which you can only talk about in hindsight. So they'll come up to me, say was that a gray rhino? Was that a gray rhino? And I keep saying, well, I can talk about it as an example, but I really want to be talking about what's in front of us, you know, what's over the dashboard, not in the rearview mirror. And so even just the way we talk about any of these things, the way the black swan caught on and the way people are so resistant to looking at what's in front of them, goes to some of the biases in the back of our heads that get in the way of doing things as organizations, you know, as policymakers and then as individuals. 

Perry Carpenter: But black swans and gray rhinos aren't the only risk-related concepts you can impress your friends with at your next cocktail party. It seems like there are nearly endless ways that people and organizations try to create risk equations or to model risk. We don't have time to go into all of them. But if you're interested, I'll steer you to the FAIR framework. FAIR, spelled F-A-I-R, is an acronym that stands for Factor Analysis of Information Risk. It's all about helping organizations think through the types of risks that they face, the assets that they have, the types of loss that the organization could suffer and the probabilities associated with each. I'll have some links in the show notes if you want to check it out. 
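
At its top level, FAIR factors risk into a loss event frequency and a loss magnitude. Here's a minimal sketch of that factoring in Python - the function name and the sample numbers are illustrative assumptions, not part of the FAIR standard itself:

```python
# A minimal, illustrative sketch of FAIR's top-level factoring:
# risk (expected annualized loss) = loss event frequency x loss magnitude,
# where loss event frequency = threat event frequency x vulnerability.
# All numbers below are made up for illustration.

def annualized_loss_exposure(threat_event_frequency: float,
                             vulnerability: float,
                             loss_magnitude: float) -> float:
    """Expected loss per year under FAIR's top-level factoring.

    threat_event_frequency: expected threat events per year
    vulnerability: probability a threat event becomes a loss event (0..1)
    loss_magnitude: expected loss per loss event, in dollars
    """
    loss_event_frequency = threat_event_frequency * vulnerability
    return loss_event_frequency * loss_magnitude

# e.g., 40 phishing campaigns a year, a 5% chance each one succeeds,
# $250,000 expected cost per successful compromise:
print(annualized_loss_exposure(40, 0.05, 250_000))  # -> 500000.0
```

The real framework decomposes each of those factors much further, but even this toy version forces a useful discipline: separating how often something happens from how much it costs when it does.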

Perry Carpenter: And of course, there are some simple equations that are commonly used to calculate risk. One is that risk is the combination of threat, the level of vulnerability and consequence. Another is that risk equals the threat times the level of the vulnerability. Yet another is that risk equals the consequence times the likelihood. And another I came across is that risk equals the impact times the probability divided by the cost. 
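
To see how loosely those formulas agree with one another, here's a quick sketch that runs one invented scenario through each of them. It treats the first "combination" as a product, which is one common reading; all of the numbers and scales are made up for illustration:

```python
# One illustrative scenario run through each of those textbook formulas.
# The scales differ, so scores are only comparable within a single model.

threat = 0.6         # how active and capable the threat is (0..1)
vulnerability = 0.4  # how exposed we are to that threat (0..1)
consequence = 8      # damage if it happens (1..10)
likelihood = 0.24    # probability it happens this year
impact = 8           # synonym for consequence in the last formula
cost = 2             # relative cost of mitigating (used as a divisor)

risk_v1 = threat * vulnerability * consequence  # threat x vulnerability x consequence
risk_v2 = threat * vulnerability                # threat x vulnerability
risk_v3 = consequence * likelihood              # consequence x likelihood
risk_v4 = impact * likelihood / cost            # impact x probability / cost

print(round(risk_v1, 2), round(risk_v2, 2),
      round(risk_v3, 2), round(risk_v4, 2))     # 1.92 0.24 1.92 0.96
```

The point isn't the scores themselves - it's that each formula is only meaningful inside its own model, which is why the next equation takes such a different angle.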

Perry Carpenter: And lastly, Peter Sandman, one of the gurus of risk perception, proposed this equation for our perception of risk. He says that risk equals hazard plus outrage. But I think you get the picture. All of these are basically looking at the types of threats to an organization or person, the damage that that threat could cause and the probability or the likelihood of that thing happening. And I think all of this reinforces the point that risks or the levels of risk are highly individual. It goes back to that idea of the risk fingerprint. We may all face some of the same threats, but our exposure to those threats can differ wildly, or our mitigation strategies for each of those threats can be different. In fact, there are even some risk models that are based on someone's personality. 

Michele Wucker: In "You Are What You Risk," I go through case studies and things about, here's your innate personality. There's a tool called the Risk Type Compass, developed in the U.K., that I absolutely love, that helps you to determine whether you're calm or anxious when you face a risk and how impulsive or methodical you are. And it's based on the Big Five personality traits and years of psychometric testing. That's a really powerful test. 

Michele Wucker: There are some questions that I tend to ask people. What's the biggest risk you've ever taken? And that really goes to the heart of what everybody should be asking themselves, you know: What's important to you? What are you afraid to lose? What are you not willing to lose under any circumstances? 

Michele Wucker: Other factors involve the people around you, your own demographic background, the kind of company you work for. Are you in a startup where they, you know, move fast and break things? Or are you in a legacy company where the biggest risk is to not do things the way they've always been done? Who do you surround yourself with? Do you have people who have the expertise and the character to help you to find the right answer to the questions that you're facing? When you're a teenager, are you running with the crowd who's going to get you drunk and crash cars? Or are you running with the crowd who's going to science fairs? 

Michele Wucker: Look at your environment. So there are lots of things to pull together in your life. You know, ask yourself about your parents. My parents had very different attitudes toward risk when I was growing up. There are little questions you can ask, too, that hold really powerful clues. I was with a group of people in insurance and asked them, how long do you leave to get to the airport? - which just got the whole room chattering away. I mean, we're in Chicago, of course, and getting to O'Hare - don't even get me started. And the other thing that came out is this concept of risk empathy, which has really resonated with people. And that's that once you understand your own risk fingerprint, you spend some time trying to understand the risk fingerprints of the people around you. Thinking about the risk fingerprints of the people around you is so important for team-building, for decision-making, for talking with clients and customers or investors and for thinking about the safety standards you're going to put in place for clients. It's a hugely important concept. It's not just about your own risk fingerprint, but about your organization's, about your community's, about the risk fingerprints of the people you love. 

Perry Carpenter: In this last section, let's think about the concept of individualized risk, that risk fingerprint and how we can move in and help people make better decisions. What does that look like? I'll turn first to Christian Hunt. 

Christian Hunt: All of this stuff, I think, comes together to say, if it is a human being that we're trying to influence, what we need to do is work out what is likely to be driving that human's decision-making and what are the risks posed by that. And then we can start to see solutions that might come in a multitude of different forms. It might come in terms of the regime that we operate. It might come in the technology that we use. It might come in the way we frame and structure something. Language - really, really important. Think about the word officer. It implies a certain authority, but in the wrong context, it also sends a signal that you've got someone who's highly bureaucratic - a little jumped-up official who's telling me what to do. And our perception of that will change according to the environment. So this idea of changing people's perception of the situation is how you change the way they think about it. 

Perry Carpenter: I'm wondering if you can give us a concrete example of what it looks like to change the way that somebody is thinking about a particular situation. 

Christian Hunt: When we think about solving these problems, we very often think that if we need people to behave in a particular way, we have to change the reality of their experience. So if something is irritating people, if they're not buying a particular product, if they're not following a particular rule, then we need to redesign the product. But what we can also do is change their perception of it. If you can't change the reality - there's a rule that we just need people to follow, or there's a physical reality that would be really expensive to change - then rather than deploying logic and solving the problem, we deploy psychologic. 

Christian Hunt: And the example I like to talk about is the London Underground. So this is the railway system in London, which, as the name implies, is underground. And they look at customer satisfaction as a key metric. It's a publicly owned railway system, so customer satisfaction matters to them. And they realized that customers were getting irritated and the customer satisfaction ratings were very, very low. Now, normally, the traditional logic for increasing customer satisfaction on a railway would involve changing the physical architecture. So put on more frequent trains - that costs you money. Maybe put better air conditioning in the trains. You make them more comfortable. You do a load of things in the physical world. For an underground railway, that is criminally expensive because you'd have to dig new tunnels, build new - really, really expensive. 

Christian Hunt: So what can you do to increase customer satisfaction? Well, don't change reality. Change their perception of reality. And they started to look at what irritates people. One of the things that irritated people was not knowing what was going on. So if I get to a platform and I don't know when the next train is coming, I'm very frustrated by that fact. And they realized that within about 90 seconds of not knowing what's going on, you start to get annoyed. And you will wait 10 minutes, they've discovered, for another train if you know that one is coming. Bear in mind, this is a big city, right? So people are used to being able to move around quickly. So there's a frustration that kicks in at 90 seconds. So what did they do? They put up indicators that told you when the next train was coming, instantly removing people's concern about what's going on. You have reliable information that tells you when the next train is coming, and you relax. 

Christian Hunt: And that solution didn't involve changing anything about the trains. It just gave people the information, the lack of which was causing the dissatisfaction and the concern. And so it's a really smart psychological solution. Very, very cheap to put indicator boards up to tell people when the train is coming. Very expensive to change reality. You get the same lift in customer satisfaction because what was causing the dissatisfaction wasn't actually all of those things you might think. It was the fact that people didn't know when the next train was coming. Remove that stress, and you solve the problem. That is a behavioral solution to a problem that you might otherwise have thought you could only solve through physical means. One of the things I love about behavioral science is that the moment you open yourself up to the idea that there are other ways we can solve a problem - the moment we move out of the logical and into the psychological - we can suddenly see alternative ways to do things. 

Perry Carpenter: There's one other perspective that I want to bring to today's show, and that's the perspective of the chief information security officer. Over the past decade or so, the CISO has moved from being somebody who thinks about technology from a pure security, keep-the-bad-guys-out perspective to somebody who understands the various pressures of risk within an organization and all the ways risk integrates with the different business levers and business pressures of that organization. And that's a subtly different mindset that we need to understand. So I've invited Matt Stamper, who is one of the authors of the "CISO Desk Reference Guide," to walk us through the chief information security officer's perspective on risk and how CISOs can bring good risk management hygiene to an organization. 

Matt Stamper: If the CISO is fundamentally hiding in the office - and I'm using that word pointedly - they're going to get a very limited view of risk to the organization. If I'm a CISO, one of the things that I have to do is meet with my sales and marketing teams. I have to meet with legal. If I've got a manufacturing facility, I want to understand that. If I've got folks that are managing facilities infrastructure - access controls into a data center, for example - or any other line of business within the organization, let alone subsidiaries and all that, you have to do that. If a CISO isn't out there proactively getting a decent view of the ground truth of the organization, they, as well as the organization, will be blindsided by risk. 

Matt Stamper: One of the most critical things that I think CISOs need to do is take all of that fact pattern that they're gathering up and put it into an enterprise risk register. And that's where you can take digital risk, you can take technology risk, you can take any other forms of risk, if you will, and translate it into what are the impacts to our organization's operations? What are the impacts to our reputation, to our finances, to our ability to comply with regulations and contractual obligations? 

Matt Stamper: The CISO, when they're not actively engaged with the business, is really doing a disservice. I think it's one of the most foundational elements of them all - being seen in the company, talking to employees, talking to stakeholders, to the board, to executive leaders, even to partners and third parties, and really kind of validating, understanding and interrogating reality, trying to find areas where certain types of risk just haven't been thought about or contemplated. And again, that risk register is a tool that allows the executive leadership team to start triaging and prioritizing risk, and they can make those prioritizations based on, fundamentally, their context of the business and its enterprise impact. 

Matt Stamper: It's, hey, we've got some risks here. We have some business decisions to make about the risk, and we have a number of options to address them. We can transfer that risk to a service provider. We can insure against that risk. We can mitigate it with controls, be they internal or external controls, again, with third party support. Or we can accept that risk and eventually get to some form of residual risk that is appropriate to the organization. 

Matt Stamper: The one thing that an organization should never do - and it's a corporate sin or an organizational sin - is ignore risk. You're effectively breaking your fiduciary responsibility to your organization when you look the other way, when you don't tackle these things and address them. And risk should be prioritized. Not all risks are created equal. It's a bit of a "Sophie's Choice": I'm going to choose to love (ph) this risk a lot more and deal with it and focus on it than this other class and form of risk. 
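
Pulling Matt's points together - a register of risks gathered from across the business, triaged by priority, each carrying one of those four treatment options - here's a minimal sketch of what that might look like in code. The schema, the 1-to-5 scales, the likelihood-times-impact scoring and the sample entry are all illustrative assumptions, not a standard and not anything prescribed in the "CISO Desk Reference Guide":

```python
# A minimal, illustrative sketch of one enterprise risk register entry
# and a simple triage pass over the register.
from dataclasses import dataclass
from enum import Enum

class Treatment(Enum):
    TRANSFER = "transfer to a service provider"
    INSURE = "insure against it"
    MITIGATE = "mitigate with controls"
    ACCEPT = "accept the residual risk"

@dataclass
class RiskRegisterEntry:
    description: str
    source: str         # e.g., which business conversation surfaced it
    impacts: list[str]  # operations, reputation, finances, compliance...
    likelihood: int     # 1 (rare) .. 5 (near certain)
    impact: int         # 1 (minor) .. 5 (severe)
    treatment: Treatment
    owner: str

    @property
    def priority(self) -> int:
        # One simple likelihood-x-impact score for triage.
        return self.likelihood * self.impact

register = [
    RiskRegisterEntry(
        description="Unpatched VPN appliance exposed to the internet",
        source="IT infrastructure walkthrough",
        impacts=["operations", "compliance"],
        likelihood=4, impact=5,
        treatment=Treatment.MITIGATE,
        owner="CISO",
    ),
]

# Triage: highest-priority risks first, so leadership decides in order.
for entry in sorted(register, key=lambda e: e.priority, reverse=True):
    print(entry.priority, entry.description, "->", entry.treatment.value)
```

In practice, the scoring would come from the organization's own risk methodology; the structure is the point - every risk gets captured, owned and explicitly decided on rather than ignored.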

Perry Carpenter: Well, it looks like it's almost time to wrap up today's show. I'm going to give Christian Hunt the last word, and then I'll be back to summarize with a few closing thoughts. 

Christian Hunt: Our ancestors and we have managed to make it this far, so there's an illustration: As a species, we've done pretty well from a risk-management perspective. We didn't get wiped out, so basic survival instincts - pretty good. You can always find examples - think of the Darwin Awards - of people doing dumb stuff. But in any society, you're going to have these outliers, people who are extreme examples of bad decision-making. But generally speaking, we're equipped with a reasonably good set of skills that has got us to where we are today. 

Perry Carpenter: In the end, what we find is that risk is fundamental to human nature. Our perception of risk is extremely individualized. It's shaped by our surroundings, our culture and our lifetime of experiences. And we're not necessarily rational in how we evaluate risk, but we're still here. And we have tools and processes to help us uncover our blind spots, combat our biases and mitigate at least some of the risks posed by the various threats that exist. We'll never be able to mitigate every threat out there because to live is to experience risk and also to be an agent of risk, but we can find ways to see risks more clearly, measure risks, manage risks and thrive in our businesses and enjoy our lives. And that's what managing risk is really all about. 

Perry Carpenter: Thanks so much for listening. And thank you to my guests, Michele Wucker, Christian Hunt, Arun Vishwanath and Matt Stamper. I filled the show notes with links to the references that we mentioned today, including Michele's books "The Gray Rhino" and "You Are What You Risk." And I also included a ton of other great resources that I think you'll find valuable, so be sure to check those out. 

Perry Carpenter: If you've been enjoying "8th Layer Insights," please take just a couple of seconds to head over to Apple Podcasts, rate the show and consider leaving a review. That does a ton to help. You can also help by posting about the show on social media, recommending it within your social network and maybe even finding an episode to recommend to a friend or a family member. And if you haven't yet, go ahead and subscribe or follow wherever you like to get your podcasts. Lastly, if you want to connect with me, feel free to reach out on LinkedIn or Twitter or Clubhouse. I'd be happy to connect with you. Until next time, thank you so much. I'm Perry Carpenter, signing off.