8th Layer Insights 5.9.23
Ep 32 | 5.9.23

What Cybersecurity Pros Can Learn from Star Wars


Perry Carpenter: Hi, I'm Perry Carpenter, and you're listening to 8th Layer Insights. Okay. This episode is going to be released on May 9th, 2023. As I record this, I remember back to last week when we had May the 4th, which many people associate with Star Wars. May the 4th is also World Password Day. And so, for this episode, I thought that it would be fun for us to take a look at the intersection between Star Wars and cybersecurity and specifically look at the lessons that we can learn from Star Wars as cybersecurity professionals. But before we get into the main part of the episode, I want to take a minute and just answer a question that came in from several of you that listened to Episode 1. And this is something I was going to try to skirt by, but I got enough questions that I think we should just answer it directly. Lots of people asked where Carl has been. For those that are new to the show, Carl is a sound engineer that I've been working with for several years. I also have another person on staff. His name is Mason. We do another show together. And for whatever reason, Carl said that he thought it was time to move on. He actually just kind of did it in a huff one day when we were in a production meeting talking about different things. He got up and said, I'm out of here. And then he said a lot of other stuff. It was not pretty. But I just wanted to let you know that so that we don't have to deal with that anymore. It's actually hard to think about right now. But wherever Carl is, I do wish him the best. Okay. Let's get back to today's episode. There are actually a couple of interesting articles that I want to draw from as we get started. The first one is an article from Blackpoint, which is a security vendor that released a blog two years ago now about how the Galactic Empire got cybersecurity wrong three times.

[ Telephone Rings ]

Perry Carpenter: Ugh, just a second. Hello? Hello?

Sounds like Darth Vader: Yes, may I speak to Perry Carpenter, please?

Perry Carpenter: This is Perry.

Sounds like Darth Vader: Perry, I'm calling for a very particular purpose.

Perry Carpenter: Oh?

Sounds like Darth Vader: I was referred to you directly by my apprentice, Carl.

Perry Carpenter: Ah, well, that explains a lot. So, what is this message, and why no scary title like Darth Carl or Carl Sidious?

Sounds like Darth Vader: No, no. It's just Carl.

Perry Carpenter: Really?

Sounds like Darth Vader: We've discovered that the simple name Carl has the power to strike fear in the hearts of anyone who stands against us. Anything more complicated than that has the effect of diluting the potency, or as we refer to it, the terror quotient, T.Q. for short in our training programs.

Perry Carpenter: Ah, I guess that makes sense. Carl kind of always had that effect on me as well.

Sounds like Darth Vader: Yes. The name Carl strikes fear into the hearts, minds, and bowels of --

Perry Carpenter: Oh, okay. I've heard enough. What's the message?

Sounds like Darth Vader: We've been trying to reach you about your car's extended warranty, Perry.

Perry Carpenter: Yeah?

Sounds like Darth Vader: We find your car's lack of extended warranty just --

Perry Carpenter: I'm out of here. Tell Carl he's missed. Bye.

[ Hangs up Phone ]

Perry Carpenter: Okay. That was weird. Let's get back to it. This article is from the blog of a security company named Blackpoint. And the title of it is "How the Galactic Empire Got Cybersecurity Wrong All Three Times." And so what I like about this article is that they talk about a number of mistakes that the Galactic Empire made. And in that, they list things like poor protection of sensitive data, lack of audits and penetration testing, deliberate negligence, lack of intrusion detection systems, lack of network segmentation, and a few other things. They even get into lack of cybersecurity education in their staff. I'll put the link to this article in the show notes. But a lot of these are also in another article that I want to touch on, and this actually isn't an article. It's a LinkedIn post from Gary Hibberd. I'll put this in the show notes as well. This is from about three months ago. And I'll actually read it. He says, "Cybersecurity is everywhere, even in Star Wars. Stay with me. Allow me to explain. In 1977, we saw Princess Leia furtively insert the plans for the Death Star into R2-D2. Lesson: Security classification is required, and the Empire should have encrypted that file. Almost every Star Wars movie sees R2-D2 plug himself into a USB port and instantly have access to the entire ship or station. Lesson: The Empire should have implemented a segmented network with role-based access controls and locked down ports to unknown devices. 'It's an older code, but it checks out' was a phrase uttered in 1983's Return of the Jedi, allowing the rebels to land on Endor. Lesson: Change your passwords, and don't trust out-of-date login credentials. Quote: 'If you only knew the power of the dark side.' Words spoken by Darth Vader. Lesson: The only difference between an ethical hacker and a hacker is the word ethical. Many start with the best intentions but can be lured to the dark side. 
We need to train not just technical hacking skills but the ethics behind it too. The Death Star was destroyed when a flaw was found in their physical security. Lesson: Think like an attacker. Where are you vulnerable? It might be the physical aspects of your business that lets you down. Luke switched off his guidance system to take the final shot, which destroyed the Death Star. Lesson: Technology isn't everything. Training and trusting your intuition counts a lot. There's a reason why we say trust your gut because it's known as your second brain. Train people well, and don't be over-reliant on technology to guide you. Finally, the wisdom of Yoda that has been with us from the start. His words go farther and wider than cybersecurity. He says, 'Do or do not. There is no try.' Lesson: Commit to doing something or don't. The choice is yours. Yes, you might fail, but if you believe in what you're doing and do it for the right reasons, then success will follow. There you have it. Star Wars is a cybersecurity franchise. There are many more lessons in Star Wars, but this post is long enough. So tell me in the comments, what did I miss?" And that's the end of the post, and I think enough of an intro for us to go ahead and do the theme song and then come back and get to our interview for today. On today's show, Star Wars, Threat Modeling, and More. An interview with Adam Shostack. Welcome to 8th Layer Insights. This podcast is a multidisciplinary exploration into the complexities of human nature and how those complexities impact everything from why we think the things that we think to why we do the things that we do and how we can all make better decisions every day. This is 8th Layer Insights, Season 4, Episode 2. I'm Perry Carpenter. Welcome back. Today we are going to speak with Adam Shostack. Adam just released a great new book called "Threats: What Every Engineer Should Learn from Star Wars." 
I love this book because it expands on a topic that Adam is known to be really the foremost expert in, which is threat modeling. And we need to understand threat modeling. I could spend a couple minutes outlining all the reasons why we do, from the systems that we have to the new ways that we are connected to the world to all the interdependencies between those systems, but really, Adam is the expert. And he's spent decades talking about this, so I want to give the floor to him right now. So let's go ahead and hear from Adam Shostack.

Adam Shostack: I'm Adam Shostack, and I'm the author of "Threats: What Every Engineer Should Learn from Star Wars."

Perry Carpenter: Let's just dive straight into that. You've been using Star Wars as one of these ways to kind of get across some of the basic concepts of threat modeling for a while. So why don't we start with when you talk about threat modeling, what specifically are you talking about as far as the discipline or the outcome? And then where does Star Wars come in?

Adam Shostack: Threat modeling is the measure twice, cut once of software or operations. It's thinking in advance about what it is we're going to go do and what can go wrong so that we can do something about the problems before we've built out something where the problem is sort of integral. And when I say thinking in advance, I've done this for everything from literally Microsoft Windows down to individual features we're adding this sprint, right? So it doesn't have to be big and heavyweight. But as long as we're asking: what are we working on? what can go wrong? what are we going to do about it? we're threat modeling. And I'd like to add a fourth question of did we do a good job? I think this practice is hard to get into. And so making time to reflect on what you've done, and did you do it well, is incredibly important for making it part of how you work. And so, Star Wars, I mean, how can you not love Star Wars, right? It really came about while I was working on my previous book, "Threat Modeling: Designing for Security." And it's a big book, and I needed to make it accessible. I wanted to make it fun. And fun is a really big thing for me in security because when we're having fun, we're learning, we're exploring ideas, we're considering possibilities. And when we're terrified or angry, we don't work as well. And so over the years, I've done a couple of things that are really focused on, how do we have more fun as we do this? So the Star Wars thing, the Elevation of Privilege card deck, all of these are here to help people have fun with a serious goal of let's help them do the work because they want to do the work, and we create this virtuous circle.
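The four questions Adam lists can be captured in something as lightweight as a shared record per feature. Here's a minimal illustrative sketch; the class, field names, and example threats are invented for illustration, not taken from Adam's book.

```python
# Illustrative sketch: the four threat modeling questions as a
# lightweight record a team might fill in per feature.
from dataclasses import dataclass, field

@dataclass
class ThreatModel:
    working_on: str                                       # 1. What are we working on?
    what_can_go_wrong: list = field(default_factory=list)  # 2. What can go wrong?
    mitigations: dict = field(default_factory=dict)        # 3. threat -> what we'll do
    retrospective: str = ""                                # 4. Did we do a good job?

    def open_threats(self):
        # Threats identified but not yet matched with a mitigation.
        return [t for t in self.what_can_go_wrong if t not in self.mitigations]

tm = ThreatModel(working_on="login feature for this sprint")
tm.what_can_go_wrong += ["credential stuffing", "session token theft"]
tm.mitigations["credential stuffing"] = "rate limiting + breached-password check"
print(tm.open_threats())  # -> ['session token theft']
```

The structure is beside the point; what matters, as Adam says, is asking the questions consistently, even for small features.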

Perry Carpenter: Yeah. I love that. So one of the phrases that I heard you say in an earlier presentation was that threat modeling and your sense of purpose, when you think about this, is to engineer a consistent lack of surprise. There's something about that phrase that I think is just amazing because it's not that bad things can't happen. But when that bad thing happens, it's like, oh, we saw that coming. We may not have thought that it was going to come right now. But we can understand what's going on, what the net impact of it is going to be, and some of the ways to prevent the worst potential outcome of that thing happening. Now we know what we can do with that as well.

Adam Shostack: You know, so many security people think of their job as preventing bad things from happening. It's not the way the businesspeople think about it. They're like, okay, we're operating a retail store. We're going to have shrinkage of 1 to 5% of our inventory because people are going to put it under their coats and walk out. And the alternative is expensive. And so when I think about the prevention of surprises, what we want to have happen is we want to be able to say, this is what we think will happen. Is that the 1 to 5% shrinkage, or is that the Death Star then blows up, right?

Perry Carpenter: Right.

Adam Shostack: One of those we should engineer to prevent. We should put some netting in the exhaust shaft. We should put a steel plate over it. We should put some blowout baffles into the system so if the drive malfunctions, I mean, it's a really big space drive. However the heck it works. It's the only ship in Star Wars that doesn't have those big glowing cones on the back. So clearly, it's brand-new technology. And I just realized that. I don't know where this is going to go, but it's brand-new technology. No one's ever built a space drive like that. We should expect maybe something is going to go wrong. And we should engineer for it because once you've built the -- once you've built your battle station, you don't want to be cutting out swaths of, you know, detention blocks and space lasers and tractor beams and all the other stuff that's there to try and retrofit blowout panels into the thing. You want to think about that in advance so that you can design the thing in a way that delivers on time and on budget.

Perry Carpenter: Yeah. I think that's a really good point. I want to ask, though, when you're doing threat modeling and when you're trying to think of these things, how do you get to that, I'm just going to call it an X factor or a chaos factor? Because obviously when they were building the Death Star, and they had this trench, you know, around the circumference of it, they never thought, oh wow, somebody might, you know, fly a spaceship in there and fire lasers into it. So that means that the group of people that may have been even doing some very light threat modeling around, you know, what's our exposure with this? What's, you know, what are the tolerances? What are everything else? They -- that was a concept that they had never even considered. How do you make sure that those things don't get missed? Or how do you reduce the possibility of those things being missed?

Adam Shostack: So let me do a little bit of a Jedi mind trick. Those aren't the threats you're looking for. And I'm joking, and I'm serious.

Perry Carpenter: Yeah.

Adam Shostack: The first threats that we want to think about are the ones that are easy to predict. Phishing, right? Somebody's going to send an email with malicious content. Or maybe someone will send an iMessage or a Facebook message or something else, right? We can generalize. We can model from the common attacks we see to their variants. And if we haven't done that, we don't need to think about the complex, unpredictable threats because attackers don't need to go there. They don't need to surprise us when the stuff they're already doing works. And so can you get to the small one-man fighter being a problem? There's a lovely little video, "The Death Star Architect Speaks Out." And he says, "Nobody told me I had to design for space wizards."

Perry Carpenter: [Laughs] That's awesome.

Adam Shostack: It's great. And it's true, right? The Empire believes their fire has gone out of the universe. You are the last of their kind. Or they don't believe that these things could happen. That this is just so big and intimidating that no one would ever try to attack it. Or, you know, who knows what they believe? It's a movie.

Perry Carpenter: Right. Right.

Adam Shostack: We can build -- It's great world building. But when we're designing systems, when we're developing systems, we can think about problems based on our own lives. You know, you and I had a conversation about audio quality as we were getting going based on our experience of recording podcasts.

Perry Carpenter: Right.

Adam Shostack: You know, we didn't say, gosh, is that whatever brand of microphone that you've got there going to have this problem? We don't need to go there. We can say, oh, is there popping? Are you clipping? In so many of our systems, nobody ever does that. And so the motivating principle behind this book is not that every engineer needs to be perfect, but that every engineer should know the patterns that we see year on year, decade on decade impacting our systems so that they're not a surprise when they build their systems.

Perry Carpenter: Yeah, that makes a lot of sense. That makes a ton of sense, actually, because it's so freaking practical. Let's talk about history for a few minutes. Give a little bit of the relevant background about the life that you've had over the past couple decades of really talking about this and the places that you've done that in.

Adam Shostack: So, I got my start in AppSec 25 years ago. And I now have permission to say that Fidelity was working on AppSec 25 years ago. And they brought me in to help do code reviews in their firewall group. And so I did that. And I was working on, how do we secure these big systems at scale? A few years later, I wrote a paper with Bruce Schneier entitled "Threat Modeling of Smart Cards." And it was good stuff. And it was me and Bruce being experts, thinking about the problem, sort of rubbing our chin, and saying, what could go wrong? And at about the same time, unbeknownst to me, two fellows -- I'm giving you the history, including some of mine, here.

Perry Carpenter: Yeah, that's great.

Adam Shostack: Loren Kohnfelder and Praerit Garg were at Microsoft, and they wrote a paper entitled "The Threats to Our Products." And they were organizing how you think about this because they were thinking of even a different scale than I was. How does Microsoft threat model all of Microsoft's products? It's more than one person can do. So time goes by. I'm working at a startup. I'm doing another startup. It's great, and one of them doesn't go so well. I figure I need a stable job. I lucked out. Luck favors the prepared mind, but I lucked out. And on my first day at Microsoft, my boss said to me, "Adam, this threat modeling thing is broken. Go fix it." Thanks. Thanks, Eric. I appreciate that. It was a great time to be at Microsoft when investments in security were large and important. And so I went and organized a lot of the thinking. And then the threat modeling book, which came out in 2014, was actually the easiest way to share everything that I had learned.

Perry Carpenter: Nice.

Adam Shostack: About a year later, I left Microsoft. Was working on another startup, and people were calling me up and saying, hey, Adam, can you help us with this? Said, okay, if the phone's ringing, people want me to help them with this problem, I should go do that. And so, over the last few years, a lot of my business has been in training. And the biggest question I get is, where do I go to learn more about this threat or that threat? And as I trained engineers, people outside of the cybersecurity realm, I realized that there was no smooth learning path for someone to understand the things that you and I have learned by osmosis.

Perry Carpenter: Mm-hmm.

Adam Shostack: And so this latest book is really, I don't think it's the culmination of the journey, but it's the latest milestone in my journey to help people develop more securely out of the gate to take the things that we as experts have learned. And, you know, I refine them, I make them -- I try to make them better than when I come to them.

Perry Carpenter: Yeah.

Adam Shostack: But it's how do we get the knowledge out there to everyone?

Perry Carpenter: So when you have people coming, and they're saying, hey, you know, tell me about this threat and that threat, how much of those questions are really relevant versus you wanting to redirect them and say, now let me tell you how to think about all potential threats or to develop a mindset where you're able to ferret out different threat categories versus the threat of the day?

Adam Shostack: I think that -- I think knowing the cubby holes, knowing the patterns is really, really helpful to people. I mentioned the "Threats to Our Products" paper, which introduced STRIDE; that was 1999. And I still use STRIDE every time I teach a course. I use STRIDE every time I threat model something because the details change from day to day, from hour to hour, there's something new happening. But if you think about the threats, if you think about spoofing and tampering as genres, they're rock and roll, they're jazz, new songs come out every day. New CVEs come out every day, but the genres don't change that quickly. And so if we teach people to recognize, hey, that's jazz, or hey, this is French cooking, that's Japanese cooking, it gives them this ability to engage with the world in front of them in smarter, more nuanced ways. And that's a really powerful thing.

Perry Carpenter: For folks that are new to some of the formalized concepts around threat modeling, rather than just a bunch of people in a room trying to say what could possibly go wrong but going through these frameworks that have shown themselves to be valuable over a couple decades, at least now, give us a breakdown on STRIDE and DREAD and any of the other models that are useful for folks to start to do some research on.

Adam Shostack: Sure. So STRIDE stands for spoofing, tampering, repudiation, info disclosure, denial of service, and expansion of authority, traditionally called elevation of privilege. And these threats form the backbone of the book because they're so consistently present. And at the end, we put them together into kill chains, using kill chains proactively rather than retrospectively. DREAD is something that you'll hear a lot. Jason Taylor did a fantastic job coming up with that acronym.
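One way to make the six STRIDE categories concrete is to turn each into a standing question you ask of every component. This is a hypothetical sketch; the prompt wording and the example component are invented, not drawn from the book.

```python
# Hypothetical helper: one brainstorming prompt per STRIDE category,
# applied to a named component. Prompts and names are illustrative.
STRIDE = {
    "Spoofing": "Can someone pretend to be {c}, or pretend to {c}?",
    "Tampering": "Can data that {c} stores or transmits be modified?",
    "Repudiation": "Can an action involving {c} be plausibly denied?",
    "Information disclosure": "Can {c} leak data it should protect?",
    "Denial of service": "Can {c} be made slow or unavailable?",
    "Expansion of authority": "Can a user of {c} gain rights they should not have?",
}

def stride_prompts(component):
    # Produce one question per category for the given component.
    return [f"{category}: {question.format(c=component)}"
            for category, question in STRIDE.items()]

for prompt in stride_prompts("the droid maintenance port"):
    print(prompt)
```

The prompts are the "genres" Adam describes: the specific attacks change daily, but the six questions stay stable enough to ask every time.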

Perry Carpenter: Yeah.

Adam Shostack: I don't make use of DREAD.

Perry Carpenter: Okay.

Adam Shostack: And Jason has a thing called DREAD 2.0, which I think is better. And it's better because there's definitions of the numeric scales that you should be using. And that's really helpful. The reason I'm not a big fan of using it is every organization that's learned how to ship software has learned how to make prioritization decisions. And the closer you align your threat modeling output to that decision-making work, the happier you will be and the more you will get done. Because if, you know, back when I was at Microsoft, we had severity and priority for bugs. And if you add to that, you now have to rate every bug with severity, priority, and DREAD. That's a whole new thing to teach people --

Perry Carpenter: Right.

Adam Shostack: -- when the organization has a working system. And so, you know, I use STRIDE, I use kill chains. I make heavy use of a system called Data Flow Diagrams, DFDs. But -- and this is really important context for everyone listening -- STRIDE and kill chains are a way to answer the question, "What can go wrong?" Data flow diagrams are a way to express what are we working on. DREAD can be a way to help us prioritize the list of things that we might do about something. And with those core questions, it's more important to ask the question consistently, and then to maybe bring to bear different skill sets to answer them. For example, if I'm working on a mobile app, I might express how it works in part with a data flow diagram that shows the app and the other things it talks to, how the data flows. And I might include some message sequence diagrams to say these are the messages that it sends back and forth so we can think about that. Just another way of answering, what are we working on? And when I wrote the threat modeling book, I put kill chains in an experimental section because, you know, I was writing that chapter in, I don't know, 2012 or so.
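The split Adam describes here -- a data flow diagram to answer "what are we working on," STRIDE to answer "what can go wrong" -- can be sketched in a few lines of code. The components below are made up for illustration; the mapping of categories to element kinds follows the common STRIDE-per-element heuristic (external entities get spoofing and repudiation, processes get all six, stores and flows get a subset), which is a starting point rather than a rule.

```python
# "What are we working on?" -- a toy data flow diagram as plain data.
external_entities = ["user"]
processes = ["mobile app", "api gateway"]
data_stores = ["user db"]
flows = [("user", "mobile app"),
         ("mobile app", "api gateway"),
         ("api gateway", "user db")]

# "What can go wrong?" -- STRIDE-per-element assigns each kind of
# element a starting subset of threat categories to investigate.
applicable = {}
for e in external_entities:
    applicable[e] = ["Spoofing", "Repudiation"]
for p in processes:
    applicable[p] = ["Spoofing", "Tampering", "Repudiation",
                     "Information disclosure", "Denial of service",
                     "Elevation of privilege"]
for s in data_stores:
    # Stores that hold logs often also warrant Repudiation.
    applicable[s] = ["Tampering", "Information disclosure",
                     "Denial of service"]
for f in flows:
    applicable[f] = ["Tampering", "Information disclosure",
                     "Denial of service"]

# Every (element, category) pair is a question for the team to answer.
for element, categories in applicable.items():
    for category in categories:
        print(f"{element}: consider {category}")
```

Swapping in DREAD or another scheme to rank the resulting list would answer the third question, "what are we going to do about it," without changing the diagram or the enumeration.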

Perry Carpenter: Yeah.

Adam Shostack: Kill chains had just come out. The team at Lockheed that had created them had published their paper in 2010. And it seemed super promising. But I didn't put it up at the front of the book because I wasn't sure. If I were to do a second edition, it would be at the front of the book.

[ Music ]

Perry Carpenter: I want to, in just a minute, have you give some of your favorite examples from how Star Wars fit into these because I -- I know you've got some really good examples about Star Wars exemplifying spoofing, and, you know, all of them because -- because you constantly give this presentation. But the reason you constantly give the presentation is because it gets the point across so well. But before we go there, I want to ask you a question that may tie into that, which is, for a lot of my listeners that focus on the human side of the problem, how do we use threat modeling in these different frameworks to think about the human exploitation piece of this? And by that, I mean kind of the, maybe it is social engineering, you know, phishing or pretexting like you mentioned before. But maybe it's also something physical or some other kind of tampering that comes in.

Adam Shostack: Yeah. So all of the things that we as humans do, we can create some models, and we can analyze those models. So we can think about the authentication activity, what Carl Ellison called a ceremony. And we can think about that and say, what can go wrong in this ceremony? The person might not have the appropriate knowledge, motivation, or skills. The user interface might present information in a way that is distracting. It might not present information that's relevant. And so when we work on human factors work, the way we typically do it is we say, okay, what can go wrong with this interface that I'm building? And then, we put people behind a glass wall so we can watch people use the interface. Or we instrument the software so we can observe that. And one of the challenges we sometimes run into is we want to optimize for either the benign case or the malicious case, right? So I remember lots and lots of debate over the dialogue box that said "Some files can harm your computer. Are you sure you want to open this one?" And on the one hand, we hope that that's a speed bump so that people will stop, they'll think about it, and say, truly, I'm not really sure where I got this file. Or there was a warning on my email that said, this is from an external sender, whatever it is. And maybe they'll be like, ah, I'm not going to open this. Or maybe there's the reality that we get habituated and conditioned to just click the darn button that says, please let me do my job.

Perry Carpenter: Right.

Adam Shostack: And the challenge that we have for usability folks is balancing and designing great user interfaces that work when it's dangerous without putting up a speed bump, without just annoying people. And in fact, I did some work with Rob Reeder and others, Ellen Cram Kowalczyk, on a system called NEAT SPRUCE. And NEAT is necessary, explained, actionable, and tested. They're properties we want our interfaces to have. And SPRUCE stands for source, process, risk, unique knowledge that the person should bring, choices, and evidence. And you can use this to think about I'm putting a dialogue in front of a human being. Does it tell them where it's coming from? Does it tell them, we're asking you this because you might know if you're in a coffee shop? And if you are, this is probably dangerous. But we and the computer can't know that thing. So the human is bringing unique knowledge. What choices can you make? What evidence should you look at? And look, this is an oversimplification of usability work.
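The NEAT and SPRUCE properties Adam lists lend themselves to a simple review checklist. A hypothetical sketch follows; the function, verdict strings, and example answers are mine, not part of the NEAT SPRUCE work itself.

```python
# Hypothetical checklist for reviewing a security warning against the
# NEAT properties. SPRUCE prompts are kept alongside as reminders.
NEAT = ["necessary", "explained", "actionable", "tested"]
SPRUCE = ["source", "process", "risk", "unique knowledge",
          "choices", "evidence"]

def review_warning(answers):
    """answers maps each NEAT property to True/False for a given dialog."""
    missing = [prop for prop in NEAT if not answers.get(prop)]
    return "looks reasonable" if not missing else f"rework: missing {missing}"

verdict = review_warning({"necessary": True, "explained": True,
                          "actionable": True, "tested": False})
print(verdict)  # -> rework: missing ['tested']
```

Like the Apgar-score analogy later in the conversation, the point of encoding the checklist is simply that it gets consulted every time, not that the code is sophisticated.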

Perry Carpenter: Yeah.

Adam Shostack: But all models are simplifications. The value that it brings is the same as with the unusual threat that might exist in the Death Star or some system that people are working on. Maybe the really clever one will slip through. But I believe that if you're using NEAT SPRUCE to help you think about warnings, you're going to do a better job than if you don't.

Perry Carpenter: Right.

Adam Shostack: And here's the kicker, even if you're an expert, and the reason that it works, even if you're an expert, is that we forget things. Doctors have Apgar scores which they use to assess babies when they're born. It's the color of their skin, the breathing. It's a couple of other things. And no one can ever remember it because Dr. Apgar invented it and sort of [inaudible] something onto their own name. But hospitals that use Apgar scores, babies come out healthier. And this is with medical doctors who do roughly nothing but deliver babies.

Perry Carpenter: Right. Right.

Adam Shostack: But it reminds them of the crucial things to check when there's lots of complicated things going on and lots of human things going on. And so in the same way, STRIDE, NEAT SPRUCE, even DREAD, have this ability to remind us to think about factors that matter in the decisions that we're making. And so we make better decisions.

Perry Carpenter: Yeah. And so, like with the Apgar score, that's very, very similar to the value of a pre-flight checklist.

Adam Shostack: Mm-hmm.

Perry Carpenter: It's the, I am a human. I can get so caught up in the fact that I can just go straight through. And I can make a lot of assumptions based on my gut around this, whether this feels right or not. But in going with my gut and not relying on process, I can potentially have a blind spot, a distraction, a bad day, or something that just causes an even worse day later.

Adam Shostack: Exactly. Exactly.

Perry Carpenter: Yeah. So from your perspective, when you think about your most recent book, what are some of your favorite parts of that and some of your favorite Star Wars bits that come out of that?

Adam Shostack: So as I was thinking about the book, before I had written a word, I started telling people the story of the book.

Perry Carpenter: Mm-hmm.

Adam Shostack: And the very first Star Wars story I used was the question of how does R2-D2 know who Ben Kenobi is to play the hologram that says, "Help me, Obi-Wan Kenobi, you're my only hope." Right?

Perry Carpenter: Yeah.

Adam Shostack: That was the very first question that I used to think about this book, and I'll give you one other. And this one -- this one came to me somewhat late. But one of the threats that people always get tripped up on is repudiation, right, to deny responsibility or to not take responsibility for something. It's a weird word. And there's a scene in the beginning of Star Wars where Princess Leia is there on the bridge of the Death Star, and the bad guy says, "Where's the rebel base?" And she says, "It's Dantooine." And he says, "Dantooine is too far to make an effective demonstration. We're going to blow up Alderaan instead." And she's like, wait, what? What? You can't do that. That's -- those are not her precise words.

Perry Carpenter: Right.

Adam Shostack: Were I only a voice actor, I could aspire to being able to deliver a line like Carrie Fisher could. But what happens there is this interleaved set of lies, right? Grand Moff Tarkin is lying about what system he's going to blow up. He's threatening to blow up Alderaan at the beginning of this. And then he's like, "You don't want me to blow up Alderaan, give me another system. Name the system," he says. And she lies to him. And then he lied to her. And he blows up Alderaan anyway. So he's repudiated the promise he made. She's lying to him. But we don't form a different judgment of her because of the situation that she's in. And I think that's a really interesting entree into the idea of repudiation, right, which is, it can be true.

Perry Carpenter: Yeah.

Adam Shostack: It can be false. When I say I don't recognize this charge on my credit card bill, maybe I'm telling the truth, maybe I'm not. Whichever way it goes, I'm repudiating the charge. And we need our systems to help us cast judgment on that claim. So maybe there's a signature, maybe it was chip and pin, whatever it was, the software has a way to pull all of that information and say, Perry, here's the thing, make a decision on whether or not we're going to give Adam his money back.

Perry Carpenter: Right.

Adam Shostack: And so those are some of my favorite examples. And I have a lot more. I can keep -- apparently, I can keep going like this for 300 pages [laughing].

Perry Carpenter: Well, that's awesome, though. In your heart of hearts, what is the hope that you have for the type of comments that you're going to get back and the types of things that you're going to help people accomplish with this?

Adam Shostack: In a lot of ways, Star Wars is a story about hope for a better world, hope for a better galaxy. And my hope for this book is that it lives up to its subtitle. It lives up to what every engineer should learn because every day, you know, across our little planet and across the galaxy, engineers are making decisions that have security consequences.

Perry Carpenter: Mm.

Adam Shostack: So often, they don't know what those consequences are. They don't know what the threats are. And so the words that open the book are, "The Empire doesn't consider a small one-man fighter to be any threat. If they did, they'd have a tighter defense." That's my hope is that the people who read this book will implement that tighter defense so that the systems they build don't blow up either on screen or in their faces.

Perry Carpenter: That's a great line to end on. I love that.

Adam Shostack: Thank you.

Perry Carpenter: And so there you have it. Star Wars is actually a training series. In it, there are lessons about how we should think about designing and implementing systems, how attackers may view our systems, and what can happen when we don't take the time and effort to model our threats. The last thing we want is for something to blow up in our face because we didn't model and appreciate the threat and our adversary. And with that, thanks so much for listening. And thank you to my guest, Adam Shostack. I've loaded up the show notes with all the relevant links and references, from Adam's current book on threats and Star Wars to his previous works and presentations around threat modeling. If you've been enjoying 8th Layer Insights and you want to know how you can help make this show even more successful, there are a couple big ways you can do so. The first is to head over to Apple Podcasts or Spotify and leave a five-star rating. And in any system that you can, go ahead and leave a thoughtful review. That really helps anybody that stumbles upon the show understand what the value of the show may be for them, what you like about the show, and what you think others may appreciate as well. And another way that you can help is to tell someone else about the show. Word-of-mouth referrals are really the lifeblood of helping people find good podcasts. This show was written, recorded, edited, and sound designed by me, Perry Carpenter. The 8th Layer Insights logo and show cover were designed by Chris Machowski at ransomwear.net, that's W-E-A-R, and Mia Rune at miarune.com. Our theme song was written and recorded by Marcos Moscat. Until next time, I'm Perry Carpenter signing off. May the force be with you.

[ Music ]