Bonus: Rick Howard holds 2023 first quarter cybersecurity analyst call.
Rick Howard: Hey, everyone. Welcome to the CyberWire's Quarterly Analyst Call. My name is Rick Howard. I'm N2K's CSO and the CyberWire's Chief Analyst and Senior Fellow. I'm also the host of two CyberWire podcasts: Word Notes on the ad-supported side, meaning it's free to anybody who wants to listen if you like ads. It's short, usually no more than five minutes, a description of keywords and phrases that we all find in that ever-expanding alphabet soup that is cybersecurity. The other podcast I do is CSO Perspectives on the pro side, the subscription side, I like to call it the Netflix side. It's a weekly podcast that discusses first principles of strategic thinking and targets senior security executives, and those that want to be them sometime in their career.
Rick Howard: But more importantly, I'm also the host of this program, the CyberWire's Quarterly Analyst Call, reserved for CyberWire Pro subscribers. And I'm happy to say that I'm joined at the CyberWire hash table today by two friends of mine and regular visitors to the hash table. The first guest is Etay Maor, the Senior Director for Security Strategy at Cato Networks. And how can I say this, Etay? Fellow PC gaming enthusiast. Welcome to the show. And Jenn Reed, a former CSO, now a principal partner solution architect at Amazon Web Services. Jenn, welcome.
Jenn Reed: Hi!
Rick Howard: This is our 13th show in this series where we try to pick out the most interesting and most impactful stories from the past 90 days and try to make sense of them. And so much has happened in the first quarter of 2023. It was tough just to pick three. We could have discussed international governments going on the offense to take down criminal gangs, or the number of active ransomware gangs on the internet on any given day. Microsoft said this quarter that it's actively monitoring about 100 of them. And the relatively new tool in the ransomware toolbox, besides corrupting victim data: extortion, demanding payment not to release the data. But, Etay, you have something completely different in mind. So, what is your first most impactful cybersecurity story of 2023?
Etay Maor: So, well, first of all, thank you for having me on this show. While it's not directly a security topic, it definitely is going to make a big difference, and that's ChatGPT. Everybody is talking about it in every possible field. And it's going to affect cybersecurity, both on the offense and on the defense. That's why I chose this topic. I think there's been a lot of discussion around how it can be used by attackers, the different threats that it poses. Not enough discussion, I think, around how it can also help defenders, although I think we're starting to get to it now.
Etay Maor: And the latest development is, of course, I think from yesterday or the day before, the letter that said, "Hey, how about we stop AI research for the next six months or so just to figure things out?" You know, going back to the movie Jurassic Park, if you remember: scientists were so preoccupied with whether they could do it that they didn't stop to ask whether they should.
Rick Howard: I think our problem is the cat's out of the bag. You cannot put that guy back in the bag, right? We've got to figure it out on the fly at this point. But go ahead.
Etay Maor: No, no. I agree. I mean, I'm sure that the companies who are doing this will continue to use it. It's really interesting to see some of the discussions in the criminal underground around how attackers are utilizing it and how they're seeing the advantage of it.
Etay Maor: Another thing that I think should be discussed is also, you know, I call it the good, the bad, and the ugly. The good being there's actually a lot of good stuff it can do. The bad being criminals being able to use it. And the ugly is just understanding that it's not, you know, the all-knowing, amazing solution that people may think it is. Basically, I mean, it is a large language model, it's based on information that we created, and so it is biased, and it does make mistakes. And there are certain tasks it doesn't know how to do. So, we need to figure that out as well because, again, everybody is talking about prompt engineering, right? You need to know how to use it and how to ask, but we need to understand its limitations and its problems as well.
Rick Howard: Jenn, have you found a practical use for ChatGPT or is it more just a toy you're playing with in a lab?
Jenn Reed: So, for me, it's more just playing around with it. I know that one of the things I found very interesting is sometimes, you know, when you actually ask it to do, say, a basic coding problem, it's very simplistic and, yes, it's true, but you would never code it that way. Like if you're going to put that thing in production, please don't do that.
Jenn Reed: And so, in some ways, it kind of scares me because like, you know, how a lot of developers might tend to go google something that they don't know how to do, and so then they grab that code and that's kind of scary from a security perspective, from an OPSEC side. But like when you see that same kind of prompt question for code you want to do in ChatGPT, you're like, yeah, that will work but please don't ever deploy that. But also from an IP perspective, you don't want your IP going out that way. But it's not really a programmer.
Jenn Reed: And so like from that perspective, it's a bit scary and that's where I think making sure you have the scanning on the check-in to make sure it's quality code could kind of mitigate that. But it could be kind of scary for people who can say, hey, I can write my own webpage, my own app through ChatGPT. It's a little scary from that perspective.
Rick Howard: What I like about it -- and those are just kind of generic ways you could use it -- you know, I do a lot of research and a lot of writing for the CyberWire, and I found it's a better first look going to ChatGPT than going to Wikipedia or anything to see what was that term again, or explain to me how that works. And since its answers don't source their material, you can't cite it or anything, but at least it's a first step for, you know, learning something new and maybe broadening your horizons a little bit. But, Etay, let's get to specifics about cybersecurity. What do you think is the major advantage of using something like ChatGPT in our profession?
Etay Maor: I think it's almost equivalent to how the attackers use it, and I say this as a plus: it lowers the bar. It lowers the bar for attackers who, as Jenn said, can now generate code even if it's not perfect, generate phishing emails, generate all kinds of things like that. But I found ways to use it on the defensive side that also lower the bar. If you have somebody new, a new analyst, you can literally put in their firewall rules or Snort rules or all kinds of things that may look, you know, very weird to newcomers. And it explains it and it goes into detail and really gives a good idea of what you're looking at. And just like you said, Rick, you know, when you put in a new term and it explains it, rather than going to Google or trying to figure it out, it does it. But it takes it also to the next level.
Etay Maor: One of the things I tried with it is I told it, hey, I'm a cybersecurity researcher, I just had this incident in my company, and I described the attackers using the MITRE language, right? I said the attackers did T1059 and then T1047. Analyze 10 attacks where you've seen these two and tell me what is most likely the next step that they're taking. And it gave me a list of the 10 different attacks that it analyzed. It said, by looking at this, the most likely next phase for them is this technique, and it gave me the technique ID -- it was scripting. So, a lot of really good usages, and I think for those who know how to properly use it and are getting familiar with it, it won't replace them, it will enhance their capabilities.
Rick Howard: Elianna, can you put the poll question up for the audience to see what they think about this? And, Jenn, I'm going to go to you. Have you seen the horror stories about it? Have you seen it make any mistakes that you've gone, "Oh, my God, I can't believe it said that"?
Jenn Reed: I think for me the thing that I found interesting was really simple stuff, like if you were going to update an online query and it's writing an HTML page, and then you ask it to refine that. Now, these are all really basic, but just think of a really junior person, maybe in high school, maybe in college, trying to use the application. And then they make one modification and then it stops working. And it's because it didn't quite know how to make that coding modification.
Jenn Reed: And another is still on the very technical side, but it kind of shortcuts some of that. A person who doesn't understand how, you know, Python or HTML quite works can't understand why this one works and that one doesn't. It's a great thing, I think, if you had a class in CS and you actually did that and the professor helped you figure out why this one works and why that one doesn't. And you can learn from that.
Jenn Reed: But you're seeing those really simple mistakes because it's trying to figure out how things work and how to make those changes. But it doesn't always result in code that works. I mean, look, we all do that as programmers anyway. You do it one way, you make your change, and it stops working. Well, it's the same on ChatGPT's side. As it's refining, it can make a mistake that causes the code not to work. Does that make sense?
Rick Howard: Yeah, it totally makes sense. And you and I were talking before the show -- I experienced my first hallucinatory answer. Is that how you say it, hallucinatory answer? Where the ChatGPT interface totally made up an answer to one of my questions. I was researching the Word Notes podcast and I needed a TV show reference for a term called network splicing. And it immediately told me that it came from a Netflix show called Altered Carbon. It gave me the episode, it gave me the season, and it gave me the dialogue. So, I went to look for it on Netflix so I could capture it for the show, and it wasn't there. The ChatGPT interface completely, okay, lied about that, right? So, I was like, "What? It's lying to me." That's like my 17-year-old teenagers at my house. So, it's more like a human than a regular computer. I don't know, Etay, what do you think about that?
Etay Maor: No, no, I agree. And that's one of the problems: it's very confident even when it's very wrong. I tried this actually with my class. I teach a class at Boston College. We were talking about hashing, and I threw a couple of MD5 hashes in there and asked it to unhash them, and it guessed. It was like "password1." I was like, really, no, that's not it. And it just kept guessing.
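For context, a hash like MD5 has no inverse; the only way to "unhash" is to guess candidate inputs and compare digests, which is effectively what the model was doing. A minimal illustration in Python (the candidate list here is made up for the example):

```python
import hashlib

# Computing a digest is trivial...
digest = hashlib.md5(b"password1").hexdigest()

# ...but there is no unhash() -- "reversing" it is just a dictionary
# attack: hash candidate inputs until one produces the same digest.
candidates = [b"letmein", b"password1", b"hunter2"]
match = next(
    (c for c in candidates if hashlib.md5(c).hexdigest() == digest),
    None,
)
print(match)  # b'password1'
```

This is why the model can only "guess" common passwords: anything outside its candidate vocabulary is unrecoverable from the digest alone.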
Etay Maor: On the flip side of it, what's really interesting is -- and by the way, sometimes it also argues with you. A friend of mine tried to use it as a Wordle solver. So, he told it, "Okay. Give me a list of five-letter words that start with H and end with L-O." And the first word it gave was Halo, H-A-L-O. So, he asked it, "How many letters are there in halo?" It said, "Four." "Then why did you give it to me? I asked for five." "Oh, my mistake." And when I tried it, it argued like my little kids. I said, "Why did you give me that word?" And it said, "Oh, I thought you meant four- and five-letter words." I'm like, "What?"
Etay Maor: On the other hand, I tried something with it that actually kind of shocked me. I gave it a piece of code that I wrote that all it does is run over an array of numbers and extract the lowest number. But I made a mistake in there and used a greater-than rather than a less-than sign. A mistake that I made I don't know how many times during my computer science studies 20 years ago, spending nights finding that one character I missed.
Etay Maor: And I put it into ChatGPT and asked it, "What's the error in this code?" Now, finding a syntax error doesn't impress me, right, because any compiler will do that for you. But it actually said, "You have a logical error. You meant to extract the lowest number, but the greater-than sign will not do that." I was like, "How did you know what I meant this code to do?" I think it was because of the names of the parameters that I chose. But I was pretty impressed with it. I was like, okay.
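The kind of bug Etay describes can be sketched like this (a hypothetical reconstruction, not his actual code):

```python
def find_lowest(numbers):
    """Return the smallest value in a non-empty list."""
    lowest = numbers[0]
    for n in numbers[1:]:
        # The bug in question: writing `>` here instead of `<` silently
        # extracts the LARGEST value. It compiles and runs fine, so only
        # a logical check -- by a human or a tool -- catches it.
        if n < lowest:
            lowest = n
    return lowest

print(find_lowest([42, 7, 19, 3, 88]))  # prints 3
```

Descriptive names like `find_lowest` and `lowest` are likely what gave the model enough context to infer the code's intent, as Etay suspects.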
Etay Maor: So, there are good and bad things and I have to tell you from a personal standpoint, I don't like how some of the universities and places that I hear about are denying access to it. It reminds me of like 20 years ago after I finished high school when schools were saying, "Don't use the internet. Wikipedia is not a referenceable source." I'm like, "You're not going to stop progress so get used to it." And that's what I do like in my course, I actually told my students on the second week, "You have to use ChatGPT in your homework, just add a comment that you used it on the ones you did. Because I want you to get familiar with, you know, how you work with it and how you're asking questions and how you refine the questions." So, I'm on board with, you know, don't fight it, use it.
Rick Howard: Elianna, can you put the results of the poll up? If it came out, I didn't see it. Oh, look. Hm, that's really interesting that everybody thinks it's a new evolution for us. Jenn, let me ask you, can you -- I know it probably isn't there yet but where do you see this going from the security profession? What do you see the most apt application of this kind of technology is for people like us?
Jenn Reed: Well, I think on the OPSEC side, I think it can help, especially with building and understanding a threat model across your organization and your connected apps, and from that being able to detect, you know, where you can improve your security posture. So, I think being able to analyze those endpoints and the interconnectedness along with the roles really helps. I think you could see a lot of that.
Jenn Reed: And just as Etay was talking about the logic error in his code, there's the logic error in what you're looking at from a risk perspective because of a poorly designed role in that context. At the same time, it can also find things or recommend improvements in how you might create application code. So, I kind of see it from that side, because we're always trying to make coding more modular, but I think in doing that we lose some of that granularity. And I think having it there to teach people what their code is or isn't doing is so helpful when you're bringing on new developers. Which can also be the riskiest thing to do.
Rick Howard: Yeah. That's right. There's no gain without any risk, right? So, let's just keep it that way. So, Etay, we've got a question from the audience. This is from Parul Kharub -- I think, is that how you say his last name? I'm totally mangling that. He's a security advisor for Teck Resources Limited. He asks, "How is the Microsoft 365 Copilot, the ChatGPT bot, going to revolutionize or cause more threats?" What do you think about that?
Etay Maor: I think -- I actually can't think of a company now --
Rick Howard: First explain what that is, what's the 365 Copilot? Let's do that.
Etay Maor: So, it's Microsoft's equivalent of ChatGPT, a version of it built into their products and capabilities. I'm familiar with the Copilot for cybersecurity; I'm sure they can do a lot more than that. But when you look at the Copilot for cybersecurity, you can ask it to coordinate and correlate information from different systems, and ask it questions about different threats that you have seen and logs. You know, it's like a super organizer with the ability to generate data and not just look it up. Kind of similar to the difference between Google and ChatGPT: look for data versus generate information.
Etay Maor: And to be honest, I can't think of a company today that is not trying to use ChatGPT in their products in some way, shape, or form or are planning to do that. So, is it going to revolutionize? Yeah, I think it's going to really help. Like I said before, it's not going to replace people but it's going to enhance and empower them to do a lot more faster. You know, the example that I gave before that also applies to something like the Copilot of being able to in five seconds analyze dozens of security reports and extract the information that you're looking for is something that, you know, is a weight off the shoulders of security analysts, allowing them to focus on the actual content rather than the searching and putting everything together.
Etay Maor: Just like, you know, if I compare it to what we did 20 years ago, or when the internet was just getting started, having to find all the books -- now we have it all available online, you have all the information, and you can generate results out of it. I'll just say that three days ago, my daughter actually asked me, "How old are you?" And I told her the year I was born. She was like, "You're older than Google." I'm like, "Yeah, thank you."
Rick Howard: So, Jenn, we've got a second question -- it's in the same vein here -- from Patrick Verste. I think he's a CSO out of the Netherlands; he works for Logical Security. And he asks, "Here's another angle for ChatGPT. Is it going to have an influence on the awareness programs that we all have to run in our careers?" Do you think that's going to help here?
Etay Maor: Is that for me?
Rick Howard: Either one.
Etay Maor: So, I'll actually come at it from the flip side. I think it will force those who create these programs to think a little bit differently, because the problem it introduces is that it's going to make social engineering, phishing attacks, stuff like that, a lot harder to detect. You know, there are not going to be any localization issues. So, you want to write a phishing email in Japanese? I tried that; it did it in a couple of seconds. The nerd in me also asked it to write one in Tolkien Elvish, which ChatGPT did as well.
Etay Maor: But, you know, when you start to think about the possibilities, when you think about things like DALL-E as well -- one of the ways that I spot social engineering attacks through systems like LinkedIn is I click on the profile image, reverse-search it, and see where else it was used. Now, if the image is just generated by an AI, you can't find it. Which, by the way, as we know in security, no information is information as well. So, if you can't find that picture anywhere, that's also a little bit suspicious.
Etay Maor: But when you start combining that with voice synthesis and deepfakes and things that, you know, aren't quite there yet but aren't far off, I think it's going to force security awareness to take a step forward and say, okay, they've overcome some of the hurdles we've been training people to identify and spot; we need to start thinking about how we identify this.
Etay Maor: We are going to -- I'm sorry if I'm going very long, but I have to mention this as well. We are going into the phase of AI versus AI, where you'll try to use AI to tell whether something was written by AI or whether a picture is AI-generated. I tried it with ChatGPT: I asked it to write a couple of paragraphs on the evil characters in Game of Thrones. Then I copy-pasted the text back into it and asked, is this AI-generated or human? And it said, "I'm looking for patterns and I can't tell." It had just generated it. So, that's where I think we're going with that.
Rick Howard: So, I think we're going to be talking about this for a long time. I agree that we've just lit the match on this and we're all trying to figure it out. But we're going to have to leave it there. It's time to go on to our next topic. And, Jenn, that's on you. So, what's your most impactful news story of the quarter?
Jenn Reed: So, this really goes back to, I think, February. A LastPass DevOps engineer's compromised laptop led to the exfiltration of his vault, and thus access to data that they had stored in the cloud. And so, for me, this is an ongoing topic that we always face as CSOs and leaders in cloud and security. My experience is that you always have this debate, right, with the developers and DevOps teams about what they need and what they're asking for. They always want as much as possible so they can go as fast as they can, right? And so, they will try to get an exception so they can move faster, versus really understanding what they need and how long they need it. And --
Rick Howard: So, Jenn, let's back up a second, just for those who don't know: what is LastPass, and why was it significant? Why was it in the news?
Jenn Reed: Yes, so LastPass is a widely available password vault with multifactor authentication; it provides both of those services.
Rick Howard: Yeah, we use it here. We use it, yeah.
Jenn Reed: I use it personally too. With LastPass, one of the things people want to use it for is that it can generate one-time-use passwords, or it can generate passwords that it'll save so you don't have to remember them. And that comes with a master password to allow you to access your vault from different locations. With that, you can store passwords and keys, and it even encrypts them so that only you have access, even if the vault is stored locally on a device.
Jenn Reed: And in this particular incident, there was an initial compromise of the individual's endpoint, which was his laptop, via third-party media software, and it actually installed a keylogger. Via the keylogger, they were able to obtain the master password to his vault. From that, they were able to connect to that vault from another location and export all of the secrets and keys that he had stored in it. Now, export is a feature you want to have if you ever want to take your data out of LastPass and put it into another key vault. But because of that, they were able to leverage those keys to get access to a production system that stored data for all of their customers, and then export that.
Rick Howard: So, the DevOps engineer had some system administrator privileges, and when his keys got compromised, the bad guys could just go find customer data. That's the bottom line for people like me who were scrambling when it was announced: okay, are we in trouble? Okay. So, anyway, I interrupted. Go ahead.
Jenn Reed: Yeah, of course. And so, I think the interesting thing here is it was kind of a multistep process because there are a couple of things that could have happened that would have been additional mitigations, that, you know, you always try to face and work with developer teams, ops teams to try and limit the blast radius. And so, one of the things is, you know, the vault could have not been on his laptop. The other thing that could have been done is he would not have had in his corporate vault the actual access keys for production. Right?
Rick Howard: Let's throw some zero trust at this. Yeah, absolutely. Right?
Jenn Reed: Why would he need production access keys as a DevOps guy? He might have initially had them in order to set up the CI/CD pipeline and then never cleaned up, right? And then there are things that could have been done so that he didn't need access keys at all, using, you know, a token instead, so you only have it when you need it -- once again, zero trust. And you could also think about how to centralize some of your production activities so your DevOps team only has access to, say, dev and a pre-prod environment, and all actions that can happen in production can only happen from a location that has a larger perimeter around it, one that reduces human access. And when they do it, they only get temporary access.
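The "temporary access" idea Jenn describes can be sketched with a toy token issuer. This is illustrative only -- real systems such as AWS STS or HashiCorp Vault issue signed, scoped credentials -- but the core property is the same: a stolen token has a narrow window of use.

```python
import secrets
import time

# Toy model of short-lived credentials: instead of a long-lived production
# access key sitting in a vault, a token is issued with a small TTL and is
# rejected once it expires.
_TOKENS = {}

def issue_token(role: str, ttl_seconds: float = 900) -> str:
    token = secrets.token_hex(16)
    _TOKENS[token] = {"role": role, "expires": time.time() + ttl_seconds}
    return token

def check_token(token: str) -> bool:
    entry = _TOKENS.get(token)
    return entry is not None and time.time() < entry["expires"]

t = issue_token("devops-prod-deploy", ttl_seconds=1)
print(check_token(t))  # True right after issue
time.sleep(1.1)
print(check_token(t))  # False once the TTL has elapsed
```

The role name `devops-prod-deploy` is hypothetical; the point is that had the exfiltrated vault held tokens like these instead of long-lived keys, they would have been worthless by the time the attackers used them.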
Jenn Reed: So, there are a lot of things that could have been done, but, you know, it's always easy to do Monday-morning quarterbacking of what happened, right, what you could have done better. But these are the things that come up in the moment when you grant access because someone got an exception. We're always reaching for least privilege, and the developers and DevOps teams are asking for more access. So, how can you balance that? Does that make sense?
Rick Howard: Elianna, let's throw the poll question out for the audience. And, Etay, let me come to you. When all this happened, there were a lot of people raining down on LastPass, saying we should just abandon that product as clearly their security practices are bad. Is that your take on it too? Or did they do things right, so that customer data was fairly protected here?
Etay Maor: Yeah, it's a good question. I don't jump on board with these things immediately as they happen. If you look at many of the breaches that have happened, if we had said this about every breach that we've read about, you'd have a hard time finding online services that you'd subscribe to.
Rick Howard: That's true.
Etay Maor: Yeah, I mean, how many companies have we heard of, and not heard of, that have been breached because of human error, because of mistakes like the ones that Jenn just described? It's everywhere. So, as long as I don't see something that is inherently bad in the product itself, or a lot of negligence where, you know, the company didn't do what you would expect them to, I'm not on board with just saying, "Oh, these guys are bad, let's move to the other ones." Because, you know, whatever the other product or service is, they can be breached just the same.
Etay Maor: And all from, by the way, basic stuff. You know, what Jenn described here is basic security practices. And it's amazing -- I have these ransomware negotiation screenshots between a victim and the ransomware group. And after the negotiations, once the victim paid -- in that case, it was $3.8 million -- they actually gave him a list of things they think he should do to protect his systems.
Rick Howard: Those bad guys, they're so helpful. Okay. I really appreciate it.
Etay Maor: Customer service first, right? And so, one of the first things they said -- and they didn't say zero trust, but they said, "Give minimum privilege and access only to those who need it," like they said it in something that's an obvious translation from Russian. But basically, they described zero trust without saying it. And all the things they said were very basic stuff. They don't use zero-days, they don't use custom tooling; they just, you know, exploit the basics.
Rick Howard: So, what does the poll look like, Elianna? What is everybody saying here? Jenn, when the news hit about this, we were all worried about, you know, our own customer data; that's what we were all worried about here at N2K. But what you're describing here is really just basic security practices that some bad guys took advantage of, right? That's because LastPass -- yeah, go ahead. I'm sorry.
Jenn Reed: No, I was just saying exactly.
Rick Howard: Right. Our data was protected, it's still encrypted, all right, but they were able to grab it and take it, and they could try some brute-force attacks if you didn't have, you know, decent key management and things like that, which you typically do when you use these kinds of services. All right. So, let me ask you this, Jenn: given the extent of it, are you recommending to other practitioners that they should abandon companies like LastPass and 1Password and do something else? Or is it mostly just a bad thing that went wrong with a DevOps engineer?
Jenn Reed: Really, it's a bad thing that went wrong with the DevOps engineer, right? And it gets back to the basics: the basics always matter, and this is why those basics always matter. Because humans will make mistakes; it's inevitable. But we have to understand what the mistake is so we can, you know, improve our processes.
Jenn Reed: And so, I mean, password vaults are great. You know, you want to follow the best practice of not putting the vault on the same endpoint as your MFA. You also want to restrict what you have in there and not give it access that it doesn't need. So, the real basics. And this incident should just be used to say: this is why we have to do what we have to do. You set it as an exemplar for why it's important for your DevOps team to understand why these rules exist. Because it's not that he did something wrong -- he made a mistake -- but at the end of the day, I still use LastPass. And it's because my passwords were never compromised, right? They might have gotten the store, but it's encrypted with my master password. And so --
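The design Jenn is describing -- the vendor holds only an encrypted blob, and the decryption key is derived from the master password on the client -- can be sketched with a password-based key derivation function. The parameters below are illustrative, not LastPass's actual scheme:

```python
import hashlib
import os

def derive_vault_key(master_password: str, salt: bytes,
                     iterations: int = 600_000) -> bytes:
    # A deliberately slow KDF makes each brute-force guess expensive: an
    # attacker who steals only the encrypted vault must pay `iterations`
    # hash computations per candidate password.
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(),
                               salt, iterations)

salt = os.urandom(16)
key = derive_vault_key("correct horse battery staple", salt)
wrong = derive_vault_key("password123", salt)
print(key != wrong)  # True: a wrong password yields a different key
```

This is why "they got the store" is survivable: without the master password, the stolen blob is only as weak as the password and the KDF cost behind it.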
Rick Howard: I was going to say that's my hot take, you know, that a lot of people criticizing LastPass but they had multiple security controls in place. One of them failed, right? But the one that would matter mattered and the bad guy didn't win here, right? So, Etay, are you on board with us? Jenn and I are saying
it's not that big of a deal. But I don't know, what do you think?
Etay Maor: No, I'm completely with you. That's why I said I wasn't on board with the "hey, let's abandon them and go somewhere else". I completely agree with you.
Rick Howard: So, Jenn, we've got a couple of questions from listeners. This one's from listener WhosYourDaddy. I love that name. Right. He or she says, "When there's time pressure, what are some ways to better explain the risk to the business?" So, I think you were hinting at that in your intro. What do you think the answer is?
Jenn Reed: Well, part of it is, before you get into that situation, really having a threat model of your system, and that should include systems, applications, and people, and understanding their roles, their access, and what the data flows are. One of the threat modeling techniques that's been around a while is PASTA, which really integrates all of that. And that's -- what? Process for Attack Simulation and Threat Analysis?
Rick Howard: Nice.
Jenn Reed: Yay! But, yes. But to really understand from a business perspective what the risks are, before something happens, right, and so that you can actually understand where those touchpoints are and actually understand what the business risks are. So, when someone's asking for an exception or you do an internal audit of your systems, you can actually -- you know in advance what that risk could be or what it touches on so that you're not scrambling at the last minute to just say, hey, there was this incident where that happened and that could happen to us.
Jenn Reed: The question that the business might ask at that point is, well, does that really apply to me? When could that really happen? And if you've done the work to have a threat model that you're updating and working on with your teams, you have a clear understanding of what the impact of that decision might be. What do you think, Etay?
Etay Maor: Yeah, I agree. Actually, you know, it's interesting to mention that in my previous role, we did, you know, incident response and breach simulations for companies. And it's actually amazing to see how often the process you've just described is not implemented and it's left to the day of. You really don't want to be inventing these things when they happen. And I'll just add one more thing -- I hadn't heard of PASTA; I'm familiar with the process, I just hadn't heard that acronym. I love it.
Rick Howard: It's my new favorite acronym. Are you kidding me?
Rick Howard: It helps to remember it. Sorry.
Etay Maor: I love it. But it's also amazing to see how different teams and stakeholders within the company talk about the same things in a different language. And when you put them under pressure and there's a breach or something's happening -- we actually saw during live evaluations like this that two teams were trying to communicate, trying to say the same thing in different languages, and it just didn't work. So, I'm completely on board with that as well.
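Jenn's point about building the threat model before the incident can be made concrete. Here is a minimal sketch in Python of what a lightweight threat-model entry might look like -- the field names and the 1-to-5 impact scoring are illustrative assumptions for this sketch, not part of PASTA or any standard:

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    """One entry in a lightweight threat model (fields are illustrative)."""
    asset: str            # system, application, or person at risk
    data_flow: str        # where the data moves
    scenario: str         # what could go wrong
    business_impact: int  # 1 (low) .. 5 (existential), agreed with the business
    mitigations: list = field(default_factory=list)

def riskiest_first(threats):
    """Sort so the biggest business risks surface first -- done in advance,
    not scrambled together on the day of an incident."""
    return sorted(threats, key=lambda t: t.business_impact, reverse=True)

model = [
    Threat("payroll app", "HR -> payroll SaaS", "credential theft", 4,
           ["MFA", "least privilege"]),
    Threat("marketing site", "public web", "defacement", 2, ["WAF"]),
]

for t in riskiest_first(model):
    print(t.business_impact, t.asset)
```

With a structure like this kept up to date, the "does that really apply to me?" question from the business can be answered by lookup instead of guesswork.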
Rick Howard: We've got a really interesting question from username AllGoodNamesAreGone. I love that also. And he or she says, "Especially in this situation, Jenn, how do you create a culture where, if a mistake is made, like that DevOps engineer, it can be reported without fear, all right, and you don't hide it so that the problem, you know, keeps getting worse over time?" How do you address that in your organizations?
Jenn Reed: So, I think there are two things. One is making sure that you reward people for actually bringing things forward and not shaming them for bringing things forward. And when someone brings things forward, having that conversation not be adversarial. You can be inquisitive without being adversarial -- more of, like, tell me more, and why did you think that was a good idea? You know, not to make them feel attacked, but just so you can understand their thought process.
Jenn Reed: And then also lead by example. So, when you make mistakes as a manager or a leader, let them know that you did, you know. And so, own your own mistakes so they can see that if they make a mistake and they come to you, right, that it's going to be okay because you own your own mistakes as well, and just reward that behavior.
Jenn Reed: Not that you want people to be making mistakes, of course, but you want to create a culture where people are open about their mistakes when they make them. Number one, so somebody can help them address it -- and report even the things they're not sure about; it's okay to report that. Don't worry about my time being wasted. I would rather you say, "Hey, this email, it looks phishy to me." And then, oh, it turned out not to be -- we brought a new SaaS app on and it's just emailing everybody. So, that's fine, you know. But just making it so that you're not afraid, by making it just kind of an open-door policy. What are your thoughts?
Rick Howard: It's all good stuff. But we're going to have to move to the next topic, Etay. So, nothing from you. We don't want to hear from you, Etay.
Rick Howard: All right. So, it's time for my topic, and I'm talking about the US National Cybersecurity Strategy. On March 2nd of this quarter, US President Biden released his 2023 National Cybersecurity Strategy. And before I get started, it's pretty easy for me and the pundits on this panel to assume that Monday morning quarterbacking role that Jenn was talking about, you know, and criticize the completed work of some government agency.
Rick Howard: And I want to say before I do that that there are some really good ideas in this strategy, ideas that have promise that we'll talk about in a second. And I can't imagine the work it took for Kemba Walden, the acting National Cyber Director, the woman who just replaced Chris Inglis, and her staff -- what they had to do to get all of this work done and get everybody to agree that this is what we're going to do as a strategy. That was monumental. All right. So, my hat's off to them for that achievement.
Rick Howard: That said though, okay, it's tough for people like me who have been around for a while to not be cynical about these things. By my count, this is the fourth presidential strategy document in the last 15 years from presidents George W. Bush, Obama, Trump, and now Biden, okay, and it's tough to know reading through the current strategy document if we accomplished any of the goals outlined in the previous strategy documents before we moved on to this one. Or at least have some kind of acknowledgment that we either don't need those old strategies anymore or we'll continue to build on them.
Rick Howard: And, you know, to be fair, this newest document admits upfront that it's building on five existing presidential executive orders. One space policy directive, how about that? A space policy directive! You know, space, who knew? Right? Two national security memorandums, two already-passed laws, and two presidential policy directives. And it says in the document that it replaces President Trump's strategy document, so we got that going for us.
Rick Howard: And before you all ask, I obviously had to look up the difference between executive orders, presidential directives, and security memorandums. According to the US Justice Department, they are essentially the same in terms of force and impact. The difference is that presidential directives and security memorandums can be classified and are not generally available in full to the public, whereas executive orders have to be listed in the Federal Register and are thus available to anybody who wants to read them. Who knew any of that? I had no idea that's what was going on there.
Rick Howard: So, one more thing about strategy versus tactics as it pertains to this document: strategies are things that we want to accomplish, they are the what we want to do. Take my ultimate cybersecurity first principle strategy, which is reduce the probability of material impact due to a cyber event over the next three years. That's a clear statement of the things we want to accomplish, but it's silent on the how part, the tactics. So, you have to read my book that's coming out in time for RSA, Cybersecurity First Principles: A Reboot of Strategy and Tactics, or listen to all the podcasts that we do, and you can figure out the difference between strategy and tactics.
Rick Howard: I say all that because this document has both strategy and tactics in it, and it's tough to separate them. And Kemba Walden, the acting National Cyber Director, says that her office will release more detail on the tactics side sometime in June. All right. So, I'll throw that to you, Etay. What's your first take on this new strategy document? Do you think it's worthwhile, or just more government people wandering around, trying to get things done?
Etay Maor: First of all, I welcome stuff like this, because there's always the complaint of, "Hey, where was the government when this happened, why don't we have a plan, why is there no strategy, why are we trying to invent things on the fly?" So, I actually welcome this. I did read -- not the whole document, but I did go through it. I was listening to the announcement on TV when it happened, to the press release. Personally, I like what I saw in there. But then it ultimately comes down to -- and you mentioned several others that have come before; I didn't know that each of the last four presidents had one -- okay, how do we put that into practice, not just in tactics, but actually materialize the good things that are said here? That's the real challenge, and that's, you know, the proof of whether something is worthwhile and whether it's happening. So, I want to see these things happening. I like what I saw. But --
Rick Howard: We will see. Elianna, let's put the poll question up. Jenn, the same question to you. Are you excited about this or not that we have this thing?
Jenn Reed: Undecided, I guess.
Rick Howard: Cautiously optimistic. Yeah.
Jenn Reed: I mean, so --
Rick Howard: Now I've got her blushing.
Jenn Reed: It's -- you know, it kind of goes back to what does it really mean in principle, I guess. Because we seem to have a new strategy for every president.
Rick Howard: That's my take, yeah.
Jenn Reed: Yeah, yeah. At the same time, it's like things usually continue on, and it takes so much time to understand, well, what can this mean for this agency, this department, this working group. And it is a tactic, but it's also a strategy, and what's the system? I guess I always fall back to my basics: all of those have their own threat models for each app, and how does this strategy -- how can we improve those, or do we not even do that? To Etay's earlier comment that a lot of organizations don't have them. And so, it's: should we be more prescriptive, or should it be more prescriptive on what the basics look like?
Rick Howard: Well, let's talk about a couple of them. Elianna, put the answers up and see what everybody else says. So, it's about 50-50. So, that's interesting. Okay. Let's talk about a couple of the specifics here, right? So, strategy pillar one is called Defend Critical Infrastructure, and they list a supporting strategy of enabling regulated entities to afford security. One specific example where this might be a big help is the US election apparatus -- the city, county, and state organizations that are infamously underfunded. So, if the government could find a way to get every organization up to some acceptable baseline of security, that would be good.
Rick Howard: I don't know. Do you guys buy any of that? Am I selling -- do you think that might happen? That the government is going to find the money to bring everybody up to a baseline standard?
Etay Maor: That's a good question. I don't know. I just -- I do want to comment before I go into this, just one more thing about what you said before if that's okay?
Rick Howard: Yeah, go ahead.
Etay Maor: I think it's extremely difficult in our field to do the tactical part because things change so fast. And, you know, everything that comes out is so late versus -- I mean, this came out ages ago because that was before ChatGPT so how are you going to -- you know, take those tactics now into consideration? And so, it's really -- I guess what I'm trying to say is I think that actually strategizing is really important and you really might need to leave some of the tactics open to stuff that is happening in the field and happening all the time. So, I just wanted to put that out there.
Rick Howard: So, what I'm hearing from you, Etay, is that basically your strategy is bury your head in the sand and hide.
Etay Maor: No, I didn't say that. I'm just saying I think there should be a clearly defined strategy and approach to risk. I'm not the guy who has I guess the answers today but I think the tactical part of it changes extremely fast, much faster than what lawmakers and policymakers can keep up with. And so, they might, unfortunately, need to leave some of that I don't know if blank but open to interpretation based on some of the current events and current threats that we're seeing in the field and have an overarching approach to the strategy and risk mitigation. I think, I don't know.
Rick Howard: Jenn, any thoughts there?
Jenn Reed: Well, the question -- I think for the election infrastructure, the issue there is we don't have a unified way to actually understand, for each state or each district, what the minimum viable product is for how they should run those. And so, until you know what that is, what's the best strategy to mitigate the risk? And that's why, you know, I was thinking about the threat modeling, because that doesn't always have to be systems; it's people and process. And so, I think just understanding what your people and process are can be really good for elections.
Jenn Reed: And then, from that, how can we improve those things with technology? Because I think there's so much we don't know, especially at the state, city, and regional level, where we think predominantly about physical security, and then we use air gaps for system security. But there's a lot more that's in play there, and really having that and saying, "Hey, here's where we can constantly improve." Because it's always a constantly improving conversation. And where can IT, and moving things, say, to the cloud, help? It has its own risks as well, but those can be mitigated. So, understanding what the mitigations can be and when you implement them should be part of that strategy.
Rick Howard: Well, I just wanted to be clear, too: strategy is important because we're all agreeing on what we want to do, and the how we do it may be hard and may be time-consuming and may be expensive. But, first, we have to figure out what we want to do. And the second one I liked here, that I think is really promising, is from strategy pillar number two, Disrupt and Dismantle Threat Actors.
Rick Howard: They list two supporting strategies in the document, but the first is increase the speed and scale of intelligence sharing and victim notification. And this has been one of my pet peeves for years -- you know, automating the mechanisms that allow us to do that, right, would be fantastic. We invented information sharing back in 1999, and we are still doing it pretty much the same way we did back then, sharing PDFs, spreadsheets, and blog posts. I think we can do a lot better here, right? And so, having that as a strategy, I think, is an interesting idea. Etay, what do you think?
Etay Maor: I agree. No, it sometimes depresses me to see how much coordination and correlation there is between threat groups and cyber-criminal gangs and how they share information between themselves. And I'm like, "I wish we would have done it the same way." I mean, I've seen cybercrime groups from countries which are at war with each other collaborate on attacks. And so, yeah, definitely information sharing and collaboration is something that I'd like to see more. And the first thing you mentioned there and the first pillar is disruption of -- what was the exact naming? Disruption of --
Rick Howard: Disrupt and Dismantle Threat Actors.
Etay Maor: Yeah. I love that. But that's getting harder and harder, you know. Russia actually made it legal to perform hacking by law. And so --
Rick Howard: And what I think that means is that we're going to unleash the hounds, the offensive cyber hounds from the Department of Defense, to take down infrastructure from cybercrime organizations. I think that's what they mean. They don't really say that in the document, but they kind of imply it everywhere. And right there, they're taking their lead from the Australian government's announcement last quarter that it would do exactly that. We talked about that on the last Quarterly Analyst Call. So, we could talk about that forever. There are a lot more interesting things in this strategy; you should all read it. I'd be more interested to see the tactics description that's going to come out in June, so maybe we will revisit that when that happens next quarter.
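Rick's point about sharing PDFs and spreadsheets versus automation can be illustrated with a toy sketch: the same indicator, emitted as machine-readable JSON that a receiving system can ingest without a human re-typing it. The schema below is a made-up simplification, loosely in the spirit of structured-sharing formats like STIX, and is not the actual STIX format:

```python
import json
from datetime import datetime, timezone

def make_indicator(ioc_type, value, source, confidence):
    """Package one indicator of compromise as a plain dict.
    Field names here are invented for illustration, not a real standard."""
    return {
        "type": ioc_type,              # e.g. "domain", "ip", "hash"
        "value": value,
        "source": source,
        "confidence": confidence,      # 0-100, producer's assessment
        "shared_at": datetime.now(timezone.utc).isoformat(),
    }

# Producer side: serialize for the wire instead of pasting into a PDF.
indicator = make_indicator("domain", "evil.example", "quarterly-call-demo", 80)
wire_format = json.dumps(indicator)

# Consumer side: parse and act automatically -- block, alert, enrich.
received = json.loads(wire_format)
print(received["type"], received["value"])
```

The point of the sketch is the round trip: once the indicator is structured, speed and scale come from machines consuming it, which is exactly what a pile of spreadsheets and blog posts can't provide.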
Rick Howard: We saved some time here for some general-purpose questions, right? And so, let me just bring them up here. These came in before the show started. And the first question is from my friend Joe O'Brien. Here is the question. Etay, I think this is going right to you. How does SSE -- Security Service Edge -- impact an organization's risk posture? Is this a positive thing or just some new gimmick that we have out there these days?
Etay Maor: No, it's not a gimmick, it's a real thing, and it's actually -- first of all this, you know, I work for a company that has --
Rick Howard: Yeah, you work for Cato and that sells SSE services. So, okay, now that we got that out of the way, why do you think it's a big idea?
Etay Maor: So, when I fully understood the impact of what SSE is -- and actually, that's the reason I joined Cato -- I think it's a big deal because it really is a game-changer in the way that we approach security today. I still feel sometimes that we're, you know, using on-prem tools in a cloud fight, and that's just not fair. Not just on-prem tools, but on-prem thinking and approaches. And, you know, that just doesn't hold water these days when you're trying to fight these threats. And, you know, as a security guy, I hate to give it to the networking guys, but it all starts with, first of all, understanding everything and having complete visibility into everything on your network. And that's the networking part, right? And then the whole mission of SSE is converging networking and security. So, once you can see everything, you can actually help secure it and get a better security posture.
Etay Maor: And actually, interestingly, I realized not too long ago that it aligns perfectly with the OODA loop concept -- observe, orient, decide, and act. That's exactly what you see in a proper SSE implementation: you are able to see everything; contextualize it, which is orient; make a decision, so have a policy; and then act upon it, enforce it. Something that is extremely difficult when you have on-prem solutions from different vendors. You know, as cool as it is, maybe, for Hollywood to show security analysts sitting in front of six screens and, you know, doing security stuff, that analyst is crying that day, maybe that night as well, because they're trying to correlate so many things rather than, you know, doing security. We've turned security analysts into integration engineers. So, that's kind of what this solves.
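Etay's mapping of SSE onto the OODA loop can be sketched as a tiny pipeline. Everything here -- the policy shape, the function names, the example directory and app catalog -- is invented for illustration and does not describe any vendor's actual product:

```python
# Toy OODA loop for a network policy decision:
# observe (see traffic), orient (add context), decide (match one policy),
# act (enforce). One policy table, as Rick notes, is the whole point.

POLICY = {
    ("finance", "unsanctioned-saas"): "block",
    ("finance", "sanctioned-saas"): "allow",
}

def observe(event):
    # Observe: the visibility layer sees who is talking to what.
    return {"user": event["user"], "dest": event["dest"]}

def orient(obs, directory, app_catalog):
    # Orient: contextualize -- which team, what kind of destination?
    obs["team"] = directory.get(obs["user"], "unknown")
    obs["dest_class"] = app_catalog.get(obs["dest"], "unsanctioned-saas")
    return obs

def decide(obs):
    # Decide: a single policy lookup instead of six consoles.
    return POLICY.get((obs["team"], obs["dest_class"]), "allow")

def act(verdict, obs):
    # Act: enforce and record the outcome.
    return f"{verdict}: {obs['user']} -> {obs['dest']}"

directory = {"alice": "finance"}
catalog = {"expensify.example": "sanctioned-saas"}

event = {"user": "alice", "dest": "sketchy-files.example"}
obs = orient(observe(event), directory, catalog)
print(act(decide(obs), obs))
```

The design point is that all four stages read from and write to one shared context, which is what is hard to reproduce when observe, orient, decide, and act live in different vendors' boxes.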
Rick Howard: The reason I love SSE is it flips the model. Okay. Because I've been doing this a long time, and so have you two, right? But in the old days, we'd have to manage all the security infrastructure wherever it was: data centers, headquarters buildings, remote sales offices. We'd organize the network, you know, with leased T1 lines to connect all that before it got to the internet. And it just became too complex when we introduced the cloud, mobile platforms, and a bunch of SaaS services.
Rick Howard: What SSE does, if you get the right SSE vendor, is make your first hop from wherever you are -- Starbucks, your house, headquarters, the data center -- go through an SSE vendor with some security stack, and they manage all the blinking lights so you don't have to do it anymore. All I have to do is manage the policy. That simplifies things so much that I think it's the way of the future. I don't know, Jenn, did I sell you on this idea, or are you a believer in SSE architectures?
Jenn Reed: Well, I think it is a great innovation for all the reasons that you guys have laid out. But I also don't think it's an easy button.
Rick Howard: Oh, really?
Jenn Reed: Sorry. You still have to keep up the basics from the application rules and services perspective. You know, because when a service or an app is authenticating, you're making sure, once again, that those individual components, microservices, or people have the least privilege for what they need.
Rick Howard: You mean someone still has to check the DevOps engineer and make sure that he's not storing secrets in the local vault?
Jenn Reed: Yes. Yes. So, but --
Rick Howard: Darn it, I thought I was out of that business.
Jenn Reed: No. You still have to do the basics. And I think that's key regardless of what we talk about: these innovations are really helpful, solving particular aspects of the problems from a zero trust perspective. But these things still come back down to the roles and permissions things use. And those still have to be continuously evaluated as they're applied. Can I improve it? Can I reduce the amount of permission here? Can I break this apart so that I can limit the scope of each service?
Rick Howard: I've got a second question here from James Kimmel, the CSO at Bumbas. He asks, "What are good practices for employing managed security service providers in a small business?" So, I'm well aware of that -- N2K is a small business. We're wrestling with that as we speak. Anybody want to take that one? There are no answers for small businesses, that's what I'm hearing.
Jenn Reed: No, I think there are. It depends -- there are so many questions. Are you all in the cloud? Because there are MSSPs and solutions that are designed for that as far as, like, the corporate environment. But there are also questions, I think: are you looking for something that's more of an MDR, or are you looking for a traditional MSSP that then leverages an MDR solution? And then, do they as companies -- from my perspective, having been at an ISV -- have the security basics and independent audits themselves? Do they have at least an ISO 27001 to independently audit their procedures?
Jenn Reed: And so -- because there are companies that do have those. Because how do you know who to trust? You can go to a really expensive vendor, but if you're a small business, how do you manage that? So it's asking some of those kinds of fundamental questions to really understand that. What's your take?
Etay Maor: I'll actually revert back to the question that we had before. I think what we discussed before actually answers a lot of that: when you're able to offload all those responsibilities and issues to a provider -- you know, that's something that an SSE solution actually does and helps with. Because, without even going into the products themselves, there's a whole set of problems around where you get the human capital to manage all of it and what happens when those experts leave the company or leave you.
Etay Maor: So, you really want to have something that is easier to manage and doesn't create, you know, these -- how do you call them? Security gurus or security champ-- security product champions within the company. And you can kind of offload all those responsibilities and just focus, as Rick mentioned, on, hey, here's the policy that I need to use. Wherever I want to do it, figure it out -- you know, we have the systems to do it. Just figure it out, here's the policy.
Rick Howard: Well, I can tell you where I land on this in my own personal philosophy, all right, because I've managed big-company security programs and small-company security programs. I'm currently managing the N2K program, and, you know, we're a startup -- there's two guys and a dog that do all the tech. And there's one security guy: it's me, all right.
Rick Howard: So, I will tell you what I've come down to on this: maybe we don't need an MSSP. First, because we just don't have the resources to pay for it and to track what they're finding for us. Maybe what we should be focusing on is resilience. All right. So, the thing that could kill N2K is some devastating ransomware attack, right? If I can make sure that our systems continue to function while we are dealing with that, that is a much better strategy. And that probably doesn't involve a managed security service provider. I don't know. Jenn, what do you think?
Jenn Reed: Once again, it depends, I think, on the business itself. If you're, you know, a FinTech or a highly regulated industry startup, it might be a very different conversation than from, you know, an N2K perspective. But just as you were saying, you know, given your threat model for what you guys are currently doing, it doesn't make sense from that perspective. But as things grow and change, that might also impact that. So, understanding what could be compromised -- and for you, the biggest threat would be ransomware -- so, how do you mitigate that, and what's the most effective way of doing that? So, that makes a lot of sense.
Rick Howard: So, we're at the end of this thing. And I'm going to give you a minute back on your time. Ladies and gentlemen, thanks for coming to this. We really appreciate it. Etay and Jenn, thank you for coming and lending your wisdom to all this. And for all the listeners, we'll see you at the next CyberWire Quarterly Analyst Call. Thanks, everybody.