CSO Perspectives (Pro) 2.6.23
Ep 98 | 2.6.23

Current State: OSINT and CTI

Transcript

(SOUNDBITE OF MICHAEL GIACCHINO'S "JYN ERSO AND HOPE SUITE")

Rick Howard: You're listening to one of the many themes from the 2016 movie "Rogue One," one of my favorite movies from the Star Wars saga, because the filmmakers birthed the entire movie from one minute of dialogue taken from the original 1977 "Star Wars" movie, when General Jan Dodonna tells his fighter pilots about how to blow up the Death Star. 

(SOUNDBITE OF FILM, "STAR WARS") 

Alex McCrindle: (As General Dodonna) The battle station is heavily shielded and carries a firepower greater than half the star fleet. Its defenses are designed around a direct large-scale assault. A small one-man fighter should be able to penetrate the outer defense. 

Rick Howard: Back in Season 1, I did a show on intelligence operations and asked everybody, how do you think Princess Leia got the engineering plans for the Death Star's weakness in the first place? Well, Jyn and Cassian and a host of Dirty Dozen commandos went out and collected that intelligence, and that's what "Rogue One" is about, which is a long way around the horn to tell you that for this show, we are talking about cyberthreat intelligence again, or CTI, only this time we're talking about the current state of it and where it might go from here. So hold onto your butts. 

(SOUNDBITE OF FILM, "JURASSIC PARK") 

Samuel L. Jackson: (As Arnold) Hold onto your butts. 

Rick Howard: This is going to be good. 

Rick Howard: My name is Rick Howard, and I'm broadcasting from the CyberWire's Secret Sanctum Sanctorum Studios, located underwater somewhere along the Patapsco River near Baltimore Harbor, Md., in the good old U.S. of A. And you're listening to "CSO Perspectives," my podcast about the ideas, strategies and technologies that senior security executives wrestle with on a daily basis. 

Rick Howard: Landon Winkelvoss is a co-founder and VP of content at Nisos, a commercial company that does threat intelligence as a managed service for things like cyberthreat intelligence, corporate intelligence, corporate security and fraud. But I wanted to bring Landon on the show because he's had years of experience working in the intelligence field, first as a criminal analyst and intelligence officer at the U.S. State Department and later as a civilian providing intelligence services to corporate America. I started out by asking him, what are the current challenges facing corporate intelligence teams today? 

Landon Winkelvoss: I mean, let's look at intelligence as a holistic discipline. It is very much an imperfect art. So when you think of the challenges, at the end of the day, what you know as an organization, what you know as an enterprise, is your own environment, right? So, of course, whether that's a cybersecurity program or a physical security program, you're going to protect your walls, right? You're going to build your defenses. You're going to build your technologies, your controls, around what your perimeter is because that's what you have a legal obligation to do - to stakeholders, to your employees, to shareholders. And so that's what companies have to work with. But increasingly, you have to go outside of those perimeters to really identify threats. And that gets companies nervous, and that's certainly where they hire companies like us and companies like the ones you've worked for to, ultimately, identify those threats and tell them how they're relevant. 

Landon Winkelvoss: I hate when people say that you don't know what you don't know. I mean, that is very much how organizations and executives really kind of look at, you know, different threats and risks to business. They don't know what's outside of their perimeter, and they need experts to figure that out. 

Rick Howard: What I expected to happen 10 years ago - and it just hasn't happened - is that business organizations would use their internal threat intelligence teams to handle business intelligence requirements, because when it all first started 20 years ago or so, it was really just looking at the bad guys, trying to keep them out and trying to understand the bad-guy environment. But that didn't really tie to business operations, to business functions. 

Landon Winkelvoss: Correct. 

Rick Howard: Right. Yeah. 

Landon Winkelvoss: I think that's one of the misconceptions about what the difference really is between a CTI program and an open-source intelligence program, right? Misconception No. 1 is that, you know, CTI, or cyberthreat intelligence, only supports the security operations team. I think that gets to what you're saying. Boards don't necessarily want to just hear what the latest advanced persistent threats are doing. They want to understand how those threats specifically impact their organization in terms of actual business loss and risk. They understand risk, right? So if cyberthreat intelligence teams just report into and are buried in the security operations team, I think that's almost doing the organization a disservice. 

Rick Howard: What's interesting about this, I think - and, you know, I've done cyber intelligence for many years, as you have - is that business leaders don't really know what to do with that outfit. They've never really had one working for them, and they don't really know how to tell them what to do. 

Landon Winkelvoss: Look; you're absolutely dead on, right? I mean, business leaders don't know what to do with it - they think intelligence is something for the national security apparatus to handle themselves. 

Rick Howard: Yeah (laughter). 

Landon Winkelvoss: Well, business leaders are increasingly saying, well, I'm not going to rely on the government to respond for me. So therefore, I have to have more visibility outside of my perimeter into what threats are doing and how they're actually going to impact loss, and they're ultimately becoming a lot smarter. I mean, it's certainly a crawl-walk-run approach, right? And you have to understand what the problem is you're solving, what we need to do to address it and then, of course, actually start going and collecting (inaudible) - to what, you know, is comfortable - and, ultimately, address those types of threats, you know? I think if you put this in terms of actually monetizing risk - the most mature security teams that we work with have a dollar loss that they are accountable to prevent, right? Now, of course, look; let's be honest. That's probably Fortune 50 security teams. But everybody ultimately has a north star they want to get to. When you get to that level, that's when intelligence teams really, really shine to the board. 

Rick Howard: CEOs and, you know, the senior staff of the company could use intelligence teams for much more than tracking the latest vulnerabilities. It'd be things like tasking the intelligence team to go find the most impactful systems that make the business money. Do we know what those are? That's kind of above and beyond what a typical cyberthreat intelligence team does. But in my mind, that team should be the experts on all of that stuff. 

Landon Winkelvoss: Yeah, absolutely. And I think you're starting to see that. One of our clients has a person tasked to do exactly what you're describing. They kind of sit in between cyberthreat intelligence, the SOC and the physical security team, which is in charge of executive protection. And they actually do do that. They look at, OK, what are the business lines we need to go into? What are the risks of doing that? What does market entry look like if we do that? They're almost, you know, business experts as much as they are risk management experts. 

Rick Howard: Yeah, exactly. 

Landon Winkelvoss: I've been starting to see that. Yup. 

Rick Howard: After the break, Landon and I will get into the tactics, the how, of running an intelligence team. We're going to discuss the traditional military intelligence lifecycle modified for the commercial sector. Come right back. 

Rick Howard: Cyberthreat intelligence came out of the old military channels, right? And so in the early days, many of the cyberthreat intelligence people were prior military. And that's what I meant - that kind of system is kind of unfamiliar to business leaders who have no connection to the military. Do you find that senior leaders use the intelligence lifecycle? Meaning, they tell the intelligence team, here are some things we want you to go figure out, and then let them go through that cycle to answer those questions? Or... 

Landon Winkelvoss: Oh, I mean, absolutely, right? So let's kind of peel that back quite a bit, right? Let's look at the intelligence cycle as a whole. The first thing is planning and direction. Within planning and direction, you have to identify those priority intelligence requirements and what the business functions ultimately want to do to be successful. Within the security apparatus, you're either going to block better, or you're going to enrich and maximize pain so that it's more costly for the adversary, right? 

Landon Winkelvoss: And working with the business leaders to ensure that they don't have loss in that regard - that's the planning and direction stage, and it's so critical. And of course, that's always moving and changing. Even on the enrichment part - and this is more advanced security teams - they really go as far as to say, OK, how can we maximize pain and make it more costly for the adversary, right? So that's, like, getting into network disruption, blowing their cover, getting into attribution, sharing with law enforcement and policymakers, sharing with researchers, you know, warning victims. 

Landon Winkelvoss: You can't do any of that without actually talking to stakeholders, talking to legal, talking about what your outcomes are going to be, because with intelligence, at the end of the day, you're selling outcomes, right? And you're selling outcomes to the business that ultimately maximize value, reduce risk and let them do their job and make more money. So that's very much the planning and direction phase. 

Rick Howard: You mentioned priority intelligence requirements. That's an old military term from intelligence analysts. But the one above that I always use is commander's intelligence requirements. In the business world, I would call them CEO intelligence requirements. These are bigger-picture things. These are the 10 or 15 questions that the boss has about the business and about threats to the business. 

Landon Winkelvoss: Correct. 

Rick Howard: And they shouldn't change that much. You know, they're not changing every day, right? And then from those, the intelligence team would break each one down into smaller problems, priority intelligence requirements, and attempt to start answering those. Is that... 

Landon Winkelvoss: One hundred percent. I mean, at the very top level, let's talk about what CEOs care about, right? CEOs care about a lot of things, but if we're keeping it simple, they care about sales. So everything that filters down from that - you know, let's call it the CEO's intent, the commander's intent - is very much a strategy. It's either a growth strategy or a profitability strategy or some combination of both. And within that, of course, you have questions like, OK, how do we elaborate on our products or build more products? 
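
To make the requirements breakdown Rick and Landon describe a little more concrete, here is a minimal sketch in Python. The CIR and PIR wording and the collection sources are hypothetical examples invented for illustration, not anything discussed in the episode:

# A minimal sketch of the requirements hierarchy described above: a CEO
# intelligence requirement (CIR) broken down by the intel team into
# priority intelligence requirements (PIRs). All wording is hypothetical.

from dataclasses import dataclass, field

@dataclass
class PIR:
    question: str                                       # smaller, answerable question
    sources: list[str] = field(default_factory=list)    # where we would collect

@dataclass
class CIR:
    question: str                                       # big-picture question from the boss
    pirs: list[PIR] = field(default_factory=list)

cir = CIR(
    question="What is the probability of material impact to the business from a cyber event this year?",
    pirs=[
        PIR("Which systems generate the most revenue, and who targets them?",
            ["internal telemetry", "commercial CTI feed"]),
        PIR("Which criminal groups are active against our sector right now?",
            ["vendor reporting", "open web monitoring"]),
    ],
)

for pir in cir.pirs:
    print(f"{cir.question}\n  -> {pir.question} (sources: {', '.join(pir.sources)})")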

Rick Howard: So now we have these questions that the boss wants, and the intel group has broken them down into smaller, simpler questions. What happens next? 

Landon Winkelvoss: So after they ultimately have a plan - the planning and direction, then they have to go to the data collection phase. 

Rick Howard: Right. 

Landon Winkelvoss: And this is mostly - you know, obviously there's going to be an internal telemetry type of discussion as well. 

Rick Howard: Could be. Yeah. Yeah. 

Landon Winkelvoss: But then, of course, within any cyberthreat intelligence team, there's very much an external data type of discussion... 

Rick Howard: This is telemetry from the network? 

Landon Winkelvoss: This is telemetry. You're right. 

Rick Howard: Well, that's what I'm saying because you get those questions and you - I think the first step for an intel team is to - do I have the data at my fingertips to answer those questions? And if I do, then I go answer them. But if I don't, I got to go... 

Landon Winkelvoss: Correct. 

Rick Howard: ...Seek sources that will... 

Landon Winkelvoss: Correct. 

Rick Howard: ...Help me answer them. One of those two. 

Landon Winkelvoss: That's exactly - I mean, that's - you're hitting the nail on the head. 

Rick Howard: Right. 

Landon Winkelvoss: And of course, a lot of this comes down to cost, right? I mean, look... 

Rick Howard: Sure. 

Landon Winkelvoss: ...If you have $1 million to go collect all this data and hire five analysts, great. Wonderful. Right? If you have less than that, you might have to take an approach where you're buying into maybe a couple of feeds and having a group of analysts, internal or external, who ultimately have that data to go answer those types of questions. 

Rick Howard: Well, you mentioned that social media feeds are one source of intelligence. Most organizations that I know don't have the resources to track that down. So if that's a requirement for answering the boss's question, one thing you might do is seek out a commercial intelligence firm to go get that for you, right? 

Landon Winkelvoss: Well, absolutely. And I think that brings up a good point, which is that there needs to be a fundamental definition change for open-source intelligence. I listened to one of your previous guests, and they said, you know, you can't trust open source as your primary source. 

Rick Howard: Oh, yeah. That's a good point. I remember him saying that. 

Landon Winkelvoss: He's not wrong. If you define open source as social media and what is considered the open web, I would probably agree with that, right? But if you redefine open source as threat actor engagement, plus regular open source like the data listing tools you speak of, plus that external network telemetry - in military terms, that is very much the online portion of human intelligence, signals intelligence and GEOINT, if you're able to collect it and aggregate it. And that can certainly help in solving business problems. So I think there needs to be almost a recalibration of what open-source intelligence truly means. 

Rick Howard: The guy who was mentioning that was Bob Turner. He's now the advisory CISO at Fortinet, but he was also the CISO at the University of Wisconsin-Madison before that. He and I argue about this all the time, right? I wouldn't put intelligence sources in tiers, like primary and secondary. What you really should do is ask, how much confidence do I have in the source? Even if the source is really good, you may not be 100% confident - maybe you're just 95% confident. But a Twitter feed - maybe you're 35% confident in that. And you need to reevaluate those sources all the time, right? 
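
Rick's point about scoring confidence per source, rather than ranking sources as primary or secondary, can be sketched in a few lines of Python. The source names and numbers below are illustrative assumptions, not figures from the episode:

# Illustrative only: keep a confidence score per intelligence source and
# revisit the score as reporting is confirmed or contradicted over time.

sources = {
    "vendor malware telemetry": 0.95,   # very good, still not 100%
    "dark-web forum chatter": 0.60,
    "random Twitter feed": 0.35,
}

def reevaluate(confidence: float, confirmed: bool, step: float = 0.05) -> float:
    """Nudge a source's confidence up or down and clamp it to [0, 1]."""
    return min(1.0, max(0.0, confidence + (step if confirmed else -step)))

# A report from the Twitter feed checked out, so its score rises slightly.
sources["random Twitter feed"] = reevaluate(sources["random Twitter feed"], confirmed=True)

for name, confidence in sources.items():
    print(f"{name}: {confidence:.2f}")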

Landon Winkelvoss: Hundred percent agree. Agreed. 

Rick Howard: So now we have the data, and we either got it from commercial sources or we have it internally. 

Landon Winkelvoss: That's correct. 

Rick Howard: Then what? What happens next? 

Landon Winkelvoss: Then you have to process, right? That's the processing stage. And this is where you really need an analyst workspace, a work environment, to ultimately be successful. At the end of the day, an analyst needs to be able to do what they need to do, and they're going to do it through two main mechanisms: they're either going to be monitoring or they're going to be investigating, right? 

Rick Howard: I always found it useful at this stage to design the intelligence product that the intelligence team is working on, as opposed to just collecting data and sifting through it. What is the ultimate thing they're trying to produce? They're trying to produce a report for some leader to make a decision with, right? 

Landon Winkelvoss: A hundred percent. I mean, that's exactly it. That's the final stage - right? - that analysis and production. That's the final wheel, if you will. And they're also producing a report that somebody is reading, that somebody can say, OK, I can go, you know, make a decision on this. 

Rick Howard: Right. 

Landon Winkelvoss: And that's kind of the name of the game. 

Rick Howard: Yeah, because the difference between reports where you don't make decisions and reports where you do make decisions is the difference between reading the newspaper... 

Landon Winkelvoss: Right. 

Rick Howard: ...And having an intelligence team, right? 

Landon Winkelvoss: Yeah. I mean, I think that's a problem we see frequently in the threat intelligence space. You probably know this from your past life. Sometimes threat intelligence is just that editor-in-chief model, where it's just kind of, OK, it's interesting that there's this vulnerability, or there's this zero-day out there, or this actor is doing this. But what do I need to do about it? How does that matter to me? 

Rick Howard: So that completes the cycle. We go around the threat intelligence cycle. We've done all that work. But one of the most important pieces of that is going to the leader who's making the decision with the report and finding out if it's working for him because if it isn't, you should stop and redesign that report or figure out if that's really needed anymore, right? And that's a step that most people don't even get to, I think. 

Landon Winkelvoss: That gets to the outcomes I referenced in that planning and direction phase. What are the outcomes that we want, right? Great, you want to put in controls. Awesome. Check. That's going to go to the engineering team, probably, or the security engineering team. Or is this at such a scale that we need to have a disruptive output, whether that's cease-and-desist letters, blowing their cover somehow, working with law enforcement - right? - doing those types of risk mitigation measures that actually have outcomes? That is the hardest part, and you'd be surprised how many organizations, very large organizations, can't do that very well or are just at the beginning stages of doing it. And that's really where great security leaders like yourself, Rick Howard, have been involved in those types of decisions with executives to ultimately fuel that outcome. 
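
Pulling the pieces of the conversation together, here is a minimal Python sketch of the commercial intelligence lifecycle as Rick and Landon walk through it: planning and direction, collection, processing, analysis and production, and dissemination with feedback to the decision maker. The phase names follow the discussion; everything else is a placeholder:

# A sketch of the lifecycle discussed above. The ordering and the feedback
# loop back to planning are the point; the function body is a placeholder.

from enum import Enum, auto

class Phase(Enum):
    PLANNING_AND_DIRECTION = auto()      # turn CIRs into PIRs, agree on outcomes
    COLLECTION = auto()                  # internal telemetry plus external or commercial data
    PROCESSING = auto()                  # analyst workspace: monitoring and investigating
    ANALYSIS_AND_PRODUCTION = auto()     # produce a report a leader can decide with
    DISSEMINATION_AND_FEEDBACK = auto()  # ask the decision maker whether the product worked

def run_cycle(pirs: list[str]) -> None:
    for phase in Phase:
        print(f"{phase.name}: working {len(pirs)} requirement(s)")
    # Feedback feeds the next planning round; if the report is not driving
    # decisions, redesign it or retire it.

run_cycle(["hypothetical PIR: which actors are targeting our payment systems?"])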

Rick Howard: We were talking about particular questions that the CEO would ask the intelligence team. And what I'm coming around to, after thinking about it for the two years I've been here at the CyberWire, is that the intelligence team is really the perfect group to do cyber risk forecasting for the organization. 

Landon Winkelvoss: Think about what you solve in the cyber landscape, right? You're solving takedowns, account takeovers, incident response, vulnerability management, security engineering, fraud, brand protection, data leakage, security ratings, threat hunting, asset discovery, shadow IT, vendor due diligence, restructuring advisory, IOCs, breaking news and forensics investigations. I mean, that's a lot to cover. 

Rick Howard: That's a lot. That is a lot. 

Landon Winkelvoss: That is a lot of problems to solve. So when you ultimately think about who is in the best position to solve a lot of that, you know, for the business, it absolutely is the intelligence team. Totally agree. 

Rick Howard: We were talking about building intelligence products that will help leaders make decisions, right? What I think my bosses want me to tell them is, where do we prioritize resources? Do we improve our defenses over here, or do we use that money to do something else? And the way you do that is by being able to tell them what the risk is to the business, right? I've been doing this for a long time. You've been doing this a long time. And the models we've had to do that in the past aren't that good. You know, I've said this in previous episodes, but I've gotten away with using heat maps, where we list all the things that could possibly go wrong on one axis and how bad they're going to be on the other. And because I'm pretty good with a spreadsheet, I can color code it, so all the stuff high and to the right is red. Then I go into a board meeting and say, all this red stuff is really scary; you should give me a gazillion dollars to, you know, fix it. Sometimes that has worked for me, and sometimes it hasn't. But we never really give the leadership a chance to decide if that's part of their risk profile. Are they OK with those risks, or do they want me to do something specific? That's why I think that question - being able to calculate the probability of material impact to our organization - should be squarely in the intelligence team's purview. 

Landon Winkelvoss: It should be in the intelligence team's purview. The reality is, it often is not, and it's not necessarily the intelligence team's fault. 

Rick Howard: Right. 

Landon Winkelvoss: I think you mentioned two things. They're very different - the probability, and then what's actually happened. So I've seen security leaders walk in - you've probably done this - with a red team penetration test. And you ultimately say, OK, here are all the bad things that could happen. You even sometimes orchestrate it so the person you spear phished is that critical, you know, procurement person who's working a big deal with a partner, right? If that information got out, it's going to be catastrophic to the business. Let's say you orchestrate the red team to do just that and show that vulnerability. Great. Awesome. And you walk that in. Then - and I've gotten this question back from CEOs when we're presenting these types of things - what's the probability that can happen, and where are we relative to our peers on that? And I don't think the security industry has really nailed that question. 

Rick Howard: That would be an understatement, especially in my career. I've been wrestling with forecasting cyber risk for the past decade. And like I said, I've used heat maps and fear, uncertainty and doubt - FUD - to convince boards and senior leadership teams to fund my special security projects in the past, but I was never satisfied with the approach. Still, I kept plugging away at it. I've read all the books, and some of the authors are now good friends of mine, but it wasn't until last year that I had a breakthrough and figured it all out. We did three entire episodes on how to do it back in Season 10. The method uses a combination of superforecasting techniques and Fermi estimates and is held together mathematically with Bayes' rule. Essentially, make a best-guess estimate about the probability of material impact to your organization due to a cyber event, and then keep refining that estimate with new evidence over time, like an outside-in analysis of the probability that any organization - not just yours, but any - will be materially impacted by a cyber event. 

Rick Howard: And by the way, it might surprise you how low the odds are here. I know I was surprised when I did the analysis. The probability does grow significantly based on how big your organization is in terms of revenue, but in the general case, the odds that any given organization will be materially impacted by a cyber event are quite small. And then you do an inside-out analysis, where you estimate how well your organization has deployed the first-principle strategies and tactics we've outlined in this podcast. And Landon is right. When you demonstrate through a red team exercise a specific weakness in the defensive posture and the CEO asks you the probability that this kind of thing would happen in the real world, you absolutely have to give the boss a real number that is just precise enough to make resource decisions with. And that's where the intelligence team comes in. The world is much bigger than collecting news about the latest attack campaigns from the likes of WICKED SPIDER. The intelligence team's real value is assessing the probability that WICKED SPIDER will materially impact your organization. 
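
Rick's Season 10 method - an outside-in base rate refined with inside-out evidence using Bayes' rule - can be shown with a single update step. Every number in this Python sketch is a hypothetical placeholder, not an estimate from the show:

# Hypothetical sketch of one Bayes update in the risk-forecasting approach
# described above. All numbers are illustrative placeholders.

def bayes_update(prior: float, p_evidence_if_impact: float, p_evidence_if_no_impact: float) -> float:
    """Return P(material impact | evidence) given a prior and two likelihoods."""
    numerator = p_evidence_if_impact * prior
    denominator = numerator + p_evidence_if_no_impact * (1.0 - prior)
    return numerator / denominator

# Outside-in: assume a low base rate that an organization like ours is
# materially impacted by a cyber event this year.
prior = 0.02

# Inside-out: the red team walked through our defenses. Assume that result is
# three times as likely in organizations that go on to suffer material impact.
posterior = bayes_update(prior, p_evidence_if_impact=0.60, p_evidence_if_no_impact=0.20)

print(f"Refined forecast of material impact this year: {posterior:.1%}")  # about 5.8%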

Rick Howard: So in terms of cybersecurity first-principle thinking, cyberthreat intelligence, or CTI, and a subset of it, open-source intelligence, or OSINT, are tactics security practitioners can use to support the strategy - the what we are trying to do - of intrusion kill chain prevention. Now, not every organization has the resources to pursue that strategy and the necessary tactics. It's a big lift in terms of the people, process and technology triad that will be necessary to run a good team. But by the time your organization grows large enough to get noticed by the roughly 150 nation-state groups tracked by the MITRE ATT&CK Wiki and the estimated 100 cybercrime groups tracked by various other security vendors in the world, trying to prevent the success of specific cyber adversaries like WICKED SPIDER will have a much bigger impact on reducing the probability of material impact to your organization due to a cyber event than the passive measures you might get from other strategies like zero trust, resilience and automation. I mean, somebody has to destroy the Death Star. But even the larger organizations that have these teams will likely not have all of the collection resources they need to answer the CIRs, the CEO's intelligence requirements. And that means they will most likely be using third-party vendor intelligence teams to supplement their collection effort. It's complicated, but that's the game. And as your organization grows in terms of revenue and people, the strategy of intrusion kill chain prevention and the tactics of CTI are the logical next step in protecting your enterprise. 

Rick Howard: And that's a wrap. I want to thank Landon Winkelvoss, the co-founder and VP of content at Nisos, for helping us learn more about cyberthreat intelligence. Next week, specifically for my good friend Steve Winterfeld, we're going to look back to all the prior research that got us to think about cybersecurity first principles. 

Steve Winterfeld: Rick, nobody cares about the history. Let's move on. 

Rick Howard: You don't want to miss that. The CyberWire's "CSO Perspectives" is edited by John Petrik and executive produced by Peter Kilpe. Our theme song is by Blue Dot Sessions, remixed by the insanely talented Elliott Peltzman, who also does the show's mixing, sound design and original score. And I am Rick Howard. Thanks for listening.