CSO Perspectives (Pro) 7.27.20
Ep 15 | 7.27.20

Security operations centers around the Hash Table.

Transcript

Rick Howard: Hello. Hello, everybody. Rick Howard here. In this episode, I want to introduce you all to the CyberWire's Hash Table. While we were on break, I asked a bunch of my old friends, respected peers and just really smart people if they would be willing to routinely come on the show, sit around the CyberWire's Hash Table and discuss topics pertinent to the network defender community. The benefit to all of you, the listeners, is that you get a broader view instead of just listening to me blather on about the theory of intrusion kill chains, zero trust and the like. You will get to hear from practitioners in the field tell you how they are actually implementing those theories or, even better, calling me out on how wrong I am when it actually comes down to implementation.

Rick Howard: As an example, I had Helen Patton - she's The Ohio State University CISO - sit down with me at the Hash Table to discuss zero trust. I told her that I thought it was imperative that we identify the material information in our organization, the data that - if it would get out to the public or if some hacker destroyed it - would have a negative material effect to our company and then limit access to that data only to the employees who absolutely need access to it and no others. 

Rick Howard: Helen laughed out loud at that. She told me that she wished it was that simple at the university. She told me that she has university researchers who go through various modes of zero trust requirements, depending on the stage of their research. When they are doing general purpose research, the researchers want many people to have access to it. But once the researchers get an idea about how to possibly solve some problem, they need to reduce the set of people who can actually see the data for fear that somebody would steal it and publish before they did. The restrictions get even tighter when it is time to get a patent. But once the patent is achieved, then they want to open up everything again, open up all the access to everybody. That is a complexity that most of us don't have to deal with. And having people like Helen come to the Hash Table to give us their insight on these kinds of things will be invaluable. In this episode, we are talking about security operations centers, and I think you will learn a lot from our Hash Table guests about the subject.

Rick Howard: My name is Rick Howard. You are listening to "CSO Perspectives," my podcast about the ideas, strategies and technologies that senior security executives wrestle with on a daily basis. 

Rick Howard: In the last episode, I made the case that the key bricks in our first principle infosec wall - like defensive adversary campaigns, zero trust, DevSecOps, cyberthreat intelligence and others - should all be managed from a central operations center within an organization. For this episode, I have invited the CyberWire's pool of experts to sit around the Hash Table with me and discuss how practical that really is and to discuss the skill sets involved for SOC analysts in this ever-changing environment. 

Rick Howard: Don Welch is an old Army buddy of mine. He and I taught computer science together at the military academy back in the 1990s. He's currently the interim CIO for Penn State University and has been the CISO there and at Michigan State in a previous life. But as a former infantry officer, he brings an adversary mindset to cyberdefense. 

Don Welch: When you're talking about combat, you have an adversary who is thinking - adversary who you are trying to outwit for your advantage. And you have - you know, in the case of the infantry, you have terrain and an environment in which you take your resources and try and allocate them to have an advantage against the - against your adversary, whether it be offensive or defensive. In cyberspace, we're trying to outthink our adversaries. We have a limited amount of resources. We have this technical environment. And so the principles are the same. Obviously, the kinds of resources and the terrain in which we do battle is very different. But, philosophically, still, you're - as a leader, you are trying to stay up with or hopefully outthink your adversaries. 

Rick Howard: Bob Turner is an old retired Navy guy. As an old retired Army guy myself, I will try not to hold that against him. He is currently the CISO at the University of Wisconsin at Madison, but he brings the same adversary mindset to the problem that Don Welch does. He has devised a defensive playbook for his team to follow as he anticipates what the adversary will try to do to get into his networks. His security stack includes Suricata, an open-source intrusion detection system that his team uses to decide which defensive plays to run. 

Bob Turner: We have two different SQL plays. One is run using Suricata just to kind of, you know, see what's going on out there. We like that one the best because we can actually replay that and make sure that it wasn't injectable. And then we'll run the automated play with the Palo Alto and, you know, look at the stuff that is going east and west, as well as the stuff that went north-south that we picked up with Suricata. And we will also check with the system owner and let them know, hey, we saw this; can you tell if anything happened? And then we'll - you know, we'll check logs, and we'll check signals with the other distributed IT organizations. And just to make sure that, you know, it didn't happen, we also run tests. It's one of the things our intern group, our student worker group, does for us - they actually know how to go through and check to make sure that things were either injectable or not. 
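
To make Bob's Suricata-driven play concrete, here is a minimal sketch of the kind of glue script a student worker might use to skim Suricata's EVE JSON alert log for SQL-injection-style hits before replaying them against a test system. The log path and the keyword filter are illustrative assumptions, not anything Bob's team necessarily runs.

# Minimal sketch: skim Suricata's eve.json for web-attack alerts so an analyst
# (or student worker) can replay the flagged requests in a test environment.
# Field names follow Suricata's EVE JSON output; the file path and the
# "SQL" keyword filter are illustrative assumptions.
import json

EVE_LOG = "/var/log/suricata/eve.json"  # assumed default location

def sqlish_alerts(path=EVE_LOG):
    """Yield alert records whose signature text suggests SQL injection."""
    with open(path) as fh:
        for line in fh:
            try:
                event = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip partial or corrupt lines
            if event.get("event_type") != "alert":
                continue
            signature = event.get("alert", {}).get("signature", "")
            if "SQL" in signature.upper():
                yield {
                    "signature": signature,
                    "src_ip": event.get("src_ip"),
                    "dest_ip": event.get("dest_ip"),
                    "url": event.get("http", {}).get("url"),
                }

if __name__ == "__main__":
    for hit in sqlish_alerts():
        print(hit)  # hand these to the replay/validation play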

Rick Howard: Don and Bob's kill chain thinking is one side of our first principle infosec wall. On the other side is a strategy that does not tie back to how the adversary operates. This other strategy is about reducing the attack surface, and it is called zero trust. And many CISOs I've talked to in the past year have started their own journey down the zero trust path. Here's Kevin Ford, the state of North Dakota's CISO, describing his current zero trust situation and where he wants to take it in the near future. 

Kevin Ford: So we do have zero trust methodologies built into our data centers. Without going into too much detail, yes, we have it. We use a microsegmentation philosophy, where we have zero trust baked into security appliances that exist within the data center. You know, eventually, I'd like to expand on that. I'd like, you know, to have mutual TLS everywhere so that we're authenticating at that level, too. And then also, you know, application-based and port knocking is also a great technology, and I've seen some technologies even that go beyond port knocking and do some pretty slick things. So I think it's a really, really promising security philosophy, and I think it's quickly maturing. So I'm very excited to see where all this different technology goes. 
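
As a rough illustration of what Kevin's "mutual TLS everywhere" goal means in practice, here is a minimal sketch of the server side of a mutually authenticated connection using Python's standard ssl module. The certificate file names and port are placeholders, and a real deployment would also handle certificate rotation, revocation and error handling.

# Minimal sketch of the server side of mutual TLS: the service only accepts
# clients that present a certificate signed by the internal CA. Certificate
# paths and the port number are placeholders for illustration.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="service.pem", keyfile="service.key")
context.load_verify_locations(cafile="internal-ca.pem")
context.verify_mode = ssl.CERT_REQUIRED  # this is what makes the TLS "mutual"

with socket.create_server(("0.0.0.0", 8443)) as sock:
    with context.wrap_socket(sock, server_side=True) as tls_sock:
        conn, addr = tls_sock.accept()
        # At this point the client has proven it holds a key signed by our CA;
        # the peer certificate is available for further authorization checks.
        peer_cert = conn.getpeercert()
        print(addr, peer_cert.get("subject"))
        conn.close()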

Rick Howard: For zero trust, though, it is one thing to try to limit employee access based on role. But the frustration that some CISOs have is that zero trust provides no context about specific attacks in the sense of a classic insider threat. Here's Helen Patton. She's The Ohio State CISO I mentioned at the top of the program. 

Helen Patton: Well, zero trust is certainly one of the things in the toolbox, right? Because forget insider threat, especially as we've got more and more remote, more and more cloud, more and more mobile. I don't see how you'd get away from zero trust as a valid strategy for defense. The challenge I have with insider threat is that they look like - it's a wolf in sheep's clothing. 

Rick Howard: What I've been hearing from CISOs is their frustration that a zero trust strategy provides no context about this insider threat. It produces generic controls designed to protect against a generic adversary. Once insiders get access, though, either as legitimate employees who have a grudge against the organization or who have been turned by some outside organization, whatever they do on the network appears normal. Here's Helen again. 

Helen Patton: They look legitimate. They are legitimate in certain contexts, but in certain contexts they're not. And being able to do contextual role-based access is - as it relates to motivation is really hard. I can do contextual role-based access based on location or job or state of end point or those kinds of things, but I can't - really can't do it based on - today I'm a good guy, and tomorrow I've been turned and I'm now working for the other side. And, you know, I'm not sure that I'll ever be able to solve for that. 
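
Helen's point becomes clear if you sketch what a contextual, role-based access decision can actually evaluate. In the hypothetical policy check below, every attribute is observable - role, location, device state - and none of them captures motivation, which is exactly the gap she is describing. The attribute names and policy values are invented for illustration.

# Minimal sketch of contextual (attribute-based) access control: the decision
# uses observable attributes only -- role, location, device posture. The policy
# is hypothetical; nothing here can tell a loyal employee from one who was
# "turned" yesterday, which is Helen's point.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    role: str             # e.g. "researcher"
    location: str         # e.g. "campus", "vpn", "coffee-shop"
    device_managed: bool  # endpoint state reported by device management
    resource: str         # e.g. "research-data"

POLICY = {
    # resource -> attributes required for access
    "research-data": {
        "roles": {"researcher", "research-admin"},
        "locations": {"campus", "vpn"},
        "require_managed_device": True,
    },
}

def allow(req: AccessRequest) -> bool:
    rule = POLICY.get(req.resource)
    if rule is None:
        return False  # default deny: unknown resources are off limits
    return (
        req.role in rule["roles"]
        and req.location in rule["locations"]
        and (req.device_managed or not rule["require_managed_device"])
    )

print(allow(AccessRequest("researcher", "campus", True, "research-data")))       # True
print(allow(AccessRequest("researcher", "coffee-shop", True, "research-data")))  # False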

Rick Howard: I mentioned to Helen that the original zero trust white paper, written by John Kindervag back in 2010, suggests that a mature zero trust program assumes the bad guys are already inside your network and that network architects should design their environments with that in mind. 

Helen Patton: I think that's a nice idea with no practical reality. 

(LAUGHTER) 

Helen Patton: With all respect to John. 

Rick Howard: And that, my friends, is where theory developed in white papers and essays and podcasts smashes at great velocity, head on, against the situation in the real world. 

Rick Howard: All the Hash Table members that I talked to about their SOC operations have embraced the idea of automating their repetitive SOC tasks. I wouldn't call it a DevSecOps adoption across the board, but most believe that more SOC automation is a no-brainer. It turns out, though, that being a CISO of a university might give you a bit of an advantage because you have a ready-made pool of students looking for real-world network defender experience to draw from. Some are even studying computer science and learning how to code. Here's Bob Turner again, the retired Navy guy and CISO at the University of Wisconsin at Madison, talking about his student interns. 

Bob Turner: So I have a SOC team which is basically divided into two distinct organizations. One is incident response, and they are the - you know, your standard incident responders. Most of the full-time staff are Tier 2 and Tier 3 responders. And then we have what I call our secret weapon. We have about 20 student workers that are working in the SOC, helping with not only incident response but also the second group in there, which we call testing and cyberdefense, which is really vulnerability management and the tools of the trade. We have students pretty much from the computer sciences world, but we have folks that are - one's a materials engineering student. We had another one who was a legal studies major. We've had English majors in there. We've had folks that were inside of the business school arena. You know, it pretty much crosses the whole spectrum, but mostly from the comp sci disciplines. 

Bob Turner: We pay probably just a little bit more than they would be paid as a student worker, say, in the division of student life or one of the vendors inside of the union organization or elsewhere on campus. We do that specifically because we think that this work is important, but it also shows them that this is an area where you might get a little bit more than just the average, say, IT worker. 

Rick Howard: And Helen Patton brought up an interesting point about her student interns and how they are plugged into what is happening with the current campus culture. 

Helen Patton: Yeah. And they're super creative. And (laughter) the nice thing is they're clued in, usually, to the students on campus that would do silly things. So they're white, black and grey hats all in one. It's all great. 

Rick Howard: With all the CISOs I've talked to over the years, most agree that the intelligence function is an absolute requirement for a SOC. Intelligence is the fuel that drives the entire operations center. Many, though, have to get creative to perform that function. For Don Welch over at Penn State, his team hasn't grown big enough to require a dedicated intel team, but they work hard to collaborate and share with their peer universities. 

Don Welch: The intel function is really spread somewhat across our team. Higher ed is, I think, different than most industries in that we collaborate much more and much more openly than others do. And we're fortunate to be a member of the Big Ten, and the Big Ten has over a 100-year tradition of universities collaborating on things other than athletics. And so our security people get together in person three or four times a year, have monthly calls. We have, you know, an email list. And we share threat intelligence pretty openly. We're also a member of the REN-ISAC, which is the Research and Education Network ISAC, so we get a lot of information sharing there about the threats. 

Rick Howard: But he is pretty fed up with the low signal-to-noise ratios he gets from open-source and commercial intelligence feeds. 

Don Welch: So we've tried some threat intelligence tools and feeds, and we haven't found them to be quite worth it, as opposed to what we get through the various sources that we have access to through those higher ed collaborations. 

Rick Howard: At Ohio State University, Helen Patton relies on the vendors she has chosen for her security stack to track down known adversaries, and this is a great solution for organizations that don't have the resources to track adversary campaigns themselves. She seeks vendors who can do that for her. Where she is stretched, though, is dealing with ongoing attacks directed specifically at her university. 

Helen Patton: Well, again, we're partnering where we've got vendors who are giving us our threat intel feeds, and they'll take a look at that. But we're getting attacked by everybody everywhere (laughter). I mean, you know, just to give you a public example, we all know that COVID researchers are highly attacked at the moment by various nation-states. And we're a research university; we have COVID researchers. So the question isn't so much around what the campaign of the day is so much as is it an MO that we've seen before? So just trying to identify what are the vectors that they go through, what's the kill chains that they like to use and then being able to make sure that we've got standardized controls that would address those pieces of the pie - otherwise, it's not scalable. Otherwise, you're just playing whack-a-mole. 
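
Helen's "standardized controls" approach can be sketched as a simple coverage check: map the kill chain phases a known MO uses against the controls already in place and flag the gaps. The control inventory below is a hypothetical example for illustration, not her actual mapping.

# Minimal sketch of mapping kill chain phases to standardized controls and
# checking an observed MO for coverage gaps. The control lists are hypothetical.
KILL_CHAIN_CONTROLS = {
    "reconnaissance": ["external attack-surface monitoring"],
    "delivery": ["email filtering", "web proxy"],
    "exploitation": ["endpoint protection", "patch management"],
    "installation": ["application allow-listing"],
    "command-and-control": ["egress filtering", "DNS monitoring"],
    "actions-on-objectives": ["data-loss prevention", "least-privilege access"],
}

def coverage_gaps(observed_phases):
    """Return the phases of an observed MO that have no mapped control."""
    return [p for p in observed_phases if not KILL_CHAIN_CONTROLS.get(p)]

# Example: the phases an MO seen against other research universities relies on
mo = ["delivery", "exploitation", "command-and-control", "actions-on-objectives"]
print(coverage_gaps(mo))  # an empty list means every phase is covered by something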

Rick Howard: As a state CISO for North Dakota, Kevin Ford has an additional SOC intelligence requirement because his organization works for the state government. His SOC, by definition, has a close relationship with state law enforcement and routinely shares intelligence with them. 

Kevin Ford: We have two intel teams. We have a threat intel team, which works with our state and local intelligence center, and they're a group that works mostly with law enforcement. And what they do is they help us speak out to the law enforcement community as well as DHS and the FBI around the things we're seeing on our network. We have, like I said, a very big network, and it's very popular with certain elements of the cyber community. So there is a lot of interest in the things that we're seeing. So they do that as well as they help ingest bulletins, so on and so forth, from DHS. 

Rick Howard: But he also has a traditional intelligence sharing mission. 

Kevin Ford: Because we're state, we work with both the MS-ISAC and the election infrastructure ISAC, as well as a number of other ISACs. So they help us bring those in and ingest them and turn them into things that our SOAR can ingest. 
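
Much of what the ISACs publish is distributed as STIX objects over TAXII, so the glue Kevin describes often looks like a small script that pulls a collection and flattens the indicators into something the SOAR can ingest. Here is a minimal sketch, assuming a placeholder TAXII 2.1 collection URL and credentials; real feeds have their own endpoints, authentication and pagination rules.

# Minimal sketch: pull STIX indicator objects from a TAXII 2.1 collection and
# reshape them into flat records a SOAR playbook could ingest. The URL and
# credentials are placeholders.
import requests

COLLECTION_URL = "https://taxii.example.org/api1/collections/abcd-1234/objects/"
HEADERS = {"Accept": "application/taxii+json;version=2.1"}

def fetch_indicators(url=COLLECTION_URL, auth=("user", "pass")):
    resp = requests.get(url, headers=HEADERS, auth=auth, timeout=30)
    resp.raise_for_status()
    envelope = resp.json()
    for obj in envelope.get("objects", []):
        if obj.get("type") == "indicator":
            yield {
                "id": obj.get("id"),
                "pattern": obj.get("pattern"),  # e.g. "[ipv4-addr:value = '203.0.113.7']"
                "name": obj.get("name"),
                "valid_from": obj.get("valid_from"),
            }

if __name__ == "__main__":
    for record in fetch_indicators():
        print(record)  # in practice, hand these to the SOAR's intake API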

Rick Howard: Security operations centers have been evolving since we started seeing in-house versions of them pop up in the late 1990s and early 2000s. The skill sets to manage them and to work in them have been evolving, too. Here's Don Welch again. 

Don Welch: Well, I think they're moving up the stack. You know, we think about technology overall. You know, when you and I were back just starting out in this field, even VAX minicomputers had operators. You're not quite as old as I am, but I think you were still in that era. And, obviously, you know, now we have one system administrator for, you know, 500 virtual machines, and so we don't need as many people down on the - you know, at the bottom of the stack. And the skills in IT overall are moving up the OSI stack, if you will, even above, you know, the application level. And it's now getting people to use, you know, the software correctly and so forth. 

Don Welch: And I think it's the same thing in security. So a lot of these lower-level skills, we're figuring out ways to automate them and make them better. People need to understand some of the basics of what's going on, but they can focus more on that - the high-level cognitive skills that are necessary. 

Don Welch: So as I've said, you know, we've got more people who have a programming background and understand how to automate things, but they have to know what they're automating, so they need that basic security background. They've got to understand all those fundamentals, just like when I first, you know, went to school. We had to program in assembly language, even though we - if we were lucky, we'd never do it again professionally. But when you understood it, you know, then you could understand more about, you know, how to make a program efficient and so forth and those tools. And I think, you know, that's the case, but the skill level we need is much higher. We need people to understand within that context of the kill chain how to break that up, what the enemy are after, how to work with people. 

Rick Howard: I agree with Don. The evolution of SOC skill sets has been moving away from the purely and deeply technical toward a mix of technical skills and the traditional soft skills, too. 

Don Welch: So when I first came in, we had a very technical team that was focused on the alerts and on the technology and on our security stack, and now we have far more people who spend their time talking to IT people around the university, helping them solve their problems, helping them understand why we need to do this, figuring out, especially, you know, for us, where we have research projects, where we have microscopes, you know, that are running MS-DOS because these microscopes don't wear out, you know, so forth, and all the different kinds of technology that we have around the university. One size doesn't fit all, but people who go out and help people understand how to, you know, fix their own problem and that's - you know, so there's customer service skills, leadership skills, people skills, organizational skills, as well as technology. 

Rick Howard: According to Don and many CISOs, the skill set of SOC analysts has expanded over the last 30 years. But when you are hiring, what should you be looking for? Here's Kevin Ford again, the CISO of the state of North Dakota. 

Kevin Ford: The skills I look for really is, can you learn? That's the biggest thing, right? We say that in IT right now, the half-life of an IT analyst is about 18 months. And what that means is within 18 months, if you learn nothing, half of what you knew is now invalid. And I think the half-life is actually a little shorter for a security analyst just because there's so much new stuff. The thing you have to know to be good at security is just everything about everything, right? You have to know everything about every system. Yeah. No. Right? 

Kevin Ford: So I look primarily for a person who's great at learning. I want someone who has great critical thinking skills and good logic skills in addition to, you know, all of the alphabet behind their name. You know, I balance the two pretty evenly. I want someone who's good at learning who also knows kind of what's going on in the security world broadly.  

Rick Howard: And he is absolutely right. When I interview SOC analysts and intel analysts for a potential job, after I work through the basic job requirements, I always save one question for the end - what computer systems are you running at your house? Because if they are not running a Linux box somewhere, they're not smart enough or curious enough to be on my team. It's not that I think that they have to be an expert on Linux to be part of it. It is just that having a Linux box at your house demonstrates intellectual curiosity and the ability to figure things out on your own. 

Rick Howard: But at this point in SOC analyst evolution, being able to learn on your own is basically table stakes. Kevin has his own special sauce that he is looking for. Now, Kevin cut his teeth as a young network defender at NASA, where he was a master information assurance and security specialist. Here's what he says. 

Kevin Ford: I think the other one is - and I took this from my time at NASA - I'm always looking for, particularly in incident responders, the, quote, unquote, "steely-eyed missile men and women." You know, I'm looking for the astronauts. I'm looking for the people who won't buckle. And generally, those people are the people who are - will have a conversation with you and be very genuine. They won't be afraid to tell you they don't know something. They won't be afraid to tell you, hey, you know, I'll try to do this, but no guarantees because I want people I can trust, not a bunch of yes men and women when we're doing incident response. I want people who aren't afraid to fail. 

Kevin Ford: So that's something I really try hard to instill in the team - is that, you know, don't be afraid to fail. We need to try things, and one of my metrics is hold-your-beer moments. So however many hold-my-beer moments we've had within the SOC in a week, I take more as good, as long as things - you know, as long as smoke's not coming out of the machines (laughter) and something's on fire, right? But if we've tried some pretty interesting things and failed, OK. Well, at least we tried, and now we know. And we've learned lessons, you know? The more lessons you've learned by the time you have, you know, your big event or your big breach, the better off you're going to be. 

Rick Howard: But after all of this, I asked each and every one of these experienced network defenders, these elite CyberWire Hash Table members, these thought leaders in the industry to describe what they thought was the ultimate first-principle purpose of their SOC. In other words, with all the deep thinking they have done about intrusion kill chains, defensive adversary campaigns, insider threats, cyberthreat intelligence, zero trust, SOC automation and SOC analyst skill sets, what exactly did they want their SOC to do? 

Rick Howard: Now, to a person, they all said some version of this - they want their SOCs to detect and respond. But when I pressed them about first-principle analysis of the SOC's purpose, they agreed that the ultimate purpose was much bigger than that. It clearly includes detection and response, but what the SOC should ultimately be striving for is to reduce the probability of material impact to the organization due to a cyber event. We all agree that the SOC evolution has not quite reached that aspirational goal, but it is something that we, as network defenders, should all be striving to achieve. 

Rick Howard: And that's a wrap. Next week, we will be discussing incident response from a first-principle lens. In the meantime, if you agreed or disagreed with anything I have said in the last two episodes about SOC operations, hit me up on LinkedIn, and we can continue the conversation there. The CyberWire "CSO Perspectives" is edited by John Petrik and executive produced by Peter Kilpe. Mix, sound design and the original music by the insanely talented Elliott Peltzman. And I am Rick Howard. Thanks for listening.