Threat Vector
Ep 33 | 9.5.24

Building Bulletproof Security

Transcript

David Moulton: Welcome to "Threat Vector," the Palo Alto Networks podcast where we discuss pressing cybersecurity threats and resilience and uncover insights into the latest industry trends. I'm your host, David Moulton, Director of Thought Leadership. Today, I'm speaking with Chris Tillett, a leader in product management and research and development with extensive experience in developing innovative solutions that address complex cybersecurity challenges. Chris has spent years at the intersection of technology and security, where he has developed a deep understanding of how human behavior and organizational dynamics can impact cybersecurity strategies. His work in R&D has provided him with unique insights into the delicate balance between implementing robust security measures and maintaining operational efficiency. [ Music ] Today we're going to talk about human behavior, organizational friction, and the importance of balancing security measures with operational efficiency. This topic is crucial because in today's fast-paced digital environment, security measures are only as strong as the people who follow them. Organizational friction often arises when security protocols disrupt workflows, leading to resistance and potential vulnerabilities. Understanding how to balance security with efficiency is key to ensuring that organizations remain both safe and productive. Here's our conversation. Chris Tillett, good morning.

Chris Tillett: Good morning, Dave.

David Moulton: Going to start out with a softball question for you today. Why does security suck so much?

Chris Tillett: That's a multifaceted question. But quite frankly, at the end of the day, it comes down to human behavior. We're lazy by nature. We want things easy by nature. We like to flow like water, right? So when you look at something like security, it requires you to have rigor. It requires you to have discipline. It requires you to go in and implement something in a manner that actually causes friction in the organization. So if you don't have the right budget to be able to go do that, if you don't have the right plan in place, or if you're just reactionary, you end up basically causing friction in the organization. You don't really enable the organization. And then the organization doesn't provide the sponsorship that's needed to get the appropriate program in place. So basically, we're coming at it in a fearful way, versus coming in with: this is our plan. This is what we're going to secure. This is how we're going to go secure it. This is what we need to enable the business. When we look at it from that perspective, we can do security really, really well. And it doesn't have to suck. But because humans are lazy and we just want to do the easy thing and get the project done and over with, that's why, ultimately, security sucks. That's why a lot of times you go into an organization and the basics aren't even done. They don't even know what assets they own.

David Moulton: What is it that makes it impossible to get right?

Chris Tillett: It's not impossible to get right. The desire is there. But in many cases, there's too many things in the way. People start off with good intentions. They want to do the right thing, and then there's all these other organizational roadblocks that sometimes create the reason why they can't.

David Moulton: So I show up for work. I just want to get my job done. I'm working on something. I'm logged out, logged out automatically sometimes for my own benefit. But I don't perceive that as a great benefit. I'm frustrated and slowed down. I'm looking for ways to get around it. So kind of my human nature. Right? I'm looking for something that's frictionless, delightful, even, not necessarily describing security here. And that seems like that's tricky to find that right balance. What else is causing security to just fall down?

Chris Tillett: I also think they're over-rotated on the tech side and not looking at the actual humans that are in the program. So one of the things when I talk about security operations is, who are the humans that you have? What are their skills and capabilities? Also, what's realistic for your organization? You can have all these lists from an RFP of things that you want to actually do. Are you actually going to accomplish that? It sounds so good, but when you actually come to implementing it, is that 18 months away? Is that 24 months away? Or are you just buying the feature and hoping that later on, you'll actually turn it on? So, unfortunately, hope is not a strategy. But in many cases, you see organizations design their security programs around hope. We're going to put this one agent everywhere, and we hope it never gets turned off. We're going to use this cloud provider, and we hope they never go down. We can't afford to do business continuity planning, so we hope that the Northeast AWS data center in Reston, Virginia never goes down. It's a lot of hope.

David Moulton: That's a lot of hope. You're reminding me of a book I'm reading right now. It's called "Alchemy," and it's a pushback against using economics models, logic models, to solve everything. And the author talks about this idea of logical versus psycho-logical. Breaks those into two terms but one idea. And one is the context of the human condition, and one is the narrow view of if all of these things are true, then this model works. And we accept that. It's logical. It's safe. But then when reality hits and it doesn't work, we just scratch our heads and go, oh, we must not have done a good enough model. And I think we looked at the wrong thing. And I hear you when you're talking about security being tough to implement. We're hoping for something. It's not really taking into reality all the context that we're operating in as an organization, as a skill set, as the workforce that wants to get something done and is looking at that little bit of friction, those little cuts every single day, every single week that add up and going, you know what, if I just shift away from this control -- maybe it's a browser that's locked down to one that's not -- and life is so much easier. I'm just going to move in that direction. Hope it'll catch me. And then you open up that vector.

Chris Tillett: Yeah. Or the other thing is, like you said, I'm always the person who would rather ask for forgiveness than get permission. So the question comes down to, why do we make it so difficult to get permission in corporate environments? Because we have that economic model. It's a top-down model versus an innovative model, where it's like, look, if I need to use a different web browser, how come I can't account for that? You mean to tell me that one web browser is going to bring a threat vector and ruin the whole organization? That's how we designed our program? That tells me right there that we're too simplistic in our thinking.

David Moulton: Incredibly brittle.

Chris Tillett: Yes.

David Moulton: Incredibly brittle.

Chris Tillett: And it's also very short sighted.

David Moulton: So today we were going to get into this topic of MFA versus conditional MFA. And before we go into the conversation, I want you to take a second and talk to our audience. What is conditional MFA? Let's just get that baseline.

Chris Tillett: So based on conditions, I'm going to now say, hey, listen, David, I appreciate the fact that you want to log remotely into this machine over here. You're going to need to prove that it's you, so that it's not some attacker that's using your account or your credentials to be able to do that. So it's allowing you to say, hey, look, I'm going to press that button on my phone. Yes, that is me. And I'm trying to log in on that machine. If you take a systems administrator and now all of a sudden you start saying, I need to do that for 1,500 machines -- think about the water strategy. They're going to find another way around that control.
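[Editor's note: a minimal sketch of the conditional MFA decision Chris describes here, prompting for a second factor only when the login context looks risky. The types, field names, and thresholds are hypothetical illustrations, not any specific vendor's API.]

```python
from dataclasses import dataclass

@dataclass
class LoginContext:
    user: str
    is_remote: bool          # logging in from outside the network
    known_device: bool       # device seen before for this user
    recent_mfa_passed: bool  # user already verified recently

def requires_mfa(ctx: LoginContext) -> bool:
    """Prompt for MFA only under risky conditions, not on every single login."""
    if not ctx.is_remote and ctx.known_device:
        return False  # local login from a familiar device: stay out of the way
    if ctx.recent_mfa_passed:
        return False  # verified moments ago: avoid prompt fatigue
    return True       # remote or unfamiliar: press the button on your phone

# A remote login from an unknown device triggers a prompt.
print(requires_mfa(LoginContext("dmoulton", is_remote=True,
                                known_device=False, recent_mfa_passed=False)))  # True
```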

David Moulton: Okay. So going back to that context, or understanding what a human, a user, an employee is going to do, and then looking at their behaviors, looking at the network that they're coming in on, looking at the data or the applications that they want to access, and putting those things together and saying, sometimes we're going to add a little more friction, sometimes we're going to back that off and make it a little easier for you -- back to actually paying attention to what's going on -- and then being able to scale that out, I assume, at speed, so that you can still use MFA and get some of its benefits, but you can kind of dial up, dial down the amount of security. Did I play that back right?

Chris Tillett: You played it back right. And the thing is, it's also based on the fact that you have to make sure that you're still enabling people to do their jobs. So if you turn identity into old-school firewall rules, if-then-else -- if you turn it into the way we used to run firewalls, everything from port 80 goes to this web server, and so everything for these identities, they have to do this -- people will find their ways around and over and through, because it's a rigid policy that, like you said, is brittle in its nature. It's only going to block out the people that it's supposed to enable -- it's not going to block out everybody. The people that can get around it will.

David Moulton: Okay. And I have some ideas on why an organization may choose to use conditional MFA, but I want to hear from you. What are some of the things that you're hearing from customers or that you're talking to them about with, you know, this type of concept of dial it up, dial it down on your MFA?

Chris Tillett: Unfortunately, a lot of them are not wanting to dial it up or dial it down. What they're wanting to do is just apply it like, you know, peanut butter on a sandwich and just hope it works. And so the thing I always ask them is, well, what if you have a systems administrator that you're doing that to, and they're running a script on 1,500 machines? What you're guaranteeing is that they're going to use a shared service account. That's what you're going to guarantee.

David Moulton: So you move the risk around and reduce your security all in one fell swoop.

Chris Tillett: And then the thing is, once they figure out that they can do that, they're going to share that with the rest of their friends, because they have the ability to go in and create those accounts. I know there's a government organization -- I'm not going to say which state it is -- where at one point every single systems administrator had a local account on every single machine.

David Moulton: Ooh.

Chris Tillett: Yeah. That's gone. Thank goodness. However, it's not hard for them to go and rebuild that again if you create friction for them. They were able to get that reined back in, but it shows you how quickly it can go right back out.

David Moulton: We were talking to an oil and gas company recently. And one of the problems they ran into was that the account they were sharing around with their sysadmins was then compromised by an attacker -- the same account. So they had legitimate traffic, legitimate work going on, and an attacker, all on the same set of credentials, just rolling around, and they couldn't figure out internally which was which. They had some lack of visibility as well. So it was a combination of things for this IR team to figure out who's real, who's supposed to be here, what's going on, and how do we shut this down. And it was just this lack of discipline. They were trying to get things done, moving quickly. And like you said, it spread very quickly. It was a year later that they finally figured out how far and wide this had gone.

Chris Tillett: Yeah. And like in the example I gave with those systems and service accounts, you know, that organization, the reason why they made that decision is they were focused on enabling the organization. And they wanted no friction. So they were doing a great job, but they realized that -- that's one of the reasons why they went down the conditional MFA route, because of the fact that they were like, we've got to rein this in. Fortunately, they didn't get breached from it. They were able to rein that back in. But you can see how if they were to increase friction from here on out, it's not hard to go back. It's really not hard to go back. So they have to be very balanced in their approach with conditional MFA.

David Moulton: So Chris, if an organization does implement conditional MFA, can you talk about how that impacts the security posture?

Chris Tillett: I think it's a great product. I think it's a great thing to do, especially on mission-critical systems. Hey, look, if you're connecting in or if you're now risky -- the reason why I want to do conditional MFA is based on your risk. If you're doing a lot of risky transactions, I'm going to conditional MFA you, period. Right? If, however, I'm seeing that you're just doing your job like the rest of your peers are, I'm going to get out of your way. All I want to know is if your behavior is different than what it normally is. And you do that for systems and network administrators in comparison to their peers as well, because systems administrators' behavior can be wildly different. You think about somebody who works at Palo. I'm on the East Coast. I could be helping the team in Tel Aviv do an upgrade, and then not touch those systems for six months. So it's going to look like I'm risky, but when you look at the rest of the team who's been working with that group in Tel Aviv, it's like, no, he's just doing his job. So do we conditional MFA me at that point? No, because based on two factors -- this is my job, and my peers are already doing this work -- I'm not going to bother Chris. However, if all of a sudden, I'm doing XYZ and then PQR and then ABC, and then they look at the rest of my team, and it goes, none of y'all been doing that, conditional MFA that dude. We need to go address that.
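[Editor's note: a minimal sketch of the two checks Chris describes -- is this action normal for the user's own history, and is it normal for their peer group? -- stepping up to MFA only when both say "unusual". The data structures and action labels are hypothetical, for illustration only.]

```python
def is_unusual(action: str, history: set) -> bool:
    """An action is unusual if it has never been seen in the given history."""
    return action not in history

def should_step_up(action: str, user_history: set, peer_history: set) -> bool:
    """Step up to conditional MFA only if the action is new for the user AND their peers."""
    return is_unusual(action, user_history) and is_unusual(action, peer_history)

# Chris's Tel Aviv example: he hasn't touched those systems in six months,
# but his peers do this work routinely, so no prompt.
user_history = {"patch:us-east"}
peer_history = {"patch:us-east", "upgrade:tel-aviv"}
print(should_step_up("upgrade:tel-aviv", user_history, peer_history))  # False

# An action no one on the team has performed before gets the prompt.
print(should_step_up("export:hr-database", user_history, peer_history))  # True
```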

David Moulton: Yeah. Absolutely. So you're looking at that behavioral side of things, and then you're looking at the risk of what's the access and comparing that, basically, to your peers, and saying, like, you know, here's a normal band. And you're outside of that band. So we got to -- you know, we're going to hit you with a conditional MFA hammer, verify, and then -- are organizations enabled to also go back and say, okay, this user actually does have legitimate use, and part of the conditional is what's going on there, but then we're going to change what our rule sets are to start to account for some of these wild variants or these new things that are going on? It seems like that could be tricky.

Chris Tillett: Unfortunately, no. I have yet to see that on the market just yet. So what you're looking at is -- the real Holy Grail of it is taking conditional MFA and combining it with UEBA, user and entity behavior analytics, and taking advantage of both technologies at scale to be able to enable this in an appropriate manner. [ Music ]

David Moulton: So let's go back to a part of our conversation where we were talking about the thing that I want to do, and then security gets in the way. So the user experience. And then the thing that security teams want to do, which is reduce risk and shut down those threat vectors before they become a problem, how does an organization strike a balance between those two, it seems to me, opposing goals?

Chris Tillett: They are opposing. The thing is, it has to be balanced. So essentially, what you have to be able to do is understand where your risk is. You understand your assets. You understand what your high-value targets are and say, listen, everything that we're going to do to these high-value targets -- the organization will suffer too greatly if we don't have these controls in place. But where you don't need controls like that, have the ability to do those types of analytics, and then at that point, only bother the couple users that are risky. So to me, I want to conditional MFA the two or three sysadmins that are maybe a little bit rogue, not the other 1,500 that are doing their job. You have to really balance that out, but that requires you to actually think: what am I actually trying to do? I had an organization that asked if we could look at their passwords. They had no password policy ever. So I'm like, well, why do you need to know what your passwords are? You already know they're bad.

David Moulton: Right. They're as easy as possible. I assume they're "password."

Chris Tillett: Like, you can just make that assumption. So why do you need some cumbersome piece of tech when you can just use the human nature and go, well, how about we start by implementing a policy and some controls? And the reason why the behavioral analysis works is that it allows us to see which controls aren't working. So, for example, I was doing a demonstration for a customer. And I was showing them how a laptop was able to VPN into the network. And it didn't follow any of their naming conventions. It was using an operating system and web browser they had never seen before. And I said, so this shows you that, basically, this VPN control did not have Active Directory security groups in it. And I'm not kidding you, a Director of Operations for Security said, you can do that?

David Moulton: Ooh.

Chris Tillett: I stopped my demo at that moment, asked what VPN provider they had, and grabbed the code and said here.
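[Editor's note: a minimal sketch of the kind of anomaly check Chris demonstrated -- flagging a VPN login from a device whose attributes match nothing previously seen in the fleet. The attribute values and conventions are hypothetical.]

```python
# Fleet attributes learned from previously observed devices (illustrative).
known_hostname_prefixes = {"corp-", "lab-"}
known_operating_systems = {"Windows 11", "macOS 14"}

def vpn_login_is_anomalous(hostname: str, os_name: str) -> bool:
    """Flag VPN logins from devices matching no known fleet conventions."""
    prefix_ok = any(hostname.startswith(p) for p in known_hostname_prefixes)
    return not prefix_ok or os_name not in known_operating_systems

print(vpn_login_is_anomalous("corp-lt-0042", "Windows 11"))  # False: looks like the fleet
print(vpn_login_is_anomalous("kali-box", "Kali Linux"))      # True: never-before-seen device
```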

David Moulton: Right. Let's just fix this immediately.

Chris Tillett: But see, that's what the behavioral analytics did: it said, hey, wait a minute. This is odd. So it actually helped them to see that they were missing a control. That's why you have to have both. You can't just turn on rigid policy like that, because then people will find their way around it. That's why you balance the two -- and that's a constant tuning, right? Like, look at the microphones and the video setup. We didn't just walk in here and have everything perfectly set up. We had to modify the camera. You had to move the cable and all this other stuff to get things in tune.

David Moulton: That's right.

Chris Tillett: If I get up and walk out, you're going to need to do it right again.

David Moulton: Of course.

Chris Tillett: Yeah.

David Moulton: So let's get into some of the things that you're seeing with MFA or conditional MFA where employees are gaming the system. I think those are interesting conversations and things that you've seen out there that our audience will be interested in.

Chris Tillett: Well, I've seen the service accounts. I've actually seen that being used. And I've seen local accounts being used to get around the actual issue. So there are many cases where service accounts have been in use for years. If there's no behavioral monitoring of your service accounts, there are all kinds of service accounts that have full admin access in many cases, and people just hop on and ride on that channel. And then they're able to do their job. So I've actually seen it where you had that, and then they've actually shared it. We were doing behavioral monitoring of systems administrators, and we found six who were all sharing that same account, based on their peer grouping.
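[Editor's note: a minimal sketch of how behavioral monitoring can surface a shared service account, as Chris describes -- one account authenticating interactively from many distinct hosts is a strong signal it is being passed around. The log format, account names, and threshold are hypothetical.]

```python
from collections import defaultdict

# (account, source_host) pairs as they might appear in authentication logs.
auth_events = [
    ("svc_deploy", "wkstn-ada"), ("svc_deploy", "wkstn-bob"),
    ("svc_deploy", "wkstn-cam"), ("svc_deploy", "wkstn-dee"),
    ("svc_deploy", "wkstn-eli"), ("svc_deploy", "wkstn-fay"),
    ("svc_backup", "backup-01"),
]

def flag_shared_accounts(events, threshold=3):
    """Flag accounts seen logging in from more distinct hosts than the threshold."""
    sources = defaultdict(set)
    for account, host in events:
        sources[account].add(host)
    return {acct: hosts for acct, hosts in sources.items() if len(hosts) > threshold}

# svc_deploy appears on six workstations, like the six admins Chris found.
print(flag_shared_accounts(auth_events))
```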

David Moulton: So back up here for a second. This isn't a mid-level employee doing a low-risk job. This is somebody who knows security, or should, and is going in with sort of a "rules for thee but not for me" type of mentality. All right. So, you know, look at yourself first. Maybe that's one of those spaces. And apply some level of rigor to your analysis.

Chris Tillett: But in their defense, I'll tell you what they did. What ended up happening is that they actually needed to run a script, and they needed to do something to keep the systems up and running. So with the conditional MFA and things like that, the script was getting stopped. So they needed to do something to act quickly to keep the organization up and running. So they felt the fastest way to accomplish it was this. So in their defense, I understand why they did it.

David Moulton: So back to context, back to understanding the human trying to get something done.

Chris Tillett: They're trying to get something done. So yes, you know, was it the wrong way to do it? Absolutely. However, it got done. The organization kept up and running. And what was great is we were able to identify that service account and get it shut down. So at the end of the day, it all worked out. But if you're not continually behavioral monitoring in that way, that stuff will start happening, and then they won't get shut down. And then now you're on the hunt for the next service accounts, you know?

David Moulton: So Chris, I've got two more questions for you. First one, what can organizations do to actually foster a culture of security and embrace these bits of friction in their lives? How do you put the value back into it so you change the mindset, so that people are secure and not looking for those workarounds to policy or controls or intent?

Chris Tillett: When I go way back to when we did not have cybersecurity and people were being arrested at Black Hat, you know, 20-some-odd years ago, security and IT were together. And I think we've kind of made the mistake of saying that the fox can't watch the hen house, and they have to be separated. But that's also created friction organizationally for most organizations. When you look at everything that's there, from cloud environments all the way down to on-prem environments, this idea that we're going to segment these teams and keep them separated, and then they've got their rules of engagement -- we're managing it like a financial organization, like you brought out, like that book said. In reality, these are people that all have to keep these systems up and running. So to me, if you're not communicating with those teams as a security organization, and you're just going in and applying the security solution that you know works, because that's what you did at your last job, that's how you get in those environments. When I go back into operations, if it's not a cloud environment, I'll sometimes go into the data center and actually trace cables. I want to see the different machines that are out there. And sometimes you start talking to the people -- you talk to help desk admins and help desk analysts and things like that. You learn a whole lot, and you realize, okay, I can't put that policy in place here yet. It would destroy this organization if I put that policy in. It sounds good. It worked over there. But maybe that was a military organization, and they're more understanding of rigid type things. This is a creative professional organization. You've got to be a lot more chill with the way you go in and approach it. And what you want is for the users to actually care. And when the users actually care, and they know that you care about them, that's actually how you get things secure.

David Moulton: So you're telling me a story. I spent my first 20 years doing design. And often, we were segmented away from engineering. We were segmented away from business. The business would say, we want something. Engineering would build it. And then we were supposed to color in the lines and make it look nice. And then it started to shift, and we started to have a team that had a business analyst, a designer. And I'll say that design is how it works, not how it looks. But, you know, fight me on that. And then the last piece was engineering. And sometimes we would conceive of an amazing thing. You couldn't build it. Right? Smell-O-Vision is not a deal. Business loved it, and I thought it was a cool idea. But, you know, engineers said no. And then the same thing was going on where we were looking at, if we designed early, then the folks that use those tools, those mobile applications, enterprise pieces of software, it actually made sense to them, because we were bringing to bear the different lenses and blending them rather than making it a production line of, like, you put on the wheels, and we'll put the windshield on and, you know, somebody gets it at the end and goes, this is not a bathtub. And, you know, like, sorry, that was the brief. So I like that idea of moving security left, embedding it, putting security into the culture. I think that's a really good point.

Chris Tillett: And I actually applaud the fact that the SEC has now gotten involved in that. So at least for large organizations, they're now saying, listen, if there's something that materially impacts the organization, now they have to report these things. So now the board is more aware of security. So I think you're going to start seeing this meld together more. And the CEOs are going to say, look, enough is enough. I need you guys to work together. I need availability with security. I think it's going to be forced upon us. It was really interesting. It was like in the early 2000s that we started to say, no, no, they have to be segmented away. And they've got to be in this little room, the security operations center, with these dumb charts on the wall. And I remember, like, that was not the approach I thought was a good idea.

David Moulton: It's weird.

Chris Tillett: To me, like when I was dealing with the Unix admins, and I would say, well, you know, I want to do this with the firewall. And they're like, yeah, but if you do that, it hurts me this way. And if you do it that way, it doesn't hurt me. And I'm like, well, all right. Well, we just won't do that then. And I would look at the risk, and I'd draw it out on a whiteboard and go, yeah, he's right. It's not that big of a deal. It's like sometimes they're telling you the better way to do it. You just got to listen. I don't have all the answers, you know.

David Moulton: So Chris, I'd love to ask my final question. What's the most important thing that somebody who's listening to this conversation should take away from it?

Chris Tillett: Always think outcome. What are you trying to do? And do you have the people in the organization that you want to be able to do that with? Are they capable of even configuring these things? If not, how do you want to accomplish what you're trying to do? I've asked people, well, what are you trying to do? And they're giving me a laundry list of features. And I'm like, well, how would you do that? What is that feature going to do for you? And then they can't even answer it. So why are you going down this road if you don't even know why? It's like, I'm going to buy a minivan. Well, do you have kids? No, but I'm going to go buy a minivan. Well, for what use case? Sometimes we plan for the most insane use case, when, in reality, we probably could get away with, you know, a two-door car and be fine. Or we could just Uber, or ride the train. So it's like, what is the use case? Once you know your use case, how do you want to apply it? At the end of the day, the only way I've been able to have any measure of success in this industry is that I just go, well, what are we trying to do?

David Moulton: Focus on those outcomes.

Chris Tillett: And I start from outcome, and I whiteboard it all the way back to the wire. So outcome to wires. And once you do that, you know where you're at.

David Moulton: Love it. Chris, thanks for coming on "Threat Vector" today. As always, I enjoy these conversations. It's been good to see you.

Chris Tillett: Great to see you. Enjoyed it. Thank you. [ Music ]

David Moulton: And that wraps up today's episode of "Threat Vector." Chris's insights into the importance of balancing security with operational efficiency and understanding human behavior were eye-opening. The stories he shared, from conditional MFA to the realities of service accounts, highlight just how critical it is for organizations to focus not only on technology, but also on the people and processes behind it and the context that they're operating in. For those of you listening, I hope that this conversation has sparked some ideas on how you can approach security in your own organizations, whether it's fostering a culture that embraces security or simply rethinking how you apply security controls to reduce friction without compromising safety. Remember, security doesn't have to suck. It just requires a commitment to understanding both the risks and the human element involved. If you like what you heard today, please subscribe wherever you listen and leave us a review on Apple Podcasts or Spotify. Your reviews and feedback really do help us understand what you want to hear about. And I'd love to hear from you. You can email me directly at threatvector@paloaltonetworks.com. I want to thank our executive producer, Mike Heller, and our content and production teams, which include Kenny Miller, Joe Benecourt, and Virginia Tran. Elliot Peltzman edits the show and mixes the audio. We'll be back next week. Until then, stay secure. Stay vigilant. Goodbye for now.