Threat Vector 8.1.24
Ep 28 | 8.1.24

The Future of Cybersecurity with Nir Zuk

Transcript

Nir Zuk: Machines will do what humans do; they're just going to do it much faster and in a much more scalable way. So that's the idea behind using AI in the SOC: to detect attacks and stop them. [ Music ]

David Moulton: Welcome to Threat Vector, the Palo Alto Networks podcast where we discuss pressing cybersecurity threats and resilience and uncover insights into the latest industry trends. I'm your host, David Moulton, director of Thought Leadership. Today I sit down for a conversation with Nir Zuk, founder and CTO of Palo Alto Networks. Here is our conversation. Nir Zuk, welcome to Threat Vector.

Nir Zuk: Thank you for having me here.

David Moulton: It's a pleasure and something I've been excited about. So today we're going to talk about the future of cybersecurity. And I've got a few questions and I figure I will ask those and let you just get into it. Sound good?

Nir Zuk: Let's go to the first one and then we'll see.

David Moulton: Yeah. So in your opinion, what is the biggest single cybersecurity challenge that organizations are going to face in the next, say, three to five years?

Nir Zuk: I think the biggest challenge they're going to face -- and by the way, are facing now -- is that trying to keep our adversaries out is just not working. It's too easy to get into the organization. While the focus of everyone, for as long as I can remember, has been trying to keep them out, the focus has to shift. It's just not working anymore. You can't keep them out.

David Moulton: So how can they start to go about addressing those problems?

Nir Zuk: I think there needs to be more and more investment in the assumption that they're in, that someone is already in, that we've been breached or at least there is some kind of a foothold that an adversary has within our organization. So we have to assume that's the case and now we need to go and find them and stop them.

David Moulton: So this is a little bit of a mental shift: not keeping somebody out, but assuming they're already in, and reducing the amount of damage and/or speeding up the detection time, right?

Nir Zuk: Correct. So I'm not saying stop investing in keeping them out -- don't make their lives too easy -- but we should shift more and more of our resources, both on the budget side and the people side, to this, to solutions and processes that assume that they're in and now we need to go and find them.

David Moulton: Okay. So looking a little further ahead, how do you envision the cybersecurity landscape evolving over the next, say, five or 10 years? And what are the threats that organizations should start preparing for now?

Nir Zuk: So I don't like to talk about specific threats because nobody knows.

David Moulton: Okay.

Nir Zuk: And I think that what we've seen over the years is that anyone that's tried to predict the kind of threats that there are going to be has been wrong. So we're not going to be the next one that's wrong. On the other hand, I think that we should stop thinking about trying to find specific threats, like a specific virus or a specific malware or a specific exploit or a specific whatever, and focus more on what I said before. Let's assume they're in; find them and stop them. And you do that based on their behavior. So you look for behaviors that don't make any sense within your organization and you stop those. And those behaviors are irrespective of specific threats. It doesn't matter what was used to get in, or what was used to move laterally, or what was used to exfil and so on. It's really about the behavior that you see in the organization within your infrastructure, within your network and your endpoints and your applications and cloud deployments and so on. And once you shift your mindset to, okay, they're in, I'm going to find them, and I'm going to find them based on their behavior, then it doesn't matter how they got in, it doesn't matter how they move laterally, it doesn't matter how they exfil things out.

David Moulton: So it's moving up a layer where you're not thinking about a specific TTP but more of a category of behaviors? And when you say that, you mean human behavior but also machine behaviors or patterns that indicate that something is amiss?

Nir Zuk: Everything, entity behavior. So behaviors of users, of machines, of applications. For example, we detected the SolarWinds attack based on the behavior of an application. So it's the behavior of everything that you have. So you collect as much data as you can from your network, from your endpoints, from your cloud deployments, from applications, from servers, from identity and access management systems and so on. And you understand what makes sense and what doesn't make sense. You look for the things that don't make sense.
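To make that idea concrete, here is a minimal sketch of entity behavior analytics in Python using scikit-learn's IsolationForest. The entities, feature names, and numbers are illustrative assumptions for this transcript, not a description of any vendor's actual pipeline; the point is simply to learn a baseline from aggregated telemetry and surface the entities whose behavior "doesn't make sense."

```python
# Hedged sketch: learn "normal" per-entity behavior from aggregated
# telemetry, then flag outliers. All names and numbers are invented.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical per-entity features rolled up from network, endpoint,
# cloud, and identity telemetry over a baseline window.
baseline = pd.DataFrame({
    "logins_per_day":         [8, 12, 9, 300, 10],
    "bytes_out_mb":           [40, 55, 38, 9000, 47],
    "distinct_hosts_reached": [3, 4, 3, 85, 2],
}, index=["alice", "bob", "carol", "svc-updater", "dave"])

# Fit on the baseline, then score each entity; -1 marks behavior the
# model considers anomalous for this organization.
model = IsolationForest(contamination=0.2, random_state=0).fit(baseline)
baseline["anomaly"] = model.predict(baseline)

# Surfaces svc-updater, whose behavior "doesn't make sense" here.
print(baseline[baseline["anomaly"] == -1])
```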

David Moulton: So, Nir, in that kind of a model, do you move beyond security tools being the place to collect data and start to look to other enterprise tools -- say, a Workday, or logins to sales applications -- and all of those become part of that feed of data that's part of that behavioral analysis?

Nir Zuk: Yes. And there's an easy way to do that without actually collecting the data from these applications that you mentioned. I think that as we move more and more toward using enterprise browsers -- meaning we move more and more of the security into the browser, with a dedicated browser such as the Prisma Access Browser, and there are other solutions on the market that do that -- you can collect a lot of information from the browser itself. But today, you would need to go to the different applications that you mentioned and collect it.

David Moulton: Okay.

Nir Zuk: And once you've collected that information -- because, remember, in the browser, you see the actual behavior of the application -- once you collect that very deep information from the browser, you can start using information related to your CRM application and your ERP application and your AI applications and so on as part of the analysis for understanding what's good behavior and what's bad behavior within the specifics of your organization.
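One way to picture what collecting from the browser buys you: because the browser sees every action a user takes in a SaaS application, you can derive per-application behavioral features without integrating with each application's own API. The event schema below is a hypothetical illustration, not the Prisma Access Browser's actual telemetry format.

```python
# Hedged sketch: normalize hypothetical enterprise-browser events into
# per-user, per-app features for behavioral analysis.
import pandas as pd

# Invented event stream as an enterprise browser might report it.
events = pd.DataFrame([
    {"user": "alice", "app": "crm", "action": "export", "records": 40},
    {"user": "alice", "app": "crm", "action": "view",   "records": 1},
    {"user": "bob",   "app": "erp", "action": "export", "records": 12000},
    {"user": "bob",   "app": "crm", "action": "view",   "records": 3},
])

# Aggregate exports per user and app: unusual data pulls from the CRM
# or ERP become visible without ever calling those applications' APIs.
exports = (events[events["action"] == "export"]
           .groupby(["user", "app"])["records"]
           .sum()
           .rename("records_exported"))
print(exports)  # bob's 12,000-record ERP export stands out
```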

David Moulton: That's a big shift.

Nir Zuk: Yes.

David Moulton: But I think it's one that, given the failings that we're seeing today, starts to make sense for companies to move in that direction.

Nir Zuk: Yeah. And we've actually started seeing the beginning of it working. If you look at what we do with our XSIAM solution and its ability to collect the data, analyze the data, and bring down the mean time to detect and mean time to respond to a manageable number of around a minute, then you understand that it's possible to do that. And I'm sure there will be other solutions on the market competing against our products doing that as well. It's proving itself to work, and it's, I believe, the only thing that works.

David Moulton: Yeah, it seems like speed is the ultimate feature, the ultimate luxury, inside of security, and not having to go to a lot of different places to get the data; having that browser be one of those things that speeds that delivery up may be a critical shift in enterprise behaviors.

Nir Zuk: Yeah, I think the critical shift is the understanding that a lot of data needs to be collected, and the understanding that that data will be looked at by machines, not by humans. Because humans cannot look even at the amount of data that's being collected today into the SIEM, let alone the amount of data that's required to do what I'm talking about. The scale that's required is simply not something that's humanly possible.

David Moulton: So you've seen a huge number of changes in the cybersecurity industry -- next-generation firewalls, XDR. How do you see AI falling in with those?

Nir Zuk: I think that AI is something that's required to do what I'm talking about. Meaning, look, today, at the SOC, at the security operations center, you usually have hunters who look at data and hunt for attacks. They look at data that's collected into the SIEM -- and it's not really data, we're talking about logs mostly -- and they're not doing a very good job. Every now and then, they find an attack. It takes them forever. Meaning, if you look at the mean time to detect, by the time the attack is found, it's very high. It can be measured in days and weeks. It takes them forever to respond to the attacks. Nevertheless, we don't have a better way of doing it, in the sense that machines are not going to do something that humans cannot do. You can't expect machines to detect attacks by doing things in different ways than humans do. Machines will do what humans do; they're just going to do it much faster and in a much more scalable way. So that's the idea behind using AI in the SOC: to detect attacks and stop them. It's taking what the humans are doing and making it machine learning based.

David Moulton: Massively speed it up.

Nir Zuk: Massively speed it up. Massively make it more scalable. Meaning, you're able to look at more data and process that data much, much quicker.

David Moulton: So if we move to a moment where AI has this ability to speed up the things that we're good at but slow, that AI is fully integrated in the security operations, what would the human tasks be? Is there oversight? Or are there new jobs? Talk to me about that.

Nir Zuk: I think the role of the humans is to do what the machines cannot do. I don't think machines can replace people, certainly not anytime soon. The autonomous car advocates have been talking about how autonomous cars are going to be out there next year, right? There is this famous Elon Musk video -- I don't know if you saw it on YouTube -- that someone cut together. Every year, he's said that next year there will be autonomous driving. And of course, it's not there, and it's going to be a while until we see it. And the reason for that is that machines are still not as good as people. And I think the day when they are, if that day ever comes, is very, very far away. And that's good news, really good news, for the people in the SOC. The people in the SOC -- the analysts, the engineers, the hunters -- all need to know that with the use of AI, or machine learning, as engineers call it in this case, because it's machine learning based AI, they are going to be left with the things that machines cannot do. Which is the more interesting, high-end work. [ Music ]

David Moulton: So years ago, I was listening to Don Norman talk about autonomy. And he asked a simple question: when you're sitting at an intersection and you wave somebody through who wants to walk across, or you're standing at an intersection and you wave the car through, how do you get to the point where an autonomous car can wave to a human one way or the other? And there was no good answer for that. So I think there's always going to be a moment where a human is able to do a thing that a machine just doesn't have the capacity for. And we expect that type of interaction today. So I think that's a positive: there's always going to be a place for the gray matter leading the machines, and then the machines scaling and speeding things along. Do you worry, though, with an AI-driven, scaled environment, that we might become over-reliant on artificial intelligence?

Nir Zuk: No, no. I think that if you build the processes right -- both we as a vendor for our customers, and the customers themselves -- and you make sure that humans are a part of the process, then I'm not worried about it. I think the bigger challenge we have than worrying about relying on AI is that it's very difficult to understand why AI is doing what it's doing.

David Moulton: Talk to me a little bit more about that.

Nir Zuk: So when AI makes a decision that something is bad, usually that decision is based on millions, billions, sometimes even more data points. So for a human to go in, look at those billions of data points, and say, oh, I understand now why the AI made the decision that it made, is very, very difficult.

David Moulton: So a human can't disentangle a billion data points coming in, where a similar-looking set was fine but this one isn't. That becomes a bit of a mystery or a black box.

Nir Zuk: Correct. And that means that humans need to start relying on AI without understanding why AI did what it did. And that's tough. It's tough for humans in general to do, especially security-conscious humans such as those you find in the security operations center and in infosec generally. And also, it can lead to trouble if the AI is wrong.

David Moulton: Of course.

Nir Zuk: So certainly there needs to be more work done around being able to explain to humans why AI did what it did. And I think we're not there yet. And also, we need to do more work at making people comfortable with AI.

David Moulton: Is this discomfort with AI giving you a decision that you don't understand more acute in security? I would see it being one where, no matter where that decision was made -- AI in financial markets or in medicine -- I'd want to understand the decision. How do we move to a point where we have acceptance and a culture that has trust?

Nir Zuk: So like I said, we need to educate people, convince them that they can trust AI. We need to show them that they can trust AI. And we need to do a better job at having AI explain why it did what it did.

David Moulton: From an offensive side, we need to protect against AI-powered attacks. What are your recommendations there?

Nir Zuk: So I haven't seen that many AI-powered attacks. I think a lot of us talk about it. In reality, I think on the attack side, AI is currently limited mostly to phishing. So AI is really good at doing at scale what humans do when they want to phish, which is to understand who your victim is, what their social connections are, what it is that they're probably going to click on, and then generating an email or an instant message or whatever that they're going to click on, right? So when humans want to phish, and if it's a really high-end, targeted phishing attack, they're going to do the research and find out: okay, I want to target an IT employee. Here are the IT employees. Okay, I found an IT employee who likes fishing and he's friends with this person, so I'm going to send an email that appears to come from that person, with something about the recent fishing gear. I don't fish, so I don't understand anything about that.

David Moulton: I like your example of phishing is about fishing. It's very meta.

Nir Zuk: Exactly. So humans do that, right?

David Moulton: Yeah.

Nir Zuk: AI can do it at scale, and we've seen AI doing it at scale: understanding who your employees are, which ones need to be phished, how to phish them, what needs to be in the email, who the email needs to appear to be coming from. I haven't seen much beyond that. Yes, you can go to one of the LLMs out there, like ChatGPT, and ask it to generate a piece of malware. That malware is going to be useless. You can't use it today. You can ask it to generate different things -- it's not useful. Humans are still doing a much, much better job than what AI can do. And I think that's going to be the case for the next 20 years.

David Moulton: Okay.

Nir Zuk: Okay, so I'm less worried about the use of AI on the attacker side, other than phishing. I think there's automation on the attacker side, and other things which are not necessarily AI -- maybe they're positioned as AI, but they're not necessarily AI. And there's more and more sophistication on the attack side at learning your organization, which might use AI or not, and at understanding how to navigate inside the organization once they've gotten in, and that requires us to be quicker on the defense side. And we talked about it, right? We need to switch to AI on the defense side.

David Moulton: What about threats targeting AI-driven security systems, or the data lakes that are needed for those systems? Is that something that you worry about?

Nir Zuk: Not the AI systems that I develop and know about. Because of the way we develop our AI defenses, we don't use any external data source. Meaning, we don't use an LLM for AI defense. Our defenses, when it comes to AI, are based on machine learning -- supervised, unsupervised, deep learning, and different types of machine learning. We curate and we scrub all the data that we use to teach the AI how to distinguish good from bad, or how to understand what's normal and what's not normal. So we have full control of the data. And of course the data is guarded and sits in a place that cannot be modified and so on. So the AI that we use -- and I hope the AI that our competitors and peers in the cybersecurity market use -- is driven by data that's curated by the vendor, not by just general data found on the Internet. In that, it is very different from what we see in terms of the vulnerabilities related to publicly available AI systems, where the data sources are, first, not very clear and, second, can be influenced, right? If you have an LLM that takes its data from the Internet, like ChatGPT or one of its competitors, and you can influence the data sources, then you can convince the LLM of anything.

David Moulton: Right.

Nir Zuk: When you control the data sources, and you don't use an LLM but machine learning, and those data sources cannot be influenced, I'm not worried about someone from the outside being able to influence the way the AI works. [ Music ]
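One simple way to approximate the curated, guarded data discipline Nir describes is to pin the training set to a cryptographic digest recorded at curation time and refuse to train if the bytes have changed. The sketch below illustrates that idea with invented data; it is not Palo Alto Networks' actual pipeline.

```python
# Hedged sketch: train a supervised detector only from a curated,
# integrity-checked dataset. Data, labels, and model are illustrative.
import hashlib
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Curation time: freeze the labeled examples and record their digest.
features = np.array([[0.1, 3.0], [0.2, 2.0], [9.5, 80.0], [8.7, 75.0]])
labels = np.array([0, 0, 1, 1])  # 0 = benign behavior, 1 = malicious
pinned = hashlib.sha256(features.tobytes() + labels.tobytes()).hexdigest()

def train_if_untampered(feats, labs, digest):
    """Refuse to learn from data that no longer matches the pinned digest."""
    current = hashlib.sha256(feats.tobytes() + labs.tobytes()).hexdigest()
    if current != digest:
        raise ValueError("training data changed since curation; not training")
    return RandomForestClassifier(n_estimators=50, random_state=0).fit(feats, labs)

model = train_if_untampered(features, labels, pinned)
print(model.predict([[9.0, 70.0]]))  # -> [1], flagged as malicious
```

Because the model never consumes uncurated external sources, an outsider has no channel for the data poisoning Nir describes with Internet-fed LLMs.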

David Moulton: So it sounds like it's security hygiene and an intentional choice to build your security apparatus, your security AI, and its data sets in a protected and mature way, versus one that allows for Reddit to be consumed?

Nir Zuk: Exactly.

David Moulton: And you get wild and crazy answers and/or you could bias the data set so that it believes just about anything?

Nir Zuk: Yeah. I would recommend to all vendors to make sure that they don't use any external data sources -- or, if they do, that they scrub and check each and every data point within those data sources --

David Moulton: Yeah, that makes sense.

Nir Zuk: In their AI defenses. It's an opening for trouble.

David Moulton: Yeah. As SOCs become more automated, what are the new skills that cybersecurity professionals need to be developing to remain relevant and effective?

Nir Zuk: They need to do what machines cannot do to be effective. Meaning, they need to be able to investigate attacks that machines are having a hard time investigating. They need to hunt for attacks that machines cannot detect. If I were a SOC analyst or SOC engineer, a hunter, I would learn machine learning -- AI, I mean, specifically the machine learning types of AI. I would learn how to take all the data that's being collected by my vendor into the data lake that the vendor is using -- and hopefully is giving its customers access to -- and how to augment that data with my own data. So if I'm an airline, I would add ticketing data to the system. If I have manufacturing facilities, I would take some sensory information that I collect from those and add it to the data. And then I'd learn how to do machine learning on the data that I added to augment the data lake, together with the data that my vendor collected -- cloud data, applications data, and so on -- to detect attacks. Because I can't expect the vendor that I'm buying the machine learning from to build machine learning models that understand my data as a customer.
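To sketch the augmentation Nir recommends: join your own business data (ticketing, sensor readings) onto the vendor's per-entity telemetry, then run machine learning over the combined view. The column names, the join key, and the model choice below are all assumptions made for illustration.

```python
# Hedged sketch: augment vendor telemetry with customer-specific data,
# then cluster over the combined features to find hunting candidates.
import pandas as pd
from sklearn.cluster import KMeans

# Vendor-collected telemetry from the data lake, keyed by user.
vendor = pd.DataFrame({
    "user": ["u1", "u2", "u3"],
    "failed_logins": [1, 0, 14],
    "bytes_out_mb": [20, 35, 480],
})

# Data only the customer has, e.g. an airline's ticketing activity.
internal = pd.DataFrame({
    "user": ["u1", "u2", "u3"],
    "refunds_issued": [2, 1, 57],
})

# Combine both views of each entity, then model over the union; the
# vendor's generic models could never have seen the ticketing column.
combined = vendor.merge(internal, on="user")
X = combined.drop(columns="user")
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
combined["cluster"] = km.labels_
print(combined)  # u3 lands in its own cluster, worth a hunt
```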

David Moulton: So the vendor gets you 70% of the way, you need to add the next 30%, that context, and be very, very good at what your machine, what your AI, can't do. And that combination -- the actual intelligence and the artificial intelligence, the gray matter and the computer -- ends up being a really great partnership.

Nir Zuk: Yeah. So if you're a level one, maybe even level two SOC analyst, you'd better go and learn how to do those things.

David Moulton: Yeah. Nir, I've got two more questions for you. First, with the rapid pace of technological change, how can organizations ensure that their security strategies remain adaptable and future-proof?

Nir Zuk: So, first, I would say that switching to a mindset where you assume that you've been hacked and now you're going to find and stop the adversaries is much more independent of the specific things that you do as an organization. Meaning, if you have a system that can learn your organization and then find out what's -- and I'm going to put "normal" in quotes, because "normal" is a tough word -- but what's "normal" in the behavior of users, applications, and systems within the organization, and find the abnormal, then your security doesn't have to be specific to what you do. Of course, you still need to augment with your own data and then let machine learning learn what's normal and what's not normal. So that's the first thing. The second thing that I would recommend -- and that's very self-serving coming from Palo Alto Networks -- is to simplify your security infrastructure. I think that we've come back to an era we were in 30 years ago and somehow diverted from: being able to work with a very small number of security vendors. Thirty years ago, you had two security vendors, a network security vendor and an endpoint security vendor. Everything was firewalls and antivirus. And then we started diverting from that, and you needed more vendors for this and more vendors for that, mostly because the early-on vendors screwed up, didn't get into these spaces themselves, and let others do it. I think we're back to a point where you can have very, very few, a handful of security vendors to achieve your cybersecurity goals. And I think you're going to do yourself a big favor as an organization by simplifying your security infrastructure with a few vendors rather than complicating it with more and more vendors, okay? And probably the last recommendation -- which I hope is answering your question, I'm not sure, but I think it's an important recommendation -- is to do what we've always recommended, which is to put security into all your planning. Meaning, security should not be an afterthought. When you build something, when you create something -- a new IT system, a new service to your customers, whatever it is -- make sure that security is part of it from the get-go. Don't wait until the end.

David Moulton: Don't tack it on.

Nir Zuk: Don't tack it on; it works less well. With the idea of collecting data and analyzing the data independently of what you do, it's easier to pile security on top of something that's existing, but it's not optimal. It's always better to make sure that, during the planning phase, you at least collect the right data for security.

David Moulton: So, Nir, last question for you. What do you want audience members to remember from our chat today?

Nir Zuk: I want them to remember that they need to start assuming that they have been breached, and then go and find and stop the bad people as quickly as possible. Stop spending all your money, or even half of your money, on keeping them out. They're in.

David Moulton: Remember that.

Nir Zuk: Remember that.

David Moulton: The attackers are already in.

Nir Zuk: They're already in. Go and find them and stop them.

David Moulton: Nir Zuk, thanks for coming on Threat Vector today. It's been a fascinating conversation. Appreciate your time.

Nir Zuk: Thank you for having me.

David Moulton: Thanks for joining today. Stay tuned for more episodes of Threat Vector. If you like what you heard, please subscribe wherever you listen, and leave us a review on Apple Podcasts or Spotify. Your reviews and feedback really do help us understand what you want to hear about. I want to thank our executive producer, Michael Heller. I edit Threat Vector, and Elliott Peltzman mixes the audio. We'll be back next week. Until then, stay secure, stay vigilant. Goodbye for now. [ Music ]