Fueling the Business with Cyber AI & Automation with Kieran Norton
David Moulton: Welcome to Threat Vector, the Palo Alto Networks Podcast, where we discuss pressing cybersecurity threats and resilience and uncover insights into the latest industry trends. I'm your host, David Moulton, director of Thought Leadership. [ Music ] On today's episode, we're joined by Kieran Norton, a principal at Deloitte & Touche, LLP, and the US cyber AI and automation leader for Deloitte Risk and Financial Advisory. With over 25 years of experience, Kieran brings deep expertise in cybersecurity and technology risk management. He leads Deloitte's AI transformation efforts, helping clients enhance their cyber capabilities and adopt AI technologies while managing the associated risks. Kieran has played a pivotal role in evolving security strategies to support digital transformation and drive key business outcomes across various industries. Today, we're going to be discussing automating your SOC with the help of AI platforms. Here's our conversation. [ Music ] Kieran, what exactly is an AI-Native Security Operations Center, and how does that differ from a traditional SOC?
Kieran Norton: Yeah, great starting question, and cut me off if I run on too long. So the traditional SOC, which is the way most companies operate today, has been in existence for a long time. They follow the same model, right, which is largely: gather telemetry, telemetry on endpoints; collect the telemetry; run it through a system to identify unusual activity, which then flags an alert. Send the alert to an analyst. The analyst looks at it, adds context, does their research, determines is this a problem or not a problem, and escalates accordingly. That worked pretty effectively for quite a while. But given the volume of data that's now coming in from a significantly larger attack surface, right, because 20 years ago, when we were thinking about Network Flight Recorder, it was a fairly simple set of data coming in compared to what you have today. If you look at everything coming from cloud, everything coming from all your different security products and tools, you're talking about stitching all that data together, et cetera, and that old model is starting to show some real issues. One of the primary ones is alert fatigue, right, which is just too many alerts coming into the SOC for the SOC to manage. Most companies struggle to keep up with their SLAs on alerts as they come in just because of the volume. It's also a bit of a challenge from a staffing perspective because it can be a burnout job for sure. And once folks get more experience and skills, they'll typically take a job at a higher level someplace else. And we're not responding quickly enough, so the dwell time, I think the most recent report I saw put it at 207 days on average. The SLAs are typically not being met, as I mentioned. So we have a speed issue, a volume issue, and a complexity issue in the sources of data as well as in all the security tools that we've proliferated across the environment trying to solve a problem. So when we talk about an AI-native SOC, we're really changing the model around so that machines are doing the lower-level work and the analysts are doing the higher-level work. The analogy, the comparison we use is: if you think about early on, when we were first building airplanes and flying and so forth, you needed pilots to do everything because it was an unusual circumstance, a lot of different variables and so forth. Over time, as we understood flying and pushed airplanes further, we started automating flight controls, we started putting procedures in place, and we started really making standard processes around what is now the commercial airline industry. So today we have the ability for autopilots to handle much of the flying process, and we still have a pilot there to handle any issues, abnormalities, things that are unusual and require a creative response that you can't really plan for in advance. We need to get to the same place with the SOC. So instead of analysts working on tickets and responding to every single ticket, we need machines, autopilots, to do a lot of the volume for us and take care of those things that are relatively straightforward, that can be automated, that can be addressed by leveraging AI and other tools.
And then the analysts are now, you know, tuning the machine and making sure the machine is operating as effectively as possible to respond to those alerts and automate things out of existence. And they're also handling the more, you know, significant scenarios that require a creative response. That's the model we need to get to.
David Moulton: So I was talking to Nir Zuk back in July about this exact problem, and he said speed and scale, those are going to be the things that AI and machines absolutely outperform any human on, and we need that. But in the areas where we cannot codify it, where we cannot turn it into a repeatable practice, that's where humans have to be the very best. And then they start to break down that set of problems into smaller bits that they can hand over to the machine and continue to move. And I think that's what I'm hearing from you here on this AI-Native Security Operations Center and its capabilities: that partnership between the AI, artificial intelligence, and the AI, actual intelligence. And when you put those together, that's your cookies and cream. And you know, that's a good set of flavors.
Kieran Norton: Yeah, that's right. That's right. And, you know, I think most companies would recognize the fact that they probably do have those challenges. So this is a pretty prominent issue, I think, for a lot of our mutual clients, and it's definitely top of mind. I think for a CISO, SOC transformation is one of those applications of AI that they're not going to regret. They can make that decision today. They can start taking action on it. They can transform the SOC, and that's going to be a no-regret decision, even though the world of AI is changing very quickly and there are new developments every week. It's really, I think, an idea whose time has come, I guess I would say.
David Moulton: Yeah. So the best time to be in an AI-Native SOC is yesterday and/or today.
Kieran Norton: Right. Yeah, yeah, correct. Correct.
David Moulton: So, Kieran, let me get a follow-up on this. Can you provide an example of a specific challenge that an AI-Native SOC can address more effectively than a traditional SOC?
Kieran Norton: Yeah, certainly. So think about cloud security as an example. Misconfiguration is pretty typical, right. A lot of the telemetry to tell you there is a misconfiguration is readily available; it's coming straight from the cloud environment. And a lot of the configuration parameters are relatively binary, right. For example, you probably don't want an open S3 bucket unless it's on a list of exceptions. So in that scenario, you can detect the fact that the bucket has just been created. You can respond by closing it. You can send a ticket to the development team to say, hey, you've opened up an S3 bucket without proper permissions. You need to fix it. If you have a need for it, here's your exception process; follow this process here. And then go on with your day, right. So you can automate that entire process, rather than it going to an analyst who has to look up the bucket, find who owns it, try and figure out whether or not it should be open, and then go ahead and close it. Or if they don't know, maybe they file a ticket for an investigation, et cetera, and it stays open for some period of time. In an AI-Native SOC, that simply gets closed. You log it, you record it and say, hey, we had this incident, and it's been closed. It goes into the historical record, so you can do further analysis going forward. And your analyst is totally unaware, right. We've automated that one out of existence. There are obviously thousands of those kinds of scenarios. So that's really the goal: automate away the binary, the obvious, the straightforward scenarios. Then the analyst is spending their time adding additional playbooks to address the new threats, the new risks, the things that are new in the environment. They're spending their time creating more value for the SOC as opposed to running on a hamster wheel just chasing alerts.
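To make that concrete, here is a minimal sketch of the kind of auto-remediation Kieran describes, assuming an AWS environment and the boto3 SDK; the exception list, ownership lookup, and ticketing helper are hypothetical stand-ins rather than any particular product's API.

```python
import boto3

ALLOWED_PUBLIC = {"public-assets-prod"}  # hypothetical exception list


def lookup_owner(bucket_name: str) -> str:
    # Stand-in for a tag- or CMDB-based ownership lookup.
    return "dev-team"


def file_ticket(team: str, summary: str) -> None:
    # Stand-in for a real ticketing integration (Jira, ServiceNow, etc.).
    print(f"[ticket -> {team}] {summary}")


def remediate_open_bucket(bucket_name: str) -> None:
    """Close a public S3 bucket, notify the owner, and log the action."""
    if bucket_name in ALLOWED_PUBLIC:
        return  # approved exception; leave it alone
    s3 = boto3.client("s3")
    # Block every form of public access on the offending bucket.
    s3.put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
    file_ticket(
        team=lookup_owner(bucket_name),
        summary=f"Bucket {bucket_name} was public and has been closed; "
                "follow the exception process if you need public access.",
    )
    print(f"logged: s3-public-bucket-closed {bucket_name}")  # historical record
```

The whole loop runs without an analyst; only the log entry and the ticket to the owning team remain as evidence.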
David Moulton: Yeah, it sounds a lot like when I run my Roomba, right. I don't really care what it picks up. I just need it to go through, get the dog hair and, and be out of my mind so I can focus on something else. And something that you're talking about with this, you know, S3 bucket, fairly obvious, don't do that. I don't need an analyst interrupting their day to go no, don't do that. Here's the rules. Automate it out of existence and let the AI be the thing that logs it, runs it, actually, maybe even reports on how often that happens. Perhaps you move upstream a little bit on your training or your UI so that doesn't get, you know, repeated. But you're right, that doesn't need to be the thing that, the AI, the actual intelligence in the room, spends their time on.
Kieran Norton: Exactly, exactly. And that's right. You can tie it into your software development lifecycle and then you start, you can start measuring teams on their performance from a security perspective. You can start, you know, tweaking your training to address the, the most often seen misconfigurations in your environment and so forth.
David Moulton: So, Kieran, I'm curious, with that model in mind, do you think the AI will be able to come up with its own playbooks that it recommends to your SOC, to your environment, to enhance your security?
Kieran Norton: I think eventually, right. Today there's some work to be done by humans and AI together to develop those playbooks and add to the repository based on what the analyst is seeing in the environment. But recent studies show that AI is better at prompt engineering than people.
David Moulton: Yeah.
Kieran Norton: So we're going to get to the point where AI is going to be able to develop those runbooks and say, okay, here you go, just press here. And even today you can get recommendations for a playbook based on what the AI is seeing, right. So an alert comes through triage and it says, hey, you analysts may want to use this playbook to solve this issue, right. We can already do that, so it's not a big leap to get to the point of developing these things automatically.
David Moulton: It does strike me as one of those interesting things, having the AI prompt you with a playbook, and then you can infuse it with your creativity or your strategy. So let's shift gears a little bit. How does integrating AI into a SOC enhance threat detection and response capabilities?
Kieran Norton: So, you know, the core is obviously what has been going on for a long time, which is analyzing the data and looking for anomalies, et cetera. That is similar. But in this case, like all things AI-related, your AI is only as good as the data that you feed into it. And 20 years ago, we didn't have the repositories of data that we have today to really do analysis and improve the effectiveness of AI/ML models to detect threats, et cetera. That's really advanced. And so, you know, one of the reasons we work with Palo Alto around XSIAM is because the product has a sophisticated AI/ML engine that does not only the data collection and the data stitching, getting data into a consistent data model even from a whole variety of sources, but it also has AI/ML detection capabilities that are far advanced. You can even bring your own AI/ML if you happen to be a shop that has that level of sophistication. So that's really the heart of the value: the detection, reducing the number of alerts that are actually going to go to an analyst, automating, et cetera. I think the other part is the collection of data from across the entire environment and automatically identifying related activity as a single incident, single event, single case, right. So if you have, for example, 15 sources of telemetry in your network, and all 15 happen to see the same bad activity, you're probably going to see 15 different alerts related to that same event. And as an analyst, you have to go through all 15, right. Leveraging AI/ML, it'll actually aggregate that data down to a single incident, give you all of the information as an analyst and say, okay, here's what we've seen in all parts of the network that basically correlates back to a single action, right.
David Moulton: Right.
Kieran Norton: And that has a lot of value in and of itself.
David Moulton: Yeah, you're not getting little slivers and piecemeal data and evidence pulled together over time. You're getting one casebook, essentially.
Kieran Norton: Yeah, correct. So I mean, do you want 15 alerts, or do you want one, right.
David Moulton: I want one, always.
Kieran Norton: And do you want to go find all that context yourself, or do you want the machines to do it for you, right. [ Music ]
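As an illustration of that aggregation (a toy sketch, not XSIAM's actual correlation logic), alerts from many sensors that share an indicator can collapse into one incident:

```python
from collections import defaultdict


def correlate(alerts: list[dict]) -> dict:
    """Group alerts that share an indicator into a single incident."""
    incidents = defaultdict(list)
    for alert in alerts:
        incidents[alert["indicator"]].append(alert)
    return dict(incidents)


# 15 telemetry sources all observing the same bad activity...
alerts = [{"source": f"sensor-{i}", "indicator": "198.51.100.7"}
          for i in range(15)]
incidents = correlate(alerts)
print(f"{len(alerts)} alerts -> {len(incidents)} incident")  # 15 alerts -> 1 incident
```

A real platform correlates on far richer features than a single shared indicator, but the analyst-facing effect is the same: one case with all the context attached.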
David Moulton: You know, as you're describing this ability of better data, it reminds me of watching the Summer Olympics this year. And the athletes have better nutrition. They have better coaching. They have better abilities to get all of those inputs, and their outputs are tremendous. I don't know if you watched, but I'm, I'm just blown away by the -
Kieran Norton: Too much as a matter of fact.
David Moulton: -- yeah, by what they're doing. And maybe that's a model that we can look forward to. As we're saying, we've got the repositories of data, and we've got the ability to chew through it in real time and start to get those outputs, much like the Olympics shows us is possible, with the compute and with the analysis, once we've integrated this AI capability next to those massive troves of data. How about the measures? What should an organization look at as a way of seeing whether this is having the right impact? And what things should they not look at, that are a distraction?
Kieran Norton: I mean, I would say historically there's been a lot of reporting on metrics that don't necessarily tell you anything. So as an example, how many events were blocked at the firewall, right. Okay, yes, you're going to see there's activity there, and you should know how much activity is going on. But when you report that back to management and the folks who are trying to manage the risk, what does that tell them? It doesn't tell them much. It just says the firewall is doing what it's supposed to be doing. But people report that metric because it sounds good, right. It's one of those things where they'll say, well, we blocked four billion attacks across our external network. Okay, it's good that you blocked those things. But that's not telling you how you're actually doing. That's just telling you what you know you blocked versus what you don't know, right.
David Moulton: Yeah.
Kieran Norton: So I think a lot of security metrics over time have been reporting the wrong things, right.
David Moulton: So it ends up being splashy metrics but not a lot of so-what in them. Like four billion out of how many?
Kieran Norton: Correct. Yeah, and look, it's human nature to want to make things look good. So it's easy to come up with metrics in the organization that you can report to make it look like things are working, right. And in fact, over the years, I've seen clients get themselves in trouble with that kind of perspective and view. So I think the metrics that matter are more around the actual operational processes of the SOC. So, like we talked about, dwell time; the volume of alerts and how long it's taking to address those alerts; how many alerts per analyst; how many new playbooks are being developed, et cetera. Tracking those measures starts to tell you if you're actually realizing the value of an AI-Native SOC.
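A minimal sketch of computing two of those operational metrics, assuming hypothetical alert records with opened/resolved timestamps and an assigned analyst (the field names are assumptions, not a product schema):

```python
from collections import Counter
from datetime import datetime
from statistics import mean

# Illustrative alert records.
alerts = [
    {"opened": datetime(2024, 9, 1, 9, 0),
     "resolved": datetime(2024, 9, 1, 9, 45), "analyst": "a1"},
    {"opened": datetime(2024, 9, 1, 10, 0),
     "resolved": datetime(2024, 9, 1, 10, 20), "analyst": "a2"},
]

# Mean time to resolve, in minutes.
mttr_minutes = mean(
    (a["resolved"] - a["opened"]).total_seconds() / 60 for a in alerts)
# Alert volume handled per analyst.
alerts_per_analyst = Counter(a["analyst"] for a in alerts)
print(f"MTTR: {mttr_minutes:.1f} min; per analyst: {dict(alerts_per_analyst)}")
```

Tracked over time, falling MTTR and falling alerts-per-analyst are the trend lines that show the automation is actually working.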
David Moulton: The 15-to-one example that you gave, multiply that by every day, or by the fact that it's probably more than 15, and you're saying, on aggregate, suddenly you're down to 12 that you deal with on a given day. That's a pretty great world. Maybe you get it down to eight. That's tolerable, right. That's something you can handle even as a small SOC. I think that's what we do here at PAN; when I've talked to the team that runs our SOC, they're dealing with a handful on any given day. It's not in the hundreds, it's not in the thousands, it's not in the hundreds of thousands in any given month. If we're taking a different approach here, we're talking about things that we want to give to the machine: speed, scale, those sorts of things. Are there tasks, are there things that should always remain under human control despite these advancements in automation?
Kieran Norton: There's going to be a human in the loop in a lot of things. Again, if you detect something that's relatively binary, then you can automate that with a high degree of confidence, right. But there will always be incidents, issues, et cetera, that require an analyst to look at it and say yes before you pull the trigger. So I think that's going to continue to be the case for quite a while. Human in the loop, I think, will be the norm, but again, the goal is to present the human with fewer choices to make every single day about what's happening in the environment, and when they make a choice, to execute on that choice as effectively and as quickly as possible. So, going back to your prior question on metrics, over time we're going to see the number of incidents being reported going down. Assuming you're not playing with the system on the front end, that would be an indication that you're refining what's coming in and getting down to the things that matter over the course of time. So obviously the alert problem going down would be associated with that. You'd want to see the time to remediate reducing over time as well. Going from X number of hours to Y number of minutes, right, would be an indication that it's working.
David Moulton: Yeah.
Kieran Norton: And again, there will be situations and incidents that occur where humans definitely have to be involved, and they're going to have to drive certain decisions, et cetera. But you want to really save those for the human analysts and make sure that you've enabled the human analysts as effectively as possible.
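A sketch of that human-in-the-loop gate, with an illustrative confidence threshold; the cutoff and field names are assumptions an organization would tune for itself.

```python
AUTO_THRESHOLD = 0.95  # illustrative cutoff, tuned per organization


def handle(incident: dict) -> str:
    """Automate the binary, high-confidence cases; queue the rest for a human."""
    if incident.get("playbook") and incident["confidence"] >= AUTO_THRESHOLD:
        return f"auto-executed {incident['playbook']}"
    return "queued for analyst approval"


print(handle({"playbook": "close-public-bucket", "confidence": 0.99}))
print(handle({"playbook": "isolate-host", "confidence": 0.70}))
```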
David Moulton: Kieran, how does an AI-Native SOC improve the overall security posture of an organization?
Kieran Norton: I mean, typically it will help through a couple of lenses. One is that you're going to get better security outcomes, right. So better detections, better response time, mitigated risk in the form of reducing the amount of time that an attack is live on the wire, et cetera. You're also going to see advantages over the course of time: as you identify something new, you add an automation for it, or the AI/ML picks up a new vector, and those things start to get nailed down as well, right. So it's what's called a virtuous cycle. You should be seeing and remediating and automating every single day, and over the course of time you're going to see a significant amount of improvement. I think from a CISO standpoint, there are things like operational advances to be gained, cost mitigation, right. Today's model of running a SOC is fairly people-heavy. You can have those people redeployed doing much higher-value tasks, getting more value from the folks you have and probably improving their lifestyle in the process. And typically you can also do some technology consolidation, right. So you start to see cost savings on both the technology and the operations side. And obviously, most companies are in a position today where they'd like to find cost savings, and they're always dealing with resource constraints and labor shortages, et cetera, from a cyber standpoint. So there's a lot to be gained there as well. And then ultimately, I think you also get the advantage of being able to assure executive leadership as to the posture of your organization. One of the things I hear frustration about from boards is their inability to get a clear view of how things are working within the cyber function, because it's such a complex answer, because there are so many systems involved. The complexity associated with trying to pull that together and present it to leadership in a way that they can understand is so high. An AI-Native SOC capability will give you much better insight, much better ability to measure the risk of threats, et cetera, and you can show progress over time to give them a real view of what's going on.
David Moulton: Can you share with the listening audience some real-world case studies or examples where those improvements have been observed?
Kieran Norton: Sure. So we, we have a client that was seeing over 100,000 alerts a day from their cloud security posture management tool.
David Moulton: Whoa.
Kieran Norton: Right, 100,000. That happens, you know, typically when you set it up, you turn it on and you turn on all the, let's just call them signatures for lack of a better term. And so through a combination of applying AI/ML to the log aggregation, consolidation, stitching, et cetera, as well as automating the highest-volume, most frequently identified alerts in the environment, obviously including the ones that tend to be a bit riskier as well, we were able to carve down the time significantly from an ability-to-respond standpoint, right. We saw a 12X improvement in mean time to resolve and a 5X improvement in the number of alerts resolved per day, right. So that's just a tactical example, and there's --
David Moulton: Right, and as you're talking about that 12X improvement, you know, from 100,000 alerts when you turn the noise level all the way up, when you open that up, there isn't a human anxiety measurement. But I can only imagine what that team suddenly felt when that was turned on and they had to deal with it, versus when it was tuned and they're going, now I have confidence in the system, and I've got attention that I can put somewhere else.
Kieran Norton: Yeah, we, we call that alert fatigue. That's the terminology we use for it. Yeah, alert PTSD could also be another term you might use for it.
David Moulton: Right, right.
Kieran Norton: But yeah, a lot of times that's when we get involved, when a client calls and says, hey, look, we're just, we're getting a flood of information, and we can't get --
David Moulton: Yeah, I would never discount the quality-of-life improvements that can be made through tuning and good software. You know, 20 years as a software designer, it's hard to discount that, even as I've moved into telling stories on podcasts as opposed to designing interfaces. So Kieran, let's slide into AI-driven threat intelligence as a topic. I'm curious, how does AI enhance the collection and utilization of threat intel in a SOC?
Kieran Norton: So it's a key component. I mean, we've been using threat intel for a long time; it's not a new topic, certainly. I think one of the core abilities a lot of the tools out there have today is to take signatures, et cetera, right, or take TTPs, and convert those into direct identification of and response to defined attacks. In an AI-Native kind of world, you're using intel and data from your own environment, right. You're refining what you know about your environment, and you're responding with higher degrees of context. So yes, it's good to know that there's a TTP associated with a particular threat group and there are detection signatures for your environment. But actually knowing what's anomalous in your environment, what's happened over the course of the last seven years in your environment, responding to the context that you see, threat modeling your own specific technologies, et cetera, is where we can make a significant gain with AI/ML, and gen AI can help with that.
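One small way to picture that contextualization, as a toy sketch with made-up indicators: intersect an external intel feed with what your own telemetry has actually observed, so only locally relevant intel surfaces.

```python
# External intel feed vs. indicators actually observed in your environment.
INTEL_IOCS = {"203.0.113.9", "evil.example.com", "198.51.100.7"}
SEEN_IN_ENVIRONMENT = {"203.0.113.9", "10.0.0.5"}

# Surface only the intel that matters in *this* environment.
relevant = INTEL_IOCS & SEEN_IN_ENVIRONMENT
print("intel hits observed locally:", relevant)  # {'203.0.113.9'}
```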
David Moulton: So Kieran, if I play back what I think I've heard, you're saying that over time each AI becomes tailored or customized to the environment and the organization it is protecting, biased towards the risks that that business is most interested in thwarting. Is that right?
Kieran Norton: Yeah, that's right. That's probably stated better than the way that I said it, so thank you for that. Yeah, I mean, clearly the data science techniques, like limiting bias, et cetera, can be applied. There's certainly bias in the way we look at our environments and the defense of those environments today, just based on experience, right. So there are some advantages to be had just in managing those processes a little bit more effectively, a little bit more analytically. But that's right. As the platform learns your environment and tailors itself to your environment, the accuracy of the intel and your ability to use that intel is going to improve.
David Moulton: Yeah. And I think that, given that there are all kinds of sources that come with their own value, AI can start to help you tune what to listen to and weight, and what to ignore. Although it can probably cherry-pick those moments when maybe those lesser sources, those noisy sources, do have the nugget in them that is particularly useful to you. [ Music ] Let's shift into this idea of zero trust. What role do zero trust solutions play in the AI-Native SOC framework?
Kieran Norton: So it's actually, you know, sort of the inverse, right. An AI-Native SOC can allow you to support, create, and defend your environment leveraging zero trust principles, right. It's core to a zero trust implementation, a zero trust model. Without a SOC that's capable of more advanced analytics, response automation, et cetera, getting to a zero trust model is going to be exceptionally difficult because of the level of effort and the cost and everything else that goes along with it. So they're tied together from the standpoint that if you're trying to achieve a goal of zero trust, an AI-Native SOC is going to get you there faster, more effectively, and at lower cost than trying to do it by piling technology on top of technology.
David Moulton: So these end up being complementary rather than just this idea of Lego bricking your way into something that ultimately becomes kind of brittle in your pursuit of that zero trust principle or outcome that you're looking for. Kieran, how does an AI-Native SOC enable scalability and flexibility in handling some of these evolving cyber threats?
Kieran Norton: So we've talked about scale. I think agility is an important part of scale going forward. As the threats change and you see new things happening in the environment, you see new attacks, et cetera, you need to adjust your system to respond to those things, and you need to be able to develop new response procedures and processes to address a threat, automate it out of existence essentially, like we've talked about. To do that, you have to have a platform that is constantly evolving itself, both in terms of the data it uses and the threats it's identifying, going back to the conversation around intel and some of the other components in our earlier discussion. And then you also need the ability to constantly add new capabilities on top of it to address what's new from an evolving threat landscape standpoint, right. So scale is both volume, but it's also agility, in the sense that you need to be able to adapt quickly. And it may be that there's a particular threat that you're seeing for a short period of time and then it goes away. That's fine, right. But you need to be able to adjust on the fly and respond in closer to real time, as opposed to putting it through another engineering cycle so that four months later you'll have a new set of detections deployed in your environment. Does that make sense?
David Moulton: It does. Kieran, when you're thinking about keeping the benefits of scalability and flexibility, or agility, what are the key factors that an enterprise should really weigh?
Kieran Norton: So from a scalability standpoint, it's a lot around ingestion and being able to add new data sources quickly, right. As new applications and new environments spin up, you need to be able to get those connected into the SOC platform readily. You don't want to have to figure out the data model every single time, right. So the scalability comes from the fact that you've got a large data model in place that can normalize, rationalize, and pull the data in, and start doing datafication as soon as the source is onboarded. That's a key component. When it comes to playbook development and expansion of automation and so forth, at the core you want to be able to cycle through that and manage your playbooks in an effective fashion. That capability is available now in something like XSIAM, right, and XSOAR is both an incorporated part of XSIAM and its own product. What you need to do when you're building out those playbooks is use a structure that makes sense. You're still doing the smart things you do from a code perspective, right. You still want to organize your code in a way that you can easily make changes, you can adapt, and you can get in at a module level and manage it at a module level. Your runbooks need to be built in a similar fashion, if that makes sense.
David Moulton: It does.
Kieran Norton: You don't want to create 7,000 runbooks that are all individually managed, because now you're going to have a scale and agility issue. You want to create core runbooks that cover a number of scenarios, that address a number of issues and threats. And then you make variations or additions to the core to handle what's outside the core components. So that design becomes pretty important from a scalability standpoint.
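A minimal sketch of that modular design: core playbooks composed from shared steps, with variations derived from the core rather than copied. The step and playbook names are illustrative.

```python
def enrich(incident: dict) -> dict:
    incident["enriched"] = True  # add context: owner, asset, history
    return incident


def contain(incident: dict) -> dict:
    incident["contained"] = True  # e.g., isolate host, close bucket
    return incident


def notify(incident: dict) -> dict:
    incident["notified"] = True  # ticket the owning team
    return incident


def reset_credentials(incident: dict) -> dict:
    incident["creds_reset"] = True
    return incident


# One core playbook built from shared, individually maintainable steps...
CORE_PHISHING = [enrich, contain, notify]
# ...and a variation layered on top of the core rather than a separate copy.
PHISHING_WITH_CREDS = CORE_PHISHING + [reset_credentials]

incident = {"id": "INC-1"}
for step in PHISHING_WITH_CREDS:
    incident = step(incident)
print(incident)
```

Fixing a shared step once fixes it for every playbook that uses it, which is where the agility comes from.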
David Moulton: So let's talk about integration with existing tools, because every SOC has older tools and things they rely on that they're not willing to walk away from. How can organizations integrate those existing security tools and infrastructure with their AI-Native SOC platforms?
Kieran Norton: So first, I would encourage people to really evaluate whether or not they need all the security tools they have deployed. A lot of our clients have purchased a number of security tools over the years to solve a number of point problems, and those tools don't necessarily talk to each other very well, and there's definitely overlap. In a lot of cases, they're not deployed very effectively. So personally, I would rather see 10 tools deployed fully and effectively in an environment than 40 tools of which 20 are not really fully deployed or used. We see this every day, right. There are logs being generated in various tools and systems that aren't being looked at, and the value is not being extracted. So I encourage people to really examine that, and their assumptions about which tools are returning value for them. Because if they don't know what a tool is doing for them and they can't point to the value it's creating, then that tool is probably up for consideration: do you really need it, right? Wouldn't you be better off having five teams on five tools rather than 20 teams on 20 tools? We see this kind of thing constantly. So, setting the soapbox aside: obviously, a lot of the technology products have the ability to integrate with a lot of different data sources. Wherever there's a built-in connector or adapter, you absolutely want to go that route. For those sources that don't have a built-in, you can create a custom parser in order to analyze that data and pull it into the same data model, et cetera. And again, you want to make those integrations as effective as possible, and you want to cover as much of the scope of the environment as is valuable and possible. Do you need logs from your routers from 10 years ago? Maybe, maybe not, right. It could be the environment's changed a lot and that data is not going to be very helpful. Well, then don't bring it in, because you're not going to get value out of it. But could you use the last five years of firewall logs? Yeah, probably, because there's going to be a lot of information there. It's going to make your system smarter about how to defend you today. So it's also about making those choices about what comes in, pulling in the things that are going to have value and enhance the end result, as opposed to just throwing everything in there because, from a compliance perspective, you want to be able to check the box.
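For the custom-parser case, a minimal sketch: extract fields from a raw log line and map them into the shared data model. The log format and field names here are invented for illustration.

```python
import re

# A raw line from a source with no built-in connector (format is invented).
LINE = "2024-09-01T12:00:00Z deny src=10.0.0.5 dst=203.0.113.9 port=445"
PATTERN = re.compile(
    r"(?P<ts>\S+) (?P<action>\w+) src=(?P<src>\S+) "
    r"dst=(?P<dst>\S+) port=(?P<port>\d+)")


def parse(line: str) -> dict:
    """Normalize one raw log line into the shared data model."""
    m = PATTERN.match(line)
    if not m:
        raise ValueError(f"unparseable line: {line!r}")
    d = m.groupdict()
    return {"timestamp": d["ts"], "action": d["action"],
            "source_ip": d["src"], "dest_ip": d["dst"],
            "dest_port": int(d["port"])}


print(parse(LINE))
```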
David Moulton: You know, as you talked about this idea of interrogating whether or not you need some of the data, it reminds me of an integration that I was working on some years ago with a payroll company. They had 12 different applications, and they wanted to bring them all together in a SaaS product. And we ran into a really complex area of the UI, and I asked, well, what is this for? The team I was working with didn't know. So they reached out to payroll clerks and experts in the business. That group didn't know. They reached out to more of the customers. That group didn't know. It had been dragged forward through several iterations of their software, and no one thought, let's get rid of it. They were going, well, maybe removing it will mess something up. So we said, we can come back and add it in if we've made a terrible mistake. But instead of slowing ourselves down, instead of including it and moving forward when we don't know who uses it, we don't know how it works, we can't find documentation, and we can't find anyone raising the alarm that cutting it would be a problem, we cut it. And we waited in silence, and we moved forward. We had a smaller, leaner SaaS product without it. I think what you're talking about there is that idea of FOMO, or that somebody else had put that into place, and now you're worried that if you get rid of it, what if something bad happens. But there is a cost to dragging that forward. Now you've got 10-year-old data off of a set of systems, or telemetry that you really don't need. Maybe it doesn't match. Maybe it doesn't give you any value, but it does cost you. So I like that idea of really being thoughtful about what you move forward with from those existing tools, for sure.
Kieran Norton: And that's why we talked about SOC transformation, right. Not SOC, you know, rebuild.
David Moulton: Right.
Kieran Norton: Taking the same things we're doing today and just doing them with more advanced technology is not going to get you farther ahead. You have to change some of those decisions. You have to make different decisions, and you have to change the way you operate in order to see the advantage. So oftentimes, to your point, we'll see that clients have been using a particular technology, and it's been feeding into their SIEM for the last five, seven, ten years. And when we go and ask, is that required, do you need that data, we get an answer like, well, I don't know, because we've always had it. It's like, all right, well, maybe we need to reexamine the question and see if that is really something that's contributing value today. So you can always, to your point, David, you can always add them later, right. But what's really key is figuring out which sources are going to provide the greatest value most immediately. Start with those, and then you can add more over time.
David Moulton: Yeah, I think the way I'd put this is, if we were to build the SOC that we need today versus improve the one we have, that's the difference between a transformation and an iterative evolution. And sometimes it's best to cut it and build for what you need rather than optimize what you currently have in place. [ Music ] Let's shift away from looking back at what we need and what we don't need, and get into some of the future of AI and cybersecurity as a topic. Kieran, what are some of the future trends in AI and cybersecurity, particularly in the context of a SOC, that you see coming and that are important?
Kieran Norton: Well, I certainly think the development and use of bespoke models is something that's happening today, but it's going to continue to occur more frequently going forward, right. Once you start down the path of building agents to, again, remove the need for human involvement in routine process tasks, et cetera, you're just going to see that concept expand. So today you can do some things leveraging platforms like XSIAM, and in addition to that, you might be adding bespoke agents to help your team take care of various tasks. For example, one of the ones we're working on currently involves a copilot. It's really designed as a ride-along for a tier-one analyst, so that when they get the recommendation from the core engine, when XSIAM says, okay, here's the recommendation around the playbook, if they don't understand the playbook, the ride-along copilot is there so they can ask: what does this playbook do? What do I need to know? Can I get further information on this, et cetera? It's going to make them more effective in determining whether or not they're going to execute that playbook, right. So that's one example, and that's here and now. But I think as we go forward, we're going to see more agents deployed more widely from a cyber perspective. Ultimately, the vision that we're working towards, and I wear the hat of owning our internal development and use of AI as it applies to our cyber business, the direction we're going is to chain together a series of agents that allow us to perform services for clients faster and more effectively, with better outcomes, right. And I think that will be true within cyber functions for those organizations that can afford it, can build the team, and have the right resources. They're going to start developing agents that take the load of some of the routine activities in their business and address it. Third-party risk is a great example. Most regulated entities have a whole program for managing the risks associated with their third parties, and a lot of that program is focused on taking in input from a third party, be it a SOC 2 report or something else, comparing that to a set of requirements, figuring out where the gaps are from a security perspective, and then following up with that particular vendor to say, hey, look, we don't see these things in your SOC 2; we don't see this in the documentation provided. Can you tell us what you're doing about X, Y, and Z? Historically, that's a very manual process. A person looks at the documentation. They look at the requirements. They identify the gaps. They go through that whole process. Well, we're solving that with a dedicated agent. It's going to do all the initial analysis of the documentation, the comparison to the framework, the identification of gaps, and ultimately come up with recommendations. And it's going to pass that back to the analyst, who then refines and tweaks based on their experience and their knowledge of what the vendor's organization and business does, et cetera, speeding up their ability to move through what is, historically, both a manual process and one required from a compliance perspective.
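A toy sketch of the gap-analysis step in that pipeline; the requirements framework and the vendor's reported controls are illustrative stand-ins for what the real agent would extract from documents.

```python
REQUIREMENTS = {"mfa", "encryption-at-rest", "incident-response-plan"}


def assess(vendor_controls: set) -> dict:
    """Flag gaps and draft follow-up questions for the analyst to refine."""
    gaps = sorted(REQUIREMENTS - vendor_controls)
    questions = [
        f"We don't see '{g}' evidenced in your SOC 2 report. "
        "Can you tell us what you're doing here?" for g in gaps]
    return {"gaps": gaps, "follow_up": questions}


print(assess({"mfa", "encryption-at-rest"}))
```

The agent's output is a draft; the analyst still refines it with their knowledge of the vendor's business before anything goes out.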
David Moulton: So we've talked a lot about some amazing capabilities with technology and the AI-and-AI, cookies-and-cream kind of world that we see. I'm wondering, if you're a SOC analyst today, or you're desiring to be one, what are some of the things you would recommend that person lean into, as far as skills or training or areas of interest, that would bolster their success in this combo world where we've married together man and machine towards our security goals?
Kieran Norton: Today I'm going to say Python.
David Moulton: Python?
Kieran Norton: Right. Yeah, no, Python, right. Because that is the language of the cloud. That is the language of a lot of the automation capabilities in security tools. Python is really becoming a core competency; it's how you, in many cases, write code to achieve the outcomes you're looking for, right. So that's kind of the short answer. The long answer is focusing on the principles. As you can see from my gray hair, I've been in the business for a long time. And while new technology trends come up, and new technologies do bring nuanced threats and nuanced issues that we haven't dealt with before, a lot of the patterns have existed for a long time. So rather than focusing on how do I solve for this particular threat, understand what that threat is actually originating from. Where is it originating? What is it taking advantage of? What is it exploiting? Understand the problem at that level, because you're going to see that threat pattern somewhere else. Focusing on understanding the pattern is going to make you more and more valuable over time, because pattern recognition is a human thing, not just a machine thing. And you're going to get better at addressing threats in the environment, responding, and doing these kinds of things as an analyst.
David Moulton: I think that's so well put. Folks can't see me, but I got a touch of the, I think it's called distinguished gray here and there.
Kieran Norton: Is that what we call it?
David Moulton: Yeah, I think so. At least that's what my wife has given me; she smiles and tells me it's distinguished. So I believe her.
Kieran Norton: My wife challenged me on putting in my last passport application that my hair was brown. She's like, really? Brown?
David Moulton: I mean, in the right light.
Kieran Norton: Yeah, exactly. In the right light 10, 15 years ago, yes, completely true.
David Moulton: Years ago I was learning a program you might have heard of, called Adobe Photoshop. And once I got proficient in it, I thought of myself as a designer, because a designer could use Photoshop. And I don't know, five, six years into it, I started to realize that knowing Photoshop was a skill, but it didn't make you capable of making design decisions that mattered; that took an underlying foundation of design principles. So when you're talking about knowing Python, that's a great tool to have. It's the Photoshop of the moment, if you will. But understanding the foundations and principles of security, and pattern matching, and how an attack can occur: those are going to be the things that transcend the fad of the moment and give you the ability to provide that creativity and value that the human has in this relationship, between speed, scale, and agility from the machine, and creativity and understanding of the adversary, because they are generally human. We're always going to have some value in that space. So I think it's a really good piece of advice: understand your foundations. Kieran, I love to wrap these interviews up with a question. It's the same question for everyone. What's the most important thing that a listener of today's conversation should take away from it?
Kieran Norton: I would say there are sort of two points. First off, and we didn't talk a lot about this during the conversation so far, but I do want to mention it: there's a lot of concern, historically, around using AI and gen AI specifically. A lot of organizations that want to adopt AI and gen AI from a business perspective are not doing so out of concerns about security risk, data loss, and so forth. We actually know how to tackle those problems as an industry of professionals; we've been dealing with those risk domains for decades. Yes, there are nuances, right, that you have to adapt to, and you probably need some new capabilities in your software development lifecycle and some new capabilities from a governance, data analysis, and testing perspective, right. So I encourage everyone to think about that, because as an industry, I think security professionals have a history of saying no and a history of being very protective, and while I can appreciate that, I think we have to recognize that this is the direction of technology, and we need to lean in to make it happen. Okay, so again, I'll put that soapbox aside. I think in terms of using AI in the business of cyber, right, in defending organizations, we need to think about what we can do today, where we can get value today, and make investments that we know are going to be no-regret decisions. Because it is changing so quickly. The technologies are changing rapidly, the capability of the technology is changing rapidly, et cetera. But there are areas where we know we can start today. There are solutions today, we can make progress today, and we're going to have no regrets about those things later. SOC transformation is obviously one; we're leaning heavily into that for that exact reason. Securing code from development through to cloud deployment, what we call code to SOC, right, because on the back end, in the cloud, you're monitoring for security threats, you're responding, you're reacting, et cetera. Tying that entire process together, automating it as much as possible, seeking an advantage to be able to do it more quickly, at lower cost, with better outcomes, et cetera: that's another area where you're not going to have any regrets about improvements, right. So again, just focus on those things you know you won't regret and can make progress on today. And do that in parallel while watching what's happening from an overall direction standpoint, and identifying when you get to the point of, okay, now we're going to start building bespoke agents, now we're going to start doing these other things. Because it is going to take a little time for some of that to shake out. The skillsets are pretty tough to find right now, those kinds of challenges.
David Moulton: Makes sense. No regrets. Make those decisions immediately, and don't resist the era of AI with a knee-jerk reaction of saying no first until you have it 100% secured.
Kieran Norton: Yep, that's right. [ Music ]
David Moulton: Kieran, thanks for the great conversation today. I really appreciate you sharing your insights on the AI-driven SOC, automation and some of the future trends that you're seeing in this space.
Kieran Norton: Sure, David, glad to be here and I appreciate the, the time and the conversation.
David Moulton: For our listeners, if you're interested in going deeper on this topic, I recommend you read Deloitte's State of Generative AI in the Enterprise 2024 report. It highlights organizations' evolving journey in scaling and extracting value from generative AI initiatives. We'll have those links in our show notes for you. That's it for today. If you like what you've heard, please subscribe wherever you listen, and leave us a review on Apple Podcasts or Spotify. Your feedback and reviews really do help us understand what you want to hear. If you want to reach out to me directly and share your ideas for the show, you can e-mail me at threatvector@paloaltonetworks.com. I want to thank our executive producer, Michael Heller, and our content and production teams, which include Kenne Miller, Joe Bettencourt, and Virginia Tran. Elliott Peltzman edits Threat Vector. We'll be back next week. Until then, stay secure, stay vigilant. Goodbye for now. [ Music ]