Threat Vector
Ep 35 | 9.19.24

Securing the GenAI Transformation Journey with Accenture's Daniel Kendzior

Transcript

Daniel Kendzior: I think they should take away that, again, security orgs really can and should lead and innovate in this space. We should identify areas where we can help drive better risk reduction, but also enablement of your business. And do so by defining what those guardrails should look like, so the rest of the business can then follow in your footsteps in a very secure and responsible manner. Let's lead from the front and really help our businesses adopt wherever we can. [ Music ]

David Moulton: Welcome to "Threat Vector," the Palo Alto Networks podcast, where we discuss pressing cybersecurity threats and resilience, and uncover insights into the latest industry trends. I'm your host, David Moulton, Director of Thought Leadership. [ Music ] Today, I'm thrilled to be joined by Daniel Kendzior, the Global Data & AI Security Practice Lead at Accenture, and a managing director. Daniel brings an impressive wealth of experience in cybersecurity strategy and architecture, particularly in orchestrating large-scale information security transformations across global organizations. He is a recognized leader in integrating cybersecurity into the core architecture of products and services, making it a true business enabler. His work spans various sectors, including life sciences, health, and industrial clients, where he's been instrumental in driving the seamless adoption of security practices and AI in data management. Today's topic is incredibly timely and significant: "Securing the GenAI Transformation Journey." With the rapid advancement of generative AI technologies, organizations are undergoing a transformation that holds immense potential, but also brings a unique set of security challenges. As AI becomes increasingly integrated into business operations, ensuring it is secure is not just a technical necessity, but a strategic imperative. Daniel's expertise will shed light on how organizations can navigate this complex landscape and build a secure foundation for their AI initiatives. Here's our conversation. Daniel Kendzior, welcome to "Threat Vector." I've been looking forward to recording this episode with you.

Daniel Kendzior: Likewise, David. Happy to be here.

David Moulton: Daniel, before we get going on our conversation about securing the GenAI transformation journey, I'm wondering if you can tell me your favorite cybersecurity story.

Daniel Kendzior: Sure. I think the firsts always really stick out for me. So, I remember, this is years ago, but the first time I met a threat actor on the wire really stood out. We were at a Southeast Asian financial institution, and the company was reporting that they were having a lot of issues with core source data changing. And so, we were inspecting and looking at logs, and we had literally just rotated credentials to some very secretive, confidential code repositories that the bank was still developing, and we watched the threat actor try and log in with the credentials that we had just rotated. And we saw them try it again, like they must have mistyped something, and that was -- yes, that was just such a cool experience for a young person just getting into the space. And it made it so very real and very meaningful that we were able to catch them in the moment, and then obviously take them down a very long path of resilience and hardening after that.

David Moulton: I love it. And your story is a win. Sometimes these stories go a different direction and stick in our craw, but I love the positivity. Let's get into the conversation. We're well over 18 months into the broad and rapid proliferation of AI technologies across industries and use cases. How is the cybersecurity space impacted by AI?

Daniel Kendzior: It's been tremendous. I mean, if I just look at the time since ChatGPT entered the public domain, we're really seeing two or three very large transitions occur, right? One, from a threat actor perspective, we've seen a real surge in ransomware. So, you know, over a 76% increase since the end of 2022, which is very significant, right? And it's hard to tie that all specifically to AI or things like that, but along the same time, with all of this going on, from a security perspective it's very impactful. You know, the rise of phishing attacks. We're seeing almost a 1300% increase since ChatGPT launched. And in that same time, cyber defenders are really trying to figure out how we take all these really exciting LLMs and exciting plug-ins and all this AI technology that's hitting the market and figure out, "How do we leverage it for improving cybersecurity operations?" So, making our jobs faster, a little bit easier, allowing us to really spend time on the things that are exciting and that add to risk reduction for the organization. And so, that's kind of the backdrop. And then on top of that, you obviously have a quickly evolving regulatory landscape, right? And so, all businesses are really trying to understand, you know, what things like the EU AI Act are really going to mean for them. You know, trying to follow along with the different federal, state, and local government draft regulations that are in motion, and trying to figure out, "How do we stitch all these three things together?" So, you know, if you're a Chief Information Security Officer or a Chief Digital Officer, it's a lot to juggle all at once, but it's just a super exciting time for the industry from my perspective.

David Moulton: So, Daniel, I recently talked to Ryan Barger on our Offensive Security Team, and he was talking about the uses they have for AI, focused on something mundane like automation around reporting, but he also got into building infrastructure and some of the different things they need when they're going to attack a customer as part of a Red Team operation. I'm wondering if you can give some more examples across the cybersecurity spectrum where AI is enhancing what we're doing on the cyber defense side?

Daniel Kendzior: Yes, absolutely. I think of it in two big layers, right? There are more strategic things that you're going to do, and then there are more operational, tactical things that you're going to focus on, right? When we think about it from a strategic perspective, you know, like we were talking about before, you have a whole bunch of new requirements hitting your organization and a very quickly evolving ecosystem. So, you've got new pieces of technology being integrated. You've got new use cases that the business is trying to leverage. And so, from a threat actor perspective, or someone looking to attack an organization, all that new technology is a very interesting area to go after, right, because generally the stuff that's very new might have fewer security controls in place, maybe a little bit less rigor from an identity and access management perspective. Obviously, you have less of a baseline, so it's harder to understand, right, what good looks like, what secure looks like. And so, you know, we're seeing threat actors really go after that. And so, from a defending perspective, you're trying to focus on, "How can I take all of these new integration points, these new services, these new APIs, and quickly do pattern recognition?" That's a great use case for AI, right? And even traditional AI. I think what the generative AI component really can do is enable the defenders to take all these new sources, you know, create summaries, create patterns, etcetera, and do that in a much faster way. And then really communicate it across a growing ecosystem more efficiently. Most of our clients are operating globally at this point, right? The need for sharing intelligence, sharing data across your own enterprise, but also with third parties in different languages, is obviously super topical. So, being able to figure out, "How can we identify something that's going on? How can we create testing protocols which are very rigorous, you know, very tailored?" Those are all areas where we see generative AI adding really significant value, and it's obviously still very early days, right? So, I think there's a lot more to come in terms of being able to own the landscape.
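
Pattern recognition over brand-new integration points is where even traditional AI earns its keep, as Kendzior notes above. A minimal sketch of that idea, assuming hypothetical per-session API telemetry features and using scikit-learn's IsolationForest, might look like this:

```python
# Minimal sketch: flagging unusual call patterns from a newly integrated API.
# IsolationForest and the three features below are illustrative choices only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session telemetry: [requests/min, distinct endpoints, error rate]
baseline = np.array([
    [12, 3, 0.01], [15, 4, 0.02], [10, 2, 0.00],
    [14, 3, 0.01], [11, 3, 0.03], [13, 5, 0.02],
])

model = IsolationForest(contamination=0.1, random_state=0).fit(baseline)

# New sessions from the fresh integration; -1 marks an outlier worth triage.
new_sessions = np.array([[13, 4, 0.02], [220, 40, 0.35]])
for session, label in zip(new_sessions, model.predict(new_sessions)):
    print(session, "anomalous" if label == -1 else "normal")
```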

David Moulton: So, as organizations embark on their AI transformation journeys, what should they consider to maintain trust, security, and resilience?

Daniel Kendzior: I think it's such an interesting time for security to really lead, right? And let me explain that in a little bit more detail. You know, when we look at the journey to cloud, it wasn't necessarily the best day for the security industry as a whole. By and large, security struggled to keep pace with the aspirations of some of our technology counterparts, and some of our business counterparts for sure. And so, from a Chief Information Security Officer's seat, right, or a Security Operations Team's seat, etcetera, this is a great time to really say, "Hey, we're going to take this new technology and we're going to be a first adopter. And in doing so, we're going to identify areas where we're going to add value to our own operations, but we're going to do that by adding the security guardrails in real time." At the end of, you know, a couple sprints of innovation or experimentation or whatever it is, we not only have shown the business how we can be more effective, more efficient, etcetera, but we've also paved the way for the guardrails that the rest of the organization should be leveraging. So, things like AI firewalls. Things like being able to do proper assessments of the use cases that you're conducting with AI, and creating those initial inventories and the frameworks and everything that the rest of the organization can draft after. And so, you start with, "How do we do innovation at pace? How do we put the guardrails in place? And then how do we start to also leverage all these other components of AI that are super important from a governance perspective?" So, things like responsible AI, right? How do we make sure that we're being very thoughtful, limiting, let's say, toxic behavior, toxic speech, bias, and inserting trust into these different workloads? Those are all things that security can demonstrate to the rest of the organization, rather than letting them start to go down a path and then saying, "Okay, great. This is what you really need to be thinking about." This is a great opportunity to lead from the front and really make it baked in from Day 1, and then that way, I think security will really be an enabler on these AI transformations.
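
The "AI firewall" guardrail mentioned above can be pictured as a policy screen that sits in front of the model. Here is a minimal sketch, where the policy categories and regex patterns are made-up examples rather than any real product's rules:

```python
# Minimal sketch of an "AI firewall" style guardrail: screen a prompt against
# simple policy rules before it is forwarded to any model. The patterns and
# policy categories here are hypothetical examples, not real product rules.
import re

POLICY_PATTERNS = {
    "credential_leak": re.compile(r"(?i)(password|api[_ ]?key)\s*[:=]\s*\S+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, violations) for a prompt under the toy policy."""
    violations = [name for name, pat in POLICY_PATTERNS.items() if pat.search(prompt)]
    return (not violations, violations)

allowed, hits = screen_prompt("Summarize this log. api_key=sk-123456 appears inside.")
print(allowed, hits)  # False ['credential_leak']
```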

David Moulton: Daniel, I've talked to Noelle Russell about this idea of baby tigers. That's what she calls AI projects. We ignore the claws, the teeth, the hungry, hungry eyes. They're very cute little baby tigers. And I think a lot of us run off and try to create solutions. We invest in something that's shiny and new. Do you think we've learned the lesson yet to include security in the development of AI tools and AI experiences, or are we still at a point where the business is moving ahead, moving away from having security as part of that cohort?

Daniel Kendzior: I think by and large, we're getting better. I think there is more of an acknowledgement around security risk, and I think things like responsible AI have really come to the forefront. The thing with business teams sometimes is they know that they're going to throw so many ideas out there, and they know that their batting average is going to be low on the ones that they really want to hit. And so, sometimes there's a hesitancy that they're going to waste security's time. And I would really just reframe that as an opportunity. I think sometimes security will also be able to bring to the table areas where you can innovate faster, right? Because they've seen other types of transformations, and maybe where things got hung up, or where the business or technology leaders were thinking about it in a very traditional way. And so much of security is being resilient and being scrappy and trying to make things work in an organization where you don't always have all the resources that you want to have. So, I think the more that we can bring them in, the better. I would just say that 99% of the security defenders that I work with would love to be in more early-stage conversations.

David Moulton: You said something there that I think is going to take us to our next question about being scrappy and resilient. And because some of the security teams don't have infinite funding and all the capacity that they could ever hope for, they automate a lot of their work. What role does AI play in automating some of the routine security tasks? And can you talk about how that might free up an analyst to focus on more complex issues?

Daniel Kendzior: Yes, absolutely. Every security operations center team right now is getting pummeled with alerts, logs, requests to do assessments, requests to handle incidents. You know, it's just a very busy time from a business perspective, a very busy time from a technology perspective. And the threat environment is obviously more complex than ever, right? So, I think we've established that basis. And you know, when you always have more inbound than you're going to have from a human capital perspective, the automation piece becomes super, super important. And to me, it's not automating tasks to the point where they're entirely autonomous and there isn't any type of human involvement. It's really about taking the really great, skilled resources that you have and letting them use the majority of their working time on things where it really requires a human decision, and allowing them to also take on things where you're probing and pushing boundaries and looking into areas that you hadn't in the past. And so, if you've done something in the past, that's obviously where the automation comes in, in a big way. When I just think of a typical day in the life of a SOC analyst, incident and event handling where you're pulling data together, aggregating it into some type of format to then either pass up the chain or pass to a third party -- you know, very few of the defenders that I talk to on a daily basis feel that the best use of their time is in PowerPoint and Word documents and Excel and things like that. That's where we believe generative AI has an immediate ability to really add a lot of efficiency, right? So, being able to define standard reporting templates, and what are the fields that I need to leverage, and how can I format this in a way that, rather than taking two hours to go clean something up after I've pulled all the data, I can do it in 25 seconds. And guess what? I can also have the AI do some validation, particularly for more junior analysts that might not have as much experience with the organization or some of the processes, and actually help coach that individual. Say, "Hey, by the way, you know, we didn't see you pull in this dataset," or "We didn't see this attribute come in. You know, frequently this is part of the check of what happens when we generate this report. Is that something that we should look into now?" and then assist that analyst. You know, that type of paperwork, we feel from a toil perspective, is just super impactful, and it really frees up those analysts to go climb the stack and do more of that proper, you know, threat hunting and inspection down the path. You know, earlier we also touched on this global organization concept, right? And so, being able to leverage teams that are in all different parts of the world that potentially speak very different languages, and being able to drive that communication efficiently -- I can't tell you how many examples we have, you know, opportunities where we were able to help curb or mitigate a situation just in time, right? So, you're pushing telemetry to someone. You're pushing an event to someone. And being able to save potentially a couple hours by not having to go through a third-party translation, either internally or through some other software, really allows those defenders that are maybe just waking up for their shift to take action. And those types of cycles, when they compound day after day after day, have a tremendous impact year over year.
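
To make the reporting workflow above concrete, here is a minimal sketch of template-driven drafting with the validation-and-coaching step Kendzior describes; the required fields are hypothetical, and call_llm() is a placeholder for whatever model client an organization actually uses:

```python
# Minimal sketch of the reporting workflow described above: validate an incident
# record against a standard template, then draft a summary. The required fields
# and call_llm() are placeholders; swap in your own template and model client.
REQUIRED_FIELDS = ["incident_id", "severity", "affected_hosts", "timeline", "datasets_pulled"]

def call_llm(prompt: str) -> str:
    raise NotImplementedError("placeholder for your model client")

def draft_report(incident: dict) -> str:
    missing = [f for f in REQUIRED_FIELDS if not incident.get(f)]
    if missing:
        # The coaching behavior described above: flag gaps before drafting.
        return f"Report blocked. Missing fields for review: {', '.join(missing)}"
    prompt = "Write a concise incident summary using only these fields:\n" + \
        "\n".join(f"{k}: {incident[k]}" for k in REQUIRED_FIELDS)
    return call_llm(prompt)

print(draft_report({"incident_id": "IR-1042", "severity": "high"}))
```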

David Moulton: So, you talked about this idea of translation in real time, and I think that's an amazing observation. You have the evidence in front of you. You have the thing that you can act on in front of you, but it's in the wrong language, until it isn't. And that's a huge unlock. It's not mission-impossible style, but it's so practical. I've tried to explain this, and I'm going to try this idea out with you, this idea of practical AI. I have this amazing little phone in my pocket. And when I snap a photo, AI is applied to the photo. I don't have to adjust so much. I don't have to know how to use a camera extremely well. And I look at that as my daily use of artificial intelligence giving me better outcomes. But then, it's doing a thing where it's collecting those photos into memories, and surfacing them in a contextual way a year later, a month later, it doesn't really matter to me, so that I can enjoy them again. Totally different than what you're talking about in the flow that a security analyst would have. But I think it's the kind of thing that all of us can look at: when AI fades into the background, we can start to make choices. "Yes, that's a great memory, I want to keep it. That photo looks so much better than I remember it." And how did we get to that? Well, AI assisted, but you still had the human point and aim the camera. You still have the human sharing the memory. That to me is the promise of AI that I see coming, and it's already here on the pocket computer. And it really doesn't matter which brand you have, right? It's available at scale.

Daniel Kendzior: Yes, I love that, David. I mean, I think when we think of all business or all technology or all security, you know, the point of AI is not to focus on how we get AI to work, right? It's to add value to your life. It's to add efficiency. It's to improve quality. And so, security's no different, right? So, to me this is an area where, you know, there are plenty of use cases to lean into, and those can be small wins. They can be baby tiger projects, to our earlier discussion, or they can be much larger, more significant, right? But I do think, to your point, it will become very transparent, very quickly, in some areas. And that's what keeps me really excited.

David Moulton: Can you lean into that a little bit? Talk to me about how a security analyst might use AI-driven tools to contextualize and prioritize an emerging threat.

Daniel Kendzior: So, we've developed an AI security assistant, right? And I'm going to take you through a little of the history of how this came about. When we were looking to innovate and explore with, you know, things like some of the early ChatGPT models and things of that nature, we were trying to figure out, "What is a particular use case that someone might do on a day-to-day basis?" So, we talked about the report generation. We talked about language translation. We talked about aggregation of data sources already. What we saw early on is we started to view this a little bit too much as a blade of grass at a time. And we realized what our security operations center folks really needed was more of a platform. With a lot of the requests, you'd do one request, and then you have to go work multiple stages of that process. AI might not be the perfect answer for all 12 stages of that, but there's generally some connective tissue that needs to come in, right? And so, taking an event that's handled, enriching it, identifying that it needs to get escalated, but then being able to ask the question, you know, "Who are the relevant people to communicate this to?" Right? So, there might be, let's say, plant operators, if there's someone in a physical manufacturing site that needs to be informed of a situation that's occurring. What we saw with AI, as an example, is being able to weave all of that telemetry into an assistant, and take a SOC analyst through more of a workflow-based approach, where they were using it as a primary toolkit throughout all those different stops, and being able to say, "Okay, great. This is Operator 3 on this line. Why don't we go call them and ask them if they're seeing anything weird going on, because we're pulling in some logs that were very atypical," and having the generative AI system be able to quickly route us to their phone number, understand who their supervisor was, what shift they're on. Those are some of the innovative things that we looked at that were really impactful. And we were able to continue to riff off of that, right? And so, take it further and further down and say, "Okay, great." Now we've actually determined that there is a true incident and we need to start driving a response and remediation -- you know, how can we help enable that as well? And so, some of that can be things like creating software templates where we can do clean installs. Some of that can be, you know, further driving out communications within the organization as well, right? Creating draft emails that are going to go out and alert individuals. So, you know, what we saw very quickly is that while we started to look at everything independently, the more that we could pull it together into this workbench-style AI assistant approach, the more value it really gave to our analysts. And so, that's the path we're continuing to go down, and we're excited about, you know, being able to scale that out even further and connect it to other disparate groups within security, right? So, how do you start connecting it into what the identity folks are doing and what the applications teams are doing from a support perspective?
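
A toy version of that enrich-escalate-route workflow might look like the following; the operator directory, shift table, and scoring rule are all invented stand-ins for the real plant systems an assistant would query:

```python
# Minimal sketch of the workflow idea above: enrich an event, decide escalation,
# and look up whom to contact. The directory, shift keys, and threshold are
# hypothetical stand-ins for whatever systems an assistant would query.
OPERATOR_DIRECTORY = {
    ("line-3", "night"): {"name": "Operator 3", "phone": "x4471", "supervisor": "J. Alvarez"},
}

def enrich(event: dict) -> dict:
    event["score"] = 90 if event.get("atypical_logs") else 20  # toy scoring rule
    return event

def route(event: dict) -> str:
    if event["score"] < 80:
        return "No escalation; continue monitoring."
    contact = OPERATOR_DIRECTORY.get((event["line"], event["shift"]))
    if contact is None:
        return "Escalate to SOC lead; no on-site contact found."
    return (f"Call {contact['name']} ({contact['phone']}), "
            f"supervisor {contact['supervisor']}, about atypical logs.")

print(route(enrich({"line": "line-3", "shift": "night", "atypical_logs": True})))
```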

David Moulton: A couple years ago, I had a chance to talk to Jamil Farshchi about artificial intelligence. This was right at the cusp of it really blowing up. And one of the predictions he made was that on the security analyst, the operator side, the UX was going to be the unlock. That was going to be the thing that allowed people to make faster decisions, better decisions. And I hadn't fully grokked what that would look like until just this moment. I need to get ahold of somebody in a specific area, and there's a friction point of looking up their supervisor, their phone number, and getting ahold of them quickly. But if I can hand that off to an AI, or have an AI anticipate that, that friction goes away, that UX goes up. And I think that that's a really interesting space to look at: not necessarily what is going on with the data that's coming in on an event and/or a threat actor and how do we correlate that together, but how do we make the connective moments in security seamless and fade into the background so that you never come out of flow? You are always in that ability to move ahead to protect, to shut something down, or to make the decision, "No, this is not real. Don't need to do anything. Don't interrupt the business." So, you get the outcome that you're looking for much, much quicker. Daniel, let's shift gears a little bit. Can you talk to me about how AI can provide real-time guidance and recommendations to maybe less experienced professionals?

Daniel Kendzior: Yes, AI has this great ability to review the inputs that you're providing to it, right? What are the prompts? What are the datasets? But it's also able to take the history of folks that have come before you, and take other data from within the organization through RAG models and other types of constructs like that. And so, you know, within any field, but I think a security operations center is a particularly great example, right, there's a very pronounced methodology of pulling people in as Level 1 analysts, right, with minimal experience in the field, limited experience with your organization, letting them operate within that for a year or two, and then getting promoted to Level 2, Level 3, etcetera. Right? You know, AI is able to provide some of that real-time mentorship, where it can pull out and help suggest things around quality of work, potentially some of the data that's being aggregated together, how that analyst is contextualizing it, and then provide feedback about the efficiency of some of those processes as well, right? So, showing, from when the original alert or the original ticket came in, how long did that analyst have it? What were some of the checks that they went through? Here's the standard SOP of the nine steps that we would look for. You know, and it took you 45 minutes to do so, right? And that's exceeding the long-term average of your peers. You know, all those types of things are something that the analyst can interact with an AI system on, wherever they're at, whenever they're working on it, right? And so, obviously that's not going to replace the human managerial and mentorship element, which is super critical, but particularly with folks working remotely and increasing different types of shift coverage and things like that, you know, the AI can really be a companion and take someone that's, you know, relatively new -- and we've seen it really help provide significant upskilling in a couple months.
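
That SOP-and-timing feedback loop can be sketched in a few lines; the nine-step SOP and the 30-minute peer average below are hypothetical numbers, not real benchmarks:

```python
# Minimal sketch of the real-time coaching described above: compare an analyst's
# handling of a ticket against a standard SOP and a peer-time baseline. The
# nine-step SOP and 30-minute peer average are hypothetical numbers.
SOP_STEPS = {f"step_{i}" for i in range(1, 10)}  # placeholder for the "nine steps"
PEER_AVG_MINUTES = 30

def coach(steps_done: set, minutes: float) -> list[str]:
    feedback = []
    missing = sorted(SOP_STEPS - steps_done)
    if missing:
        feedback.append(f"Skipped SOP steps: {', '.join(missing)}")
    if minutes > PEER_AVG_MINUTES:
        feedback.append(f"Took {minutes:.0f} min vs. peer average {PEER_AVG_MINUTES} min.")
    return feedback or ["On pace and complete; nice work."]

print(coach({"step_1", "step_2", "step_4"}, minutes=45))
```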

David Moulton: So, Daniel, two quick follow-ups. First, what is a RAG model?

Daniel Kendzior: So, a RAG model is -- we'll just start with the acronym RAG: Retrieval Augmented Generation, right? The concept of a RAG model is that you have a large language model, or some other type of AI model, and it's been trained on certain sets of data. It's been prompt engineered in a certain way to interact with a person. But a lot of data that it potentially needs to reference is outside of the actual model itself. And so, what RAG allows the model to do is have reference pointers to other types of data repositories. So, as an example, it could be, "Hey, we have our cyber policies stored in this ServiceNow application over there," or it could be things like Workday data from a human perspective. And it can go and point to those pieces of data and help add augmentation to the response. So, not only is it taking what the end user is prompting, but it's then able to pull in additional pieces that it wasn't previously trained on.
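
A stripped-down illustration of that retrieve-then-augment loop follows; the word-overlap retriever and call_llm() placeholder stand in for a production vector store and model client:

```python
# Minimal sketch of Retrieval Augmented Generation as described above: retrieve
# the most relevant policy snippets, then prepend them to the prompt. The toy
# word-overlap retriever and call_llm() stand in for a real vector store/model.
DOCUMENTS = [
    "Cyber policy: credentials must rotate every 90 days.",
    "HR policy: shift handover notes are due within one hour.",
    "Cyber policy: report suspected phishing to the SOC mailbox.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    q = set(query.lower().split())
    scored = sorted(DOCUMENTS, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def call_llm(prompt: str) -> str:
    raise NotImplementedError("placeholder for your model client")

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    return call_llm(f"Answer using only this context:\n{context}\n\nQuestion: {query}")

print(retrieve("how often do credentials rotate"))
```

Because the model is steered toward retrieved source text instead of free recall, this pattern is also commonly cited as a way to reduce hallucinations, which is the idea David picks up later in the conversation.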

David Moulton: And then you talked about this idea of upleveling or upskilling a less experienced cybersecurity professional. I wonder if you see a potential for folks that haven't traditionally been a part of the cybersecurity field entering the field, because they're able to pair their unique experience in mechanical engineering or the arts with an AI-focused assistant that allows their particular way of thinking and unraveling a problem to be applied to cybersecurity issues?

Daniel Kendzior: Yes, absolutely. Just, you know, taking a step back from a broader industry perspective, I'm a huge proponent of diversity of thought, diversity of background. And I really have seen in my own experience, time and time again, that some of the best cyber defenders I know have very nontraditional backgrounds, right? So, folks that are deep in languages, folks that are deep in, you know, applied math, etcetera. To your point, David, I think what security really needs is people with curiosity, a willingness to learn, and a willingness to contextualize what you see from a risk perspective across a broader business. I think that's an area, particularly in the traditional enterprise space, where security defenders sometimes struggle. We're really good at looking at data. We're really good at being able to identify where we believe threat actors are and what risks they pose to a business. But being able to offset that against business risk can sometimes be a challenge for someone that's, you know, really, really deep in security. And so, AI can provide a lot of the telemetry. It can help provide context. It can help aggregate data and summarize it and things of that nature, but then we really need that human to use all those other pieces of judgement, business savvy, communication skills, etcetera. And this is just a tremendous opportunity to take people that have some of those natural gifts, or some of that training or history or experience, and apply it in an area where we desperately need more people.

David Moulton: So, let's shift away from the security professional for a second and look at employees, because security is a team sport. We say that a lot. And I think that the way that it's applied is a lot of awareness. You don't know that you're causing issues until you're trained. In what ways could AI enhance security awareness training programs to make them maybe more engaging, more effective?

Daniel Kendzior: So, we've seen AI make training and awareness hyper-contextualized, right? So, making it very, very specific, not only to, let's say, that organization and the business unit, but even to the individual. When folks can understand cyber risk and really apply it to their day-to-day, it becomes very meaningful. I think, just due to a general lack of resources, historically we've tried to typecast personas within an enterprise. Okay, "You're an executive," or "You're a developer," or "You're a frontline worker," etcetera, but the reality is there's a huge gradient of jobs. And particularly as organizations are being reinvented, the specificity of what people are doing on a day-to-day basis is very different. And so, letting generative AI create that content, the tone, the language, the images, etcetera, is super valuable from an external marketing perspective, and incredibly valuable from an internal training and awareness perspective as well.
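
One way to picture that move from fixed personas to individual context is to build the training prompt from a person's actual role and tools; every field below is an illustrative assumption, and call_llm() is a placeholder for a real model client:

```python
# Minimal sketch of hyper-contextualized awareness content: build the training
# prompt from an individual's actual role and tools rather than a fixed persona.
# The employee fields and call_llm() are illustrative placeholders.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("placeholder for your model client")

def training_prompt(employee: dict) -> str:
    return (
        f"Write a 3-question phishing-awareness module for {employee['role']} "
        f"who uses {', '.join(employee['tools'])} daily, in a {employee['tone']} tone, "
        f"referencing risks seen in {employee['business_unit']}."
    )

print(training_prompt({
    "role": "a plant maintenance planner",
    "tools": ["SAP", "shared email inbox"],
    "tone": "plain, practical",
    "business_unit": "industrial operations",
}))
```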

David Moulton: Daniel, when you talked about this idea of typecasting or personas, and how limiting they are, it reminds me of this story I recently came across about cockpit design in the early days of the jet engine. At the time, everything was built for the average soldier, or the average pilot. And across ten key dimensions, it turned out that a very, very small number of people had even one or two measurements that were average. And that caused a problem, right? You couldn't fly a plane and have a fast enough reaction time if the joystick or the seat was in the wrong spot. Being aware of the limits of averages, being aware of the personas' limitations -- I think you're onto something here, where security training isn't just for the podcast host. It's for this podcast host and my particular weaknesses or interests. And I love the idea that that could scale and become extremely effective at getting me to, let's just say, not click that, or whatever sorts of things I'm not supposed to do, or I am supposed to do. We'll see how that all turns out, but I think we can look to lessons from history, like this story about jet planes, as something that helps us accelerate in this digital space.

Daniel Kendzior: And I think we learn from the threat actors, too, right? I mean, you look at the rise of deepfakes, you know, folks using the likeness of celebrities, executives, etcetera, to create a sense of urgency or to compel someone to do something. Those are real risks that enterprises need to focus on right now, but also opportunities, right, where you can use some of these types of super-immersive, personalized experiences to really have individuals understand what to be prepared for and how to respond on a daily basis. And when we look at things like augmented reality and the metaverse, you know, deepfakes I think are just the next frontier of what that can look like.

David Moulton: So, let's lean into this a little bit further. I think you're right. You can lean into what threat actors are doing and how they're using deepfakes and some of these technologies that are both amazing but also a bit creepy. What is it that an org would have to do to collect the data to personalize that education without crossing the line where it becomes a privacy risk?

Daniel Kendzior: Yes, it's a great question, David. There are definitely some real steps, right? So, starting at the beginning, you know, I feel very strongly about this concept of responsible use of AI, right? You have to have your own principles as an organization, what you care about, etcetera. You need to make the determination on where you want to sit from a deepfake perspective. And that's at an enterprise level. But then you obviously have to look at the individual level. And so, that individual needs to give willing consent to the use of their likeness, for a specific duration, right? It shouldn't be an infinite use of it. And then also for very specific purposes. And then you want to be very thoughtful about how that content is stored, disseminated, etcetera. Frequently what we see is that the Chief Information Security Officer is sometimes the individual who's most interested in being deepfaked for these types of campaigns. It can be a very immersive experience, but we're seeing more and more folks want to use it in different ways, and I think that the technology is obviously there. It's really around making sure that everyone's thoughtful and responsible about, you know, how they're engaging with their peers as well as other folks in the organization, and knowing that that consent can also be removed at some point, right? Things might change, or the technology might change a situation, etcetera. And so, how do you have that lifecycle management around it? "Okay, great. We're going to use it for this period of time, for this intent. I'm good with that." And that might change, and what do we do because of it?
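
That purpose-scoped, time-boxed, revocable consent can be modeled as a small record; the field names below are illustrative, not drawn from any real governance tool:

```python
# Minimal sketch of likeness-consent lifecycle management as described above:
# consent is scoped to a purpose, time-boxed, and revocable. Field names are
# illustrative, not from any real governance product.
from dataclasses import dataclass
from datetime import date

@dataclass
class LikenessConsent:
    person: str
    purpose: str            # e.g., "internal deepfake awareness campaign"
    expires: date
    revoked: bool = False

    def usable_for(self, purpose: str, on: date) -> bool:
        return (not self.revoked) and purpose == self.purpose and on <= self.expires

consent = LikenessConsent("CISO", "internal deepfake awareness campaign", date(2025, 3, 31))
print(consent.usable_for("internal deepfake awareness campaign", date(2024, 10, 1)))  # True
consent.revoked = True   # consent can be withdrawn at any point
print(consent.usable_for("internal deepfake awareness campaign", date(2024, 10, 1)))  # False
```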

David Moulton: So, you talked about the idea of using deepfakes to run training and also finding a balance with privacy. And it makes me go back to where we started, with the conversation I had with Ryan Barger not too long ago, about using deepfakes and AI to automate some of the red team work on the offensive security side. If we can go back to that, what role do you think AI is going to play in some of the scenario planning, the red team exercises, those things that help security teams better prepare for future threats?

Daniel Kendzior: What I've seen it be used for so far is a lot of early preparedness, right? So, being able to pull intel about individuals, business events, the competitive landscape, etcetera. Being able to create hyper-localized content, right? So, if you're doing a smishing campaign or spear-phishing or a deepfake or things like that, obviously you can make it really tailored. You know, we've also seen some progressive organizations really start to look at things like digital twins: how can we spin up digital twins of environments and leverage that as a bit of a testing ground or proving ground before we actually go and run a red team operation? Those are some of the initial ones. And again, that parallels a lot of what we see threat actors doing as well.

David Moulton: So, Daniel, this has been a really fascinating conversation. I've gotten my geek on, and I've learned a couple of new things. When you were talking about the RAG model earlier, I think you were getting at the idea that it helps maybe reduce hallucinations, or maybe I heard that wrong, but that's my new thing for the day. But don't take it from me. What is the most important thing that a listener should take away from today's conversation?

Daniel Kendzior: I think they should take away that, again, security orgs really can and should lead and innovate in this space. You should identify areas where you can help drive better risk reduction, but also enablement of your business. And do so by defining what those guardrails should look like, so the rest of the business can then follow in your footsteps in a very secure and responsible manner. Let's lead from the front and really help our businesses adopt wherever we can. [ Music ]

David Moulton: Daniel, thanks for the great conversation today. I appreciate you sharing your insights on AI and how it's impacted and influenced so many aspects of cybersecurity. The potential wins that you spoke about are truly incredible and exciting.

Daniel Kendzior: Thank you, David. Yes, it was a great opportunity and I'm really passionate about the topic and appreciate you having me on the pod.

David Moulton: That's it for today. If you like what you heard, please subscribe wherever you listen, and leave us a review on Apple Podcasts or Spotify. Your reviews and feedback really do help us understand what you want to hear about. I want to thank our executive producer, Michael Heller, and our content and production teams, which include Kenne Miller, Joe Bettencourt, and Virginia Tran. Elliott Peltzman edits and mixes our audio. We'll be back next week. Until then, stay secure, stay vigilant. Goodbye for now. [ Music ]