Data Security Decoded
Ep 45 | 2.17.26

The Real Risks of Agentic AI in the Enterprise

Transcript

Camille Stewart Gloster: What organizations have to be careful about is how they keep people in the loop, because that contextual knowledge, that human judgment, is going to be really important. The AI systems and eventually AI agents as you deploy them can be helpful in doing some load reduction for your analysts so that you can get more out of them. You should always think about AI systems as an augmentation to your team, not a replacement, particularly in security.

Caleb Tolin: Hello, and welcome to another episode of "Data Security Decoded". I'm your host, Caleb Tolin, and if this is your first time joining us, welcome to the party. I'd love it if you'd take a moment now to subscribe to the show so you don't miss any future episodes. And if you're a returning subscriber, thanks for spending some more time with us. Drop a comment below, give us a rating wherever you're listening. This really helps me understand what you want to hear more about and helps us reach more listeners like you. Now, in today's episode, I am joined by Camille Stewart Gloster, CEO of CAS Strategies, an advisory firm at the intersection of AI, cybersecurity, digital trust, and geopolitical risk. She also serves as a fractional chief AI officer and CISO for select clients, and advises companies on secure, responsible deployment, and enterprise resilience. Camille previously served as the first Deputy National Cyber Director of Enterprise Technology and Ecosystem Security at the White House, leading national efforts on AI security, quantum readiness, supply chain security, and workforce resilience. We talked about the relationship between AI and EDR, ethical AI, and mitigating threats from agentic AI. You notice the theme here? We're talking all about AI. So if you're eager to learn more about AI in practice in the enterprise, this one's for you. Let's get into it.

Camille, welcome to "Data Security Decoded". I'm really excited to have you on, but I'm going to start with a question that I ask all of our guests, which is: what is something that is not related to cyber that you are obsessed with lately? For me, people may, you know, have an interesting reaction to this, but I am completely obsessed with instant coffee. I'm not going to say it's better than, like, drip coffee or espresso, but it's just so easy. I mean, you can control the strength of it, you can make great iced coffee super easily, and, you know, I'm just unashamed in my obsession with it, and I'm curious what you're obsessed with lately.

Camille Stewart Gloster: Well, first, thank you for having me. I'm excited to be here. I cannot relate to the instant coffee thing. I drink my coffee black, and so it's got to taste really good. But I am obsessed with trash TV right now. Like, I want the more mindless, the better. There's so much going on in the world. We're all working on really heavy things. Cybersecurity can be all-consuming. So I want things that don't reflect my real life, and don't make me feel feelings other than joy.

Caleb Tolin: What's your TV obsession lately? What's the latest thing you've been binging?

Camille Stewart Gloster: My husband and I right now are watching "His & Hers". It's not trash TV, but it's this, like, limited series on Netflix that's really interesting, and somebody killed somebody and I'm going to figure out who.

Caleb Tolin: Very interesting. I love all the, like, the real estate ones, like "Selling the OC" and "Selling Sunset". Those are kind of my like --

Camille Stewart Gloster: Yes, I'm watching those too on the side. My husband just won't watch them with me, but yeah.

Caleb Tolin: I get it. I get it. Guilty pleasures, but I don't even really feel guilty about it because I'm just --

Camille Stewart Gloster: Not at all. I feel no guilt.

Caleb Tolin: Awesome. Well, now everybody has some new TV recs, but let's dive into the meat of the conversation here. So I'd love to start with the US National Cyber Security Strategy. At the time we're recording this, the strategy hasn't been released yet, but US National Cyber Director Sean Cairncross has teased out some of the themes that we expect to see in the strategy, some of which are partnering with private industry to enhance the nation's cyber posture and, interestingly enough, increasing cyber offensive operations when adversaries attack US critical infrastructure. I'd love to get your take on those themes and specifically on that cyber offensive piece. Based on your experience in cyber policy and national security, do you think that's an escalation, or is this something that's kind of long overdue?

Camille Stewart Gloster: So, you know, I'm really eager to see the cyber strategy. There are a lot of really good things that were mentioned. For the offense piece, that remains to be seen. Cyber offense, particularly in response to attacks against critical infrastructure, is not going to be brand new. I'm assuming he means scale and severity and all of those things. There are a lot of tools in the toolkit. We don't always choose a cyber offensive attack as a response to attacks on critical infrastructure. There are a whole host of other things you could do. And so I'll be interested to see what that says. I mean, the strategy is supposed to be around five pages, so I don't know that we'll get that meat in there, but all the subsequent EOs and policy implementation that they have promised us will, I think, give us that detail. And I'll look forward to seeing that. I think the best way to address these attacks on critical infrastructure is not to limit ourselves just to cyber offensive attacks. We should really use all of the tools in the toolkit, because sometimes that's not the best way to elicit the response that we desire.

Caleb Tolin: Anything else you'd like to see as a part of the cybersecurity strategy? Anything else that we haven't already heard about that, based on your experience, you'd like to see included?

Camille Stewart Gloster: Yeah, one of the pieces is the budget. There are a lot of interesting things in there. Where's the money to support that? There's been a shift from federal leadership on cybersecurity to a lot of state leadership. And so I'm interested to see how they fund and support that and how they keep it coordinated, particularly since the proliferation of AI tools and AI systems means that we have a software quality problem that will exacerbate the cybersecurity issues that we are worried about. And with this bifurcation, wanting to handle AI policy at the federal level and cybersecurity at the state level, there's a little bit of a delta there. So I hope that's addressed. Love the public-private collaboration piece. We always need that, particularly in this space where the private sector is really leaning in on innovation and driving where we are on AI innovation. And that will mean that they definitely need to partner with the federal government to promote security, to align innovation to our values and our goals as a nation, and to promote national security. So I'll be looking for some granular detail on what that actually means beyond an investment in innovation. And I do actually hope it has that too, right? We have seen a pullback in a lot of research and development dollars, and R&D money focused specifically on the cybersecurity threats that we stand to see as our technology continues to evolve will be huge. There was mention of cyber workforce. I think that is great. I mean, I was the champion of the cyber workforce strategy that is currently out, and I want to see what that means. Vocational training, yes. Where's the money for all of this? How are you championing this? Are you using the private sector? Are you only focused on the federal cyber workforce? Are we thinking about this at whole scale?
Because back to that software quality problem, that's going to be a real national security issue for all of us and a real quality-of-life issue for all of us. And so without the proactive investment and pushing organizations to really think about secure-by-design software, we're going to find ourselves in a problem.

Caleb Tolin: Right. Right. Funding and workforce are topics we've talked about a lot. I know just in the news, especially the funding piece has been such an ongoing issue, and there's so much volatility. So having some stabilization in both of those areas will be really important in the long term for sure. Now, I'd love to hear a little bit more about what you're currently working on, which from our previous conversations is a lot of AI security and strategies for enterprises, and talk a little bit about identities as well. So with your experience shaping AI security programs at companies like CrowdStrike and advising enterprises globally, how do you see identity-based attacks evolving to bypass traditional EDR tools? And what are the biggest gaps that organizations need to address today in terms of those areas?

Camille Stewart Gloster: Yeah, I mean, traditional EDR tools are looking for malware, they're looking for, you know, abnormal movement, but identity is the attack surface now, right? They are latching on to the opportunity to leverage your credentials, your tokens, your passwords to access a system. According to a recent industry report, non-human identities outnumber human identities by roughly 82 to 1. That's insane. And if that's the case, you're not only worried about the identity of the people within your organization and how you train them. You need to be worried about whether those are AI agents, APIs, or IoT devices. You have to be worried about how you're locking those things down, because they have become the most attacked surface within your environment. And one of the things that organizations need to do is really basic. We've been talking about it for a long time, but less than 50% of organizations have implemented it, and that's multifactor authentication. That is a huge part of building some resilience in your organization. But that said, if you leverage OAuth and someone gets a token after you have used multifactor authentication, they can still move within your organization. That means that you really should think about some conditional access policies. Not everything should use OAuth to allow you to authenticate into a number of different applications. Some of your SaaS apps have too much sensitive information for you to be able to authenticate into your Gmail and then authenticate into something that houses really sensitive customer information or organizational information.
The other thing that organizations really need to start thinking about is with the proliferation of AI systems, whether they are sanctioned by the organization or you've got a bit of a problem because you've got some shadow AI, that is an aggregation of really valuable information that has been put into one place, so probably pulled from a bunch of different data sources, whether that's the brains of your team or, you know, from different tools about customers, about internal processes, pulled together in aggregate, synthesized, and operationalized. So that becomes a high-value target. I don't even need to move laterally within your organization as much as I was inclined to do in the past. If I can find that AI system that has not been secured properly or you're using OAuth to authenticate, I've got a treasure trove of information without moving around your organization. So, there's a lot to be thought about in terms of identity, particularly as organizations seek to deploy agents and give them a lot of autonomy.
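[Editor's note: a minimal Python sketch of the conditional access pattern Camille describes, where a federated OAuth SSO session is accepted for low-sensitivity apps but high-sensitivity apps demand a fresh, MFA-backed authentication. The app names, sensitivity tiers, and `Session` fields are illustrative assumptions, not any particular vendor's API.]

```python
from dataclasses import dataclass

# Hypothetical app sensitivity tiers -- illustrative names only.
SENSITIVITY = {"email": "low", "wiki": "low", "crm": "high", "hr-records": "high"}

@dataclass
class Session:
    user: str
    auth_method: str   # e.g. "oauth_sso", "password_mfa", "hardware_key"
    mfa_verified: bool

def allow_access(session: Session, app: str) -> bool:
    """Conditional access: low-sensitivity apps accept a federated SSO
    session, but high-sensitivity apps require step-up, MFA-backed
    authentication, so a stolen SSO token alone cannot reach them."""
    tier = SENSITIVITY.get(app, "high")  # unknown apps default to strict
    if tier == "low":
        return True
    return session.mfa_verified and session.auth_method != "oauth_sso"
```

[Editor's note: under a policy like this, a token stolen after the initial MFA login still cannot open the sensitive SaaS apps Camille mentions, because those require a separate, non-SSO authentication.]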

Caleb Tolin: Absolutely. I do want to circle back to AI agents in a little bit, too. But something that I'm really excited to do is we actually have some questions from listeners that are related to AI and cybersecurity. And I would love to pick your brain on these and kind of get your response to them. So the first one that we got was, How can AI enhance threat detection and response?

Camille Stewart Gloster: Oh, yeah. I mean, we've been long using anomaly detection and leveraging AI and machine learning in cybersecurity to help calm down the noise of all the indicators that you get. It can also be a first line of defense in terms of teeing up response actions. What organizations have to be careful about is how they keep people in the loop, because that contextual knowledge, that human judgment is going to be really important. But leveraging AI systems to incorporate intelligence, to really wade through that and get some synthesis on kind of the movements that are happening, to inform the choices that you make, weeding through all the log data and some of the indicators that come up. All of those things are really helpful mechanisms that AI systems, and eventually AI agents as you deploy them, can be helpful in doing some load reduction for your analysts so that you can get more out of them. You should always think about AI systems as an augmentation to your team, not a replacement, particularly in security.
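[Editor's note: a minimal sketch, in Python, of the analyst load reduction Camille describes: score incoming alerts against a historical baseline, auto-close routine noise, and queue only the anomalies for human judgment, keeping a person in the loop. The z-score threshold and alert fields are illustrative assumptions.]

```python
import statistics

def triage_alerts(alerts, history, z_threshold=3.0):
    """Split alerts into a human-review queue and auto-closed noise.

    Each alert's event volume is compared against the historical
    baseline; only statistical outliers reach an analyst, so humans
    spend their time on the unusual cases."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    queue, closed = [], []
    for alert in alerts:
        z = (alert["events_per_min"] - mean) / stdev
        (queue if z >= z_threshold else closed).append({**alert, "z": round(z, 2)})
    return queue, closed
```

[Editor's note: in practice this baseline would come from the organization's own telemetry, and anything queued would still be a recommendation for an analyst, not an automatic response action.]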

Caleb Tolin: Right, absolutely. And we recently published an episode with Amit Malik, who is from Rubrik Zero Labs, on how they developed a system for LLMs to analyze malware in code and kind of, like, just expedite the work that that team's already doing. And so they can focus more of their time on some of that high-impact analysis as well and what you do with the analysis afterwards, too.

Camille Stewart Gloster: That's a great example.

Caleb Tolin: Yeah, absolutely. All right. Our next one is, What are the ethical implications of using AI and security practices?

Camille Stewart Gloster: I mean, I think it kind of ties to what I just said. Thinking that you can completely replace your security team with a suite of AI agents is not only a bad move from an effectiveness perspective, but there are ethical pitfalls. Because depending on how the system is trained, there are going to be indications that are biased or lead you down a wrong path. We've seen a lot of that, not with AI specifically, but it will manifest itself there in insider risk, right? The data says that, or some of the data says that, this attribute about a person tends to pop up when they are a potential insider risk. And if your agent or AI system overpivots on that one characteristic and doesn't leverage the contextual information that is inherent in the people who usually kind of comb through that, you could set yourself up for legal risk and for other risk, or to miss things. So there are a lot of considerations both from an ethical perspective and from a strategic and just effectiveness perspective. The other thing I would say is people like to talk about AI ethics as if it's a completely separate discipline from thinking about AI security. AI ethics and thinking about the content that comes out, the data that goes in, how bias moves through a system, how it's trained, all the things that tend to be rolled up in AI ethics are actually extremely important to AI security and securing an organization. So what I would caution folks to do is not separate those two disciplines, because you will have to do AI ethics well to do AI security well. And security tools in general are the way that you get your AI systems to act and behave in the manner that you want them to, in the ways that you've promised your users and your customers.

Caleb Tolin: That's so true. Absolutely. And you've already kind of touched on this one a little bit, but just to lay it out more specifically, how can organizations effectively implement these technologies without compromising data integrity?

Camille Stewart Gloster: The biggest mistake that I see organizations make is thinking that you can just shove AI into an existing workflow or process. It is not a plug-and-play technology. You get what I think is truly a luxury: to redesign a process around the capability that lives within the AI system. Think about where you are and where you'd like to be and how an AI system can help you get there. AI is not going to be the best tool for every process, for every outcome you seek. There are a number where it can reduce load, add value, completely take a piece off of the plate of an analyst or an organization. But you really have to understand the context, and that's where people come into play. You have to pull together a cross-functional team that can really think about the threat, not just from a security perspective, but from a user perspective, and how this impacts your workers, and how this impacts your bottom line. Organizations that don't think about governance as a part of deploying AI systems find themselves with a number of different exposures that they didn't account for. So if I had two pieces of advice, it would be AI governance, really investing in a cross-functional operation that hones the intellect and skills and expertise of the organization you have, and not thinking about security in the ways that you've done before. Trust and safety as a discipline has long been left to big tech companies and social platforms to think about content and, you know, behavior. But behavior and content and all of those considerations and concerns, and the policy rigor and the operational rigor that are within trust and safety teams, now must be part of how you think about holistic security. So broaden your aperture on that as well.

Caleb Tolin: Absolutely. Yeah, governance and policy implementation for AI is, like, such a critical part, rather than just shipping something into production and, you know, hoping it works.

Camille Stewart Gloster: Hoping for the best.

Caleb Tolin: Hoping for the best, yeah.

Camille Stewart Gloster: Start small and grow. Pick a pilot project, really invest in understanding how it impacts your organization, and then go from there.

Caleb Tolin: Yeah. Absolutely. Absolutely. So, I want to hop back a little bit to talking about EDR. And you mentioned this earlier too a little bit and you just touched on it, but I'd love to get a deeper perspective from you on this as well. So, many organizations are heavily relying on automated detection through EDR platforms. But from your perspective, how should they balance that automation with the human-led threat intelligence to spot attacks?

Camille Stewart Gloster: Oh, I love this. So I actually have been coaching a lot of organizations on rethinking what threat modeling looks like. It involves that cross-functional perspective, but in the past, the threat modeling component of your organization, maybe a subset of your threat intel team, maybe a separate organization, maybe one person who just gets creative, has kind of been relegated to thinking about short-term new threats, not the big, audacious ones. But the time from where we are now to some manipulation that we did not expect, or emergent behavior from an AI system, is much shorter. And so I encourage organizations to really invest in that threat modeling capability as part of their threat intelligence. The other thing is to really extract a lot of data from your systems about how they're operating. You should have a lot of observability across your organization and be using that to build a continuous learning system for your detection and response. You should not have static detection criteria, unless that's necessary for a legacy system or something like that. But your detection should be adaptive. It should adapt to the changing intel you're getting. It should be adapting to the research that's coming out really rapidly on how AI systems are changing the way adversaries move, are changing the way organizations operate and how systems are able to access different information. You know, back to the identity piece, one thing I've seen a lot is when you provision access for an agent and forget that you provisioned access in a number of different ways when it was people, and you could kind of rely on the fact that they wouldn't go back to access that system, even that physical location. Agents don't have that same context, aren't as inclined to not go back, and they can move through your organization in ways that you didn't anticipate.
So, creating a monitoring capability that really looks for those deviations even beyond some of the normal anomaly detection is going to be really important. That continuous learning piece is huge.
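[Editor's note: one way to sketch the deviation-focused monitoring described above is a per-identity access baseline: learn what each agent or non-human identity normally touches, then flag any first-time access for review. A real deployment would feed this from observability data; the class and resource names here are illustrative.]

```python
from collections import defaultdict

class AgentAccessMonitor:
    """Track which resources each non-human identity normally touches,
    and surface deviations instead of relying only on static rules."""

    def __init__(self):
        self.baseline = defaultdict(set)

    def learn(self, agent: str, resource: str) -> None:
        """Record a reviewed, approved access as part of the baseline."""
        self.baseline[agent].add(resource)

    def is_deviation(self, agent: str, resource: str) -> bool:
        """True if this identity has never been seen touching this resource."""
        return resource not in self.baseline[agent]
```

[Editor's note: deviations are flagged rather than silently absorbed into the baseline, so a human review decides whether new agent behavior is legitimate before it becomes "normal", which is the continuous learning loop Camille describes.]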

Caleb Tolin: Absolutely. And you know, we like to keep things pretty vendor-agnostic here too, but it would behoove me to mention Rubrik, the company that sponsors this podcast. Also, we've talked about governance. That's definitely one of the huge pillars of the Rubrik Agent Cloud Platform. We've talked a lot about that and its importance, but a really interesting feature that the platform has as well is this agent rewind, or remediation capability too. Which, like, to your point, governance and observability of these, you know, your agents and what they're doing is so important, but then also having that kind of kill switch and that moment to say, like, or that capability to rewind an agent's mistakes so that they go back to a state before they were doing this incorrect action is really a game changer for many organizations, too. So that's, I have to throw that in as a quick shoutout as well. Because it's a really cool capability.

Camille Stewart Gloster: And I'm glad you said that too, because it's not even just rewind before they took that action, but making sure that you can recreate that data, making sure, you know, there's so many layers to that, but the ability to undo a mistake is huge.

Caleb Tolin: Yeah, absolutely. And since we're on the topic of AI agents, you know, looking ahead, what are the top three strategic adjustments that organizations should make to protect against identity-based attacks and mitigate internal and external threats from agentic AI?

Camille Stewart Gloster: I mean, let's start with the one we started with at the beginning. Please implement MFA at least. Some conditional access policies that kind of segment how you use OAuth to allow people to authenticate across apps within your environment. Please be using zero trust if you're not already. And make sure you really think about your agents or your AI systems not as normal software, but as identities or employees that will be making decisions and moving through your organization in ways that you do and don't anticipate. Because they're doing the things you ask, but also, as emergent behavior pops up or poorly coded rules or bad data influence how they move, you could have what amounts to an insider risk that you didn't account for. So really hone in on what identities you are provisioning access to. I talked to an organization this week that decided anything granted by an agent gets shut down 24 hours later. So if you need access to a physical space, if you need access to a tool, that's fine. The agent can use whatever criteria to approve that, but it will be temporary. And anything that needs to be approved for a longer term, you will have to get done by a human. They can tee up the recommendation, but they won't be able to do it. And guardrails like that really help you get a handle on your organization and prevent this kind of unauthorized access and lateral movement tied to identity.
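[Editor's note: the 24-hour guardrail Camille relays can be sketched as a grant store where agent-approved access always carries an expiry, while only human-approved grants can be standing. The class and field names are illustrative, not a description of that organization's actual system.]

```python
from datetime import datetime, timedelta

AGENT_TTL = timedelta(hours=24)  # agent-granted access auto-expires after a day

class AccessGrants:
    """Agents may approve access, but only temporarily; standing access
    requires a human approver."""

    def __init__(self):
        # (identity, resource) -> expiry datetime, or None for standing access
        self._grants = {}

    def grant(self, identity, resource, approved_by_agent, now=None):
        now = now or datetime.now()
        expiry = now + AGENT_TTL if approved_by_agent else None
        self._grants[(identity, resource)] = expiry

    def has_access(self, identity, resource, now=None):
        if (identity, resource) not in self._grants:
            return False
        expiry = self._grants[(identity, resource)]
        return expiry is None or (now or datetime.now()) < expiry
```

[Editor's note: the design choice mirrors the conversation: an agent can unblock someone fast, but anything long-lived forces a human decision, which caps the blast radius of a misbehaving or compromised agent.]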

Caleb Tolin: Right. What I really appreciate about your response here is that we're talking about really, you know, new and innovative technologies. We're talking about identity, security, and resilience and AI agents and all of these. I don't want to say that they're, especially with AI agents, I understand it is a newer topic. Identity isn't as much of a new topic. But what you're saying is it's really about some of the fundamentals. You brought it up several times now. It's, like, MFA is still so critically important and many organizations are not implementing that, for various different reasons, of course. But it's really, a lot of times it's going back to the basics. MFA, governance, observability, they all still matter so much today. So, I really appreciate that you're hitting that home regularly. Well, Camille, thank you so much for your time. Is there anything else you'd like to leave with the folks listening in today?

Camille Stewart Gloster: You know, I hope that organizations do play around with these new capabilities and this new functionality. The goal is not to limit your ability to deploy AI agents across your environment or to leverage an AI system to streamline a workflow or a process. But just take the time to build the governance apparatus around it and to really make the necessary investments up front, like cleaning up your data. Your data really matters. Those things help mitigate the potential for security risk later on. And so that would be my caution to folks. The place where I see the most mistakes, in large organizations and small organizations alike, is when you rush to deploy an AI system because it looks cool, but you haven't done the work to think about what value it's actually adding, how you organize your organization around it, and what guardrails you need to put in place.

Caleb Tolin: Absolutely, absolutely. Great sentiment to end it on. Camille, thank you again so much. I really appreciate the time.

Camille Stewart Gloster: Thank you.

Caleb Tolin: That's a wrap on today's episode of "Data Security Decoded". If you like what you heard today, please subscribe wherever you listen, and leave us a review on Apple Podcasts or Spotify. Your feedback really helps me understand what you want to hear more about. And if you want to reach out to me about the show, email me directly at data-security-decoded@n2k.com. Thank you to Rubrik for sponsoring this podcast. The team at N2K includes Senior Producer Alice Carruth and executive producer Jennifer Eiben, content strategy by Ma'ayan Plaut, sound design by Elliott Peltzman, audio mixing by Elliott Peltzman and Tre Hester, video production support by Brigitte Criqui-Wild and Sarelle Joppy. Until next time, stay resilient.