
The Kill Switch for AI Agents
David Moulton: Welcome to "Threat Vector," the Palo Alto Networks podcast, where we discuss pressing cybersecurity threats and resilience and uncover insights into the latest industry trends. I'm your host, David Moulton, Senior Director of Thought Leadership for Unit 42.
Carey Frey: We are at this inflection point where we have a choice in front of us. We can choose to try and fix some of the legacy identity infrastructure that AI will crumble on if we don't fix it. And we can also try and ensure that as agentic AI develops, that we're putting forth the best practices in identity that we know about and that in many ways we have made the choice not to implement in our organizations for the past 20 years. And it'll provide a much more secure future for the Internet and AI and humanity. But we can absolutely choose not to do that and reap the consequences as well. [ Music ]
David Moulton: Today, I'm speaking with Carey Frey, Chief Security Officer at TELUS. Carey has helped shape practitioner guidance on identity in the age of AI and leads security at one of North America's largest telecommunications companies. Today we're going to talk about identity for AI agents, why solving identity first is essential to safely deploying Agentic AI, how cloud scale complexity amplifies risk, and what leaders can do now to build trustworthy, auditable AI operations. [ Music ] Carey, welcome to "Threat Vector." I'm really excited to have you here this morning.
Carey Frey: Thank you. It's a pleasure to be here.
David Moulton: I want to get into your background a little bit. I'm curious, what drew you to cybersecurity and how did that lead to your current role as the Chief Security Officer at TELUS?
Carey Frey: Yes, that's a -- that's a very interesting question, and it's a bit of an interesting answer. I was fortunate enough to be a co-op student when I was in university in computer science. And I got hired into a work term at the Communications Security Establishment, which is the National Cryptologic Agency here in Canada, counterpart agency to the NSA in the Five Eyes. And in my first job, I was working on basically installing web servers and setting up web server technology. And one of you know, my customers came to me and said, "You know, all of this information in the web server is available to everybody." This was, you know, on an internal network, not on the Internet. And he said, "We have different kinds of information here, some of which everyone can see, some of which, you know, a subset can see, and some of which only a few people can see. We need that kind of functionality." And so, today we have all sorts of different names for that in the security community. Least privilege and all that kind of thing. But I said, "Yes, the Mosaic web server doesn't do this." And he said, "Well, we will -- we will need that level of security functionality if we're to use this technology in this domain." And so, thus began my journey on developing security models and security overlays onto commercially produced technologies for us to use in that community. And it led me into all sorts of different work, both in the security industry in North America, and you know, the security -- the security community, national security and intelligence community within government. And after two decades of, you know, working in that commercial security standards and cyber defense and you know, third -- third party supply chain risk, many different aspects of it, after working with the telecommunications industry here in Canada, I had the opportunity to become the Chief Security Officer of TELUS. And that is where I've spent the past ten years of my career. 
And at TELUS, I manage our internal security programs as well as support the day-two operations of our managed security services provider.
David Moulton: So, you mentioned that you worked in government intelligence and in communication. What experiences from your past have most shaped your view on identity as the foundation for secure AI?
Carey Frey: One of the things that I learned was that there was a huge debate in the organization over who was the source of truth. So, the HR department argued that they were the source of truth because they hired people and paid them. And the finance department argued that they were the source of truth because they were actually the ones that paid people. And I went to meetings for three years where they fought with each other like cats and dogs over that issue. But I could never resolve it because they couldn't agree on whether I should take the data from the HR system or the finance system. So, I had to build a hybrid. And I secretly argued that, you know, my identity system was a source of truth because that's what all of the systems in the network were using to decide what you got. So, in some ways, it didn't even matter. But not having a single source of truth was a great part of it. The -- the second was that, you know, it's only your access that matters. So, you could work at the organization and be paid. If you didn't have access to anything, then you couldn't do your job, right? So, the power was secretly in IT. But you know, we just -- we just didn't lord it over people. And then I started to learn all about how people engaged with their identity and with others' identities, which -- which was to say, you know, they quite often challenged what they thought should be going on in the system, what privileges that they had. And in many cases, people made attempts to circumvent or avoid those guardrails in order to, you know, have less friction in their job. And by that I mean, you know, granting -- granting ourselves local access permissions and super user permissions on computers and desktop computers and laptops and all manner of that type of thing.
David Moulton: So, one of the consistent things I've heard in talking to CISOs, whether it was on this podcast or in a previous role where I was designing software, was identity was the third rail, the last thing any security leader wanted to touch. They'd fix the -- the SIEM. They'd fix the firewalls. They'd work on policy, GRC, it didn't matter, anything, just as long as touching identity was the last thing they did before they got fired. Because that was the, you know, there were six different identity systems, and everyone wanted to keep theirs and none of them worked quite right. And yet, here we are at a point where our hand is forced. I -- I don't think that we can go into this age of AI without understanding how to map identity to agents. Are we getting to the point where our hand is forced? And we're going to have to, as organizations, agree on how this works, and if we don't, what are some of the consequences that you see coming?
Carey Frey: So, our hand isn't forced and I'll explain why, but there are definitely consequences. And if we start on the consequences piece, the cyber threat actors out there have laid the table for us because over 80% of the breaches that are happening around the world now are happening because of identities that are being compromised. So, we've seen this whole shift in cybersecurity from the deployment of malicious software and the methodology that went along with that, and ransomware and etcetera, you name it, to -- to really, you know, most -- most of the threat actor groups abandoning that and focusing just on compromised identities and living off the land attacks, meaning you can -- you can log in and conduct attacks with legitimate identities. Organizations don't realize it's not you logging in. That's an -- it's an imposter. And it's really easy to exfiltrate information and it's a cheaper attack. So, they're smart -- they're smart business people and they go, you know, "Why worry about developing new sources of malware and reverse engineering technologies and finding zero day vulnerabilities and all of those very expensive activities when we can easily steal employee identities, login as them, find their elevated privileges and you know, we can have the same outcome?" So, that is now, you know, I mean, in -- in terms of discussing identity as the third rail, that was before any of this was an effective threat. Now, it's the effective threat combined with identity being the third rail. And so, if you're going to build this new foundation of Agentic AI on top of that, you can just imagine or foretell, you know, the great success that the threat actor community is going to have if it gets control of, you know, what we call "agents with agency," meaning that they have all these permissions and abilities to do things. And therefore, we are at this inflection point where we have a choice in front of us. 
We can choose to try and fix some of the legacy identity infrastructure that AI will crumble on if we don't fix it. And we can also try and ensure that as Agentic AI develops, that we're putting forth the best practices in identity that we know about, and that in many ways, we have made the choice not to implement in our organizations for the past 20 years, and it'll provide a much more secure future for the Internet and AI and humanity. But we can absolutely choose not to do that and reap the consequences as well.
David Moulton: So, when you were collaborating on your practitioner guidance, right, that you've published out on -- on a couple of sites I've seen, you shared the links with me, Medium comes to mind. And then the -- the SINET piece. What problems were you trying to solve for the security leaders? And I'm curious, what did you learn as you were collaborating on writing this piece?
Carey Frey: Yes. So, in terms of the SINET community, just to give a little bit of the background there. So, that's a -- that's a -- a network of CISOs and innovators that I've been a part of for going on 20 years now. And -- and at a -- and at a private meeting that we had a year and a half ago, we talked about, "What's the number one problem in cybersecurity that our community sees?" And we all agreed with a consensus that it was identity. And we agreed to formulate a working group to talk about, you know, what we should -- what we should do about it. And in parallel, Generative AI was evolving into Agentic AI. And so, those two things very much came together in terms of that process. And -- and so, here are a number of things that we learned. One was we looked into the traditional information security and cybersecurity community for guidance on identity and found that it just didn't exist. And -- and for example, what -- what I mean by that is, you know, if you go and look for the equivalent of, you know, a NIST -- the NIST framework in cybersecurity, but you look for that in identity, or you look for a maturity model and ask, you know, like, "What can I baseline my company's identity practices against to determine how mature I am?" Those types of things simply did not exist in the -- in the form that we needed them. So, we -- we knew right away what we wanted to write about, which was to develop that type of guidance and to develop a maturity model so that companies and organizations facing this challenge could easily agree, yes, we have this challenge, but how do we know what good looks like? And so, we set out to, you know, develop what good looks like. And -- and many of those discussions that the working group had, you know, we talked about that conflict in organizations between the single source of truth. We talked about the legacy technologies and the turf wars over workforce and consumer identity.
And part of the conclusion that we reached is, you know, the CISO is not going to change that. Not even the CEO is going to change that, right? There's -- there's been too much investment. There's too much operational capability. We're not going to go and start transforming and revolutionizing those systems. So, we needed to come up with a framework that said, "We're going to accept that those things are there. You know, organizations have been around for a long time, are going to have a lot of, you know, identity pedigree and technical debt and things that they can't change. And even new organizations that are cloud native or, you know, very, very Greenfield in their IT are going to struggle with this, but even to a lesser degree. So, let's take identity above all that and, you know, let -- let's use -- let's create a data plane for identity that artificial intelligence could work on top of to give CISOs visibility, to train models that, you know, agents will be able to eventually see the vulnerabilities and to perform the actions that we need to, you know, take people away who don't work there anymore, to find highly privileged people who have access to the wrong systems and to -- to take that away. Essentially, the learning is that humans are not going to keep up with this at scale, and therefore, we also need to use AI to do this. And lastly, with agents, we saw that there's a -- there's a great possibility of them to not have an identity that would allow us to track what they do and to control them, and that we need to put that kind of identity foundation or root of trust into agents so that we'll have the same kind of security capabilities that we've had for computers and information and, you know, all the systems that we know and -- and love today as we get these in the future.
David Moulton: I want to shift gears a little bit and talk to you about identity as a starting point. You know, a lot of teams, they talk about securing AI by hardening models or -- or putting up more guardrails. Why do you argue that we -- we must solve identity before anything else when deploying AI agents?
Carey Frey: So -- so, two thoughts. We definitely need to continue our security work on Generative AI. So, there's still lots to be done on LLMs to make sure that they're, you know, as good or as ethical or integral as possible and, you know, can't be -- can't be sort of conned into performing evil tasks. So, there's a lot of work that needs to continue in that space. But the idea with agents and identities specifically is that this agent is spun up and it'll be given agency, so it'll be asked to go off and perform a bunch of actions. It might spawn sub-agents and it's going to do a bunch of things. And later on, if those bunch of things are determined to be a violation of the security policy or a data breach or something like that, we want to -- we want to have logs. We want to have telemetry. We want to know what was happening. And the way that these agents access systems today, there are sort of two choices. You can assign it a separate identity, like a non-human identity. So, we can create an account called Agent One. And Agent One will log into our systems and leave an audit trail under Agent One, or the other thing that organizations are doing is using the human identity. So, we'll say, "We're going to assign this agent to Carey." And then this agent is logging into things, and it says, "Carey did these things." Either way, if these agents aren't attached to anything, right, then they're going to go off and do things either -- either as you. And then, how are we going to know the difference between what Carey the human was doing or what Carey's agent was doing? Because when my security team comes to interview you, you're going to say, "Well no, that wasn't me. That was the agent." And then, how are we going to know the source of truth? And if it's this non-human identity, then like what does Agent One tie back to? Because it was just this ephemeral thing that existed for a period of time.
We'll have lost what, like the activity log around what that agent actually did. So, identity and agents is not just about saying, you know, "Does it -- does it have this ID that you give it?" which is a unique ID that differentiates it from everything else? It's also about its cryptographic root of trust and its authentication, so that it's forced to go through the same checks and balances that we do in our own organization. So, there's a reason that we have zero trust. And if you're going to have a zero trust model for Agentic -- for Agentic AI, it means that these agents will have to prove that they are the legitimate agent with the legitimate identity and access that has been prescribed to it and -- and it's going and doing all these things. The problem is the identity systems that we have today or at least, you know, a year and a half ago, they have no capability to do this. So, you can't go into Active Directory and say, "I would like the console where I put in my agents and my non-human identities." Active Directory doesn't understand the context of AI, nor do our IDPs. Right? So, the Entras and the Oktas and Pings and the other -- the other things that are out there. They don't understand this either. So, the good news is that we have started to see this capability get built into products like CyberArk and other products in the identity ecosystem, because I think the industry has heard us and said, "Yes, we get it. We have to start building this identity management capability for agents." So, that's good that we can get sort of, you know, this -- these Greenfield tools, but these agents are going to work in these large, complex, interdependent enterprise constructs that we have and there's going to be a lot of those systems that don't understand that context. So -- so, what this is about is ensuring that we can bridge this Agentic AI world with the identity world, so that we have that integrity and non-repudiation going forward. [ Music ]
David Moulton: So, a quick follow up here. Are there early indicators that tell you that an organization's identity foundation can't support Agentic AI at scale?
Carey Frey: Yes, yes, absolutely. So, if you consult the maturity model that we developed in the -- in the SINET guidance, there's actually several categories where you can assess your ability to understand your non-human identity landscape. And it's not dissimilar from the NIST framework for cybersecurity in the sense that the kinds of things that you assess yourself against, the controls or the capabilities that you have, are things like inventories, are things like policies and standards, and are things like capabilities and technology and tool sets. So, if your organization that's responsible for managing identity and security is not aware that IT has created hundreds or thousands of AI agents, then that's going to be a problem. Right, because you can't protect what you can't see. So, if you don't have an inventory or, you know, something around that, if you don't have governance, right? If you haven't set up what your rules are going to be, what your policies are going to be, and developers are just deciding on their own whatever they think is the most optimal and efficient idea to implement the functionality that they've been asked to implement, you know, they -- they might get it right or they might not, but you need to have that governance. And then finally, we know a lot of the tooling doesn't have functionality that's specifically designed for this. So, either organizations are going to hack it or force fit it in retroactively into overlays, or they're not going to build it at all. And so, you can go look at those three things right away and you can find out, "Okay, we're off and doing this and we have no governance, you know, rules, process, technology." Right? Or you can see that you have -- you have some of the artifacts of that. And -- and I worked with a lot of organizations. I've talked to -- so, I've talked to companies that are building AI agents. You know, you can see where they are in their thinking. They're not that far along.
You can talk to enterprises who are doing this. TELUS is an enterprise that's doing this. We know where we are. We've done the maturity model. We have work to do to get to the place we want to go. And then I'm going to postulate that 95% of organizations haven't even thought about it. So, I think, you know, that's -- that's just the state of -- the state of Agentic AI and identity.
David Moulton: All right, I want to talk about AI native identity fabric. Can you paint a picture of that AI native identity fabric, what must exist for identity to act as that core trust layer across the -- the human and non-human and AI identities?
Carey Frey: Yes. So, if you could -- if you look at identity systems that we know of today, so your active directories, your LDAPs, your IDPs, multiple different systems, they all have their own sources of information unless you put in a lot of complicated synchronization and automation, right? And that's really the only way these things work. And -- and I'll go back again to source of truth because we have our, you know, we have our HR systems, our Workdays, our SAPs, we have our finance systems. So, all these things have data about -- about people and non-human identities and contingent labor. So, how do we come along to the idea of, you know, an AI fabric for identity? And it's using the same principles, right? So, there's data in all these layers that we can take and we can abstract to level up. So, instead of thinking about it as an east-west flow, let's think about it as a north-south flow in the sense that these data -- identity data repositories and tools are south. And we want to take all the data up into a single control plane. So, you take the data from your HRIS system, from your Workday, about people. You take your data from your contingent labor pool of contractors and third parties. You take your data from your IT system identity repositories. And then, if you have an IGA tool doing -- doing identity governance and authorization, then there's going to be information there about roles and systems and who has access to what. So, if -- if you bring this all up and, you know, put it in a -- put it in a conventional data warehouse or data lake, then we can train models that run on that, that can understand what the big picture looks like. And so, we're very bullish on that kind of model because it's the only thing that can scale, right? So, today it takes humans to run those systems. Someone's got to onboard people into Workday, right? We recruit someone. We verify their identity, I hope. We put them in the system. We start paying them. Then IT takes it.
They create their accounts. They get their laptop, all those things. What we're trying to do is we're trying to get to a place where all that can happen, but where we have agents that can automate a lot of those functions. We want to have this little monitoring kind of function on top of what those agents are doing further down in the system privilege layer. So, the -- the data, the complicated systems, the actions that -- that AI is taking. And -- and that's what we did. So, when we talk about AI native, that's really the place we can get to: a place where, you know, AI is running the identity just as much as AI is running everything else. And that sort of lays out the path of how we can get there. I wouldn't say that, you know, that's a roadmap. I'd say it's a vision. It's an idea that a bunch of us have about how that can take shape. But given the power of the tools that we have, cloud, data, AI, that's -- that's how we think we can use these tools, essentially for good, to combat the cyber threat actors that are out there which will -- which will try to do the opposite against us.
David Moulton: So, a quick recommendation for CISOs who are looking at that fragmented IAM environment and how do you advise them to move forward? So, this like, unified identity and access system that you're talking about.
Carey Frey: So, first I would say use the maturity model in the SINET guide and baseline where you are as an organization because it might tell you, "You have some things to do before you worry about AI." And then, it's going to be different for every company, but it will really give you a sense of, you know, and across numerous different categories, "Are we a one or a five?" And it is a, you know, sort of a self-assessment with easily identified criteria that says, "If our place looks like this, then like, we're a three or we're a four," or what have you. And I think on the basis of that, then you can -- you can decide, "Are there some fundamental things that we need to -- we need to clean up?" But you can baseline your system's capabilities against that. If you are starting from the place of a cloud native IDP, then I think you can pretty easily talk to the vendor community about what's in the roadmap coming for this type of thing. But the game in that vendor space is connectors, right? So, it is who has a connector from everything from the right side, from the HRIS and then into the, you know, the directories and then into the authentication systems? Who's got the connectors that can suck up all the data from those and then give that capability directly to the CISO and to the SOC to say, "Now, you have the visibility. Now, you can see what's going on. Now, you can perform actions at scale." The next step is we've got to go from the data to the actual act of defense, if I can call it that, right? So, from the passive monitoring to the capability to say, "Revoke that privilege. Delete that user. Offboard that person." I think that's the next step. So like, people should understand that I don't think those tool sets are -- are there today, but that's the vision of where we want to go. And then once that capability is built in, then we can train the agents to do that on our behalf. 
When they're following rules or guidelines or seeing things that, you know, they go, "Okay, like that person isn't working here anymore, but I see them accessing stuff, I'm going to terminate them right now." I can't tell you how many times in our own SOC we see a compromised identity and then, you know, we have to go to the very front end of that system, cut off their birthright access and then wait for all that information to synchronize all along the path and hope that we have applications that are connected to that, that, you know, don't -- don't then have their own accounts for that person or that access mechanism is -- is cut off. Right? So, that's what we want to be able to do at machine speed in the -- in the AI world.
David Moulton: When agents act on behalf of people or even on behalf of other agents, what delegated authority should be included to be safe and to be auditable?
Carey Frey: I love that you're assuming there's going to be delegated authority.
David Moulton: I mean --
Carey Frey: So that's -- so that's the first part of your premise. Right? I -- that would be great if there's delegated authority in the sense that, you know, if we think about say how Gmail or Outlook works today, I can go to my EA's identity and I can say, "My EA can read my inbox and/or they can send emails on my behalf and/or they can make calendar appointments." Right? We both remember when those were capabilities that these systems did not have. And a lot of these systems, like especially in a worker or a corporate context, still don't have all those delegated authorities. And so, if we move that construct into the consumer world, that really doesn't exist. Right? I can't go into my consumer Gmail easily and say, you know, go to my -- go to my partner or one of my children and say, you know, "You can have these delegated authorities over my inbox." So -- so number one, we sort of -- I think we sort of need to understand that -- that that construct needs to be there. If I think about my online banking, there is no delegated authority. Right? There is only the capability to sort of give someone access to my account. And if we think about how our big corporations work, you know, you have to call the call center and say, "So and so is able to operate on my behalf." And maybe you have a shared secret like a pin or a common answer to a question or something and --
David Moulton: Right.
Carey Frey: -- and that's the way that it works. So, yes. So, number one, we have to have delegated authorities. And then what I would say to the -- the people who are building this new world is just do the test in your own head to say, "Would I be comfortable, you know, with whatever the -- whatever finger's on the button to say I would trust someone else to do that on my behalf?" If I think about a wire transfer in my bank account, I might be willing to let an agent read invoices from contractors and suppliers that come into my inbox, like say, my landscaper who says, "I mowed the lawn, it's $89." I might be comfortable with the agent automatically paying that, doing an online banking transfer. But I might like to have a threshold or a limit, right? Like anything over $250, I'd like the human in the loop. And so, what -- what we're really talking about is role-based access control, fine grained privileges, really common concepts from the world of cybersecurity, but that are typically the exception rather than the rule. And so, I think if we see platforms thinking about building those in and thinking about what the critical functions are, deleting an email, sending a wire transfer, buying something, there are already AI agents today that you can ask, like, to go on different commerce platforms and automatically buy things for you. Or even like automatically buying stocks. Right? "If the stock hits a certain level, buy a thousand for me." So -- so, all of that is going to be really key. The delegated authority, the fine-grained access model and the human in the loop. And I really hope that as consumers and as enterprises, we prioritize the platforms who are going to give us the control rather than saying, "We have this agent and you either give it 100% agency over everything, or you get no capability at all." I think that would be really unfortunate.
David Moulton: I think that you and I could spend another hour talking about this because it is the intersection of security and I think design to get this right, so that especially at that consumer level, that you've built something that people want to use and don't just throw up their hands and go, "Yes, just turn it all on. I'm -- I'm fine with it," without having really thought about the impact of a decision like that. Before we wrap up, when you think about the future of identity and AI, what excites you the most and what keeps you up at night?
Carey Frey: The thing that excites me the most is actually the scary opportunity that I think AI has presented for identity. Because back to your point about it being the third rail, these problems existed before and when you went to talk to people about it, they would just go like, "Oh my God, like I'm not, you know, that's a toxic pile, my career is over if I go and try and solve that problem." But I think now, we're coming to understand that we don't have a choice. So -- so, there is -- there is at least for the moment, some urgency. You know, there's interest in the -- in the investment community. CISOs everywhere are saying this is the priority. And so, I think when all those things happen, good things will come from it. And I like what I'm hearing from the leaders in the industry, you know, the Anthropics and the Googles and, you know, many other companies that I won't name. But I think -- I think they're hearing the message. I'm starting to see them come out with things that say, "Yes, we understand the gravity of this. We're working on fixing it." The part that I'm concerned about is that we will go too fast and deploy too many things and be in that race to win the market. That's -- that's how it always goes. And so, I don't know who's going to win in the Agentic AI space yet. It's far too premature. Going back to the beginning of our podcast, I think about who the dominant players were when I was rolling out Mosaic and Netscape in the late 1990s. A lot of -- a lot of things changed but you could see -- you could see that same problem about to happen. But you know, by the time SQL injection attacks became a thing, that was really like 15 to 20 years later. So, we have the opportunity to lay some groundwork, so that that doesn't happen to us in the future. We won't get it perfect. We won't get 100%. We all have to accept that.
But I think there was a worst-case scenario which -- which we know that we can avoid if we're all smart and mindful about this.
David Moulton: Carey Frey, thank you for an awesome conversation today. I'd love to have you back on and keep digging into this because I think this is a -- a story and a conversation that will continue to rapidly develop and frankly I think it's going to change quite a bit as we figure out what works and what doesn't as agents become more and more common. I'm hoping that our listeners will go out and read your paper and the work that you and the -- the working group did. Again, we'll have that link there in our Show Notes so that it's easy to get to and thank you. Appreciate the conversation today.
Carey Frey: Thank you. I would be happy to return to talk about identity at any time or any other subject for that matter, but it is my favorite.
David Moulton: Awesome. Maybe next time, down here in Texas when it's warm and we'll send you -- send you home with some barbecue tips.
Carey Frey: That would be perfect. Yes. [ Music ]
David Moulton: That's it for today. If you like what you heard, please subscribe wherever you listen and leave us a review on Apple Podcasts or Spotify. Your reviews and feedback really do help me understand what you want to hear about. Or you can contact me directly about the show. Email me at threatvector@paloaltonetworks.com. I want to thank our executive producer, Michael Heller, our content and production teams, which include Kenne Miller, Joe Bettencourt and Virginia Tran. Original music and mix by Elliott Peltzman. We'll be back next week. Until then, stay secure, stay vigilant. Goodbye for now. [ Music ]

