Navigating the AI Frontier: A Security Perspective with Mike Spisak
Mike Spisak: When I was younger, I did not know what I wanted to do either. And, you know, I was looking at business, looking at computers, but for a long time I was I guess what I would consider an amateur magician. So I would do table magic. I would do street magic. I used portions of that to fund buying books for school. And my father used to say all the time, "Magic will serve you well." And I didn't know what he meant. I thought maybe he wanted me to go to Vegas and be a magician. But what he actually meant was the art of storytelling, improvisation, and taking people on a small journey and experience through magic. And it did. It paid off in dividends. Nowadays when I try to go and talk to customers and peers and colleagues about complex technical topics I fall back on my lessons as a magician. Right? To help people understand and consume and go on that journey with me. [ Music ]
David Moulton: Welcome to "Threat Vector," the Palo Alto Network's podcast where we discuss pressing cybersecurity threats, ways to stay resilient, and uncover the latest industry trends. I'm your host David Moulton, director of thought leadership for Unit 42. [ Music ] In today's episode I'll share a conversation I had with technical managing director Mike Spisak who's responsible for proactive security solutions at Unit 42. Mike is spearheading Unit 42's effort to safeguard AI systems. In our conversation we'll get into how organizations can harness AI to build cutting edge tools and platforms without compromising security and reflect on the lessons learned from the early days of cloud computing. Let's jump right into the conversation. We're going to talk about the work that you're leading here at Palo Alto Networks Unit 42 on protecting AI systems. You know we both lived through the shift of cloud computing. And I look at that as a near term proxy for the sort of the environment that we're in today. Do you see some key similarities between the early days of cloud adoption and the current wave of AI integration?
Mike Spisak: That's a great question and observation. What's interesting about AI and its sort of compare and -- an analogy, if you will, to the adoption of cloud computing. Both were extremely revolutionary and in their own right. And also equally mystifying. Right? Because of that I think, you know, you could look at -- you could look at it from a practical perspective, right. Cloud computing really revolutionized computing access, data storage, making infrastructure scalable and accessible to many. Generative AI or just AI in general is sort of reshaping our interactions, content creation, embedding intelligence into various services, and they're -- the two are closely related because generative AI and AI in general is taking advantage of cloud computing. So yeah. So there are -- they are parallel tracks, but they're leveraging each other in some ways, in many ways. Now what's -- from a security perspective I was talking about the adoption of generative AI and how -- and I was speaking with chief information security officers. Small room. Intimate setting. And we were discussing how many CISOs in the room were finding out about AI projects as they were headed out the door. And one individual spoke up and gave us a quick story and basically said, "Yeah. We're pushing a generative AI app out the door. Security, you're our last gate. We just need you to approve this so we can push to prod." And a couple things I want to observe there. Number one. This was the first time the organization -- the security organization was hearing about this app that had been built. Number two. So that's a problem. Right? Number two. You're our last gate. Right? How often have you heard this? So security's the last gate out before they can go out the door which not only are they hearing about it for the first time. They're the last box to check before --
David Moulton: Yeah. The last hurdle that's in the way of us being able to, you know, have something in market that we've been really pushing for and now can you hurry up and just please check the box and let us go do our thing.
Mike Spisak: Right. And David you nailed the third -- you nailed the third element there which is you're in our way. In other words, the connotation to it was you're in our way. You're our gate. You're what's blocking us from being, you know. And when people are talking about adopting AI they associate it with acceleration, you know, competitive advantage, productivity. Right? So security are you going to stop us from being, you know, productive and accelerating and achieving market share? Right? So that whole negative energy around security being the office of no is exacerbated by making it the last box to check, the last hurdle to hop, before sending an app out the door. So and when I pause and you reflect this was very reminiscent of cloud, right, when people were trying to adopt cloud computing. Security was last in, you know -- in the line up to, you know, check the box and so on. So --
David Moulton: For a second I felt like you were telling me a cloud security story, but that's maybe it sets up the lessons from the last technological shift have not necessarily sunk in or some of the culture has not changed. Maybe not everywhere. Is that accurate? Is that what you're seeing or is this kind of a case by case some have learned, some are still running a little risky?
Mike Spisak: I am seeing a hybrid. Some have learned. And are doing what we would call shift left. Right? So they're pulling security left. They recognize that, you know -- that security needs to be a part of this. And one of the reasons why I think that's happening is thanks largely to the media. Right? People are seeing the chinks in the armor or seeing flaws with AI options early. Then maybe they would have -- in other, you know -- because all of a sudden it's posted on social media. It's posted in the news that this AI think is acting strange of you know. So there's been a lot of some mainstream highly visible stories around AI. So some are trying to shift left, pull security left, and make sure that there is a sort of cooperation of security during the build process or the adoption. The other is also true. I am finding that we still continue to build in silos. And that's why you know when you hear things like, "You're the last gate, security before we go out the door," you know that -- and in some cases -- again I've spoken with clients that, you know, they don't hear about these apps being built until it's too late. Now there's always going to be some level of shadow IT, shadow AI, people are building. Right? But when you have sponsored projects that just, you know, they have to have -- and we've been saying this forever, David. Right? You've got to have security embedded and build it secure by design so that things can go out the door together. And part of what we focus on at unit 42 is you know bringing that thread intel, bringing the years of experience for incident response, and then helping our customers assess and understand their adoption. The both consumption of AI as well as the build and integrate of AI into applications. And do it in a way that helps them go out the door quickly, but safely.
David Moulton: Yeah. And Mike when you're talking about this I think in the aperture of a team, a development team that's putting something together, it probably does feel much faster to not bring in other teams, not bring in other gates, so to speak. But if you pull back just a little bit the speed of getting something out when you include that news article, when you include that breach or that malicious attack, right, like those sorts of things that are going to be out there, then your speed to market actually ends up being a speed to risk. And that's where I think you need that proactive point of view. You need to be bringing in your security team earlier such that you're going, "You know what? That one's probably not one that should go out as is." I was recently talking to Noelle Russell [assumed spelling] and she talked about this idea of building AI applications, voice applications, at a variety of different companies. One of the analogies Noelle came up with was this idea of adopting a baby tiger. And at first they seem cute. They seem cuddly. You're not too worried about the fact that they have big paws. They have claws. They have fangs. You're not worried about what are you going to do with those things that are dangerous. But as those AI models grow up their danger becomes a lot more apparent. And I think that as we see this AI enthusiasm I'd wonder if you have thoughts on some of the potential pitfalls or dangers that organizations are overlooking as they rush to adopt AI technology.
Mike Spisak: So I love that analogy too, by the way. And I may have to borrow it. But and you smile a little when you think about, you know, baby tigers.
David Moulton: Right. They're cute.
Mike Spisak: They're cute. Right? And then my goodness. When they get older. And you still see, you know, they can still be well behaved pets, but you know -- but what's interesting about that analogy is it's the training over time. Right? Of an animal. Right? As it grows up. And it's laughable, but almost quite true for especially for generative AI models that will change and alter their behavior over time based on what it's been trained on and what it's been trained to do. Some of these interesting -- you asked a question about around some of these threats. Now what's a little dangerous about this is depending upon how you manifest an AI application, right, it could very much at the surface look like a web app or a mobile app or an API just like anything else. And you may treat it just like from a cybersecurity perspective you may treat it just like you would any other mobile app, web app, or API type of interaction. And that wouldn't necessarily be a wrong thing to do. Right? You would want to monitor it. You would want to ensure these privilege. You would want to you know put a firewall in front of it. You would want to make sure there's encryption at rest and so on. So these are all great things, but leveraging AI does introduce and in particular generative AI which is where all of us are headed to now introduces just sort of this what I'll call an expanded attack surface beyond what classic cybersecurity controls will be able to handle for. For example prompt injection or insecure output handling or model theft or, you know, exposure or sensitive data over sharing or over exposure. And --
David Moulton: So something that seems kind of benign, but when compounded and/or when methodically put into your generative AI chat it actually reveals information that it shouldn't. It gives you answers that are maybe malicious and you can push the model a little further than the guard rails are capable of protecting.
Mike Spisak: That's right. Now what -- so you're right. Those are -- I gave a couple of examples, but to expand on them, prompt injection. Right? The -- this is the -- the act of putting things into -- so a prompt is a question you might pose to a generative AI system and you would get a response to that prompt. So by, you know, malicious actors or even non malicious people, putting things into a prompt and those things could be data over sharing. Right? If it's non malicious. Or it could be something intentionally trying to extract or excise data from an AI system or potentially make that AI system produce erroneous output or harmful output. Now what's -- the other thing I think is interesting to understand and be aware of is that in traditional software engineering if we were to try to -- we'll just take a web app. Let's pretend we had a web app that you could interact with and click buttons and you know and get data out of it. There's a defined set of inputs and outputs. So if I click certain buttons a certain output is expected. And when as an engineer if you're testing for these inputs and outputs it's very defined and finite and you can get your hands around it. You can write test cases. And you could click this button a dozen times and you're always going to get the same dozen answers. And what's different about this new world this we're entering into I think if it's not obvious already is that you can say, okay, I have a prompt and it's natural language and as this prompt comes in I'm checking to make sure output going back out is being handled the right way. But just by changing the syntax of that prompt, right, and just and you can change it to the levels of how humans can change a sentence. Right? By moving words around, by inserting new adjectives, new nouns, new verbs, the structure of the sentence can take on a different meaning. So by simply manipulating these you know -- this level of language that's coming in through a prompt will dramatically change and produce inconsistent output from a generative AI system. And this poses risks that we haven't had to really deal with before, at least at this scale.
David Moulton: So Mike how can we ensure organizations are approaching AI adoption strategically rather than impulsively?
Mike Spisak: So there's a number of ways. And this -- the answer to this can go quite deep, but I'm going to try to keep it somewhat surface so that our listeners can take action with it. Right? It begins with an understanding. So inventory. Right? When I say inventory I mean get a sense of how your organization is either using AI -- and when I say using I mean consuming. Right? Are you consuming AI from third party providers? Are you using AI in your clouds? Do you have engineers developing with AI apps? Just try to get a sense of the inventory and the landscape of what you're dealing with. And then evaluate the controls you have in place for governing those things. And those controls can start as high level as do you have an acceptable use policy that's corporate wide that everyone had to agree to? Right? And can be as low level as do you have, you know, checks in place at the firewall that prevent certain people from over sharing? Do you have data loss prevention policies and controls in place to prevent, you know, usage of some AI that maybe you don't want certain users to use or over share information inbound or outbound. So it can go from, you know, 100,000 foot all the way down to 10 feet and anywhere in between. But inventory, evaluation, and constant assessment of where you are to -- would be a really good place to start on this adoption journey.
David Moulton: And one of the things I've noticed with AI or generative AI is that it is rapidly changing and it is advancing. I can remember some of the initial forays into the chat and I was like this is kind of lame. And then it got remarkably better remarkably quick and it continues, you know, I'd say week by week. But, you know, even within the same day if the prompt changes just a little the answer gets a lot better. So that constant review or looking at the policies and what you've adopted as a company. Maybe there's shadow IT. I know that you're trying to detect quicker. This seems like it's one of those things that you need to have a healthy level of management on and assessment on all the time because it's moving so quick. Is that, you know, generally the culture that you're running into with security teams understanding that?
Mike Spisak: Yes. So that is exactly true. Like I said, we've found that many clients and customers that I've spoken with are adopting AI and in particular generative AI really in either consumption, a build slash integrate, or I would say a hybrid of both. And when you have this world of the hybrid model, right, where you almost certainly have -- yeah. Where you almost certainly have people trying to use generative AI just to -- for productivity. I think there's a study that said -- I think it's over now. It's somewhere around 60% of employees have used generative AI at least once a week for productivity. Right? I'll raise my hand. Yeah. I mean I think it's more than that now, to be honest with you. That study was probably a few months ago. And the rate and pace continues to increase. And you're almost certainly going to have -- because you have a lot of innovative people in the trenches that, "Oh, I have an API. Oh, I know a little Python. Let's start stitching these things together and see if they can make a really neat tool out it." You absolutely have that going on. In fact, when I was giving a talk two weeks ago I asked the room, I said, "How many people are doing something with generative AI?" And pretty much every hand went up. And I said, "Well, how many of you," of you, right, "How many of you are building generative AI apps?" And, by the way, these are cybersecurity professionals. And about half the hands stayed up. And then I -- then I said, "Okay. Of the hands that didn't go up, right, how many of you think that there's probably some generative AI stuff going on at your company that you just don't know about?" And unfortunately a lot of hands went up. Right? So there's still this space of unknown where they know it's being used and they don't know -- and they're struggling to get a handle on the discovery of that. So correct. This cycle. This -- it's not a one and done. This will be a rinse and repeat cycle of discovery, inventory, assess, evaluate, report. And, you know -- and continue.
David Moulton: I think from a line of business standpoint anyone that is working day to day or starting to see AI tools integrated into all the different types of tooling and capabilities that we have to a point where it becomes common and invisible. You think about whether you're writing an email and there's a suggestion or you're writing a paper and you've got, you know, a wordsmithing tool that can help you out. Or, you know, you're looking at something that's helping you arrange a calendar. And all of those can feel like they're benign bits of information, but they're -- you know, if you're using them wrong or you're not aware as an IT team or a security team that that's going on as a plug in, a browser plug in or a tool that somebody's picking up, that data can leak. And your, you know, hand raisers out there might go like, "I'm not going out to Chat GPT. I'm not going out to Gemini. I'm not using Anthropic." And yet you really are. You just don't --
Mike Spisak: You just don't know it.
David Moulton: Yeah. You don't know it because the UI makes it invisible. I remember asking this question several months ago and the answer surprised me. People were like, "I don't use it every single day." And I was like, "Well, do you ever use the camera on your phone?" And sure. And I don't even care what brand it is. Right? Like that's a -- you know, that's an AI tool. And, you know, the question of the time is are you using AI tools in your day to day. And I think a lot of us are and we're not necessarily aware of that inventory that you talked about at an individual level. And then if you scale that up to an enterprise you start to realize that the number of things that AI is touching is incredible. It's a massive, massive attack surface potentially.
Mike Spisak: Yes. And, you know, David, you're on a great point. I love that word invisible and ubiquitous. Right? I think those are two great words, but to describe the space that should make or does make security people very nervous. You know, this -- I'll give you another example. Right? You mentioned a couple already, but browser plug ins. Right? Many user browser plug ins constantly. Don't realize the impact. Now AI aside browser plug ins can be a great thing to help improve and enhance your productivity, but it could also be quite dangerous to a security organization because something that just turns -- oh. I have a plug in that turns from light to dark mode. Right? Well, in order for a browser plug in to turn things from light mode to dark mode it needs to read every element on your screen to do that. Right? And so there's just some inherent risk in allowing a plug in that you didn't author, that you didn't authorize -- well, rather that you didn't author and you may not know the pedigree of it to read everything on your screen in order to flip its coloring. So that's one aspect, but the other is true in a sense that browser plug ins are being embedded more and more with AI. And that AI goes to the cloud and elsewhere. Another one is, you know, you can't even hop on a web conference these days without some form of AI doing transcription, doing summaries, doing all kinds of things like that. And I love your example about the camera. A CISO did ask me. He said, "I can only fit about three to five things in my brain. What are say five ways or areas that I should be -- " You know, a checklist if you will. So, you know, if I were to have to -- if we were riding an elevator together and I had to give you quick five quick things I would say again document your inventory. Right? That's number one. Right? Know it. Number two. I would say know the ins and outs of your AI data. So after that inventory just like we were describing try to come around and understand the plug ins. You know, Grammarly for emails, things like that. Right? Number three. Write them down. Develop a register. And then understand and measure the risks. Hey, we allow AI on these types of web conferencing calls. We don't allow it in this context. Right? We do allow it here. Just start to write it down and form this register. And then once you have it written down number four you catalog all of these AI services and that's where you start to communicate these shall be used. These shall not be used. And number five is then get that universally wide AI policy and educate folks on number one through four. Right? And again this is not a one time exercise. Unfortunately you need to go back to number one and then repeat that whole process. It doesn't need to be as exhaustive the second time around, but you've got to know how to do inventory, cataloging, writing it down, communicating it, and then, you know, educating people on it. And I think that will go a long way in at least, you know, getting lift off.
David Moulton: Mike, as you were talking about the pedigree of a browser plug in it makes me think a lot of times you don't necessarily know where that's coming from. It comes through a -- you know, a site that makes those available to you with little to no investigation needed. You know, you just click it and it installs. But then what's the pedigree of the AI engine that's running between you and the commercial interface? You mentioned Grammarly or any of these others. There's a question in my mind of where does this go and getting into some of those policies could be helpful to understand, you know, what's the acceptable use. What's this training? How much of this data is essentially just rolling open in the clear as it goes out to train another model?
Mike Spisak: Yeah. Yeah. So I'm not saying that any of these are good or bad. Right? In any of the examples we mentioned. Right? However to your point about you know -- about integrating to well known or in some cases open sourced types of AI systems, what's very interesting here is that never before and especially now -- right? I mean it's always been a challenge, but especially now having a handle on what I'll just call supply chain, right -- what is supply chain. You know, the bill of materials. So, you know, what are -- so cool. I have a plug in. Very benign. But like you say, I love your example, like the way you describe it. What are all the things that are happening between my keyboard and the back end systems? This is -- there are tons of open source libraries, maybe proprietary libraries, AI systems, logging libraries. All kinds of things. And it's very invisible to us. But I think especially as AI starts to surface more now it's going to be very important for us to understand, you know, the software bill of materials and the supply chain involved in creating and delivering that thing to us. Right? So we can understand what's going on under the hood.
David Moulton: Back in February I talked with Mike Sikorsky [assumed spelling] about you know his thoughts on generative AI and you know at the time we released a report that vulnerabilities were the biggest IR problem that we'd seen. But his prediction is that phishing was going to come roaring back because of the ability for generative AI to write really good phishing emails. And I think training data that comes from a grammar tool, you know, could give you a very specific idea of what kinds of typos and word patterns a specific individual uses if you wanted to really emulate not just perfect English, but perfect to a target the types of things where either you forget or you add in too many commas or you spell a word a particular way even though it's incorrect. That's just how you go. It could be really a useful set of data to say, "You know, let's have a perfect fish." With, you know -- with a -- you know I'm not going to pick on any brand in particular, but something that's watching you write your emails and helping you. It also knows where you've made consistent mistakes.
Mike Sikorsky: Exactly. They call that using my voice. And I don't mean even just my actual audio voice, but using the voice in email. Right? So you're absolutely correct. If an email system's compromised and an adversary can scan using AI all those emails and understand my voice, right, if you were to get a phished email, a spear phished email from what was presumably me, but not really me, and it was using my quote, unquote, "voice" in that email, it would -- yeah. You would likely be fooled from it because you are familiar with how I write emails or slack messages and you're familiar with my tone and my capitalization and my strange punctuation and so on. Right? So yeah. That I agree is going to be an area of what I'll just call brings a new level of business email compromise that -- of risk that we need to understand and get ahead of. And I'll also say that you know there's a lot of areas where, you know, the -- where we have adversaries, you know, accelerating attacks, scaling attacks, and obviously new attack vectors with AI. I think and it's always been the case cat and mouse. But you know defenders need the same or very similar tools to allow them -- that attackers have to allow them to move just as fast or faster than an adversary. Right? So, you know. And that's -- again that's always true and that's always that's sort of the arms race, if you will, in our industry.
David Moulton: Mike, let me take it back to a question about fostering a culture of responsible AI use. How do you take that enthusiasm that companies have about new AI tools, but then have them balance that with an understanding of the inherent risks and ethical considerations?
Mike Sikorsky: So I think it's a multi part question. So if I had to break it down into a couple of key things, and some of them we covered already, but I'll just sort of reiterate them, the first one would be education. And when I say education it comes in multiple flavors. I think this -- this idea of consumption, right, so this education in the sense of using generative AI, what types of information is safe to put in? What types of information is not safe to put in? From a consumption perspective. The other side of the education would be, you know, workshops from a technical perspective to allow builders and engineers a very similar sense of you know what are we allowed to use. What libraries are safe to use? How much data? Right? What are our objectives? But also understanding what are the new nuances related to software engineering that I need to be aware of that will allow me to effectively process data, right, that goes into or comes out of a generative AI system. So education, training, I think are paramount and are almost always after inventory and discovery are almost always the next step or one of the earlier steps. Guidelines and policies and I mentioned this a few times about just everything from an organizational wide acceptable use policy down to you know we all had to be trained on what's confidential information, what's top secret information, what's proprietary and so on. I think we need to have clear guidelines and policies around, you know, the ins and outs of AI. Just what I'll call classic or narrow AI as well as generative AI. I think extending past that when we get into ethics there should be committees and leadership just promoting and advocating. It's easy to say, hard to do. So without champions and leaders in your organization, and at the organization level and down even at the department level, right, champions and leaders to effectively and transparently communicate these polices and lead by example from an ethical perspective. That in my experience goes a lot further than just the forced march of this is the policy. You shall obey. Right? So I think that will go a lot further, you know having a champion, having leadership, backing at all levels. Right? And that is another thing as well. Not just the lead by example, but also shows that you have an organization that is committed to leaning in to the adoption of this accelerant technology, but at the same time doing it in a way that's I think effective for the business and will allow all of us to you know accelerate and flourish together. So.
David Moulton: Mike, two more questions. One. If we need to come back to it, I get it. But you mentioned this idea of an inventory. What are the three first things that a company needs to do when they're coming up with that inventory?
Mike Sikorsky: Yeah. So I think when it comes to an inventory you might start to think about classifying the inventory. So I've already talked about this is a low level library that might get used as part of an application or this is a third party provider type of chat bot that might go in another bucket. And then there might be one in the middle where it's a little less obvious. For example, this is an enterprise application we're using like a copilot that writes code. Right? But we're using it through this enterprise app. Right? So immediately I see three -- and there might be more, but at the surface there's three big categories of discovery. Right? Third party commercial enterprise and slash third party. Right? And then build slash integrate which are more like libraries, APIs, things of that nature. And finding them takes a -- discovering them takes a little bit of a different tact. For example over on the left if you've got the enterprise and the commercial you might be able to discover that using firewall logs. Right? Or, you know, doing inventory of end point systems, but it might -- as you move more to the right it might become a little harder to do the discovery where you have to start now you know traversing cloud systems to see if libraries are present on end points or looking in code repositories to see if they're making use of certain Python libraries or API calls or things like that. Right? So as you move from left to right the level of skill and effort involved in doing the discovery and the inventory can get a little harder as the AI gets more invisible.
David Moulton: Yep. Well put, Mike. Appreciate you coming on "Threat Vector" today to talk about all things AI whether it's getting into the career in security and understanding where AI can help out or to protect those systems that you're building and to be thoughtful, responsible, ethical in the way that you're deploying and reducing risk as a sort of new thing goes out into the world. It's always a pleasure to talk to you and to learn from you.
Mike Sikorsky: I appreciate that, David. Thank you for having me. Look forward to chatting again soon.
David Moulton: In today's episode we uncovered three critical insights around AI adoption and security. First we highlighted the need for an inventory to classify AI usage. Companies should differentiate between third party enterprise and custom AI solutions to fully understand their security landscape. Third party AI services like Chat Box and plug ins offer quick benefits, but can also be unpredictable. Enterprise level applications may be more stable, but you still need to know their strengths and weaknesses. Lastly with custom tools ensure you're tracking what libraries and APIs are being used. By properly classifying AI usage, security teams can tailor protection measures that align with each type's risks and vulnerabilities. Second. We discussed proactive security measures. Bringing security into the conversation earlier in the development cycle ensures potential risks are addressed before the products hit the market. Bring your security teams into the development process early. Treating them as last gate as often happens leads to rushed security checks and increases in the risk of missed vulnerabilities. Finally we explored the concept of a baby tiger illustrating how AI tools although initially benign can become dangerous over time if not carefully managed. This is a great analogy. AI tools can go from small and seemingly harmless to fully grown and dangerous if their adoption is not properly managed. Responsible AI adoption requires continuous risk assessment and strong policies that balance innovation with safety. A strategic approach to AI means recognizing its evolving landscape and being prepared to adapt swiftly. Whether it's developing comprehensive policies or fostering a culture of responsible AI use, these efforts will empower organizations to harness AI's immense potential without compromising security or ethical considerations. That's it for "Threat Vector" this week. I want to thank the "Threat Vector" team. Michael Heller is our executive producer. Our content team includes Shelia Droski, Tanya Wilkins, and Danny Milrad. I edit the show and Elliott Peltzman mixes the audio. We'll be back in two weeks. Until then stay secure, stay vigilant. Goodbye for now. [ Music ]