Uncovering Hidden Risks 2.21.24
Ep 15 | 2.21.24

Secure Access in the Era of AI


Erica Toelle: Hello, and welcome to "Uncovering Hidden Risks," a new podcast from Microsoft where we explore how organizations can take a holistic approach to data protection and reduce their overall risk. I'm your host, Erica Toelle, Senior Product Marketing Manager on the Microsoft Purview team. And now, let's get into this week's episode. Welcome to another episode of the "Uncovering Hidden Risks" podcast. AI has been a hot topic across all industries. Today we will be talking about security in the era of cloud and AI with the Microsoft security product team. Join us as we delve into how AI impacts cybersecurity with topics including AI-driven security measures, data protection, identity management, and compliance in the cloud. This episode offers valuable insights for professionals interested in the evolving landscape of cloud security and AI's role in shaping its future. Let's start by introducing today's guest who will join us for the discussion. Bailey Bercik is a Senior Product Manager within Microsoft Security's Identity team. She's spoken on identity and security best practices at numerous industry events, including DEF CON, Blue Team Village, SANS, ISC2, Security Congress, Blue Team Con, Authenticate Con, and more. She's also written about minimizing business impact during outages for Identiverse's newsroom and serves on Bates Technical College's Advanced Technology Advisory Committee. Thanks so much for joining us, Bailey.

Bailey Bercik: It's great to be here, Erica.

Erica Toelle: Next, let's introduce our special guest joining us for this discussion. Jef Kazmir is a Principal Product Manager in the Microsoft Security Identity team. With over 25 years in the Enterprise IT industry, 12 of those at Microsoft, Jef focuses on identity governance, modernization in the cloud, and the AI era to help customers secure their organizations. Thanks, Jef, for joining us.

Jef Kazimer: Erica, thanks for having me. I'm always excited to talk security, AI, and, of course, identity governance.

Erica Toelle: Also joining us today is Lisa Huang-North. Lisa is a Senior Product Manager at Microsoft Security, leading the development of identity and access innovations for intraconditional access. She has over 10 years of experience in various fields, including identity, access management, aerospace, strategic consulting, and financial services. Thank you, Lisa, for joining us.

Lisa Huang-North: Yes, thank you, Erica. Really great to be here. I think we have a really exciting topic today talking about the advancement in artificial intelligence and how we've seen an increase in scale and sophistication of cyber techs across the globe. So, I'm excited for our conversation on how security leaders should approach securing their organization in the era of AI.

Erica Toelle: And with that, let's dive into today's topic. Bailey, we're officially in the era of AI now. Maybe a good place to start would be to define some of the common terms we'll be talking about today.

Bailey Bercik: Absolutely. I think it will be good for us all to get on the same page since this is a super hot topic. So, first of all, when we say AI or artificial intelligence, we're talking about any type of simulation of human intelligence by machines. And when we talk about machine learning or ML, we're talking about any type of AI where the computer learns without human interaction so, for example, pattern recognition and analysis. When we talk about generative AI, which I think is one of the hottest topics right now, it's using AI to create or generate new content so this can be text, images, or data. And when we also think about LLMs or large language models, that's when we use natural language processing to process text input and generate text output. And that text input could be referred to as open prompting where I can input just about anything. Closed prompting would be where I'm able to select yes, no, or from a varied list of questions. So, just to make sure we're all on the same page when we talk about these different terms during the podcast today.

Erica Toelle: Perfect. Thanks, Bailey. Now that we've defined some of these key terms, what are common risks that we're seeing at the intersection of security and AI?

Bailey Bercik: So, for those of you all who are familiar with the MITRE Att&ck framework and when we think about those TTPs or those tactics, techniques, and procedures that have been in place for other well-studied attack tactics, MITRE also released the MITRE Atlas which had the top tactics centered around AI. And so, that's a great place to get started if you're looking at what's been widely studied. I'm also going to recommend a couple of other resources that were helpful for folks at our team and friends of mine who are working on AI. So, the first one is going to be a book called "Not with the Bug, with the Sticker", which covers hacking AI systems. And that title actually originates from confusion around self-driving cars where researchers put stickers on stop signs where the car would misread that. And so, it's important to think about how you can actually hack those AI systems, and that book goes in-depth about how that would start out. Now, if you don't have the bandwidth to read an entire book, if you want to read a blog post, which I think very nicely answers your question on what risks organizations should be thinking about, Caleb Sima published this on Medium and it's based off of his presentation that he gave at the Cloud Security Alliance, which he talks about researching 92 different ML attacks and narrowing them down to the three that are perhaps most spoken about, and ones that we'll likely be seeing more of on the horizon.

Lisa Huang-North: So, Bailey, can you tell us more about what are some of the common types of AI-drive cybersecurity attacks?

Bailey Bercik: So, I want to go into those three since those are the top three risks that I've been hearing customers talking about and also as I've been playing around with different AI tools and technologies, I felt they have been the ones that I think that are more commonplace. The first one being prompt injection, which we can think of like SQL injection or cross-site scripting, which is where you would put in a command and get something else. And we can think about this if you've ever played around with ChatGPT or other LLM technologies where I'm asking the AI to do something else or finding a new rule. So, I may be asking that LLM to do something nefarious or that it might not have the permissions to do. But if I say, well, I'm writing a novel where the main character is doing x, y, z, or I want you to pretend that you're a help desk representative that's going to help me get some sort of bit of information. So, we're assigning some sort of new rule or getting fresh information, assigning it something else where as an attacker, I can get something back in response that the defender might not be expecting. Data poisoning is another one that comes up. And so, that's manipulating training data. And so, there have been some instances of unintentional poisoning. So, for example, this was actually in that "Not with a Bug, But with a Sticker" book, and I thought this is so interesting and timely. But it was where recently teenagers accidentally corrected research data on a research participant recruiting platform. There was a social media post that went out encouraging side hustles for students and this actually skewed data because the respondents were over 90% women between the ages of being teenagers to in their early 20s. Which if you're doing research for a laboratory or a paper, that heavily skews to a certain population set. So, a lot of that data was poisoned and burned out. That's an example of this is happening unintentionally. But within your organization, this may look like training with data from untrusted sources. Or it can look like intentionally sending bad data. So, for instance, if we're training a model on public data and I'm purposely sending information that might not be what the original designers had in their mind for it. And the third one that I've heard most commonly in the industry have been data leakage, which is where private or confidential data gets incorporated into that training data. And you can think about it with the previous one I talked about, prompt injection, where then a bad actor will be able to ask for data it shouldn't have access to, potentially leaking some sort of corporate secret.

Lisa Huang-North: Wow, Bailey, thank you for sharing that. It is scary to think about the 92 different risk outlined in Caleb's article. But I think even just the three you mentioned here, especially around data poisoning, is a really good call out for organizations to consider the data sources that their product or operation might be consuming today. So, with that, Jef, I'd love to hear more about how you think generative AI could be used in some of the tech scenarios, for example, phishing and social engineering.

Jef Kazimer: Thanks, Lisa. I think, as Bailey outlined, we see a lot of this growth area in AI to do some really great and powerful things but I also think we'll start seeing it be used in some more traditional ways that are enhancing the attack vectors, phishing and social engineering. Now, we all know that everybody should be using strong authentication or at least MFA these days. Well, one of the challenges with us having single control to prevent inadvertent access is that there's a human aspect to this, right? So, if I have the ability to convince somebody to do an action, they may actually approve that MFA prompt, they may sign into that system. So, how do I drive that person to do what I want? Well, today we have phishing emails, let's say it's coming from a sender, we have social engineering where I can send you a text and start a conversation. But what if I have AI or generative AI that makes it look like it's coming from your manager or what if it's coming from a trusted advisor, or maybe even a family member, right? I'm a little concerned about how easy it will become to use this technology to trick people in our traditional phishing and social engineering ways.

Erica Toelle: Jef, what abilities do customers have to prevent generative AI-based attacks?

Jef Kazimer: Well, if we look at the traditional attacks, whether it's phishing, social engineering, AI may be used as part of the attack, but controls we have today can help mitigate some of these things. So, we can move away from our sort of weaker credentials. We want to start really focusing on deploying phishing-resistant credentials. You may know of these as FIDO2 security keys, but really in the industry, we're moving towards passkeys. And I'm really excited to see that the passkeys are not just an enterprise control. You'll start seeing those in the consumer space. And I really hope this will drive the adoption, right? So, if you're using a passkey for your work environment, you may also be using a passkey for your bank environment or your social media. I think it's going to help drive that industry away from those traditional weaker credentials. But I also want to highlight it's not just the single technology that's going to solve this problem for us, right? AI may exacerbate it, it may make it more likely that there will be more attacks. But we can add additional controls. In the enterprise space, we focus on authentication, that strong authentication mechanism. But we want to have other controls like, not just user identity, we want device identity, we want to use adaptive controls that learn from the environment, from the usage pattern, we call those risk-based controls. And when I look at these attacks in the past where they had a single mitigation control, that control was bypassed and given the attacker access. By having those additional layers, we're now increasing the chances that we can mitigate attacks because we have different layers that the attacker must thwart to gain access. So, it's very critical that we not just focus on one technology but a set of technologies to achieve that more secure outcome. But I just want to highlight something. When we're focusing on AI, we're typically thinking of how it can be used against us or an organization. But I really want to highlight how it can actually benefit an organization. I think in our very cloud-native sprawling environments for organizations, we bring complexity that organizations have to manage. And one of the challenges there is around configuration, how do people learn what is appropriate in their environment? How do they learn the best practices and apply them into an environment? And iterate that as that changes. I think we're going to see things like AI that help augment the skills of admins to guide them on what a configuration is and hopefully a more secure configuration. So, we want to be able to have this more secure environment because we know a lot of the attacks happen today if you do a misconfiguration. And if AI can help guide us and drive a more secure environment, I think that's a big benefit.

Erica Toelle: Thank you, Jef. In the next section, I'd love to learn more about the principles of zero trust and how it relates to this new wave of threats. Lisa, could you start off by telling us a bit about the zero-trust model for people that aren't familiar?

Lisa Huang-North: Yeah, definitely. And that's a great question. So, I think in the past, especially pre-pandemic, companies have always relied on physical parameters such as network firewalls to protect access to their infrastructure, resources like servers, and cloud applications and data. But in today's world of hybrid work, any organization's employee could really be working from anywhere at any time as long as they have internet access. And that's where zero trust come in play. Zero trust is a security model that never trust, always verify. And really what this security model entails is that it requires continuous verification of identity, access, and security posture before granting or maintaining access to data and resources. And I would love to maybe go into the three core principles of the zero-trust model so we can talk about how that may impact AI cybersecurity risk. How does that sound?

Erica Toelle: Perfect. Thanks, Lisa. Let's dive in.

Lisa Huang-North: Yeah. So, with zero trust, there's three core principles that we're all familiar with, that's to verify explicitly, use least privilege access, and assume breach. Now, these principles should also be applied to AI systems as organizations continue to add AI-powered tools to their toolbox. So, first, verify explicitly. With security professionals, as you're configuring your environment, you should always verify the identity and authorization of all users, devices, and workloads that may interact with this AI system. And ensure that you're using secure encryption and off-protocols to ensure data integrity. Second, for using least privilege access, AI systems should be granted access to the minimum amount of data and permissions necessary for their functionality, just as we do currently for cloud apps. So, for example, if your organization is utilizing an AI system or an algorithm that generates personalized recommendations, really this system should only have read-only access to the user's data such as preferences or transaction history. However, it shouldn't have access to unnecessary sensitive data such as your users' credit cards or contact details. Then finally, assume breach. AI systems should be designed and deployed with the assumption that they can be compromised or attacked by bad actors just like any other systems and applications. So, be sure to have robust backup and recovery mechanisms in place and plugged into your organization's existing threat detection and response workflows. Now, I want to call back to something Jef said earlier which is evaluating the mitigation controls you currently have in place. I think it's absolutely essential for organizations to define their AI policies as part of the corporate governance and risk-management process because as AI technologies evolve, organizations will need to regularly review and align with updated standards as well as regulatory requirements.

Erica Toelle: Thank you, Lisa, for talking about how AI relates to the zero-trust framework. Jef, moving to you, what other more sophisticated identity-based attacks are happening?

Jef Kazimer: Well, historically, many organizations like ourself have been focused on securing the authentication. We've put additional controls such as we want stronger authentication, we want those risk-based policies to make sure that it is appropriate for what is being accessed. But we're starting to really see a shift, not just on the securing of the authentication but understanding what is the authorization. So, again, authentication who I am, authorization is what do I have access to. So, when we're looking at access, it's important that not every access is equal, right, we have this concept of least privilege. We want to have the ability to have the lowest possible privilege for the task or what we're accessing. And what that means is we want to manage that each user, whether they are regular users or admins, has the right access for what they're trying to do and for the right amount of time. This can help reduce the attack surface. So, if we do have a compromise, we do have an incident, right, we could reduce that blast radius by reducing the access that identity has. But now we start looking at things like AI, right, because AI brings us information, right? How do we look at the information that it has access to and services to us? So, I think we're going to start seeing the focus, again, not just on the user has access to but also what AI has access to, to bring information to the user or to the organization. So, I see this as an area that we're going to be I guess delving into, not just identity governance for the end users but the AI that the end user may be using. And I think it's critical to understand that when we talk about security, identity governance is security because that is what you have access to, that is the information that the attacker may be trying to get, whether it's through elevated access or moving around in an organization.

Lisa Huang-North: Thanks, Jef. That was really insightful. So, back to Bailey. What should organizations do? What are your recommendations for organizations to prevent more identity attacks more easily?

Bailey Bercik: So, more sophisticated identity-based attacks are coming and more sophisticated AI-based attacks are coming. But I think it's still important to approach this with the defense-in-depth strategy. And being aware that good security basics are still at play here. So, if we think about this like securing your home, for example, this is the equivalent of investing in cameras and motion sensors, and all of these fancy things, yet not locking your front door. So, when we think about approaching and securing for these new advancements that are on the horizon, good best practices still apply. For example, rolling out MFA is still going to be super key. The amount of organizations that I've seen in the industry that don't yet have MFA for their privileged accounts or for all of their users is way too high and we should really be thinking about rolling out MFA to all of the users in our environment. Also, Jef touched on earlier, the principles of least privilege so ensuring that all individuals only have the access that they truly need to do their job and making sure that we're really cutting down on the level of permissions that individuals have to further reduce that blast radius. And if we want to think about this with an AI mindset, also reducing that for applications and the various permissions that they might have to further reduce that blast radius. We also touched on governance earlier. So, if we think about securing your home, this may be thinking about the fact that a former roommate of yours perhaps doesn't need to have a key to your current home if they do not currently live there. So, changing the locks when somebody moves out, for example, just like for users where if somebody no longer works with your organization or if somebody has perhaps changed job title, making sure that we're cleaning up those accounts and ensuring that we truly have principles of least privilege at play here. Another thing that's not super fun to talk about but is really key is patching and making sure our systems are up to date. And so, really at the end of the day what I want to encourage folks to think about is, yes, looking at these more sophisticated attacks that are at play, but also thinking about the basics and ensuring that we really have a defense-in-depth strategy here.

Erica Toelle: Thanks, Bailey. Lisa, are there any recommendations that you have for our audience?

Lisa Huang-North: Yeah, I think, as Bailey mentioned, multifactor auth is really the base layer of security that everyone should have as a non-negotiable start, right? And as AI continues to evolve, I think there will always be new attack patterns, new different security surfaces. So, security teams, as security professionals, we really need to continue upscaling so that we can defend against more sophisticated attacks whether that be token theft or cookie replay. And beyond that, we also need the help of our colleagues and teammates and have all the employees within the organization to stay vigilant to protect themself against things like AI-powered phishing campaigns.

Erica Toelle: Jef, is there anything you would add?

Jef Kazimer: Yes. I want to echo what Bailey and Lisa said. There are things that you can do right now that help increase your security posture for whatever may be coming down the pipeline here. If we're thinking about enabling MFA, do it now, don't wait for perfect, right? The sooner you start on these base scenarios, whether it's defense-in-depth, least privilege, every bit helps move you closer to a more secure posture. So, take action now and we'll adapt as things evolve in the industry.

Erica Toelle: Thank you so much to all of you for joining us today. We have a tradition to close out our podcast. To wrap it up, we'd love to know what is your personal motto or what words do you live by.

Jef Kazimer: ABL, always be learning. We are very fortunate to live in the time where things are growing around us for us to delve into. But the more we learn, the more we can pass on that to others to help them learn.

Bailey Bercik: Mine is that the basics get you far so whether it's security basics and having an understanding of security foundations and fundamentals or an understanding of foundations and fundamentals across any aspects of your life can really get you far. And so, don't let it intimidate you. If you're somebody new to whatever industry that you're in or if you're in security, because having that foundational understanding will be super key in your day.

Lisa Huang-North: And for mine, really the journey is the destination. I believe that there's no mistakes or failures, it's just about the lessons we learn along the way.

Erica Toelle: Well, that is certainly great advice and great words to live by. Thank you, Jef, Bailey, and Lisa, for helping us uncover some hidden risks today.

Jef Kazimer: Thank you.

Bailey Bercik: Thank you for having us.

Lisa Huang-North: Yeah, I learned a lot today. Thank you all.

Erica Toelle: We had a great time uncovering hidden risks with you today. Keep an eye out for our next episode. And don't forget to tweet us at msftsecurity or email us at uhr@microsoft.com. We want to know the topics you'd like to hear on a future episode. Be sure to subscribe to "Uncovering Hidden Risks" on your favorite podcast platform. And you can catch up on past episodes on our website, uncoveringhiddenrisks.com. Until then, remember that opportunity and risk come in pairs, and it's up to you where to focus.