The BlueHat Podcast 9.18.24
Ep 37 | 9.18.24

Guy Arazi on the Art and Science of Variant Hunting

Transcript

Nic Fillingham: Since 2005, BlueHat has been where the security research community and Microsoft come together as peers.

Wendy Zenone: To debate and discuss, share and challenge, celebrate and learn.

Nic Fillingham: On The BlueHat Podcast, join me, Nic Fillingham.

Wendy Zenone: And me, Wendy Zenone, for conversations with researchers, responders and industry leaders, both inside and outside of Microsoft.

Nic Fillingham: Working to secure the planet's technology and create a safer world for all.

Wendy Zenone: And now on with The BlueHat Podcast. [ Music ]

Nic Fillingham: Welcome to The BlueHat Podcast, Guy Arazi. Guy, welcome.

Guy Arazi: Hi, thank you.

Nic Fillingham: You are in the UK in the United Kingdom.

Guy Arazi: Correct.

Nic Fillingham: It's a bit after 8 p.m. First of all, thank you for staying up. Thanks for eating, I assume you've eaten dinner and now you're doing a podcast before bed. Thanks for joining us. Tell us a little bit about yourself. Who are you and what do you do here?

Guy Arazi: My name is Guy Arazi. I currently live in the UK. I have a lovely wife and a daughter. And I've been passionate about security for as long as I can remember, I think since childhood. My journey with Microsoft started around 2018. I was a researcher in the EDR group, basically developing detections for the Windows OS, mostly lateral movement, credential theft, etc., and then I moved to vulnerability research in the GIL team. And now I'm in the MSRC OLS team, the V&M group, dealing with online services, vulnerabilities, and mitigations. And that's it for now.

Nic Fillingham: Can you talk a little bit about sort of day-to-day, week-to-week, month-to-month, in this role that you're in now, is there a particular area that you're focused on? You talk about online services, but are there sort of, what parts of that do you spend most of your time on?

Guy Arazi: So basically, I would define it the other way around. Anything that is online services is not low-level, not binary exploitation. We deal with Azure, anything that is basically web facing. Azure, any online service you can possibly think of that doesn't involve something that low level, we take a look at and we have ownership of it. And on a daily basis, we get reports from external and internal researchers who found bugs that cross the security boundary of a feature or a product, which might lead to compromising Microsoft as a first party, or maybe even our customers as a third party. And we also work out ways to create better mitigations with the engineering groups and the teams that relate to and own the specific features that the bug was reported on, in order to avoid the issue and obviously address it.

Nic Fillingham: Got it. And part of that work, or perhaps the majority of that work, centers around variant hunting. Is that correct?

Guy Arazi: Yeah.

Nic Fillingham: And so we're going to talk about variant hunting today. And a little bit of inside baseball or, you know, behind the curtain. So here in MSRC, we have lots of internal learning and sharing programs where folks are able to talk about the work that they do and present it to a wider audience. And you recently presented internally about variant hunting and some of the work that you and the team have been doing to tackle some of those challenges, which I'd love for you to talk about today on the podcast. I would love to start maybe just with some basics. I feel like people have heard of variant hunting, they probably know what it is, but let's just start again. What does variant hunting mean in this context? What is variant hunting?

Guy Arazi: That's a great question, and I think it's the basis for this whole podcast, so it's essential that we answer it the right way. When a bug is reported to MSRC, the first thing we do is obviously mitigate it: go to the code base, understand what the bug is about, and try to address and avoid it, right? But there are other scenarios where the same pattern, the same bug, let's say, repeats itself in other areas, in other features, in other products at Microsoft. And we need to figure out how we find these types of bugs, these same patterns from the same report that we just mitigated, addressed, and disclosed, and obviously mitigate them before someone else finds them, either an external researcher or, even worse, a malicious actor.

Nic Fillingham: Got it. And so is variant hunting about looking for the exact same sequence? So for example, if you are looking down at the actual source code and you see a line or several lines that have been identified as needing to be changed to mitigate some sort of vulnerability, is it about just doing a search, doing a CodeQL search or a find-and-replace across all of Microsoft's source code for that particular chunk? Or is it, as you said, you used the word pattern, is it more about coming up with a way of identifying what actually is happening in a more abstract sense and then going and looking for places where that happens in other products and services and technologies? Or is it both, or is it something else?

Guy Arazi: So I think it really depends on the case. But I think the first thing we can do is obviously use static scanners to find these patterns, because finding something abstract wouldn't be easy. So first we're going to take care of the low-hanging fruit and tackle those, because we already have those search capabilities, right? The thing is, those static tools are not able to find these scenarios by themselves, because sometimes we have custom code, and obviously patterns that don't fall within the detections or the rules that static analysis provides us. And that's why we create these rules. So in one case, we would probably write a CodeQL or even a Semgrep rule to find the pattern. But on the other hand, we might also brainstorm it and try to understand where else it can be. Like, are we missing something, some middle layer of code that might block our view when it comes to static scanning? And that's why we as a team try to tackle these things together and focus mainly on the question of, what is the root cause? What's happening here that might happen in other projects or features? And then we try to extract those insights and indications and maybe explore other ways of exploiting it, but mainly logically. We're not just doing searches or going code by code; we're thinking it through. And then we can even build our own scanners, we can build our own pipelines to find different things, without even disclosing them right here, right? But we can be creative.
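
To make that concrete, here is a minimal sketch of the kind of in-house scanner Guy alludes to, assuming a hypothetical report where an internal helper called load_config() turned out to be exploitable whenever it was called with verify_signature=False. The function name, parameter, and repository layout are purely illustrative, not a real Microsoft API or MSRC tool; the point is that a report's "pattern" can be turned into a custom search when off-the-shelf rules don't cover it.

```python
# Minimal, illustrative variant scanner: walk a repository's Python files and
# flag every call to a hypothetical helper load_config() that passes
# verify_signature=False -- the pattern from an imagined MSRC report.
import ast
import pathlib

RISKY_FUNCTION = "load_config"                 # hypothetical helper from the report
RISKY_KEYWORD = ("verify_signature", False)    # the argument that made it exploitable

def find_variants(repo_root: str) -> list[str]:
    findings = []
    for path in pathlib.Path(repo_root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except (SyntaxError, UnicodeDecodeError):
            continue  # skip files we cannot parse
        for node in ast.walk(tree):
            if not (isinstance(node, ast.Call) and isinstance(node.func, ast.Name)):
                continue
            if node.func.id != RISKY_FUNCTION:
                continue
            for kw in node.keywords:
                if (kw.arg == RISKY_KEYWORD[0]
                        and isinstance(kw.value, ast.Constant)
                        and kw.value.value == RISKY_KEYWORD[1]):
                    findings.append(f"{path}:{node.lineno}")
    return findings

if __name__ == "__main__":
    for hit in find_variants("."):
        print("possible variant:", hit)
```

A real rule would of course be written in CodeQL or Semgrep and track data flow across files; this sketch only shows the shape of the idea.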

Nic Fillingham: So is that a uniquely human role, you know, to identify the root cause of an issue, to look at the source code, to be able to understand what's happening from a, sort of a pattern and logic perspective, and then stepping back to think about, Well, how could that be applied somewhere else? Is that something that only hands on keyboards and markers on whiteboards and actual humans can do? Or is part of the work that you're doing coming up with ways to leverage tools and automation and AI and stuff to sort of do that work for you?

Guy Arazi: So, most of the time the whiteboard is probably a good pick, but you want to automate and scale stuff, right? Microsoft is not a small company; there are many products and many features and many things hanging around, and the code base is almost endless. And that's why we're trying to use all kinds of insights and indications, and we're trying to build some sort of automated detection that works a bit differently from static analysis. It works more towards proactive scanning against live assets, using the feed of MSRC reports. Essentially, we're trying to tackle, let's say, the most impactful or the most severe scenarios that are reported to MSRC, because obviously we care about them the most, right? We need to prioritize somehow, and I think impact is the key here. There are many verticals and many scenarios, and we could break them down one by one, but the idea is to create some sort of unique detection across many products and many features using the reports that we've seen in MSRC. Because right now, the tools we possess, the tools we maintain in terms of static analysis, don't provide the cutting edge we need for finding other variants. We might be able to find different case scenarios that are pretty generic, but if you're talking about something tailor-made to your company, that doesn't really exist; it's only the things that are pretty straightforward. And when we're speaking about Microsoft, and not just Microsoft but large enterprises, we're talking about complex code, about chained services, and many variations of code implementations. And I think what we are trying to do is add another layer from the insights and the value of the reports that we see on a daily basis.

Nic Fillingham: Got it. So I'll just recap a little bit because I want to make sure I'm understanding. So static analysis is looking through source code for actual hits, results.

Guy Arazi: Yeah.

Nic Fillingham: You're searching, look for this string, look for this chunk and see if you can find it, or something that's very similar to it. So that's static analysis. But then what you're referring to here when we talk about variant hunting is it's stepping back from the code. It's what is the pattern? What is the logic? And then how do we look for those patterns and logic, not necessarily at the source code level, but in how things actually function or interact across multiple products, services, technologies? Did I get that distinction correct?

Guy Arazi: So, yeah, I think you're on the right path. I just think that you can also do variant hunting with static analysis.

Nic Fillingham: Okay.

Guy Arazi: Yeah. So you can do that and people do use it, but in order to actually carry out variant hunting with static analysis, you have to create some complex query that potentially finds the specific variant you just got a report on, right? And it's not always the simplest task, because it's not just about finding a specific string. It's about finding a chain of code chunks that together cause the issue, by their chaining and not by their arbitrary existence on a page. And that's why it's sometimes a bit hard to analyze the code, especially when you're speaking about code bases, right? When you're investigating or researching a code chunk, all the functionality of the code exists there, but behind the scenes there are modules and other components being loaded which you don't get to see. Then you need to understand what other files you need to incorporate into your query in order to find the right chain and confirm that the exploitation is really feasible, that the vulnerability really exists. The other area we lean on, the area where we create detections, is based on the actual service that has an instance in the cloud or out there, and we try to automate the exploitation against that asset. So we're not trying to find all the specific variations in the code; we're trying to see what the final product says and how it matches the report we got in MSRC. So, basically, imagine we've got a report of some exploitation, right? In order to actually leverage that piece of information, we need to ingest it and understand what's wrong. Then we can maybe replicate it and say, yeah, we can create some dynamic detection that does the same thing, just generically against assets on the go. So we wouldn't need to go through all the code chunks and everything else around static analysis. Because another thing that doesn't get talked about is that when you're looking at a piece of code, you're seeing just one side of what the customer sees, right? There are many components in the chain, like proxies and load balancers and web servers, things that might modify, and not might, they probably do modify, at least the HTTP request and the response in transit, as it goes back and forth between the customer and the server. So when we're looking at it from the external side, the dynamic side, we literally communicate with the specific instance that runs the code, the product as it's actually built, and we know nothing is different. It differs from static analysis because there we cannot be 100% sure what the final product looks like. When we communicate directly with a specific instance, we can say: we know what headers are exposed, we know what the request looks like, we know what the response looks like, and we can build something a bit more stable, a bit more concrete, a bit more grounded, without needing to guess a lot. Because if you think about it, and I'll finish this answer with that, when someone submits an MSRC report, the main thing they rely on most of the time, not always, but for live services, is the request and the response that demonstrate the exploitation of the specific service they just reported on.
So we try to mimic that exploitation, that final product they just delivered to us, and move it to other features and products in the same scenario, just a different variation.
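
As a rough illustration of that dynamic approach, the sketch below replays a proof-of-concept request from an imagined report against other live endpoints and checks whether the response matches the exploited behaviour. The hosts, path, payload, and marker string are all placeholders, not real services or MSRC infrastructure, and anything like this should of course only be run against assets you own and are authorized to test.

```python
# Hedged sketch of replaying a report's PoC request against candidate services
# to look for variants of the same issue "from the outside".
import requests

CANDIDATE_HOSTS = [
    "https://service-a.example.com",
    "https://service-b.example.com",
]
POC_PATH = "/api/v1/export"                  # path taken from the hypothetical report
POC_PARAMS = {"template": "../../secrets"}   # payload from the report, illustrative only
VULNERABLE_MARKER = "BEGIN PRIVATE KEY"      # string that indicated successful exploitation

def replay_poc(host: str) -> bool:
    """Replay the PoC request against one host and check for the vulnerable signature."""
    try:
        resp = requests.get(host + POC_PATH, params=POC_PARAMS, timeout=10)
    except requests.RequestException:
        return False
    return resp.status_code == 200 and VULNERABLE_MARKER in resp.text

for host in CANDIDATE_HOSTS:
    if replay_poc(host):
        print(f"{host} looks like a variant of the reported issue")
```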

Nic Fillingham: Got it. Thank you for that explanation. I wanted to ask, just to really understand the size of the problem, or maybe the scope of the potential impact here. So, I'm not going to ask you to come up with a number, but help us understand: when a vulnerability or a discovery is submitted to MSRC, and then the variant hunting begins, how likely is it that variants will be discovered? How likely is it that a researcher finds something and sends it in to MSRC, and that results in variants being found? And I guess that's less a question of what percentage of vulnerabilities are present in more than one thing, and more just: is variant hunting the long tail, or is it actually where the bulk of the impact happens? Because the researcher has pointed us in a direction, they've given us a little bit of cheese or a nugget or something, and then your team and the various other teams inside MSRC use that to go and make an even significantly larger impact by finding the variants and then making some more systemic changes. Does that make sense?

Guy Arazi: So, I don't know if I can quantify it and say what the numbers are or how many times it happens, but I can firmly say that for significant and impactful vulnerabilities, most of the time we find variants, because they're normally involved with other services. And you know how it is: developers implement code they've seen in other code bases, or other practices, which is fine, but those might not be the best practices they could use, and that's where we find the mistakes. You know, when you're trying to implement your own authentication mechanism, or you're trying to create your own class of something that already exists, which is completely fine for testing and other scenarios. But when you're going to production, you normally try to avoid those scenarios. And we mostly do, but in some variations, some components can still be accessed by other users through all kinds of ways, and that's what we're also trying to avoid. You might be looking at some product or feature and logging in with a specific authentication mechanism, right? And you have no idea that a different authentication mechanism exists, because it's not really exposed in the UI. But if someone, even outside Microsoft, found a bug that essentially allowed them to authenticate with an authentication mechanism that wasn't surfaced in an app, and they managed to bypass it even though it didn't exist in the UI, they might try it on Microsoft as well. And maybe the developers forgot to delete that way in, the implementation of some testing feature or a feature that was deprecated or anything like that. And then these vulnerabilities and these variants occur. Because if you think about it, we're human. We try to learn from others' mistakes, but also from others' successes. And that's what we're trying to do as researchers. We're always trying to look and understand: what did someone else do that made them find these vulnerabilities? How did they even find them? Because the story has some weight to it. When you're just surfing the web and trying to find vulnerabilities for the sake of vulnerabilities, you might not always get the impactful vulnerabilities you're looking for. But if you go in as a user and try to understand how the system works, how you actually make it work, without even the security research perspective, just as a normal user, and you get to know the application well, you can potentially find vulnerabilities that others wouldn't find, that automated scanners and static analysis wouldn't find, or even the tools that we've developed in-house. It's just a bit of a level up for researchers. And I think it's very essential that we learn and communicate these bugs and failures and successes with others. Because maybe at Microsoft we found and resolved something, but it can also help one of our customers, or even someone who is not our customer, to address and mitigate something they found entirely by following up on a blog post or something published by Microsoft. And vice versa, by the way.
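
The forgotten or UI-hidden authentication path is a recurring theme here, so a small hedged illustration may help: a login route the UI never shows can still be reachable on the server. The sketch below probes a few hypothetical legacy or testing authentication endpoints to see whether they respond at all; the hosts and paths are made up, and again this is only appropriate against systems you own and are authorized to test.

```python
# Illustrative probe for alternate authentication endpoints that the UI never
# exposes but that may still be deployed (legacy, testing, or deprecated routes).
import requests

HOSTS = ["https://app.example.com"]                              # placeholder
HIDDEN_AUTH_PATHS = ["/auth/legacy", "/auth/basic", "/internal/test-login"]  # hypothetical

for host in HOSTS:
    for path in HIDDEN_AUTH_PATHS:
        try:
            resp = requests.get(host + path, timeout=10, allow_redirects=False)
        except requests.RequestException:
            continue
        # A 401/403 means the route exists but is protected; a 200 or 302 on a
        # path the UI never exposes is worth a closer, authorized look.
        if resp.status_code not in (404, 410):
            print(f"{host}{path} responded with {resp.status_code}")
```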

Nic Fillingham: As I was asking you that question, or as you were answering, I realized that really what I was asking you is, How important is variant hunting and how often do you find stuff? And I think you said, very important, and we find stuff all the time. Is that an accurate sort of summary?

Guy Arazi: Yeah, I said four verys, not three verys.

Nic Fillingham: Oh, you said four, I'm sorry. Very, very, very, very, my bad. So thank you for that refresher and that reframing of variant hunting. Can we now talk about some of the work that you and the team are doing to make variant hunting more efficient and more impactful? And in that sense, what sort of learnings, advice, and guidance do you have, both for researchers out there who are looking for stuff and want to do their own variant hunting, and for folks listening to the podcast who are engineers or are in the response space and need to go and do variant hunting on their own code base and within their own products? So what are some of the things that your team has been doing, what have you learned, and what can we share?

Guy Arazi: Okay, I will try to split the answer into a few sections, but I will start with something that means a lot, especially to me, when I research and assess cases. When you're implementing some code, and I'm talking to developers here, you should always understand that something that is temporary can be "set for life," in quotes. When you're implementing something just for testing and you forget, say, an exposed secret, or you forget to implement some authentication mechanism, it might be quick and easy to run whatever proof of concept you're doing, but it can also mean that you leave those traces, those implementations, out there without knowing that you've left them. So when you're building your code, make sure that everything you're doing is well-documented. Even if you add something that should later be deprecated, add a note in your team notes or even in the code base saying this needs to be removed at some point. Because, you know, when you're going through cycles of development and you need to ship the product and go GA, there is a lot of stress and a lot of pressure, and we all know that. And I think the last thing you need is another chunk of code that might compromise the customers you rely on, the people you want to serve. So the best thing I can say is: pay attention to everything you put in the code. Try not to use shortcuts. Make sure that everything is documented, write everything down, follow best practices. Even if it takes you 10% longer, it's worth it, because the pain you're going to have afterwards, and all the aftermath, is just not worth it for you, and obviously not for the customers you're serving. So, the second part of the question was what is important to us as researchers. I think that when approaching a security vulnerability, there are many aspects that can go wrong. Try to ask yourself the simplest question of them all: what security boundary does it cross? What does it do? And why do I ask that? Because you're now looking at some implementation that does something wrong, right? Something that breaks the developer's assumption; it's not intended to run like that. Now, your way of solving the issue is not about that specific implementation. It's about the vertical, or at least the scenario, that you're trying to avoid. In the same sense, I'm going to go back to the authentication mechanism example. If you stop a developer from implementing authentication mechanism X, it doesn't mean that you don't need to forbid authentication mechanism Z. But if you're just following the naming or the implementation, it's going to be very hard for you to focus on what the real problem is, what the real issue is here. And the real issue is the security boundary that was crossed and breached by an external finder. So try to break it down in terms of what you're trying to solve, what you're trying to mitigate. Can this security vulnerability be mitigated only in code, or can it be mitigated at some middle layer, like a proxy or anything like that, that can prevent these bugs in the future? Because if you think about it, you can address the vulnerability straightaway, right?
You can fix the bug and forget about it. But there are other ways to fix bugs for the long term, even educational ones: knowledge sharing, teaching developers how the security works, how to properly implement specific security mitigations and features. I would say, don't just invest in one area. You can share knowledge with developers, and you can think about ways to mitigate the issue beyond the traditional approach of just sticking to the code and forgetting about it. Try to think about other key players or key roles that you have in the application that might stop it in the future. It can be anything from IAM access to a protected library that you can use instead of an unprotected one and that stops the whole thing. It really depends on what the vulnerability is and how far you want to go to mitigate it, or at least avoid it in the future. At the end of the day, you're setting a target. If you're happy with just resolving it and addressing it for now, that's fine, but it might not stop the other variants, the other mushrooms that appear after the rain in the forest. So don't keep everything damp. Try to control the moisture and you won't see them as much.
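
One concrete way to read "mitigate at a middle layer, not just in one code path" is to enforce a check centrally so that every handler, including ones written later, inherits it. Below is a small sketch of that idea using Flask purely as an illustration; the route prefix, header name, and token check are assumptions, not a recommended authentication scheme or anything from the episode.

```python
# Illustrative middle-layer mitigation: enforce authentication for every /admin/
# route in one place, instead of patching the single handler that forgot a check.
from flask import Flask, request, abort

app = Flask(__name__)

def is_authenticated(req) -> bool:
    # Placeholder check; a real service would validate a session or signed token.
    return req.headers.get("X-Auth-Token") == "expected-token"

@app.before_request
def enforce_admin_auth():
    # Every request under /admin/ is checked here, even routes added later by
    # developers who forget to add their own authentication logic.
    if request.path.startswith("/admin/") and not is_authenticated(request):
        abort(401)

@app.route("/admin/reports")
def admin_reports():
    return "sensitive admin data"
```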

Nic Fillingham: Got it. So that first tip was about documentation: especially if you're doing something temporary that needs to be removed or deprecated before shipping, make sure you have really good documentation. You said it could be an extra 10% of effort, but that extra effort is going to pay off in the long run. And then the second point was that when you find something, step back a little bit and understand or discover what the security boundary is that's being crossed, rather than what's happening at the code level. So did I get that right?

Guy Arazi: Yeah.

Nic Fillingham: Okay.

Guy Arazi: Can I add one more thing?

Nic Fillingham: Oh, please keep going. Keep going.

Guy Arazi: So another thing I would say, that maybe I omitted: try to use your colleagues and ask their opinion. Try to get reviews from security-oriented folks in your department or team, because they can save you a lot of pain and they can explain a few attack scenarios and a few vectors. And if you're not sure about something, you can always Google it, and you can even use ChatGPT, which solves a lot of issues for many people. Try to enrich yourself, build your knowledge around security and how it works, especially for something you're developing and about to take to general availability.

Nic Fillingham: Got it, and then so that second, or that third, excuse me, is ask your colleagues, ask your friends, ask, you know, maybe community groups that you have a relationship with and/or some LLM products like a Copilot or something.

Guy Arazi: Yeah.

Nic Fillingham: So, all right, so let's just stay with that for a second. So I'm going to ask my colleagues, maybe this is, you tell me if this is too generic a question, but, like, what am I going to ask them? Am I going to ask them, Hey, I found this thing, take a look at it, do you see what I see? Or am I going to ask them, What security boundary do you think this crosses? Like, is there a, you know, is there a set of questions to go ask, or are you just sort of saying leverage your colleagues and leverage the people in and around your space to sort of check your work?

Guy Arazi: So basically, when you're promoting new code and trying to push it to your main branches, normally you'll get a peer review. Most of the time that's mandatory at every company. But it doesn't mean the code gets reviewed in security terms. It might get reviewed for optimization or readability or other things, but not always for security. And if you know you have someone with an edge in security, or you're not sure about the code you've just implemented because it involves, I don't know, user access, and I'm just throwing out examples, an admin panel, an authentication mechanism, or user read access to secret properties or whatever, maybe go the extra mile and recognize that you're now dealing with something a bit more sensitive. It's not just something that would only be exposed in the UI, which also has its risks, but something that could involve cross-tenant operations or something that could be significant. And yeah, I think: know what you're doing, and know whether it's sensitive enough to go to the other folks and see if they might have better insight for you. Yeah.

Nic Fillingham: Got it. And then what about tools that you, I mean, it sounds like your team builds their own in-house tools, but are there other industry available, sort of publicly available tools that you recommend folks leverage that perhaps they don't currently leverage or aren't aware of to assist in variant hunting?

Guy Arazi: Yeah, so there are many open-source projects. You can most definitely use tools like Semgrep for static analysis. Obviously, don't expect it to find everything, and you need to play around with it. There are many good guides, and even Semgrep's official page has a really cool guide that walks you through everything you should probably know about Semgrep, like how you build a query and how you do everything. So it really depends on where you want to go, but if you're doing web app security testing, maybe even use Burp. I'm not sure everyone has the pro version, but you can always develop your own Burp extensions with some coding, you know, with Copilot, ChatGPT, etc. You might create something you hadn't thought about. But then again, I'm going back to the same thing: don't focus on the implementation, don't focus on the code. Focus on what you're trying to solve and what hurts your product most, and try to find the solution around that in order to avoid it. Tooling is nice, all these scanners and everything are fine, but they're not the game changer. The game changer is your attitude and your approach towards these vulnerabilities and bug classes. Because essentially you want to eliminate bug classes; you don't want to just knock out a single instance of a vulnerability. You want to eliminate them in a way that they won't return to your code base, or, even if they do return or get implemented again in a way where exploitation is feasible, make sure you have other mitigations that might stop them. So go with your approach first, and then try to understand what tooling you can develop or what other open-source tooling you can use and leverage. And I don't have a specific tool I would call my go-to, because different tasks require different sets of capabilities, and we always try to work our way around the actual problem, not the solution. The solution can be 400 different variations of mitigations or whatever; we're always trying to find what we want to solve. And then we start looking at tooling and other components that might assist us in our job, but they're not really our job.
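
For listeners wondering what "develop your own Burp extension" can look like in practice, here is a tiny hedged sketch using Burp Suite's legacy Jython-based Extender API (IBurpExtender, IHttpListener, and the callback methods come from that API; the marker string is just an illustration). It flags any response passing through Burp that contains a sensitive-looking marker, which is one simple way to keep an eye out for variants of a known issue while browsing an application.

```python
# Minimal Burp extension (legacy Extender API, run under Jython) that raises an
# alert whenever a proxied response contains a sensitive-looking marker string.
from burp import IBurpExtender, IHttpListener

MARKER = "BEGIN PRIVATE KEY"  # illustrative signature of the issue being hunted

class BurpExtender(IBurpExtender, IHttpListener):
    def registerExtenderCallbacks(self, callbacks):
        self._callbacks = callbacks
        self._helpers = callbacks.getHelpers()
        callbacks.setExtensionName("Variant marker (sketch)")
        callbacks.registerHttpListener(self)

    def processHttpMessage(self, toolFlag, messageIsRequest, messageInfo):
        if messageIsRequest:
            return  # only inspect responses
        response = messageInfo.getResponse()
        if response is None:
            return
        body = self._helpers.bytesToString(response)
        if MARKER in body:
            self._callbacks.issueAlert("Possible variant: sensitive marker in response")
```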

Nic Fillingham: And, got it. And how is this space evolving and what do you see happening in the near and sort of short-term future for how variant hunting will become perhaps more sophisticated and more sort of dynamic? Where are you hoping to take this space and what are your team focusing on, if you can share any of that?

Guy Arazi: I think that right now, even without looking to the future, there are many variants that we at Microsoft struggle to find, and I believe other companies do too. So there are things we need to solve now. And I'm sure the future won't let us down; it will bring more issues and more complex scenarios. Don't forget, we all have AI going on now, all the LLMs, and we have no idea how it's going to change our code bases and the way we build code, even in the sense that LLMs might just distribute variations of a vulnerability through our code bases in the future. So it might help us now, but it might also hurt us in the long term. We must always remember that security is unexpected, and security progresses as long as technology progresses. And technology progresses exponentially; we see it growing fast, and it's not slowing down, it's just the other way around. I don't think there's something specific I would point to that will let us do better variant hunting in the future, but I think we're at a time where we might need a different game changer, a different thing, to do variant hunting better. Because right now there's so much variant hunting, so many variations of different vulnerabilities and different scenarios where vulnerabilities are being found in the wild, or at least at Microsoft, that we're also trying to understand how to wrap our heads around different things, mainly the logical part of it: how do we want to tackle it, and which scenarios do we want to focus on first? So yeah, I don't have any big plans for the future around it. I think we need to be dynamic and always adapt ourselves to the current technology and the current step of the way. So every time, we need to assess the market, assess the features, assess the products, assess everything around us, until we find a stack that works in an appropriate way and allows us to do variant hunting. And doing variant hunting doesn't mean you need to disclose different types of vulnerabilities; you just need to enhance your capabilities around the different variations of vulnerabilities in your code base. And going a bit deeper, finding the vulnerability doesn't always mean you're resolving the bug. So maybe in the future we're going to see something that will not only find variations but also mitigate them at the same time. But it's too early to say. The one thing I can say is that vulnerabilities are going to affect code bases by having multiple variations. We see it now, and I assume we're going to see it in the future as well. So it's definitely here to stay.

Nic Fillingham: Got it. So I'm going to try and do a summary. Tell me what I've missed here. So the variant hunting tips and tricks, or guidance here, are very much to focus less on the actual code itself, the actual string, and understand the logic, what's actually going on from a pattern perspective, so you can then look to find those patterns in other places. Also, focus on understanding what security boundary is being crossed, again, as opposed to what the actual code snippet or code chunk is. Leverage your peers in the space around you, whether they are your colleagues or folks in the community, to get their thoughts on what you've found and how you've analyzed it, to understand that security boundary and the logic and the patterns that are happening. You can also use the LLMs, the Copilots and the ChatGPTs. I think you also said that this is still a uniquely human set of challenges, that we need human brains in and around this, as opposed to just automated static code analysis tools. And then obviously, when you are writing code, any time you put something in that is designed to be temporary in some sense, make sure you document it really well so that it doesn't accidentally find its way into the final product. How did I go?

Guy Arazi: Perfectly. Way better than what I've done, so.

Nic Fillingham: No, no, gosh.

Guy Arazi: I'm joking.

Nic Fillingham: I'd love to sort of open the floor, you know, again, for anything else you'd like to add, or is there any training or documentation, courses, anything that you sort of recommend folks that would like to learn more about this space, or perhaps even follow the work that you and your team are doing. Anything you'd like to talk about or anything we haven't sort of covered that you think is still important with regards to variant hunting?

Guy Arazi: So I at least tried to share as much as I could about what we do, and to give a better understanding and maybe ways for others to think about the issue through other sets of eyes. So don't go too narrow; try to understand the issue. I don't think I have specific courses or anything like that that would help people do variant hunting. But the one thing I can say is: always stay updated. Try to learn what the latest vectors and scenarios and vulnerabilities are that have been discovered. Try to understand how they might appear in your code bases. Try to understand how you can leverage other folks' insights and blogs and white papers or whatever, and how you can bring that down to your company, to your assets, to what you own, and try to find the same similarities there. Maybe it's not straightforward, but when you find something that does look similar, and you're, let's say, well-trained, or very familiar with your code base or your infrastructure stack, it will be easier for you to understand by yourself what it means, what is really important, and whether you can relate it to the research in your current job. So yeah, just be dynamic about everything. Try to explore, try to see for yourself if it makes sense to you. Because don't forget, a piece of data might not mean something to someone else, but it might mean something to you. And this is always something we need to follow. The data is the key to our small choices, or at least we can route our paths in a way that leads to fewer mistakes and more value. And I think that if we rely on it and we learn from others, we can essentially save a lot of time and pain and add a lot of value to the products and features where we have ownership.

Nic Fillingham: Got it. That's a great place to almost end it. I want to add one more thing, which I feel is important here. And that is, you know, variant hunting is important. Variant hunting needs to happen because I think, I didn't want to put you on the spot with a metric or a number, but it does sound like from Microsoft's perspective, when bugs are submitted to us, when vulnerabilities are submitted to us from researchers, it sounds like perhaps more often than not, or at least in a statistically significant sense, variants are found. And so if Microsoft is having that experience, is having those numbers and that volume with what's being submitted, then that's probably going to be indicative of what other folks in the industry should expect. So variant hunting is critical, variant hunting is important, and it should be invested in, both as a security researcher and as a, you know, an engineer and responder on the other side.

Guy Arazi: Yeah, I agree with that. Don't overlook variant hunting. Make sure that you turn every stone upside down and really get to know every component, and then it'll probably be easier for you to find these different types of vulnerabilities. Don't just rely on the one code chunk that screams, this is a vulnerability. Find the other ones that don't scream, but still have the vulnerability in them. That's one thing I can say.

Nic Fillingham: Lovely. Guy, thank you so much for your time. This has been a fantastic chat. I've learned a lot. I hope, I know our listeners will as well. Is there anywhere we can follow you on the interwebs? Are you on any of the social, are you on the Twitters? Are you on LinkedIn? Would you like folks to look you up somewhere or reach out with questions?

Guy Arazi: Yeah, sure. So I'm mainly on LinkedIn and sometimes on X. On LinkedIn, I'm just Guy Arazi, and on X, my handle is MindFSXV. Just hit me up with questions and I promise I will try to respond to everyone, even if it's not about Microsoft products and features. Thank you for listening.

Nic Fillingham: Oh gosh, well, thank you so much for your time. We'd love to have you back on The BlueHat Podcast on another day. Guy Arazi, thanks so much for joining us.

Guy Arazi: Thank you for having me.

Wendy Zenone: Thank you for joining us for The BlueHat Podcast.

Nic Fillingham: If you have feedback, topic requests, or questions about this episode.

Wendy Zenone: Please email us at bluehat@microsoft.com or message us on Twitter @MSFTBlueHat.

Nic Fillingham: Be sure to subscribe for more conversations and insights from security researchers and responders across the industry.

Wendy Zenone: By visiting bluehatpodcast.com or wherever you get your favorite podcasts. [ Music ]