The BlueHat Podcast
Ep 46 | 2.5.25

Automating Dynamic Application Security Testing at Scale

Transcript

Nic Fillingham: Since 2005, BlueHat has been where the security research community and Microsoft come together as peers --

Wendy Zenone: To debate and discuss, share and challenge, celebrate and learn.

Nic Fillingham: On "The BlueHat Podcast," join me, Nic Fillingham --

Wendy Zenone: And me, Wendy Zenone, for conversations with researchers, responders and industry leaders, both inside and outside of Microsoft --

Nic Fillingham: Working to secure the planet's technology and create a safer world for all.

Wendy Zenone: And now, on with "The BlueHat Podcast." Welcome to the BlueHat Podcast. We have a special guest on today, Jason Geffner. Welcome to the podcast. We will be discussing Jason's amazing blog post, "Scaling Dynamic Application Security Testing," DAST, for those in the know. Jason, tell us a little bit about yourself and then we'll dig into your blog post.

Jason Geffner: Yes, thanks, Wendy, for having me here today. Thanks, Nic, as well. So, I am a Principal Security Architect at Microsoft. I've been here for the past three years, and for the past year I have been focusing on what you just said, which is how to best scale out Dynamic Application Security Testing, or fuzzing, of Microsoft's web services.

Wendy Zenone: Do you want to give us a brief overview of what DAST is for those that maybe are just hearing about it for the first time?

Jason Geffner: Absolutely. So, Dynamic Application Security Testing, or DAST, is a way to use automated tooling to try to find security vulnerabilities in a website or web service or web application. And the idea is that it works by sending many, many, many random-looking requests to the target web service, to see whether the web server returns any suspicious responses that might indicate that there's a security vulnerability in how the web service is designed or implemented. But it's much more efficient than having a human manually look at every web service in an enterprise deployment.
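
To make the idea concrete, here is a minimal sketch of that request-fuzzing loop in Python. It is purely illustrative, not any particular DAST product; the target URL, endpoint list, and "suspicious response" heuristics are invented for the example.

```python
import random
import string
import requests

# Hypothetical target and endpoints -- a real DAST tool learns these from an
# OpenAPI specification or by crawling the application.
TARGET = "https://svc.example.test"
ENDPOINTS = ["/api/orders", "/api/users/{id}"]

def random_payload(length: int = 32) -> str:
    """Generate a random-looking string to use as fuzzing input."""
    return "".join(random.choices(string.ascii_letters + string.digits + "'\"<>%", k=length))

findings = []
for _ in range(1000):  # real tools send far more requests than this
    path = random.choice(ENDPOINTS).replace("{id}", random_payload(8))
    try:
        resp = requests.post(TARGET + path, json={"q": random_payload()}, timeout=5)
    except requests.RequestException:
        continue
    # Treat server errors or leaked internals as "suspicious" responses worth triaging.
    if resp.status_code >= 500 or "stack trace" in resp.text.lower():
        findings.append((path, resp.status_code))

print(f"{len(findings)} suspicious responses to triage")
```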

Nic Fillingham: How does DAST, do I say DAST? That sounds strange in my mouth. But anyway, how does DAST differ from SAST? And is there any other permutation of that acronym that we should be aware of?

Jason Geffner: So, there are three main permutations. So, there's DAST. There's SAST, Static Application Security Testing. Both approaches use automation to try to find security vulnerabilities. The difference though between DAST and SAST is that DAST uses runtime testing, while the target is up and running, to try to find security weaknesses. Whereas for SAST, the S in SAST -- well, the first S -- is for static, and SAST works by using automation to look at the source code for the target application to try to find weaknesses based on what is in the source code. So, the target doesn't need to actually be up and running or deployed anywhere. As long as the SAST tool has access to the source code, it can find vulnerabilities. The third version is IAST. The idea here is that it's similar to DAST, but it's more driven by a human. It's not fully automated. The human who's doing the testing has interactive testing abilities, which is what the I in IAST is for.

Nic Fillingham: So, your blog post, which I have up on my screen now, and we'll put the link in the Show Notes, you reference upfront that this is a follow-on from your BlueHat 2024 talk. And so, I wondered if we could just go back to BlueHat 2024 for a minute and what you presented there. For folks that were at BlueHat, or have watched some of the video recordings and sort of seen some of that coverage, and then are reading this blog post today, at a high level, what are the differences? What was published in the blog post today that maybe wasn't yet fully formed or presented at BlueHat?

Jason Geffner: Absolutely. So, many of your listeners are probably thinking, "Why is Microsoft even talking about DAST?" DAST has been around for, I don't know, 20 years? And there are already so many really good tools out there to do DAST. Some might be open-source tools that are free, some might be commercial tools. I want to say first and foremost, this work that I've been doing for the past year is not about reinventing the wheel. It's not about creating a new DAST tool. The work that I've been doing has been focused on taking existing DAST tools that are already out there and actually figuring out how to get them to run automatically without requiring any human intervention. And the reason is that DAST tools, for the most part, need two things to be able to work properly. One is that they need to know, for the target that they're going to scan, what are all of the endpoints or interfaces exposed by the target. The second thing that they need is to be able to send authorized requests to the target, because if you think about most web services, especially the type that Microsoft runs, if you send an anonymous request to a web service that is expecting users to send authenticated requests -- if it's anonymous and there's no authentication token going along with the request -- the web service is going to say, "Thanks for sending this request to me, but I don't know who you are. You're not authorized to send this request, so we're not going to actually process the request you're sending." So, if you imagine a DAST tool running without any privileges to send authorized requests, almost all the requests that it would send to a target web service are going to get rejected, meaning that the DAST tool really isn't going to be able to exercise the functionality of the service. So typically, when companies like Microsoft or other enterprises set up DAST scanning for services, they have to deal with these two problems: discovering what the DAST tool should test and specifying that, passing it as input to the tool; and secondly, making sure that the DAST tool has the right credentials to do the testing in an authorized way. And historically, that has been done through manual effort of configuring the service to allow for authenticated and authorized testing and manually handing those credentials to the DAST tool, and also making sure that an OpenAPI specification, or a Swagger specification, which defines the endpoints, the interfaces offered by the service, is made available to the DAST tool. And those two things, handling the auth and handling the OpenAPI specification, they're not necessarily difficult to do, but they are time consuming. And when you're a company like Microsoft and you have thousands or tens of thousands or hundreds of thousands of endpoints, doing this manually doesn't scale, which is why I endeavored on this about a year ago to figure out, "Is it possible to automatically run DAST tools without needing a human to manually configure their services or manually generate these specifications?" That's what we're doing now. We're doing this all automatically.
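
For readers who haven't set up a DAST scan by hand, the sketch below shows, in hedged form, the two manual inputs Jason describes: an OpenAPI specification listing the endpoints, and credentials for authorized requests. The spec fragment, token, and scanner interface are all hypothetical stand-ins, not Microsoft's tooling.

```python
# (1) The OpenAPI/Swagger spec a service owner traditionally has to author or
#     export -- it tells the scanner which endpoints and parameters exist.
openapi_spec = {
    "openapi": "3.0.0",
    "paths": {
        "/api/orders": {
            "post": {
                "requestBody": {},  # request schema elided for brevity
                "responses": {"200": {"description": "OK"}},
            }
        }
    },
}

# (2) Credentials the owner must provision so the scanner's requests aren't
#     rejected as anonymous. These two steps are the manual, non-scaling part
#     that the automation discussed in this episode removes.
auth_headers = {"Authorization": "Bearer <token provisioned for testing>"}

def run_dast_scan(spec: dict, headers: dict) -> None:
    """Stand-in for invoking any off-the-shelf DAST tool with those inputs."""
    print(f"Scanning {len(spec['paths'])} endpoint(s) with supplied credentials")

run_dast_scan(openapi_spec, auth_headers)
```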

Nic Fillingham: And what's the answer? Is the answer, yes?

Jason Geffner: The answer is yes, it's possible.

Nic Fillingham: All right, well, thanks for your time, Jason. We'll see you on another -- oh, is there more? Sorry.

Wendy Zenone: I was wondering, does -- does the automation help also validate the findings? Does it look at it and maybe remove false positives or things that maybe were accidentally flagged in some way?

Jason Geffner: It can. So, one of the exciting things about LLMs, which are advancing every day, is being able to use them to try to find what we would call false positives. And this could apply to the output of a DAST tool. It could apply to the output of a SAST tool. It could even apply to vulnerabilities discovered by humans. If you're running a bug bounty program and you're getting many, many security vulnerabilities reported to you, you can use LLMs or other forms of automation to try to detect whether something is likely a false positive or a true positive. So, that's not something that I personally have been working on for the past year. I'm more interested in getting DAST tools running, but when it comes to handling the output, yes, we're always interested in exploring new ways to make sure that we're not bubbling false positives up to our developers.

Nic Fillingham: As you were talking, Jason, and as I've sort of read through the blog post, not to turn this into a critique of your title choice, but I wondered if another way of thinking about what you talk about in the blog is that this is also about automating. Part of it is, you know, it's called scaling DAST, but in some ways it's also automating DAST, right? So, the question I was going to ask is, why is the problem of scale the higher-order bit than perhaps the idea of automating? You talked about the challenge of not requiring so much manual, human input to get DAST up and running for a particular application or thousands of applications. And so, that felt like maybe the automating part. But you know, there's obviously a scale bit. Maybe I didn't explain this very well. Why is this called scaling DAST and not automating DAST? Maybe that's another way to ask the question.

Jason Geffner: Yes, I think when you are an enterprise the size of Microsoft, anything that you're automating is going to be automating at scale. If, you know, we were a much smaller company with many fewer web services to worry about securing, we'd still be interested in automating, but we wouldn't have to worry about the scale because there are fewer targets that we need to shore up. So, I think the scale in this context is really a result of the size of the company, the size of the attack surface that Microsoft has, and making sure that we support that in the best way possible.

Wendy Zenone: Who is this blog post for? What -- what is the audience you're targeting?

Jason Geffner: Yes, I think there are a few audiences in this case. One audience is customers of Microsoft Azure and other Microsoft services, showing our customers all the work that we're doing at Microsoft through SFI and other means to ensure that we are securing their services, securing their data, by using clever engineering approaches that perhaps our competitors aren't using. The other audience in this case is security assurance subject matter experts or owners at other companies who are perhaps grappling with similar problems of how to perform DAST in an automated way, perhaps at scale, encountering the same issues that we've encountered of automatically generating these OpenAPI specifications, automatically handling authentication and authorization, and perhaps giving them ideas of how they can use similar approaches in their own environments to secure their products and services.

Wendy Zenone: Quick question. In your blog post, you mentioned confidential containers. That's new for me, and I know there's a link in the blog post that does give a much more detailed explanation. But could you give me, like, a quick overview of what a confidential container is? I've not heard that before.

Nic Fillingham: Yes, maybe how it's used in this -- in this context.

Jason Geffner: Yes, so a confidential container is similar to a confidential VM, but a container version of it. Now of course, that begs the question, "What is a confidential VM?" The idea for both confidential VMs and confidential containers is that any code or data running in or deployed to these environments is not accessible to, and is also tamper-proof from, the operator of that cloud environment. So, if you are an Azure customer and you deploy a confidential VM or a confidential container, Microsoft cannot see what is running in that VM or container. They can't tamper with the data in that container or VM. The other benefit is that it allows us to perform attestation for what is running in the container, to ensure that if an attacker somehow compromised a confidential VM or a confidential container, we would actually be able to detect that by doing attestation reporting on the container, to see whether what is running in the container is really what we expect to be running in the container or the VM. So, this adds another layer of security because, you know, I was talking earlier about how there are so many excellent DAST tools written outside of Microsoft that we would love to leverage and run against our own services. But because there have been so many historic supply chain security weaknesses, we are fully cognizant of the fact that some third-party DAST tool that we may deploy for scanning our services may be compromised. There may be backdoors. We want to ensure that execution of any third-party DAST tool is fully isolated, and that if an adversary is somehow able to sneak a backdoor, say, into one of these tools, we always know what's running in those containers, and that doesn't allow an attacker to slip anything new in without us detecting it before it's executed.

Wendy Zenone: That was a great explanation. Thank you.

Nic Fillingham: To sort of summarize what you just said, so using confidential containers, confidential VMs, comes with this capability of attestation, which means that worst case scenario, some third party tool has malicious code in it, has a backdoor in it. It can run inside a -- a commercial, sorry, a confidential container or a confidential VM, but you would be able to -- it couldn't do any damage, or is it more that you would see it running and be able to kill that VM or kill that container? I guess if we play out that worst case scenario, how does being in a confidential container help mitigate any of that risk?

Jason Geffner: Yes. So, it allows us to prevent it from running to begin with, because we would ensure that the image that is deployed to the container matches the image and the security policy for that image that we have designed the container to run with. So, if we see an image about to be run in a container that we are deploying, and it doesn't match what we expect to see, we just don't run it. So, we prevent it from happening.
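
As a purely conceptual sketch of that check (this is not Azure's actual confidential-container attestation mechanism, which is hardware-backed and policy-driven; the digest value and function names here are invented):

```python
import hashlib

# Security policy: the only image measurement this deployment is allowed to run.
# In a real deployment this value would be pinned when the container group is designed.
EXPECTED_IMAGE_DIGEST = "sha256:" + "0" * 64  # placeholder for illustration only

def measure_image(image_bytes: bytes) -> str:
    """Measure the image we are about to launch."""
    return "sha256:" + hashlib.sha256(image_bytes).hexdigest()

def launch_if_trusted(image_bytes: bytes) -> None:
    if measure_image(image_bytes) != EXPECTED_IMAGE_DIGEST:
        # A tampered or backdoored image (say, a compromised third-party DAST
        # tool) fails the check and is never executed in the first place.
        raise RuntimeError("attestation failed: refusing to run unexpected image")
    print("measurement matches policy; starting container")
```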

Nic Fillingham: Got it. So, Jason, in the blog post, you talked about how you, and I think maybe it's your team, are you a lone wolf on this one or I assume there's a team of folks working on this problem?

Jason Geffner: So, I've been focused on this for the past year myself. I do have some others supporting me on this, a developer named Eugen [phonetic], a developer named Satish [phonetic], who've been contributing to this as well. And I've also been working with Microsoft's DAST council on this --

Nic Fillingham: Oh.

Jason Geffner: -- project in many ways when it comes to planning and design. And this council is made up of security subject matter experts from all the security assurance teams from around the entire company. I think oftentimes people think, "Oh, Microsoft Security, it's one team." The reality is that yes, we do have an organization named Microsoft Security, but there are numerous security assurance teams throughout the entire company focused on different products and services offered by various organizations. And the DAST Council has representation from each of those security assurance teams to ensure that we are designing and developing this new DAST platform in the best way possible, in a way that meets the needs and expectations of all of these security assurance teams around the company.

Nic Fillingham: When you say DAST Council, I immediately see sort of a Tolkien-esque scene of folks with staffs and pointy hats. Anyway, the reason I ask is to check that I've read the blog correctly and understand what the announcement is. So, I read in the blog post that you and the team and the council have created an agent, and the agent is currently running in non-production or test environments. Is the eventual goal to have that agent running in production environments, so that it is able to, as you say, dynamically test code that is running in a production environment? Or is DAST, you know, conceptually something that only really happens in the test phase, and then once it goes to production, you rely on other mechanisms and other tooling to ensure, you know, security and the validity of what's running?

Jason Geffner: Right. So, the goal is for this specific agent that we are developing to remain only in non-production environments, because Microsoft already duplicates the functionality of services that are running in production into non-production environments, and if we can test in non-production environments, it's better for us to do it that way, because then we don't have to interfere with anything running in production. So, even if there is a performance penalty as a result of the DAST scanning we're doing, better to have that performance penalty not adversely affect our customers. That said, there are requirements for some programs like FedRAMP, and other compliance requirements, that do need Microsoft to do DAST testing in production. And Microsoft does have a DAST platform that already does that very well. The reason why we endeavored on this project, though, is that the existing platform that we have, which is up and running already and has been for many years, still requires service owners, web service owners, to opt in and to provide the OpenAPI specifications and configure their services for authentication and authorization. And we do have that for our most critical services, of course. We do, and we have been doing DAST on those for years and years and years.

Wendy Zenone: Are there any plans to make this available open source, or any plans to share this through the open-source community?

Jason Geffner: Not currently, but I don't want to, you know -- never say never.

Wendy Zenone: Right.

Jason Geffner: It's possible we may find value in open sourcing it in the future, but currently it's -- it's inner sourced only.

Wendy Zenone: Wonderful.

Nic Fillingham: Perhaps there are elements in the blog post, or will there be white papers published, that at least sort of talk about how it works? So that, you know, folks listening to this podcast, folks reading the blog, that don't work for Microsoft and are facing similar challenges, could go further than what's just in the blog post to try and sort of implement some of these best practices?

Jason Geffner: I think they certainly could, and there are definitely opportunities ahead of us to release white papers with more detail. Another thing to keep in mind is that the approach that we're using for this agent is by nature implementation-specific to the web frameworks that we're targeting for the services that are running. Which is to say that we're Microsoft, and so I'm sure it'll come as no surprise to most people that most of the services we run at Microsoft are written in ASP.NET. But there are two flavors of ASP.NET. There's ASP.NET Framework, which is the older version, and there's ASP.NET Core, which is the newer version. Well, for our prototype, we're currently only supporting ASP.NET Core, and of course down the road, we're going to add support for ASP.NET Framework, and after that, you know, we'll see what is the next most common framework used at Microsoft. Maybe it's Node.js, maybe it's Ruby on Rails. But the key takeaway here is that there is the potential for multiple white papers to come out in the future, based on this long tail of various implementation-specific details that we pursue to ensure that our agent can work for the most common web service frameworks being used at Microsoft.

Wendy Zenone: I wanted to change directions a little bit and ask about the transparent auth that you mentioned. Can you talk about this transparent auth protocol, or whatever it is, that you mentioned in the blog post? It's mentioned a couple of times, and it's another thing I haven't heard of, but I want to dig into it a little bit.

Nic Fillingham: We like to scan the blog post for new acronyms and other, you know, words that have been jumbled together in ways that we haven't seen before.

Wendy Zenone: We're -- we're learning along with the audience here.

Nic Fillingham: Oh, yes.

Jason Geffner: Absolutely. So traditionally, when it comes to DAST scanning, a user or service owner provides to the DAST tool the API endpoints or interfaces for the tool to scan, and credentials for the tool to scan with. But the approach we're taking is, we don't want to have a human manually provide those authentication credentials. So, how can we use automation to make it so that a DAST tool can run without authentication credentials and still be able to exercise the functionality of the target web service? So, we took a look at how, in this case for the prototype, ASP.NET Core checks authentication and authorization for incoming requests. And what we were able to do is create hooks in the request-handling pipeline by discovering at runtime the authentication module for the target web service and the authorization module for the target web service. And the transparent auth hooks exist right before the authentication module or component in the request-handling pipeline, and right before the authorization component in the request-handling pipeline. And what these two transparent auth hooks do is they look to see, "Is the incoming request that is being handled in the request-handling pipeline in this moment from one of the DAST tools that we ourselves are orchestrating?" If it is from one of our orchestrated DAST tools, then the hooks that we've injected actually send the incoming request to the components in the request-handling pipeline after the authentication component, or after the authorization component, thereby skipping any authentication or authorization checks that would normally be done by the web service. And because this is effectively transparent to both the service, because it doesn't see that we are injecting these hooks after the fact, and transparent to the tester, because we do this automatically, we decided to use the term transparent auth. But I want to be clear, it's not a protocol. It's not a standard. It's a term that we coined for this hooking approach that we're using.
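
The real hooks are injected into ASP.NET Core's request-handling pipeline at runtime; as a rough analogy only, here is a toy Python sketch of the same idea. Everything in it, including the marker used to recognize orchestrated DAST traffic, is hypothetical and not Microsoft's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Request:
    headers: dict = field(default_factory=dict)

DAST_MARKER = "x-orchestrated-dast"   # hypothetical way to recognize our own tooling

def business_logic(request: Request) -> str:
    # Stand-in for everything after auth in the request-handling pipeline.
    return "200 OK"

def authentication(request: Request) -> str:
    # Stand-in for the service's real authentication/authorization components.
    if "authorization" not in request.headers:
        return "401 Unauthorized"
    return business_logic(request)

def transparent_auth_hook(request: Request) -> str:
    """Injected just before the authentication component at runtime."""
    if request.headers.get(DAST_MARKER) == "trusted-local-agent":
        # Request comes from an orchestrated DAST tool: skip the auth checks
        # and hand it to whatever comes next in the pipeline.
        return business_logic(request)
    # Everyone else goes through the service's normal auth checks.
    return authentication(request)

# An anonymous fuzzing request is rejected as usual...
print(transparent_auth_hook(Request()))                                              # 401 Unauthorized
# ...but the same request from the orchestrated DAST agent exercises the service.
print(transparent_auth_hook(Request(headers={DAST_MARKER: "trusted-local-agent"})))  # 200 OK
```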

Wendy Zenone: Sounds like you wrote the blog post. You seem to know. That was great. I was following along with the, you know, diagram here, and it shows the hook before the authentication, the hook before the authorization. So, I encourage listeners to pull up the blog post and follow along.

Nic Fillingham: Jason, I want to come back to scale here and my poor attempt at creating a question to differentiate between automation and scale. If I'm understanding all of this correctly, it sounds like the work that you're doing here is in part going to remove a lot of the manual, human input required to configure DAST for services. But there is still some work that needs to be done around implementing some of the OpenAPI components. Do you have a sense for how the work that is required is changing? How is that shifting? How much has been taken away and now doesn't need to happen with this work that you've created? So, it sounded like, if you wanted to implement DAST against an environment, against some web apps, you had to go and manually configure it. Okay, so now instead of having to do that, it sounds like you just implement some new OpenAPI elements and then it's automated from that point forward, and therefore some quantifiable amount of work has now shifted from manual configuration into automation. And I'm clearly not asking this question very well.

Jason Geffner: I think I understand where you're going with this. So, let me try to answer and -- and we can clarify if we need to.

Nic Fillingham: I'm going to take one more stab because I just -- I just love just hearing the sound of my own voice. No. This isn't fully automated in the sense that you don't have to do anything. There is still some work required for developers and engineers that are running test scenarios or perhaps writing code or both with this new approach that you've built. Is that correct? Or has all of that been taken away now and it's now a completely fully-automated, fully-scaled-out system with no new input or no new sort of work that needs to happen by test teams or test engineer teams?

Jason Geffner: So, the goal of this work is the latter where there is no work required at all to get this to run on your service. Now, we're not there yet, but long-term, we do want to have this agent automatically deployed to all non-production instances of services running at Microsoft, such that as a service owner, you don't have to do anything. This just happens, you know behind the scenes. You might not even know that it's happening because it's all being taken care of for you. Now, that said, it doesn't mean that there's no human work required at all because when it comes to mitigating the security vulnerabilities found by the DAST tool or the DAST tools that are being run, that still is going to have humans involved in the loop to triage and mitigate those security vulnerabilities. And the other thing that we're not looking to get rid of from a human perspective, is targeted pen tests. DAST is really good at finding low hanging fruit, but there are always going to be classes of security vulnerabilities or even instances of security vulnerabilities that are really hard to find with automated tooling. So, we're not looking to get rid of all the incredible pen testers and red teamers here at Microsoft or elsewhere in the industry. We're looking to make sure that proper security assurance is done at scale for all the services in our scope to make sure that we are finding the low hanging fruit in our services before external attackers find them.

Wendy Zenone: Did you like that, Nic? Was that a good answer?

Nic Fillingham: It was a great answer.

Wendy Zenone: Do you need more?

Nic Fillingham: I wish I was just much more concise in -- in my -- my question asking, but maybe we can fix that in post. Make me sound smarter. You go, Wendy.

Wendy Zenone: My question for you about this is, we all know security and SFI are our main focus right now. Was this already in motion before SFI was initiated, or was it something that was maybe in motion, and then SFI became our priority and kind of skyrocketed it? Like, how did those things help influence innovation such as this?

Jason Geffner: Yes, so they started in tandem. This wasn't the result of SFI, but the fact that Microsoft is so committed to securing our products and our services and our -- our customers' data is certainly helping with the internal support for this work.

Nic Fillingham: SFI, of course, is Microsoft's Secure Future Initiative, which is sometimes referred to as sort of the next Trustworthy Computing moment, in case that acronym had slipped by any of our listeners.

Wendy Zenone: I love the automation side of things. Back in my very first cybersecurity job, there was one person, and all they did was the DAST, and then another person, and that's all they did as well. And that was it, because it was so hands-on. I could see this just being a huge improvement, you know, even for small teams or big teams, anyone. But, yes, that's great. And with everything that you're doing for this, what's next? Are there any -- I mean, even if it's not planned on the road map, what would your dream be? Like, what features would you love to see added or, you know, expanded upon?

Jason Geffner: So, near term, I would love for us to continue testing this on more and more web services, to continue to mature it as a product offered internally. In the future -- we were talking earlier about hooking up LLMs to try to automatically filter out false positives discovered by DAST tools. There's also the opportunity to correlate security vulnerabilities that have been found with the actual code blocks responsible for those vulnerabilities. Now, that is already part of SAST tooling. Obviously, if a Static Application Security Testing tool finds a vulnerability in source code, well, it knows what line of source code the vulnerability was in. That's usually not a feature available with DAST tools. So, our thought is, "Can we kind of borrow an approach used by a class of tools named RASP to try to associate security vulnerabilities found at runtime with the actual source code, or code blocks, responsible for those vulnerabilities?" Another thing that we are looking to do down the road is using an approach that is leveraged by native fuzzers, as opposed to web service fuzzers. If you're running a fuzzer like AFL, American Fuzzy Lop, against a program on your computer, it is providing random fuzzing input to that program, and it attaches a debugger and knows what code blocks in the target program are actually getting exercised at runtime based on the input from the fuzzing tool. And from there, it can determine code coverage, all the different code blocks that are executed, based on the fuzzing input. And the idea is that, hey, if you run a fuzzer for a week and that's only led to 5% of the program being, you know, executed, your coverage rate is pretty low, and maybe there are security vulnerabilities in the other 95% of code that you're not testing. So, it's an opportunity as a security expert who's doing the fuzzing to maybe adapt or adjust your fuzzing approach or your testing corpus. Well, that approach doesn't really exist these days for most DAST tools, because usually the DAST tool is run remotely, on a remote client relative to where the web service is running. But because of the agentic approach that we're using, of actually having our own agent running on the same host system that's running the web service, we should be able to monitor the code blocks in the web service process that are being executed as a result of our DAST tooling. Which means that just like native fuzzers are able to measure code coverage based on fuzzing, we could also use this agentic approach for web services to measure code coverage for fuzzing web services.
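
To give a flavor of the coverage signal he's describing, here is a toy Python sketch of measuring which lines of a "handler" actually execute under fuzzed input. It is only an analogy for the kind of in-process monitoring such an agent could do; none of these names come from the actual agent.

```python
import sys
from collections import defaultdict

executed_lines = defaultdict(set)

def tracer(frame, event, arg):
    # Record every line executed in this file while the "service" handles input.
    if event == "line" and frame.f_code.co_filename == __file__:
        executed_lines[frame.f_code.co_name].add(frame.f_lineno)
    return tracer

def handle_request(value: str) -> str:
    # Stand-in for a web service handler under test.
    if value.startswith("admin:"):
        return "privileged branch"   # only some fuzzing inputs reach this line
    return "default branch"

sys.settrace(tracer)
for payload in ["hello", "admin:x", "123"]:   # fuzzing inputs
    handle_request(payload)
sys.settrace(None)

print("lines exercised in handle_request:", sorted(executed_lines["handle_request"]))
# If new inputs stop reaching new lines, coverage has plateaued -- a signal to
# adapt the fuzzing approach or testing corpus, just as with native fuzzers.
```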

Nic Fillingham: How does this system ensure that -- I'm trying to think of a sort of malicious, sneaky approach where the web app assumes that there's an agent or something watching it run, and then sort of obfuscates or hides certain bits that it doesn't want picked up by the DAST or by the, you know, security scanning tool. That's probably a DAST issue and not a scaling and automation issue, if that makes sense?

Jason Geffner: It sounds like you're saying what if someone at Microsoft created a Microsoft web service that intentionally tried to subvert security tooling deployed by Microsoft?

Nic Fillingham: Well, I mean, let's say there's a malicious actor who gets in and puts malicious code into a web service, but also writes some extra cheeky code to check whether or not any security tooling is running, and if so, then it sort of doesn't run those elements so that the DAST can't see it, or the agent can't see it, etcetera. Or it's able to sort of trick the agent or trick the tool into removing elements from this OpenAPI specification that gets generated.

Jason Geffner: Yes, I think that's one of the beauties of Microsoft's defense-in-depth approach, that no one tool or technique is meant to detect or prevent every security weakness. You know, that's why we have things like policies to ensure that only signed code is run on Azure services. It's why we have detections in place for adversary activity on our services. That's why we have entire incident response teams to handle these types of things. So, if an attacker compromised a web service and subverted detection by one of our DAST tools trying to find vulnerabilities, I'm confident that we would find that adversary activity in other ways. I think the benefit of DAST tooling is to try to find security vulnerabilities before we actually push something to production. It's one of the benefits of doing this in non-production environments. Ideally, we do this every time there's a code change to a service or a configuration change. We run our DAST tooling and we try to find security vulnerabilities so that they can be fixed before those updates are pushed to production. Finding security vulnerabilities only after something is in production is really less ideal, but that's why we have defense in depth, to try to detect and prevent exploitation regardless.

Nic Fillingham: Clearly, "less ideal" is a great way of describing that. And yes, defense in depth is obviously the answer. I was being very pedantic there, but just to, you know, find out if that extremely fringe-case scenario was within scope. It probably would get picked up with static code analysis, or, you know, in some other sort of process. So, you're right, defense in depth obviously is critical. Thank you, Jason.

Jason Geffner: My pleasure.

Wendy Zenone: What is your biggest takeaway from this work that you can share with the audience that maybe doesn't have all the resources that Microsoft has? Maybe they're just a team of two. Is there some nugget of information that you're just like, "This is what I can share with you all to take away here"?

Nic Fillingham: And can I add to that? Because I was going to ask a very similar question -- please answer Wendy's question, and it may even be the same question -- which is, what was the thing that was the most interesting, strange, unusual? Like, where were you most surprised, either pleasantly or not, during this whole process? Maybe that's the same answer, I don't know, but I was wondering, what did you expect that didn't happen, or what did you not expect that did happen, and you're like, "Whoa"? And is that the thing to pass on, as per Wendy's question?

Jason Geffner: Yes. So, I'm going to give Wendy two answers. One is --

Nic Fillingham: Can I have two as well?

Jason Geffner: I'll give you three. I'm sure there are many surprises I had. I'd say my first answer, Wendy, is my advice to people in the security space, or really in any space: try to find fun projects to work on that allow you to combine your passions. So, throughout my entire career, my passions have always been around reverse engineering, application security and automation. And it's not easy to find an intersection of all three of those things. But when I do come across problem spaces or opportunities that allow me to use one, if not two or even all three of those passions of mine, I get really excited about the work. I want to talk more about it and share it with other people. And it gets me excited every morning to actually work on solving really hard technical problems. So, that's my advice to listeners. Try to find ways to combine your passions into one fun project. The other big takeaway that I had from this work is that while there were plenty of technical hurdles to overcome in implementing a working prototype for this, it was at least as important to leverage the soft skills to make this type of work successful at a big company like Microsoft. Because even if I had a working prototype and I actually could show that, end to end, it works beautifully -- you run it with no input and it automatically spits out real security vulnerabilities -- if I don't have support from my peers and my leadership and other organizations around the company, there's no chance that we can actually get this deployed to services for all of Microsoft. So, building that support and having those people skills to get people on your side, get them to champion your work, get them to support it, is just as important, if not more important, than the actual technical side of all this work. And Nic, to answer your question about the biggest surprise I've had: I was constantly surprised, as I was developing the first version of this, by how many of my initial thoughts about how to implement this ended up not working. So, for example, when it came to automatically generating these OpenAPI specs at runtime, my first thought was, "Well, hey, can I just take a crash dump at runtime, a non-invasive crash dump of the target process, and passively look at the memory to discover all of the route endpoint objects in memory, and from there, outside of the target process, just generate the OpenAPI spec based on what I see in memory?" And the reason that didn't work is that many of the fields in memory for ASP.NET actually aren't populated at runtime by default. They're only populated in memory at runtime when certain get requests for properties are made. This is sometimes referred to as runtime data loading or lazy loading of data, but it meant I couldn't rely on just a single snapshot in time to be able to extract all the information I needed. What I realized is that what I needed to do was get references to objects in memory and actually call their getters in order to populate the data that I needed to generate the OpenAPI specs. And the next approach I used was trying to discover route endpoint objects in memory and use the addresses of those objects to try to get references, and then manipulate them at runtime. And I found that even though I was able to use that to find all of the objects and their addresses, there was no thread-safe way of getting references to those objects without risking a crash of the service, which, even in non-production environments, is something we want to avoid. So, I had to continuously iterate on the approach I used in order to find something that worked. And I think that another takeaway for listeners is that, you know, they might think, "Oh, here's this guy Jason, he's been working in this security space for 20 years. He probably knows exactly how to solve this from the very beginning and everything's just going to work." And the reality is that it doesn't matter how long you've been working on problems like this; it's always a result of constant iteration and learning from your mistakes as you go, and building on those, and trying to find the next approach that's going to work, until you land on something that does. And that's why we're publishing this blog post now.
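
As a toy illustration of the lazy-loading problem he ran into (the class below is only a Python analogy named after the route endpoint objects he mentions, not ASP.NET's internals):

```python
class RouteEndpointAnalogy:
    """A field that isn't populated until its getter runs (lazy loading)."""

    def __init__(self, template: str):
        self._template = template
        self._metadata = None            # looks empty in a passive memory snapshot

    @property
    def metadata(self) -> dict:
        if self._metadata is None:       # populated only on first access
            self._metadata = {"template": self._template, "verb": "POST"}
        return self._metadata

endpoint = RouteEndpointAnalogy("/api/orders")
# A crash-dump-style, passive look at the object sees nothing useful yet:
print(endpoint._metadata)    # None
# Calling the getter in-process (what the agent ultimately has to do, safely)
# populates the data needed to generate the OpenAPI spec:
print(endpoint.metadata)     # {'template': '/api/orders', 'verb': 'POST'}
```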

Wendy Zenone: I know we're just about out of time, but I have to ask this question. This is one of the questions I love asking everyone. Jason, what do you like to do outside of work? Who is Jason beyond Microsoft? Any fun hobbies? You know, are you an opera singer on the side? You know, pottery? What is it?

Jason Geffner: So, almost all of my time outside of work I dedicate to spending with my wife and daughter. They are, you know, the ones who keep me going, who keep me happy. Ninety-nine percent of the time I love my job; there's always that 1% that you can't get away from, but in those moments, having a supportive family is so critical. So, going out on date nights with my wife, trying new restaurants every week, spending time doing fun things with my daughter, that's what keeps me happy. I also love poker games. You know, for so many years before the pandemic, I had poker games in person that I would host at my house, and then it went online. And now that the pandemic, thankfully, is for the most part over, getting back in the swing of things of doing those in person is exciting. And even though I'm not a very good opera singer, one thing that always surprises people when I tell them is that I have years and years of experience doing improv comedy.

Wendy Zenone: Oh my gosh.

Jason Geffner: You know, I was talking earlier about those soft skills when it comes to getting people on your side and building support internally for things you're working on, having those improv skills is really helpful because it does help you go into those -- those high pressure meetings fearless and knowing what to say and how to say it and recognizing, "Hey, if you say the wrong thing, you can always recover and get back on track."

Wendy Zenone: Nic, I have an idea. Next BlueHat, we're going to have a poker night. We're going to have Jason doing improv in between presentations. It's all planned. Yes.

Nic Fillingham: That sounds awesome. Jason, thank you so much for your time, for being on, "The BlueHat Podcast," for presenting at BlueHat. You've also presented at other BlueHats in the past. Is that correct?

Jason Geffner: This was my first BlueHat. I presented at BlackHat many times and other conferences, but this was my first time having the pleasure to present at BlueHat. So, Nic, thank you. And Wendy, thank you for that opportunity to present at BlueHat. And thank you so much for the opportunity to talk with you and your listeners today. This was fun.

Wendy Zenone: Thank you.

Nic Fillingham: Thanks so much. We'd love to have you back on the podcast and the conference in the future. Thank you very much, Jason.

Wendy Zenone: Thank you.

Jason Geffner: Thank you.

Wendy Zenone: Thank you for joining us for "The BlueHat Podcast."

Nic Fillingham: If you have feedback, topic requests or questions about this episode --

Wendy Zenone: Please e-mail us at bluehat@microsoft.com or message us on Twitter at MSFT BlueHat.

Nic Fillingham: Be sure to subscribe for more conversations and insights from security researchers and responders across the industry --

Wendy Zenone: By visiting bluehatpodcast.com or wherever you get your favorite podcasts. [ Music ]