Threat Vector 8.8.24
Ep 29 | 8.8.24

Enhancing Ethical Hacking with AI

Transcript

Ryan Barger: So, an unskilled attacker attempting to do anything nowadays is able to be much more powerful than they were in a pre-AI era. [ Music ]

David Moulton: Welcome to Threat Vector, the Palo Alto Networks podcast, where we discuss pressing cybersecurity threats and resilience, and uncover insights into the latest industry trends. I'm your host, David Moulton, Director of Thought Leadership. [ Music ] Today, I'm speaking to Ryan Barger, Director of Offensive Security Services, about how his team is using AI. Here's our conversation. [ Music ] Ryan Barger, welcome to Threat Vector. Excited to have you here.

Ryan Barger: Likewise. Excited to be here. Thanks for having me.

David Moulton: Ryan, talk to me a little bit about what you do at Unit 42.

Ryan Barger: Sure. So I am the Director of Offensive Security Services at Unit 42. At the end of the day, we have a team of people who specialize in emulating adversarial skillsets, basically using the same techniques that attackers do. They do so in order to identify vulnerabilities inside of a network, to move through a network, and sometimes to exercise blue teams. There are a myriad of benefits, all delivered by using the same techniques wielded by bad guys.

David Moulton: Ryan, I think this is a really interesting space, and I'm curious, how did you get into, or what got you interested in Offensive Security?

Ryan Barger: So, I started out as a developer, and somewhere along the way, I was recruited into static code analysis. If you're not familiar with that, basically I received millions of lines of source code, was put into a dark room, you know, source code that wasn't mine, it was someone else's, and had to go find all the bugs and flaws inside that source code. Can I tell you something, David? That was my happy place. I loved static code analysis. I did it for many years. If you were to ask me, "What can you do tomorrow that would make you happy?" that would be it. In doing so, and by constantly being exposed to different developer mistakes inside a myriad of applications every single day, you start to be able to see how things are working on the back end of everything you touch. Eventually, I found myself, and I think you might sympathize with this, as the subject matter expert in that field. The Navy had appointed me as a subject matter expert for a specific command, and I was overseeing tons of test events, but my mentors had all moved on to other things. One of my mentors recruited me into offensive security, and I went and joined a DoD cyber red team to take that ethical side and twist it, going from "I know how to find the vulnerabilities" to also crafting exploits to move through a network. Really exciting stuff.

David Moulton: It sounds like it. So, today we are going to talk about AI, automation, and offensive security. Maybe we'll get into deepfakes, and some of your thoughts on where offensive security should be focusing and where it's going. I'm excited for the conversation, so let's get right into it. [ Music ] In offensive security, how is AI being leveraged to automate and enhance tasks that were previously manual or time consuming?

Ryan Barger: It's interesting. A lot of what offensive security is, is manually grinding through dead ends until, after 99 percent of your dead ends, you finally find that one lead that takes you further into an attack. So what we're doing with AI is trying to filter through those potential dead ends more quickly. Let me give you some examples. One area where we're using it is OSINT analysis, as we assess open-source information just sitting on the Internet. As we're doing payload development and establishing new evasion techniques to get around defensive products, we're also using it to help build our infrastructure, but that's just a small snippet of things. Additionally, for the overall management of an operation in offensive security, for things like report writing and everything else that goes along with a test event, we're trying to find a way to reduce that manual grind that is hacking, focus in on the areas that are really useful, and increase efficiency.

David Moulton: Ryan, for listeners who haven't heard the term offensive security, can you define that?

Ryan Barger: Absolutely. So, at its crux, right, you could just drill it down and say it's hacking. It's ethical hacking. I'm going to use the exact same techniques that an adversary uses to attempt to identify vulnerabilities and move through an environment, usually after a specific set of objectives, right? That can range from phishing, making phone calls into organizations and trying to social engineer access, all the way down to pivoting through an organization's domain to try and get access to a specific system the customer has designated as their crown jewels. So, we move through a network using the exact same techniques as an adversary to assess overall cyber risk. That's really the single-sentence description. At the end of the day, I like to say that my mission objective is not just to emulate the adversary, but to help the CISO sleep at night. CISOs are aware that there is a risk in their network somewhere, or at least they think there is. They send us after that perceived risk, we use all the techniques that a bad guy can use, and at the end we either tell them, yeah, that is a valid risk and here are some recommended remediations, or we tell them, no, actually, there are sufficient safeguards in place to prevent it, and then they can sleep at night.

David Moulton: I love it. So, an ethical hacker, and not quite a therapist [laughter], but in a sense, taking their nightmares away. How about OSINT? That's a term you've used. I've heard it before, but I want to make sure our audience is aware of what OSINT is.

Ryan Barger: Sure. Open Source Intelligence. Basically, when I'm an attacker moving after an organization, before I actually start giving away the fact that I'm targeting them, I want to look at all the information that already exists out on the Internet about them: looking up domain name records, going through LinkedIn profiles and trying to figure out information about their employees, recent job postings. The most recent way we've combined OSINT and AI is that we go through an organization's LinkedIn page and identify all of the people affiliated with the organization, along with their accompanying details. So, for example, Jane Doe went to college at the University of Pennsylvania, right? Now we take that information and correlate it with massive troves of breach information. In other words, people who lost their Yahoo accounts in 2012, those passwords and all that stuff, we're going through that with everything we know about the organization. We'll go and look through Jane Doe's name, and there will be a million records for Jane Doe. But using AI, we'll correlate those to addresses or other accounts associated with the University of Pennsylvania, or maybe phone numbers with the right zip code, and that starts to narrow it down and increase the fidelity of possible valid results. Now we've learned more about Jane Doe. We've learned about her potential password usage history. Maybe her password was, you know, NanasBoys1, and now we know we can try NanasBoys1, 2, or 3, right, as we try to move through the organization. So pilfering through massive troves of data and trying to figure out which records are most likely valid is something we're doing with AI and OSINT.
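
To make the correlation step concrete, here is a minimal, illustrative Python sketch of the kind of scoring Ryan describes: matching breach records against LinkedIn-derived details such as a university email domain or a zip code. All names, fields, and records below are invented for illustration.

```python
# Illustrative only: narrowing a million "Jane Doe" breach records down to a
# handful by correlating them with OSINT details. Data and fields are invented.
from dataclasses import dataclass

@dataclass
class BreachRecord:
    name: str
    email: str
    zip_code: str
    password: str

def score(record: BreachRecord, target: dict) -> int:
    """Crude relevance score: +1 per attribute that matches the target."""
    points = 0
    if target["name"].lower() in record.name.lower():
        points += 1
    if target["university_domain"] in record.email:  # e.g., a upenn.edu address
        points += 1
    if record.zip_code == target["zip_code"]:        # right geographic area
        points += 1
    return points

target = {"name": "Jane Doe", "university_domain": "upenn.edu", "zip_code": "19104"}
records = [
    BreachRecord("Jane Doe", "jdoe@upenn.edu", "19104", "NanasBoys1"),
    BreachRecord("Jane Doe", "jane@example.com", "90210", "hunter2"),
]

# Keep only high-confidence matches -- the "million records down to 50" filter.
candidates = [r for r in records if score(r, target) >= 2]
for r in candidates:
    print(r.email, r.password)  # password history to vary (1, 2, 3, ...)
```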

David Moulton: And that data was already available. Sometimes we're giving those digital bits away, publishing them actively, whether as part of our social media presence, our professional social presence, press, or websites, and you're collecting that data. Because of the speed and scale that AI can achieve, you're finding those relationships so that you're not looking through an infinite number of Jane Doe records, but a set very specific to a target you'd like to include in your ethical hacking, to the point where you can say, this is a weakness that is out there, and here's the mitigation we recommend you take.

Ryan Barger: A hundred percent. It's something we use to try and get that initial access into an organization. We take the thousand, the million Jane Doe records, turn them into a potential, you know, 50, and then use those to try and get into an organization for initial access.

David Moulton: Got it. So in terms of payload development, how does AI assist in creating new methods to evade endpoint detection and response, or EDR, systems? And do you have a good example for the audience?

Ryan Barger: Absolutely. As a predecessor to this, one of the things AI is doing is leveling up the unskilled attacker, right? An unskilled attacker attempting to do anything nowadays is able to be much more powerful than they were in a pre-AI era. So let's take that foundation and move into the concept of payload development. When you're developing a payload for a network, you usually start from a pre-existing repository on GitHub, etc., that does something valuable that you need to do. You download that, and the first thing you're trying to do is get past the static signatures of EDR products. You have to change the code in a way that an EDR looks at the static file and says, this is not malicious; it's not affiliated with that GitHub repo that we all know is malicious. Often, that's easy. It's removing keywords, like Mimikatz and all that stuff, that are very highly affiliated with malicious files. But sometimes it comes down to a very specific, rarely used DLL in the Windows library being called, where just the usage of that library decreases the reputation of the file and, as a result, triggers it as malicious. That is the type of static signature that takes a really skilled attacker to start manipulating. Now, I actually ran into this right around the beginning of the AI boom, and I had to find a way to implement a very complex set of logic without using a specific DLL that Windows was already using in a payload. Look, I'm a very senior developer. I've been developing for over 20 years. But when you get into using Windows in ways it is not intended to be used, it still becomes a difficult process to think through and plan. I just dropped a specific snippet of code into AI and said, I would like to perform this exact same functionality without using this DLL, and Bob's your uncle, within seconds it provided me two to three unique starting points with which to modify that code, and by doing so I successfully evaded the signature from a static file analysis standpoint. Now, again, I've got 20-plus years of software development behind me, so I was able to craft the query to the AI in a way that got me a good result, and I was able to interpret and tweak what came back. But still, I think the bar to entry for this type of technique, creating completely unique payloads, is much lower.
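
As a rough illustration of the workflow Ryan describes, the sketch below assembles a prompt asking a model for alternative implementations that avoid a flagged DLL. The `call_llm` helper is a hypothetical stand-in rather than a real API, and the snippet and DLL name are placeholders.

```python
# Hypothetical sketch: asking a model for alternative implementations that
# avoid a specific DLL. `call_llm` is a placeholder, not a real library call.

def call_llm(prompt: str) -> str:
    # Stand-in for whatever chat/completions API you use; returns a canned
    # string so the sketch runs end to end.
    return "<two or three alternative implementation strategies>"

snippet = "/* original payload logic that imports the flagged DLL */"

prompt = (
    "Rewrite the following code so it performs the same functionality "
    "without importing or calling <flagged.dll>. Offer two or three "
    "distinct approaches I can use as starting points:\n\n" + snippet
)

# The output is a set of starting points; a developer still has to interpret,
# tweak, and test each one -- exactly the step that still requires skill.
print(call_llm(prompt))
```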

David Moulton: So, it sounds like if you're an unskilled attacker, it up-levels your game. If you're a very skilled attacker, it speeds up your game.

Ryan Barger: A hundred percent accurate. Very accurate.

David Moulton: Yeah, that's frightening. But it also means that on the ethical hacker side, it's speeding up your game so that you can find these things and help harden against those types of attacks.

Ryan Barger: A hundred percent agree.

David Moulton: Ryan, talk to me about some of the other ways that you're using AI in your attacks.

Ryan Barger: Sure. It's the same general technique. Earlier we talked about OSINT analysis, where I was digging through massive troves of information to find those unique things that are most likely valuable. It's the same thing in the middle of the attack. Once you get that initial access and you're moving through a network, it's the same game: you're collecting troves of data and analyzing it. Let me give an example for reconnaissance. We developed a tool that goes through and looks at all the files on a specified set of file shares. Now, there are already tools that do similar things. In a pre-AI era, we would go out and look for things like p.txt, or files that contained the word password, etc., to try and nail down files that are perhaps of interest. If we start with that, which already gives us a long list of files to dig through, and we incorporate analysis of the returned data with an LLM, we can further home in on which files we should actually chase down to further our attack. Let me give this example. Most of the tools that dig through a network in an automated fashion looking for files are literally looking for things like, is the file called p.txt? Or is it called web.config? Because those are things I want to look at. However, the LLM can look at the greater situation and say, wait a minute, this file is located inside of a default user account directory, and I know that file is not meaningful at all. We can even tell the LLM to open up and take a look at the first 100 lines of the file, and thereby refine the rating of how quickly we need to look at it. So LLMs are capable of greatly increasing the speed with which we perform our initial reconnaissance and look for what I would call "loot" as an attacker, things that will help me further my attack.
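
Here is a minimal sketch of that two-stage triage, assuming a hypothetical model call for the second stage: a classic filename keyword pass first, then an LLM-style rating of the first 100 lines of each candidate. The scoring stub and paths are illustrative only.

```python
# Illustrative two-stage triage of files on a share. Stage 1 is the classic
# keyword pass; stage 2 stands in for an LLM judging context.
from pathlib import Path

KEYWORDS = ("password", "p.txt", "web.config")

def keyword_pass(root: Path) -> list[Path]:
    """Stage 1: the pre-AI approach -- flag files by name alone."""
    return [p for p in root.rglob("*")
            if p.is_file() and any(k in p.name.lower() for k in KEYWORDS)]

def context_rating(path: Path) -> int:
    """Stage 2 (stubbed): a model would read the first 100 lines and rate
    0-10 how useful the file looks, e.g., discounting a web.config under a
    default user profile. Here we return a fixed placeholder score."""
    head = "\n".join(path.read_text(errors="ignore").splitlines()[:100])
    # real version: return int(call_llm(f"Rate 0-10 for an attacker:\n{head}"))
    return 5 if head else 0

candidates = keyword_pass(Path("."))  # point at the target share in practice
ranked = sorted(candidates, key=context_rating, reverse=True)
print(ranked[:10])  # the short list worth a human operator's time
```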

David Moulton: Ryan, so you've got one example of how you're using AI to speed things up, but there have got to be other creative ways, other things that you're doing. Can you go a little deeper? You know, talk to our audience about the types of things that maybe they're not thinking about, but they should be?

Ryan Barger: Sure. Instead of talking about how we move through the network, let's go to something that is a supporting asset to the attack, one you sometimes don't think about. Let's say I gain access to a network. Well, I'm not sitting inside of the building actually typing on a keyboard. I've got to get that communication channel out and have a means to pivot into that network from an external resource. And if it's not a high-reputation resource, certain firewalls are going to catch me. As a good guy, I can't do what some of the bad guys get to do, which is just go pop a random website and pivot off of its earned reputation. So what we have to do is go out, procure domains, set content on those domains that resembles the domain name, and then get the domain categorized appropriately. Very commonly, we'll try to simulate health-based or financial domains, because those often won't be subject to break-and-inspect rules on networks. Long story short, we need to set up infrastructure and have it sitting on the Internet, waiting to be used for an operation. The downside is that our operations often run until we get caught, sometimes intentionally, because our job is to exercise blue, and in that case we've now burned the domain, right? XDRs, or whatever product is in place, are going to see that activity on the domain and tank its reputation. So we have to have basically a trove of infrastructure sitting on the Internet, ready to be used to pivot into organizations. We're using AI to increase the efficiency with which we can stand up that infrastructure. So if you buy a domain, you know, Joe's National Bank dot com, we can now use AI to generate a unique front page with unique content relevant to the financial industry. We can create "contact us" profiles and pictures for fictional employees of the bank. We can establish an entire backstory that helps build fidelity for this. Now, if you're inside of a SOC, you're a network defender, and you see suspicious activity, something gets triggered where you go, hey, there's some weird activity going out to this website. You go to that website, you start moving around it, and it looks like a full-fledged banking website. There's a chance you may look the other way from my attack, declare it a false positive, say, oh, there must be something else going on there, it looks fine. So that's what we do. We establish reputable-looking infrastructure to pivot through, and we're using AI to increase the efficiency with which we do that.
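
Here is a hedged sketch of what that content-generation step might look like. The domain, prompt, and `call_llm` helper are all hypothetical; a real pipeline would also handle templating, images, hosting, and the categorization submission.

```python
# Hypothetical sketch of decoy-content generation for a red-team redirector
# domain. The domain and helper are illustrative, not a real pipeline.

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call; returns canned HTML so this runs.
    return "<html><!-- generated bank homepage would go here --></html>"

domain = "joesnationalbank.com"  # invented example domain

prompt = (
    f"Write a plausible homepage and a 'Contact Us' page for {domain}, a "
    "small regional bank: a mission statement, three services, and four "
    "fictional employee bios. Plain HTML, no external assets."
)

homepage = call_llm(prompt)
# Next steps (manual or scripted): deploy the pages, then submit the domain
# for categorization -- finance/health categories often sit outside
# break-and-inspect rules, which is why those verticals get simulated.
print(homepage[:60])
```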

David Moulton: It sounds like work that is necessary but boring, and you can turn it over to the robot to set up. Then you've got this repository of valid, maybe human-tricking-level content out there, ready to deploy at a moment's notice. That's got to cut down the administration time, and on some level, just the process of going through and clicking and setting up and deploying "contact us" pages and "about us" pages and all those different things where you may go, I don't really want to do this. I'm a senior developer with 20 years of experience, why am I making fake websites? And yet, you need that. So that's an interesting other use of AI that you've got going on. [ Music ] Ryan, let's switch gears and talk about some of the implications of deepfakes. Back in April, in episode 20, I had a chance to talk with our colleagues Billy Hewitt and Tony Hunt about deepfakes, and what they told me was really eye opening and a bit frightening in terms of what attackers are doing with deepfakes, what they're able to pull off. You're on the other side, and I'm wondering if you can share your perspective on that topic today.

Ryan Barger: For sure. From an attacker's perspective, the first thing we've seen is that they're using AI to help create content for, say, a phishing email. Maybe English isn't their first language, so they're using these AI engines to help create more authentic-looking content that hits the target audience without raising suspicions due to grammar issues, etc. Additionally, in the same way that English isn't their first language, perhaps because they're coming from an external attack vector, they might mask an accent by using deepfake capability to kind of increase their credibility as they're on the phone trying to vish someone and gain initial access. Now, from an offensive security perspective, we're doing a similar thing. But instead of masking an accent, we sometimes try to masquerade as someone who's already known inside the organization. Maybe we're trying to convince the Help Desk person that we are, again, Jane Doe, and we can use live deepfake capability to mimic her, as long as we've gained access to her audio somewhere else. An example of this: we were approached recently by an organization that wanted us to attempt a deepfake of one of their very, very senior executives. As they are a large organization and it's a very senior executive, there was YouTube video of them speaking on, you know, Mad Money and all of this stuff. There was enough content for us to demonstrate an ability to capture his audio and potentially create a deepfake. Our initial thought, as always as hackers, was, hey, how can I get initial access? But we actually thought for this customer it might be more interesting to demonstrate a fiscal impact. So we took this very senior member of the organization and recommended, hey, we can mimic this person and have them try to drive a financial impact in your organization. For example, they could post a message inside the Slack channels that indicates we should disavow association with a competitor, or, you know, sell stock, whatever it is that results in a financial move. That was the theoretical way we were going to weaponize that attack. So deepfakes definitely provide a lot of vectors, whether it's offensive security people trying to assess risk or bad guys trying to make a move in the market. There's a lot of ways we can employ deepfakes to further the attack effort.

David Moulton: So you've mentioned how threat actors may use deepfakes, or the white hat community might use them. Are there other ways that they could potentially incorporate AI?

Ryan Barger: So, as a white hat community member, my job is to be ethical, move through a network without really causing any damage, and stay inside the scope I'm assigned to target. One place I think an adversary will use AI, in a way I haven't seen occur yet, but I think the day is coming: we have not had a worm since the AI boom, and I think the day is inevitably going to come when the next big worm leverages local language models to help further its attack.

David Moulton: Ryan, you're mentioning what I think is an AI worm. And if it's an intelligent worm, it can make decisions, maybe strategic decisions. Can you elaborate on how that might impact network security?

Ryan Barger: Absolutely. So let's first explain the difference between a theoretical AI-driven worm and your standard worm. Take one of the last big ones, WannaCry. Pretty much all of them boil down to the same general concept. They're going to land on a machine and assess one or two binary decisions: is there a machine next to me with port 445 open? If so, try to pivot, right? Bottom line, worms aren't too intelligent. They are able to move continually, but they're very reliant on a very specific set of conditions being there for them to do so. If you want to talk about intelligent worms, you have to go back to Stuxnet, which was looking for very specific boxes to pivot to. But at the end of the day, as intelligent as that product was, it was still just a series of binary decisions: is this here, is that here, can I move, yes or no? And then doing so. Now, what a future worm could theoretically do, instead of just moving laterally, saying I'm going to move to the box to my left and the box to my right, is say, I'm going to look at my overall environment. Okay, I've landed on a domain; I'm going to move up the chain, gain control of the entire domain, and then proliferate back down. That could have a cascading effect that I think would result in a very quick spread of that worm. Because as you can imagine, especially in the remote work environment we're living in right now, a worm going down and affecting all the devices connected to a domain means those devices are also connected to home networks. Now it proliferates out from those vantage points, and so on and so forth. So I think its ability to make more conscious decisions about moving, to run a creative attack that goes up and then out versus just pivoting left and right, is going to be a game changer with regards to the nature of worm-based attacks. It's going to be interesting.
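
To show how little "intelligence" a classic worm's decision actually involves, here is a harmless Python connectivity check of the kind Ryan describes, a single binary test for TCP/445. This illustrates the decision logic only; there is no propagation code.

```python
# Harmless illustration of the single binary decision a classic worm makes:
# "is SMB (TCP/445) open on a neighbor?" No propagation logic here.
import socket

def smb_open(host: str, timeout: float = 1.0) -> bool:
    """Return True if the host accepts a TCP connection on port 445."""
    try:
        with socket.create_connection((host, 445), timeout=timeout):
            return True
    except OSError:
        return False

neighbor = "192.0.2.10"  # documentation-range address; a lab value only
if smb_open(neighbor):
    print("classic worm logic: attempt pivot")
else:
    print("classic worm logic: dead end")
# An AI-driven worm, by contrast, would reason about the whole environment
# (e.g., "move up the chain to the domain, then proliferate back down").
```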

David Moulton: Ryan, before we wrap up the conversation, let's talk about some of the less sexy things that are going on with AI. You run a red team, and I know there is administrivia in any job. Can you talk to me about how you're using AI behind the scenes to speed things up in reporting or in analysis, so that we're more efficient inside the business?

Ryan Barger: Yeah, I want to hit on reporting, the first thing you mentioned, because it's everyone's least favorite part of hacking, right? Inevitably, a hacker can get giddy about their job. It's a 10-year-old kid's dream job to say, I'm a hacker for a living, and then all of a sudden one day you find yourself three days into writing a report, going, oh my gosh, this is miserable. So everyone wants to eliminate or lessen the pain that comes with report writing and focus in on the really fun things, like EDR evasion, like moving through a network and finding unique zero days to help pivot, and new attack techniques. That's what we want to focus on. So how do we eliminate reporting? It was actually one of the first things we started working on within our team when the AI boom occurred. We built a tool that we call TARDIS. For those of you who appreciate sci-fi references, that's the Doctor Who vehicle. TARDIS is the Tensor Automated Report Drafting Information System, and what it does is chew on data provided to it throughout the event to help create that initial draft of the report. It does so by chewing on raw logs created by users to help discern what the attack narrative is. It's also starting from known-good starting points, previous reports, and some canned verbiage that we keep around, and tweaking that based on what was found inside a specific test event. So it's basically creating a good initial draft for an analyst to work through. Right after we fielded the latest version of TARDIS, one of my more senior analysts came up to me after he used it and said, that is the quickest I've ever written a report in my life, which I think is a glowing five-star recommendation, you know, five out of four stars, for the product doing its job. And by the way, the reports are amazing. I've been doing this for 20 years, and I'm looking at some of these reports, and they are top notch with much less effort. So it's definitely increasing efficiency for us and removing the pain points, while still getting us what I think is one of the most, if not the most, important parts of the test process. At the end of the day, the client doesn't care about what techniques we used to pivot; they care about getting a digestible and comprehensible report, and I think we're doing a good job generating those, with the assistance of AI, in a more efficient manner.
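
TARDIS's internals aren't public, so the sketch below is only a guess at the general shape of such a drafting step: combine raw operator logs with canned finding verbiage and ask a model for a first-draft narrative. The `call_llm` helper and all strings are illustrative.

```python
# Guess at the shape of a TARDIS-style drafting step: raw operator logs plus
# canned verbiage in, first-draft narrative out. All strings are invented.

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call; returns a stub so the sketch runs.
    return "<first-draft attack narrative>"

raw_logs = (
    "10:02 operator uploaded beacon to WEB01\n"
    "10:15 credentials dumped on WEB01\n"
    "10:40 lateral movement to FILE02"
)
canned = "FINDING (template): Weak service-account passwords allowed ..."

prompt = (
    "You are drafting a penetration test report. From the attack logs below, "
    "reconstruct the attack narrative in chronological order, then adapt the "
    "canned finding text to what was actually observed.\n\n"
    f"LOGS:\n{raw_logs}\n\nCANNED FINDINGS:\n{canned}"
)

draft = call_llm(prompt)  # an analyst reviews and finishes this draft
print(draft)
```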

David Moulton: I talked to Nearzik [assumed spelling] recently, and one of the things he said is, you know, humans are going to have their role and AI is going to have its role. AI is going to be good at scaling things and going faster, and humans have to be good at doing the things that machines can't do. Report writing sounds like one of the things humans don't want to do, and I like that distinction of saying, let's give the machine some of the things we don't want to do. There's a great meme out there right now that says, why does AI get to write songs and make art while I still have to do my laundry? And Ryan, you flipped the script. You've given it the laundry task, and you get to do the fun stuff. So that's awesome. As you just stated, humans are still a critical part of security. I'm curious how you see the role of the cybersecurity professional evolving. Maybe it's more soft skills. Maybe it's new skills. Maybe it's folks who traditionally haven't been part of the cybersecurity community coming in. What do you see in that future landscape?

Ryan Barger: So, I think AI is definitely going to have an impact, and the offensive security industry will look different. Whether it's six months out or, well, people always ask me, what do you think it's going to look like in five years? And I say, since the AI boom, I have trouble predicting what it's going to look like in six months. So there is definitely going to be a change. I can summarize it in two ways. One: historically, I take one tester and put that tester on a specific event and say, your job is to target Widget Incorporated, go after them. I think the day is coming when that's not going to be the paradigm. Instead, that one tester will be orchestrating and guiding as AI conducts four or five test events under their oversight. So I think there is going to be enough increased efficiency that one tester should be able to do the work of, say, three to four testers. In the near future, I think that's coming. What does that look like? You said there's a human element, right? There is a very prominent tool inside the offensive security industry known as BloodHound, created by SpecterOps. BloodHound looks at the mass amount of data inside a domain and, based on a certain set of rules, starts to create potential attack paths for you to gain and increase privilege and proliferate throughout a domain. It takes this massive data ingest and creates nice diagrams that are easy to digest. They also present really well to an executive audience, to help them understand how a certain misconfiguration can lead to an inevitable compromise and further a kill chain. In the same way, I think AI will inevitably be able to assess what reconnaissance identifies inside of a network and create potential diagrams, and then the human has to discern which vector is safe, or which vector most likely achieves the desired outcome. Maybe my outcome is, I want to get caught, because I want to exercise my blue team. Maybe I don't want to own the entire domain; I just want to get access to a very specific system and evade detection. I think AI is going to help us create those potential attack paths, but you're still going to need the human. Because at the end of the day, a lot of what we're doing in offensive security is using machines in ways they weren't intended to be used, and as a result, that's inherently dangerous. You do want a human in the middle making the conscious decision: do I pull the trigger? Do I send this round downrange? Because it may break something. So I see AI helping to identify attack paths in a more efficient manner and doing a lot of things to increase efficiency, but you will still want a human to assess that data, pull the trigger, and move on the attack.
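
BloodHound models this as graph traversal, and a much-simplified version of that idea fits in a few lines with the networkx library. The nodes, edge labels, and path below are invented for illustration; real BloodHound ingests far richer Active Directory data.

```python
# Much-simplified, BloodHound-flavored idea: model domain relationships as a
# directed graph and compute a path to a high-value target. Requires networkx
# (pip install networkx). Nodes and edges are invented for illustration.
import networkx as nx

g = nx.DiGraph()
g.add_edge("jane.doe", "HELPDESK-WS01", relation="AdminTo")        # local admin
g.add_edge("HELPDESK-WS01", "helpdesk.admin", relation="HasSession")  # creds on box
g.add_edge("helpdesk.admin", "Domain Admins", relation="MemberOf")    # group member

# AI might propose several such paths from recon data; the human operator
# still decides which one is safe to execute on a production network.
path = nx.shortest_path(g, source="jane.doe", target="Domain Admins")
print(" -> ".join(path))
```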

David Moulton: Ryan, I often joke that the AI we often talk about is artificial intelligence, but the Unit 42 team is the actual intelligence team, and I like this idea that the future of offensive security coming out of your teams has actual intelligence applied to the power and scale and speed of artificial intelligence. That's a concern, but it's one that is lessened when there are responsible folks on this side taking care of things. So I always like to ask, what's the most important thing a listener should remember from our conversation?

Ryan Barger: So, I think everything comes back to the core fact that we are definitely living in an AI boom, and I hit earlier on the fact that I can't picture what six months from now looks like, and in the same way, I can't picture what five years looks like. So as we make our decisions, whether they're cybersecurity based or design based, whatever you're doing in your organizations, you should be aware that this is a rapidly changing landscape. A key takeaway we also hit on here is that there is increased efficacy on my side as an ethical hacker, but at the same point in time, the adversary is going to benefit from that same increased efficiency, so we are looking at a potentially more dangerous threat landscape. It's time to really pause and assess: have I done everything to do due diligence in preparation for a potential coming wave of more advanced cyber attacks? Have I deployed the right tooling? Have I had penetration tests done by an independent authority to assess my network? Because at the end of the day, take the theoretical AI-driven worm I mentioned. Off the bat, it's going to look for those top 10 or 20 things that I would look for as I'm moving through a network, and if it finds them, it's going to proliferate through. So have you done everything possible to identify the low-hanging fruit that allows for movement through your network? Have you done everything possible to improve detection, so that your mean time to detection from a compromise is as quick as possible? Maybe even automated response: if you start seeing attack techniques, can your network respond accordingly? So I think the takeaway is, you're in the middle of an AI boom. Don't go back and concentrate on the same problems you've always had. Make sure you're spending time looking forward and thinking about the problems that are coming, which theoretically could be much, much more advanced. [ Music ]

David Moulton: Ryan, thanks for an awesome conversation today. I really appreciate your insights on AI, automation, and offensive security, worms, which I didn't expect us to get into, deepfakes, and so much more.

Ryan Barger: It has been a sincere pleasure. Thank you for having me, David. I appreciate the invite. And hopefully I was able to leave something of value for the listeners.

David Moulton: That's it for today. If you like what you heard, please subscribe wherever you listen, and leave us a review on Apple Podcasts or Spotify. Your reviews and feedback really do help us understand what you want to hear about. I want to thank our Executive Producer, Michael Heller, and our content and production team, which includes Kenny Miller, Joe Benecourt, and Virginia Tran. I edit Threat Vector, and Elliott Peltzman mixes our audio. We'll be back next week. Until then, stay secure, stay vigilant. Goodbye for now. [ Music ]