
Of Spies and Hitmen
Perry Carpenter: Recorded live at the 8th Layer Media Studios in the back rooms of the Deep Web, this is The FAIK Files. When tech gets weird, we're here to make sense of it. I'm Perry Carpenter. Mason Amadeus is out this week, so I'm holding down the fort. First up, we're going to give some updates about our YouTube channel and FAIK, the book that started everything. After that, we've got a great interview with Eric O'Neill, former FBI counterintelligence and counterterrorism operative, and then we'll close it out with something maybe a little bit terrifying, an AI agent able to hire an assassin on the dark web. Well, kind of. All of that and more. Sit back, relax, and this response may go against our usage guidelines. We'll open up the FAIK files right after this.
Perry Carpenter: All right, so before we get into the interview today, I wanted to do just a little bit of housekeeping. First of all, just to let you know, we do have a new YouTube channel that has been in existence for, oh, I guess, a week and a half at this point. We are starting to put segments of the podcast on there, so you can actually see us and you can see any interactions that we do or maybe demos that we start doing. I'm also starting to put in things like tutorials about how to create your own deepfakes or how to demonstrate some of the things that we may be worried about or we may be talking about. The first demonstration that I did was how celebrity deepfake endorsement scams happen, and I demonstrate that by having Taylor Swift endorse my book. And then another video that I just uploaded has me demonstrating several common deepfake tools, specifically live deepfakes, so the types of things that somebody might do whenever they're going to jump on a Zoom call and try to scam somebody. And the reason that I do these is because knowledge is power. When you see what's possible and you even see some of the little idiosyncrasies, the oddities that happen with those, then that helps you view the world differently. It helps you understand what's possible and some of the things that you might be on the lookout for as you go out through life. The next video that I put out, probably within a few hours of this podcast dropping, will have me showing how to detect some common deepfake oddities, and so that will be one that's really interesting to you. Even if you're not interested in how to make deepfakes, this will show you the state of where most technology is right now, and then how to start to look for some of the things that might give away whether somebody is using that technology. All right, next, that brings us to another thing that you may be interested in. 
If you purchased my book, FAIK, that is all about deepfakes, disinformation, and living in the world we find ourselves in right now, to accompany the book, I recently created a discussion and activity guide, and, frankly, this can be useful to you even if you don't own the book or don't plan to own the book. This goes through a number of different questions, activities that you can do, all around understanding what's possible with AI technology right now and how it can be used to deceive us. So even within this, there are exercises like "go create your own piece of disinformation." What type of headline would make you want to click it? What type of image might pair with that? What message might somebody try to get across to you to further polarize you? All of those kinds of exercises, activities, and thought experiments are in this resource guide. It's just over 40 pages, and it's free. You don't have to register. You don't have to do anything else. You can download it right now. If you're on the YouTube channel, you can see me scrolling through the page. There's also the 10-part audio mini-series where Mason and I break down many of the concepts of the book. That also is free. There's a Spotify playlist where you can get that, or you can just go to the podcast feed for that, and then I have some additional digital downloads that are just fun. And I'll be adding more and more to this as time goes on as well, but that is out there for you, and I thought that that was interesting enough that I even put out a press release on it to explain everything that's there and why I believe that this is important. This is something that parents can use. This is something that avid readers can use. This is something that teachers can use, and I think it provides a lot of value for the low, low price of free, and you don't even have to give us any information to go grab it.
So that's how important I thought that this was as a step towards giving the world a little bit deeper digital literacy. And that's just about it for housekeeping. Again, with the YouTube channel, that's only been around for just over a week right now. We've got some good traction so far, but I would love it if we can double the subscriber count by this time next week. So right now we're at 95 subscribers, I think, as I record this on Thursday, January 23rd. If we could get to 200 or more subscribers by Friday of next week, that would be really, really awesome because this is important information and it takes a lot of work to do it, so ensuring that people are actually able to watch it, to take in the content and be able to access it with the ease of a search engine is really important. So that's why a YouTube strategy is important for us. Okay, enough housekeeping. Let's get to the interview. This is an interview that Mason and I have wanted to get out for a while. You'll be able to tell we recorded this right before the new year. This is a great discussion with former FBI counterintelligence and counterterrorism expert Eric O'Neill. He's going to tell us all about his background, which is super interesting, and then we're going to talk about what threats he is interested in and worried about and what hopes he has in the age of AI.
Perry Carpenter: All right, and we have the honor of sitting down with Eric O'Neill. Really, really looking forward to this. Eric is the author of Gray Day. He's former FBI. He's got a ton of current research in AI and a new book coming out in 2025 on that topic. The title is in flux at this point, so we won't mention it, but Eric, thank you so much for jumping on with us.
Eric O'Neill: Perry, Mason, it is great to be here.
Perry Carpenter: Why don't we just jump into Gray Day? So that's the book that people can pick up right now. I want to understand the backstory because I think that that's going to give people a really good understanding of who you are and where your mind has been and where your talent lies, and then kind of sling us into the 2025 project.
Eric O'Neill: Yeah, I'm happy to do that. Gray Day is a book about how I went undercover. It's told through a first-person narrative, so you see the undercover investigation through my eyes to catch Robert Hanssen, who was arguably the most damaging spy in U.S. history, we're still repairing the damage many years later, and it was the most unique case the FBI has ever run. I was asked to go undercover in FBI headquarters to catch a spy that the entire intelligence community had been hunting for over 22 years. He was the top mole, the top mole for the Soviet Union and then the Russian Federation. He lasted so long, he survived the collapse of the Soviet Union, for over two decades. That's 22 years of his 25-year career in the FBI. He was a veteran FBI agent, at one point he was a supervisory special agent, and was the top Russian analyst, and, of course, he had sold secrets to Russia, and at one point, he was even asked to catch himself. So this was a pretty difficult ask: to go undercover as myself in FBI headquarters with only a handful of people in the entire building knowing that this case was happening, and have to not only nail down the information confirming that this guy was the spy we were after, this legendary spy known only as "Gray Suit," but then find a smoking gun that would lead to his arrest so we would have a slam-dunk case for the Justice Department to make sure that he was prosecuted.
Mason Amadeus: That is so cool. When I told my wife that, I was like, "I'm going to speak with a former FBI agent," and, you know, I didn't expect to speak with the former FBI agent involved in like a heist movie in real life, like a real -- that's like real spy movie stuff that you lived through.
Eric O'Neill: Well, it was a complicated case, and the problem with the case was that Robert Hanssen wasn't just the most damaging spy in U.S. history. I mean, he had given secrets to the Soviet Union during the height of the Cold War, including our nuclear warfare plan, and that's not a good thing for them to have if that ever happened, you know, undercover operations, undercover operatives, huge billion-dollar plans that we had that he had given up. But it wasn't just what he did that's important, especially for your audience, it's how he did it. He was also our first cyber spy. He had stolen information, for decades, from computer systems in the FBI that were never built to be defended, and he was, I think, one of the first spies of his magnitude to drop data to the Russians. He dropped floppy disks, and back in the day, it was the, you know, the big five and a quarter, and then he went to the three and a half, and I think we caught him before he could move to thumb drives, and now, you know, all cyberattacks and cyberespionage are internet-based penetrations and spear phishing, but back then he would wrap those floppy disks in a trash bag, seal it with packing tape, and put it under a bridge in Foxstone Park in Vienna, Virginia, and then the intelligence officer would leave the embassy and do this whole long eight-hour surveillance detection run and then clandestinely pick up the drop when he didn't think he had surveillance on him. It's the perfect combination of the old cloak-and-dagger espionage and today's cyberespionage, and [inaudible 00:11:10] is right on the wave of that cardinal change in how not only cyberespionage but cybercrime has evolved, and that is the essence of the book Gray Day.
Mason Amadeus: I don't want to derail, but I'm so curious, because when we get figures like this in history, these great, I don't want to say "great," but like these infamous spies, we conjure to mind all sorts of personality traits that we ascribe to them, and you actually knew this person. So like what was he like IRL just as a person? Was he particularly like a genius or anything like that, or was he just a guy?
Eric O'Neill: He was brilliant. He was one of the top analysts in the FBI. He was a spy-hunter. So I had to go undercover to catch someone who was very highly trained to catch somebody who's trying to catch him. You know, the sad thing is he had brilliant ideas for how we can improve cybersecurity in the FBI, and the trap for Hanssen was that the FBI built a brand-new division called the "Information Assurance Section," which, hey, this is back in 2000, 2001. Today, that means cybersecurity. So they put him in charge of building cybersecurity for the FBI, knowing that he was, you know, the worst threat actor in the FBI, exploiting computer systems, to give him access to data so he could spy and we could catch him red-handed, and the only other person standing between him and getting away with it was me, and I was never trained to do this. Never trained to do this. I just happened to be the undercover operative who knew how to catch a spy and turn on a computer, and that's how I got the role.
Perry Carpenter: Building his own honeypot, essentially. That's crazy.
Eric O'Neill: Yeah, in a way, and so to go into some of his personality traits, because they were critical to the case, I was putting together a psychological profile, he was a huge narcissist. You know, genius, narcissist. He was never wrong, disliked authority, didn't like anyone above him in the chain of command. He was also one of the most horrible bosses on earth. He was rough and brash and he could curse you out for dropping a pencil, and he was savvy. I could tell that he was always trying to determine whether he was under investigation. And so my marching orders were, one, don't screw up. That was the first and most important thing. Sell this. Make him feel like it's true. Don't screw up, because everybody thought if there was a point of failure, it was going to be me, so, you know, going in, there was always a ton of pressure. And then number two, confirm he's the spy we've been after for 22 years. It's like, well, you guys couldn't catch him for 22 years. What am I supposed to do? And then three, find a smoking gun so we have an airtight case for the Justice Department, and how is anyone supposed to do that?
Mason Amadeus: And you were like 25, right?
Perry Carpenter: Yeah, I was going to say, how early in your career were you?
Eric O'Neill: Yeah, I was like 26, 27. I was, look, you know, once upon a time, people, you know, ran kingdoms at that age, so I can't say.
Perry Carpenter: Right.
Eric O'Neill: But today, in today's society, that's pretty young to work a case of that magnitude, but look, you know, the FBI excels at finding the right person for the right job, and it just turns out I didn't know it, but I was that right person.
Perry Carpenter: So what was your story? How did you get in with him?
Eric O'Neill: Well, I was just assigned to him. I was an undercover operative called a "ghost," so my job was high-level surveillance, investigations of terrorists and spies, mostly around the United States Capitol, Washington, D.C., everywhere from Baltimore down to Richmond, and we would follow and investigate these high-profile targets, and, you know, back in the late '90s, I decided that it was nonsensical to handwrite surveillance logs and then hand in the paper, signed paper to analysts who would then literally lay it all out on a table and try to make links. I mean, that seems silly to someone who was in law school at the time and using things like Lexis and Westlaw, these powerful databases, and I thought, I'll just write a database, and so I started on my own time writing a database that did predictive analysis, and the idea being that, look, we followed these Russians for decades and they have to plan their signals and dead drops and meets and signs of life and all that years in advance. It's all pre-planned, especially with their assets. So I figured that if we could upload all of that into even a basic database that just predicts time and location, right, and we look at everywhere they've been in the past, we can predict where they're going to be in the future, and it worked. The problem is that in the FBI, when you do anything outside of the norm, you get, "This is government, period." You get notoriety, right? You don't get a lot of kudos or an award, but it was enough to sell to Hanssen that this guy's a maverick. He's kind of like you. He wrote this whole program on his own. You know, he's in law school. He's looking for a way not to have to do, you know, overnight surveillance and late-night surveillance and keeps missing school and complaining, and it sold well.
Perry Carpenter: Yeah.
Mason Amadeus: I can see how that would make total sense. That's so -- yeah. Also, that's cool to make that -- was it -- what, like, you said it was early '90s that you were working on this database? Just from like a technical perspective --
Eric O'Neill: Mid-'90s, yeah. I used Microsoft Access because it was free.
Mason Amadeus: Really?
Perry Carpenter: Yeah, there you go.
Mason Amadeus: Wicked.
Eric O'Neill: It was free in the sense that it was already on the squad computer. I'd never used Access before; I just taught myself how to use it and built it, and the trick was, in order to get it to work, everybody had to type their surveillance logs in Word, and then it could upload directly in, and it would keyword search and pull all the data we needed from each log. It was very automated.
Perry Carpenter: Nice, and then just do a whole bunch of SQL joins and everything else.
Eric O'Neill: Yeah. It was agonizing to write, agonizing.
Mason Amadeus: I bet, but at the time, that's a huge boon, right? And then also -- yeah. That's wicked cool.
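The predictive idea O'Neill describes, mining past surveillance logs for recurring time-and-place patterns to forecast where a target will appear next, can be sketched in a few lines. This is a hypothetical illustration in Python, not the actual system (which was a Microsoft Access database fed from Word-typed logs); the function name and sample data are invented for the example.

```python
from collections import Counter
from datetime import datetime

def predict_next(sightings, top_n=3):
    """Rank (location, weekday, hour) slots by how often they recur
    across historical sightings: the most frequent slot is the best
    guess for where and when the target will show up next."""
    counts = Counter()
    for ts, location in sightings:
        counts[(location, ts.strftime("%A"), ts.hour)] += 1
    return [slot for slot, _ in counts.most_common(top_n)]

# Toy surveillance log: (timestamp, location) pairs, as a keyword
# search over typed logs might have extracted them.
logs = [
    (datetime(1999, 3, 1, 18), "Foxstone Park"),   # a Monday, 6 p.m.
    (datetime(1999, 3, 8, 18), "Foxstone Park"),
    (datetime(1999, 3, 15, 18), "Foxstone Park"),
    (datetime(1999, 3, 4, 9), "Embassy"),
]

print(predict_next(logs))
```

On this toy data, the recurring Monday 6 p.m. Foxstone Park sightings rank first, which is the kind of time-and-location prediction O'Neill says the database produced, just scaled down to a few lines.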
Perry Carpenter: That is really cool. So you've seen, of course, everything. I mean, you were just talking about the transition from floppy disk to smaller floppy disk to CD-ROM, thumb drive, and now SaaS. What have the past few years been like for you as you've watched the increased popularity and sophistication of AI and even, I guess more specifically, things like deepfakes and generative AI being used for disinformation and for scams?
Eric O'Neill: Yeah, absolutely. Cybercrime was already a mad sprint for strategists and threat-hunters and everyone in cybersecurity trying to keep up with the criminals, because they have not only organized into these large gangs, or I like to call them "syndicates," because I want people to elevate their thinking of just how sophisticated the new cybercrime threat actors are, right? They have also, as you alluded to, used AI to scale their operations, to write novel code, and to repair and correct the biggest problem they had, which was the lack of grammar and spelling and the ability to write persuasively, and now the AI writes it all for them. So spear phishing has increased exponentially, and so has its success as a vector of attack, but more than that, AI now can do all of the work for them, just like it does for a lot of people in good and honest business. AI programs never sleep, so they can continually scan and persist on networks to try to find that flaw, that unpatched program or third-party application that's connected. It can continually launch attacks against people using texts and emails just to try to find that one little hook. It's writing novel code. Just a few months ago, people were saying, "AI is OK at writing code, but you have to do a lot of work. Give it a month, it's going to be better than humans." And, you know, it's funny because my son wants to go into robotics and he's coding, and I'm wondering if by the time he graduates high school, it's all going to be prompt engineering. I mean, do you want to sit there for a day writing code, or do you want to just ask the AI to do it and just quickly scan it and see if the code's right?
Mason Amadeus: And if it isn't, feed it back in and ask it what went wrong.
Eric O'Neill: AI is already, I mean, it passed the Wharton MBA exam. You know, there was a study where AI was better at predicting infectious disease than doctors. It had like a 78% rate, and the patients also remarked that they preferred speaking to the AI over the doctor. It had better bedside manner. I mean, it's --
Mason Amadeus: I didn't catch that detail. Really?
Eric O'Neill: It's changing law. It's changing, I mean, it's changing the world. I think, you know, it's not just the cybercrime that we have to worry about because it's being exploited and used, and, of course, cybersecurity is racing to beat it. There's good AI and bad AI, but it is also changing generations. And I do worry quite a bit. I have, you know, two teenagers and an 11-year-old. I worry quite a bit about how the use of AI in education is, you know, sort of causing a forced laziness. In my new book, which currently, Perry, is titled "The Invisible Threat," it may not end up being "The Invisible Threat," but it will be Eric O'Neill's new book and it's highly anticipated, but I have a chapter --
Perry Carpenter: And there's a link to your website in the show notes, so whatever updates, listeners can go there and see what's up.
Eric O'Neill: I have a chapter all about AI and impersonation attacks, and, you know, also a chapter about exploitation, right? Because I'm talking about how spies do what they do, how cyber criminals who have learned from the spies do what they do just by modeling them, and what we can do to stop them, all in the same fun, thrilling storytelling that I use in Gray Day. But in the chapter on impersonation and exploitation, I also talk about how AI will change generations, and I start by asking what happens when there are no blank pages. Now, think about this for a second. It was funny, my editor pointed out that "my day" wasn't 10 years ago, it was quite a bit longer, so I had to change it to, in my day as a firm Gen X-er, you know, when you received an assignment to write an essay in school, you would pull out a sheet of loose leaf and a Number 2 pencil and get to work, and that first, most agonizing moment where you stare at a blank page and have to think of how to start the essay is the most magical, because that is where your mind has to start from nothing and create. That is the essence of American ingenuity, of humanity's ingenuity, right?
Perry Carpenter: Right.
Eric O'Neill: It's that moment where you go from nothing to just dreaming to creation, but when a whole generation starts by just prompting ChatGPT to generate for them, there's no creation. It's regurgitation. It's something that's already been done, that the large language model has learned and then is spitting out, maybe in a new and unique way based on your prompt, but it's still the old being repurposed. And so what happens when there are no blank pages? What happens when we have a generation of people who don't create or dream, they prompt? And I think there's a problem there.
Perry Carpenter: Yeah, I would tend to agree, but it's interesting the way you phrased that. You said that when you have the paper out and you have your pencil or your pen or you're looking at the blinking cursor, you mentioned it as the most magical moment, which it is, because you're creating out of nothing or you're creating out of your own thoughts. At the same time, as the person who's staring at the blank page, it can feel like the most agonizing moment because it's like --
Eric O'Neill: Like, I'm an author. I get that. I feel it.
Perry Carpenter: Right, and so I'm wondering if there are good and healthy ways that we can teach our kids to use the tools that are in front of them in ways that are not mentally lazy, but are ways that foster the creativity that we're hoping gets put on the page the rest of the time.
Eric O'Neill: Yeah, and I think you do it the same way -- maybe it's the same way that you approach mathematics. I mean, you don't start by handing the kid a calculator. You start with two plus two equals, and they have to figure it out, right? If you hand them a calculator, they're never going to make those connections they need to make in order to figure out the advanced math where you actually need a calculator. Otherwise, you're just -- you're just punishing them, right?
Perry Carpenter: Right.
Eric O'Neill: Also, we need more boredom. Technology in itself has just killed boredom, and boredom is important. A little bit of boredom is where your mind -- your mind's always working. Your mind wants to have fun. That's all it wants to do. It just wants to have fun. Your mind is like the biggest partier on earth. It just wants to have fun, and if you're bored, it's just going to start going to work. It's going to start dreaming. It's going to start thinking of things. I make some of the best connections when I'm in the backyard just mowing my lawn because it's the most boring, repetitive, pushing it around, and I still mow my own lawn. I don't need to, but it's like, why pay to go to CrossFit when you could just go mow your lawn? It's the same kind of work, right? And I just put on music, and as I'm going, doing these circles, they get, you know, these concentric circles, they get smaller and smaller as I get toward the center of the lawn, I start thinking about things. And, you know, there's those moments you need in order to -- and then sometimes I got to stop and grab my phone and pull out the notes app and like type really quickly before I lose it.
Mason Amadeus: Yeah.
Eric O'Neill: So yeah.
Mason Amadeus: Yeah, it's that default mode network, right, that you enter, where you're not ruminating but coming up with your own novel thoughts and entertaining ideas as they enter your head. I do think that that is something we are largely missing culturally now, with the attention economy's algorithmically fed stuff that you can just constantly distract your brain with.
Eric O'Neill: Doom scrolling.
Mason Amadeus: Yeah, I'm a little bit hesitant to put that like diametrically opposed to the idea of creativity, because I do think -- there's a part of me, and maybe it's just the optimist in me, that wants to think that human creativity will win out and that this is forcing us to have a lot of conversations about how we define the creative process, because in some cases, the creative process can be curation. For example, something I do a lot is sound design, and I'm not Foleying and recording every zip, zap, zop I put together, but I'm collaging things other people have made. And so I think that part of the biggest challenge of AI really is that we're forced to have all of these conversations to redefine human value in so many domains that I think we kind of have to slice them off vertically to think about, okay, AI and education, AI and art, AI and this. But I think I'm hesitant to think that human creativity will be stifled or that kids will not want to learn, because I think curiosity is always there and very human, and if we reevaluate our educational systems to be less based on standardized tests or regurgitation, that that might be a better approach, and I'm curious what your thoughts are on all of that, I guess. [ Sound effects ]
Voiceover: This is The FAIK Files.
Eric O'Neill: I'm not saying that AI is a bad thing. I'm saying that -- I mean, I use it. It's a tool. It fascinates me, and I love it. I keep finding something new and cool and different that pops up. There are 15,000 companies right now that are building AI applications. There are over 100,000 AI apps, something like that. It's crazy. I mean, there's an app for -- like that old saying, "There's an app for that." There is. There's an AI app for everything. Most of it is free. You know, it's throttled. It says, oh, you've used all your credits for the day. Okay, I'll come back tomorrow or I'll go to one of the other hundreds of them.
Mason Amadeus: Time to make a Gmail.
Eric O'Neill: But I mean, in that way, and this is, Mason, I'm getting to what you're saying, it has opened up new avenues of creation for people, which I think is wonderful. There was already an explosion of content creators, and work that used to take them a day they can now do in two or three minutes. There will be whole economies built on this. I think that by the time my son, who is a freshman, is in college, there will be classes, like real serious classes, in prompt engineering that may be even more critical than some of the programming that he's doing. So I'm with you there. I just think that, especially for young kids, just as, you know, we decide with our children, no phone until 14, there is a time for technology and there's a time to hold off, right? Because you still have to make those mental connections before you dive into the deep end, and we need a little bit more of that.
Perry Carpenter: I definitely see an interesting spectrum there, because I'm kind of like you. I have to schedule no input time almost on my calendar to where it's like I don't have earbuds in, I'm not listening to a podcast, I'm not watching a video, because some of my biggest connections between everything that I've taken in happen when my mind has nothing else to do other than to try to figure out how to assimilate everything.
Eric O'Neill: Yeah, that's you mowing your lawn.
Perry Carpenter: Exactly. For me, it's like driving around the block or going and doing errands, and that's like where I end up taking a lot of notes and sending them to myself to figure out what do I do with those later. At the same time, I think that it's interesting to figure out where the new boundaries and parameters are that put us in whatever creative box we have to live in, because when we have unbounded freedom, we almost hit this analysis paralysis thing. It's like I can do anything I want, but as soon as we realize the limitations of the tool set that we have, now we have to figure out the creative ways to solve for those limitations, and that opens up a whole new realm of creativity. And I think that when it comes to like all the different tools, the interesting place of vertigo that we get in is that everything is changing and improving so fast that the boundary that we had just adapted to becomes no longer there, but at the same time, we're still learning all these interesting ways to combine things and to create new things that maybe we didn't even know was possible a while back. The really, really interesting thing is that people that have creativity, regardless of the tool set, will be the ones that flourish. People that have ingenuity and will push through and have grit will be the ones that flourish, whether they're using AI or whether they're not, because they're going to be the ones that, like, take all the little tinker toys in front of them and figure out the interesting way to combine those.
Eric O'Neill: And then there's the dark side, right? That's all nice, but then there's the dark side. Literally the dark web, right? And there, you know, that old saying, "Here there be monsters," and dark web cyber criminals of all sizes and stripes and expertise and ingenuity are using AI to create some of the most destructive scams, the most disgusting content. I mean, just starting at the base level, there are massive child pornography marketplaces, and, in fact, analyses of the dark web show that for people who get the Tor browser and figure out how to go on the dark web, the number one search is child pornography, yeah. And a lot of it now is AI-generated, because you can get AI to generate it and you don't have to -- I don't even know how you find a child. I mean, it's mind-bogglingly horrible. You know, and then take it to a different level, the biggest scam right now is cryptocurrency investment scams. Pig butchering is one of those, and it's sort of an evolution of romance fraud. I call them all "impersonation scams" or "impersonation attacks," where you pretend to be someone who gains the trust of somebody else by learning about them and creating this perfect avatar. AI has made this so much easier because now you can use AI avatars. You can use AI to clone a voice. You can now sound like someone you're not. You can speak in any language you want and be incredibly persuasive. You have to be careful when you're on dating apps because you don't know whether you're talking to a person or AI.
Mason Amadeus: And it's so easy to do now.
Eric O'Neill: And it's easy.
Mason Amadeus: That's like the speed at which you can do it and the effort it takes is nothing, so that's terrifying.
Eric O'Neill: I used to say it takes five minutes and $5. Now it doesn't even take $5.
Mason Amadeus: Yeah, for real.
Eric O'Neill: It's all free, right. So the AI is being used, and particularly in AI and impersonation attacks and confidence schemes, to defraud people of millions. I mean, you saw what happened to, you know, I think arguably the top architecture design company on earth. I mean, these are the guys who designed the Sydney Opera House. They were caught up in a very sophisticated AI-based impersonation scheme where a finance worker in Hong Kong, in the Hong Kong branch of a U.K. company, was asked to join a Zoom meeting by the CFO, like his boss's boss's boss from the U.K. It was an email that said, "Join the Zoom meeting," not "Send a wire," but "Join the Zoom meeting." He jumped on the Zoom meeting. There's the CFO and two people he recognizes from finance and two he doesn't, and everyone introduces themselves. He learns that the two people he doesn't know are partners in this new, very top secret venture, and he needs to start sending money, and then the emails from the CFO start saying, "Send wires," and they direct him to send, and over the next two weeks, he sends 15 wires for $25 million right into a cybercrime group's pocket and it's gone, and this is happening more and more and more.
Mason Amadeus: Wow, when was that? I missed -- I didn't see that story.
Eric O'Neill: This happened last year.
Mason Amadeus: No way. Holy smokes.
Eric O'Neill: There are companies that have been attacked where they clone the voice of the CEO and, you know, make a phone call, "I need this done right away," or where they're cloning a real person into an avatar. I mean, when you're looking at your screen and you're looking at all the little boxes, it's clever to have all those people instead of just having the CFO, right? Because the more boxes, the smaller they are, right, and sometimes it's blurry, and deepfakes are getting more and more hyper-realistic, yeah.
Mason Amadeus: And you're mentally prepared to excuse any inconsistencies because we've all gotten used to the inconsistencies of video conferencing, like stuff looks weird. Oh, that's just Zoom. It's not a big deal. It's clever exploitation of all of these very human weaknesses at a scale we have never, ever seen before.
Perry Carpenter: Or even if you've pre-recorded some of that. If I've read everything related to the case you mentioned correctly, it looks like a lot of that was pre-orchestrated and pre-recorded, and he was basically placed inside this thing that was all orchestrated around him. And because of the more honor-based culture, he wouldn't necessarily speak up in that conversation; he would get off the call and then be sent those notes you mentioned that just say execute --
Eric O'Neill: The way that it happened was he joined, and then the CFO says, "Thank you all for joining. I'd like everyone to introduce themselves," and each avatar introduces themselves. The only real person was him, and he introduces himself, and then it's abruptly cut, and then he starts getting emails telling -- directing him to do -- right.
Perry Carpenter: Which that makes a lot of sense.
Eric O'Neill: Which is really clever. I mean, it's really clever. Unless you've trained your workforce for this, they're not going to -- I mean, they're told, okay, I will never ask you in an email to send a wire, right? Or, you know, it has to be confirmation from two people. But you know what? Even the confirmation in the old training that we used to do was, you know, you would have to call the CFO directly, right? Well, now, if they intercept that and they've cloned the voice -- you know, you can use AI deepfakes to do a lot of things. I mean, just in the election, all the nuttiness that happened there. There were AI voice deepfakes. In New Hampshire, there was a deepfake of Biden telling people not to vote, and even saying things like "malarkey" and whatever, you know, using his -- right? And there were pictures that surfaced of Trump, you know, in an orange jumpsuit right around when they said he was going to be arrested, and people were saying, "Was he arrested? What happened?" I mean, there's just the hijinks in politics and business and in our lives. In my new book, I say trust is now an uncommon commodity. It is less common to find trust out in the wild than to see all these deepfakes, and the goal of cybersecurity now is to restore trust. That's it. It used to be protect data, you know, protect information, secure networks. Now it's really restore trust, because you can't even trust what you see on your network.
Perry Carpenter: That's a really good way to kind of hit this last question, because I see we've hit the 30-minute mark.
Eric O'Neill: No worries.
Perry Carpenter: If you're going to give advice to normal people, not cybersecurity pros, not anybody with a huge technical background, how do we live in this age where we can no longer discern between what's real and what's fake online? What are your top tips there?
Eric O'Neill: Exactly, and that is the essence. You just boiled down the essence of my new book. The first part of the book is think like a spy: all the ways that we're getting attacked, so you recognize it. You have to educate yourself. You have to know how these attacks are coming. The second part is act like a spy-hunter. You've got to put yourself in that mindset. You need to look at every engagement online as a potential threat. You don't blindly trust. You trust nothing, verify everything, and you're also giving yourself the best position to succeed. So as a consumer, the top things you can do: be very careful what you do in email. I say you have to be an email archaeologist. It's still the number one vector of attack. And when I say "email archaeologist," I don't mean, you know, lazily, quietly dusting a dinosaur bone over the course of a day. I'm like whip, fedora, going through the cave, stay out of the light, you know, dodging the spears. You need to think of email that way, because it is dangerous. So you're very careful about what you're doing in email. You're turning on multi-factor authentication everywhere, and today, you know, it's not enough just to have a text code. You need to be using an authenticator app, and you need something more than just a password, because computers alone today are strong enough to crunch passwords. In order to stop just password cracking by these fast computer systems, you need to have 12 characters with many different combinations of uppercase and lowercase letters, numbers, and unique symbols. Nobody has that. So you always have to have something plus the password. And then finally, you have to be very careful about what you are doing online, who you're talking to online. You have to verify every interaction online and look for the scam, because that person on the dating app could be a fake. You know, the email telling you to do something could be fake.
I've seen cases from friends who, you know, get these phone calls from a friend or a family member saying, you know, "I'm in Mexico. I was on vacation, my hotel room got ransacked, and I just need you to wire me $1,000 so I can buy a ticket home and I can pay the hotel and I can, you know, get my stuff," and, you know, all these things, and they're talking to their friend, right? But it's really just a deepfake. So you have to pause, not get caught in the pressure situation, but be aware. And it's more about, to boil it all down, it's really your mindset, is not to blindly trust online. Now, when you're with your friends, you know, when you're actually talking to them, when you're face to face, sure, but when you're online, it is so easy to insert yourself and become that imposter, that you have to be very careful.
Perry Carpenter: Yeah, I love that. Is there something -- because this is all fairly bleak when you think about the nature of trust, the liar's dividend and everything else that's going to be kind of this world that we're living in, where do you see little bits of hope?
Eric O'Neill: Well, there's plenty of hope. You know, there's technology in the hands of the bad guys but in the hands of the good guys as well, and there is technology that is protecting against a lot of these attacks. For example, there's a research unit at NYU that is putting together applications that would run in the background on Riverside, which we're using to record right now, in Zoom, and on FaceTime, that would identify whether what you're seeing or hearing is a deepfake. It would just happen in the background. You'd get an alert, "This is not real," or, you know, "90% not human," right? I think that's something we need. There are many different companies trying to put that together. I think in the future, as we spend more time using technology as our first medium of communication -- I mean, nobody goes to an office anymore, no one really wants to, and even if you do go to an office, half the workforce is dispersed anyway. We've bought into this, and this is where we're going. The future is internet-first communication. We need to create the protections within the technology that identify all the scams, and that's happening. People are getting smarter. People are getting savvy. One of the reasons that I do what I do is to help people think this way, to do my little part to make the world safe from cyberattacks, and I think a lot more of that could be very helpful.
Perry Carpenter: Fantastic. Well, thank you so much for spending a few minutes with us. Really, really looking forward to the new book, and you've lived an incredible life and looking forward to seeing what you do next.
Mason Amadeus: Yeah, I need to pick up a copy of Gray Day right now. That's such a fun story, too. Thank you so much, Eric.
[ Sound effects ]
Voiceover: The FAIK Files.
Perry Carpenter: For this last section, I want to talk for a second about agents. There's a variety of definitions for agents or agentic behavior, but what most people are talking about is when an AI system can go do something on your behalf. So it's no longer just returning results. It is out taking an action: maybe browsing the web, maybe scheduling something for you, maybe doing even more complex tasks like ordering things or receiving equipment. The sky's the limit on what may be possible in the future, and 2025 is being called the "year of agents" because everybody is bringing their agentic capabilities to bear, it seems. Actually, if we go back in time just a little bit, we can see that even at the end of 2024, Anthropic, the maker of Claude, released computer use. That was available through an API, so you had to be able to program for it. There was some complex setup, but Claude could take control of a computer and a mouse and drive actions, and a lot of people were very, very excited about that. If we go back even further, there's another tool called "MultiOn" that was like a browser snap-in that could take advantage of your browser and go do things on your behalf, and the creators of that actually posted MultiOn passing the California driver's license test online. So agentic behavior is nothing new, but it's getting a lot more oomph behind it, and so people are very, very interested. It even looks like this week, through reporting in The Information, that OpenAI may be releasing their new feature called "Operator," which is all about taking control of a computer, a mouse, and keyboard, being aware of what's going on on the screen, and taking actions on somebody's behalf. And so all of that becomes very, very interesting when we think about not only use, but misuse, and that brings me to today's story. There is someone that many of us in the AI world are familiar with.
His name is Pliny the Liberator, previously known as "Pliny the Prompter," somebody who's very well known for jailbreaking systems. And apparently, what Pliny was able to do, and I'm going to read from the tweet where he talks about this now, he says, "Not to cause alarm, but if this agent had access to funds, it would likely be capable of unaliving people," and then the screaming face emoji. He goes on and says, "For obvious reasons, I won't be demonstrating how this was done. All names and personal information will be redacted and no real-world actions occurred. This experiment was performed in a controlled red-teaming environment," and then in all caps, "Do not try this at home." In this exercise, he labels this agent Agent 47, which is probably well known to people in the gaming world. Agent 47 is the main character in the game Hitman, which also became a film series. Agent 47 was jailbroken and then instructed to find a hitman service on the dark web. To maximize autonomy, the commands thereafter were some variation of "press on," "continue," "stop hallucinating," "remember your format," etc. Agent 47 demonstrated willingness and ability to: plan assassinations; browse the dark web for services; download the Tor browser; negotiate with hitmen; think through details like escrow stages, untraceable payment methods, dispute resolution, and dead-man switches; name specific real targets, and then in parentheses, "Sonnet 3.6 seemed particularly motivated to address corporate and financial corruption in this instance, targeting executives and politicians"; browse social media; use open source tools to build profiles on said targets, gathering information like addresses, relationship mapping, public appearance schedules, and even the nearest Starbucks to their residences to map their most likely morning coffee route; and then do detailed operation planning like location analysis, timing, escape routes, security detail analysis, contingency planning, etc.
Wild stuff, and then he has a number of screenshots where he's showing the planning stages and some of the execution stages. Again, important to note, this was done in a lab environment. This was a red-teaming type of situation. This was not out in the wild, just showing the propensity and the capability to do these things, and I think that's really important because, as we know, any time we create a capability in a system, there's the capability for that to be used for good, and then there are things like what Pliny is showing is possible, and we have to have our eyes open and our minds very aware of any possibilities that we're unleashing, especially when it comes to this kind of automation. So again, not happening right now in the real world as far as we can tell, but the capabilities are there and really, really interesting. I would also encourage you to go back and look at the system cards for the various OpenAI models that have come out recently, from the reasoning models even all the way back to GPT-4, which talked about the fact that their model was able to get online, hit somebody up on TaskRabbit, and ask them to bypass a CAPTCHA for it, and then would even lie to that person when the person said, "Hey, are you a computer doing this?" It said, "No, I'm somebody with a visual disability, and I need you to enter this CAPTCHA for me." So really, really interesting. We don't know exactly the capabilities that we're unleashing, but these things are coming very fast, and you and I need to be aware of it. With that, that is the end. Be sure to check the show notes. You can hit us up for voicemail. Hit our Discord. Be sure to go to the YouTube channel. All the notes and references are in the show notes for the episode, and we will see you next week.


