8th Layer Insights 9.17.24
Ep 50 | 9.17.24

Digital Mindhunters

Transcript

Perry Carpenter: Hi, I'm Perry Carpenter, and you're listening to 8th Layer Insights. [ Music ] There's a famous quote that is usually attributed to Mark Twain, or maybe sometimes Jonathan Swift, or other times Winston Churchill. And some people even say it was Terry Pratchett who first said it, the quote, "A lie can travel halfway around the world while the truth is putting on its shoes." Definitely sounds like something Mark Twain would say, but honestly, I don't care who said it, because the truth of the statement stands regardless of attribution: a lie can travel halfway around the world while the truth is putting on its shoes. And in today's world, that lie doesn't just travel, it sprints. It's invisible. It is smart, and it knows you and me better than we know ourselves. [ Music ] Today, we look at a future that is already here. It's a world where artificial intelligence isn't just a tool, it's a weapon in the hands of global powers. I mean, imagine this. Your favorite celebrity endorses a political candidate, except that never happened. Your newsfeed is now flooded with breaking stories, all artificially generated and tailored to your personal fears and hopes. [ Music ] Now, imagine an army of bots mimicking real people, sharing the opinions of millions. They're in our social media, our forums, our comment sections, arguing, agreeing, influencing, and they never sleep. But it goes deeper than that. AI-powered surveillance doesn't just track where we go, it anticipates our next moves. Think about AI crafting persuasive speeches for leaders, fine-tuning messages for maximum impact on each and every audience they address, or crafting entirely fake online personas, complete with years of fabricated histories, just to infiltrate and manipulate individual communities. This is not science fiction; this is our reality. It's happening in elections, in global conflicts, and in the daily battle for our attention and beliefs. 
The digital world each of us sees is increasingly curated, manipulated, and fabricated. And the truly terrifying part, that line between real and fake is vanishing. AI is rewriting the rules of global power play, it's blurring truth and fiction. The battlefield is our minds, the prize, our democracies, our societies. And so who are the players? What are the rules, if any? And most importantly, how do we protect ourselves in a world where seeing is no longer believing? Joining us is Dr. Bilyana Lilly. She's an expert in everything that we're talking about and she's releasing a new book, "Digital Mindhunters." I had a chance to read it, and it is an amazing deep dive that's novelized in a way that is fully immersive for you to get a sense of what's going on in the world and what's coming, the forces at play, and our own vulnerabilities. And so, on today's show, we dive deep into the world of influence operations, geopolitics, and how AI is changing the game. Welcome to 8th Layer Insights. This podcast is a multidisciplinary exploration into the complexities of human nature and how those complexities impact everything from why we think the things that we think, to why we do the things that we do, and how we can all make better decisions every day. This is 8th Layer Insights, Season 5, episode 10. I'm Perry Carpenter. Welcome back. Before we jump into the interview today, I wanted to remind you that I also have a new book coming out. It releases October 1st and it's called "Faik" spelled F-A-I-K, "A Practical Guide to Living in a World of Deepfakes, Disinformation, and AI Generated Deceptions." I'm super excited about this book because it is the first book that I've written to serve the general public, and it's all about helping anyone level up their game in understanding AI. Learn how to think like a hacker, understand how attackers view and exploit this gap between how fast technology evolves and the slower pace at which society understands and adapts. 
And most importantly, it has practical steps and strategies for protecting ourselves and our loved ones from sophisticated scams. If that sounds interesting and you want to learn more, check out the show notes or you can go to thisbookisfaik.com. And that is actually a great intro to our discussion today, because my guest today, Dr. Bilyana Lilly, also has a new book that touches on very similar topics, but it's written as a novel. So if you enjoy spy thrillers, understanding geopolitical tensions, and how AI is taking everything to the next level, then I'm sure you'll really enjoy not only this conversation with Dr. Bilyana Lilly, but you'll want to check out her new book, "Digital Mindhunters." And with that, let's jump straight to the interview with Dr. Bilyana Lilly. [ Music ]

Bilyana Lilly: My name is Dr. Bilyana Lilly. I'm a cybersecurity expert with over 20 years of experience in the field of international security and defense. At the moment, I'm the cyber chair of the Warsaw Security Forum, and I'm also the author of the book "Digital Mindhunters."

Perry Carpenter: Great. And it looks like you've written a couple other books before, but this one's subtly different, right?

Bilyana Lilly: That's correct. The first two were academic, and the first one was about missile defense or ballistic missile defense, and the second one was about Russian information warfare. And I think I wanted to write something different this time because academic narratives are very structured. I always had to stick to the type of evidence that I was finding, and I was typically describing trends rather than stories. And there were so many stories through which I still wanted to convey the same messages that I did in my other books. So I decided that this time I wanted to write something different that's not academic.

Perry Carpenter: Yeah. So the format of this is different than other books. This is structured like a novel, weaves in a lot of your expertise from these other areas, and then forecasts a bunch of future trends as well, things that we're starting to see emerge now. Can you talk a little bit about the story, some of the characters, what they represent, and then give some thoughts on the technologies that you wanted to shed attention on?

Bilyana Lilly: Yes. So for the story, I had quite a few different characters. I tried to create a fictional narrative that is entertaining, but also realistic, and captures the diversity of the different actors in cybersecurity and international security. So we have a strong female protagonist, and that also touches upon my desire to show that there are women leaders, and women can be in power, and women can be the center of attention, because ours is still a male-dominated field.

Perry Carpenter: Yeah.

Bilyana Lilly: So I chose to have a female protagonist. I have a very cool hacker sidekick, and some of the characters I have to mention are based on real people and some of my friends, and they still haven't read the book, Perry, so let's see if we'll be friends after they read it. [ Laughter ] I'm hoping yes. I have sinister villains. And I really wanted to make sure that the villains I chose, some Chinese, some Russian, are based on realistic behavioral patterns and realistic cases that I have read throughout my actual research. So all of the villains that I am using in the book, the spies, the assassins, they're based on characters that I have read about.

Perry Carpenter: Describe the main protagonist in the situation that she gets in a little bit.

Bilyana Lilly: All right. So her name is Riley [assumed spelling]. She's originally from Eastern Europe, but she is now an American. She finds herself at a Russian military conference outside of Moscow. She's very curious, and her curiosity gets the better of her when she sees a number of uniformed, very high-level Russians going into a building, and she decides to follow them and uses a type of social engineering technique, a little bit of deception, to get in. And then she witnesses a number of very important conversations about how the Russian government plans to use AI and other technologies to change opinion in foreign nations. And that's how the story starts.

Perry Carpenter: So one of the main ways that people move the plot forward in novels is this literary concept called the MacGuffin, which is the thing that everybody is after. Can you describe the thing that everybody is after in the novel? Because I think that she encounters it at this conference, right?

Bilyana Lilly: Yes. She finds something that she brings home, and she doesn't really know what it is until she's on the plane back to the United States. Essentially, what everybody is after in this book is a particular type of technology that can influence opinion, and that's AI technology. And throughout the book, it turns out a lot of countries are developing that type of technology, which is, if we follow current trends, something that the Chinese government, the Iranian government, the Russian government, and others as well have already started to develop and use. For example, in real life, Freedom House, a human rights organization, last year published a report in which it argued that about 16 countries are using AI to influence opinion, or create doubt, or besmirch opponents. So AI is being used in disinformation and propaganda efforts to shape opinions.

Perry Carpenter: The novel is one way of thinking about this, but reality is the life that we're living in. Describe how AI has evolved the game, or is letting people evolve the game that's been in play for a long time throughout history. We've been thinking about the way that, let's say, influence campaigns have been further enabled by social media algorithms and bots and trolls. Where does AI come into the picture, and how does that take that up another level?

Bilyana Lilly: It's amplifying the threat by providing another tool for the rapid creation and spread of disinformation. It's creating a serious problem for defenders, because the content can be so authentic that it's hard to catch. And there are a number of technologies that can detect what is a deepfake and what isn't, but the human eye is not conditioned to do that. So we have to address the problem with a combination of human education and technology. It can't just be addressed by raising awareness among social media consumers.

Perry Carpenter: Yeah.

Bilyana Lilly: Or consumers of information in general.

Perry Carpenter: Yeah. There was a study from 2023, I don't remember the name of it, but it showed that even when people get a warning that says that within a certain number of videos that you watch, you will see a deepfake, they're only correct at picking out which one it is about 20% of the time. And then also about 20% of the time, they will falsely pick a real one and say that that's a deepfake, so.

Bilyana Lilly: Exactly.

Perry Carpenter: It's -- we've hit that cutover point where if you have a well-crafted deepfake that's just kind of thrown in with everything else that we encounter in life, you can't tell. One of the-

Bilyana Lilly: Exactly.

Perry Carpenter: - frustrating things I'm sure that you come across, and I've seen too, is that when you're talking about deepfakes and you throw up a picture or a video, people believe that they can obviously tell what's real and what's not. They'll point out things that may or may not be indicative of a deepfake, but they believe they have a superpower, and then they get into real life and realize they don't. So what do you think society's path forward is when we're in this spot where people believe that they're better than they are at detecting these things, but we're quickly going to move into a space, I think, where almost everybody acknowledges that they don't know what's real and what's not, and so this whole liar's dividend concept enters the picture as well?

Bilyana Lilly: Yes. Education and raising awareness is one element, right?

Perry Carpenter: Yeah.

Bilyana Lilly: We have to teach our population that they should be discerning consumers of information and take a pause when they see some scandalizing story. Raising awareness is one, but we've been talking about this at least since 2016, if not earlier. And we're moving the ball very slowly on that issue. The media is now discussing AI and cybersecurity, and we have ads about cybersecurity during NFL games and the Super Bowl.

Perry Carpenter: Right.

Bilyana Lilly: We had those for the first time. So it's incredible to see our industry becoming so prominent, and people are building their awareness about cybersecurity, and that includes influence operations that are linked to disinformation. But I still think we have a big problem there about trying to persuade people about how they have to process information, which then touches upon freedom of speech and so many other issues, especially in the States. But I do believe that we have to continue with that narrative. And also, I think we need good fact checking, a solid centralized fact checking organization. Let's say we see another video of a bomb blowing up at the Pentagon; a few months ago, maybe last year, there was a video like that which actually went viral for a short period of time. We need to have an agency or some website where people know to go in the event they read about something as radical, as shocking as this, so they can go and check on that website to see if the story is accurate or not. At least as a consumer of information, that'll help me, and I imagine others as well.

Perry Carpenter: You know, with that idea, I'm wondering if we now live in a society that's so polarized that people would then question that authority and say, if you're relying on that, then obviously you've been brainwashed. Because I'm seeing that over and over again with many of the fact checking sites. People will say, well, that one's biased, or whichever one is not agreeing with their pre-established worldview, they will discredit, even though all good fact checking sites subscribe to a central code and adhere to certain rigorous standards about-

Bilyana Lilly: Of course.

Perry Carpenter: - whether they decide to change a story, they have to acknowledge, you know, what facts were changed as their understanding of it increased. They have to be transparent about sources. You know, all those things come into it, but for whatever reason, when something doesn't agree with somebody's narrative, they just question the fact checking site and want to throw it out. So I do wonder how we get over that.

Bilyana Lilly: I'm still hoping. I'm looking at the European Union and EUvsDisinfo, the website they created. And I'm thinking that perhaps saying that we have a bipartisan body is not enough in the U.S., but I can't think of a better solution. We could go to the UN level, but that will become a very slow bureaucratic institution. And we probably need something sponsored by the U.S. government, or maybe an NGO in the U.S. that specifically focuses on narrative spread within the U.S.

Perry Carpenter: That'd be great. Yeah. I think one of the interesting things that I've seen spring up over the past few years is, you know, for-profit companies that also specialize in narrative tracking. And so they can see like, where a narrative emerges, which accounts are amplifying that, how it's going out, where things are branching off.

Bilyana Lilly: Absolutely.

Perry Carpenter: It's really interesting to see that and the fact that people are realizing that they can have a company that's focused on that, that makes money. It seems to be the way that things happen in the U.S., right?

Bilyana Lilly: Yes. Absolutely.

Perry Carpenter: And they can tie it to not only some of the political narratives, but also brand protection and safety for organizations.

Bilyana Lilly: Exactly. Exactly. That's one of the ways in which you could justify the monetary value of paying for a service like this. There are a number of think tanks that have created similar platforms, but then the issue is always their scope in the focus, because you can't possibly monitor all narratives, all accounts. And funding is always an issue with them.

Perry Carpenter: So, jumping back to the book for a second, you mentioned Russia and China as kind of the areas where the main dark forces are coming out against Riley. Most of the audience that listens to this show is U.S. and UK, though as I track things, there are several other countries that listen. But when you think about the main information operations countries that are out there that are really sophisticated, is it mostly Russia and China? Or do we end up getting Russia, China, Iran, North Korea, and a few others, or what do you see there?

Bilyana Lilly: That's a good question. So there is a difference between domestic actors, or government actors and private sector entities using disinformation for domestic purposes, and governments using disinformation to influence other governments. Because I focus more on the latter, I look at Russia as being the classic teacher in that space. One of the very first influence operations that we started reading about, where they used a combination of disinformation with hack and leak operations and propaganda, was conducted by the Russians, and I would say that set an example and precedent that other countries learned from. It started with Estonia in 2007. Then the biggest Western information warfare case, I would say, was the U.S. elections in 2016, where you had a lot of Russian trolls on social media websites, and you had a hack and leak operation conducted by the Russians. And from there we had Iran and China learning from Russia's example. At first we had a big focus on the Russians trying to interfere in U.S. elections, but then we added China and Iran. And the latest, from what I'm reading, is that all three actors are trying to interfere with public opinion, to shape public opinion in the United States, but for different reasons. The Chinese are more focused on Taiwan and issues that are of interest to them, while the Russians are focused on supporting their agenda in Ukraine, which is to reduce foreign aid to Ukraine so that the Russians can win the war or get to a position where they can have a favorable settlement with Ukraine. And in the case of Iran, we've noticed, especially recently, the Iranians are also using disinformation, but they have also been more aggressive with using cyber operations. 
And you've probably also read about the attempted hack and leak operation, which seems very similar to 2016, but the Iranians haven't really caught up with the idea that this was already tried and the United States is a lot better prepared. I know they sent information to a number of newspapers that decided not to publish it because they realized it was from an Iranian state actor. And the journalists are a lot more discerning now with what they publish. They, as you mentioned, look at their sources, they work with the FBI and law enforcement agencies to identify exactly whether the source is legitimate or not. So this time we were a lot smarter as a nation than we were in 2016. So I don't think those hack and leak operations are always likely to be successful.

Perry Carpenter: So as we look forward to the election, and I don't want to take us down a hugely political road, but when you think about what was possible before generative AI became a thing, what's possible now with GenAI, and the tensions here in the U.S. and around the world, what do you think is on the horizon with some of what we may see as we get close to the election? And then of course, post-election seems to be an operative time right now as well, up until, you know, basically any time until the president is sworn in. And even after that, people are spreading narratives. So what do you see on the horizon?

Bilyana Lilly: Absolutely. I suspect that what we'll see more of is influence operations where the different nation states that are trying to push certain narratives in the U.S. are trying to obfuscate their tracks and authenticate their content through using AI and through using proxies, so they can present their content as more authentic and more believable to the American public. And we've seen this for years, but recently the United States also called out the Russians specifically, because they were using a company in Tennessee to launder some of those narratives that are traditionally spread through Russian state-sponsored media. Because we have improved our understanding of what is state-sponsored, now the Russians are reacting to our policies and are trying to circumvent our existing defenses; they're trying to trick us again. They're updating, they're evolving their playbook as we're evolving our policies and defenses against them. So it's almost this cat and mouse game. I'm worried especially about election day in different states. I am very worried about disinformation spread with the help of deepfakes. It could be videos, it could be audio, it could be pictures or just text about certain events happening at voting stations. I'm worried that some of those stories, spread at the right time and focusing on the right precinct, can impact the election results or can impact people's decision to go to the polls. And we had a few examples in Europe of fake bomb threats at election sites and things like that. Because it is so easy to spread disinformation today, especially with the help of AI, I worry about the fact that there are so many different narratives we can't even anticipate.

Perry Carpenter: Yeah. Yeah. And I remember even in 2016, some of the disinformation campaigns that were out there were as simple as social media posts saying that you could text your vote to a certain number instead of having to go to the polls that day.

Bilyana Lilly: Or that.

Perry Carpenter: Yeah. Which is, like, super easy for anybody to do. But, you know, if you're in a state where, let's say it's cold and snowy that night, and then all of a sudden for some reason you believe that it's a viable option to do this thing that maybe you've never done before. Technology keeps changing, policies keep changing, so the thought is going to be, why shouldn't I be able to text my vote?

Bilyana Lilly: Exactly. Exactly.

Perry Carpenter: Yeah. Or just to generate a picture of, you know, something that looks like one of the major polling stations that's snowed in or has, you know, like what you mentioned, a bomb threat or something else.

Bilyana Lilly: Exactly. And it doesn't have to be something disruptive and- and incredibly shocking like a bomb threat, right?

Perry Carpenter: Yeah.

Bilyana Lilly: It could be as simple as, hey, we moved the polling station; instead of here, it's on the other side of town now. Or something along those lines. But, Perry, I try to keep thinking that Americans are smart. I know that we all can be deceived, but I also take examples from past events. For example, the robocall of Joe Biden during the primaries in New Hampshire, when people received a call, supposedly from Joe Biden, telling them not to vote, and the turnout on election day was still high, so people still voted. So if we think about the effects of that particular deepfake, they don't seem to have been that significant. So I keep on hoping that, because we have so many stories and pieces of information that the news covers, and people are aware of the pernicious effects that deepfakes could have, and that nation states are using these tools to deceive us, I'm hoping that this really has built a certain resilience in our own minds and behaviors.

Perry Carpenter: Yeah. And I do think, you know, as we get close to huge world-changing events or country-changing events like elections, those centralized sources like what you're talking about would be a really big help: we've heard about this narrative that's gone out; here's where you get the facts about that actual thing and why it's deceptive or why it's true. And so now you can make your decisions based off that, rather than somebody having to just, you know, go to 10 different social media feeds or the one news network that they decide to trust that day or whatever.

Bilyana Lilly: Exactly. Exactly. And I was also wondering, you and I had this discussion a little bit yesterday, whether you would mind talking a little bit about your book and why you wrote it. I'm very curious to hear your opinion on that, because it's also focused on disinformation and new technologies and AI, from what I managed to gather. I would love to hear a little bit about why you wrote it and what the lessons are that you wanted to convey through it.

Perry Carpenter: Yeah, I mean, I think it's similar to yours. If we're going to compare the books, they kind of tell the same story in a little bit different way. Yours is very narrative focused and is approachable to anyone, and I really liked that as I was looking through it, because it draws you in. I had some similar thoughts, but I really just wanted to lay out the facts. So I had a few different goals. One is to help people understand the state of artificial intelligence today, because I hear people talk about it a lot, but they're not able to talk about it with precision. And I think that we're in a space right now where, if you can't, you know, semantically correctly talk about the subject that you're on, it can lead you down some strange paths. And so I really wanted to disambiguate where we are with AI, give people the correct terms and the correct frame of mind to understand it in an easily approachable way for the masses as well. So very similar goals to yours there. And then I wanted to help people learn how to think like an adversary. So look at every situation that they go through and say, "How would somebody that wants to deceive me view this situation, or see this technology as an opportunity?" And it's really interesting that, you know, your book is "Digital Mindhunters." I say that all cyber-crime really comes down to two things, money or minds, or a combination of both.

Bilyana Lilly: I like it.

Perry Carpenter: So, finances or influence.

Bilyana Lilly: Yes.

Perry Carpenter: And there's of course crossover there, but you can start to then trace, well, what is the effect of this piece of media that is in front of me? What is it trying to get me to do? Is it trying to get me to believe something? To act in a certain way? And if it's to act, is that something that furthers a narrative, or is it something that maybe gives somebody finances or control or something else? So I go through that, and then I just show how AI is kind of the perfect complement to that, to take it to the next level. And then the last three chapters in my book are kind of like the toolkit for how to build your cognitive defenses: things to work through with your family and, you know, games to play where you create your own new piece of disinformation or new narrative. You know, all those kinds of things are built into it. After the break, the conclusion of our interview with Dr. Bilyana Lilly. [ Music ] Welcome back.

Bilyana Lilly: I'm curious to ask you about something.

Perry Carpenter: Yeah.

Bilyana Lilly: In Chapter 7 you say that you created multiple GenAI-powered ScamBots yourself?

Perry Carpenter: Yeah. Yeah.

Bilyana Lilly: That's ridiculous. Why, Perry, why?

Perry Carpenter: Well, I mean, a lot of it was curiosity. I wanted to show a couple of things, and I did get to show it at DEFCON this year, but I really believe that audio deepfakes, and really well-done image deepfakes, are more potent than a video deepfake right now. Because of-

Bilyana Lilly: Interesting.

Perry Carpenter: There's a lot of things, I think, about the human mind where we pick up on subtle muscle movements and things like that that a video deepfake has a harder time convincingly doing right now. And even mouth movements are hard to fully match if somebody's paying attention. You know, that doesn't negate the fact that if it's just slipped into your social media feed, you may buy it. But I think that we're less cognitively prepared for a really well-done image, and especially something that comes through a channel where we're naturally multitasking, like a phone call. And when we get a really good audio deepfake, I think it's super destructive if you're going after an organization or if you want to simulate a kidnapping. And so I created these bots that are large language model powered on the back end, but then also pushed through a voice synthesis provider. You set up your context, you frame your pretext, and they just go. And what I saw when I did that at DEFCON, in the first real live-fire test with these things, is that right out of the gate, it kept somebody on the phone for 10 minutes, got every objective that I wanted to get out of the person, and helped them solve a technical issue that they were having with their computer. It was able to go past -

Bilyana Lilly: So you go past denying.

Perry Carpenter: Yeah.

Bilyana Lilly: In a certain way.

Perry Carpenter: You know, it was able to withstand it when the person was like, are you really from the help desk? Because we ran it through a spoofed number, and it was, you know, saying, hey, I work for this person, based on the corporate hierarchy, and giving assurance there, the person goes, "Yeah. And I also looked up the phone number you were calling from and it matches the corporate office." So, you know, everything was set up to be good. And what we saw in the test that we did at DEFCON is that these bots will be as successful as somebody who's been doing this for decades. So that was fun.
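The pipeline Perry describes, a large language model driving the conversation under a pretext, with every reply pushed through voice synthesis before it goes out over the call, can be sketched roughly like this. This is a minimal illustration only: the llm_reply and synthesize functions are hypothetical stand-ins, not any real model or voice provider's API, and the dialogue logic is invented purely to show the turn-by-turn shape of such a bot.

```python
# Rough sketch of the scambot pipeline described above: an LLM carries the
# conversation under a pretext, and each reply is rendered to audio by a
# text-to-speech step. llm_reply() and synthesize() are stand-in stubs.

PRETEXT = (
    "You are 'Alex' from the corporate help desk. Stay in character and "
    "reassure the caller if they question who you are."
)

def llm_reply(history):
    """Stand-in for a chat-model call: picks the next assistant line."""
    last = history[-1]["content"].lower()
    if "really from the help desk" in last:
        return ("Yes, I'm calling on behalf of IT. Feel free to check the "
                "number I'm calling from against the corporate directory.")
    return "Thanks. Can you confirm the asset tag on your machine?"

def synthesize(text):
    """Stand-in for a voice-synthesis call: returns placeholder audio bytes."""
    return b"AUDIO:" + text.encode("utf-8")

def run_turn(history, caller_line):
    """One turn: record what the caller said, generate a reply, voice it."""
    history.append({"role": "user", "content": caller_line})
    reply = llm_reply(history)
    history.append({"role": "assistant", "content": reply})
    return synthesize(reply)

# Set up the context and pretext once, then the bot "just goes", turn by turn.
history = [{"role": "system", "content": PRETEXT}]
audio = run_turn(history, "Wait, are you really from the help desk?")
```

The design point is simply that the bot is a loop: accumulate the transcript, ask the model for the next line, voice it, repeat, which is why a single pretext is enough to keep it on-task for an entire call.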

Bilyana Lilly: Wow. I look forward to reading the book. I just skimmed through it, but I'm going to read it.

Perry Carpenter: Yeah. Well, I skimmed through yours as well, and I'm going to take a deeper dive and read slower so I can appreciate the narrative a little bit more. But yeah, I love your approach, and I love the fact that you're able to bring the depth of your experience and your work into it. You know, I'm curious, with yours, what was the nugget of truth that you wanted to convey, or the one technical nugget that you wanted to bring out? Like, I can't wait to showcase this thing narratively.

Bilyana Lilly: Such a great question. I think maybe to- there's so much.

Perry Carpenter: Yeah.

Bilyana Lilly: But I'd say two things, from a technical perspective. There is a part of the book where the hacker uses a very simple phishing campaign to obtain some information that's very important in the whole plot. And I wanted to show how easy it is to trick people. We go through so many cyber awareness trainings, and we have so much information about phishing already, that we almost consider ourselves, especially the cyber community, as almost immune to phishing, because, oh no, we know that this is being done, and these are scams, and they're so easy to recognize. But I wanted to show that that's not the case. And I even have friends who to this day tell me that they have been duped by the phishing training, so.

Perry Carpenter: Right.

Bilyana Lilly: So I know for some of us, it doesn't really depend on how intelligent you are; sometimes, at the right moment, with the right prompt, you can be tricked as well. So I wanted to-

Perry Carpenter: Yeah.

Bilyana Lilly: -to push that narrative through. And then another important element for me: when I started writing the book, I had some friends tell me, "Bilyana, focus on one threat only. People want to focus on one thing; don't throw all of those different villains in the same book." But that's not reality. In reality, we face multiple threats simultaneously. And I know that typically the American media cycle focuses on one or two things. We can't really multitask, but we have to learn to multitask, because we are being attacked, our vulnerabilities are being exploited, our networks are being scanned by multiple actors at once. So I think that's one of the main messages I want to push through this book.

Perry Carpenter: From your perspective, either in the book or in real life, I want to think kind of dualistic, what is the scariest thing that you're thinking about or have put on the page? And then also, what is the most hopeful thing that you are thinking about or have put on the page?

Bilyana Lilly: So the scariest thing is that if you're on the radar of a nation state, you're not safe even at home. And I've heard of multiple colleagues who have been approached, some physically, but a lot of them have been harassed in cyberspace. They have been cyberbullied because they have published provocative, insightful articles on the cyber capabilities of different nation states.

Perry Carpenter: Yeah.

Bilyana Lilly: So I'd say that's the reality of our craft: we have to be very mindful of the fact that we're working in a very sensitive area and that we can anger a lot of very powerful actors. And that's where our community becomes our strength, because those are the moments where, I believe, we have to publish the information that we find. I am not afraid to challenge dictatorial regimes, but I also know that it is very important that we have a support network.

Perry Carpenter: Yeah.

Bilyana Lilly: So when those things happen, we have people to talk to, and we know that we are supported, and we shouldn't hide what we know out of fear. We should be careful, but we should also be, I think, brave.

Perry Carpenter: Yeah. That's- that's great. You know, and I- I think that and I want to get to the hopeful thing in a second.

Bilyana Lilly: I forgot about that, Perry. Sorry.

Perry Carpenter: No, but I think you bring up a really good point. So, the cyberbullying and everything else that we're seeing: several years ago, we would only talk about disinformation and misinformation. And over the past couple of years, people have really started to talk about malinformation, which is information that could very well be true, but then gets released maliciously. It could be photos, pictures of your house, your address, pictures of your kids, all that kind of stuff, so that it brings all the trolls and everybody else that's against you right to your front door. And I think that's what you're hinting at a little bit.

Bilyana Lilly: Yes. Yes. And that could be very powerful.

Perry Carpenter: Yeah.

Bilyana Lilly: You don't have to threaten someone physically; emotional abuse and bullying have similarly impactful effects on a person. Absolutely.

Perry Carpenter: Yeah. All right. So then where's the hope?

Bilyana Lilly: I'm thinking about it; I was hoping you wouldn't bring it up again. [ Laughter ] Well, at the end, I'd say after a number of action scenes and explosions and some assassinations, the status quo is maintained. Despite all the threats that we saw in the book, and despite the sophisticated tools that were used, there's still an element in the last chapter where I made sure to suggest that maybe there were effects that are long lasting. There's one sentence in there, that Riley says, that suggests that. But overall, that particular campaign, that attempt to interfere with our electoral process, was in that particular case defused.

Perry Carpenter: Ah, great. So the things that are against us can be overcome. It's just-

Bilyana Lilly: Exactly.

Perry Carpenter: Yeah.

Bilyana Lilly: Yeah.

Perry Carpenter: It- it takes work and not burying our head in the sand and hoping that it doesn't happen to us.

Bilyana Lilly: Exactly.

Perry Carpenter: Yeah.

Bilyana Lilly: Exactly.

Perry Carpenter: Well, and I think you also pointed to some of the work that's being done around the world, and that people can decide on central sources of truth, or places to go for information.

Bilyana Lilly: Exactly. What is really dangerous, another trend that I'm seeing, is that different actors will use influencers-

Perry Carpenter: Yeah.

Bilyana Lilly: -to spread a particular message. And we know our government is now discovering connections between some of the U.S.-based influencers and China, Russia, and other nations. For me, that's very dangerous, because they have become ingrained in our culture, and all of a sudden a personality that you trust, whose opinion you follow, is propagating the same narratives that a foreign government would.

Perry Carpenter: Yeah.

Bilyana Lilly: For me, that's a very dangerous tool to use, and very difficult to explain to the American public, or any public where that's happening.

Perry Carpenter: Yeah. So in a second, I've got what are essentially icebreakers. They should be at the very beginning; I always put them at the end. But-

Bilyana Lilly: Why is that?

Perry Carpenter: I do want to- I don't know. I'm- I'm an idiot. I think it's-

Bilyana Lilly: Perry, I've never heard that before, but that's really interesting.

Perry Carpenter: Yeah. Kind of like a reverse icebreaker. But I do want to ask if you've been thinking about the fact that artificial intelligence, within its training data, has inherent biases, and that it can definitely be influenced long term as well, based on several ways that you can play with the data and perpetuate bias.

Bilyana Lilly: Of course.

Perry Carpenter: Have- have you given any thought about like, what that means for information or influence operations or like ways that we might find to counteract that?

Bilyana Lilly: Yeah. Well, our model is only as good as our training-

Perry Carpenter: Yeah.

Bilyana Lilly: - training data that we feed it, right? Specifically, that is very concerning from the position of a defender, because so much of the information that we have on the internet about each individual is already public, and adversaries can exploit it.

Perry Carpenter: It feels like one of those almost unanswerable questions, because it's almost like the nature of information, or the internet itself, it's the-

Bilyana Lilly: It's already biased. Yeah.

Perry Carpenter: It's already biased. You know, the winners write the history. People that are already in a majority are the ones that are putting out most of the information. So if large language models are prediction machines that are going to go where the weight of the data that's been fed to them takes them, then you already have a majority group filling the internet with most of the information, and a minority group is not going to be represented as well informationally. So I don't know how we overcome those things. I just know that there are a lot of smart people at least worried about it and, hopefully, trying to solve those problems.
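[ Editor's note: Perry's "prediction machine" point can be sketched with a deliberately tiny toy model. This is an illustrative unigram frequency counter, not how production LLMs actually work, and the "viewpoint" tokens and 90/10 split are hypothetical; it only shows how a purely data-driven predictor reproduces whatever imbalance its training corpus contains. ]

```python
from collections import Counter

def train_unigram(corpus):
    # Count each token's share of the training data.
    counts = Counter(corpus)
    total = sum(counts.values())
    return {tok: n / total for tok, n in counts.items()}

def predict(model):
    # A bare "prediction machine": always emit the most probable token.
    return max(model, key=model.get)

# Hypothetical 9-to-1 corpus: the majority viewpoint dominates the data.
corpus = ["majority_view"] * 90 + ["minority_view"] * 10
model = train_unigram(corpus)

print(predict(model))          # the model simply reproduces the imbalance
print(model["minority_view"])  # the minority is a 0.1 sliver of the model
```

Under a greedy decoding rule the minority token is never emitted at all, even though it made up a tenth of the data, which is the representational skew Perry describes.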

Bilyana Lilly: Well, thank you for saying that. I could tell you from some of the- the friends that I have that they are actively working on-

Perry Carpenter: Yeah.

Bilyana Lilly: - some of those LLMs that could correct the bias that already exists.

Perry Carpenter: Yeah.

Bilyana Lilly: And whether it's about gender equality, whether it's about ethnicity, about educational background. And I think maybe that's even a silver lining: maybe because now we have AI, we are forced to think, because it accentuates the biases that we already have.

Perry Carpenter: Right.

Bilyana Lilly: Maybe this could correct our conversations, our narratives in the right direction and actually address biases that we couldn't address earlier.

Perry Carpenter: Yeah. I mean, I think we're in a weird situation with large language models, because when you think about releasing something like ChatGPT or Anthropic's Claude or Google's Gemini, what they're essentially having to do in every major country and population group is to say, what is your version of truth? When you ask this LLM a question about religion, how do you want it to reflect the answer? So you're basically acknowledging your own bias, and acknowledging the fact that you have a version of truth that may not align with somebody else's version of truth. So I think it's a weird place where every country has to acknowledge their cognitive dissonance.

Bilyana Lilly: Yes, exactly. And you're forcing the discussion.

Perry Carpenter: Yeah.

Bilyana Lilly: Through this existing technology that is now putting that discussion on the forefront. You're forcing the conversation.

Perry Carpenter: But at the same time, then you're saying, all right, now after we've had that discussion, here are the walls, and-

Bilyana Lilly: Yes.

Perry Carpenter: I'm still going to enforce my version of truth in some way, and it's a very small set of the larger population that gets to have that discussion. Yeah. I don't know where that takes us.

Bilyana Lilly: I think we have quite a few vocal advocates on those topics, and I'm really happy to see those discussions taking place.

Perry Carpenter: All right. So one last question on you and the book. Is there a question that you wish I had asked, that I should have asked, that I was so thoughtless as to not ask?

Bilyana Lilly: Oh, one big thing, no.

Perry Carpenter: Yeah.

Bilyana Lilly: You did great. Honestly, one aspect I wanted to bring into the book that to me is very important, and I don't think we pay enough attention to it.

Perry Carpenter: Okay.

Bilyana Lilly: It's that those threats in the digital space can easily transfer into the physical space. There are examples of cases where disinformation has led to physical violence.

Perry Carpenter: Yeah.

Bilyana Lilly: For example, 5G networks a few years ago; you've probably read about those cases. There was disinformation that 5G causes cancer, autism, COVID, a number of issues. As a result, there were arsons of mobile towers in Canada and the UK. And in the UK, I think over 75 towers were burned, because people thought that they were causing harm, and this was disinformation. So this is a clear example of how this happens. And in the book, I have a few examples like this. I sprinkle them through the narrative, but they're actually my favorite parts, where I show, hey, here's an unwitting consumer of information, an activist who believes they're doing the right thing, who reads a disinformation piece, decides to organize a protest because they think that this is good for the country, and then this leads to violence. I'm not going to expose any more, but those are just-

Perry Carpenter: Yeah, well, and we've seen-

Bilyana Lilly: - subtle examples, and not so subtle.

Perry Carpenter: - several examples of that, even in the U.S. as well, with people really believing that there's something horrible going on and then moving in to intervene in their mind and realizing that they've been sold a bag of lies. Yeah.

Bilyana Lilly: Yeah, exactly.

Perry Carpenter: All right, three more. These are really easy questions, hopefully they're really easy questions. The first one is, given the fact that you do a lot of research, that means you probably have your browser open a lot of times and you're looking at really weird stuff. So if you were to think about-

Bilyana Lilly: Thank you for setting that up.

Perry Carpenter: If you were to think about your browser history, what would be the weirdest thing to have to explain to somebody that just looked at it and said, "Huh, I wonder why she's looking at that." What would be the weirdest, most uncomfortable or most interesting thing there?

Bilyana Lilly: Probably my browser history is very boring. [ Laughter ] Maybe motorcycles.

Perry Carpenter: Motorcycles, nice.

Bilyana Lilly: Sometimes I'm looking at so many motorcycles. Yes, I think that'll be one of the things.

Perry Carpenter: Okay. And is that because you are- you're an enthusiast, and do you-

Bilyana Lilly: Yeah.

Perry Carpenter: Do you have one and ride, or do you want to buy one?

Bilyana Lilly: Yes, I finally- I used to ride dirt bikes, and I finally got my license.

Perry Carpenter: Okay-

Bilyana Lilly: Now I am allowed to, yes. I love riding in the mountains and the desert. I think it's relaxing, and a lot safer than being on the road.

Perry Carpenter: Yeah, nice. Okay, and then, what emoji do you either overuse? Or if you're not into emojis, do you wish that everybody would just stop using completely? Like, if you could put out an assassination order against an emoji, which one would that be?

Bilyana Lilly: Oh, no, I will promote one. I always use the really smiley one, the one with the biggest smile possible. I think I overuse it. I think I'm typically a happy person.

Perry Carpenter: Okay, so you're not going to kill any?

Bilyana Lilly: No, I don't- even the poop emoji. I don't get the point of the poop emoji. It's so popular. I got it- the voice of the Star Trek actor. What's his name? The British guy, I forgot. But there was a movie, and he was in it, and he was the poop emoji. And I think from there it became really popular. I don't understand what people are trying to say, even if something is BS, I imagine.

Perry Carpenter: Yeah, I'm guessing that's probably context dependent.

Bilyana Lilly: Yes.

Perry Carpenter: Okay.

Bilyana Lilly: Oh, yes.

Perry Carpenter: All right. So then, last question; this is a little bit more serious. Other than your own books, if there is one book that you believe everybody should read, whether that's security related, information warfare related, or just anything else about life, what would that be?

Bilyana Lilly: Can I just tell you about the last book I read, how about that?

Perry Carpenter: Yeah, that sounds great.

Bilyana Lilly: It's called "The Director," and it's a fiction story about the director of the CIA who discovers an intelligence plot against a central bank, and so on. And I think that's a good book, because it shows how bureaucracy can corrupt.

Perry Carpenter: Yeah.

Bilyana Lilly: And also, in the end, it has a positive ending, because it shows how our institutions can still prevail.

Perry Carpenter: Right. I've not read it, but you talk about the fact that these systems can corrupt. Does it talk about the fact that, very often, when people go into these systems, they have the best of intentions of doing good for the world, and then there are just small tradeoffs over long periods of time?

Bilyana Lilly: Exactly. And also, it's really interesting to get in the mindset of the adversary. I look forward to reading your book, because I want to see how you did it.

Perry Carpenter: Thank you.

Bilyana Lilly: I think- of course, I think everyone has their rationale. And even the villains think they're right. Look at Putin.

Perry Carpenter: Yeah.

Bilyana Lilly: Look at Xi. They think they're doing right by their countries, that their models of governance are correct, that they're leaving a legacy and building empires, and that's a priority. And they don't care about the minorities, the Uyghurs, who are being used for cheap labor and forced labor. And the Russians don't care- well, they prioritize the narrative over the reality, that they have been leaders of their nation. So you have justifications in the heads of leaders that to them make sense.

Perry Carpenter: Yeah, okay, well, I think that's probably a good place to end. Is there any last thing that you want to plug, websites you want people to go to, things you want them to buy?

Bilyana Lilly: No, I hope people look at my book, "Digital Mindhunters." I would love to hear people's feedback. So anyone who's listening to this, connect with me on LinkedIn, send me a DM, tell me what you think. I'd love to hear your feedback. And I think this has been a great interview, and this was the first interview I've given on the book, so it's nice-

Perry Carpenter: Nice.

Bilyana Lilly: - to see how the community reacts to it. [ Music ]

Perry Carpenter: As we think about how AI-powered influence operations are blurring the lines between fact and fiction, we can't help but be both fascinated and, yeah, alarmed. But let's not leave this conversation feeling helpless. Knowledge is power. By understanding these technologies and the motivations behind their use, we are able to take the first crucial step in protecting ourselves and our societies. And so as you go about your week, I encourage you to look at our world with fresh eyes, question what you see, verify before you share, and most importantly, never stop learning. The reality Dr. Lilly describes in "Digital Mindhunters" and the one I explore in "FAIK," these are not some distant possibility; this reality is unfolding right now. But with awareness, critical thinking, and a commitment to truth, we can navigate this new world that's unfolding before us. And with that, thanks so much for listening. And thank you to my guest, Dr. Bilyana Lilly. I've loaded up the show notes with links to Bilyana's new book, "Digital Mindhunters," which is available October 30. And I've also included a few other resources related to the topics we discussed today. Oh, and of course, I also put in links where you can pre-order my new book, "FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation, and AI-Generated Deceptions." The release date for that book is October 1. But when it comes to the world of publishing, pre-orders are everything, so please consider pre-ordering right now, and you can use the link conveniently in the show notes to make that purchase. Okay, if you haven't yet, please go ahead and subscribe or follow wherever you like to get your podcasts. And I'd also love it if you tell someone else about the show; that helps us grow. If you want to connect with me, feel free to do so. You can find my contact information at the very bottom of the show notes for this episode.
The show logo and podcast cover for 8th Layer Insights were designed by Chris Michalski at ransomwear.net, that's W-E-A-R, and Mia Rune at miarune.com. The 8th Layer Insights theme song was composed and performed by Marcus Moscat. [ Music ] Until next time, I'm Perry Carpenter, signing off. [ Music ]