Hacking Humans 10.24.19
Ep 71 | 10.24.19

The ability to fundamentally deceive someone.

Transcript

Henry Ajder: [00:00:00] Deepfakes threaten any process where audiovisual media is used to inform key decision-making or key communications. 

Dave Bittner: [00:00:08]  Hello, everyone, and welcome to the CyberWire's "Hacking Humans" podcast, where each week, we look behind the social engineering scams, phishing schemes and criminal exploits that are making headlines and taking a heavy toll on organizations around the world. I'm Dave Bittner from the CyberWire. And joining me is Joe Carrigan from the Johns Hopkins University Information Security Institute. Hello, Joe. 

Joe Carrigan: [00:00:26]  Hi, Dave. 

Dave Bittner: [00:00:27]  We got some good stories to share this week. And later in the show, we've got Henry Ajder from Deeptrace Labs. They're developing technology to root out deep fakes, and he's going to share their findings. But first, a word from our sponsors at KnowBe4. What does over a decade of experience in the CIA teach you about healthy paranoia, and how can you use that knowledge to make your organization more of a hard target? Stay tuned. Because later in the show, we'll share information on an exclusive webinar from KnowBe4 that has the inside scoop. 

Dave Bittner: [00:01:03]  And we are back. Joe, why don't you start things off for us this week? 

Joe Carrigan: [00:01:06]  Dave, my story comes from WalesOnline this week. 

Dave Bittner: [00:01:09]  Blue whales? Killer whales? 

Joe Carrigan: [00:01:11]  Wales, the country. 

Dave Bittner: [00:01:12]  I see. 

Joe Carrigan: [00:01:12]  Right. Part of the United Kingdom. 

Dave Bittner: [00:01:13]  (Laughter) OK. 

Joe Carrigan: [00:01:14]  (Laughter). The story's about a mom who gets a phone call from, quote, "the local police station." And there's a guy on the phone saying that there's a warrant out for her arrest because she owes a thousand pounds in unpaid taxes. And of course, they state, if you don't pay within the hour, we're going to send somebody to arrest you. So she says, what about my family? And the guy on the phone says, don't worry about your kids. And she goes, I didn't say anything about kids. But the caller knew that she had four children. 

Dave Bittner: [00:01:39]  Wow. 

Joe Carrigan: [00:01:40]  And he knew a lot of other information about her, as well. So he had already done some research on this woman before he called her. He said that if she didn't pay, her kids would be taken away from her and she would be locked up for at least five years over a 1,000-pound tax debt. 

Dave Bittner: [00:01:54]  Seems excessive. 

Joe Carrigan: [00:01:55]  It does. This guy was a really hard seller, though. He stayed on the phone with her and was insistent about staying on for the entire process. And she actually did complete the transactions. It was just, go withdraw the money from your bank account and put it into these other bank accounts. 

Dave Bittner: [00:02:08]  Transfer the money. 

Joe Carrigan: [00:02:09]  Yeah. Transfer the money. But she did it physically, right? So she actually withdrew the money and put it into another bank account. I don't know if it's at a different bank. I would assume that it was. 

Dave Bittner: [00:02:17]  Yeah. I've never really thought about that. If I - can I just, you know, walk into a bank that's not mine and say, I'd like to deposit a thousand pounds in account number such and such? I guess I can. 

Joe Carrigan: [00:02:27]  Yeah. You can. 

Dave Bittner: [00:02:28]  Never really thought about that. 

Joe Carrigan: [00:02:29]  Yep. She completes the transactions, and then she's like, I'd better double-check with the police, right? Called and reported this to the police. I don't know how it goes down, but she says that she is told it's a scam. Now, the good news is she does eventually get her money back. She was able to get a refund of the money. 

Dave Bittner: [00:02:45]  Well, that's good. 

Joe Carrigan: [00:02:46]  OK? But here is the important part of the story. At least, what I think is the important part of the story. The next day, the same guy calls her and starts in again with this whole spiel, right? 

Dave Bittner: [00:02:57]  Really? 

Joe Carrigan: [00:02:58]  Now, she knows it's a scam. She's been told by the cops it's a scam. The banks know it's a scam. All of this stuff. But she starts doubting herself because this guy is so convincing. 

Dave Bittner: [00:03:09]  Wow. 

Joe Carrigan: [00:03:09]  This guy is on the phone with her telling her that she's going to go to jail again, and she starts getting that fear. This is what I think is the crux of it. I haven't had a story like this - we haven't had a story like this on the show before. We always talk about how these things have all the elements, right? I'm going to short-circuit your thinking. I'm going to represent the power of something huge and scare you with something that's important to you, like your kids. And you can make it all go away by giving me a thousand pounds. And that is so powerful that even after she knows it's a scam, she still gets that little seed of doubt and fear in her head. And that's what they feed on. 

Dave Bittner: [00:03:41]  He's so convincing. 

Joe Carrigan: [00:03:42]  He's so convincing. 

Dave Bittner: [00:03:43]  He can plant that seed of doubt in her mind. 

Joe Carrigan: [00:03:45]  Yeah. 

Dave Bittner: [00:03:46]  Wow. And... 

Joe Carrigan: [00:03:47]  I'm glad she was able to get her money back. And I'm also glad that she talked about this, because this is the kind of thing we need to talk about to inoculate ourselves as a society against these kinds of attacks. But these guys are getting better and better. Their skills are growing. Any skill that you practice, you get better at. Right? 

Dave Bittner: [00:04:01]  Yeah. It's remarkable to me that this guy is so persistent. Because you presume that when he called back the second day, she would have said, hey, I spoke to the police... 

Joe Carrigan: [00:04:11]  Right. 

Dave Bittner: [00:04:11]  ...And they told me that this is all a scam. Knowing the police could be on his trail... 

Joe Carrigan: [00:04:16]  Right. 

Dave Bittner: [00:04:17]  ...He doesn't care. 

Joe Carrigan: [00:04:17]  Right. Well, he's probably not in country. 

Dave Bittner: [00:04:19]  Yeah. Exactly. 

Joe Carrigan: [00:04:20]  So he doesn't care. You're right. 

Dave Bittner: [00:04:22]  Yeah. Now, a happy ending in the end? All's well that ends well? 

Joe Carrigan: [00:04:25]  She got her money back, and she didn't give him the money on the second and third call. 

Dave Bittner: [00:04:29]  Well, that's good. Wow. Yeah. 

Joe Carrigan: [00:04:30]  He was persistent. He called three times. 

Dave Bittner: [00:04:32]  I guess this is one of those situations where you say, how could anyone be so foolish, or how could anyone be so gullible, or that would never happen to me. 

Joe Carrigan: [00:04:40]  Well, she... 

Dave Bittner: [00:04:41]  But then... 

Joe Carrigan: [00:04:42]  Yeah. She actually - there's a video on this page. I watched the video, and she says the same thing. She said, I thought of myself as a headstrong person and thought this would never happen to me, but it did. It's an interesting story. You should go watch the video, see her telling the story and read the article. It's a short article. It's very interesting to me how this works psychologically even after you know - you know that this is a scam. 

Dave Bittner: [00:05:04]  Right. Who you going to believe? 

Joe Carrigan: [00:05:06]  Right. 

Dave Bittner: [00:05:06]  Your lying eyes. 

Joe Carrigan: [00:05:07]  (Laughter) Right. 

Dave Bittner: [00:05:09]  (Laughter). 

Joe Carrigan: [00:05:09]  The threat of something so severe as losing your kids has a real pull. It's a real trigger for a lot of people. 

Dave Bittner: [00:05:17]  Absolutely. All right. Well, my story this week has to do with something that I would categorize as an ongoing annoyance when it comes to things online. I have a feeling I'm going to push one of your buttons here, Joe. 

Joe Carrigan: [00:05:31]  (Laughter). 

Dave Bittner: [00:05:31]  These are... 

Joe Carrigan: [00:05:31]  You've been doing that a lot lately, Dave. 

Dave Bittner: [00:05:32]  (Laughter). 

Joe Carrigan: [00:05:32]  (Laughter). 

Dave Bittner: [00:05:32]  Well, now I know where they are. 

Joe Carrigan: [00:05:35]  Right. 

Dave Bittner: [00:05:35]  It's awfully easy and fun to do. It's when you sign up for something online and then you find, down the line, that it is difficult or impossible to terminate your relationship with that online organization. 

Joe Carrigan: [00:05:50]  Like a gym membership, almost. 

Dave Bittner: [00:05:52]  Like a gym member - (laughter) yeah. That's a good analogy. Like a gym membership. So there is an online directory - I put the link here - from an organization called backgroundchecks.org. They have a section of their website called Just Delete Me, and it's a directory of links for deleting your account from various web services. For example, if you wanted to delete your account from AOL, you could look up AOL, and it'll have a link that takes you right to the page to delete your account. But it also lists how difficult it is to delete your account from different organizations. So, for example, looking at some of the ones early in the list here - Adobe. It says Adobe is hard to delete your account from. 

Joe Carrigan: [00:06:37]  Right. 

Dave Bittner: [00:06:37]  'Cause you have to call them in order to delete your account. 

Joe Carrigan: [00:06:41]  Really? 

Dave Bittner: [00:06:41]  (Laughter). 

Joe Carrigan: [00:06:42]  You can't just delete it online? 

Dave Bittner: [00:06:42]  Well, it does say you can - alternatively, you can send them an email. But they do list it as being hard. Animal Crossing. Familiar with that? It's an online game. That's listed as impossible. 

Joe Carrigan: [00:06:53]  No. What is Animal Crossing? 

Dave Bittner: [00:06:54]  It's a game. It's, like - it's a game on Facebook, I think. 

Joe Carrigan: [00:06:57]  OK. 

Dave Bittner: [00:06:58]  It's impossible to delete your account. They say, we do not delete or terminate accounts on Animal Crossing. If you no longer wish to use the site, you may delete all personal information from your profile and then stop logging in. But you'll still be there. 

Joe Carrigan: [00:07:10]  Yeah. OK. 

Dave Bittner: [00:07:10]  (Laughter). 

Joe Carrigan: [00:07:11]  I have a solution for this, though. 

Dave Bittner: [00:07:12]  Yeah? 

Joe Carrigan: [00:07:13]  When you're signing up, you just use a bunch of fake information. 

Dave Bittner: [00:07:16]  Aha. Well, Joe, funny you should say that. 

Joe Carrigan: [00:07:18]  Yes? 

Dave Bittner: [00:07:19]  Because part of this website is a fake identity generator. 

Joe Carrigan: [00:07:22]  Ooh. 

Dave Bittner: [00:07:23]  (Laughter). 

Joe Carrigan: [00:07:23]  Really? 

Dave Bittner: [00:07:23]  Right (laughter). So if you want to use one of these services - you know, so here's what you do. You go, you look on this directory. You say, I'm thinking about signing up for an Animal Crossing account. I think it's a fun game. My friends have recommended it to me. But first, I should go check on this website to see how difficult it is, should the time come when I no longer want to play this game. And you see it's impossible. So then you say, ah, well, no problem. I will just spin up a fake identity, and away we go. So... 

Joe Carrigan: [00:07:52]  All right. Good. 

Dave Bittner: [00:07:52]  There's different ratings here. There's easy, medium, hard and impossible. So it's a nice little directory. Also, it could be a real time-saver. If you want to delete an account, a lot of these have direct links to the page where the account deletion would be. Another thing, too - this could be a nice tool to go through and just sort of speed up an audit of some of those lingering accounts that you forget you have. 

Joe Carrigan: [00:08:17]  Yeah. 

Dave Bittner: [00:08:18]  Go through and look through this list and say, you know, I may have an account on, I don't know, GitHub or GoDaddy or, you know, somewhere that I don't use anymore. And... 

Joe Carrigan: [00:08:28]  Right. 

Dave Bittner: [00:08:29]  ...Might as well go through and just see if I'm still on there. 

Joe Carrigan: [00:08:32]  It is impossible to delete a Netflix account. 

Dave Bittner: [00:08:34]  Is that right? 

Joe Carrigan: [00:08:35]  Yes. 

Dave Bittner: [00:08:35]  Did not know that. Yeah. Yeah. 

Joe Carrigan: [00:08:37]  So there is one where you can't really use a fake identity, right, 'cause you need a credit card. Maybe you can get one of those prepaid credit cards. 

Dave Bittner: [00:08:43]  Yeah. And there are other services online that'll spin up a decoy credit card number for you (laughter). 

Joe Carrigan: [00:08:48]  Right. 

Dave Bittner: [00:08:48]  I mean, it's to your actual credit card number, but it's not your actual credit card. 

Joe Carrigan: [00:08:52]  Right. It's a temporary credit card number that you can just destroy. 

Dave Bittner: [00:08:55]  Yep. 

Joe Carrigan: [00:08:56]  I think those still have, like, auditing requirements on them. So you still have to give them your real name and stuff like that. 

Dave Bittner: [00:09:00]  Yeah. Could be. It's a tangled web we weave, Joe, with all these different things. But, yeah, Just Delete Me. It looks like an interesting little service there worth checking out. All right, Joe. Well, that is my story. It is time to move on to our Catch of the Day. 

[00:09:16]  (SOUNDBITE OF REELING IN FISHING LINE) 

Dave Bittner: [00:09:20]  Joe, this is a Catch of the Day that was sent to us from someone online. And this one is interesting in that there is almost no punctuation. (Laughter). So... 

Joe Carrigan: [00:09:33]  Was it written by E. E. Cummings? 

Dave Bittner: [00:09:35]  (Laughter). Yeah. I don't know. Yeah. Exactly (laughter). I'm going to do my best here to read through this. Here we go. It starts out, (reading) honey, thank you for your mail and accepting to help me out, and I pray that almighty God will pay you back since you listened to the cry of a widow like me. The federal government of Nigeria have seized all my husband's properties. But the only money that is left for me and my children is this 6.5 million that was deposited in a custody in London, which I want you to receive and deposit to your bank account and arrange for all the area you invest in it for me since I don't have anybody I can run to, and now God has brought to you from heaven to help me out. All I need from you now is to assure me that the money will be safe in your hand and you will not betray me by the time you receive the money from your custody. I need the full assurance from you that you will not act funny because I'm relying on backing hope leaving that you will put me through and invest this funds for me as soon as you receive the funds and deposit into your account. You deduct your 30% first and any expenses you commit to this transaction in other to make sure that the funds get used to successful to will be deducted from the 5% which will be maked out for any expenses that may come up on the process of the transaction which you receive back immediately. The funds get to you. Please kindly let me know the area you invest the funds in your reply to this email, and also forward the copy of your international passport to and your direct telephone number with complete address. I forward here the attach of my family pictures for you to know whom you are dealing with, and also the attach of how the funds were packaged for the security custody. As soon as you get back to me with your required information, I will finalize the arrangement with security custody and let you know when the company delivery agent will arrive your country with funds, which we will not disclose the content of the consignment to the deliver agent because the consignment was deposited to the custody as personal effect. I will give you more details as soon as you get back to me with your information. I would like to receive a copy of your international passport, complete address and direct phone number in your reply. May God bless from you help. Miriam Avaja (ph). 

Joe Carrigan: [00:11:10]  (Laughter). Well, I feel terrible for Miriam Avaja. (Laughter). 

Dave Bittner: [00:11:12]  You know, first thing she needs to do is go out and buy a typewriter that has a period on it. 

Joe Carrigan: [00:11:16]  Not only did the government take all her money, but they took all of her punctuation. 

Dave Bittner: [00:11:19]  They took all of her punctuation away. Yes. 

Joe Carrigan: [00:11:21]  But they left her with just $6 million. 

Dave Bittner: [00:11:25]  How do you survive on that? 

Joe Carrigan: [00:11:26]  I don't know. 

Dave Bittner: [00:11:27]  I don't know. I guess in Nigeria, $6.5 million doesn't go as far as it used to. 

Joe Carrigan: [00:11:30]  No. 

Dave Bittner: [00:11:31]  (Laughter). 

Joe Carrigan: [00:11:31]  It'd go a long way here in the U.S. 

Dave Bittner: [00:11:33]  Yeah. Exactly. Once again, anybody who wants to deposit $6.5 million in my bank account, reach out. 

Joe Carrigan: [00:11:39]  Right. 

Dave Bittner: [00:11:39]  We'll make it happen. Joe, I'll give you 30%. What do you think? 

Joe Carrigan: [00:11:44]  Hey. I think that's great, Dave. I'll take it. 

Dave Bittner: [00:11:46]  (Laughter). 

Joe Carrigan: [00:11:46]  (Laughter). 

Dave Bittner: [00:11:46]  All right. That is our Catch of the Day. Coming up next, we've got Henry Ajder from Deeptrace Labs. They're developing technology to get the upper hand on deepfakes, and he's going to share what they've found. 

Dave Bittner: [00:11:58]  But first a message from our sponsors, KnowBe4. Having spent over a decade as part of the CIA's Center for Cyber Intelligence and the Counterterrorism Mission Center, Rosa Smothers knows the ins and outs of leading cyber operations against terrorists and nation-state adversaries. She's seen firsthand how the bad guys operate. She knows the threat they pose. And she can tell you how to use that knowledge to make organizations like yours a hard target. Get the inside spy scoop and find out why Rosa, now KnowBe4's senior vice president of cyber operations, encourages organizations like yours to maintain a healthy sense of paranoia. Go to knowbe4.com/cia to learn more about this exclusive webinar. That's knowbe4.com/cia. And we thank KnowBe4 for sponsoring our show. 

Dave Bittner: [00:12:56]  And we're back. Joe, I recently had the pleasure of speaking with Henry Ajder. He is from Deeptrace Labs. They are a company that has started up to try to get ahead of this issue with deepfakes. They're developing some interesting technology, and they're also studying the problem. And Henry joins us to share what they found. Here's my conversation with Henry Ajder. 

Henry Ajder: [00:13:17]  Deepfakes as a phrase came around in November of 2017 on a subreddit. And the user who created the subreddit had the name deepfake. So that was where the term was first coined. But the technology behind deepfakes, at least some of the common uses that we see now, are by no means from 2017. They, in fact, originate from an earlier date of around 2015, when the kind of technology that supports deepfakes, which is called a generative adversarial network - a kind of deep-learning neural network - was invented. People have been using these networks to generate synthetic media or fake images and videos for a while. 
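For a concrete picture of the generative adversarial network (GAN) Henry mentions, here is a minimal, hypothetical sketch - not Deeptrace's code, and the toy data, layer sizes and training settings are illustrative assumptions. A generator learns to produce fake samples while a discriminator learns to separate real from fake, and the two are trained against each other.

```python
# Minimal GAN sketch (illustrative only): a generator and a discriminator
# trained adversarially on toy 1-D data. Assumes PyTorch is installed.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 1

# Generator: maps random noise to a synthetic sample.
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: outputs the probability that a sample is real.
discriminator = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, data_dim) * 2 + 5          # stand-in "real" data
    fake = generator(torch.randn(64, latent_dim))     # synthetic samples

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator call its fakes real.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The same adversarial idea, scaled up to images, video and audio, is what underlies the face-swap and voice-synthesis tools described in the interview.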

Henry Ajder: [00:13:52]  The timing for deepfakes, insofar as November 2017, is important, though, because that's when this technology was at a level where it was realistic enough that people could use it and get that level of entertainment from it, and also accessible enough - namely, the open-source code repositories, where people were turning these GANs, these networks, into accessible software, were becoming mainstream, or at least becoming much more accessible, on platforms like Reddit. We've come a long way since November of 2017. It's been almost two years. 

Henry Ajder: [00:14:27]  The term initially was used for just describing what we now call deepfake pornography. So that was, you know, using this software to synthetically swap the face of one individual onto the body of another individual, typically a celebrity, or rather a celebrity's face was swapped onto pornographic footage. But since then, we've seen the term evolve and become much more general in its usage to describe not just face swaps and images but also videos where, for example, synthetic lip synchronization takes place, where an audio track is produced and lip movements are synthetically altered to match that, or synthetic facial reenactment, where in real time someone's facial movements are recreated in, like, an avatar of another person. 

Henry Ajder: [00:15:14]  We've seen deepfake voice audio, so synthetic audio being created based on data of someone else's organic voice, and people being able to create custom voice audio of people saying things they've never said. And we've also seen general images, highly realistic synthetic images of nonexistent people, emerging as well. So all of these kinds of synthetic media are now being broadly described as deepfakes. 

Dave Bittner: [00:15:40]  Broadly speaking, what is the peril here? What is the thing that we're worried about on an individual level and, I suppose, as a society as well? 

Henry Ajder: [00:15:48]  So I think the key problem that we are seeing with deepfakes is the ability to fundamentally deceive someone in various different use cases. So, of course, in the initial use case of pornography - which in our work we found to be by far the most prominent use case currently being deployed - we are talking about, you know, people being shamed, a new form of revenge pornography where people are being synthetically inserted into pornographic scenes that are then released or tactically spread, and the immense psychological and emotional harm that that can cause, especially for women. We found 100% of deepfake pornography involves women. 

Henry Ajder: [00:16:25]  More generally, we're looking at advanced misinformation and disinformation capabilities, which in turn influence democratic processes and political processes. Some of the examples which are frequently referenced are, you know, synthetic tapes of Trump's voice being used, or another politician's voice being used to say something they've never said, which could then tactically be leaked to the press through an investigative reporter or something like this on the eve of an election and cause, you know, significant damage to trust in political processes and in the mainstream media. 

Henry Ajder: [00:16:55]  And also in the cybersecurity space, we're seeing growing concern about the ability of synthetic voice audio in particular right now to enhance social engineering attacks, such as fraud and impersonation attacks, where someone could use synthetic voice audio to impersonate a CEO or another C-suite executive to move money or to make key business decisions. So, you know, there are multiple vectors. I think, at a general level, deepfakes threaten any process where audiovisual media is used to inform key decision-making or key communications. 

Dave Bittner: [00:17:31]  So you all at Deeptrace are trying to come at this problem. Can you describe to us - what are you all up to? 

Henry Ajder: [00:17:38]  So what we're doing at Deeptrace is we see our work as building an antivirus for deepfakes. So as I mentioned with the use cases just now, some of the uses of deepfakes could mirror cybersecurity attacks. But also, we see, for example, fake audio or images or videos circulating on social media or on other platforms as something which needs a defensive layer, a layer of protection between individuals and organizations and the media they consume. So by analogy, if you get sent a suspicious-looking link or document, typically through a work email or through a platform with antivirus, you'll get a notification, or you'll get some kind of warning that what you may be opening could be malicious or could not be what it appears to be. So we see a similar parallel with the emergence of deepfakes, which is that the threats that deepfakes pose mean that we may, in the near future, not be able to reliably trust our own perceptual abilities or our capacity to identify what is real and what is not. 

Henry Ajder: [00:18:35]  So that's why we describe it as an antivirus for deepfakes. What does that consist of? So at the moment, our focus is on developing tools for both monitoring and detecting deepfakes. We are specifically focusing right now on images and videos, and so synthetic video and images generated by these networks. And the way that these work is we essentially look for, or rather the systems we are training look for, kind of digital fingerprints which are left behind on these synthetically generated outputs, these videos or images, which the human eye can't pick up on. And through training these models, these detection systems we're developing, these systems learn to kind of spot these telltale signs, these forensic clues, and can provide a kind of confidence score on the authenticity of an image or a video alongside that detection. 
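The detection approach Henry outlines can be pictured as a small classifier trained on real and synthetic frames that, at inference time, returns a confidence score on authenticity. This is only a sketch under stated assumptions - the architecture and the authenticity_score helper are hypothetical, not Deeptrace's actual system.

```python
# Illustrative deepfake-detector sketch (not Deeptrace's system): a small CNN
# that would be trained to spot artifacts in synthetic frames and report a
# confidence score on authenticity. Assumes PyTorch is installed.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1), nn.Sigmoid(),   # probability that a frame is synthetic
)

def authenticity_score(frame: torch.Tensor) -> float:
    """Return a 0-1 confidence that a frame is authentic (1 = likely real)."""
    with torch.no_grad():
        p_synthetic = detector(frame.unsqueeze(0)).item()
    return 1.0 - p_synthetic

# Usage with a dummy 3x224x224 frame; a real deployment would first load
# trained weights and preprocess actual video frames.
frame = torch.rand(3, 224, 224)
print(f"authenticity confidence: {authenticity_score(frame):.2f}")
```

In practice, per the interview, such scores would sit alongside monitoring tools rather than act as a silver bullet.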

Henry Ajder: [00:19:19]  So as I said, we're also developing monitoring tools which can be used to help both organizations, companies and individuals kind of keep track of perhaps their images being used or their brand logos being used or things like this in context where we've detected synthetic manipulation or entirely synthetic content. 

Dave Bittner: [00:19:35]  I remember decades ago when Photoshop was first introduced and we heard similar concerns, that we would never be able to trust photographs that we saw in newspapers or anywhere in print again. What's different about deepfakes when it comes to audio and video? Because it does strike me that there is something different about this. 

Henry Ajder: [00:19:55]  I think, you know, a lot of us are highly unaware of just how much Photoshop goes into a lot of the media that we do consume on a daily basis. In the case of deepfakes, I think it's worth saying that there has been quite a lot of very sensational coverage surrounding deepfakes - people saying things like, deepfakes are going to cause World War III through fake videos of politicians saying they're going to nuke another country, or that one fake video could cause the U.S. 2020 election to fall into abject chaos. 

Henry Ajder: [00:20:22]  In terms of why I think it's different, I think - you know, we've had CGI - you think of "Jurassic Park" or think of other films where you've had kind of CGI dinosaurs or things that clearly aren't real. They haven't affected us because we've been able to suspend our belief in a way. And also, CGI was something that, you know, was very difficult to do well. Only a very small handful of people could do it. It was very expensive and very time consuming. 

Henry Ajder: [00:20:44]  What deepfakes represent is the commodification of highly realistic synthetic media. You know, we are seeing, as I said, these tools spreading on open-source code websites. They're being developed into apps which have user-friendly interfaces. In our report, we found services - professional service portals - being set up essentially as businesses to create deepfakes. I think the difference is that at no point in history have we had the ability to synthetically manipulate audiovisual media in this way, with those capabilities open to so many people. 

Henry Ajder: [00:21:18]  I think there's been a naivete - almost a sacred quality that people have attributed to video and audio - which perhaps previously would have been tenable but I think is now outdated. And when I speak to people about deepfakes, when someone asks me what I do and I show them, there's a real kind of visceral experience that these people go through, which is: I am getting fooled. Even though I know this is fake, if I didn't know it was fake, I would be none the wiser. And so I think that's where there's this real difference, is that deepfakes do present unprecedented capabilities for synthetic media manipulation. These are more sophisticated than Photoshop and impact mediums that have not previously been available for synthetic manipulation in the way they are now with deepfakes. 

Dave Bittner: [00:21:57]  What are your recommendations for people out there to be able to calibrate an appropriate amount of concern for this? 

Henry Ajder: [00:22:07]  I think, you know, as I said, it's important to state that we have time. It's not like deepfakes are perfect right now, that all synthetic media generated by these technologies are indistinguishable from authentic media and that everyone in the land, you know, can get hold of them. That's not the case. You know, the majority of deepfakes online aren't the best quality. And although some are of very good quality, especially some of the still images, video in particular has a bit of a way to go before I think we're going to be seeing highly, highly convincing deepfake video. 

Henry Ajder: [00:22:37]  However, it is still in the very near future, and I think it's something that we need to prepare for now. To paraphrase our friends at WITNESS, which is a human rights organization that works with people to use digital evidence and digital media to report on human rights abuses, our friend Sam Gregory - he says now is the time to prepare, not panic. And I think that's really important, and that's what we're trying to do at Deeptrace. 

Henry Ajder: [00:23:00]  What can people do to prepare for deepfakes? As our technology and our work on it suggest, we think in the future it will be the case that, with the naked eye or the naked ear, so to speak, you won't reliably be able to tell. It's especially hard if individuals are consuming these videos or audio on social media platforms, where images are scrolling past rapidly, and the nature of those experiences is not about studying each piece of media very closely. So that's where we think detection tools will play an important role - certainly not the only role. We don't believe there is a silver bullet in detection for solving the issue of deepfakes. And this is where we think education is going to be crucial. That's something people can do: try to research the new kinds of techniques which are being developed and watch the videos of deepfakes which are available, which can demonstrate some of the techniques being used. 

Henry Ajder: [00:23:52]  I think one thing that's important to mention and to really emphasize here is that it's somewhat hazardous to say that there are telltale signs of deepfakes that people can look out for. The reason for this is that, you know, deepfakes are constantly evolving, and the techniques for generating them are continually improving. A good example of why this is problematic is that, back in November, I think, of 2018, a paper came out saying that blinking was a good way of telling if a video was a deepfake. If the person couldn't blink very well, then it was probably fake because, obviously, the images used to train these algorithms are mostly of people with their eyes open. And so that was used as a kind of example of, you know, OK, this is one of the telltale signs. The problem was that the techniques rapidly evolved, and blinking is no longer a huge issue for a lot of the techniques for generating deepfakes. But a lot of the media coverage surrounding this look-for-blinking advice remains online to this day. And we still get people saying, oh, you know, but I can just see if, in an image or a video, the person blinks or not. And obviously, that's no longer a viable way of discerning whether something is a deepfake or not. 

Henry Ajder: [00:25:01]  So in some respects, giving out advice on what to look out for could actually be more damaging because it instills a false sense of confidence as to how you can detect if something is fake or not. So this is why we think that detection is very important in combination with educational processes and a general move towards encouraging critical assessment of what you consume online, particularly when it comes to audiovisual media. 

Dave Bittner: [00:25:27]  Joe, what do you think? 

Joe Carrigan: [00:25:28]  You know, Henry says that these things have been around for a while, and he talks about the generative adversarial network technology. And for a while, he means four years, since 2015 (laughter). 

Dave Bittner: [00:25:38]  Yeah. 

Joe Carrigan: [00:25:39]  That's not for a while. I mean, that's relatively recently, I would think, right? 

Dave Bittner: [00:25:43]  Well, I don't know. In computer terms, I guess (laughter). 

Joe Carrigan: [00:25:45]  Exactly, in computer terms - that's kind of what I'm getting at. These things move quickly. Completely synthetic media is now a reality, and good synthetic media is coming soon. Circulating this fake media is definitely something that needs to be defended against. I completely understand the reason for his company existing, and I think it's a good market to be in. His point about Photoshop was interesting. We do see a lot of photoshopped imagery on a daily basis, particularly in advertising. 

Dave Bittner: [00:26:12]  Right. 

Joe Carrigan: [00:26:13]  Right? Advertising is not going to show you images of people who haven't been photoshopped. Just not the case. 

Dave Bittner: [00:26:19]  I know, from the video side of things, there have been plenty of plug-ins available to basically smooth out people's skin tones in a similar way, just... 

Joe Carrigan: [00:26:27]  Right. 

Dave Bittner: [00:26:28]  ...Take out blemishes. But it works on moving images. 

Joe Carrigan: [00:26:31]  Well, a moving image is just a collection of still images, right? 

Dave Bittner: [00:26:33]  Yep. 

Joe Carrigan: [00:26:34]  I'm glad to hear Henry say the 2020 election is probably not going to be impacted by a deepfake. But at the rate things are going, I'm wondering about the 2024 election and how that's going to work (laughter). And we talked about using it in cinema. You know, I can suspend a lot of disbelief when I walk into a movie theater. It's one of the reasons I really enjoy the movie, "Dude, Where's My Car?" Right? 

Dave Bittner: [00:26:54]  (Laughter). 

Joe Carrigan: [00:26:54]  Because I can suspend enough disbelief that I can believe that two idiots can save the universe from destruction. So... 

Dave Bittner: [00:27:02]  Yeah. 

Joe Carrigan: [00:27:03]  ...I enjoy the movie. I think it's funny. 

Dave Bittner: [00:27:05]  You know, my first recollection of something deepfake-esque... 

Joe Carrigan: [00:27:08]  Yeah. 

Dave Bittner: [00:27:09]  ...Was the movie "Titanic," which was 1997. 

Joe Carrigan: [00:27:13]  Yeah. 

Dave Bittner: [00:27:13]  That was the first instance I can remember of them putting the actors' faces on stunt people's bodies, on moving imagery. 

Joe Carrigan: [00:27:23]  OK. Right. 

Dave Bittner: [00:27:23]  So you had a stunt person running at the camera, you know, full frame. You could see them clear as day. And they mapped the actors' faces onto the stunt people's bodies in a convincing way, where you didn't know it wasn't the actor because why would you think that? 

Joe Carrigan: [00:27:38]  Right. Yeah. 

Dave Bittner: [00:27:39]  You know, you say, boy, those actors are doing their own stunts. 

Joe Carrigan: [00:27:41]  Right. 

Dave Bittner: [00:27:41]  But they were not. 

Joe Carrigan: [00:27:42]  Well - and that's OK. That's perfectly fine. 

Dave Bittner: [00:27:44]  Sure. 

Joe Carrigan: [00:27:44]  Right? To have somebody important say, I'm going to nuke this other country, you know, on the news, that's not OK. (Laughter) Right? 

Dave Bittner: [00:27:51]  Yeah. 

Joe Carrigan: [00:27:52]  That's bad. 

Dave Bittner: [00:27:53]  Yeah. 

Joe Carrigan: [00:27:53]  A couple of things. I liked that he said: prepare, don't panic. 

Dave Bittner: [00:27:56]  Right (laughter). Yeah. 

Joe Carrigan: [00:27:57]  That's always a good thing no matter what it is, but it's particularly true here. Also, I liked what he said about not looking for telltale signs, because as soon as something becomes a telltale sign, they'll fix the algorithms and it won't be a telltale sign anymore. And there's little more dangerous than a false sense of security, you know. 

Dave Bittner: [00:28:14]  Yeah. 

Joe Carrigan: [00:28:15]  And that's what you're going to equip yourself with when you look for these telltale signs, is a false sense of security. 

Dave Bittner: [00:28:19]  It makes me wonder. I've heard of people who investigate crimes, where they'll keep certain techniques in their back pocket. There are things that they look for; they won't tell anybody what they are because they don't want that information to get out there. They want to be able to tell that something is a certain way or a fake or, you know, something like that. But they want to keep that information to themselves. And I wonder if there's value in that with something like this, where the people behind the scenes who are doing the research maybe share that information. But soon as you put it out there, the bad guys are going to say, oh, watch this. 

Joe Carrigan: [00:28:53]  Right. Yeah. Maybe they can get rid of the artifacts that they're detecting with their AI or their machine learning in these tools and then generate a deepfake that is truly indistinguishable from a genuine piece of media. 

Dave Bittner: [00:29:04]  Yeah. Seems like a bit of an arms race. 

Joe Carrigan: [00:29:06]  It is, yeah. 

Dave Bittner: [00:29:07]  A rapidly developing arms race. 

Joe Carrigan: [00:29:09]  Right (laughter). 

Dave Bittner: [00:29:10]  Yeah. 

Joe Carrigan: [00:29:11]  Media literacy is key - I agree with that statement. 

Dave Bittner: [00:29:13]  Yeah, absolutely. Absolutely. 

Joe Carrigan: [00:29:15]  Yep. 

Dave Bittner: [00:29:15]  All right. Well, that is our show. We want to thank all of you for listening, and we also want to thank our sponsors at KnowBe4. They are the social engineering experts and the pioneers of new-school security awareness training. Be sure to take advantage of their free phishing test, which you can find at knowbe4.com/phishtest. Think of KnowBe4 for your security training. We want to thank the Johns Hopkins University Information Security Institute for their participation. You can learn more at isi.jhu.edu. 

Dave Bittner: [00:29:41]  The "Hacking Humans" podcast is proudly produced in Maryland at the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. Our coordinating producer is Jennifer Eiben. Our executive editor is Peter Kilpe. I'm Dave Bittner. 

Joe Carrigan: [00:29:54]  And I'm Joe Carrigan. 

Dave Bittner: [00:29:55]  Thanks for listening.