Hacking Humans 9.19.19
Ep 66 | 9.19.19

Algorithms controlling truth in our society.

Transcript

Matt Price: [00:00:00] We may get into this world where we essentially are having AIs creating deepfakes and AIs trying to detect them. So then now these algorithms are controlling the basis of truth in our society. 

Dave Bittner: [00:00:12]  Hello, everyone, and welcome to the CyberWire's "Hacking Humans" podcast, where each week we look behind the social engineering scams, phishing schemes and criminal exploits that are making headlines and taking a heavy toll on organizations around the world. I'm Dave Bittner from the CyberWire. And joining me is our special guest, Graham Cluley. Graham, welcome to the show. 

Graham Cluley: [00:00:31]  Thank you very much. Delighted to be here. 

Dave Bittner: [00:00:33]  Joe is away this week, but we expect him to return next week. We've got some interesting stories to share this week. And later in the show, we have my interview with Matt Price from ZeroFOX. He's been tracking the development of deepfake technology, and he's going to share his insights on that. 

Dave Bittner: [00:00:48]  But first, a word from our sponsors at KnowBe4. So who's got the advantage in cybersecurity, the attacker or the defender? Intelligent people differ on this, but the conventional wisdom is that the advantage goes to the attacker. But why is this? Stay with us, and we'll have some insights from our sponsor, KnowBe4, that puts it all into perspective. 

Dave Bittner: [00:01:16]  And we are back. Graham, I'm going to kick things off this week. And actually, a little change of pace - I have some good news to share this week. 

Graham Cluley: [00:01:23]  Marvelous. 

0:01:23:(LAUGHTER) 

Dave Bittner: [00:01:25]  This is the story of Operation reWired, which is something that the FBI put out word about this week. This is a months-long, multiagency effort to disrupt and dismantle international business email compromise schemes. And it sounds as though they've had some success here. They had 281 arrests - 74 in the U.S., many in Nigeria - shocker. 

Graham Cluley: [00:01:50]  (Laughter). 

Dave Bittner: [00:01:51]  ...Turkey, Ghana, France, Italy, Japan, Kenya, Malaysia, and the United States. They say that they've seized almost $4 million and disrupted some $118 million in fraudulent wire transfers. So I think it's interesting that the good guys have won this time. What is your take on this? 

Graham Cluley: [00:02:09]  Well, this kind of scam is so easy to pull off, isn't it? I'm not really surprised that so many of these arrests have come from Nigeria, which, of course, is the origin of so many of those advance-fee fraud emails which we used to get. You know, when you get a prince or some inheritance coming through to you? 

Dave Bittner: [00:02:25]  Right. 

Graham Cluley: [00:02:26]  And quite often, these business email compromises aren't that much more sophisticated today. They can just be a forged email claiming to come from your boss asking you to move more money around. Or more recently, obviously, they're hacking into emails and using that information to appear even more convincing. 

Dave Bittner: [00:02:42]  Yeah. One thing I wonder about with these is, is this just a drop in the bucket? I suspect that it is. But I don't know about you - there have been times when I've been in maybe a dark mood, and I've thought to myself, you know, so few people get caught with these things; am I in the wrong business? 

Graham Cluley: [00:03:00]  (Laughter). 

Dave Bittner: [00:03:02]  (Laughter) Like, it seems as though you see these people making tens of thousands of dollars a day, and you think, gosh, it's the time of the month when my credit card bill is due... 

Graham Cluley: [00:03:10]  (Laughter). 

Dave Bittner: [00:03:11]  ...What if I just sent out a few ransomware things? Who would know? Who would get caught? You know? 

Graham Cluley: [00:03:16]  I think the listeners to "Hacking Humans" would know, Dave, 'cause you've just announced it to the world, which perhaps wasn't that wise, was it? 

Dave Bittner: [00:03:20]  (Laughter). 

Graham Cluley: [00:03:22]  If this was your grand plot, maybe you should have kept it to yourself, not when the microphone's turned on. 

Dave Bittner: [00:03:27]  Well, I... 

Graham Cluley: [00:03:27]  (Laughter). 

Dave Bittner: [00:03:28]  Yes. Good point, Graham, good point. 

Graham Cluley: [00:03:30]  I think a lot of people probably think like you. People who work in the infosec community think, you know, sometimes, crumbs, these criminals are making so much money with attacks which aren't that sophisticated. Of course, it's not just the fear of getting caught which stops us from doing it; I think many of us actually have a moral compass, and many of us actually think, no, that's actually wrong. And we feel very strongly that these sorts of things are wrong and shouldn't be done. And so you couldn't pay me any amount of money to make me scam some little old lady or a business out of a million dollars. 

Dave Bittner: [00:04:03]  What's that like, Graham? What does that feel like? I want to know. 

Graham Cluley: [00:04:05]  Being good. 

0:04:05:(LAUGHTER) 

Graham Cluley: [00:04:07]  Look, I say that. Obviously, I'm secretly... 

Dave Bittner: [00:04:11]  No. You're right. 

Graham Cluley: [00:04:11]  ...In a little volcano with my feet in the air. 

Dave Bittner: [00:04:13]  Breaking news tomorrow. 

Graham Cluley: [00:04:14]  (Laughter). 

Dave Bittner: [00:04:14]  Right. They're hauling you away in handcuffs. Absolutely, you're right. And I think, you know, clearly, that is a big part of it, is, it's just wrong. But I wonder how much these sorts of things are symbolic in that they plant that seed, they plant that little bit of doubt for people that, hey, it's not free and clear. There is a chance that you're going to get caught. There are people actively out there trying to shut these things down. 

Graham Cluley: [00:04:38]  I'm not sure that arrests like this act as such a huge disincentive, even if people do end up going to jail for a long time. I think some of the people behind these schemes probably come from quite impoverished backgrounds, which have led them to crime in the first place. And in some ways, is it better that they are scamming people via the internet rather than mugging old ladies or doing something more violent? I don't know. I mean, I hate to say it, but is it a better kind of crime than some other kinds of crime? It still does real damage and can destroy people's livelihoods, obviously... 

Dave Bittner: [00:05:15]  Yeah. 

Graham Cluley: [00:05:15]  ...And cause problems from that point of view. But like you, I do suspect there are an awful lot more people doing this kind of thing from all sorts of countries around the world. 

Dave Bittner: [00:05:24]  Yeah. Well, it's good news, and it's always nice to see the good guys have a win like this. That's my story this week. Graham, what do you have to share with us? 

Graham Cluley: [00:05:32]  Well, I've got something a little bit peculiar. You know, on "Hacking Humans" and reading the media headlines, we're always hearing about people's online accounts getting hacked, folks getting phished. You know, the mistakes which they're making. Maybe they're reusing the same password in multiple places. And so the bad guys are breaking into accounts. So the sort of advice which we give people all the time is, make sure you've turned on your two-factor authentication. You know, it's one of the best pieces of advice we can give people... 

Dave Bittner: [00:05:59]  Right. 

Graham Cluley: [00:05:59]  ...As to how to use a piece of technology to remain safer online. And there are a number of companies which help you do this. There are some free apps, obviously, which you can install on your smartphone. But one company is Duo Security. They're actually part of Cisco now, I think. They offer cloud-based multifactor authentication. And I actually use them on my own website. When I try to log in to the back end of my website, Duo sends a little push to my mobile, like a little button which pops up to confirm, yes, it's really me who's logging in. 

Graham Cluley: [00:06:26]  You know, it's fantastic. So if someone else tried to log in, if they happened to get my password, I would get a warning, and I would say, no, don't let them in. But some people think that's a real hassle. Some people think, ugh, what a nuisance that my company makes me authenticate before I log in to this company service, or into this website or onto this account. 

Dave Bittner: [00:06:45]  Sprain your finger. Get blisters. 

Graham Cluley: [00:06:47]  It's so exhausting. I've got so little time. I can't afford those few milliseconds pressing a button. And to address that need, a brand-new website has popped up claiming to take away that hardship. And this website calls itself DontDuo.com. And it's really rather peculiar. Let me just read some of the text from this website. It says, for the cost of a small coffee, DontDuo automatically accepts incoming Duo authentication requests, saving you an average of three hours per month. And it says it will make how you log in to your site more seamless. You just log in with your username and password. You don't have to worry about the authentication. 

Graham Cluley: [00:07:29]  So effectively what you are doing is, you are setting up your authentication, your two-factor authentication, with a phone number which the DontDuo website provides to you. And they have - I don't know. They've either got teams of (laughter) people in the back room or, more likely, some computer system. Every time a request comes in to authenticate, they automatically authenticate on your behalf. Now, Dave, can you see any problems with this? 

Dave Bittner: [00:07:55]  (Laughter) This seems absolutely bonkers to me. 

Graham Cluley: [00:07:58]  (Laughter) Bonkers is the word. 

Dave Bittner: [00:07:59]  You're out - so you're outsourcing... 

Graham Cluley: [00:08:02]  Yes. 

Dave Bittner: [00:08:02]  ...The button. You're outsourcing your second factor. Who would - OK. I'm trying to understand... 

Graham Cluley: [00:08:10]  (Laughter). 

Dave Bittner: [00:08:10]  ...Who - is it? I guess this is for the corporate person who has had this second factor foisted upon them. And... 

Graham Cluley: [00:08:17]  I think so. 

Dave Bittner: [00:08:18]  ...It's a hassle, and they just want to get around it? 

Graham Cluley: [00:08:21]  I think that's exactly it. It's when IT has enforced two-factor authentication upon you, and you greatly begrudge it. And you're so bloody minded you are prepared to pay (laughter) a website to handle those authentication requests for you. And of course, the risk is that if any Tom, Dick, or Harry tries to log in to your account as well, they aren't going to know at DontDuo whether that is really you or Tom, Dick or Harry. And so they will automatically authenticate it. You've basically eradicated your two-factor authenticate - now, I should stress, I don't know if DontDuo.com is a joke or not. It might well be (laughter). Maybe they're trying to prove a point. Maybe they're just being crazy. 

Dave Bittner: [00:09:01]  Mmm hmm. 

Graham Cluley: [00:09:02]  The website looks a bit professional, but then I looked into it in a little bit of detail. They've got terms and conditions and a privacy policy, but they've been autogenerated by a form online. They've just sort of cut and pasted their terms and conditions. There's no contact information. If it is a joke, they're not making it terribly obvious that it's a joke. And there is a risk some people might believe this is for real. 

Dave Bittner: [00:09:24]  Ugh. What a thing, that we're at this point where we don't know if something like this is a joke or not. 

Graham Cluley: [00:09:29]  Indeed. You know, this is - none of us know if we're left or right, up or down these days, do we? The world is so cricked, it's topsy-turvy. 

Dave Bittner: [00:09:36]  (Laughter). 

Graham Cluley: [00:09:37]  But the analogy I would give, though, regarding a site like this, or any service which offers to authenticate for you, is you wouldn't hand over the keys to your kingdom to some nutter on the bus. So why would an attractively designed website make you feel any more confidence in its handling of something like this? So turn on your 2FA, folks, and do it yourself. Don't get someone else to do it for you. 

Dave Bittner: [00:10:02]  Well, and how absurd for the folks who are in charge of security for companies that they'll have to add a line to their dos and don'ts that says we forbid you from using services like DontDuo. 

Graham Cluley: [00:10:15]  Well, you know what? 'Cause I tweeted about this, some people in response actually sent me a screenshot. And in their web-filtering software, they have already marked that website as hacking. So they're basically automatically blocking it to prevent their users from going to it, because they're obviously worried their users might sign up for this. 

Dave Bittner: [00:10:34]  All right. Well, it's a cautionary tale, joke or not. Don't be casual about your two-factor, right? 

Graham Cluley: [00:10:40]  No. I mean, you know, two-factor is something that we should be embracing, you know? We need more of it. We need more websites supporting it. And I hate the idea of any service which actually turns it into treacle. 

Dave Bittner: [00:10:51]  Yeah. All right. Well, it's a good story. But now it's time to move on to our Catch of the Day. 

0:10:57:(SOUNDBITE OF REELING IN FISHING LINE) 

Dave Bittner: [00:11:00]  Our Catch of the Day was sent in by a listener. And evidently, this is claiming to come from Brazil. And Graham, if you've listened to "Hacking Humans" at all, you know that I am a master of dialects. 

Graham Cluley: [00:11:14]  You are. (Laughter) Very good at the old accents? 

Dave Bittner: [00:11:16]  Yes. 

Graham Cluley: [00:11:16]  I'm looking forward to this one. 

Dave Bittner: [00:11:18]  Yes. It's a gift, really. So I'm going to do my best Brazilian accent with this one. It goes a little bit like this. 

Dave Bittner: [00:11:24]  (Reading) Dear beloved one, my name is Cristiano Ronaldo dos Aveiro, a Portuguese professional footballer who plays as a forward for Serie A club Juventus and captains the Portugal national team. I am a good professional soccer player, a good merchant. I have several industrial companies and good share in various banks in the world. I spend all my life as a professional soccer player in Series A club Juventus and captains the Portugal national team. I am writing this letter to people who are really in need of help to contact my attorney urgently so that she can make available preparation on your request, especially people who want to play soccer, parents who lost their job, women of the day who are divorced by their husband and cannot survive from feeding their self. Please contact my attorney and stop weeping. Probably let me know what you really need money for. Yours sincerely, Cristiano Ronaldo. 

Graham Cluley: [00:12:18]  (Laughter) First of all, round of applause for the accent. 

0:12:21:(APPLAUSE) 

Dave Bittner: [00:12:22]  Spot on, isn't it, Graham? 

Graham Cluley: [00:12:24]  It actually reminded me a little bit of Ricardo Montalban. 

Dave Bittner: [00:12:27]  Well... 

Graham Cluley: [00:12:28]  I could have been that little guy. Remember the guy who jumps up and down and goes, the plane, the plane (laughter). 

Dave Bittner: [00:12:31]  Tattoo, yes. Tattoo, yes. Hervé Villechaize, that's right. 

Graham Cluley: [00:12:33]  I think Ricardo was maybe Mexican rather than Brazilian, but, hey, you know, right kind of direction. 

Dave Bittner: [00:12:38]  Well, part of the world, yeah. 

Graham Cluley: [00:12:40]  (Laughter) Exactly. I mean, seriously? I mean, Ronaldo is a superstar in many people's eyes, isn't he? 

Dave Bittner: [00:12:49]  I think so. 

Graham Cluley: [00:12:49]  I mean, my son adores him. He thinks he's fantastic. Despite my trying to put him off him, he thinks he's just, like, wonderful. 

Dave Bittner: [00:12:57]  (Laughter). 

Graham Cluley: [00:12:58]  I'm saying, no, he's a terrible human being. All the time, he pretends he's being fouled. As if he would ever give money away. Oh, yes, he's a big cheat. 

Dave Bittner: [00:13:04]  I see. 

Graham Cluley: [00:13:06]  Am I allowed to say that? (Laughter). 

Dave Bittner: [00:13:06]  All right, OK. 

Graham Cluley: [00:13:07]  In my opinion. 

Dave Bittner: [00:13:07]  See, this is not something that I follow because, as you know, we are the one nation in the world who prefers to play with a pointy ball rather than a round one. 

Graham Cluley: [00:13:17]  Yes, yes. 

0:13:17:(LAUGHTER) 

Graham Cluley: [00:13:18]  No, Ronaldo is either the greatest footballer in the world or the second greatest, him and a chap called, I think, Messi, if I can mention the two that... 

Dave Bittner: [00:13:26]  Oh, yes, OK. I've heard of him. 

Graham Cluley: [00:13:29]  So they do have a ridiculous amount of money, and they probably should share it around a bit more. But I suspect this one isn't entirely for real. 

Dave Bittner: [00:13:37]  Yeah, you think? 

0:13:38:(LAUGHTER) 

Dave Bittner: [00:13:41]  All right. Well, that is our Catch of the Day. Coming up next, we've got my interview with Matt Price from ZeroFOX. He's been tracking the development of deepfake technology, and he's going to share some of his insights on that. 

Dave Bittner: [00:13:52]  But first, a message from our sponsors, KnowBe4. Now let's return to our sponsor's question about the attackers' advantage. Why did the experts think this is so? It's not like a military operation where the defender is thought to have most of the advantages. In cyberspace, the attacker can just keep trying and probing at low risk and low cost, and the attacker only has to be successful once. And as KnowBe4 points out, email filters designed to keep malicious spam out have a 10.5% failure rate. That sounds pretty good. Who wouldn't want to bat nearly .900? But this isn't baseball. If your technical defenses fail in 1 out of 10 tries, you're out of luck and out of business. The last line of defense is your human firewall. Test that firewall with KnowBe4's free phishing test, which you can order up at knowbe4.com/phishtest. That's knowbe4.com/phishtest. 

Dave Bittner: [00:14:58]  And we are back. Next up, we've got my interview with Matt Price. He's from ZeroFOX. And he and his team have been looking at deepfake technology, which has been getting a lot of notice in the news lately. So here's my interview with Matt Price. 

Matt Price: [00:15:12]  Deepfakes are a relatively new thing. The first true publication that came out of academia was in late 2016, I believe, and the first tool that was really released that a fairly tech-savvy user could use to create a deepfake was not released till late 2017. So this is still, like, a really new development. And there's been - even in, like, the past six months, there's been insane developments in terms of just how to create high-fidelity deepfakes, how to do it with only one image and, now, how you can actually manipulate what people are saying by just typing in text, just like you would in a chat, and then the person in the video will then mimic back that text that you have written. If you happen to see the deepfake that they did of Zuckerberg, that's actually what they're doing with the text transcripts. 

Dave Bittner: [00:15:57]  I mean, it's interesting looking back at the history of it because I remember - I mean, it's been a while now that you could see, you know, digital doubles in movies and things like that. I think even, you know, look 20 years or so, looking back to movies like "Titanic," where they were doing some groundbreaking work on that, but I suppose part of what's happening now is just the accessibility of these tools. 

Matt Price: [00:16:17]  It used to cost movie studios millions of dollars to do what we can now do in a few hours on a machine with a couple of powerful GPUs. So just kind of putting together the whole toolchain - the advances that we've had in neural networks, the advances that we've had in hardware, particularly on the GPU side - has just enabled anybody now to really create these kinds of deepfakes. 

Dave Bittner: [00:16:38]  And is it purely a matter that the processing power is available or is it also progress on the actual software itself, people making breakthroughs with that? 

Matt Price: [00:16:49]  Both, definitely the combination of them. Like, the software really would not work today without the hardware, so the hardware had to happen first. But then once the hardware kind of got to the stage where it is now, the algorithms and the models were able to kind of catch up with the various techniques that academics are using today. 

Dave Bittner: [00:17:07]  Well, so let's walk through it together. I mean, suppose someone out there was looking to do a virtual version of me. And, you know, being a podcast host, there's a fair amount of, certainly, audio of me out there but also video as well. How would they go about doing it? 

Matt Price: [00:17:24]  One of the first things that they'd want to determine is exactly what they want you to do. If we're just talking about manipulating the actual video and not any of the audio, the way I would go about it is to go and try to find some fairly high-fidelity images of you online. So you'd go and do that data-gathering process, and then what you do is you essentially feed those images into one of the open-source deepfake model frameworks that are out there - DeepFaceLab, for example - along with the source video. 

Matt Price: [00:17:56]  So maybe the source video is a news presenter. So then you go and find a video of a news presenter. And then what you do is you associate your face with the newscaster's face in that video. And then what the model does is, it starts to map your face onto that news presenter over many, many iterations, like, usually, tens of thousands. Then, eventually, what you end up with is your face on that newscaster. 

Dave Bittner: [00:18:23]  And then once you've gone through that iterative process, is that - do you basically have a virtual puppet of me? 

Matt Price: [00:18:30]  For that video, yes. It's not something - at least not right now, it's not something where I could then just take what that model has learned and just apply it anywhere. It tends - at least, today, it's very specific to that video and that mapping that it learned from your face to that target's face. 

Dave Bittner: [00:18:45]  And so what are the threats here? What do you all have your eyes on there at ZeroFOX in terms of ways that folks might be using this against people? 

Matt Price: [00:18:55]  Deepfakes are already being used against people, primarily in the porn industry right now. And that's really, I think, where deepfakes really started to hit the mainstream - when people started learning that celebrities, or even just, you know, normal, everyday people, were getting their faces mapped onto porn stars' bodies. So that was where deepfakes started off, which, obviously, is a problem. Especially for, like, an average person - if they get mapped onto a porn video and then someone happens to find that while they're interviewing for new jobs, that could be a serious problem, 'cause that's your reputation. 

Matt Price: [00:19:28]  So those are, like, problems that we're already seeing today. The things that start to get really scary from, like, a country perspective or a societal perspective are influence campaigns. And we already saw, like, what Russia was able to do in some of the previous elections without deepfake technology just because they're able to distort the truth. So now with this deepfake technology, like, countries like Russia, possibly China, and even organized crime can now create these deepfakes and essentially sow confusion and change what people perceive as the truth, which is a major problem for any kind of, like, democratic society where you have to have a basis for truth in order to have your discussions and your arguments and make determinations on how to move forward. 

Dave Bittner: [00:20:11]  I would imagine also just the uncertainty of no longer being able to trust that something you're seeing on video is legitimate. 

Matt Price: [00:20:22]  Yeah. I mean, you've probably heard the popular phrase, like, seeing is believing. 

Dave Bittner: [00:20:25]  Right. 

Matt Price: [00:20:26]  That is really no longer the case. And honestly, through the research I've been doing, I've somewhat become paranoid in that I now start, like, really looking at videos. Even when I'm watching TV now, I'm looking at the people. I'm like, is this a deepfake? Is what I'm seeing, like, the actual truth? I mean, it really can distort, like, how you interpret things and how you approach just what you're seeing out there today. 

Dave Bittner: [00:20:46]  Now, what are possible solutions to this? Do we end up with some sort of, you know, chain of custody of video footage? Dare I say some sort of blockchain kind of thing? 

Matt Price: [00:20:58]  Possibly. The problem with that, though, is that it requires everyone that's producing video to buy into that protection chain, essentially - which I don't think is going to be the case. That requires everyone from, like, hardware and software vendors on down to essentially get on board and then release this over time. And even if everyone does do that, it's still going to be many years, if not decades, before it could really start to matter - before you can actually track the custody of a video and what it's gone through. 
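[Editor's note: the chain-of-custody idea Dave raises can be sketched as a simple hash chain, where every record commits to the one before it, so any later tampering is detectable. This is a minimal illustration of the general concept; the field names are hypothetical, and it is not any real provenance standard.]

```python
import hashlib
import json

def add_record(chain, actor, action, content_hash):
    """Append a provenance record whose hash covers the previous record."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    record = {"actor": actor, "action": action,
              "content": content_hash, "prev": prev}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)
    return chain

def verify(chain):
    """Recompute every hash; any tampering breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {k: rec[k] for k in ("actor", "action", "content", "prev")}
        if rec["prev"] != prev or rec["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True

chain = []
add_record(chain, "camera-01", "capture", "abc123")
add_record(chain, "editor-app", "trim", "def456")
print(verify(chain))            # True
chain[0]["content"] = "tampered"
print(verify(chain))            # False
```

The mechanism is simple; as Matt notes, the hard part is adoption - it only works if every camera and editing tool along the way participates.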

Matt Price: [00:21:24]  So I'm not sure that is the ultimate solution. A lot of the solutions right now are just focused on detecting deepfakes. And that's actually what we've been working on, as well, here at ZeroFOX. And this is also somewhat concerning because we may get into this world where we essentially are having AIs creating deepfakes and AIs trying to detect them. So then now these algorithms are controlling the basis of truth in our society, which is a problem in and of itself, as well. 

Dave Bittner: [00:21:51]  Right. 'Cause I could see you getting into - then you have AIs trying to get past the AIs that are trying to detect things, and 'round and 'round you go. 

Matt Price: [00:22:01]  Exactly. 

Dave Bittner: [00:22:02]  So what are some of the ways that you all are tracking in terms of being able to detect these sorts of things? 

Matt Price: [00:22:08]  What we're trying to do is, we're actually trying to build out a set of what I call weak detectors. So individually, none of these detectors by themselves will give you what I would consider a high-fidelity result, like, this video is a deepfake. But when you start looking at the results from many of these detectors run over a video, in aggregate they give you a very good idea of whether the video has been digitally manipulated or not. So that's how we've been tackling it. 
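[Editor's note: the weak-detector approach Matt outlines can be sketched as a simple ensemble - several cheap signals, none trustworthy alone, averaged into one score. The detector names, stand-in arithmetic, and input format below are all hypothetical, not ZeroFOX's actual system.]

```python
# Hypothetical ensemble of "weak detectors": each returns a score in
# [0, 1] for how manipulated a video looks. No single score is trusted;
# the aggregate is. The detector bodies here are stand-in arithmetic.

def blink_rate_detector(video):        # deepfakes often blink oddly
    return video.get("blink_score", 0.5)

def boundary_artifact_detector(video):  # blending seams around the face
    return video.get("seam_score", 0.5)

def temporal_flicker_detector(video):   # frame-to-frame inconsistency
    return video.get("flicker_score", 0.5)

DETECTORS = [blink_rate_detector, boundary_artifact_detector,
             temporal_flicker_detector]

def manipulation_score(video):
    """Aggregate weak detector outputs into one confidence score."""
    return sum(d(video) for d in DETECTORS) / len(DETECTORS)

suspect = {"blink_score": 0.9, "seam_score": 0.8, "flicker_score": 0.7}
print(round(manipulation_score(suspect), 2))  # 0.8
```

A plain average is the simplest aggregation; a real system would more likely weight or learn the combination, but the idea of pooling unreliable signals is the same.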

Matt Price: [00:22:30]  So DARPA's been funding research into deepfakes and their detection for about the past two to three years, and there's actually been a number of papers that have come out of that. And a lot of those are focused on essentially building one fairly powerful model, generally looking at the temporal effects in videos, to detect whether a video is a deepfake or not. 

Dave Bittner: [00:22:51]  You know, I know, for example, on Twitter, there are tools available where you can have the tool analyze a Twitter account and come back at you with a likelihood that that account is a bot or not. Do you see similar types of things popping up, in terms of being able to decide whether or not something's a deepfake? 

Matt Price: [00:23:08]  I think that's where the general consensus of people working on this problem is - I think that's where this is headed. 

Dave Bittner: [00:23:14]  Graham, what do you think about all this? Lots to unpack. 

Graham Cluley: [00:23:18]  You know, this whole deepfake stuff is horrifying, really, isn't it? It feels like a huge challenge, not just to us in the infosec community, but really to society in general. Because the amount of influence which something like this could have upon people, damaging not just brands and reputations, but also being used for propaganda purposes, it's horrific. 

Dave Bittner: [00:23:40]  I agree. Part of me wonders, though, you know, we survived Photoshop, right? 

Graham Cluley: [00:23:45]  Yes. But - we did. And I do remember the first time I saw a Photoshopped image. You know, it was, like, a crowd of us. There was a group photo, and one of us had been Photoshopped into the shape of a banana. 

Dave Bittner: [00:23:57]  (Laughter). 

Graham Cluley: [00:23:57]  And it was a very cool - it was, like, a man-sized banana with a head sticking out. And I thought, my goodness, that's amazing. I was really impressed by that. But there's something so much more believable about moving video. And also, the world has changed since Photoshop first came along, because now we're all consuming information so quickly, so rapidly. And we have the ability to re-share it via social networks. As we know, you know, people don't bother to check the authenticity of these things. They will pass them on just in case it's true. 

Dave Bittner: [00:24:33]  Right. 

Graham Cluley: [00:24:34]  And we thought you should know this. And the lie spreads around the world much more quickly than the correction ever will. 

Dave Bittner: [00:24:40]  Well, and I think it also speaks to that point that we've heard about these information operations that we've seen coming out of Russia and other countries. 

Graham Cluley: [00:24:49]  Yeah. 

Dave Bittner: [00:24:49]  That it may not necessarily be what they're spreading as much as just the uncertainty that it injects into people's minds, or they don't know what they can trust. 

Graham Cluley: [00:25:01]  Because you end up just not being able to trust anything, don't you? I mean, that is the fear. It was very interesting to hear, during the interview, some of the ways in which they're trying to tackle this. 'Cause I do think it's an enormous challenge, because there's always going to be this problem of false negatives and false positives as well, you know, both of which you want to keep to an absolute minimum. 

Graham Cluley: [00:25:20]  But this idea of detecting whether something might be a deepfake by, for instance, trying to determine whether the tweeting account might be a bot or not - I'm not sure if that's really good enough. Because although a bot may be used to initially seed the deepfake, innocent people then begin to re-share the video or download it and post it on a different social network, and those are real people who are maybe passing it on with good intentions and aren't bots under the control of some evil mastermind. 

Dave Bittner: [00:25:50]  Right. 

Graham Cluley: [00:25:50]  And so I think just looking at the origin might be handy for the social networks themselves, if they want to weed these out, but I'm not sure it's so useful for you and me, because the dispersion continues through regular folks. 

Dave Bittner: [00:26:02]  Yeah. And, you know, we touched a little bit on this whole notion of some sort of chain of custody. But I just think, in today's world of sharing things, I don't see how that's practical. 

Graham Cluley: [00:26:13]  No. I really don't know what the answer is to this. It'd be interesting to see where it goes, but I have this horrible feeling that things are just going to get worse, and the media will become less trustworthy, sadly, as a consequence. And that's probably not something which we want happening. But let's talk about the real problem here, Dave - virtual Dave Bittner. 

Dave Bittner: [00:26:32]  Oh, my. 

Graham Cluley: [00:26:33]  I mean, yeah. We're both podcasters. 

Dave Bittner: [00:26:35]  (Laughter). 

Graham Cluley: [00:26:36]  Our voices are very easy to fake 'cause of the amount of material that's out there. The good news for us is that, because we're podcasters, we're also completely broke. So we haven't got much to steal from us. 

Dave Bittner: [00:26:45]  Right, there's nothing to steal. That's true. Right, OK. 

00:26:48:(LAUGHTER) 

Graham Cluley: [00:26:48]  But people in the public eye or company execs, well, there could be much more damage done - couldn't there? - through something like this. It's - I've seen a couple of reports beginning to come out about deepfaked audio in CEO scams. 

Dave Bittner: [00:27:02]  Yeah. 

Graham Cluley: [00:27:03]  I think Symantec, a couple of months ago, said they'd seen this - though I wasn't really clear how the victims knew it was a deepfake rather than just someone who is really good at impressions. I mean, we've heard your fantastic accent already, earlier on. 

Dave Bittner: [00:27:15]  Right. 

00:27:16:(LAUGHTER) 

Dave Bittner: [00:27:17]  Well - yeah. Joe and I talked about this in a recent show, that there was this story where the insurance company and the company who got it... 

Graham Cluley: [00:27:24]  Right. 

Dave Bittner: [00:27:24]  ...Claimed it was some sort of deepfake. And I said, I'm skeptical, because... 

Graham Cluley: [00:27:28]  Right. 

Dave Bittner: [00:27:28]  ...It's just easier to hire someone who's a good mimic. 

Graham Cluley: [00:27:31]  Totally. 

Dave Bittner: [00:27:33]  Do it that way. Why do it the hard way? 

Graham Cluley: [00:27:36]  You see - OK, so listen. Here I am working in the company, as if they'd give me responsibility over money or something like that (laughter). But if I was to move money into someone else's account and then it's found out that I did it, would I say, oh, it was probably a guy who was doing a convincing Brazilian accent? 

Dave Bittner: [00:27:52]  Right. 

Graham Cluley: [00:27:52]  Or would I say, they deepfaked Ronaldo... 

Dave Bittner: [00:27:56]  Precisely. 

Graham Cluley: [00:27:57]  ...And that's how it happened. In a way, it kind of excuses you a little bit. It's like, oh, well, there's no way you could have known, if it was a deepfake. 

Dave Bittner: [00:28:05]  Right, throw your hands up. What could we have done? 

Graham Cluley: [00:28:08]  Yeah. 

Dave Bittner: [00:28:08]  There's no possible way. All right. Well, our thanks to Matt Price from ZeroFOX for joining us, and a big thanks to Graham Cluley for joining us. Graham, thank you so much for filling in this week. 

Graham Cluley: [00:28:20]  It's been a pleasure. 

Dave Bittner: [00:28:21]  Where can folks find out more about you? Tell us a little bit about your podcast and where people can find that. 

Graham Cluley: [00:28:27]  Oh, yes. We do a little podcast called "Smashing Security," found in all good podcast apps and quite a few crummy ones as well. And we have guests each week where we talk about what's going on - we have some very good guests. Did you know that, Dave Bittner? 

Dave Bittner: [00:28:40]  I have heard rumor that some of your guests are outstanding and others merely adequate. 

Graham Cluley: [00:28:46]  (Laughter) Exactly. Naming no names, but yes, exactly. 

Dave Bittner: [00:28:49]  (Laughter). 

Graham Cluley: [00:28:50]  So yes, we do - I do a weekly podcast all about security, "Smashing Security." And also, hang out on Twitter at @gcluley, so follow me there. 

Dave Bittner: [00:28:58]  And your co-host on "Smashing Security" is someone our listeners would be quite familiar with. 

Graham Cluley: [00:29:03]  Yes. 

Dave Bittner: [00:29:03]  Carole Theriault. 

Graham Cluley: [00:29:05]  The very glamorous Carole Theriault, yes. 

Dave Bittner: [00:29:07]  Yes, the one and only. All right. Well, Graham, again, thanks so much for filling in. Joe should be back next week. We want to thank all of you for listening. 

Dave Bittner: [00:29:14]  And of course, we want to thank our sponsors at KnowBe4. They are the social engineering experts and the pioneers of new-school security awareness training. Be sure to take advantage of their free phishing test, which you can find at knowbe4.com/phishtest. Think of KnowBe4 for your security training. 

Dave Bittner: [00:29:30]  Thanks to the Johns Hopkins University Information Security Institute for their participation. You can learn more at isi.jhu.edu. 

Dave Bittner: [00:29:38]  The "Hacking Humans" podcast is proudly produced in Maryland at the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. Our coordinating producer is Jennifer Eiben. Our executive editor is Peter Kilpe. I'm Dave Bittner. 

Graham Cluley: [00:29:52]  And I'm Graham Cluley. 

Dave Bittner: [00:29:53]  Thanks for listening.