Hacking Humans 6.20.24
Ep 295 | 6.20.24

From dark shadows to main stage.

Transcript

Because it's not just this person standing in front of a two-dimensional green screen anymore at this point. It's a person in a fully three-dimensional environment.

Dave Bittner: Greetings to all and a warm welcome to the "Hacking Humans" podcast, brought to you by N2K CyberWire. Every week, we look into the world of social engineering scams, phishing plots and criminal activities that are grabbing headlines and causing harm to organizations all over the world. I'm Dave Bittner and joining me is Joe Carrigan from the Johns Hopkins University Information Security Institute. Hey there, Joe.

Joe Carrigan: Hi, Dave.

Dave Bittner: We got some good stories to share this week. And later in the show, my conversation with Brandon Kovacs. He's a Senior Red Team Consultant at Bishop Fox. We're talking about live deepfakes. [ Music ] All right, Joe. Before we dig into our stories here, we got a little bit of follow-up. What do we got going on here?

Joe Carrigan: We do indeed, Dave. Adina writes in and says, "I listened with interest to the CyberWire story on June 6th where publishers took Google to court over basically enabling people to find pirated content. I have been wondering if software shops could do the same to address malware-serving ads that claim to provide their software, like is happening to the Arc web browser, as discussed in the January -- or June 8th episode of Research Saturday. Could software companies sue Google for enabling bad actors to impersonate them and their copyrighted software? There must be some case to be made there. Since Google is getting paid for this, their responsibility should be even clearer." I agree. "What do you guys think? Have there been any cases of this happening yet, and is this something software makers should pursue? Thanks for all you do." So, this is a good question and a good idea, I think. Not only could software manufacturers sue, but perhaps hotels could sue, because Dave, that's one of the ones that got me. Hotel companies. Although I think for hotel companies -- I don't think there's anything bad about, when I search, like, hotels.com on Google -- which, why don't I just go to hotels.com, right?

Dave Bittner: Yes.

Joe Carrigan: Or when I search Marriott on Google, or whatever the name of the hotel site is -- that a competitor can just buy ads on those search returns.

Dave Bittner: Right.

Joe Carrigan: I don't think that's copyright infringement, but--

Dave Bittner: No.

Joe Carrigan: -pretending to be somebody and selling -- serving out malicious software, that's certainly copyright infringement, and Google is participating.

Dave Bittner: Yes. So, a couple things. We -- actually, we got a message from a listener about how -- I believe that it was the listener's mother who had had some home improvement work done and was having some issues with the work, and so did a Google search on the company that did the work.

Joe Carrigan: Right.

Dave Bittner: And it turns out the company's competitor had bought up all the searches to that company's name. So, this person did the search, thing popped up and it was the competitor. Our listener's mother called the competitor thinking she was calling the company that did the work.

Joe Carrigan: I've done this numerous times.

Dave Bittner: Oh, really?

Joe Carrigan: Yes. Happens to me all the time.

Dave Bittner: And it took her a long time to figure out, like, "Wait a minute? This isn't who I want to be talking to?" And you know, "Why are these people trying to sell me something else?"

Joe Carrigan: Right.

Dave Bittner: You know, that sort of a thing. So, that certainly happens all the time. In terms of pirated software, I mean, they could try. And there is a -- there's a trade organization that deals with pirated software. Basically, they had a phone number you could call. The joke was that the number was 1-800-RATFINK. Like, if your company was using pirated copies of Microsoft Office or--

Joe Carrigan: Right.

Dave Bittner: -pirated copies of Adobe Photoshop or something like that.

Joe Carrigan: Yes.

Dave Bittner: And--

Joe Carrigan: And you'd get some kind of money back from them, right?

Dave Bittner: -you'd get -- yes, there was some kind of a bounty, that sort of thing. You know, I don't know the degree to which the Googles of the world get to hide behind Section 230 of the Communications Decency Act, which says that they're just a platform and therefore not liable for this. And I'm not sure the fact that they're getting paid makes a difference. I'm trying to, you know, analogize it: in the old days of classified ads in the newspaper, if I ran a classified ad and it turned out that ad was a scam, is the newspaper responsible? Could someone sue the newspaper?

Joe Carrigan: Right.

Dave Bittner: I mean first of all, as our colleague Ben Yelin would say over on the "Caveat" podcast, "You can sue anybody for anything." Right? Like, it doesn't--

Joe Carrigan: Yes.

Dave Bittner: -whether or not you have a case, or you're going to be spinning your wheels or wasting your money, or possibly get charged the other party's attorney fees for wasting their time.

Joe Carrigan: Right.

Dave Bittner: I would imagine, given that the software industry has gone so far as to spin up a trade organization to fight software piracy, that this is certainly on their radar, and they've considered it. But I think there's a limited amount of return on investment on that sort of thing. Now, I agree that I wish there was something that could be done to Google and Facebook and the other companies who are allowing these fraudulent ads to run.

Joe Carrigan: Right.

Dave Bittner: And I'm sure if we asked Google they'd say, "You should see the stuff that we do block."

Joe Carrigan: Right.

Dave Bittner: Right? So, to be fair, it's not like they're not blocking, but boy, is there room for improvement? And as you and I have pointed out many times, it's against their interest--

Joe Carrigan: Right.

Dave Bittner: -because they're making money. They have a perverse incentive. They make money off of every ad. And so, I agree with our listener here that it would be great if maybe our legislators could put some teeth into something that would make the Googles of the world or the Facebooks of the world stand up and take notice and have to do something about it, to make it not worth their interest, or maybe come at it a different way. Make it worth their time to block these things.

Joe Carrigan: Right. That would be the thing.

Dave Bittner: So, it's a problem. Well, thanks to our listener for writing in. We do appreciate that, and of course, we would love to hear from you. If you want to email us, our address is hackinghumans@n2k.com. All right, Joe, let's move into our stories here. And I'm actually going to continue using some listener feedback here. I'm going to use it--

Joe Carrigan: Excellent.

Dave Bittner: -as my story.

Joe Carrigan: Good.

Dave Bittner: This came from a listener named Tony who wrote in with a very interesting tale. And Tony wrote in and said, "Hi, Dave, Joe and Maria. I had an interesting experience recently, but I'm not sure where it fits. It's not really a Catch-of-the-Day but possibly closer to a cautionary tale." Tony goes on and says, "The night before I was to travel overseas, I was working through backup plans for dealing with a lost or stolen phone, an iPhone, while I was away. Previously, I had worked out the best solution was to buy a new phone and use the Trusted Number feature to get back into my iCloud account." So, it sounds like what Tony is describing here is basically maybe having a burner phone.

Joe Carrigan: Is he planning on his iPhone being stolen?

Dave Bittner: Well, I think he's just planning against--

Joe Carrigan: Or planning for the eventuality of that? Or--?

Dave Bittner: -he's planning -- he's hedging his bets, I think.

Joe Carrigan: Okay.

Dave Bittner: He's planning against the possibility of that happening.

Joe Carrigan: So, he knows -- maybe he's going to England where guys on bikes just snatch iPhones?

Dave Bittner: Sure.

Joe Carrigan: Right?

Dave Bittner: So, Tony goes on and says, "An alternative solution was to take a second phone, which had already been connected to my Apple account, and keep it powered off and separate from my primary phone. I have an old iPhone, which I did a factory reset on, and started to set up with the basic apps I thought I'd need: email, password manager, VPN, and my banking app. While I was setting up the banking app, I discovered I had been locked out of my internet banking account."

Joe Carrigan: Interesting.

Dave Bittner: "I checked on my primary phone and I was locked out of the banking app there as well. At this point, I realized the VPN was still running on the second phone, so I turned it off but still couldn't log in. At that point, my phone rang with an interstate number I did not recognize. I've listened to many "Hacking Humans" episodes, and immediately thought it might be a scammer who had coincidentally got access to my phone." So, Tony has done us the favor of interjecting questions along the way in his story.

Joe Carrigan: Okay.

Dave Bittner: So, the first question is, "Should I answer the phone?"

Joe Carrigan: Right.

Dave Bittner: Joe?

Joe Carrigan: I say yes, but only because I know what -- I think I know what's coming.

Dave Bittner: Okay.

Joe Carrigan: But, I mean, generally speaking, it's -- what is it? Are they spoofing his bank numbers or just giving him -- just calling him?

Dave Bittner: It's not a number he recognizes.

Joe Carrigan: Yes.

Dave Bittner: It doesn't come up as anything. Just a random number.

Joe Carrigan: Okay, in that case, I say no.

Dave Bittner: Yes. I would agree. I generally don't -- well, I don't answer.

Joe Carrigan: Yes, I don't answer phones. But I get mad when people don't answer my calls, right?

Dave Bittner: Well, sure. I generally don't answer the phone if it's not anyone who's already in my address book.

Joe Carrigan: Right. Right.

Dave Bittner: So, Tony says, "I answered the phone and the woman on the other end greeted me by name, and said she was from the bank's Fraud Department, and they had locked my internet banking account. She wanted me to provide some identifying information." All right, next question from Tony, "Should I continue with the call and provide that information?"

Joe Carrigan: No. That is when you say -- okay. That's what I thought this was, so if you answer the call, you go, "Oh, no. That's what I thought this was. I'm going to call you back."

Dave Bittner: Yes.

Joe Carrigan: Just let them go through the panic there.

Dave Bittner: So, this is the classic inbound call.

Joe Carrigan: The inbound call. Right.

Dave Bittner: Right. Unsolicited inbound call.

Joe Carrigan: Yes.

Dave Bittner: All right. Tony says, "Being a good 'Hacking Humans' listener, I knew I should probably call back on a number I knew was correct. I asked if I could call her back, and interestingly, I found myself asking her for a number to call." He says, "I know. I shouldn't do that."

Joe Carrigan: Right.

Dave Bittner: "She did provide the bank's actual toll-free number, so I was more convinced that this wasn't a scam, but of course, it could be a scammer who was hoping this would gain trust. I hung up and called on the toll-free number. This number is a generic entry point for all banking issues, and so I had to navigate through a multitude of options, none of which seemed to fit the issue at hand."

Joe Carrigan: Right.

Dave Bittner: While--

Joe Carrigan: This is awful, by the way. And this is not uncommon in my experience.

Dave Bittner: Yes. "While I was on hold to speak to a human, I went to my computer to try to log onto my account and noticed an alert email seemingly from the bank. I still had my 'Hacking Humans' radar up, so I analyzed the email and its headers. Nothing seemed amiss. After 30 minutes on hold, I was starting to get a bit frustrated. It was getting late, and I needed to get organized for my trip." Tony asks, "Was my frustration from an artificial time pressure I was giving myself?"

Joe Carrigan: Maybe?

Dave Bittner: Possibly?

Joe Carrigan: This is what we talk about frequently, where these guys hit you at the right time--

Dave Bittner: Yes.

Joe Carrigan: -just because they're hitting hundreds of people at the same time, they're going to get somebody at the right time.

Dave Bittner: They might have got lucky.

Joe Carrigan: Right.

Dave Bittner: Yes. Tony says, "The email from the bank seemed legitimate and had some direct contact numbers to the bank's Fraud Department, and these numbers only differed in the last three digits from each other, and the number I had initially received. I decided to call one of the numbers in the email." And Tony asks, "Was I doing the right thing? How much confirmation do I need that this isn't a scam? Am I being too suspicious?"

Joe Carrigan: That's a tough one.

Dave Bittner: Isn't it? Yes, I mean--

Joe Carrigan: I mean--

Dave Bittner: -is there any such thing as being too suspicious?

Joe Carrigan: My opinion is no. You can Google your bank's number -- your bank's name followed by "Fraud Department" or "Fraud Phone Number."

Dave Bittner: Right.

Joe Carrigan: And see if that gets you the same result that's in the email.

Dave Bittner: Yes. But that could be a scam.

Joe Carrigan: You're right. They could be running a fake ad. It could all be scams.

Dave Bittner: Right. It's scams all the way down. All right, so Tony calls one of the numbers and he says, "The call was answered fairly quickly by a person with an Australian accent, and who knew who I was before I'd given my name and he knew about my situation." By the way, I think Tony's Australian, so--

Joe Carrigan: Okay.

Dave Bittner: -the Australian accent tracks.

Joe Carrigan: Right.

Dave Bittner: If you or I called, and it was someone with an Australian accent, that probably wouldn't be reassuring.

Joe Carrigan: Right.

Dave Bittner: Right?

Joe Carrigan: I would be like, "Wait a minute."

Dave Bittner: Right. Right. Okay, so Tony asks, "Should this ring alarm bells?" Person knows who he is before he even gives his name.

Joe Carrigan: No, because I call into one of my financial institutions and they know who I am before--

Dave Bittner: Yes.

Joe Carrigan: -I answer--

Dave Bittner: So, they're using Caller ID.

Joe Carrigan: Right.

Dave Bittner: They have his number on file.

Joe Carrigan: They have my number on file and his number on file too, yes.

Dave Bittner: So, when the representative answers the phone, before they pick up, they've already got a bunch of information.

Joe Carrigan: Yes.

Dave Bittner: Right? "This person also knew that I had recently made a login to a new device using a VPN. At this point, I was pretty sure he was who he said he was and provided the identifying details he needed. He asked me what I had been doing? And it was apparent, he was now making sure that I wasn't a scammer. I relayed the story about what I was doing in regard to setting up a spare phone and identified the model. This convinced him and he unlocked my account. And I confirmed I could log in. He explained that scammers commonly use old iPhones, mine was a 6S-- "

Joe Carrigan: Interesting.

Dave Bittner: "-and VPNs. And this had triggered an alert in their systems which led to an automatic locking of my account."

Joe Carrigan: Okay.

Dave Bittner: "It is now several weeks since this incident and my bank account seems fine. The whole situation has given me more confidence that my bank is on the right track with its cyber security. I've also relearned that doing these things the night before going away is not a good idea."

Joe Carrigan: Right. Maybe a week before going away.

Dave Bittner: "And that VPNs are not always a good thing."

Joe Carrigan: Right.

Dave Bittner: "I realize this story has a happy ending, does not involve scammers, but I think it's important to have success stories along with the ones where things have gone wrong." Well, I agree with that, Tony.

Joe Carrigan: I agree 100%. This bank--

Dave Bittner: Is that a great example?

Joe Carrigan: -yes. This bank is doing a great job. They've noticed that bad guys come in on old iPhones on VPNs. So, if you log in on an old iPhone and a VPN, they go, "Wait a minute?"

Dave Bittner: Right.

Joe Carrigan: "This fits a pattern."

Dave Bittner: This is not right.

Joe Carrigan: Right.

Dave Bittner: Yes.

Joe Carrigan: You know, I use a VPN sometimes.

Dave Bittner: Yes.

Joe Carrigan: And I notice that sometimes -- you know who the biggest blocker is, the one that always needs to verify that you're human?

Dave Bittner: Who's that?

Joe Carrigan: It's Google.

Dave Bittner: Oh.

Joe Carrigan: You can't do a Google search on a VPN most of the time.

Dave Bittner: Really?

Joe Carrigan: Yes, because they want -- you know, a lot of people use VPNs for illicit purposes.

Dave Bittner: Yes?

Joe Carrigan: And I guess the one that I pay for, also has people that pay for it that use it for bad guy things. And--

Dave Bittner: So, what happens if you try to use Google with a VPN?

Joe Carrigan: You know, if I just type in -- into the search bar and go to Google, then it says, "Are you a human? Click here." And then you get those little captchas.

Dave Bittner: Oh, okay.

Joe Carrigan: Right? So, what I immediately do is I just go to Bing, and Microsoft doesn't seem to filter this.

Dave Bittner: Oh, interesting.

Joe Carrigan: They're just up for serving out stuff. So, serving out search results.

Dave Bittner: When you're Number 2, you'll do whatever it takes.

Joe Carrigan: Right.

Dave Bittner: Right. Right.

Joe Carrigan: I should be going to DuckDuckGo to begin with. They really don't care where you come in from.

Dave Bittner: Yes.

Joe Carrigan: I truly believe that Google is doing this, not to prevent malicious activity, but to incentivize you to not use a VPN so that they can track you better.

Dave Bittner: Could be. Yes. So, I think there's a lot of interesting things in this story, not the least of which is, as we said, the things that are going on behind the scenes, where Tony's bank was keeping track of the devices that he uses to log in, which is great.

Joe Carrigan: Right.

Dave Bittner: You know? If you, like, I'm trying to think of my own interactions with places like banks or some healthcare organizations. You know, there are basically two computers -- two ways that I log in. Either through my mobile device or my desktop computer.

Joe Carrigan: Right.

Dave Bittner: And they are generally -- the mobile device could be anywhere, but it's the same device, but the desktop computer is always in the same place.

Joe Carrigan: Right. Always has the same IP.

Dave Bittner: Yes.

Joe Carrigan: Yes.

Dave Bittner: So, it makes sense that not just an IP address coming from the other side of the world--

Joe Carrigan: Right.

Dave Bittner: -which we hear about a lot.

Joe Carrigan: Yes.

Dave Bittner: But just a random IP address, or them tracking that it's coming through a VPN. Very interesting.

Joe Carrigan: What I think is really going on here is this bank going above and beyond -- they've got a list of exit nodes for VPNs.

Dave Bittner: Yes.

Joe Carrigan: And they're comparing the traffic against it. And they're able to get the user agent string. They're doing some pretty good behind-the-scenes work. And it's not difficult to do this.

Dave Bittner: Yes.

Joe Carrigan: This information is available to every bank out there.

Dave Bittner: Yes.

Joe Carrigan: But they've put -- they've gone through the process of putting things together and found out -- and done the analysis and they've determined that they're going to lock accounts when they see this. And it's great.

Dave Bittner: Yes.

Joe Carrigan: It was a minor inconvenience for Tony.

Dave Bittner: Right. Right. Yes, better safe than sorry, right?

Joe Carrigan: Right.
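For a concrete sense of the kind of check Joe describes, here is a minimal sketch in Python. The exit-node list, the device names, and the locking rule are all illustrative assumptions, not any real bank's implementation.

```python
# Sketch of a login risk check like the one Tony tripped: a VPN exit
# node plus an old iPhone plus a device the account has never used.
# All data below is made up for illustration.

VPN_EXIT_NODES = {"203.0.113.7", "198.51.100.42"}     # hypothetical known VPN exits
OLD_DEVICES = {"iphone 5s", "iphone 6", "iphone 6s"}  # models the bank flags, per the story

def is_old_device(user_agent: str) -> bool:
    """Very rough device check against the User-Agent string."""
    ua = user_agent.lower().replace(" ", "")
    return any(device.replace(" ", "") in ua for device in OLD_DEVICES)

def should_lock(ip: str, user_agent: str, device_seen_before: bool) -> bool:
    """Lock when a VPN exit, an old device, and an unfamiliar device line up."""
    return ip in VPN_EXIT_NODES and is_old_device(user_agent) and not device_seen_before

# Tony's login: an old iPhone 6S, through a VPN, never used with this account.
print(should_lock("203.0.113.7", "Mozilla/5.0 (iPhone 6S; ...)", False))  # True -> lock
```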

Dave Bittner: Yes. So, well, thank you Tony for sending that in. We do appreciate it. It's a great story. All right, Joe. What do you have for us this week?

Joe Carrigan: Dave, I have two stories because the first one's pretty short.

Dave Bittner: Okay.

Joe Carrigan: It's from Blair Young up at WBAL up here in Baltimore.

Dave Bittner: Okay.

Joe Carrigan: And I don't know, Dave. It's about the Maryland Lottery, which is kind of already a scam, but--

Dave Bittner: As I say, it's a tax on people who don't understand math.

Joe Carrigan: -that's right. But generally speaking, you should never buy a lottery ticket.

Dave Bittner: Right.

Joe Carrigan: It's the worst form of gambling, but even if you do buy lottery tickets, or if you don't, the Maryland Lottery is warning people of a phone scam spoofing the Idaho Lottery. And they're calling people in Maryland saying that you have won a lottery prize from the Idaho Lottery.

Dave Bittner: Okay.

Joe Carrigan: They're saying that you won the Powerball, Dave, which can be a very large jackpot.

Dave Bittner: Right.

Joe Carrigan: So, here's one of the key things you need to know about Powerball. The Lottery Commissions don't know who buys the tickets.

Dave Bittner: Okay.

Joe Carrigan: You walk in, you pay with cash, although there are -- I guess there are like -- you can pay with a debit card, too.

Dave Bittner: Yes.

Joe Carrigan: There's debit card machines on -- debit card readers on the automated machines now.

Dave Bittner: Right.

Joe Carrigan: But I don't -- you know, there's really no way for them to have your phone number. And then somebody calls you saying you won the lottery? No, it's totally on you to claim any prizes that you may have won. So, two things to remember. One, you have to buy a ticket to win. If you didn't buy a ticket, you did not win.

Dave Bittner: Yes, the slogan for the Maryland Lottery used to be, "You Got to Play to Win."

Joe Carrigan: That's right. And Number 2, the lottery will never call you to give you your money.

Dave Bittner: Yes.

Joe Carrigan: That you need to claim it.

Dave Bittner: They don't want you to claim it.

Joe Carrigan: They don't want you to claim it. Because if you don't claim it, after six months, guess who gets it?

Dave Bittner: Right. Right. Right.

Joe Carrigan: All these governments who participate in all these large jackpot lotteries, they get to keep it.

Dave Bittner: Yes.

Joe Carrigan: And you don't get anything. So, they're not incentivized to call you. So, just be aware that there's a lottery scam going around. Don't answer phone calls from the Idaho Lottery, and don't believe anybody that calls and says, "Hey, you won the lottery," because all that is, is an advance-fee scam.

Dave Bittner: Yes.

Joe Carrigan: Where they're going to start hitting you up for fees until you either run out of money or realize it's a scam and stop talking to them.

Dave Bittner: Right. Right.

Joe Carrigan: My second story comes from Don who says, "Hello. I know on your podcast you have often said that you would rather be taken advantage of or -- than risk not giving money to someone who is in need." Yes, to a certain extent.

Dave Bittner: Sure.

Joe Carrigan: Right? I mean, I don't want to lose thousands of dollars to somebody.

Dave Bittner: Right.

Joe Carrigan: I need that money more than somebody else does, I think. "But I ask if you want to encourage this sort of thing?" Well, we don't want to encourage it, no. And he links to a video of a group of scammers who are using a funeral for a child to collect money at intersections.

Dave Bittner: Yes.

Joe Carrigan: Have you seen this?

Dave Bittner: I have.

Joe Carrigan: And Don sent along a video that is a link to a story from WOOD TV 8 out in Grand Rapids, Michigan. And I'm going to tell you right now. You should not do what this reporter does in this story, because he walks up with a cameraman to these people who have signs. They're collecting money for a child's funeral, allegedly.

Dave Bittner: Oh.

Joe Carrigan: Right?

Dave Bittner: Ambush journalism, Joe. Ambush journalism.

Joe Carrigan: Ambush journalism.

Dave Bittner: Yes.

Joe Carrigan: Right. Michael Moore is proud, right?

Dave Bittner: And the team at 60 Minutes. Right.

Joe Carrigan: Hey. But these guys are -- these guys are walking up. These scammers are getting physical with the news crew. They know they're on camera and they have no compunction about hitting these people.

Dave Bittner: Right.

Joe Carrigan: And they do. So, if you see somebody in a -- running one of these scams, you know, don't approach them.

Dave Bittner: Yes.

Joe Carrigan: Don't tell them that they're scammers. Just know that they're scammers and don't give them money. I haven't seen this around here, but apparently these guys are travelling. They are a group of people that has shown up in Grand Rapids and they're moving from town to town. So, once they're done in Grand Rapids, they're going to move somewhere else.

Dave Bittner: Yes.

Joe Carrigan: And at the end of the video, he even accosts somebody who says -- they say, "Are you running a scam?" And the guy -- the scammer goes, "Yes, this is a scam. What more do you need to know?" And then they all pile into a minivan and they leave. Right?

Dave Bittner: Yes.

Joe Carrigan: And it's interesting. We see people like this around here. I don't know if you've seen anything like this, Dave, but there's usually people -- they will sit in the median with -- at shopping centers, with their children--

Dave Bittner: Right.

Joe Carrigan: -and say, you know, "Family out of work. Need help."

Dave Bittner: Right.

Joe Carrigan: And you know, I don't want to say blanketly that that's not a family with a dad out of work that needs help.

Dave Bittner: Yes.

Joe Carrigan: But I kind of don't think that's a family that's out of work with a -- you know, with a dad that's out of work and needs help. I'm a little -- I'm automatically dubious of that.

Dave Bittner: Well, so a couple things. I have heard of this scam around here. I believe what I saw was there was some folks running the funeral scam, I want to say in College Park, which is down where the University of Maryland is. So, I don't know if it's the same group or if you know, word gets around that a scam is successful, and scams spread.

Joe Carrigan: Right.

Dave Bittner: Look, there are plenty of these sorts of situations where people are trying to raise money at an intersection. Plenty of those are scams. I'm sure plenty of them are legitimate people.

Joe Carrigan: Right.

Dave Bittner: I have stopped and spoken to people who are legitimately asking for money at intersections and did my best to try to connect them with local services.

Joe Carrigan: Yes.

Dave Bittner: So, you know, some of them are legit. What I would say is, if you can resist having your heartstrings pulled, give to your local organizations that provide services to these types of folks, you know?

Joe Carrigan: Right.

Dave Bittner: To your homeless shelter. To your food bank. Your -- you know, all of those places where folks who are down on their luck can make use of these services and get the services they need and hopefully put them on a pathway to, you know, a better situation. So, look, I don't think there's -- this is so hard, because--

Joe Carrigan: It is.

Dave Bittner: -your heartstrings get pulled and you want to be helpful, and if you happen to be someone who's been, you know, successful and given many blessings through your life, and you want to try to help people. And that impulse is real and there's nothing wrong with that. But you just -- you've got to be careful.

Joe Carrigan: Yes.

Dave Bittner: And like I said, to me, what I've decided to do just for me personally, your mileage may vary, is I give to the larger organizations, and that way I trust their expertise to do the vetting, and to be able to best and most efficiently provide for the people who need this kind of thing. Yes. This is the worst of the worst. Absolutely.

Joe Carrigan: Yes. So, thanks Don for writing in with that. That's--.

Dave Bittner: Yes, it's good -- I mean, it's a good conversation to have had sparked.

Joe Carrigan: Yes.

Dave Bittner: So, thank you Don for sending it in and I appreciate it. I hope everyone gets a chance to spend some time, you know, just thinking about what are you going to do? How do you handle these things? Like I said, it's not easy. All right, well, let's move on to our Catch-of-the-Day. [ SOUNDBITE OF REELING IN FISHING LINE ] [ Music ]

Joe Carrigan: Dave, our Catch-of-the-Day comes from the Scam Community on Reddit. I don't know what it is, Dave. It's--

Dave Bittner: Well, I will read it.

Joe Carrigan: Right.

Dave Bittner: Okay? It goes like this. "I'm Teresa, a Recruiter from Michael Page, and we've noticed that your background and resume have been recommended by several online recruitment agencies. That's why we want to offer you a part-time job that you can do in your free time. Our job is simple. We just review apps for the App Store. There is no time limit, and you can complete the assessment at home. Daily commissions range from $300 to $1,000, and all payments are made on the same day. You can collect your commission immediately after each day's work. If you'd like to participate, please contact us via WhatsApp. Note you must be over 18 years old."

Joe Carrigan: So, I think I do know what this is, Dave.

Dave Bittner: Okay.

Joe Carrigan: This is those people that buy reviews and they're looking to outsource that. And probably you don't make $300 to $1,000.

Dave Bittner: Yes, I think it could be that. I think it also could just be a task scam. Which is where they draw you in with the promise of an easy task that will pay big money. But then, in order to do this task, you have to either download an app on your phone, or something you log into. And so, the money that you're earning, and I'm putting "earning" in air quotes, gets credited to your account, but if you really want to earn money, then they try to upgrade you to the next level.

Joe Carrigan: And that costs you.

Dave Bittner: And that costs you.

Joe Carrigan: Right.

Dave Bittner: And you can't get your money out until you upgrade. Or if you want to get your money out more quickly, you know, there's all these incentives they give you to turn over just a little bit of money. And then the thousands of dollars will be flowing towards you--

Joe Carrigan: Right.

Dave Bittner: -quickly. So--.

Joe Carrigan: Another use of the sunk cost fallacy.

Dave Bittner: Exactly. Exactly. So, these task scams are pretty common, and it seems like the consensus among the people who responded was that that's most likely what this was, but I think you could be right. It could be another one of those, you know, review farms. Yes, absolutely. All right, well, we would love to hear from you. If there's something you'd like us to consider for our Catch-of-the-Day, you can email us. It's hackinghumans@n2k.com. [ Music ] All right, Joe. I recently had the pleasure of speaking with Brandon Kovacs who is a Senior Red Team Consultant at Bishop Fox. We're talking about some of the work that he's been doing with live deep fakes. Here's my conversation with Brandon Kovacs.

Brandon Kovacs: So, the presentation itself was really the outcome of a lot of research I'd performed on behalf of my company into the whole deep fake and voice cloning scene, if that's what you want to call it. A lot of it really -- my interest in this topic really came out of an event that happened earlier this year, in January, February of this year, in which a finance worker at a multi-national firm in Hong Kong thought he was talking to the Chief Financial Officer of his company, amongst other employees, on a live video call. And on that call, he was essentially instructed to send a wire transfer for $25 million, which he did, because he thought he was talking to the CFO, and it was a live video call, right? Well, it turns out that was a real-time deep fake and voice clone. And that really stoked my interest, you know, because I -- you know, coming from a Red Teamer and especially from the social engineering side of things, I found that a really interesting vector. And this was the first of its kind. And it's not going to go away anytime soon. If anything, it's only going to get worse and worse. So, that really sparked my interest. And after that incident, I was tapped by someone senior in my company who asked, you know, if I would be interested in, you know, studying and trying to figure out how these threat actors are pulling this off.

Dave Bittner: You know, here on our show, my cohost and I have approached this kind of skeptically up until recently, you know? There have been stories about deep fakes, and I think we'd always kind of raised our eyebrows and said, "Well, it doesn't seem like people could do this real time yet," or you know, there'd be lag and so on and so forth. But I have to say you know, some of the demos I've seen recently and then reading some of the research from folks like yourself, it seems like -- well, let me ask you. Is it fair to say we're there?

Brandon Kovacs: Oh, we're sort of there. Once we finish, you know, the interview, I will actually -- I'll switch over to my video feed and I'll give you a sneak peek of what it looks like. I have two clones that I created with permission of the people, you know, obviously. One being Alethe Denis. I believe she was on your podcast a while back. She's a coworker of mine. So, back in, you know, in February when I was, you know, when I was asked by someone on my team if I'd be interested in, you know, looking into how this is pulled off, she was the only one on my team who volunteered and said, "Go ahead and clone me. You know, there's all sorts of podcasts and interviews about me online." So, that was kind of the hypothesis we really wanted to test. Is it possible to create a legitimate-looking deep fake video and voice clone only using public information, right? Information that we find on the internet. Because what we want to emulate -- what we really wanted to test and emulate is, if we can pull it off on Alethe, then we can pull it off on any CEO or any public figure. Right? Just because if you think about it, if you're the CEO of a business, there are plenty of interviews that you've probably done. There's earnings calls, right? And you have great voice samples from that, if desired. So, that was kind of the hypothesis we set upon ourselves to test. And it turns out, you actually can. So, there's a version I created of Alethe, and then I also created another version of my friend Chris, who allowed me to clone him. And again, I'll show you this afterwards. And yes, it's -- we're definitely there. We're definitely there.

Dave Bittner: And what's the availability of the tools to do this? I mean, did you -- is this a matter of you know, you having to cook up custom things or are these things generally available?

Brandon Kovacs: So, that's the scary part. The deep fake tools that I personally used have been around for many years. Specifically for the video cloning itself, it's done using something called DeepFaceLab, and there's a metric they reported. They said that 95% of deep fakes on the internet are created using DeepFaceLab. So, there's that. Then for the vocal cloning aspect, I'm using again another open-source tool called RVC, Retrieval-based Voice Conversion, which allows you to train vocal models against, you know, audio as the input for the training data. But yes, the tools are widely available for anyone to really, you know, use and leverage.

Dave Bittner: Well, let's talk about the execution of something like this, you know? If you were trying to convince me that you were you know, one of the models that you've trained here and you -- you know, you manage to convince me to get on a call with you, what would be going behind the scenes in terms of you know, your preparation and then the things that you would be doing during the call to convince me that the person I was talking to was authentic?

Brandon Kovacs: Yes, totally. So, you have the video side of things and the voice. And as I said, a lot of this is we're inferring this locally. So, to accomplish this, I bought a special laptop. It's really just a gaming laptop, right, with a 4090 graphics card that has 12 gigabytes of VRAM. And yes, I found that that's sufficient for running high resolution models. And by high resolution, I mean around 384 to 512 pixels. So, let me walk you through the whole steps. So, first we have the creation of the models themselves, right? So, you're going to have a video model and a voice model. So, for the video model -- any AI model or any machine learning model, the steps to create and train these models are very similar, right? You have first the collection of the data, where you're sourcing it from wherever you're acquiring it from. Then there's the preparation of the datasets. Right? Once the models are trained, you then have this, you know, pre-trained model which is the final output. So, you do this for both the video and the voice. So, for the video -- there's a lot of sub-steps that are necessary. So, for example, let's say you're doing a video model. The preparation of that data is going to differ from, you know, the prep if you're preparing a vocal model, right? So, let's talk about the voice clone. Let's just start with the audio, for example. So, we're first going to collect the data, right? So, in the case of my coworker, my colleague Alethe, this involved sourcing audio from public sources, whether it's YouTube, interviews, podcasts such as this, right? So, we're going to first collect that data. Then we're going to prepare that data. And that can involve many different sub-steps. Once you have that dataset prepared, then you put it into the RVC trainer where you're training the model through a certain number of rounds. And then finally, you have that final output file, right? So, then you have the final audio model. For video, it's a little bit more in-depth, but from a high-level standpoint, the process is still the same. First, we're going to collect, you know, we're going to collect the data. So, for a video model -- if you're emulating, you know, the bad guys, right? They're not going to call someone and say, "Hey, can I shoot video footage of you?" They're going to acquire it from public footage, you know, on the internet. Right? So, first you have the collection of the data, and then you have the preparation of that video data. First, we're going to extract the frames from all those videos that we've collected from the source and the destination. The source, meaning the person we're trying to clone. The destination is whose face it's going to actually swap and appear on. So, we're doing extraction of the frames to take a video file and pull out thousands of frames. Then those frames go through a process called alignment, in which we're identifying what's called the facial landmarks. Whether it's the eyes, the nose, the mouth, etcetera. We're looking at the pitch and the yaw, you know, the rotation of the head. And we're applying that to both the source and destination datasets. And then finally, we're going to then label that data. So, labeling data when it comes to video deep fake models is through a process called masking. Masking at its high level is essentially you're defining the facial contours and the edges, right? So, my forehead all the way down to my chin and whatnot.
And we're doing that on a number of images from both the source and destination. And the idea here is we're going to teach the trainer what a face looks like, what that person's face looks like. And we're also going to train it against various obstructions. So, for example, with the video model I created of my friend Chris, he's a huge Miami Dolphins fan, and he loves to wear Miami Dolphins hats. So, what we did with him when we were capturing the training data, we had him wear his hat as he normally does. And then I personally also wore the same hat. And what we did is, when we were undergoing the training of the mask, we taught the model to essentially ignore the hat by only grabbing the inner bounds of his face, right? So, that way we could do the inferring with and without wearing a hat. And then finally, you know, once we have that dataset prepared, it undergoes the training, in which, you know, DeepFaceLab attempts to recognize patterns in both the source and destination, learn from it, and make a series of predictions based upon what it knows at that point in time. And what you find is that over time, you know, as you're going from 1,000 iterations to 10,000 iterations, up to, I don't know, 2.5 million iterations, the deep fake video models become incredibly detailed and very lifelike. And then finally, once that completes, you have what's called a DFM file, which is the deep fake model. So, once you have those outputs of the audio model and the video model, it then requires you to infer them in real time, locally too. So, to do that -- once we have the DeepFaceLab model, we're then going to use something called DeepFaceLive, which allows you to infer DeepFaceLab video models, right? So, essentially, you just, you know, pipe in the file, and you're inferring it. It pulls a, you know, a video buffer and it's doing real-time face swaps. So, I'll give you a demonstration after this. So, that's in terms of the video. And then the audio, it's a very similar process. And then from there, yes, it's just a matter of wiring the components together, right? You're taking the output from DeepFaceLive, the outputs from the voice changer, and you're then piping them into something like OBS, you know, virtual studio software, where you can then create a virtual camera. And once you have that, you can then select that virtual camera as your input, you know, if you're using Microsoft Teams, or Zoom, or, you know, this program we're using to record the podcast right now, to essentially, you know, make calls.
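To make that final wiring step concrete, here is a heavily simplified Python sketch of the plumbing Brandon describes, using the opencv-python and pyvirtualcam packages. The swap_face function is a hypothetical placeholder for the model inference that DeepFaceLive performs internally; this is a sketch of the pipeline shape, not DeepFaceLive's actual API.

```python
# Plumbing sketch: webcam frames in, (placeholder) face swap, frames out
# to a virtual camera that Zoom or Teams can select as a video source.
import cv2
import pyvirtualcam

def swap_face(frame):
    # Hypothetical placeholder: a real pipeline would run the trained
    # DFM model here to swap the face in the frame.
    return frame

WIDTH, HEIGHT, FPS = 1280, 720, 30
cap = cv2.VideoCapture(0)  # the physical webcam

with pyvirtualcam.Camera(width=WIDTH, height=HEIGHT, fps=FPS) as cam:
    while True:
        ok, frame = cap.read()                 # BGR frame from the webcam
        if not ok:
            break
        frame = cv2.resize(frame, (WIDTH, HEIGHT))
        frame = swap_face(frame)               # real-time face swap goes here
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # pyvirtualcam expects RGB
        cam.send(frame)                        # push the frame to the virtual camera
        cam.sleep_until_next_frame()           # pace output to the target FPS
```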

Dave Bittner: So, then is the person committing the deep fake here, are you basically just performing yourself as if you were the artificial model and it's just taking your input, your face, your audio, your voice and then converting it in real time to the person you're imitating?

Brandon Kovacs: Yes, that's exactly what we're doing. And also, what I've found is that we're taking a really cinematic approach here. So, I learned a thing or two about production and, you know, creating content, I guess you could say. So, for example, if I were to turn the green screen behind me into, like, an office environment -- you know, I just drop in a static, two-dimensional photo -- I can essentially make it a three-dimensional environment by incorporating something like a cardboard box that's also wrapped in the green chroma-key color, deleting it from the scene so you can't see it. However, what I can do then is put objects on top of it, right? So, let's say in this static two-dimensional image, there's a shelf, you know, that's behind me. I can put a cardboard box at the same height as what the shelf would be, delete it from the scene and then put, for example, a cup of water behind me. So, when I'm in the scene interacting and talking, I can then reach behind me, pick up a water off that shelf, start drinking it. And it really adds that sense of authenticity to the scene, because it's not just this two -- you know, a person standing in front of a two-dimensional green screen anymore at this point. It's a person in a full three-dimensional environment.

Dave Bittner: I'm imagining that you know, let's say you're imitating a CEO or something and you know, the CEO says, "Oh, well, here's a photo of my husband and kids." You know? If you were able to gather that information online from a Facebook profile, or something like that, you know, those little things you could have in the background. And if you could interact with those objects, boy, that would really sell it. And you can do that.

Brandon Kovacs: Yes, and it really elevates the scene and establishes that authenticity to it. And that's what I've really found that works.
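The chroma-key compositing Brandon describes is straightforward to sketch. Here is a minimal OpenCV example that drops everything green, including a green-wrapped box, out of the frame and replaces it with a static backdrop; the file names and the green color range are placeholders that a real setup would tune to its screen and lighting.

```python
# Minimal chroma-key composite: every sufficiently green pixel (the screen
# and the green-wrapped box) is replaced by the backdrop photo, so real
# objects resting on the box appear to sit inside the scene.
import cv2

frame = cv2.imread("webcam_frame.png")           # a captured frame (placeholder file)
backdrop = cv2.imread("office_background.png")   # the static 2D office photo (placeholder)
backdrop = cv2.resize(backdrop, (frame.shape[1], frame.shape[0]))

hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
# Rough hue/saturation/value range for green; tune to the actual screen.
mask = cv2.inRange(hsv, (40, 70, 70), (80, 255, 255))  # 255 where the pixel is green

composite = frame.copy()
composite[mask > 0] = backdrop[mask > 0]         # green pixels become backdrop
cv2.imwrite("composite.png", composite)
```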

Dave Bittner: So, having gone all through this, I mean, where does that leave you now in terms of the -- you know, the recommendations that you're making to people, knowing that these capabilities are, you know, readily available, albeit with a decent amount of work? For someone who really wants to set themselves to do this, there's not a whole lot in the way.

Brandon Kovacs: The technical controls don't exist right now, right, because this is so new. Deep fake detection isn't as good as it should be, right? And that's kind of the whole point of me doing this, because I really thought -- you know, I've always believed that great defense requires offense. And that's why I work as a Red Teamer, right? And being able to train and allow Blue Teams to have this understanding and this knowledge, because you can't really defend against a threat until -- unless you really understand how it works, right, at its most fundamental layers. And if Blue Teamers or defenders or the AI/ML scientists that are out there can understand how the deep fakes are created, then they can start working on, "Okay, how can we then detect them?" And start training models to then detect deep fakes, by using these deep fakes that we're creating as the inputs, right, to then enhance their detections. But, yes. Right now, at the moment, there's really nothing that exists.
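To sketch that defensive loop: one common approach is to fine-tune an off-the-shelf image classifier on frames labeled real versus fake, using generated deep fakes as the fake class. Here is a toy PyTorch version; the "frames" folder layout, the model choice, and the hyperparameters are illustrative assumptions, not a production detector.

```python
# Toy deep fake detector: fine-tune ResNet-18 on frames sorted into
# frames/real and frames/fake (the generated deep fakes are the fakes).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
data = datasets.ImageFolder("frames", transform=tfm)  # subfolders: real/, fake/
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")      # start from ImageNet features
model.fc = nn.Linear(model.fc.in_features, 2)         # two classes: real vs. fake

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):                                # a few passes for illustration
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```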

Dave Bittner: Where do you suppose we're headed with this? I mean, I think we can all envision that pretty soon we'll be able to do this with the devices in our pockets. What does the future look like in terms of being able to believe what you see?

Brandon Kovacs: It's scary. Because as I said, right now, the technical controls don't exist to detect these things, and the barriers to entry are only getting lower and lower. Right? If someone wanted to create, you know, cinematic-quality deep fakes 5 or 10 years ago, it wouldn't be possible. Or at least it wouldn't be possible without, you know, a crazy cluster of GPUs and all this compute and all that good stuff, right? But now, you can create them using consumer-grade hardware, whether it's just, you know, a standard Nvidia graphics card. That's really the barrier to entry right now. And as technology progresses and the technology gets better and better, it's going to lower that barrier even more and allow threat actors to, you know, pull these things off with little to no resources. Like right now, there's obviously a lot of technical considerations that you could take into place, but again, over time, as the technology gets better, and these open-source tools become more and more prevalent and accessible, it's just becoming easier and easier for, you know, [inaudible 00:44:36] to just, you know, use the same tools, to, you know, use them for nefarious and malicious intent.

Dave Bittner: In looking back to that original situation you described that helped set you off on this journey, you know, the person who ended up transferring $25 million, can you imagine any steps that could have been put in place or you know, recommendations you would have for folks to be able to have extra layers of security if someone's trying to do something like this to you that -- extra things you can do to circumvent it?

Brandon Kovacs: Yes, totally. I mean, think of it like this. Tell me something only Dave would know, right? It's like adding a layer of two-factor authentication, in the sense that if you have two trusted parties, and they run into an event where, you know, someone's calling them with a sense of urgency -- you know, what's the secret code that only you know? That's mutually agreed upon. So, that way, if there's a potential incident where, you know, you get a call from someone and they're trying to get you to send a wire transfer or give you access to a network or reset a password -- you know, what is that two-factor authentication code that we agreed upon, you know, months ago?

Dave Bittner: An actual, like, spoken password between the two of us, a shared secret.

Brandon Kovacs: Yes, so what's crazy is around five years ago, I get a phone call from my mom. She's freaking out. She goes, "Where are you? Where are you? What are you doing in Mexico?" And I'm like, "What are you talking about? I literally just woke up. I'm sitting on my couch." And she goes, "You're not in Mexico? You're not hurt? You're not in a car accident?" I'm like, "No. What's going on?" Well, it turns out someone found my phone number. Someone found my grandma's phone number. And someone called my grandma pretending to be me. And they said, "Grandma, I just got into a car accident. The police are here and they're going to arrest me unless I pay them off. Can you go to Western Union right now and send me $4,000?" Guess what my grandma does? She gets in the car, goes straight to Western Union, and sends off 4 grand. That incident could have been prevented had my grandma and I had a mutually agreed upon safe word, per se. So, if -- you know, when this person you know, claiming to be me, calls her, she can say, "Okay, Brandon. I'm happy to do that. What's the password?" right? And that would have mitigated and deflected it and prevented you know, these attackers or scammers, if you want to call them that, from pulling this off. [ Music ]
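The safe-word idea Brandon closes with can even be handled with a little discipline about storage: agree on the phrase out of band, keep only a salted hash of it, and compare a caller's answer against that. A minimal Python sketch, with an obviously made-up phrase:

```python
# Shared-secret check: store a salted PBKDF2 hash of the agreed phrase,
# never the phrase itself, and verify a caller's answer in constant time.
import hashlib
import hmac
import os

def enroll(passphrase: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
    return salt, digest

def verify(answer: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", answer.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

# Agreed months ago, in person (placeholder phrase):
salt, digest = enroll("cinnamon toast at midnight")

print(verify("cinnamon toast at midnight", salt, digest))  # True: it's really them
print(verify("grandma, send money now", salt, digest))     # False: hang up, call back
```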

Dave Bittner: Joe, what do you think?

Joe Carrigan: All right, Dave, so my first question out of the gate is Brandon said he was going to show you a demo of this.

Dave Bittner: Yes.

Joe Carrigan: Did you see the demo?

Dave Bittner: I did.

Joe Carrigan: Did he look like Alethe?

Dave Bittner: He did.

Joe Carrigan: Did he sound like Alethe?

Dave Bittner: Yes.

Joe Carrigan: And it was live?

Dave Bittner: Yes.

Joe Carrigan: Impressive.

Dave Bittner: Yes.

Joe Carrigan: I'm sorry I missed this.

Dave Bittner: Yes, it is. It's impressive. I mean, it really is. You -- and it's uncanny. Like, he's sitting there and I'm you know, and I'm talking to Brandon, and Brandon puts on a hat with a fake wig and presses a button and Brandon turns into someone else. Just like right in front of my very eyes.

Joe Carrigan: Horrifying.

Dave Bittner: You know, Brandon turns into someone -- and that other -- and Brandon's just still talking and it's all happening in real time. The face is moving. The words are coming out. And it's in a different voice and it's a different face. You know, it has kind of that glitchy Zoom look to it. It's not perfect--

Joe Carrigan: Right.

Dave Bittner: -but the fact that it's coming to you online, I think you could easily write it off as--

Joe Carrigan: You'd dismiss the platform.

Dave Bittner: Yes, exactly.

Joe Carrigan: You would absolutely dismiss it.

Dave Bittner: Exactly. Yes, yes.

Joe Carrigan: So, you have these tools out there like DeepFaceLab and DeepFaceLive and then RVC, powered by a simple modern graphics card that might cost you a little more than $1,000.

Dave Bittner: Yes.

Joe Carrigan: I looked it up. They -- he says he's running a -- I think a 4090?

Dave Bittner: Okay?

Joe Carrigan: The DeepFaceLab docs say they recommend a 2070-plus.

Dave Bittner: Okay.

Joe Carrigan: So, it'll run on much older hardware. Now, I have at home a card from a generation earlier, a 1080 Ti--

Dave Bittner: Okay?

Joe Carrigan: -which is pretty comparable to the 2070 in the next generation.

Dave Bittner: Okay?

Joe Carrigan: So, I might be able to do this, Dave?

Dave Bittner: Yes.

Joe Carrigan: So, you might see--

Dave Bittner: Give it a try.

Joe Carrigan: -you might see somebody calling you that is actually me, although it does take a lot of work to get things to work. You have to build these models. You have to train them.

Dave Bittner: Yes.

Joe Carrigan: And I think it's interesting that it accounts for the obstructions like hats and glasses and wigs. And I guess that probably makes it a lot easier to impersonate somebody. Right? Like, if I can buy myself a Dave Bittner wig -- because Dave, your hair is much thicker and more luxurious, and also far darker than my hair.

Dave Bittner: But that's -- see, but yes. And I think you're onto something here, because the hair is real, right? In other words, the hair is not generated by a computer. It's a wig, but it exists in the real world.

Joe Carrigan: Right.

Dave Bittner: That is a much harder thing to simulate.

Joe Carrigan: Yes, it's harder to simulate and render.

Dave Bittner: Right, so all the computer has to do is the face.

Joe Carrigan: Right.

Dave Bittner: Which is being framed by the hair. So--.

Joe Carrigan: It looks -- and that's -- I think that's part of it. Going back into this, you can also add environmental pieces. This was a very troubling interview to listen to, especially when Brandon says that there are no technical defenses against these things. And we need them. I think that's 100% correct. Right now, the only way to defend against something like this is some pre-shared information. You know, like if I say to you, Dave, the password is cinnamon.

Dave Bittner: Right.

Joe Carrigan: Now, everybody will know because this is a podcast.

Dave Bittner: Well, that's your decoy password.

Joe Carrigan: That's my decoy password, right.

Dave Bittner: Right. Right.

Joe Carrigan: So, yes.

Dave Bittner: Right.

Joe Carrigan: Yes, absolutely.

Dave Bittner: Right. So, yes. It's an interesting world, huh?

Joe Carrigan: It is a very interesting world and it's going to be very interesting to see where this winds up.

Dave Bittner: Yes. All right, well our thanks to Brandon Kovacs from Bishop Fox for joining us. We do appreciate him taking the time. [ Music ] That is our show. We want to thank you all for joining us. Our thanks to the Johns Hopkins University Information Security Institute for their participation. You can learn more at isi.jhu.edu. We'd love to know what you think of this podcast. Your feedback ensures we deliver the insights that keep you a step ahead in the rapidly changing world of cyber security. If you like our show, please share a rating and review in your podcast app. Please also fill out the survey in the show notes or send an email to hackinghumans@n2k.com. We're privileged that N2K CyberWire is part of the daily routine of the most influential leaders and operators in the public and private sector, from the Fortune 500 to many of the world's preeminent intelligence and law enforcement agencies. N2K makes it easy for companies to optimize your biggest investment, your people. We make you smarter about your teams, while making your teams smarter. Learn how at n2k.com. This episode is produced by Liz Stokes. Our Executive Producer is Jennifer Eiben. We're mixed by Elliott Peltzman and Trey Hester. Our Executive Editor is Brandon Karpf. Peter Kilpe is our Publisher. I'm Dave Bittner--

Joe Carrigan: And I'm Joe Carrigan.

Dave Bittner: -thanks for listening. [ Music ]