The CyberWire Daily Podcast 11.29.22
Ep 1712 | 11.29.22

DDoS as a holiday-season threat to e-commerce. TikTok challenge spreads malware. Meta's GDPR fine. US Cyber Command describes support for Ukraine's cyber defense.


Dave Bittner: A look at DDoS as a holiday season threat to e-commerce. A TikTok challenge spreads malware. Meta's GDPR fine. Mr. Security Answer Person John Pescatore has thoughts on phishing-resistant MFA. Joe Carrigan describes Intel's latest efforts to thwart deepfakes. And U.S. Cyber Command describes support for Ukraine's cyber defense.

Dave Bittner: From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Tuesday, November 29, 2022. 

Dave Bittner: Good day to you all. We trust you survived Cyber Monday with your wallet and your reason intact and that you weren't drawn into an uncontrollable vortex of avaricious delirium or at least you didn't, you know, spend too much or get ripped off. Anyway, today, by recent tradition, is Giving Tuesday. We hope everyone takes a moment to think about donating to some good cause and that, when you do, you remember to stay safe. 

DDoS as a holiday-season threat to e-commerce.

Dave Bittner: Now on to the news. We tend to think of cyber crime during the holidays as basically representing the threat of fraud. That it surely does. And the possibility of being scammed is rightly at the top of the online shopper's mind. But that's not the only threat out there. Fraud is a demand-side threat, but there are supply-side threats, too. So while consumers look to protect themselves from scams when shopping online during the holidays, retailers face an additional challenge - DDoS attacks intended to make their sites unavailable to customers. 

Dave Bittner: Bloomberg Law reports that the motives for such attacks against e-commerce sites vary. They can be anything from extortion by a gang to economic disruption by a nation-state's intelligence service. They can range from hacktivist protest to some loser out to cause trouble for the simple lulz (ph) trouble brings. While distributed denial of service attacks are usually of relatively short duration, measured in minutes or at most hours and almost never lasting for days, they can nonetheless exact a significant toll from affected merchants. 

Dave Bittner: Online commerce is time-sensitive. If the designer galoshes you intended to buy can't be purchased because the merchant's site is unavailable, you, e-consumer, will probably just bop on over to a competitor and buy them there. Unfortunately, as Bloomberg Law points out, the merchants who are victims seldom have any realistic legal recourse to DDoS attacks. Often you don't know who the attackers are or where they are. And even when you can find these things out, the perpetrators are commonly out of reach anyway, holed up somewhere a protective government will neither extradite them nor respect the ruling of a court. Better to take precautions against DDoS than to try suing the perpetrators after the fact. 
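One common precaution alluded to here is rate limiting at the network edge. A token-bucket limiter is one generic building block (this is an illustrative sketch, not any particular vendor's DDoS product): it lets ordinary shoppers through while shedding request floods from an abusive source.

```python
# Token-bucket rate limiter: a generic sketch of one DDoS precaution.
# Each request spends one token; tokens refill at a steady rate, so
# bursts are tolerated up to `capacity` and sustained floods are shed.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens replenished per second
        self.capacity = capacity    # maximum burst allowance
        self.tokens = capacity      # start full
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last request.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True             # request served
        return False                # request shed; client should back off

bucket = TokenBucket(rate=5, capacity=10)       # ~5 req/sec, burst of 10
results = [bucket.allow() for _ in range(20)]   # a sudden burst of 20
print(results.count(True))  # roughly the burst capacity gets through
```

In practice this runs per client IP (or per subnet) in front of the application, which is why a distributed attack from many sources is harder to stop than a single noisy client.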

TikTok challenge spreads malware.

Dave Bittner: Have you seen the latest TikTok challenge, TikTokers? It involves asking you to pose naked using a filter called Invisible Body. But it's OK, probably even safe for work - not that we'd recommend it - because Invisible Body replaces the unclad version of you with a blurred outline. And, of course, the story doesn't end there. Those of you of a certain age will remember ads in old comic books for X-ray specs, cheap and bogus glasses that supposedly would let the wearer see beneath people's clothes. It turns out the market for X-ray specs has been updated to the digital age because fraudsters are offering a filter that takes out the blurriness just the way X-ray specs would do away with those tiresome clothes. Anywho (ph), as you can imagine, the defilterizing (ph) filter is a scam. Not only does it not work, but it carries the WASP info-stealing malware as a payload. Researchers at security firm Checkmarx sourly observe that more than 30,000 people with nothing better to do have joined the attackers' Discord server. And it's trending. 

Meta's GDPR fine.

Dave Bittner: The Irish Data Protection Commission has fined Facebook's corporate parent Meta 265 million euros over a breach that affected personal information of hundreds of millions of Facebook users, the BBC reports. The case is an unusual one in that most of the data obtained and subsequently dumped on an online forum had been scraped and not hacked. The Data Protection Commission found Meta in violation of Article 25 of GDPR. The Commission noted in its decision that this wasn't Facebook's first brush with unwelcome and illicit data scraping. The BBC quotes a Facebook spokesman as saying, we made changes to our systems during the time in question, including removing the ability to scrape our features in this way using phone numbers. Unauthorized data scraping is unacceptable and against our rules, and we will continue working with our peers on this industry challenge. We are reviewing this decision carefully. 

US Cyber Command describes support for Ukraine's cyber defense.

Dave Bittner: U.S. Cyber Command yesterday released a brief and general account that provides some additional insight into when U.S. support for Ukraine's cyberdefense began and what the nature of that support was. The U.S. Cyber National Mission Force deployed a large hunt forward team in December of last year to work with Ukraine's own cyber command. That initial deployment continued through March of this year. Despite the aggressive-sounding name, hunt forward operations are, according to U.S. Cyber Command, defensive in nature. The hunting is conducted in the networks being defended. They say, hunt forward operations are purely defensive activities, and operations are informed by intelligence. 

Dave Bittner: While U.S. Cyber National Mission Force personnel are no longer physically deployed in Ukraine, direct support of Ukraine's cyberdefenses continues. The agency says, CYBERCOM remains committed and continues to provide support to Ukraine, other allies and partner nations, with U.S. joint forces aligned and supporting the European theater. This support included information sharing of threats and cyber insights, such as indicators of compromise and malware. For example, in July 2022, CNMF publicly disclosed novel indicators to cybersecurity industry partners in close collaboration with the security service of Ukraine. None of this, of course, takes away from the work Ukraine's cyber operators have done to defend their country's networks. But it does shed some additional light on why Russian cyber offensives have generally fizzled. So good hunting forward, Cyber National Mission Force. 

Dave Bittner: Coming up after the break, Mr. Security Answer Person, John Pescatore has thoughts on phishing-resistant MFA. Joe Carrigan describes Intel's latest efforts to thwart deep fakes. Stick around. 

Digitized Voice: Mr. Security Answer Person. Mr. Security Answer Person. 

John Pescatore: Hi, I'm John Pescatore, Mr. Security Answer Person. Our question for today's episode comes via email. The question is, I hear folks on this podcast say that FIDO keys are better than one-time codes 'cause even a person-in-the-middle attack will fail. Can you explain how this works, and importantly, when a person-in-the-middle attack could still work, such as if there are misconfigurations? Thanks. 

John Pescatore: Well, Mr. Security Answer Person tends to specialize in tongue-in-cheek answers to broad, lightweight questions. So this one is a nice change of pace. Let's first set a baseline here, though. Reusable passwords are the root vulnerability for over 80% of successful breaches. Reusable passwords are a form of what-you-know authentication, just like mother's-maiden-name and color-of-first-car type answers are. These approaches rely on a shared secret, or as NIST defines it, a secret used in authentication that is known to the subscriber and the verifier. That shared secret is what enables phishing to succeed. If an attacker can get between you, the subscriber, and the site you want to access, the verifier, or trick you into giving up the secret directly, the game is over. 

John Pescatore: Public-key-based authentication does not have a shared secret. All entities have a private key that they share with no one and a cryptographically related public key that has to be maintained at a trusted site like a directory service or certificate authority. The elegant math behind all this allows a subscriber to be verified as long as there is a common and reliably accessible source of trustable public keys, which has been the obstacle to adoption in the past. 
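The challenge-response idea behind public-key authentication can be sketched with toy textbook RSA (illustrative parameters only, orders of magnitude too small for real use): the verifier sends a fresh random challenge, the subscriber signs it with a private key that never leaves their device, and the verifier checks the signature with the public key. No shared secret ever crosses the wire.

```python
# Toy RSA challenge-response (illustrative only, not secure).
import secrets

# Tiny textbook RSA keypair -- real keys are 2048+ bits.
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (kept secret)

def sign(message: int, priv: int) -> int:
    # Subscriber: sign the challenge with the private key.
    return pow(message, priv, n)

def verify(message: int, signature: int, pub: int) -> bool:
    # Verifier: check the signature using only the public key.
    return pow(signature, pub, n) == message

challenge = secrets.randbelow(n)        # verifier picks a fresh challenge
signature = sign(challenge, d)          # subscriber responds
assert verify(challenge, signature, e)  # verifier accepts
```

A phisher who intercepts the exchange learns nothing reusable: the signature is valid only for that one challenge, and the private key itself was never transmitted.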

John Pescatore: In the old days, when the telephone was the heart of all communications, we had trustable centralized directory services such as dialing 411 or using hard-copy phone books. Cell phones and the internet broke that. There are no central directories of email addresses or cell phone numbers. Instead, silos of directories evolved, mostly Microsoft Active Directories at businesses, or contact lists on cell phones, or bookmarks in web browsers, or email-service-specific address books for individuals. Efforts in the past to agree upon standards and trusted third-party directories failed because of the big IT players. Want to see the history? Search on Sun's Liberty Alliance versus Microsoft Passport. And the big business players like banks all wanted to maintain control of user enrollment and authentication so that no one could get between them and their customers. 

John Pescatore: But in recent years, the cost of successful phishing attacks has changed the economic equations for businesses and cell phone users. Cell phone users have become accustomed to strongly authenticating to their phones via fingerprint sensors and facial recognition, and to having high-value services like banks require one-time codes texted to their phones to prevent phishing from succeeding. All of that has caused today's big IT players, namely Apple, Google and Microsoft, to, at least for now, put down their swords and play nicely together in backing the FIDO2 WebAuthn standards for what has become known as phishing-resistant multifactor authentication or passkeys. Done right, passkeys can be created for logins and stored on iPhones, Android phones, and even Windows PCs and used across a variety of services and platforms with high barriers against man-in-the-middle and other attacks. 

John Pescatore: So here's, finally, the direct answer to your question. SMS text messages for multifactor authentication greatly raise the bar against phishing but are still vulnerable to man-in-the-middle attacks and bypass attacks too. When done right, passkeys implementing FIDO2 WebAuthn standards are very secure. However, misconfigurations are still possible, and backup processes for when something goes wrong need to be in place and tested. 
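The reason a proxied phishing site fails against a passkey is that the browser scopes each credential to the site's origin. Here is a minimal simulation of that idea (hypothetical class and method names, not the actual WebAuthn API, which involves challenges, signatures, and attestation):

```python
# Minimal simulation of WebAuthn-style origin binding.
# A credential is scoped to the relying party's origin; a look-alike
# phishing domain can never invoke the real site's key.
import secrets

class Authenticator:
    """Stands in for a phone or security key holding per-origin credentials."""
    def __init__(self):
        self._keys = {}  # origin -> credential

    def register(self, origin: str) -> None:
        self._keys[origin] = secrets.token_hex(16)

    def assert_for(self, origin: str):
        # The browser supplies the *actual* origin of the page asking to
        # log in; the user cannot be tricked into overriding it.
        # An unknown origin simply yields no credential.
        return self._keys.get(origin)

auth = Authenticator()
auth.register("https://bank.example")

# Legitimate login: the browser reports the real origin.
assert auth.assert_for("https://bank.example") is not None

# Person-in-the-middle phishing site proxying the real one: the browser
# reports the phishing origin, so the authenticator offers nothing.
assert auth.assert_for("https://bank-login.example") is None
```

Contrast this with an SMS code: the user can be talked into typing a texted code into the wrong site, but there is no way to talk an authenticator into producing an assertion for an origin it never registered.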

John Pescatore: Note above, I said, when done right. We are still in the very early days of passkey implementation and adoption. As we all know, very rarely is software anywhere near trustable before version 3.0 or sometimes version 30.0. Look for many reports of vulnerabilities in early implementations. We also have to see whether the major social media platforms join the standards bandwagon and whether all the IT providers avoid the temptation to vary from those standards. None of those concerns should slow anyone down from moving from reusable passwords to standards-based passkeys early and often. Seatbelts were uncomfortable at first, and airbags sometimes deployed unexpectedly early on, but they have saved millions of lives since those early days. If you really want to poke a stick in the eyes of the criminals, passkeys are a great stick to use. 

Digitized Voice: Mr. Security Answer Person. 

John Pescatore: Thanks for listening. I'm John Pescatore, Mr. Security Answer Person. 

Digitized Voice: Mr. Security Answer Person. 

Dave Bittner: Mr. Security Answer Person with John Pescatore airs the last Tuesday of each month right here on the CyberWire. Send your questions for Mr. Security Answer Person to 

Dave Bittner: And joining me once again is Joe Carrigan. He is from Harbor Labs and the Johns Hopkins University Information Security Institute and my co-host over on the "Hacking Humans" podcast. Hello, Joe. 

Joe Carrigan: Hi, Dave. How are you? 

Dave Bittner: Doing well, thanks. Interesting article from the folks over at VentureBeat - this was written by Sharon Goldman, and it's titled "Intel unveils real-time deepfake detector, claims 96% accuracy rate." What's going on here, Joe? 

Joe Carrigan: So this is called FakeCatcher, and Intel is saying that it is the first real-time detector of deepfakes, with a 96% accuracy rate - just like you said. But it is using something interesting. It is not looking for artifacts within the actual deepfake. It's working specifically on video. It is based on a technique called photoplethysmography. And that's a very hard word to say, so I'm just going to say PPG from now on. 

Dave Bittner: Right. 

Joe Carrigan: And what PPG is, is it is a measurement of what's going on in your skin, in this case, of the amount of blood that flows in and out of your skin. You see, every time your heart beats, the amount of blood in your - in all your blood vessels changes. And that is measurable in computer vision systems. Your skin actually gets a little redder when that happens because there's more blood closer to the surface. You and I never see it because our eyes are not as sensitive as a computer camera is. 

Dave Bittner: Yeah. 

Joe Carrigan: And there's been tons of different things where you can go and look at samples of colors that are one bit off, and you can't tell the difference. But a computer can tell the difference very easily. 
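The measurement Joe describes, averaging skin-pixel intensity frame by frame and looking for a periodic component near the heart rate, can be sketched roughly like this. This is a toy illustration of the PPG idea, not Intel's actual method; the frame values and rates here are synthetic.

```python
# Toy PPG sketch: a per-frame "skin redness" time series from a live
# face carries a tiny oscillation near the heart rate (~1-2 Hz); a
# static fake does not. A brute-force DFT finds the dominant frequency.
import math

fps = 30
heart_hz = 1.2  # ~72 beats per minute
# Synthetic per-frame mean skin intensity: baseline 100 plus a
# sub-1% oscillation -- invisible to the eye, measurable by a camera.
frames = [100 + 0.5 * math.sin(2 * math.pi * heart_hz * t / fps)
          for t in range(300)]

def dominant_freq(signal, fps):
    """Return the strongest frequency (Hz) in a mean-centered signal."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):       # brute-force DFT bins
        re = sum(c * math.cos(2 * math.pi * k * t / n)
                 for t, c in enumerate(centered))
        im = sum(c * math.sin(2 * math.pi * k * t / n)
                 for t, c in enumerate(centered))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fps / n

print(round(dominant_freq(frames, fps), 1))  # prints 1.2
```

A deepfake that doesn't synthesize this subtle periodic color change would show no plausible heart-rate peak, which is the signal a PPG-based detector keys on.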

Dave Bittner: Right. 

Joe Carrigan: So if you have something with a higher grain sensitivity, if you will, then you can easily detect that someone's heart is actually pumping. And in fact, Face ID uses the same technology or something very similar to it. And you and I were talking earlier, and one of the points that you made is that Face ID doesn't work on a cadaver. 

Dave Bittner: Right. Right. 

Joe Carrigan: So if you need to unlock someone's phone and they've already passed, putting their face up to it won't work because their heart is not pumping, and there's no blood flow change in their face. 

Dave Bittner: Yeah. 

Joe Carrigan: And Face ID won't work. 

Dave Bittner: My understanding with Face ID is that it's - in fact, it's using infrared illumination, which really highlights the - you know, the blood pumping through the veins. It - I think it - you know, it sees through that first layer of skin. 

Joe Carrigan: That's comforting. 


Joe Carrigan: Apple is seeing through your first layer of skin. 

Dave Bittner: Well, you know... 

Joe Carrigan: Yeah. You make your sacrifices for your companies, right? 

Dave Bittner: Right. Right. 

Joe Carrigan: They go on in this article and talk about how important it is to detect deepfakes and be able to identify them. And they talk about the history of the challenges with it. In 2020, there was a group from Google and Berkeley that showed that AI systems that were trained to distinguish between real and synthetic content were susceptible to adversarial attacks. And Intel is claiming here that their method is less susceptible because there's no big PPG data set out there to use to build fakes that will be able to get by their detector, which is interesting, I think. I'm hopeful about that comment, but I'm also a little skeptical about it. I think that - I don't think that this is something that's remarkably hard to fake. Maybe it is. Maybe it's something that's very, very hard to fake without a large data set. 

Dave Bittner: Right. 

Joe Carrigan: And if that's the case, maybe there are some adversaries out there that will begin building large data sets of these things, or large enough data sets. They don't have to be super large. They just need to be large enough to fool this particular model. 

Dave Bittner: Yeah. 

Joe Carrigan: And of course, what does that mean? And that means, well, we're looking at another arms race situation. 

Dave Bittner: (Laughter). 

Joe Carrigan: Right now it looks like Intel is in the lead. 

Dave Bittner: Right. 

Joe Carrigan: The PPG-based deepfake detector. But I am worried that in the future, this will just become another part of the deepfake-generation software that's already out there. 

Dave Bittner: Yeah. Yeah. I mean, it strikes me that if it's looking for a rhythmic, slight, subtle change in the color of someone's skin tone... 

Joe Carrigan: Right. 

Dave Bittner: ...That that's not a hard thing to write a filter to do automatically over any video, really. 

Joe Carrigan: Right. 

Dave Bittner: Yeah. 

Joe Carrigan: And that's my fear. Now, maybe there's more to it that Intel is not discussing here and... 

Dave Bittner: Yeah. Yeah. 

Joe Carrigan: Which I would almost guarantee is the case (laughter). 

Dave Bittner: Yeah, sure. 

Joe Carrigan: They're not telling you all their trade secrets in a media interview, right? 

Dave Bittner: Right. 

Joe Carrigan: So maybe you need to get on the back end and figure it out, figure out what it is, run a bunch of tests and see what happens. 

Dave Bittner: Yeah. 

Joe Carrigan: But, you know - or maybe you just need to look at a real video yourself and start looking at the data sets and seeing what it is and then maybe just coding it. I don't know. I would like to think that Intel's right here. I would really, really, really like to think that. 

Dave Bittner: Yeah. 

Joe Carrigan: I am not as optimistic as this spokesperson from Intel is. 

Dave Bittner: (Laughter). 

Joe Carrigan: But that's just me, right? 

Dave Bittner: Yeah. Well, it is good that they're working on this. I mean, it shows that there's... 

Joe Carrigan: It is. No, I think this is good work. I'm not... 

Dave Bittner: Yeah. 

Joe Carrigan: I don't mean to disparage the work. The work is good and important work. 

Dave Bittner: Yeah. 

Joe Carrigan: And I think that everything we can have that can authenticate media is great. 

Dave Bittner: Yeah. 

Joe Carrigan: And I appreciate Intel doing this. 

Dave Bittner: All right. Well, again, the work from Intel is titled FakeCatcher, and this article comes from the folks over at VentureBeat. Joe Carrigan, thanks for joining us. 

Joe Carrigan: It's my pleasure, Dave. 

Dave Bittner: And that's the CyberWire. For links to all of today's stories, check out our daily briefing at 

Dave Bittner: The CyberWire podcast is a production of N2K Networks, proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. Our amazing CyberWire team is Elliott Peltzman, Tre Hester, Brandon Karpf, Eliana White, Puru Prakash, Liz Irvin, Rachel Gelfand, Tim Nodar, Joe Carrigan, Carole Theriault, Maria Varmazis, Ben Yelin, Nick Veliky, Gina Johnson, Bennett Moe, Catherine Murphy, Janene Daly, Chris Russell, John Petrik, Jennifer Eiben, Rick Howard, Peter Kilpe, Simone Petrella, and I'm Dave Bittner. Thanks for listening. We'll see you back here tomorrow.