The CyberWire Daily Podcast 8.26.21
Ep 1405 | 8.26.21

A quick look back at yesterday’s White House industry meeting. Revolution, coup, or a bit of both? Storytelling for security. Lessons from Olympic scams. Notes from the underworld.

Transcript

Dave Bittner: Outcomes from the White House industry cybersecurity summit. The Cyber Partisans aim at the overthrow of Lukashenko's rule in Minsk. A role for storytelling in security. Scams, sports and streaming. Speculation about the ShinyHunters' next moves. Verizon's Chris Novak on reducing false positives in threat intelligence. Bentsi Ben-Atar from Sepio Systems on the risks of hardware-based attacks, internal abusers, corporate espionage and Wi-Fi. And cybercriminals like their VPNs, too.

Dave Bittner: From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Thursday, August 26, 2021. 

Dave Bittner: U.S. President Biden met with industry leaders yesterday to formalize some national cybersecurity priorities. Among the measures announced were a cooperative program between industry and the National Institute of Standards and Technology to bolster the security of the technology supply chain, and the formal extension of the Industrial Control Systems Cybersecurity Initiative to natural gas pipelines. Participants from industry committed to initiatives ranging from tying insurance coverage to compliance with certain basic security standards, to investing in cyber workforce development, to committing resources to cybersecurity technology. The initiatives that stood out involve developing security standards and offering incentives to follow them, training and workforce enhancement programs, and offers of free services. Zero-trust, multifactor authentication and risk management solutions figure prominently among the free offerings. 

Dave Bittner: The industry leaders in attendance specifically committed to these undertakings. Apple will establish a program to push continuous security improvements in the technology supply chain. The company will work with suppliers, more than 9,000 of them in the United States, to, quote, "drive the mass adoption of multifactor authentication, security training, vulnerability remediation, event logging and incident response," end quote. Google announced an investment of $10 billion over the next five years to expand zero-trust programs, help secure the software supply chain and enhance open-source security. Mountain View will also help 100,000 U.S. workers earn industry-recognized digital skills certificates that will qualify them for tech jobs. 

Dave Bittner: IBM also announced a training initiative. It intends to train 150,000 people in cybersecurity skills over the next three years. It will also partner with historically Black colleges and universities to establish cybersecurity leadership centers. Microsoft will invest $20 billion over the next five years to integrate cybersecurity into design and to develop and deliver advanced security solutions. It will also make $150 million in technical services available to government organizations at the federal, state and local levels. Redmond also plans partnerships with community colleges and not-for-profits to deliver cybersecurity training. Amazon will offer the public, at no charge, the same security awareness training it offers its employees. All AWS account holders will also receive multifactor authentication devices. 

Dave Bittner: Two cyber insurance providers, Resilience and Coalition, also participated in the meetings. Resilience will require policyholders to meet a minimal threshold of cybersecurity best practices as a condition of their coverage. This follows the historical pattern of the role insurers have played in other sectors. Coalition will offer its cybersecurity risk assessment and continuous monitoring platform free to any organization that wants it. We'll have other coverage of the White House cybersecurity summit, including industry reaction, in this afternoon's CyberWire Pro "Policy Briefing." 

Dave Bittner: The Belarusian Cyber Partisans seem to seriously intend the overthrow of President Lukashenko's government. MIT Technology Review reports signs that the Partisans may have help from inside the regime itself, which suggests that, should the regime succumb to this and other pressure - and that seems unlikely, at least in the near term - its fall will be at least as much coup d'etat as revolution. 

Dave Bittner: There's been much discussion of cyber conflict in all of its forms, from direct action by nation-state intelligence services through political hacktivism to cyber privateering. And much of that discussion has been dominated by analogies that seem to suggest themselves - cyber Pearl Harbor, to take one, or cyber 9/11, to take another. Some of these analogies can be lazily or inaptly applied, but it is worth thinking about the role of analogy in a closer, more critical way. Earlier this month, author Nick Shevelyov discussed his new book, "Cyber War And Peace: Building Digital Trust Today With History As Our Guide," with SINET's Robert Rodriguez. Shevelyov's purpose in writing was to explore the role of storytelling in arriving at an understanding of cyber conflict. That storytelling involves not only historical analogies, but also proverbial narratives from myth and folklore. It is, Shevelyov argues, a superior way of arriving at an appreciation of fundamental cyber principles. Sometimes these stories are philosophical, like lessons the Stoics can teach risk managers. 

Nick Shevelyov: Well, the ancient Stoics had this concept of a premeditation of evils, right? Think through all the things that can go wrong in your life and then start to reduce the probability of those events. It's sort of like when you don't know what you really want, well, think about all the things you don't want and then that will help narrow your direction. And so I started this concept and templatized it and operationalized it of pre-mortems, as before we set out on a journey, on a project, on a program, what are all the things that can go wrong, and how do we avoid those? And what are the things that we didn't want to talk about because of incentives and because of a sense of urgency and because of building risk into our very own efforts? You know, risk happens when there's pressure, opportunity and rationalization. 

Dave Bittner: Shevelyov's target audience is, first, business leaders who wish to come to a better understanding of cybersecurity, and second, security practitioners interested in improving their ability to communicate with their teams and the organizations those teams serve. The book has been available since August 18. You can listen to the whole interview on SINET's website.

Dave Bittner: Researchers at security firm Zscaler's ThreatLabz have released a report on scams and adware campaigns that accompanied the recent Tokyo Olympics. The conclusions are instructive because they illustrate the way in which high-profile events in sport and other cultural domains draw the attention of cyber criminals. 

Dave Bittner: One of the more common fraudulent approaches Zscaler observed involved streaming services - suspicious streaming services, ThreatLabz calls them. Quote, "these streaming websites are not associated with legitimate Olympic streaming providers. Instead, the websites claim to provide free access and then request payment credentials from customers. The sites often reuse a template that we've seen for many current events, including NBA, Olympic and football events," end quote. Registering with the dodgy sites requires you to enter personal information, including pay card data. And the consequences of filling in the scammers' forms can be easily imagined. 

Dave Bittner: There's also a lot of adware, and here, too, the bait is usually a free or discount streaming service. Zscaler finds that many of these come-ons redirect to ads for sites devoted to betting, auto trading and the like. They've also seen some direct social engineering intended to get users to install adware. Quote, "we've seen cases where users are redirected to install adware in the form of browser extensions and fake software updaters. In the case below, we can see olympicstreams.me directing users to install the YourStreamSearch browser extension. YourStreamSearch is a known browser hijacker that recommends ads based on search history," end quote. So, sports fans, stream from legitimate sites only. 

Dave Bittner: Digital Shadows looks at the ShinyHunters, the criminal group that claimed to have compromised data held by AT&T - a claim AT&T denies - and notes their shift toward extortion and their here-today, gone-tomorrow mode of operation. Whatever turns out to be the case with the claimed AT&T attack, the ShinyHunters will probably recede temporarily, then reappear with refined technique. 

Dave Bittner: Application security platform vendor Cequence finds that bot operators, like legitimate users, consider virtual private networks useful for obscuring their origin and infrastructure. VPN services that don't limit the number of connections are proving especially valuable for mounting high-volume attacks, which benefit from the cloak a VPN can throw over other online activity. 

Dave Bittner: There are likely a handful of electronic devices in your office that, as far as security is concerned, you tend not to give much thought to. Or, if you do, you're confident that when they were installed and set up, the proper settings were put in place to isolate them from sensitive information. Bentsi Ben-Atar is co-founder of hardware security firm Sepio Systems, and he and his colleagues have published a series of YouTube videos demonstrating common hardware vulnerabilities that could fly under the radar. 

Bentsi Ben-Atar: Obviously, in the boardroom and meeting rooms, there's obviously a lot of sensitive information that is being presented there. So how difficult would it be to capture that data and exfiltrate it to an interested audience, so to speak? So the attacker wanted to get all the information that is being presented, all the video that was being presented on a certain smart TV set that was located in a meeting room. In order to do that, he used an internal abuser, in this case, played by an evil maid who was paid off to do a very simple task - while she's cleaning the room, plug in a Rubber Ducky, which is a device that emulates a keyboard functionality, inject a certain payload. Then, she disconnects the payload. The attack itself is very persistent. And then everything that is being displayed on that TV set is actually being recorded locally. And then the attacker, using a preset access point where the TV connects to, can actually get all the files that were captured and stored locally without setting foot in the victim's office. 

Bentsi Ben-Atar: Now, the cool thing about this attack is that we didn't need to manipulate anything on the TV. So it is actually an out-of-the-box TV, where we used a very simple method, where you actually can control everything on the TV by using the remote control. And every smart TV and TV in general that has a remote control has also a USB input that allows it to connect an external keyboard. So, you know, romantic as I would like it to appear or as professional as I would like it to appear - no reverse engineering, no malwares, you know, flashing malware, just a basic built-in functionality that is being used by the attacker due to the fact that he decided to use the hardware aspect as his attack vehicle instead of trying to force his way in through network interfaces or things like that. 

Bentsi Ben-Atar: I'll tell you another nice story about that. One of the pushbacks that we get is from the data center guys. They would say - argue, you know, to some level of what they believe to be true - that no one can go in and out of a data center with anything. And then I told him, you know, there's one thing that can go in and out without any examination whatsoever. He can visit the facility on a periodic basis, which is an important thing when you're an attacker, and you never check what he brings in. And the guy is the guy that brings the fire extinguishers. And, you know, next to every rack in a data center, there's an automatic fire extinguishing system. So the demo that we did then was to actually implant the baseplate of a fire extinguisher with an implant using a low-range exfiltration radio so that it won't be picked up by any flashy RF sensors or, like, RF geolocation emitters or things like that. 

Bentsi Ben-Atar: And obviously, no one checks it. No one scans the fire extinguishers. And, you know, if you're a subcontractor, then you need to go on a periodic by-ear (ph) basis. And you can, you know, change your storage devices. You can modify where you're connecting your devices. So it's a very easy setup. And this is the - this is exactly the - what we're trying to generate awareness about - the understanding that these devices do not require special capabilities and that the attack tools are readily available, whether through Hak5 shops or through AliExpress online stores. The technology is very much available. You don't need to be the brightest engineers in order to master them. And attackers understand the progress and the landscape of these cybersecurity measures that are being put in place in order to block them. 

Dave Bittner: That's Bentsi Ben-Atar from Sepio Systems. 

Dave Bittner: And I'm pleased to be joined once again by Chris Novak. He is the global director of Verizon's Threat Research Advisory Center. Chris, it is always great to have you back. You know, one of the things that people deal with when they are ingesting threat intelligence is trying to deal with that - what I've heard described as that firehose of information, you know? And in that, you can get a lot of false positives. I know that's something you and your team have been focused on here. What can you share with us? 

Chris Novak: Yeah. Great to be on the show again, Dave. Thanks. You're absolutely right. That's probably the No. 1 complaint. There's almost that groan in the room the moment you say, hey, how about threat intelligence? And you kind of hear everyone go, ugh. That's... 

Dave Bittner: (Laughter). 

Chris Novak: It's like the old days of IDS, you know? Everyone's like, oh, wow, this thing is great. It's got all these blinking lights. I get, you know, a million alerts a day. And you're like, but then you have a million alerts you need to dispatch and figure out, right? 

Dave Bittner: Right. Right. 

Chris Novak: And so we said, look, there's - you know, the famous, there's got to be a better way. You know, so we worked with our team on this. And we said, you know, it was actually kind of interesting from a threat intelligence perspective, us bringing something to the market. I'd say we were actually pretty late to the game, but that was, I would say, maybe almost intentional. You know, part of it was regulatory. For years, we were not able to share because we were a telco and there were various, you know, FCC regulations on what we could and couldn't share. And then, you know, in the mid, I'd say, 2015, 2016 timeframe, there were some changes in the laws that allowed us to share, you know, essentially anonymized, aggregated, essentially cyberdefense kind of data. And so that really kind of took our handcuffs off and said, OK, now we can actually get into this space and share what it is we've always seen, because it was kind of - you know, the way I would describe it to people, it was like we were kind of sitting back, kind of looking at the internet, like you'd see the movie "The Matrix." And we could see everything... 

Dave Bittner: Right. 

Chris Novak: ...Going on. We just couldn't talk about it. So that kind of unleashed our ability to talk about it. And what we actually found was a lot of times when we look at these incidents, we find that they are, you know, rarely in isolation. About 80-plus percent of all the breaches we investigate are connected to other breach events. But a lot of times, it's that connective tissue that you don't necessarily always have the luxury of seeing, unless you're an ISP. We've done a lot there to reduce the false positives, to say, look, as we collect more data, you inherently have the potential for there to be more false positives. How do you reduce that kind of firehose you mentioned? And so... 

Dave Bittner: Yeah. 

Chris Novak: ...One of the things that we did was - Verizon is a massive managed security services company. People may not necessarily know, but there are millions of devices around the world that we manage for customers, as well as millions of devices within Verizon that obviously our corporate security team manages. And so what we put together was this concept of active false positive reduction, where we said, when our investigative team goes out and finds something, let's take that data that we would normally ingest into our feed, curate and send out to customers of our intelligence feed, and let's bounce it against our MSS and our internal systems and see whether or not we get anything that lights up somewhere. And what can we learn from it? Because one of the challenges we found a lot of customers had with threat intelligence is there's not a feedback loop. I mean, you think about it, if you subscribe to a news feed, you know, like a threat intelligence feed, you get data, but you rarely give data back to say, hey, this was a false positive or this triggered three times or a hundred times. 

Chris Novak: In many cases, the threat intelligence feed provider has no knowledge of what you do with it after they've kind of flung the data over the fence to you. And that was another one of those there's-got-to-be-a-better-way kind of thoughts. And we said, all right, well, if we take that data and bounce it against all these millions of devices that we manage that are across - you know, essentially span the globe, so they've got a geographic footprint, span all industries, so we've got, you know, really little to no bias in the data, let's see what lights up. And if we see that there are false positives that our managed security services or internal corporate security teams flag, we can essentially tamp that down or remove that data altogether from the feed without a customer even having to be bothered by it. 

Chris Novak: Go one step further. If you have the ability to actually see demographics - so think about it, Dave. If you can see that an indicator of compromise goes off predominantly in Japan and that's the only place you see it, you might start saying, well, hey, maybe let me see if I can enrich this intelligence a little bit further and find out, you know, why when I push this intelligence out to my globally diverse set of endpoints do I only ever see it light up in Japan. Maybe that tells me something about who the threat actor is. Maybe it tells me something about their motives. It may not come right out and spell it out for me, but it gives me a thread that I can pull at to say, let's enrich this a little further. Let's answer the question of why. 

Dave Bittner: Is there a danger that, you know, something that is a false positive to me may be a very interesting bit of information for you? 

Chris Novak: Ah, excellent point. So there is always a chance of that. But one of the things that we look at there is, typically, a false positive is not going to be removed or indicated as a false positive just based on any one flag. So if one entity says, hey, this is a false positive or one endpoint we manage, it kind of looks like a false positive, what we're looking at is data on an aggregated level. So we have millions of endpoints from which we're collecting data. If one or two flag it as a false positive but hundreds, thousands or tens of thousands say this is actually good, it may actually be that whoever flagged it as a false positive is wrong or doesn't have as big of a picture view as maybe some of those other endpoints do. 

Chris Novak: And so that's where that kind of - there's a machine learning element to the back end to kind of collect and aggregate the feedback that we get and essentially score it. So if we say, hey, this response is typically always very well received and this response is often wrong, we can actually take that into the bigger picture view as we determine whether or not, do we mark it as a false positive or not, or do we just reduce the confidence in it as opposed to flat out marking it as a false positive. 
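[Editor's note: To make that aggregation concrete, here is a minimal sketch of the kind of weighted feedback scoring Novak describes, where each reporter's verdict is weighted by its historical reliability and an indicator is only demoted or removed when the weighted consensus says so. It is purely illustrative: the Verdict structure, the reporter-accuracy weights and the thresholds are assumptions made for the example, not Verizon's actual pipeline.]

```python
# Illustrative sketch only: toy weighted false-positive scoring for a threat
# intelligence feed. Names and thresholds are hypothetical assumptions.
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Verdict:
    indicator: str            # e.g. an IP, domain, or file hash
    reporter: str             # which managed endpoint or analyst flagged it
    false_positive: bool      # this reporter's verdict on the indicator
    reporter_accuracy: float  # historical reliability of this reporter, 0..1


def score_indicator(verdicts: list[Verdict]) -> tuple[float, str]:
    """Combine weighted feedback into a confidence score and an action."""
    fp_weight = sum(v.reporter_accuracy for v in verdicts if v.false_positive)
    tp_weight = sum(v.reporter_accuracy for v in verdicts if not v.false_positive)
    total = fp_weight + tp_weight
    if total == 0:
        return 1.0, "keep"          # no feedback yet: leave confidence alone
    confidence = tp_weight / total  # weighted share of reporters saying "real"
    if confidence < 0.2:
        return confidence, "remove_from_feed"
    if confidence < 0.6:
        return confidence, "reduce_confidence"
    return confidence, "keep"


# Toy usage: two low-reliability endpoints call the hit a false positive,
# while many reliable ones confirm it, so the indicator stays in the feed.
feedback = [
    Verdict("203.0.113.7", "endpoint-a", True, 0.4),
    Verdict("203.0.113.7", "endpoint-b", True, 0.3),
] + [Verdict("203.0.113.7", f"endpoint-{i}", False, 0.9) for i in range(20)]

by_indicator = defaultdict(list)
for v in feedback:
    by_indicator[v.indicator].append(v)

for indicator, verdicts in by_indicator.items():
    conf, action = score_indicator(verdicts)
    print(f"{indicator}: confidence={conf:.2f}, action={action}")
```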

Dave Bittner: All right. Well, fascinating stuff for sure. Chris Novak, thanks so much for joining us. 

Chris Novak: Thanks, Dave. 

Dave Bittner: Thanks to all of our sponsors for making the CyberWire possible. 

Dave Bittner: And that's the CyberWire. For links to all of today's stories, check out our Daily Briefing at thecyberwire.com. 

Dave Bittner: The CyberWire podcast is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. Our amazing CyberWire team is Elliott Peltzman, Tre Hester, Puru Prakash, Justin Sabie, Tim Nodar, Joe Carrigan, Carole Theriault, Ben Yelin, Nick Veliky, Gina Johnson, Bennett Moe, Chris Russell, John Petrik, Jennifer Eiben, Rick Howard, Peter Kilpe, and I'm Dave Bittner. Thanks for listening. We'll see you back here tomorrow.