The CyberWire Daily Podcast 9.29.20
Ep 1183 | 9.29.20

Ransomware versus shipping, hospitals, and schools. Cyberattacks’ growing sophistication. An interim rule enables implementation of the US Defense Department’s CMMC program.

Transcript

Dave Bittner: Three - count them - three big ransomware attacks are in progress. One of them has moved into its doxxing phase. Microsoft resolves authentication problems that briefly disrupted services yesterday. Tracking trends in cyberattacks - the sophistication seems to lie in the execution. The U.S. Defense Department now has an interim rule implementing its CMMC program. Ben Yelin describes the extensive use of facial recognition software by the LAPD. Our guest is Christy Wyatt from Absolute on their endpoint resilience report. And why do hackers hack? To a large extent, it seems they do so because they can.

Dave Bittner: From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Tuesday, Sept. 29, 2020. 

Dave Bittner: There have been some major ransomware attacks that developed over the weekend and whose effects are continuing. Two of the biggest appear to have hit business systems as opposed to operational systems. And the third one has, as is now customary, made a threat to release sensitive personal information. The French container shipping giant CMA CGM SA disclosed yesterday that it was dealing with a cyberattack on two of its subsidiaries in the Asia-Pacific region. The Loadstar says the company is working through the attack. It's business as usual as far as moving cargo is concerned, and the company is in the process of remediating disruptions to its IT systems. The company's own disclosures said it closed off external access to its systems while it contained the ransomware, and it described the attack as affecting peripheral servers. Sources tell Le Monde Informatique that the attack was a Ragnar Locker ransomware infestation. In any case, copies of what appear to be the ransom note are signed by Ragnar Locker, and they follow that gang's customary pattern, offering, for example, to decrypt two files for free as evidence of good faith, or whatever the criminal equivalent of good faith might be. There's been no indication so far that CMA CGM intends to pay the ransom. 

Dave Bittner: The second big attack was on a large healthcare system that operates facilities in both the U.S. and the U.K., although the disruptions appear to be confined to operations in several U.S. states. Universal Health Services, UHS, is the victim, sustaining a cyberattack that NBC calls one of the largest of its kind. BleepingComputer reports that it's a Ryuk ransomware attack. Fierce Healthcare says that although the affected hospitals have reverted to manual backups while their IT systems are unavailable, they are nonetheless being forced to divert ambulances and reschedule surgeries. A brief disclosure UHS issued yesterday said that patients were safe and that no patient or employee data appears to have been accessed, copied or misused. 

Dave Bittner: Many outlets, Threatpost and WIRED among them, are drawing the obvious comparison between the UHS attack and the ransomware incident earlier this month in Dusseldorf that forced an ambulance diversion that cost a patient her life. There have been no such lethal consequences from the UHS incident, so far at any rate, and reversion to manual systems appears to have enabled the hospitals to continue their operations, albeit in an impeded fashion. But the disruption is widespread and, to say the least, inconvenient. The Russian mob behind Ryuk is known for big-game hunting, that is, going after large corporations and other institutions with deep pockets. They have shown themselves to be indifferent to public safety, whatever Robin Hood and compassionate pieties they may have woofed during the pandemic. 

Dave Bittner: And a third ransomware attack has turned sour after the victims refused to pay the extortionists. The Wall Street Journal reports that Clark County School District in Nevada - that's the county where Las Vegas is located - it has about 320,000 students - well, they declined to pay - and that the criminals retaliated by releasing Social Security numbers, grades and other personal information. The attack appeared to have begun on Aug. 27, when the district noticed anomalies in its IT systems. The attackers warned the district on Sept. 14 that they would begin releasing information if they weren't paid, and now they seem to be making good on their threat. 

Dave Bittner: One brief disruption yesterday seems to have been unrelated to any attack. Microsoft yesterday suffered outages to Office 365 and the Azure cloud. Redmond resolved the problem, which it characterized as an authentication issue, after a few hours, ZDNet reports. 

Dave Bittner: Microsoft's Digital Defense Report concludes that attackers have markedly increased their sophistication over the past year. The sophistication seems to lie more in improved execution of such well-known techniques as target identification, indirect approaches and credential stuffing than in the deployment of exotic technical novelties. Pick the targets. Go after the softer ones that enable you to get at the harder ones. And make effective use of well-known tactics, techniques and procedures. This can be seen in the way foreign intelligence services interested in, for example, the U.S. elections, are prospecting relatively soft targets among nongovernmental organizations and think tanks. Microsoft highlights four major trends. Last year, they blocked more than 13 billion - with a B - malicious and suspicious emails. More than a billion of those carried URLs set up for the explicit purpose of launching a credential phishing attack. The most common reason they were called in for incident response between last October and this July was, unsurprisingly, ransomware. Nation-state espionage services have been occupied with reconnaissance, credential harvesting, malware and VPN exploits. And finally, IoT threats are growing and evolving. The first half of this year saw a 35% increase in IoT attack volume over the same period of 2019. 
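Editor's note: For readers who want a concrete picture of the credential stuffing mentioned above, here is a minimal, hypothetical detection sketch in Python. The log format, field names and thresholds are all assumptions made for illustration; they are not drawn from Microsoft's report or from any particular product.

```python
# Illustrative only: a toy credential-stuffing detector over a hypothetical
# authentication log. Real detection combines many more signals (device
# fingerprints, breach corpora, geographic velocity, etc.).
from collections import defaultdict
from datetime import datetime, timedelta

# Each record: (timestamp, source_ip, username, login_succeeded)
AuthEvent = tuple[datetime, str, str, bool]

def flag_stuffing_ips(events: list[AuthEvent],
                      window: timedelta = timedelta(minutes=10),
                      min_distinct_users: int = 20) -> set[str]:
    """Flag source IPs that fail logins against many distinct accounts
    within a short window -- the classic credential-stuffing signature."""
    failures: dict[str, list[tuple[datetime, str]]] = defaultdict(list)
    for ts, ip, user, ok in events:
        if not ok:
            failures[ip].append((ts, user))

    flagged = set()
    for ip, attempts in failures.items():
        attempts.sort()  # order failures by time
        for i, (start, _) in enumerate(attempts):
            users_in_window = {u for t, u in attempts[i:] if t - start <= window}
            if len(users_in_window) >= min_distinct_users:
                flagged.add(ip)
                break
    return flagged
```

The design choice worth noting is that the sketch keys on distinct usernames per source, not raw failure counts, because stuffing campaigns replay breached credentials across many accounts rather than brute-forcing one.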

Dave Bittner: The U.S. Office of Management and Budget has approved an interim rule requiring defense contractor compliance with NIST Special Publication 800-171. The standards in SP 800-171 deal with protection of controlled unclassified information, something defense contractors handle a lot of. The interim rule implements the Defense Department's Cybersecurity Maturity Model Certification program. One of the major changes the interim rule brings is that the Department of Defense will now be able to audit contractor cybersecurity itself. Hitherto, contractors have been expected to self-certify compliance, but now external government audits will be possible. The interim rule takes effect in 60 days, and it's open for comment through the end of November. The program itself remains a work in progress, with a number of unanswered questions and a projected phase-in period of five years. 
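Editor's note: To make the self-certification step concrete, here is a small, hypothetical Python sketch that rolls up a contractor's SP 800-171 self-assessment from a JSON file. The file layout, status values and function name are invented for illustration; this is not the DoD's official assessment or scoring methodology.

```python
# Illustrative only: tallying a hypothetical SP 800-171 self-assessment.
# Expects a JSON list of entries like {"control": "3.1.1", "status": "implemented"}.
import json

def summarize_assessment(path: str) -> dict:
    """Count implemented / planned / not_implemented controls and list open gaps."""
    with open(path) as f:
        controls = json.load(f)

    counts = {"implemented": 0, "planned": 0, "not_implemented": 0}
    gaps = []
    for c in controls:
        status = c.get("status", "not_implemented")
        counts[status] = counts.get(status, 0) + 1
        if status != "implemented":
            gaps.append(c["control"])

    return {"counts": counts, "open_gaps": sorted(gaps)}

# Hypothetical usage:
# print(summarize_assessment("sp800_171_self_assessment.json"))
```

Under the interim rule, an external government audit could check the same control list independently rather than relying on a roll-up like this one.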

Dave Bittner: And finally, why do hackers hack? Well, the motives are varied, but a big explanation seems to be, a Finbold study concludes, that they do it because they can. It's the philosopher John Rawls' Aristotelian Principle, or what the old ethologists called, when studying animal behavior, Funktionslust, the pleasure any of us, whether a man, a woman, dog or cat, gets from doing the stuff we're able to do. And for the spies and the crooks, well, right - they want information and money. But they also probably do it because they can. 

Dave Bittner: Christy Wyatt is president and CEO at Absolute Software, a provider of endpoint security. They recently published a report highlighting their insights on endpoint resilience. Christy Wyatt joins us with their findings. 

Christy Wyatt: So this is the second one. We did our inaugural report about a year ago, where we - because of our position embedded in half a billion devices, we have a lot of visibility over those devices that are activated. And as we were sort of watching COVID unfold but also watching the state of security as COVID was unfolding, we like to publish that data, so customers can really use it to benchmark themselves and to sort of check their strategies and see what others might be doing that might be helpful for them. 

Dave Bittner: Well, let's go through some of the findings together. What were some of the key things that stood out to you? 

Christy Wyatt: In this past report, in the past state of the endpoint report, I think I - one of the biggest things that we've been tracking over the past year has really been the resiliency of security controls on endpoint devices. By that, we mean we measure not just how many security applications or controls you have protecting your device but how well they're working. Are they installed? Are they running? Have they gone offline? And so some of the things we noticed this year versus last year is that we've continued to see the number of security controls on these devices increase. But we've also seen the rate of decay stay constant, meaning that these controls continue to fall offline. And during this year, a year where every device is off the network and at home with your employees, it's a very bad year to not have your security running when you need it the most. 

Dave Bittner: Well, given the information that you've gathered here, what are your recommendations? How do organizations do a better job of getting on top of this? 

Christy Wyatt: We think right now, especially given what's going on around us, we're sort of in a new modernization of endpoint computing. You know, these kinds of events force you to reevaluate your architecture and say, do I have all of the pieces I need? And so today, what's most important is that you know where every single asset is 'cause it's not in the building. You know, you need to think about, what are the strategies for these security applications? A lot of the security applications that we've come to rely on have an assumption that you're connected to the corporate network either because you're in the building or because you're at home connected via VPN. 

Christy Wyatt: And then the third piece that we talk about all the time is resiliency. So how do we uniquely heal things? So there's a variety of different things you can do when something's gone wrong. You can notify the administrator. You can throw a flag. Of course, you know that the folks that are looking for these kinds of warning signs are being drowned and inundated with signals, especially since everybody went home. We know that help desks are struggling. So sending a signal or sending a red flag is not necessarily going to be helpful. So these things are going to fail. These things are going to go offline. It is natural, like any other living, breathing thing. You know, an endpoint - you can kind of view it as something that's always in a constant ever-changing state, like a living, breathing thing. So you need to have a way of fixing things and then learning after the fact. What did I just fix, and why did I need to fix it in the first place? And then you can make better decisions in IT. 
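Editor's note: The "check, then heal" pattern Christy Wyatt describes can be sketched in a few lines of Python. This is purely illustrative and is not Absolute's product; the agent names, restart commands and process checks are assumptions, and a real resilience agent would verify tampering, versioning and policy state, not just whether a process exists.

```python
# Illustrative only: a naive endpoint "check and heal" loop. It verifies that
# expected security agents are running and tries to restart any that have
# fallen over, logging what it fixed so the team can ask "why did this fail?"
import logging
import subprocess
import psutil  # third-party: pip install psutil

EXPECTED_AGENTS = {                      # hypothetical process -> restart command
    "edr_agent": ["systemctl", "restart", "edr-agent"],
    "disk_crypt": ["systemctl", "restart", "disk-crypt"],
}

def running_processes() -> set[str]:
    """Names of all processes currently running on this endpoint."""
    return {p.info["name"] for p in psutil.process_iter(["name"])}

def check_and_heal() -> None:
    alive = running_processes()
    for agent, restart_cmd in EXPECTED_AGENTS.items():
        if agent in alive:
            continue
        logging.warning("control %s is offline; attempting restart", agent)
        result = subprocess.run(restart_cmd, capture_output=True)
        if result.returncode == 0:
            logging.info("restarted %s", agent)
        else:
            logging.error("could not heal %s: %s", agent, result.stderr)

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    check_and_heal()
```

The point of the sketch is the last part of her answer: rather than only raising a flag for an overloaded help desk, the loop attempts a repair and records what it repaired, which is the data you would later mine to decide why controls keep decaying.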

Dave Bittner: That's Christy Wyatt from Absolute Software. 

Dave Bittner: And joining me once again is Ben Yelin. He's from the University of Maryland Center for Health and Homeland Security and also my co-host over on the "Caveat" podcast. Ben, always great to have you back. 

Ben Yelin: Good to be with you again, Dave. 

Dave Bittner: Interesting article - this is from The LA Times written by Kevin Rector and Richard Winton. The article is titled "Despite Past Denials, LAPD Has Used Facial Recognition Software 30,000 Times in Last Decade, Records Show." What's going on here, Ben? 

Ben Yelin: So the Los Angeles Police Department had previously denied using facial recognition software entirely - or at least they denied having records related to facial recognition. But this article, through its investigation, discovered that they had used facial recognition technology nearly 30,000 times since 2009. So I think what they were saying to press prior to this article and this study being released was technically true. They did not maintain the records. 

Dave Bittner: (Laughter). 

Ben Yelin: But they have access to a regional database maintained by the Los Angeles County Sheriff's Department. And through that database, they were able to - or Los Angeles Police Department officers were able to access facial recognition records over - nearly 30,000 times over the past 11 years. This is an extremely effective law enforcement tool, at least it is theoretically. If you have, you know, victims of crimes who are not willing to confront criminal defendants in court, oftentimes the best way to identify those criminal defendants is through something like facial recognition software. If you have a security camera that caught somebody's face and you match it up to a mug shot or a driver's license record, that is going to be very compelling evidence in a court of law. 

Ben Yelin: But, you know, there are always privacy concerns. And there are particularly transparency concerns when, you know, it takes a Los Angeles Times expose to discover the extent to which this technology is being used. 
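Editor's note: The matching step Ben describes - comparing a face from camera footage against mug shots or license photos - can be illustrated with the open-source face_recognition Python library. This is a hypothetical sketch for readers, not the system the LAPD or the county database actually uses, and the file paths and labels are invented.

```python
# Illustrative only: match a probe image (e.g., a security-camera still)
# against a small gallery of known photos using the open-source
# face_recognition library. Real forensic systems add quality checks,
# human review and audit logging; this sketch does none of that.
import face_recognition

def match_against_gallery(probe_path: str, gallery: dict[str, str],
                          tolerance: float = 0.6) -> list[str]:
    """Return gallery labels whose faces fall within `tolerance` of the probe."""
    probe_image = face_recognition.load_image_file(probe_path)
    probe_encodings = face_recognition.face_encodings(probe_image)
    if not probe_encodings:
        return []  # no face detected in the probe image
    probe = probe_encodings[0]

    hits = []
    for label, path in gallery.items():
        known = face_recognition.face_encodings(face_recognition.load_image_file(path))
        if known and face_recognition.compare_faces([known[0]], probe, tolerance)[0]:
            hits.append(label)
    return hits

# Hypothetical usage:
# match_against_gallery("camera_still.jpg", {"suspect_A": "mugshot_a.jpg"})
```

The tolerance parameter is exactly where the bias and false-match concerns discussed below come in: loosen it and you get more candidate matches, including wrong ones.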

Dave Bittner: Yeah. It seems to me like there's a big gap between no use and 30,000 times. How was the LAPD trying to thread that needle? 

Ben Yelin: So he said (laughter) - the assistant chief of police of Los Angeles, who's a guy by the name of Horace Frank, claimed as part of this article that it is no secret that the Los Angeles Police Department uses this technology. He said they're not trying to hide anything. This goes in - this is directly in contrast to recent denials from the department itself, including two in the past year, where they claim to not have access to facial recognition records. 

Ben Yelin: The discrepancy is explained in kind of the most pathetic way possible, which is they were simply mistakes. We did not mean to, you know, conceal the fact that we're using this technology. It just sort of happened. That seems to be the explanation that they came up with. My guess is, you know, there is some gray area in terms of maintaining records and accessing records. And it seems to be true that they did not maintain those records. But through this regional collaborative through the Los Angeles County sheriff's office, they were able to access these records. 

Ben Yelin: I think that distinction might be meaningful, you know, from the department's perspective. But at least from the public's perspective, it's probably rather useless. The public now knows that no matter who's actually storing these records, they've been used up to 30,000 times in the country's second-largest local police department. 

Dave Bittner: Now, California has led the way when it's come to many privacy laws. Could this article here, these revelations from The LA Times - could this be, you know, ammunition for those who are looking to bolster those arguments? 

Ben Yelin: So there are some legal protections as it relates to facial recognition technology. There was a memo written by the Los Angeles Police Department's Office of Constitutional Policing and Policy - always good to have one of those offices... 

Dave Bittner: (Laughter). 

Ben Yelin: ...That sets out facial recognition usage policies within the department. So it said that the technology shall not be utilized to establish any database or create suspect identification books. It has to be based on particular information. It can't be used as a general identification tool when there's no investigative purpose, or as the sole source of identification for a subject's identity. I think these are very helpful protections because we know that facial recognition software is not fail-proof and it introduces its own biases. And we've seen cases where people have been falsely accused based on facial recognition technology. 

Dave Bittner: Right. 

Ben Yelin: So it's good to have these protections in place. But, you know, it's always a question of enforcement. If the department was not aware enough to admit that they were obtaining these records 30,000 times over the last 11 years, it's going to create a trust issue as to whether they are complying with these department regulations. 

Dave Bittner: Yeah. I mean, I guess this notion that the LAPD can follow these rules internally but then, when it's convenient, walk across the street to their good friends at the LA Sheriff's Department and make use of their system, you know, could - you can imagine why people would call foul on that. 

Ben Yelin: Yeah. I mean, I think it was a misleading way to respond to press inquiries about this. You can see why they did it. I mean, because it's such an effective investigative tool, you know, you don't want to lose that as a tool. You also don't want to cause controversy among the public that you're trying to protect. So you can... 

Dave Bittner: Right. 

Ben Yelin: ...Certainly understand that from their perspective. But it was a little bit evasive to glom onto this distinction between storing the records and collecting the records. You know, I think that's something that they have to be held to account for. 

Dave Bittner: Yeah. All right. Interesting story from the LA Times. Again, it's titled "Despite Past Denials, LAPD Has Used Facial Recognition Software 30,000 Times in the Last Decade, Records Show." Ben Yelin, thanks for joining us. 

Ben Yelin: Thank you. 

Dave Bittner: And that's the CyberWire. For links to all of today's stories, check out our Daily Briefing at thecyberwire.com. And for professionals and cybersecurity leaders who want to stay abreast of this rapidly evolving field, sign up for CyberWire Pro. It'll save you time and keep you informed, and it lets your fingers do the walking. Listen for us on your Alexa smart speaker, too. 

Dave Bittner: The CyberWire podcast is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. Our amazing CyberWire team is Elliott Peltzman, Puru Prakash, Stefan Vaziri, Kelsea Bond, Tim Nodar, Joe Carrigan, Carole Theriault, Ben Yelin, Nick Veliky, Gina Johnson, Bennett Moe, Chris Russell, John Petrik, Jennifer Eiben, Rick Howard, Peter Kilpe, and I'm Dave Bittner. Thanks for listening. We'll see you back here tomorrow.