The CyberWire Daily Podcast 7.14.22
Ep 1619 | 7.14.22

Ukraine evaluates Russia’s cyber ops. Smartphones go to war. Lilith ransomware. ChromeLoader evolves. Rolling-PWN looks real after all. Schulte guilty in Vault 7 case.

Transcript

Dave Bittner: An overview of the cyber phase of Russia's hybrid war. Smartphones as sources of targeting information. Lilith enters the ransomware game. ChromeLoader makes a fresh appearance. Honda acknowledges that Rolling-PWN is real. Part two of Carole Theriault’s conversation with Jen Caltrider from Mozilla's Privacy Not Included initiative. Our guest is Josh Yavor of Tessian to discuss accidental data loss over email. And a guilty verdict in the Vault 7 case.

Dave Bittner: From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Thursday, July 14, 2022.

An overview of the cyber phase of Russia's hybrid war.

Dave Bittner: The State Service for Special Communications and Information Protection of Ukraine has issued a report on the current state of the cyber phases of Russia's war as they unfolded during the second quarter of the year. It sees Russia as concentrating on, first, espionage, second, network disruption, third, data wiping and, fourth, disinformation. Of these four, network disruption and disinformation have come to represent a relatively greater fraction of Russian cyber operations. The report says, compared to the first quarter of 2022, the number of critical IS events originating from Russian IP addresses decreased by 8.5 times. This is primarily due to the fact that providers of electronic communications networks and/or services that provide access to the internet blocked IP addresses used by the Russian Federation. The SSSCIP, like many observers, considers the nominal hacktivists who have been recently active as simple front groups for the Russian intelligence services. 

Smartphones as sources of targeting information.

Dave Bittner: Traditionally, targeting cells distinguish between a target, which is something sufficiently identified and located to be hit with an attack, and a target indicator, which is information that provides a lead that may, with further investigation, be developed into a target. Sometimes the process is quick, as it might be with data from weapons-locating radar, and other times protracted, as it might be with spot reports from the field. So if Private So-and-so says he thinks he heard something on the other side of the ridge, that's a target indicator. Someone needs to go over and look. If the radar says that enemy shell came from grid such-and-such, that's a target. But now the commodification and ubiquity of smartphones on the battlefield and elsewhere have not only given armies a new operational security challenge, but they've also provided targeting cells with a wealth of battlefield information that can yield targets with hitherto unprecedented immediacy. 

Dave Bittner: An analysis by Mike Fong, CEO of Privoro, published in Help Net Security explains the risks and opportunities the smartphone presents on the battlefield. Mr. Fong says, of all the signals given off by smartphones in the normal course of operation, location data is perhaps the most valuable during battle. Unlike captured conversations or call metadata, location data is actionable immediately. Location data can reveal troop movements, supply routes, and even daily routines. A cluster of troops in a given location may signal an important location. Aggregated location data can also reveal associations between different groups. The obvious risk to soldiers is that their location data can be used by the enemy to direct targeted attacks against them. Notably, it has been reported that a Russian general and his staff were killed in an airstrike in the early weeks of the invasion after his phone call was intercepted and geolocated by the Ukrainians. So monitoring phones can yield not only intelligence but targets. Fong's conclusion is that how an army handles smartphones and the data they throw off can be a difference-maker on the battlefield. He says smartphones are so ubiquitous that their presence on the battlefield is inevitable, even when they've been prohibited or otherwise discouraged from use due to lethal consequences. But each location ping gives the enemy another signal that may ultimately culminate in a targeted missile strike or an improved defensive posture. The side that can best fight this information battle very likely has the upper hand in winning the war. 
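To make the idea of aggregated location pings concrete, here is a minimal, hypothetical sketch, not drawn from the episode or from Privoro's analysis, of how intercepted pings might be binned into coarse grid cells so that clusters stand out as target indicators worth further investigation. The coordinates, cell size, and cluster threshold are all illustrative assumptions.

```python
# Hypothetical illustration only: group intercepted smartphone location pings
# into coarse grid cells; cells with several pings become target indicators.
from collections import Counter

# Each ping is (latitude, longitude) in decimal degrees; the values are made up.
pings = [
    (49.9935, 36.2304), (49.9937, 36.2310),   # pings close together...
    (49.9940, 36.2315), (49.9932, 36.2307),   # ...over a short period
    (50.4501, 30.5234),                       # one isolated ping elsewhere
]

CELL = 0.005  # roughly 500 m of latitude per cell; deliberately coarse

def cell_of(lat: float, lon: float) -> tuple[int, int]:
    """Map a ping to the coarse grid cell it falls in (positive coordinates assumed)."""
    return (int(lat / CELL), int(lon / CELL))

counts = Counter(cell_of(lat, lon) for lat, lon in pings)

# Cells with several pings are clusters: target indicators, not yet targets.
for cell, n in counts.most_common():
    label = "cluster - worth further investigation" if n >= 3 else "single ping"
    print(f"grid cell {cell}: {n} ping(s) -> {label}")
```

The arithmetic is beside the point; the sketch simply shows why each additional ping either starts a new indicator or strengthens an existing cluster that a targeting cell can then investigate.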

Lilith enters the ransomware game.

Dave Bittner: Researchers at Cyble describe a new ransomware operation, Lilith. And BleepingComputer reports that the group not only operates a new strain of malware, but that it's already posted the first victim to its double-extortion dump site. Cyble notes, throughout 2021 and 2022, we have observed record levels of ransomware activity. While notable examples of this are rebrands of existing groups, newer groups like Lilith, RedAlert and 0mega are also proving to be potent threats. As far as the origins of the name Lilith are concerned, she's either a demon from Mesopotamian mythology or the spouse of Dr. Crane on "Cheers." Which source you like depends upon where you take your cultural references. 

ChromeLoader makes a fresh appearance.

Dave Bittner: Palo Alto Networks' Unit 42 describes new variants of the ChromeLoader malware now making their appearance in the wild. The researchers write, this malware is used for hijacking victims' browser searches and presenting advertisements, two actions that do not cause serious damage or leak highly sensitive data. However, based on the wide distribution the attackers gained in such a short time, they were able to inflict heavier damage than the damage inflicted by the two primary functions of the Chrome extension. The extension serves as adware and as an information stealer, pulling in the victim's browser searches. 

Dave Bittner: The gang using ChromeLoader seems to have clear ideas about what it's up to. Unit 42 writes, additionally, the authors were quite organized, labeling their different malware versions and using similar techniques throughout their attack routines. But ease of criminal use has its downsides too, at least from the criminals' point of view. Unit 42 writes, this probably made their lives easier while developing their attack framework and maintaining their attack chains. But unintentionally, this also made the investigation process significantly easier. In fact, it improved the research ability so much that we were able to detect two new versions of this malware, the first one and the latest, which have never been linked to this malware family before. 

Honda acknowledges that Rolling-PWN is real (but says it's not as serious as some think).

Dave Bittner: SecurityWeek reports that Honda has acknowledged that the Rolling-PWN proof-of-concept does indeed work against the carmaker's remote keyless system. It is, in fact, possible for someone to unlock the car and even start it. But, Honda says, they couldn't just drive the car away, since that requires the key fob to be present. The Record quotes Honda's statement, "however, while it is technically possible, we want to reassure our customers that this particular kind of attack, which requires continuous, close-proximity signal capture of multiple sequential RF transmissions, cannot be used to drive the vehicle away." The 2022 and 2023 models are said to be protected against Rolling-PWN, and Honda is making other security upgrades to mitigate the vulnerability. 
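For readers who want a sense of the mechanism, here is a deliberately simplified, hypothetical sketch of the class of flaw involved. It is not Honda's code and it leaves out the cryptography entirely; it only shows how a receiver that resynchronizes its counter on any consecutive run of codes would let an attacker who captured a few sequential transmissions replay them later, which is the style of weakness the Rolling-PWN researchers describe.

```python
# Toy model of a rolling-code receiver. NOT Honda's implementation; purely an
# illustration of a flawed resynchronization path: trusting any consecutive run
# of codes lets previously captured codes roll the counter back and work again.
class ToyRollingCodeReceiver:
    WINDOW = 16  # how far ahead of the counter a code may be and still be accepted

    def __init__(self) -> None:
        self.counter = 0       # highest counter value accepted so far
        self.last_seen = None  # last out-of-window code, kept for "resync"

    def accept(self, code: int) -> bool:
        # Normal case: the code is ahead of the counter, within the window.
        if self.counter < code <= self.counter + self.WINDOW:
            self.counter = code
            self.last_seen = None
            return True
        # Flawed resync: two consecutive codes are trusted even when they are
        # *older* than the counter, which rolls the counter backward.
        if self.last_seen is not None and code == self.last_seen + 1:
            self.counter = code
            self.last_seen = None
            return True
        self.last_seen = code
        return False

rx = ToyRollingCodeReceiver()
for c in [5, 6, 7, 8]:        # an eavesdropper records these legitimate unlocks
    rx.accept(c)
rx.accept(20)                 # later legitimate fob use; counter is now 20

print(rx.accept(5))           # False: stale code, but it primes the resync path
print(rx.accept(6))           # True: the consecutive pair rolled the counter back
print(rx.accept(7))           # True: the captured sequence now replays cleanly
```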

A guilty verdict in the Vault 7 case.

Dave Bittner: And finally, the second trial of Joshua Schulte has ended in a guilty verdict. Mr. Schulte, a former CIA employee, was arrested after WikiLeaks' 2017 disclosure of the Vault 7 classified documents that outlined Langley's methods of penetrating networks operated by its intelligence targets. The New York Times reports that his first trial had resulted in convictions for contempt of court and lying to federal investigators, but that the jury had been unable to reach a verdict on the more serious charges of which he's now been convicted. The U.S. attorney for the Southern District of New York who prosecuted Mr. Schulte offered a brief statement on the outcome of the trial and some thoughts on Mr. Schulte's motivation, which in the U.S. attorney's view comes down to resentment, the E in the familiar counterintelligence acronym MICE, which summarizes the motives that bring people to espionage: money, ideology, compromise and ego. 

Dave Bittner: The statement is short enough to quote in full. Joshua Adams Schulte was a CIA programmer with access to some of the country's most valuable intelligence-gathering cybertools used to battle terrorist organizations and other malign influences around the globe. When Schulte began to harbor resentment toward the CIA, he covertly collected those tools and provided them to WikiLeaks, making some of our most critical intelligence tools known to the public and, therefore, our adversaries. Moreover, Schulte was aware that the collateral damage of his retribution could pose an extraordinary threat to this nation if made public, rendering them essentially useless, having a devastating effect on our intelligence community by providing critical intelligence to those who wish to do us harm. Today Schulte has been convicted for one of the most brazen and damaging acts of espionage in American history. 

Dave Bittner: Security firm Tessian recently released survey data tracking accidental data loss over email. Josh Yavor is chief information security officer at Tessian. 

Josh Yavor: We see that around 3 in 5 organizations have experienced accidental data loss via email in the last year. You know, I just mentioned email. The reason I mention that is because the majority of our - the reported data loss events, they have a human cause at the - as a root cause. And in addition to humans being involved, a significant number of those events are not actually malicious or intentional. They're accidental. And so I think that's a key takeaway, is that, you know, today, in 2022, this is still a very large problem and a largely unsolved problem in terms of sufficiently addressing the reality of human behavior and being able to reduce the frequency of accidental data loss. 

Dave Bittner: Now, one of the things that caught my eye in the report was that employee negligence was a leading cause of data loss incidents and folks just generally not following policies that are in place. 

Josh Yavor: Yeah - or not knowing about them. And so I think that's another layer to consider as we think about negligence, right? Negligence isn't always intentional, right? Somebody who is negligent because they didn't know there was a policy or didn't understand the policy because it was poorly communicated or did not fit clearly the use case that they were - they, as the employee, were trying to achieve in terms of, you know, data sharing with a customer or a partner or something like that - I think that's a really great callout because as we think about what negligence actually means, we have to account for the fact that, most often, I believe - and I think we're getting more and more data that backs this up - negligence is not necessarily a reflection of employees making intentionally incorrect decisions. More often than not, they're trying to do the right thing, but they're either unaware of the requirements or the requirements don't actually clearly fit, you know, their purpose. And that's the responsibility of security teams to address. 

Dave Bittner: Yeah. And another thing that caught my eye was that it's not necessarily equal among the various teams within an organization, that some parts of an organization - I suppose largely because of the type of work that they're expected to do - they may be more at risk. 

Josh Yavor: That's right. And so I like to think about them as, you know, external-facing teams in particular. So if you look at, like, marketing, public relations and so on, they tend to have the - have higher rates of occurrences for accidental data loss in particular. But if you expand on that thinking and think about, like, who else is included, that also includes your recruiting team. It includes, if you're in the - in a product business, and you have, like, customer success or customer support roles, those roles as well. And if you think about it, these are roles that fundamentally require sharing of files, clicking of links, opening of files bidirectionally. And these are generally just, overall, some of the riskiest roles in need of support. And we see that manifest here in data loss as well because I think, like, if I step back and look at how we support these roles, our traditional tooling and approaches don't sufficiently empower people in those roles to avoid making these mistakes. 

Dave Bittner: Well, if we come at it from a different direction and ask the question - you know, for the organizations that are doing it right, who are finding the most success here, are there any common threads in the things they're doing? 

Josh Yavor: Yes. And this admittedly goes a little bit beyond the foundational data that we have in this report, and it's an extrapolation that I would bring to the table. 

Dave Bittner: Yeah. 

Josh Yavor: The theme that I would highlight is that organizations who are conscious of what we just talked about - right? - the reality that employee negligence is usually not intentional and it's reflective of a need to better empower and educate users in a way that makes sense to them, that - you know, organizations that have that realization and make that investment tend to do better. The other thing that we see is that organizations that specifically select tooling and processes that engage the employees rather than subject them to, you know, like, blocking events and things that really disrupt the business and really just approaches that engage employees and let them make empowered and informed choices securely, they tend to do better as well. So basically, meeting them in that moment where they're sending an email, sharing a file or so on and providing them with that coaching and clear education - that leads to much better outcomes generally. 

Dave Bittner: So there really is - in addition to all the technology that we throw at this problem, seems to me there's a corporate culture element here as well. 

Josh Yavor: Absolutely. If you're in a corporate culture that's punitive - right? - and you make a mistake and you're subjected to a nasty email to you and your manager or you're forced to take, you know, an hour or more of remedial security awareness training, what that means is that you're less likely to actually report things that need to be reported or ask for help when you need it because you're afraid to get in trouble. And in the worst case scenarios, people who are in unhealthy corporate cultures around security and data loss will often then seek to - or at least be tempted to seek alternate data-sharing flows where it's outside of the view of the security team so that they run lower risk of, you know, making a mistake that will end up in - you know, costing them, you know, time and energy. But at the same time, that's introducing shadow IT and, you know, decreasing the overall security of the organization. And so there's definitely a balancing act between positive engagement and, like, good security culture and, again, like, having punitive and consequential culture components in place. 

Dave Bittner: What do you hope people take away from the report? What are some of the lessons here that you hope people learn? 

Josh Yavor: Yeah, I think, like, a key lesson here is that it's not enough to focus on intentional insider threat as your core area of focus for human-led data-loss risk, and that we need to recognize that accidental and unintentional data-loss events are really predominant in what we're seeing reported across the majority of organizations, and that the key takeaway should be that if our assumption is that people are generally trying to do the right thing and they just need help, our focus in terms of supporting them from tooling to internal, like, security team practices and so on needs to be focused on that emphasis. How do we empower people to make the best choices in the moment and have our entire tooling and process stack reflect that? 

Dave Bittner: That's Josh Yavor from Tessian. 

Dave Bittner: On yesterday's CyberWire "Daily Briefing," we shared the first half of Carole Theriault's conversation with Jen Caltrider from Mozilla's Privacy Not Included initiative. They wrap up their discussion today. Here's Carole. 

Carole Theriault: So today, we have part two of my interview with Jen Caltrider. She is the lead at Privacy Not Included. That's a Mozilla Foundation project. And it helps people assess the privacy levels of different devices, apps and everything technology, basically. So welcome back to the show, Jen. Now, maybe you can tell us about your latest research. What were you looking into? And what did you find? 

Jen Caltrider: We just got done reviewing mental health apps, and they were a particularly creepy space for us. They were, like, probably the creepiest things I've ever reviewed, are these mental health apps - in one part because they collect such a huge amount of personal information on you. You know, what's your mood? You know, how often are you seeing a therapist? You know, what your OCD triggers are, your eating disorder triggers or what symptoms you're having, what meds you're taking, and all this really personal data that they collect. Also, you know, with the mental health crisis that's exploded over the past year, there's hundreds of millions of dollars kind of flowing into this space. And so the companies are growing really quickly. And they are - and they're caring about making money right now more than they are about protecting privacy even though they say they care about privacy. And, you know, so in our experience, just reviewing these mental health apps over these past couple months, you know, we had some things where - you know, we read the privacy policies. And of the 31 companies that we emailed asking our questions at the email address listed in their privacy policy, after a month and a half, only three companies actually responded to us. 

Carole Theriault: Wow. Are you kidding me? 

Jen Caltrider: Yeah. They just didn't respond to our questions. And then, one of them, one of the companies we then followed up with, they weren't really happy because they - you know, we called them out for some questionable privacy practices, and they weren't really happy. And they're like, well, why didn't you reach out to us before, you know, you launched? And we're like, we did three times at the email address in your privacy policy. And their response was, oh, well, the person that monitors that left in March, and we haven't replaced them. And I'm like, well, that's not showing that you care about privacy. Another company was unhappy with us, and they wrote a post defending themselves. And in the post they said, oh, Mozilla got everything wrong because they tried to infer our business practices from our privacy policy. 

Carole Theriault: Huh. 

Jen Caltrider: And I had to laugh because I'm like, well, how else is a person supposed to infer your business practices around privacy other than your stated business practice around privacy? 

Carole Theriault: Yeah. 

Jen Caltrider: You know, it feels like to me that it's almost a game for these companies to write these privacy policies that too often are vague, vaguely worded, have wiggle room. There's, like, five privacy policies. There's a privacy policy and a privacy notice and an addendum for the EU and an addendum for California. And they don't make it easy for consumers to actually stop and understand. And one of the things that really got me when I was reading all this was - going back to that, we'll never share or sell your data without consent - well, what does consent look like - right? Like, downloading and registering to use the app I think too often might count as consent - which most people are like, no, that's not consent. I just want to use your app. You better ask me first before you give my personal data to Facebook. And no, that - you know, based on my reads of privacy policies, a lot of time consent is as simple as you downloaded the app. You registered. You've given us your consent to do this. And then, you know, you read, well, how do I withdraw consent? And a lot of times it's like, in order to withdraw consent, you must delete the app. 

Carole Theriault: Yeah. 

Jen Caltrider: You're like, uh, that's not great. Even if consumers do go in and read privacy policies - which please do. I - you know, I'm a nerd. And I know they suck, and it's hard to read them. But it's also a good exercise. But there's just so many questions that you kind of walk away with where things aren't clear - that, you know, maybe even if you read it, it sounds good at the top. But then you actually get down into the depth of it, and you're like, hold on. It said you wouldn't - I saw a privacy policy said that we'll never share your data with - for advertising purposes. And then down below, it said, here's the advertisers we share your data with. And I'm like, wait a minute, you know? So it's tricky. It's really tricky. 

Carole Theriault: Do you think the value that we're putting on so-called big data is what's causing this problem? 

Jen Caltrider: Oh, yeah, absolutely. I mean, you know, back in 2017 (inaudible) we started, you know, OK, so there was your fitness tracker tracked your steps and maybe your heart rate. And your smart speaker would listen for, you know, a couple of commands. But now, you know, we have apps that are collecting our conversations with therapists. We have apps - you know, we have fitness trackers that can tell your emotional state and whether you're drunk or not. You know, so the amount of data that can be collected now - like, well, this is the data economy. If we're going to make money, this is how it's done. And - but, you know, there's just so much personal information that's out there now, so much more that, you know, people are going to, I think, rather quickly go from - well, nothing bad's happening; I'm not feeling any repercussions from all this data sharing - to - holy crap, you know, how do they know this about me? And now what do I do? How do I take that back? And so, you know, I feel like in any kind of, you know, justice movement, there's a tipping point. And we're not quite at that tipping point with privacy, but I feel like it's getting awfully close. When your cars can know everywhere you've been and your mental health apps can know your emotional state and Facebook can - you know, is developing, you know, algorithms to know as much personal information about you as it can so that, you know, people all over the world can target you with ideologies. It's getting really scary. And, you know, now's the time to kind of make a difference, I feel like. 

Carole Theriault: Sometimes we have to end on a more sobering note but an important one. We've just been talking with Jen Caltrider. She is the lead at Privacy Not Included, a Mozilla Foundation project. 

Dave Bittner: That's Carole Theriault speaking with Jen Caltrider from Mozilla's Privacy Not Included initiative. 

Dave Bittner: And that's the CyberWire. For links to all of today's stories, check out our daily briefing at thecyberwire.com. 

Dave Bittner: The CyberWire podcast is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. Our amazing CyberWire team is Liz Irvin, Elliott Peltzman, Tre Hester, Brandon Karpf, Eliana White, Puru Prakash, Justin Sabie, Rachel Gelfand, Tim Nodar, Joe Carrigan, Carole Theriault, Ben Yelin, Nick Veliky, Gina Johnson, Bennett Moe, Chris Russell, John Petrik, Jennifer Eiben, Rick Howard, Peter Kilpe, and I'm Dave Bittner. Our in-studio executive producer today was Dexter the dog. Thanks for listening.