The CyberWire Daily Podcast 3.25.21
Ep 1297 | 3.25.21

Mamba ransomware’s evolution. Facebook acts against Evil Eye. Huawei is invited into OIC-CERT. Slack Connect gets poor security and privacy reviews. An excursus on fleeceware.

Transcript

Dave Bittner: The FBI warns organizations that Mamba ransomware is out and about in a newly evolved form. Facebook takes down a Chinese cyber-espionage operation targeting Uyghurs. Huawei joins the Organization of Islamic Cooperation's Computer Emergency Response Team. Slack thinks it might have made a security and privacy misstep. Caleb Barlow from CynergisTek on health care interoperability. Our guest is Roei Amit from Deep Instinct on their 2020 Cyber Threat Landscape Report. And a look at fleeceware. 

Dave Bittner: From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Thursday, March 25, 2021. 

Dave Bittner: On Tuesday, the US FBI circulated a flash alert about Mamba ransomware to industry. Mamba now uses a weaponized version of DiskCryptor against its targets. DiskCryptor is an open-source, full disk encryption tool. As the FBI points out, the software isn't inherently malicious, but Mamba's operators have weaponized it. After Mamba has done its work and rendered the victim's files inaccessible, it displays a ransom note that includes the actor's email address, ransomware filename, the host system name and a place to enter the decryption key. Victims are instructed to email the extortionist and arrange payment of the ransom. A decryption key is promised in exchange for payment. 

Dave Bittner: The bureau recommends adopting 15 specific and familiar hygienic practices to avoid a Mamba infestation. One of the recommendations is peculiar to defense against this latest version of Mamba. Quote, "If DiskCryptor is not used by an organization, add the key artifact files used by DiskCryptor to the organization's execution blacklist. Any attempts to install or run this encryption program and its associated files should be prevented," end quote. 

Dave Bittner: And of course, the bureau discourages anyone from paying the ransom. Quote, "Payment does not guarantee files will be recovered. It may also embolden adversaries to target additional organizations, encourage other criminal actors to engage in the distribution of ransomware and/or fund illicit activities," end quote. So you may not get your files back even if you pay, but one thing is for sure - you'll help fuel the bandit economy of the cyber underworld. 

Dave Bittner: Facebook announced yesterday that it had taken down a Chinese cyber-espionage operation directed principally against Uyghur activists, journalists and dissidents living abroad in Turkey, Kazakhstan, the US, Syria, Australia, Canada and other countries. Facebook's tweet announcing the takedown cited earlier work on the threat actor by Volexity, which calls the group Evil Eye, as well as Project Zero and Trend Micro. Facebook said that much of the surveillance activity was conducted off platform, with spyware delivered via bogus news articles posing as reports from outlets covering news of interest to the Uyghur diaspora. Those links are now blocked on Facebook. The Washington Post notes that the takedown shows that Facebook's intelligence operations are now looking beyond Facebook itself. 

Dave Bittner: Huawei has joined the Organization of Islamic Cooperation's Computer Emergency Response Team, OIC-CERT, the first tech company to do so. Malaysia and the UAE sponsored Huawei's membership, Gulf News reports. 

Dave Bittner: OIC-CERT is the third-largest organization of its kind. The Organization of Islamic Cooperation has 57 member countries. Huawei sees its invitation to OIC-CERT as a testimony to its cybersecurity chops. Gulf News sees that invitation as, quote, "a rebuff to recent U.S. efforts to stop countries from signing up Huawei for their 5G networks," end quote. 

Dave Bittner: The four most dismaying words in IT may be, why don't we just... - as in, why don't we just open up our platform so users can DM anyone? Slack, the widely used business chat application, yesterday introduced a feature, Slack Connect, that allowed messages to be exchanged with people outside the user's organization. Early notices haven't been positive, with users seeing the feature as a privacy and security bug. According to Vice's Motherboard, Slack, acknowledging the decision was a mistake, is now backtracking and limiting the new feature's scope. 

Dave Bittner: Quote, "After rolling out Slack Connect DMs this morning, we received valuable feedback from our users about how email invitations to use the feature could potentially be used to send abusive or harassing messages. We are taking immediate steps to prevent this kind of abuse, beginning today with the removal of the ability to customize a message when a user invites someone to Slack Connect DMs. We made a mistake in this initial rollout that is inconsistent with our goals for the product and the typical experience of Slack Connect usage," end quote. Many organizations aren't waiting for the walkback and are limiting the feature themselves, the Record reports. You may ask, don't people in organizations get lots of email that they don't want? Sure, but as the help desk types would say, that's a known issue, and organizations have a lot more control over their email environments than they do over Slack Connect, whose granularity apparently doesn't get much finer than on or off. 

Dave Bittner: And finally, greetings, fellow youths. And remember fleeceware? Well, don't worry - fleeceware remembers you. It has to remember you, at least well enough to know when that free trial ages into a premium subscription. Security firm Avast yesterday blogged about what they found when they went looking for fleeceware on planets Apple and Android - 250 apps with north of a billion downloads and an estimated dodgy revenue in excess of $400 million, which is a lot of fleece. Fleeceware, remember, is an app that starts off with a free trial and then, at the end of the trial period, quietly enrolls the inattentive user into a subscription with whopping, big fees that users wouldn't have signed up for if they'd been in their right, which is to say skeptically vigilant, mind. As Avast puts it, quote, "The application takes advantage of users who are not familiar with how subscriptions work on mobile devices, meaning that users can be charged even after they've deleted the offending application," end quote. The free trial period is usually just three days long. The apps usually have some not particularly distinctive functionality which they actually deliver, more or less forgettably, but their principal purpose is to fleece the unwary. The most common nominal benefits on offer include musical instrument apps, palm readers, image editors, camera filters, fortune tellers, QR code and PDF readers, and slime simulators. Who falls for this stuff? Kids, mostly, as the popularity of slime simulators might suggest. The youths see free trial, figure they're good to go, and so they are for three days. After that, it's Katy bar the door, until Mom or Pop notice these weird subscription charges on their statements. By that time, the grifters have, as the kids like to say, already made some bank. 

Dave Bittner: The researchers at security firm Deep Instinct recently published their 2020 Cyber Threat Landscape Report. And among the findings was the discovery of adversarial machine learning being used in the wild. Roei Amit is a threat intelligence researcher at Deep Instinct. 

Roei Amit: So every year around December, we realized we have a lot of data that we collected over the year, and we think that we have some insights that we think could contribute to the community and to our purposes. So we gather up all of our, you know, data we collected in our cloud and other resources, and we create this research paper, which is not too long but not too short. And we just - yeah, you know, it's right about exactly the amount you want to read without getting too caught up in all the technical details but still, you know, feel like you had a good and interesting read and not just, you know, get the highlights. 

Dave Bittner: Well, I mean, would you say it's fair to say 2020 was a year like no other, given the pandemic? 

Roei Amit: I totally agree. We've seen a lot of interesting things happen, especially because of COVID-19. For example, almost all of the phishing campaigns, in some way or another, included the COVID-19. These were, like, the most - this was, like, the most talked hot topic in the phishing campaigns themselves - the documents or the fake links that were used. It was also - it's interesting to note that the second-most common subjects were the U.S. elections and the Black Lives Matter movement. 

Dave Bittner: That is interesting. 

Roei Amit: Yes, I agree. It's very interesting to see what attackers think that might - will be interesting for their targets. 

Dave Bittner: Right. One of the things that you draw attention to here is some advanced adversarial machine learning in people's defense posture. Can you take us through your thoughts there? 

Roei Amit: Yeah, sure. So we saw in the past theoretical work and proof of concepts in which adversarial machine learning attacks are aimed at, you know, security products that utilize machine learning and deep learning in order to evade their detection. What they basically do is trying to take advantage of design weaknesses and flaws that are inherently in the way that machine-learning-based or deep-learning-based cybersecurity models work in order to evade their detection. And we saw it in the past in proof of concepts and in theory. But actually, in 2020, we found a sample in the wild that utilizes adversarial machine learning techniques in order to bypass these products. And I'm not saying we can expect every malware to be able to do so, but it is something new and something that should be looked at by anyone in the industry. It's very interesting to see what happens. And, of course, there are ways to defend. And these products can defend from these techniques. But it is very interesting to see that hackers and attackers also evolve, and malware developers are continuing to try to find ways to bypass these products. 

Dave Bittner: That's Roei Amit from Deep Instinct. 

Dave Bittner: And joining me once again is Caleb Barlow. He is the CEO at CynergisTek. Caleb, it's always great to have you back. Today, we are talking about health care interoperability and the effect that may have for security professionals. What sort of things do you want to share with us today? 

Caleb Barlow: Well, hello, Dave. 

Dave Bittner: (Laughter) Hello. 

Caleb Barlow: What if I told you and kind of everyone listening that you needed to open up your corporate databases so that any of your clients or, let's say, even independent research groups could access your most sensitive records? And oh, by the way, this was going to be required by regulation. I think most people listening to this podcast would probably have an issue with that. 

Dave Bittner: Yeah, I think you'd get a fair amount of pushback on that. I think it's fair to say. 

Caleb Barlow: But that's what's happening in health care. So there's something called the 21st Century Cures Act. It was signed into law in December of 2016. And this was the start of what we call the information blocking and interoperability rule. So, Dave, imagine if you're a health care provider. The rule prohibits information blocking, which is defined as something, except as required by law or covered by an exception, is likely to interfere with the access, exchange or use of electronic health information. And the rule goes on to define kind of eight exceptions that would not, you know, construe information blocking. So what does this mean? Well, this means that if a patient has an app, let's say a fitness app or, you know, something else that - you know, maybe they're working on weight loss or - you know, it can be any number of applications that want to access your health care record. You need to allow it, unless one of these eight exceptions is met. But what it also means is that an independent researcher, let's say someone doing cancer research, and they want to access patient information on cancer patients - you also need to allow that, as well. 

Dave Bittner: OK, what about patient privacy? 

Caleb Barlow: Well, remember, you know, the - when we look at patient privacy, the way this comes into play under HIPAA is by requiring that entities that access a health care record have certain security provisions and controls in place. It doesn't restrict them from actually accessing it in the first place. It just says they have to take due care when accessing it. So in a lot of ways, this opens it up. And remember, you know, HIPAA is all about portability. Now, you know, the P in HIPAA is portability as it refers to insurance. But, you know, if you really kind of dig through all these murky regulations, it's also about opening up health care information for research, for other applications and other things you may want to do. So on one hand, as a consumer, this is great because I can go in, I can see what's in my health care record, I can link it to my, you know, fitness application or whatever else it is I want to track. But from a security professional's standpoint, this is a nightmare because I have no idea what the security is of these apps. I may not even know, like, who's behind these apps in terms of ownership or these research projects, but I have to allow them in. And the - you know, we're actually moving into a mode here where the actual enforcement of this rule is going to start to come into play over the next couple of months. 

Dave Bittner: Is there no vetting? I mean, can anybody just hang up a shingle and call themselves a researcher and have at your medical records? 

Caleb Barlow: Well, it's not quite that simple. And as you can imagine, like all regulations, this is very murky. And no one yet really totally understands exactly how this is going to be enforced. But, you know, suffice it to say, you could, as a health care provider, restrict someone from accessing this information if one of these eight conditions were met. 

Caleb Barlow: And one of them, of course, is security. So, you know, you've got to look at the type of risk, the type of harm. So risk of harm is one of the issues. Privacy is one of the issues. Security is one of the issues or if it's simply infeasible - you know, not just I don't want to do it, but truly infeasible to access this data. You know, otherwise you've pretty much got to allow this, and you're going to have to demonstrate why you think that individual or entity asking for access can't secure it or has an associated privacy risk with it. 

Dave Bittner: Are people out there already trying to take advantage of this? Are they knocking on health care providers' doors and saying, let us have at it? 

Caleb Barlow: Well, I think we have to divide this, Dave, into kind of two swim lanes. The first swim lane, which is can a consumer take advantage of this, the answer to that is absolutely yes. I mean, you know, I know my health care provider, there's an online portal. I go in, and, you know, I - there's a laundry list of applications that they've already worked through the API that I can connect to my mobile phone or to, you know, other entities that may not even be mobile. And that's fantastic. I mean, it was - honest, it was kind of interesting just to see what notes my doctor wrote about me and my health care record and what was in there. 

Dave Bittner: (Laughter) Right, right. 

Caleb Barlow: And all kinds of interesting - like, you know, you can see, legit, your data. 

Dave Bittner: Yeah. 

Caleb Barlow: But on - so that part is fantastic. But the other side of this and the part that health care CISOs are just starting to understand - one of the things that's really worth spending a lot of time in my business on is looking at these APIs that may be coming in and saying, what's the API asking for? Do we know if the API's secure? How do we pen test that API? But I think we're just at the beginning of security professionals being concerned. 
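For readers following along, the kind of API scrutiny Caleb describes can be sketched in code. This is a hypothetical review helper assuming SMART on FHIR-style OAuth scopes of the form context/Resource.permission (for example, patient/Observation.read); the flagging policy here is illustrative, not any standard or vendor rule.

```python
# Triage SMART on FHIR-style scopes ("context/Resource.permission").
# Illustrative policy: flag wildcard resources or permissions, user-level
# (rather than patient-level) access, and anything malformed.

def flag_broad_scopes(requested: list[str]) -> list[str]:
    """Return the requested scopes that warrant manual security review."""
    flagged = []
    for scope in requested:
        context, _, rest = scope.partition("/")
        resource, _, permission = rest.partition(".")
        if not (context and resource and permission):
            flagged.append(scope)   # malformed scope: review it
        elif "*" in (resource, permission) or context == "user":
            flagged.append(scope)   # wildcard or user-level access
    return flagged
```

For example, flag_broad_scopes(["patient/Observation.read", "user/*.*"]) would pass the first scope and flag the second, giving a security team a starting worklist rather than a verdict.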

Caleb Barlow: I don't think we've yet seen instances of abuse, but there are certainly all kinds of doomsday scenarios that people are starting to think about, particularly on this research provision, right? I mean, how do we actually understand who's doing the research, where they come from... 

Dave Bittner: Right. 

Caleb Barlow: ...Who owns that research, what happens to it downstream? And that's all the stuff that's going to get worked out over the coming weeks and months as we all figure out how interoperability rolls out. 

Dave Bittner: All right. Well, something to certainly keep an eye on. Caleb Barlow, thanks for joining us. 

Dave Bittner: Thanks to all of our sponsors for making the CyberWire possible, especially our supporting sponsor, ExtraHop - leaders in cloud-native network detection and response. Learn more at extrahop.com/cyber. 

Dave Bittner: And that's the CyberWire. For links to all of today's stories, check out our Daily Briefing at thecyberwire.com. And for professionals and cybersecurity leaders who want to stay abreast of this rapidly evolving field, sign up for CyberWire Pro. It'll save you time and keep you informed. The nutty taste people like. Listen for us on your Alexa smart speaker, too. 

Dave Bittner: The CyberWire podcast is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. Our amazing CyberWire team is Elliott Peltzman, Puru Prakash, Kelsea Bond, Tim Nodar, Joe Carrigan, Carole Theriault, Ben Yelin, Nick Veliky, Gina Johnson, Bennett Moe, Chris Russell, John Petrik, Jennifer Eiben, Rick Howard, Peter Kilpe, and I'm Dave Bittner. Thanks for listening. We'll see you back here tomorrow.