The CyberWire Daily Podcast 10.2.20
Ep 1186 | 10.2.20
CISA and Cyber Command describe a new RAT. Emotet spams Team Blue. Spyware campaigns described. Maritime sector hacks. And another reason not to pay the ransom.
Transcript

Dave Bittner: SlothfulMedia is the new RAT in town. Emotet spam counts on political commitments. ESET describes two distinct spyware campaigns in the Middle East and Eastern Europe. Hackers are paying more attention than usual to the maritime sector. Awais Rashid from the University of Bristol on privacy concerns of contact tracing apps. Our guest is Krystle Portocarrero from Juniper Networks on the continued rise of encryption and the technical and privacy challenges that come with it. And the U.S. Treasury Department cautions all that paying up in a ransomware attack might land you in sanctions hot water.

Dave Bittner: From the CyberWire Studios at DataTribe, I'm Dave Bittner with your CyberWire Summary for Friday, Oct. 2, 2020. 

Dave Bittner: U.S. Cyber Command yesterday warned that a new implant - SlothfulMedia, a remote access Trojan - has been detected in attacks against targets in India, Kazakhstan, Kyrgyzstan, Malaysia, Russia, and Ukraine. Details are up on Cyber Command's VirusTotal page. The U.S. Cybersecurity and Infrastructure Security Agency, which cooperated with Cyber Command in developing the alert, described SlothfulMedia as an information stealer. There's been no public attribution other than to say the attacks are the work of a sophisticated cyber actor. CISA and U.S. Cyber Command have in recent months shown an increased readiness to directly expose hacking by nation-states. 

Dave Bittner: Election-themed spam represents itself as mobilizing adherents of the U.S. Democratic Party, but it's really just infecting their devices with Emotet, Proofpoint says. The campaign, whose motivation seems criminal and not political, surged yesterday. The email's body text is simply copied from a page of the Democratic National Committee's site. The lures in the subject line are in the customary act-now style, intended to inspire a sense of urgency and override skepticism and better judgment. Team Blue Take Action is the most common subject line, but some of the related subjects are Detailed Information and Volunteer. Three others are List of Works, Information and Volunteer, which are just sad. It makes it seem like the hoods just aren't even trying. Got to get a call-to-action zinger in there, kids. The last one Proofpoint mentions is Valanters (ph) 2020, and that's V-A-L-A-N-T-E-R-S, which we think means volunteers. So add a sic, as the smug editors write, or learn to spell, kids, as we say around the shop. The baited attachments that carry the malicious payload have similar names. 

Dave Bittner: A note for international listeners who may be baffled by the idiosyncratic American political color palette - blue in American political slang denotes the Democratic Party; that is, the left or center-left. Red, in a reversal of the usage that would be common in most of the rest of the world, means the Republican Party; that is, the right or center-right. So to be on team blue means, broadly speaking, to be on the progressive side of the issues. But in the case of this spam wave, while it's fishing for Democrats, they're just targets of opportunity, and the goal is traditionally criminal. The Emotet spammers are crooks of the ordinary kind. Tomorrow it could just as easily be a call to join team red and hop on the Trump train. 

Dave Bittner: Researchers at security firm ESET have identified a cyberespionage group, XDSpy, that's been active against targets in Eastern Europe since 2011. Military, diplomatic and corporate organizations in Belarus, Moldova, Russia, Serbia and Ukraine figure in the target list. The target list is unusual, as is the variation in sophistication the group shows. Its techniques vary from highly sophisticated operations to low-grade commodity skid work. ESET hasn't been able to discern any connections to other threat actors, and whoever they are, XDSpy has been in business for some nine years. 

Dave Bittner: ESET this week also described a new strain of Android spyware cloaked as bogus versions of legitimate services, including AndroidUpdate, Threema and Telegram. ESET calls the group responsible APT-C-23. Others have called it Desert Scorpion or Two-Tailed Scorpion and linked it to Hamas. The targets currently being prospected are for the most part in the Middle East. The malware, which ESET calls Android/SpyC23, is being offered in DigitalApps, a third-party store that contains a mix of benign and malicious apps. This discovery offers information on evolving tactics and techniques. The Two-Tailed Scorpion threat actor has been on defenders' radar since the Chinese security firm Qihoo 360 outed them in March of 2017. 

Dave Bittner: The International Maritime Organization, a U.N. regulatory body concerned with the shipping industry, yesterday disclosed that it had been hit with a cyberattack that significantly disrupted its IT systems. The nature of the attack isn't yet known, and it represents an administrative and business problem as opposed to a direct threat to safety of navigation. The industry publication gCaptain offers some a priori speculation that the incident may have been a hacktivist protest of the grounding of the container ship M/V Wakashio off Mauritius and the attendant bunker oil spill. But this really is just speculation. The motive is as unknown as the malware, but many observers have taken note that this represents the third cyberattack against a maritime sector target over the past week. First, as The Wall Street Journal notes, the French container giant CMA CGM was hit with ransomware over the weekend. And on a smaller but still irritating scale, Maritime Executive reports that the British ferry service Red Funnel, which operates between Southampton and the Isle of Wight, had suffered a cyberattack that disrupted online ticket sales. If you wanted to buy a ticket, you just had to show up at the kiosk and hand your money over in person. 

Dave Bittner: And finally, if you're a ransomware victim, here's another reason to refuse to pay the extortionists. Not only are they creeps - we lapse into lawyers' technical jargon here - who shouldn't be rewarded and encouraged, but you may be placing yourself on the wrong side of the law. You could find yourself in violation of sanctions. Yesterday, the U.S. Treasury Department Office of Foreign Assets Control issued a friendly reminder that companies involved in ransomware payouts risk transgressing OFAC regulations and incurring civil penalties. The notice specifically names financial institutions, cyber insurance firms and companies involved in digital forensics and incident response. One takeaway from The Wall Street Journal's coverage - if you do pay, don't keep ransomware payments quiet. It's a bad look, and it will land you in hot water. Looping in law enforcement is encouraged, and it counts as good behavior in any assessment of penalties. So call the cops. 

Dave Bittner: My guest today is Krystle Portocarrero, product manager for advanced threats from Juniper Networks. She joins us with insights on how the increased use of encryption presents challenges for both privacy and technical reasons. 

Krystle Portocarrero: We're seeing, you know, an increase in encryption kind of across the board. And anywhere from probably 70 to 90% of most internet, kind of, outbound connections are now being encrypted via SSL. You know, and it makes a lot of sense. Most services now - you know, everybody's kind of banking online, shopping online. So most of these services are offering encryption to keep those things, you know, protected, which makes a lot of sense. And then also, you see some of the largest providers have really started, you know, a huge push for using encryption. So, you know, Google, Microsoft, Facebook have all started encrypting all their connections with SSL as well. 

Dave Bittner: And so what are some of the challenges that the increased use of encryption provides for folks who are securing enterprises? 

Krystle Portocarrero: Well, so there's a lot of challenges there - right? - because most of the security tools that are available today all require traffic to be in the clear. So any kind of deep packet inspection, if you're talking about doing things like, you know, intrusion prevention, antivirus - right? - all of these types of traffic inspection tools require the traffic to be in the clear, which, of course, as we start seeing more and more encrypted traffic, that becomes harder. So the main way of dealing with it today is to proxy all those connections. And so, you know, whether you do that on a firewall, whether you do that on a separate device, it doesn't really matter. It's still adding a lot of overhead. It's overhead not only for the device that has to proxy this connection, so now, instead of one connection, you have two connections. And then there's also the overhead of managing certificates, which anybody that's ever kind of run, you know, a PKI, it's not the most exciting thing to do, right? It's just a lot to handle doing certificate revocation, keeping track of, you know, making sure that everything is still up to date. 

Krystle Portocarrero: So there's an entire infrastructure around certificates that adds on, you know, quite a bit of overhead. But, you know, currently that's still the best way of dealing with it. But even that is - you know, outside of the overhead it adds, I think there's a lot more going on in the world now where people are starting to really question the - you know, are those always necessary and wanting, you know, kind of users expecting more privacy. You see things like GDPR in Europe and things like CCPA in California where we're looking at, well, at what points is it not acceptable to decrypt a user's traffic? And even if it's, you know, an enterprise traffic, you know, do you want to be decrypting - you know, if somebody's browsing, you know, health care or their bank at work, that's, you know, a fairly normal, typical activity. Should you be decrypting all of that? So there's some privacy issues, I think, that it brings up. And then how do you kind of deal with that? 
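
The certificate bookkeeping Krystle alludes to is mundane but easy to get wrong. Here is a minimal sketch of just the expiry-tracking piece, using Python's standard `ssl` helpers; the hostnames and notAfter dates are hypothetical stand-ins for what a real deployment would pull from live certificates via `ssl.getpeercert()`:

```python
import ssl

# Hypothetical inventory of certificates deployed behind an inspection proxy.
# In practice the notAfter strings would come from ssl.getpeercert().
cert_inventory = {
    "mail.example.com": "Nov 10 12:00:00 2020 GMT",
    "vpn.example.com": "Oct 15 09:30:00 2020 GMT",
}

def expiring_soon(inventory, now, window_days=30):
    """Return hostnames whose certificates expire within window_days of now."""
    cutoff = now + window_days * 86400
    return sorted(
        host
        for host, not_after in inventory.items()
        if ssl.cert_time_to_seconds(not_after) <= cutoff
    )

# Pretend "now" is Oct 2, 2020, the day of this episode.
now = ssl.cert_time_to_seconds("Oct 02 00:00:00 2020 GMT")
print(expiring_soon(cert_inventory, now))  # ['vpn.example.com']
```

Revocation checking and reissuance add further moving parts on top of this; the sketch only covers the "keeping track" half of what's described above.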

Dave Bittner: I see. And where do you suppose we're headed? What does the future hold for this? 

Krystle Portocarrero: Well, you know, there - I don't think that, you know, going back, right, and trying to do deep packet inspection and really, you know, kind of break the encryption on all this traffic is a great idea. I think kind of where we're headed with needing, right, users, wanting more privacy is generally a really good thing, which means that we have to find a different way of dealing with it. So there are certain technologies that are out there, like SSL inference, that are starting to just look at the details of, like, the SSL handshake or the TLS handshake to figure out what's going on, you know? You can look at connection statistics and certain ways in which, you know, connections might beacon out to get an idea if they are malicious or not without having to break the encryption. 
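
The handshake-metadata approach Krystle describes is roughly what JA3-style TLS fingerprinting does: hash a few ClientHello parameters into a stable identifier and compare it against lists of known-bad fingerprints, with no decryption required. A minimal sketch; the numeric field values below are illustrative stand-ins, not a real capture:

```python
import hashlib

def ja3_style_fingerprint(tls_version, ciphers, extensions, curves, point_formats):
    """Hash ClientHello parameters into a short, stable fingerprint (JA3-style).

    In a real deployment these values would be parsed from a captured
    ClientHello; any client using the same TLS stack configuration
    produces the same fingerprint, decrypted payload or not.
    """
    fields = [
        str(tls_version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ]
    return hashlib.md5(",".join(fields).encode()).hexdigest()

# Hypothetical handshake parameters, as they might be observed on the wire.
fp = ja3_style_fingerprint(771, [49195, 49199], [0, 10, 11], [23, 24], [0])
print(fp)  # a 32-hex-digit identifier for this handshake shape
```

Because the fingerprint depends only on metadata the client sends in the clear, defenders can flag suspicious TLS clients (for example, malware beaconing out) without proxying or decrypting the session.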

Dave Bittner: I wanted to touch on that because it seems to me like - I suppose there's an educational component here as well because I don't have the sense that a whole lot of users really have a good idea of exactly what happens in the pathway of their data, you know? What - if I am at work and I'm doing something, I'm logging into my doctor's office or something like that, it seems to me like perhaps, you know, there are assumptions that people make or - in either direction that either they're going to see everything or everything is going to be encrypted or somewhere in between. Do you suppose that that's part of it as well as kind of - I don't know - establishing kind of norms as to what we can and can't expect? 

Krystle Portocarrero: Yeah. No, that's a really good point. And especially what we're seeing today, you know, with more and more people working from home and having to, you know, constantly be connected to a VPN, the work and home space - like, that line is kind of blurring. And so you're more likely to probably browse to these kind of things that you might wait until you do, you know - until you get home. Now you're doing it either way. And so I think there is kind of a shared responsibility, you know, to inform your users of what's actually going on, what types of technologies you're using, where exceptions are being made so they can make more informed decisions as well. 

Dave Bittner: That's Krystle Portocarrero from Juniper Networks. There's an extended version of our interview available on CyberWire Pro. Check it out on our website, thecyberwire.com. 

Dave Bittner: And I'm pleased to be joined once again by professor Awais Rashid. He's a professor of cybersecurity at the University of Bristol. Awais, it's great to have you back. I wanted to touch on privacy, which to me has really come to the top of mind for a lot of folks, especially as we've been going through this pandemic and we've had to deal with nations deploying contact tracing apps. And I believe in your home country, the U.K., it sort of got off to a bit of a false start. 

Awais Rashid: Yes, there has been a lot of debate about contact tracing. And you must have heard this debate about centralized versus decentralized... 

Dave Bittner: Right. 

Awais Rashid: ...Approaches to contact tracing. And a lot of that has kind of hinged on the issues of privacy. I think the key thing to think about is that the question here isn't as to whether one approach is necessarily superior to another because developers' or organizations' ability to sort of implement them has its challenges in itself. But also both approaches have their pros and cons. The big question with regards to privacy comes from the fact that in the case of the decentralized approach, there isn't a central repository or, shall we say, a central database, which is going to hold all that information. And the concern around the centralized approach exactly comes from that - is that it's not simply from the perspective of people being concerned about the confidentiality of that data. It's as much about the transparency and accountability of how that data may be used and who will access that data and for what purposes. And that is really what the biggest debate here has been about in the first instance. 

Dave Bittner: Yeah, it's been fascinating to watch as different areas around the globe have had different approaches. And I suppose a big part of it is communications of being successful in explaining to the citizens what we're trying to do, how we're trying to do it and how much security is a part of that and even, I suppose, if there are certain sacrifices folks might have to make when it comes to privacy for the greater good. 

Awais Rashid: So I think the communication is exactly part of the issue. But I think part of the issue also is that there has been a lot of focus around the discussions over the years. Privacy has become very much an issue of confidentiality. So we talk about privacy breaches when people's personal data has been leaked. And as a result, something has happened. But privacy is actually much more than just confidentiality. And this debate about contact tracing really brings it to the fore because the question here isn't that, you know, let's take a centralized approach. People are not saying necessarily that the centralized approach is bad. The concern is how that centralized database is going to be used. And how do we actually demonstrate that it's only going to be used for contact tracing? It won't be used for any other purpose? And if it is used for any other purpose, then how do you find out that it has been used for any other purpose? 

Awais Rashid: So that is really at the heart of this, that if you start to build centralized repositories, then certainly there are concerns about, for example, you know, surveillance and the use of that information for purposes for which it wasn't collected. So I think whatever approach one takes, the issue about how the data is actually used and accessed - and how do you actually communicate those aspects of the data to the people whose data is being held? - is as important as ensuring the confidentiality of that data. 

Dave Bittner: It strikes me also that the folks who've been developing these apps and are trying to implement them - they have a bit of an uphill climb because, certainly, when it comes to things like social media, we've seen story after story of people's data being shared or released in ways that they're not necessarily comfortable with. 

Awais Rashid: Yeah, absolutely. And I think that that really - therein lies, really, the problem because with all these kind of various breaches and also when we are in the space where, you know, there are sort of news about large-scale interference with democratic processes based on data from social media and so on and so forth, it generally erodes trust in this kind of infrastructure. And one of the key challenges that is faced with any such approach is that that cost-benefit analysis is not particularly clear to someone actually contributing that data. 

Awais Rashid: We have a data economy at the moment, which kind of very much works on an all-or-nothing model. You know, as a user, you either sign up to a service - you provide your data, and you, you know, benefit from the service. Or you sign up to - don't sign up to a service and don't provide your data. There is no halfway house there because you don't really have a lot of control as to how that data is subsequently used. You don't have a lot of visibility of how that data is subsequently used. And can you actually say, no, I want it to be used for X purposes, but not Y purpose? 

Awais Rashid: And now we're back to the contact tracing. If you are contributing your data to a contact-tracing platform, how do you actually say, well, I only want it to be used for the very purpose of contact tracing and no other purpose? And how do you ensure that it's actually not being used for any other purpose? And that is really where the problem lies at the heart of it. There is, of course, quite a lot of work that has been done in the space of what is known as privacy-enhancing technologies, about ways to share data without revealing information about the particular details about individuals to whom that data belongs. But, you know, a lot more work needs to be done in that space to make sure, how do we actually share this kind of information on a massive scale, for example, in the case of a crisis or a pandemic without actually impinging on privacy and civil liberties? 
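
The privacy-enhancing technologies Awais mentions map neatly onto how decentralized exposure notification actually works: phones broadcast rotating identifiers derived from a locally held secret, and any matching happens on the device rather than in a central database. A simplified sketch; real schemes such as DP-3T or the Google/Apple framework use HKDF and AES rather than this bare HMAC construction, and the names here are illustrative:

```python
import hashlib
import hmac
import os

def rolling_id(daily_key: bytes, interval: int) -> bytes:
    """Derive a short, unlinkable broadcast identifier for one time interval."""
    return hmac.new(daily_key, interval.to_bytes(4, "big"), hashlib.sha256).digest()[:16]

# Alice's phone broadcasts rolling IDs derived from a key that never leaves it.
alice_key = os.urandom(32)
broadcast = [rolling_id(alice_key, i) for i in range(144)]  # e.g. 10-minute intervals

# Bob's phone records only what it heard; the IDs alone reveal nothing about Alice.
heard_by_bob = {broadcast[37], broadcast[99]}

# If Alice tests positive, only her daily key is published. Bob re-derives her
# IDs locally and checks for overlap - there is no central database of contacts.
def exposed(published_key: bytes, heard: set, intervals: int = 144) -> bool:
    return any(rolling_id(published_key, i) in heard for i in range(intervals))

print(exposed(alice_key, heard_by_bob))       # True: Bob was near Alice
print(exposed(os.urandom(32), heard_by_bob))  # False: unrelated key matches nothing
```

The design choice this illustrates is exactly the one under debate above: the published key can only be used to answer "was I near this person?", so purpose limitation is enforced by the construction itself rather than by policy around a central repository.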

Dave Bittner: Yeah. All right. Well, professor Awais Rashid, thanks for joining us. 

Dave Bittner: And that's the CyberWire. For links to all of today's stories, check out our daily briefing at thecyberwire.com. And for professionals and cybersecurity leaders who want to stay abreast of this rapidly evolving field, sign up for CyberWire Pro. It'll save you time, keep you informed, and it's finger-licking good. Listen for us on your Alexa smart speaker, too. Be sure to check out this weekend's "Research Saturday" and my conversation with Joakim Kennedy and Rory Gould from Anomali. We'll be discussing the Smaug ransomware as a service. That's "Research Saturday." Don't miss it. 

Dave Bittner: The CyberWire podcast is proudly produced in Maryland out of the start-up studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. 

Dave Bittner: Our amazing CyberWire team is Elliott Peltzman, Puru Prakash, Stefan Vaziri, Kelsea Bond, Tim Nodar, Joe Carrigan, Carole Theriault, Ben Yelin, Nick Veliky, Gina Johnson, Bennett Moe, Chris Russell, John Petrik, Jennifer Eiben, Rick Howard, Peter Kilpe. I'm Dave Bittner. Thanks for listening. We'll see you back here next week.