The CyberWire Daily Podcast 6.7.23
Ep 1839 | 6.7.23

PowerDrop’s capabilities are up in the air. A Russian cyberespionage campaign channels its inner 007. A disconnect between law firms and cybersecurity protections.

Transcript

Dave Bittner: A new PowerShell remote access tool targets a US defense contractor. Current Russian cyber operations against Ukraine are homing in on espionage. CISA and its partners have released a Joint Guide to Securing Remote Access Software. A bug has been reported in Visual Studio's UI. Awais Rashid from the University of Bristol discusses privacy in health apps. Our guest is Jim Lippie of SaaS Alerts with insights on software-as-a-service application security. And what are the disconnects between cybersecurity and the legal profession?

Dave Bittner: I'm Dave Bittner with your CyberWire intel briefing for Wednesday, June 7th, 2023.

PowerDrop, a new PowerShell remote access tool, targets a US defense contractor.

Dave Bittner: PowerDrop is a new malicious PowerShell script that researchers at Adlumin discovered infecting machines at an unspecified US aerospace defense contractor. The malware combines a Windows PowerShell script with Windows Management Instrumentation to create a new remote access trojan. The researchers write that what separates this malware from others is its novelty: code like this hasn't been seen before. They say it straddles the line between a basic "off-the-shelf" threat and the advanced tactics used by advanced persistent threat groups. Though attribution remains inconclusive, Adlumin assesses that, based on the target and the living-off-the-land tactics, the threat actors are likely operating on behalf of a nation-state. It's currently unknown whether this incident is part of a larger campaign targeting multiple organizations.
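
To make the living-off-the-land angle concrete, here is a minimal hunting sketch in Python that enumerates permanent WMI event subscriptions, the kind of persistence spot PowerShell-based implants often use. It is not based on PowerDrop's published indicators; the class names are standard WMI subscription classes, and the keyword list is a hypothetical starting point you would tune to your own environment.

```python
"""
Minimal WMI-persistence hunting sketch (assumes a Windows host with
PowerShell available). This does NOT use PowerDrop's actual indicators;
it only illustrates the general technique of reviewing permanent WMI
event subscriptions for PowerShell-flavored persistence.
"""
import json
import subprocess

def query_wmi_class(wmi_class: str) -> list[dict]:
    """Ask PowerShell to dump a class from the root\\subscription namespace as JSON."""
    cmd = (
        f"Get-WmiObject -Namespace root\\subscription -Class {wmi_class} "
        "| Select-Object Name, CommandLineTemplate, Query "
        "| ConvertTo-Json -Depth 3"
    )
    out = subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command", cmd],
        capture_output=True, text=True, check=False,
    ).stdout.strip()
    if not out:
        return []
    data = json.loads(out)
    return data if isinstance(data, list) else [data]

# Hypothetical keyword list; tune to your environment.
SUSPICIOUS = ("powershell", "-enc", "-encodedcommand", "iex", "downloadstring")

for cls in ("__EventFilter", "CommandLineEventConsumer", "__FilterToConsumerBinding"):
    for item in query_wmi_class(cls):
        blob = json.dumps(item).lower()
        if any(marker in blob for marker in SUSPICIOUS):
            print(f"[!] Review {cls}: {item.get('Name', '<unnamed>')}")
```

Anything the script prints is a lead for a human to review, not a verdict; plenty of legitimate management tooling also registers WMI subscriptions.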

Current Russian cyber operations against Ukraine focus on espionage.

Dave Bittner: CERT-UA warned Monday of a Russian cyber campaign that prospects government and media targets for the purpose of data collection. It uses LONEPAGE malware, a PowerShell script, to stage information stealers and keyloggers on its targets. The campaign, which has been active since the second half of last year, is consistent with recent Russian cyber operations in that its goal is espionage rather than influence or sabotage.

CISA and partners release Joint Guide to Securing Remote Access Software. 

Dave Bittner: CISA, the FBI, the MS-ISAC, and the Israel National Cyber Directorate (INCD) have released a Joint Guide to Securing Remote Access Software. The guide centers on detecting and preventing the malicious use of legitimate remote access software and the common ways it can be exploited against an organization. One particular concern is that this software is used for routine IT tasks, which means threat actors who abuse it typically evade antivirus tools and endpoint detection and response defenses. Abusing remote access software doesn't require a threat actor to create a new capability. CISA explains in the guide that remote access software spares malicious actors the trouble of creating and deploying custom malware, and that the way remote access products are legitimately used by network administrators closely resembles how malicious RATs are used by threat actors. The guide recommends, among other things, that organizations establish a baseline of their normal activity and monitor for unusual spikes indicative of a compromise. For prevention and mitigation, the guide strongly encourages organizations to implement zero-trust solutions whenever and wherever possible. Adding safeguards that prevent users from accessing a large number of machines in a short amount of time can also mitigate risk.
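
As a rough illustration of that baselining advice, here is a small Python sketch that builds a per-user baseline of remote-access session counts and flags bursts of activity across many machines in a short window. The log format, thresholds, and the mean-plus-three-standard-deviations heuristic are all assumptions for illustration; in practice you would feed it whatever your RMM or SIEM exports and tune the limits to your environment.

```python
"""
Illustrative baseline-and-spike check, loosely modeled on the guide's advice
to baseline normal remote-access activity and alert on unusual bursts.
The (user, host, timestamp) session format is hypothetical.
"""
from collections import defaultdict
from datetime import datetime, timedelta
from statistics import mean, pstdev

Session = tuple[str, str, datetime]  # (user, host, timestamp)

def baseline_daily_counts(history: list[Session]) -> dict[str, float]:
    """Per-user alert threshold: mean + 3*stddev of daily session counts."""
    per_user_day = defaultdict(lambda: defaultdict(int))
    for user, _host, ts in history:
        per_user_day[user][ts.date()] += 1
    return {
        user: mean(days.values()) + 3 * pstdev(days.values())
        for user, days in per_user_day.items()
    }

def flag_machine_bursts(today: list[Session], max_hosts: int = 10,
                        window: timedelta = timedelta(minutes=30)) -> set[str]:
    """Flag users who touch many distinct machines within a short window."""
    flagged = set()
    by_user = defaultdict(list)
    for user, host, ts in today:
        by_user[user].append((ts, host))
    for user, events in by_user.items():
        events.sort()
        for i, (start, _h) in enumerate(events):
            hosts = {h for t, h in events[i:] if t - start <= window}
            if len(hosts) > max_hosts:
                flagged.add(user)
                break
    return flagged
```

The specific heuristic matters less than the guide's underlying point: know what normal remote-access activity looks like before you try to spot the abnormal.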

Visual Studio UI bug reported.

Dave Bittner: Researchers at Varonis discovered a UI bug in Microsoft Visual Studio's extension installer that allows an attacker to spoof an extension signature and effectively impersonate any publisher. The flaw can be exploited by opening the VSIX file as a ZIP archive and adding extra characters to the extension name, which prevents the digital signature warning from appearing in the installation prompt. The threat actor can then add a phony digital signature label at the beginning of the file name. Microsoft fixed the flaw in April, and users are advised to ensure Visual Studio is up to date.
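
Since the whole trick rests on a VSIX package being an ordinary ZIP archive, a quick triage script can open one the same way and look for padding or control characters in the display name, the sort used to push the real warning out of view. This is a sketch under a couple of assumptions: that the manifest sits at extension.vsixmanifest in the archive root and that the display name lives in a DisplayName element, which matches the usual VSIX layout but is worth verifying against your own packages.

```python
"""
Quick VSIX triage sketch: open the package as a ZIP and check the manifest's
display name for non-printing or unusual whitespace characters.
"""
import sys
import unicodedata
import zipfile
import xml.etree.ElementTree as ET

def suspicious_chars(text: str) -> list[str]:
    """Return characters that are control, format, or non-standard spaces."""
    return [c for c in text
            if unicodedata.category(c) in ("Cc", "Cf", "Zs") and c != " "]

def check_vsix(path: str) -> None:
    with zipfile.ZipFile(path) as z:
        manifest = z.read("extension.vsixmanifest")
    root = ET.fromstring(manifest)
    # Search namespace-agnostically so manifest schema versions don't matter here.
    for el in root.iter():
        if el.tag.endswith("DisplayName") and el.text:
            bad = suspicious_chars(el.text)
            status = "SUSPICIOUS" if bad else "looks clean"
            print(f"{path}: DisplayName={el.text!r} -> {status}")

if __name__ == "__main__":
    for vsix in sys.argv[1:]:
        check_vsix(vsix)
```

Invoke it with one or more .vsix paths as arguments; a hit means the package deserves a closer look, not that it is definitely malicious.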

Disconnects between cybersecurity and the legal profession?

Dave Bittner: And finally, the International Legal Technology Association, in partnership with the Conversant Group, has released a joint research report detailing the disconnects between cybersecurity and legal personnel and practices. The survey benchmarks the cyber practices of law firms worldwide. Law firms are said to be an ideal target for malicious actors, given the extremely sensitive business, civil, criminal, and personal data they store for clients and the potential financial payoff for the hackers. Because of the sensitive nature of the data that can be lifted, law firms are said to be significantly more inclined to give in to a threat actor's demands. The report shares that as of the end of 2021, almost a third of law firms had seen a breach, and 36% reported the past presence of malware. A surprisingly low share of law firms, just over 15%, said they saw gaps in their cybersecurity protections, despite being a common target; the research shows the true figure to be significantly higher. About three-quarters of those surveyed also believe they have a leg up on others in their industry in terms of cyber protections, though the researchers found this to be unlikely. 65% of respondents also report having lateral movement defenses, though the researchers found only two offerings on the market that include those capabilities, which suggests firms' understanding of what true "lateral movement defenses" are may be murky at best. So, counselors, there may be some overconfidence here. Beef up your cyber protections, or you may find yourself embroiled in legal battles on your own time.

Dave Bittner: Coming up after the break, Awais Rashid from the University of Bristol discusses privacy in health apps. Our guest is Jim Lippie of SaaS Alerts with insights on software-as-a-service application security. Stay with us.

Dave Bittner: Jim Lippie is CEO of cybersecurity firm SaaS Alerts, which recently released its annual cybersecurity report. The research specifically looks at attack attempts on small businesses throughout the year.

Jim Lippie: 32% of small and medium sized businesses today are using MFA, multi-factor authentication, which is consistent with what CISA cites in terms of what is going on in the enterprise. They say it's 30%. So that is a very low number, both of those, 30 and 32% respectively, based on how important, you know, MFA is in terms of mitigating risk. So that was one finding that we found pretty interesting. Another was you know, last year, the number one attacker, if you will, in terms of countries that were coming after small and medium sized businesses around the world was Russia. You know, that was in the 2022 report. What we found in 2023 or the findings from this past year based on the Ukrainian conflict is that those attacks from Russia came way down, and now the number one from this past year, the number one threat actor country, if you will, is China. They're trying to get into small and medium sized businesses the most, and I want to be very clear about the fact that this isn't necessarily nation-state type attacks. Right? But these are where the attacks on small and medium sized businesses are emanating.

Dave Bittner: And what are we talking about in terms of volume, here, that you all are tracking?

Jim Lippie: Yeah, so we see approximately 50,000 brute force attacks every single day on about 7,500 small businesses that we monitor.

Dave Bittner: Do you have a sense for what the success rate is?

Jim Lippie: It's consistent with what we see in terms of, you know, overall averages, but it's about 1%.

Dave Bittner: Still a big number.

Jim Lippie: It's a big number when you consider the volume.

Dave Bittner: Yeah. I know another thing that you all track in the report, here, is phishing attacks. What are you all seeing there?

Jim Lippie: They're definitely on the rise. Phishing attacks, social engineering, there's so much information out there right now on the net that it's really easy, at this point, for threat actors to gain information that's publicly available, and then use that information in phishing campaigns and social engineering campaigns to trick unsuspecting end users into sharing information they should not be sharing, and then obviously leveraging that to gain access to your credentials and then, down the line, into the environment, and then they move laterally from there. We've seen a significant, you know, significant uptick in successful attacks recently, even in the last few months since we released the report from last year's findings.

Dave Bittner: Well, based on the information that you all have gathered here, what are your recommendations, then, for organizations to best protect themselves?

Jim Lippie: Number one, everyone should be using MFA. Number two, you should be monitoring all of these SaaS applications on an ongoing basis for unusual behavior. Dave, we're in a needle-in-a-haystack game, here. Ninety-eight percent of all the security events that these applications throw off every single day are completely harmless. It's the 2% that we need to worry about, and they can be difficult to find. You know, for instance, one of our partners in the Midwest recently uncovered a Chinese spy ring. It was internal to a company. They never would've known that this was going on if they were not monitoring the user behavior associated with Office 365. This employee was sharing very sensitive information out of OneDrive and SharePoint, sent to two specific IP addresses in China. Once it was downloaded on the other side, it was deleted to essentially destroy the evidence. What they didn't realize was that our software actually captures that activity, and that case has been handed over to the authorities. But that is something that if you're not monitoring for it, if you're not looking for it, you're never going to know it. So being more vigilant and monitoring that level of activity is really, really important on an ongoing basis. Making sure that these applications are configured to best practice initially, and then again, monitoring those changes on a go-forward basis, really, really important. And then just being more vigilant in general, you know. There's a lot of best practices that people generally don't follow. You know, instead of a password, have a passphrase. You know? Use password managers. There's a number of best practices from a general perspective people should be following to mitigate their risk.

Dave Bittner: That's Jim Lippie from SaaS Alerts.
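
The pattern in Lippie's example, sensitive files shared or downloaded externally and then promptly deleted, is the kind of thing a simple audit-log sweep can surface. Here is a hedged Python sketch along those lines. The operation and field names (FileDownloaded, FileDeleted, UserId, ObjectId, ClientIP) follow common Microsoft 365 unified audit log conventions, but treat them, the ISO-8601 timestamp assumption, and the 24-hour window as placeholders to map onto whatever your SaaS monitoring tool actually exports.

```python
"""
Rough sketch of a 'download-or-share, then delete' sweep over a CSV export
of SaaS audit events. Field names and operations are assumptions; adapt
them to your own export's schema and timestamp format.
"""
import csv
from datetime import datetime, timedelta

EXFIL_OPS = {"FileDownloaded", "SharingSet", "AnonymousLinkCreated"}
DELETE_OPS = {"FileDeleted", "FileRecycled"}
WINDOW = timedelta(hours=24)

def load_events(path: str) -> list[dict]:
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    for r in rows:
        # Assumes ISO-8601 timestamps; adjust parsing to your export's format.
        r["CreationDate"] = datetime.fromisoformat(r["CreationDate"])
    return sorted(rows, key=lambda r: r["CreationDate"])

def flag_exfil_then_delete(events: list[dict]) -> list[tuple[dict, dict]]:
    """Pair each exfil-style event with a later delete of the same file by the same user."""
    hits = []
    for i, ev in enumerate(events):
        if ev["Operation"] not in EXFIL_OPS:
            continue
        for later in events[i + 1:]:
            if later["CreationDate"] - ev["CreationDate"] > WINDOW:
                break
            if (later["Operation"] in DELETE_OPS
                    and later["UserId"] == ev["UserId"]
                    and later["ObjectId"] == ev["ObjectId"]):
                hits.append((ev, later))
    return hits

if __name__ == "__main__":
    for exfil, delete in flag_exfil_then_delete(load_events("audit_export.csv")):
        print(f"[!] {exfil['UserId']} {exfil['Operation']} {exfil['ObjectId']} "
              f"at {exfil['CreationDate']}, deleted at {delete['CreationDate']} "
              f"(client IP {exfil.get('ClientIP', 'n/a')})")
```

The useful part is correlating the share or download with the later deletion, rather than alerting on either event alone, which is exactly the needle-in-a-haystack filtering Lippie describes.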

Dave Bittner: And I'm pleased to be joined, once again, by Professor Awais Rashid. He is director of the National Research Center on Privacy, Harm Reduction, and Adversarial Influence Online at the University of Bristol. Dr. Rashid, always a pleasure to welcome you back to the show. I think like a lot of folks, I have taken full advantage of the various apps on my mobile device, and you wanted to address, today, some of those health apps and some of the privacy concerns that you and your colleagues have been looking into. What can you share with us today?

Awais Rashid: So the app market has been exploding for a while now, but even more so, I would expect in some part due to the pandemic, personal health is very much on everyone's mind, be it, you know, apps that allow you to track your exercise or physical activity, through to apps that provide support for mental health and wellbeing. But that naturally begs the question as to what information these apps collect, what their data privacy practices are, and whether the permissions that are being requested are suitable for the task at hand. There are a number of pieces of work that we've been doing. For example, we did an analysis of 27 top-ranked mental health apps from the Google Play store, and we noted, which has been found elsewhere in the field as well, that often, you know, permissions are requested which are not necessarily required for the app to provide its functionality. We also found other issues, for example, insecure cryptographic implementations. We also found that personal data and credentials were actually being leaked through logs and web requests, and the latter is not necessarily an adversarial thing, it's just really, you know, implementation practices in themselves. In other cases, we've also looked at developers asking for permissions, especially linked to, for example, fitness-related applications. And again, there are interesting implications there, because often certain permissions are very complicated and developers don't fully understand them, so they will request these permissions when they aren't necessarily required for the task at hand. So this creates an interesting problem in itself: we are increasingly reliant on these apps, we utilize them a lot, but there are significant privacy considerations around the data that is being collected.
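
One piece of the analysis Rashid describes, comparing the permissions an app requests against what its functionality plausibly needs, can be sketched in a few lines. The example below shells out to `aapt dump permissions`, a standard Android SDK build-tools command; the expected-permission list is a made-up example for a hypothetical activity tracker, and the regex assumes the name='...' output format of newer aapt versions, so adjust it for older output.

```python
"""
Tiny permission-audit sketch: list the permissions an APK requests and
diff them against a hand-written allowlist of what the app should need.
"""
import re
import subprocess
import sys

# Hypothetical allowlist: what a simple activity tracker plausibly needs.
EXPECTED = {
    "android.permission.INTERNET",
    "android.permission.ACTIVITY_RECOGNITION",
}

def requested_permissions(apk_path: str) -> set[str]:
    out = subprocess.run(
        ["aapt", "dump", "permissions", apk_path],
        capture_output=True, text=True, check=True,
    ).stdout
    # Newer aapt prints lines like: uses-permission: name='android.permission.CAMERA'
    return set(re.findall(r"uses-permission.*?name='([^']+)'", out))

if __name__ == "__main__":
    requested = requested_permissions(sys.argv[1])
    extra = sorted(requested - EXPECTED)
    print("Requested:", sorted(requested))
    print("Beyond what the app's functionality seems to need:", extra or "none")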

Dave Bittner: Yeah, we've seen some reports lately, really sobering reports, about some of these apps. I've seen some that try to help people with things like addiction, but then share private information about the users for advertising. Seems to me like there's a real betrayal of trust, here. How do we come at that? How do we bridge that gap?

Awais Rashid: So that's really exactly where the problem lies, right? Because it's who are the third parties with whom the data is being shared? Who are the advertisers, potentially, with whom the data is shared? And there are multiple, multiple issues here. I think one is as to what the user is actually signing up to in the first instance, and we all know the problems with very, very complicated privacy policies, which are often presented in legal jargon, and so on. But increasingly, for example, on both Apple iOS and on Android, you have, you know, dynamic permissions, where the app, as you install it or as you utilize it, asks you to grant particular permissions or deny those particular permissions. However, a lot of the time, users are not really, really clear as to whether these permissions are necessarily needed. Okay? And then if the data has been collected, what has it been used for? Transparency is very, very important. How we provide it is much harder. So transparency can be provided, or we can claim that we provide transparency, through very complicated privacy policies, and people agree that's not necessarily a good way of thinking. We can also provide transparency by asking for permissions as they are needed, which is great. However, if as a user I don't really understand what that permission is asking me to do, it may or may not be needed, and what are the ramifications for that? And I think there is a sort of a bigger challenge, here, as to how do we actually communicate to users what it is the app is asking for and what are the implications? And that is non-trivial.

Dave Bittner: Do you suppose that perhaps it's time to increase the regulatory burden on some of these companies? I mean, to say to them listen, you know, we gave you a shot at self-regulating yourselves and that hasn't exactly gone very well. So we're going to put some rules in place here.

Awais Rashid: Yeah, so this is a live debate. This is a live debate in the UK. So for instance, there is an online harms bill going through Parliament that is talking about the responsibility that sits on, you know, large service providers, for example, for what happens on their platforms, and pushing that to the apps that they're also providing. There are also other calls for, for example, you know, voluntary codes of practice around making things clear. Regulation does have a place, but the challenge with regulation always tends to be that it responds to the here and now, and things in the technology sector certainly move very, very fast. And I think it's a question of how we actually provide a regulatory environment which responds to those kinds of changing technological landscapes. And that in itself is again a real challenge. But the biggest thing is evidence, and making policy on the basis of evidence, and that is something certainly that we do within the center, of which I am the director. We aim to provide evidence on the basis of which, for example, some of the debates around the online harms bill in the UK are being shaped. But I would also emphasize the role isn't for just one party. Regulators have a role to play. Organizations who are building these apps, they have a role to play. Developers, who are, you know, actually doing the work on the ground, they have a role to play. Platform providers have a role to play, for example, Google, Apple, you know, those kinds of organizations. And of course, we cannot say that users have no role to play, because as a user, you know, I want to be able to decide what information I give. But at the moment, that balance isn't there, in the sense that a lot of the time, users feel that they either have to give all the permissions, or they don't really know and they're unsure and they give all the permissions, or they often have this feeling of helplessness. They go, well, I have to because I want to use the service, for example, and use the app. And that balance isn't right. So we have to make sure that all these different stakeholders come together and do something about it.

Dave Bittner: All right, well interesting work you all are doing there on this. Dr. Awais Rashid, thanks so much for joining us.

Dave Bittner: And that's the CyberWire. For links to all of today's stories, check out our daily briefing at thecyberwire.com. We'd love to know what you think of this podcast. You can email us at cyberwire@n2k.com. Your feedback helps us ensure we're delivering the information and insights that help keep you a step ahead in the rapidly changing world of cybersecurity. We're privileged that N2K and podcasts like the CyberWire are part of the daily intelligence routine of many of the most influential leaders and operators in the public and private sector, as well as the critical security teams supporting the Fortune 500 and many of the world's preeminent intelligence and law enforcement agencies. N2K's strategic workforce intelligence optimizes the value of your biggest investment: your people. We make you smarter about your team, while making your team smarter. Learn more at n2k.com. This episode was produced by Liz Irvin and senior producer Jennifer Eiben. Our mixer is Trey Hester with original music by Elliott Peltsman. The show was written by Rachel Galphin. Our executive editor is Peter Kilpe and I'm Dave Bittner. Thanks for listening. We'll see you back here tomorrow.