The CyberWire Daily Podcast 5.23.23
Ep 1829 | 5.23.23

BlackCat gang crosses your path and evades detection. You’re just too good to be true, can’t money launder for you. Commercial spyware cases.


Dave Bittner: AhRat exfiltrates files and records audio on Android devices. The BlackCat ransomware group uses a signed kernel driver to evade detection. GUI-Vil in the cloud. Unwitting money mules. Ben Yelin unpacks the Supreme Court's Section 230 rulings. Our guest is Mike DeNapoli from Cymulate with insights on cybersecurity effectiveness. And a trio of commercial spyware cases.

Dave Bittner: I'm Dave Bittner with your CyberWire intel briefing for Tuesday, May 23rd, 2023.

AhRat exfiltrates files and records audio on Android devices.

Dave Bittner: ESET reports finding a trojanized Android app that's afflicting Android devices with the AhRat malware. The iRecorder Screen Recorder app began its career on Google Play innocuously in 2021, but by August of last year, had become malicious. ESET explains that the app received an update containing malicious code quite a few months after its launch. The application's specific malicious behavior, which involves extracting microphone recordings and stealing files with specific extensions, potentially indicates its involvement in an espionage campaign. The malicious version has received some 50,000 downloads. Google has purged it from the store, and ESET has found no evidence of the malware anywhere else in the wild. AhRat is based on AhMyth, and its functionality suggests its origins as an espionage tool. AhMyth itself has an intelligence service heritage. It was used by APT36, a group probably based in Pakistan that deployed AhMyth against government and military targets in South Asia, but ESET is careful to avoid attribution in its report. There's considerable crossover between criminal and espionage tools, and the purveyors of AhRat remain unknown.

BlackCat ransomware group uses signed kernel driver to evade detection. 

Dave Bittner: A new report warns of bad luck: BlackCat may be crossing your path without your knowledge. Trend Micro reports that the BlackCat ransomware gang is using a new signed kernel driver to evade detection. The researchers assess that this new kernel driver could be an updated version of signed code Mandiant, Sophos and SentinelOne discovered in December. The three firms' coordinated disclosure showed attackers abusing Microsoft developer accounts certified through Microsoft's hardware developer program to create malicious kernel drivers and use them in ransomware attacks.

Dave Bittner: Trend Micro believes that this new driver is an updated version that inherited the main functionality from the samples disclosed in previous research. They further explain that these kernel drivers are mostly used in the evasion phases of an attack. Trend Micro assesses that this new signed driver is still being developed, because it is not structured well and some of its functions currently cannot be used. The report found that threat actors can get their hands on code-signing certificates by purchasing leaked certificates available on the dark web, by abusing Microsoft's portal, or by impersonating legitimate entities.
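Defenses against signed-but-malicious drivers often start with simple indicator matching against published hashes. As an illustrative sketch (not any vendor's tooling, and the blocklist entry below is a placeholder rather than a real indicator), the following Python hashes `.sys` files in a directory and flags any that match a blocklist of known-bad SHA-256 values:

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist of SHA-256 hashes of known-malicious signed drivers.
# Real indicators would come from vendor reports such as Trend Micro's IOC lists.
KNOWN_BAD_SHA256 = {
    "0" * 64,  # placeholder entry, not a real indicator
}

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large drivers aren't read into memory at once."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def flag_known_bad(driver_dir: Path) -> list[Path]:
    """Return any .sys files in the directory whose hash appears on the blocklist."""
    return [p for p in sorted(driver_dir.glob("*.sys"))
            if sha256_of(p) in KNOWN_BAD_SHA256]
```

Hash matching only catches known samples, of course; it complements, rather than replaces, signature-chain checks and behavioral detection.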

GUI-Vil in the cloud.

Dave Bittner: Permiso has blogged about a threat group they've been tracking for the last year and a half: a financially motivated cloud threat actor they've called GUI-vil. The group is reportedly based in Indonesia and participates in cryptomining, leveraging Amazon Web Services for their illicit operations. The hackers employ graphical user interface tools, including an older version of the S3 Browser from early 2021. They use the browser to conduct their operations after gaining access to the AWS Management Console. Researchers say the group initially performs reconnaissance by monitoring public sources for exposed AWS keys and scanning for vulnerable GitLab instances. The researchers also say that the exploitation of known vulnerabilities, as well as of publicly exposed credentials, are the primary methods of initial compromise.
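The exposed-key hunting Permiso describes can be approximated with simple pattern matching, since AWS long-term access key IDs have a documented shape: a four-letter prefix such as "AKIA" (long-term user keys) or "ASIA" (temporary credentials) followed by 16 uppercase alphanumeric characters. A minimal Python sketch, which defenders can also use to audit their own repositories and configs before attackers find the keys first:

```python
import re

# AWS access key IDs: a documented prefix (AKIA for long-term keys,
# ASIA for temporary ones) followed by 16 uppercase alphanumerics.
AWS_KEY_ID = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def find_exposed_key_ids(text: str) -> list[str]:
    """Return candidate AWS access key IDs found in a blob of text
    (e.g., a config file, commit diff, or pastebin dump)."""
    return AWS_KEY_ID.findall(text)
```

Note that this only finds the key ID; the paired secret access key has no fixed prefix, so real scanners (and real attackers) pair patterns like this with entropy checks and context heuristics.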

Unwitting money mules. 

Dave Bittner: Is that new Tinder date really that generous, or is it too good to be true? The US government has begun carrying out over 4,000 legal actions against individuals involved in money-laundering schemes. These include cases involving people who acted, sometimes unwittingly, as money mules. The Register reports that recently 25 individuals have been charged with participating in money laundering schemes. In one case of note, Craig Clayton of Rhode Island is alleged to have created 65 shell companies in the US and 80 bank accounts to launder over $35 million between 2019 and 2023. US Postal Service Inspector-in-Charge Eric Shen said, "Anyone can be approached to be a money mule, but criminals often target students, those looking for work and those on dating websites. When those individuals use the US mail to send or receive funds from fraudsters, Postal Inspectors are quick to step in and put a stop to money mule activities." Many apparently unwitting money mules have been given strongly worded letters explaining the legal consequences if they don't cease all alleged money laundering. While receiving cash from a virtual date may sound enticing, the old adage "It's too good to be true" perfectly sums up this scheme. Experts agree, of course, that it's just common sense not to take money from strangers on the internet. Doing so can lead to scams, and in this case, unknowingly assisting in large-scale money laundering operations.

Three commercial spyware cases.

Dave Bittner: And finally, three spyware cases have attracted attention early this week. The first case involves charges brought against four suspects from Bavarian company FinFisher, who are accused by German authorities of selling surveillance software to Turkey, The Washington Post reports. Prosecutors say the suspects intentionally violated licensing requirements by selling surveillance software to countries outside of the EU. The outlet notes that the company's FinSpy software was made available under false pretenses to members of the Turkish opposition in 2017, and was used to spy on them.

Dave Bittner: Israeli spyware maker QuaDream is shutting down after failing to get authorization to sell its spyware to new clients, Haaretz reports. The company also promised products and capabilities that never ended up seeing the light of day, including a broadened scope for its existing offering that would have allowed the hacking of Android devices. The provider struggled to compete against fellow Israeli spyware giant NSO Group within the European Union, and homed in on Asia, Africa and Arab nations. QuaDream reportedly held talks with four countries, including Morocco, after NSO Group didn't get the green light to renew its contract with that nation. An ethical hacker from Amnesty International calls the attempt to strike a deal with Morocco, after a multitude of reported spyware abuses, proof of the total inability of the commercial spyware industry to police itself. QuaDream is in the process of selling off its assets, and its employees are reportedly interviewing with other organizations. And lastly, Mexico's top human rights official, Alejandro Encinas, was found to have been targeted by NSO Group's flagship spyware Pegasus, The New York Times reports. This is reportedly the first case in the nation's history to target such a high-ranking official in the country's administration. While it isn't possible to confirm with 100% certainty which Mexican government agency targeted Mr. Encinas, sources familiar with the contracts affirm that only the military has access to Pegasus. Mr. Encinas and the military have a less than pleasant relationship, as he has previously accused the Armed Forces of being involved in a mass disappearance of 43 students. His phone has reportedly been infected a number of times. Coming up after the break, Ben Yelin unpacks the Supreme Court's Section 230 rulings. Our guest is Mike DeNapoli from Cymulate with insights on cybersecurity effectiveness. Stay with us.

Dave Bittner: Mike DeNapoli is director and cybersecurity architect at Cymulate, an automated security validation testing company. They recently released their annual cybersecurity effectiveness report, and Mike DeNapoli joins us with some of the highlights.

Mike DeNapoli: So there are two major highlights. One is good news: for the most part, security controls are getting better year over year. We looked at the results over the last three years to do some comparisons. There was a spike up in risk levels in 2021, but that wasn't unexpected. 2021 is where a lot of the push to permanent remote work occurred, so there were a lot of changes that needed to be made to security controls, and therefore we did expect that overall risk would go up, and it did. But in 2022, over the course of the year, we saw that those same security controls were incurring less risk to the organization. The levels of risk were going down. There were, however, two exceptions, and that's the other interesting thing that we found.

Dave Bittner: Hm, well, let's talk about that. What were the exceptions?

Mike DeNapoli: The first one was defenses around public-facing architecture like websites and services. So we saw a lot of improperly configured web application firewalls. We saw a lot of organizations that are not adopting newer, but not new, protocols and technologies. For example, in the email space, there's a distinct lack of DKIM and SPF and those sorts of things being used. So that's one: public-facing infrastructure. The level of risk associated with that class of infrastructure went up a little bit, and that's not a great thing to see. The second major area of higher risk than expected was in the defense of data. So what we're seeing is that, on the data exfiltration side, while the use of tools like data loss prevention systems and cloud access security broker systems has gone up, the overall risk of losing control of sensitive data has also gone up, and we would have expected it to come down as those security controls are more often used.

Dave Bittner: Hm, so what are the conclusions to be had then, based on that reality?

Mike DeNapoli: So the first one, with public-facing infrastructure: as a whole, the industry really has to begin understanding that this is a team sport. While of course a business is going to take the most effective security measures it can for its own organization, that's expected, we also have to start working with the greater cybersecurity community. So we have to start implementing protocols like DKIM and DMARC on email. Now, those don't directly protect my organization when I implement them, but they do protect other organizations by allowing them to confirm that email sent from my org came from my org. And in return, I get to use SPF, which allows me to do that confirmation when an email is coming in from someone else. But if both sides aren't using these technologies, neither side can benefit, so that's the first thing. Public-facing infrastructure: first off, finding shadow IT, because there are a lot of websites that are visible that aren't behind a WAF. Second, let's start playing it as a team sport, in addition to all of the things that, of course, a business is going to do that are solely focused on protecting itself. In the DLP world, the data loss prevention world, the issue has become that threat actors adapt, and they adapt fairly quickly. They're smaller groups, sometimes individuals; there are not a lot of change control requests and other things going on, so they tend to adapt quickly, and that's what they've done here. What we're seeing is that traditional data exfiltration methods, such as uploading to a Dropbox or a OneDrive, or exfiltration by things like USB, are being quite well defended. However, some channels cannot be easily blocked at a firewall or a VPN, such as AWS S3 storage: if you block S3, you're blocking about half of the internet, so that's not easily accomplished in any way.
Those methods of data exfiltration are being allowed, and it's because the industry needs to start moving away from group policies and physical restrictions as the sole means of controlling data, toward a more hybrid model where we have CASB, we have DLP, and they are being tuned on a regular basis to catch new exfiltration attempts and methodologies. So it's not all horrible news. The methods that are effective are remaining effective. However, there are newer methods that are just very difficult to block with traditional security techniques, and they are gaining steam quite rapidly.
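To make the SPF half of that email discussion concrete, here is a minimal Python sketch that picks out a domain's SPF policy from its TXT records, following RFC 7208's rules: an SPF record is a TXT record whose version tag is "v=spf1", and publishing more than one is a permanent error. In a real deployment you would fetch the TXT records over DNS (for example with the dnspython library); here the record strings are supplied by hand.

```python
from typing import Optional

def pick_spf_record(txt_records: list[str]) -> Optional[str]:
    """Return the domain's SPF policy from its TXT records, if exactly one exists.
    Per RFC 7208, an SPF record starts with the version tag 'v=spf1';
    multiple SPF records are a permanent error, so treat that case as no policy."""
    spf = [r for r in txt_records if r == "v=spf1" or r.startswith("v=spf1 ")]
    return spf[0] if len(spf) == 1 else None

def spf_mechanisms(record: str) -> list[str]:
    """Split an SPF record into its mechanisms and modifiers, dropping 'v=spf1'.
    e.g. 'v=spf1 include:_spf.example.com -all' -> ['include:_spf.example.com', '-all']."""
    return record.split()[1:]
```

This only reads a published policy; actually evaluating it against a sending IP (and layering DKIM and DMARC on top) is what receiving mail servers do, which is why both sides have to participate for anyone to benefit.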

Dave Bittner: So what are your recommendations then, given the information you gathered here? How is it that folks can best go about defending themselves?

Mike DeNapoli: On the data exfiltration side specifically, there are a couple of things that can be done. One is to begin to walk down the path of data loss prevention tools and cloud access security broker tools. And many organizations actually have begun walking down those paths, but these systems are not set-it-and-forget-it, so this is something we expect to see the impact of over time. If you are walking down that path, and maybe you've already implemented a DLP or turned on CASB in something like O365, of course make sure that you're assessing it, testing it, that it's doing what it's supposed to do. A prime example of that: we did see that when a user attempted to send sensitive information as any form of email attachment, it was getting blocked, and that's why we say we do see indications that these tools are coming into play. That would be part of the CASB, specifically around email. However, if the user were to scrape the text, and in many cases the text is what is actually the sensitive info, like healthcare information or business confidential or privileged information, and place it in the body of an email, it was not blocked. It was received by the intended recipient. So that is an indication that these tools are beginning to be used, which is good, but that they're not necessarily being tuned for the methods that threat actors will use, which is bad. So again, none of this is indicating that this is a hopeless situation in any way, but it is indicating that perhaps these tools are not being tested during initial tuning, or not being tuned over time.
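The gap Mike describes, scanning attachments but not the message body, can be illustrated with a toy content filter. This is a hypothetical sketch, not any vendor's DLP engine: the point is simply that the same sensitive-data patterns must be applied to the body and to every decoded attachment, or text pasted into the body slips through.

```python
import re

# Hypothetical sensitive-data patterns for illustration only. A real DLP
# policy would use the vendor's classifiers (healthcare identifiers,
# document fingerprints, exact-data matches, and so on).
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-shaped strings
]

def message_violates_policy(body: str, attachments: list[str]) -> bool:
    """Scan the message body AND every decoded attachment with the same
    patterns. Checking only the attachments reproduces the bypass
    described above, where pasting the text into the body gets through."""
    for part in [body, *attachments]:
        if any(p.search(part) for p in SENSITIVE_PATTERNS):
            return True
    return False
```

Treating the body and attachments uniformly is the design point; everything else (pattern quality, decoding of file formats, tuning over time) is where the ongoing assessment Mike recommends comes in.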

Dave Bittner: That's Mike DeNapoli, from Cymulate.

Dave Bittner: And joining me once again, is Ben Yelin. He is from the University of Maryland Center for Health and Homeland Security, and also my co-host over on the Caveat Podcast. Ben, welcome back.

Ben Yelin: Good to be with you, Dave.

Dave Bittner: Article here from CNN by Brian Fung, titled "Supreme Court shields Twitter from liability for terror-related content and leaves Section 230 untouched." This story has been getting a lot of attention here, Ben. What's going on?

Ben Yelin: Yeah, so this is something you and I have been following for a long time. There were these twin cases at the Supreme Court, one targeting Google as the parent company of YouTube and one targeting Twitter. Both of them concerned acts of terrorism, and these companies were being accused of aiding and abetting terrorism through their algorithms, or through the way that they directed users to different videos. And there was a fear in Silicon Valley that these cases would be the vehicle to significantly curtail the power of Section 230 of the Communications Decency Act, which is a shield against liability for many of the most prominent platforms if they are just acting as platforms where people can post content. There were some very contentious exchanges during oral arguments in both of these cases, where I think the justices in good faith were trying to work through the complications of this issue. What counts as simply being a platform where some other user's content is being posted --

Dave Bittner: Right.

Ben Yelin: -- and what counts as being the creator of content by simply having these types of algorithms? Is there something unique about these algorithms? Is Google speaking in a legal sense when they direct people who have watched ISIS videos to other ISIS videos? Is that action aiding and abetting acts of terrorism? And I think the Supreme Court was pretty divided on this particular issue, and I think that's reflected in what happened. So in the case that had more to do with the anti-terrorism statute (that was the Twitter case), the Supreme Court in a unanimous decision basically said that these tech companies' actions aren't aiding and abetting terrorism according to the definition in that anti-terrorism statute. They're not taking the sufficient type of proactive measure that would provide material support for terrorists. By simply being the conduit to information and having these algorithms directing people to different videos, that cannot be considered aiding and abetting. The Gonzalez v. Google case was set to be the case that really could have cut against Section 230. I think there was some speculation prior to oral arguments that this could be the case where the Court reconsiders how broad that Section 230 liability shield is. And what the Court did in that case is basically just punt it back down to the lower court, saying: we're not going to resolve the Section 230 issue, because we want the lower court to weigh the substantive issues of aiding and abetting terrorism in light of the decision in the Twitter case. So it's just not ripe for review for Section 230, and the allegations in this case about how closely Google was connected with this act of terrorism, which was the 2015 terrorist attacks in France, are so tenuous that it just is not the right vehicle for the type of decision that would reverse or curtail the powers of Section 230. So this was an unusual per curiam opinion, in which no justice actually signed on to it.
The per curiam opinion, which means the opinion of the court as a whole, basically said: we're sending this back to the lower court without any discussion of the relevant Section 230 issues. So where that leaves us is that Section 230 is intact. The online platforms are thrilled that they're not going to be exposed to liability; there's not going to be a million lawsuits just because, every time somebody searches something untoward on YouTube or Twitter, they are directed through the algorithm to similarly dangerous content. So I think that's definitely a sigh of relief for Silicon Valley. There are critics of Section 230 and the extent of its liability shield.

Dave Bittner: Right.

Ben Yelin: There are certainly critics in Congress, and they're going to have to sort of regroup. They didn't get the decision that they were looking for in this case, so they have to go back to the drawing board legislatively and figure out a way either to amend Section 230 to increase the potential liability for some of these companies if they are contributing to acts of terrorism or other activity, or to allow lawsuits against these companies in the event that there have been allegations of political bias. So there is that congressional route, where we might see Congress try to engage in some type of legislative action, and there might be some future Supreme Court case, whether it's this Gonzalez case once it's been remanded, or whether it's a different case where there's more of an opportunity, based on the facts of the case, to decide the Section 230 issue on the merits. And so we're just going to have to wait to see if that case presents itself, so no resolution right now. The status quo is good for Silicon Valley, so they're happy.

Dave Bittner: This case or these cases going the way that they did, does that at all affect the likelihood of future cases making it all the way to the Supreme Court?

Ben Yelin: I don't think it necessarily does. I think if there's some type of case where the facts are very contingent on the particular Section 230 issues, and there have been proper allegations on the substantive law, then you would have a much better chance of seeing this actually litigated. The Supreme Court is reticent to decide major legal questions unless it's absolutely forced to. It's kind of this constitutional canon of avoidance: if they don't have to step in and decide something, then generally (with very notable exceptions, which I will not get into) it is their practice to avoid weighing in on those issues, and I think that's what happened here. This just wasn't the case that was going to be decided by a complex analysis of Section 230 and exactly how it was going to apply. I think, with a different set of facts, it could have been very different, and we would have gotten that decision. And now I think the justices, based on the oral arguments and reading probably hundreds of different briefs, are more familiar with some of the Section 230 issues that have been raised, and might be more likely in the future to grant certiorari if a Section 230 case comes their way.

Dave Bittner: All right. Interesting stuff. Ben Yelin, thanks so much for joining us.

Ben Yelin: Thank you.

Dave Bittner: And that's the CyberWire. For links to all of today's stories, check out our daily briefing at We'd love to know what you think of this podcast. You can email us at Your feedback helps us ensure we're delivering the information and insights that help keep you a step ahead in the rapidly changing world of cybersecurity. We're privileged that N2K and podcasts like the CyberWire are part of the daily intelligence routine of many of the most influential leaders and operators in the public and private sector, as well as the critical security teams supporting the Fortune 500 and many of the world's preeminent intelligence and law enforcement agencies. N2K's strategic workforce intelligence optimizes the value of your biggest investment, your people. We make you smarter about your team while making your team smarter. Learn more at This episode was produced by Liz Irvin and Senior Producer Jennifer Eiben. Our mixer is Tre Hester, with original music by Elliott Peltzman. The show was written by Rachel Gelfand. Our Executive Editor is Peter Kilpe, and I'm Dave Bittner. Thanks for listening. We'll see you back here tomorrow.