The CyberWire Daily Podcast 6.17.22
Ep 1602 | 6.17.22

Malibot info stealer is no coin miner. "Hermit" spyware. Fabricated evidence in Indian computers. FBI takes down botnet. Assange extradition update. Putting the Service into service learning.

Transcript

Dave Bittner: Malibot is an info stealer masquerading as a coin miner. Hermit spyware is being used by nation-state security services. Fabricated evidence is planted in Indian computers. The U.S. takes down a criminal botnet. The British home secretary signs the Assange extradition order. We wind up our series of RSA conference interviews with David London from the Chertoff Group and Hugh Njemanze from Anomali. And putting the service into service learning.

Dave Bittner: From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Friday, June 17, 2022. 

Malibot: an info stealer masquerading as a coin miner.

Dave Bittner: Researchers at F5 Labs describe Malibot, an Android malware family capable of exfiltrating personal and financial information, SecurityWeek reports. F5 says the malware can often be found posing on fraudulent websites as the popular cryptocurrency mining app The Crypto App, but it may also pose as a Chrome browser or other applications. The malware's capabilities include support for web injections and overlay attacks, the ability to run and delete applications, and the ability to steal cookies, multi-factor authentication codes, text messages, and more. Malibot was found to abuse the Android Accessibility API, which permits it to perform actions without user interaction and maintain itself on the system. The use of the Accessibility API also allows it to bypass Google two-factor authentication, as prompts can be validated through the infected device. Malibot uses the same servers used to distribute Sality malware and shares a Russian IP address with other malicious campaigns. The primary targets of Malibot have so far been customers of Spanish and Italian banks, but the malware could soon expand its geographical reach. 

"Hermit" spyware used by nation-state security services.

Dave Bittner: Researchers at Lookout have discovered a sophisticated Android spyware family, Hermit, that appears to have been created to serve nation-state customers. The spyware, currently in use by Kazakhstan's government against domestic targets, has also been associated with Italian authorities in 2019 and, at other times, with an unknown actor in Syria's Kurdish region. The researchers believe the Android spyware is being distributed through text messages that claim to come from legitimate sources, and they note that while an iOS version of the spyware exists, they were unable to obtain a sample. The Android spyware reportedly supports 25 modules, 16 of which the researchers were able to analyze. Many of the modules collect different forms of data, such as call logs, browser data, photos, and location, while others can exploit rooted devices and make and redirect calls. Lookout security researcher Paul Shunk explained to SecurityWeek that the initial application is a framework with minimal surveillance capability, but it can fetch and activate modules as needed, which allows the application to fly under the radar during the security vetting process. 

Fabricated evidence planted in Indian computers.

Dave Bittner: Citing updated research by SentinelOne, WIRED reports that police in Pune, India, planted incriminating evidence in the computers of journalists, activists and academics - evidence that was subsequently used to justify their arrest. According to WIRED, SentinelOne has connected the evidence planting to activity it reported in its February 2022 study of the ModifiedElephant APT. The report said, the objective of ModifiedElephant is long-term surveillance that at times concludes with the delivery of evidence - files that incriminate the target in specific crimes - prior to conveniently coordinated arrests. 

US takes down criminal botnet.

Dave Bittner: The U.S. attorney for the Southern District of California has announced the takedown of a Russian cyber gang's botnet. Working with partners in Germany, the Netherlands, and the United Kingdom, the U.S. FBI seized RSOCKS, a criminal-to-criminal service that offered access to bots as proxies in the C2C underworld market. The U.S. attorney explained, once purchased, the customer could download a list of IP addresses and ports associated with one or more of the botnet's backend servers. The customer could then route malicious internet traffic through the compromised victim devices to mask or hide the true source of the traffic. It is believed that the users of this type of proxy service were conducting large-scale attacks against authentication services, also known as credential stuffing, and anonymizing themselves when accessing compromised social media accounts or sending malicious emails, such as phishing messages. It cost RSOCKS' criminal clientele between $30 and $200 a day to route their traffic through the proxies. 
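
The credential stuffing described above is easier to picture with a small sketch. What follows is a minimal, hypothetical Python illustration of the simplest defense, rate-limiting by source IP: it counts failed logins per address over a sliding window and flags sources that cross a threshold. None of it comes from the RSOCKS case, and the window and threshold values are arbitrary assumptions.

```python
# Hypothetical sketch: flag possible credential stuffing by counting failed
# logins per source IP inside a sliding time window. Values are arbitrary.
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)   # assumed look-back window
THRESHOLD = 50                   # assumed failed logins per IP before alerting

failures = defaultdict(deque)    # source IP -> timestamps of failed logins

def record_failed_login(src_ip: str, when: datetime) -> bool:
    """Record a failed login and return True if the source looks suspicious."""
    attempts = failures[src_ip]
    attempts.append(when)
    # Drop attempts that have aged out of the window.
    while attempts and when - attempts[0] > WINDOW:
        attempts.popleft()
    return len(attempts) >= THRESHOLD

# Example with synthetic events from a single source:
start = datetime.now()
alerted = any(
    record_failed_login("203.0.113.7", start + timedelta(seconds=i))
    for i in range(60)
)
print("alert" if alerted else "ok")
```

The sketch also suggests why a proxy network like RSOCKS was worth paying for: spreading the same attempts across thousands of compromised devices keeps every individual source below a per-IP threshold like this one.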

British Home Secretary signs Assange extradition order.

Dave Bittner: The Telegraph reports that British Home Secretary Priti Patel today signed an order extraditing WikiLeaks impresario Julian Assange to the United States, where he faces espionage charges. Mr. Assange's legal team intends to appeal the decision. 

Putting the Service into service learning.

Dave Bittner: And finally, there's a spy story out of The Hague. The Netherlands General Intelligence and Security Service announced yesterday that they'd stopped a Russian GRU illegal from taking a position, an internship, with the International Criminal Court in The Hague. AIVD gave a brief account of the legend the illegal had created as part of his cover. They say the Russian intelligence officer purported to be Brazilian citizen Viktor Muller Ferreira, born April 4, 1989, when, in fact, his real name is Sergey Vladimirovich Cherkasov, born in September of 1985. Cherkasov used a well-constructed cover identity by which he concealed all his ties with Russia in general and the GRU in particular. AIVD has published documents giving more details on the legend. Some of them seem to have been written by Mr. Cherkasov himself. 

Dave Bittner: They tell a touching story of the mildly hardscrabble life Senhor Ferreira led growing up in Brazil - a little bit global south, a bit of Horatio Alger - supplying what Gilbert and Sullivan would have called merely corroborative detail intended to give artistic verisimilitude to an otherwise bald and unconvincing narrative. He was pained when the other kids called him gringo because they thought he looked German, for example. Gringo is typically reserved for Anglophones, but then English is a Germanic language, so maybe close enough for the playground. Another fun fact - Senhor Ferreira reminds himself that he likes clubbing only where they play trance music, a detail we confess would have screamed GRU hood to us. But then we don't get around much anymore. 

Dave Bittner: The Washington Post notes that Mr. Cherkasov passed himself through the Johns Hopkins University School of Advanced International Studies between 2018 and 2020, earning a master's degree with a specialization in American foreign policy. The Post also reports the general consensus as to why the GRU wanted to place him in the International Criminal Court. They're interested in intelligence about war crimes investigations Russia faces in the ICC. Setting up an illegal with an elaborate, plausible legend is expensive and time-consuming. And that the GRU thought this worthwhile suggests that they consider the ICC a target worth infiltrating. Russian Foreign Minister Lavrov may protest that war crimes stories are just Western fake news and Ukrainian provocations, but the GRU knows better. 

Dave Bittner: The Johns Hopkins professor who wrote him a letter of recommendation is commendably frank about how he was gulled. He said, after the graduation, he asked for a reference letter for the ICC. Given my research focus, it made sense. I wrote him a letter, a strong one, in fact. Yes, me. I wrote a reference letter for a GRU officer. I will never get over this fact. I hate everything about GRU, him, this story. I'm so glad he was exposed. The professor shouldn't feel too bad. It's not his job to be a counterintelligence officer, after all. And illegals have fooled the best. 

Dave Bittner: Congratulations to AIVD for smoking this one out. The Dutch authorities sent him back to Brazil, by the way, which seems a nice, literal-minded, ironic touch. Let the Aquarium pay for Mr. Cherkasov's passage home, that is, if they want him back. 

Dave Bittner: David London is managing director of the cybersecurity practice at the Chertoff Group. And at last week's RSA conference, he and I got together to discuss some of the trends he and his colleagues are tracking. 

David London: Within the government, there's often, you know, this kind of expectation around compliance. But we are finding that, whether it's around the weaponization of supply chain, heightened expectations around, you know, reporting requirements and kind of oversight, both from DHS as well as SEC, we know that our commercial clients are going to have to, you know, comply and establish programs that enable, you know, an alignment to those expectations. And so being here together, trying to talk the same language has been, I think, you know, of importance to everyone and something that, despite our ability to kind of prove that we can work productively by Zoom, allows, I think, a lot more face-to-face kind of communication and rapport-building and trust-building. 

Dave Bittner: The conversations that you're having, particularly on the policy side - what direction are you seeing things going there? 

David London: So I kind of alluded to some of these heightened expectations. And we work a lot on - particularly with our partner Synopsys, one of the leading providers of software supply chain security, on, you know, the issue of the weaponization of the supply chain. And so the U.S. government has really doubled down on software supply chain visibility, transparency and security through the executive order. We're seeing that kind of promulgated through NIST and through other organizations of defining critical software, establishing kind of core software supply chain best practices and kind of testing and validation. And so organizations are struggling with that. You know, they're struggling with the level of technical debt and how to wrap their arms around both their own custom code and open-source code, which 80- to 90% of all kind of code bases are based on. And you look no further than, obviously, SolarWinds, but also some of the more recent, you know, malwareless cyber extortion attempts by groups like Lapsus$ who have taken source code from some of the largest technology providers like Samsung, Nvidia, and posted it onto the web, giving kind of breadcrumbs to adversaries to identify, you know, flaws, vulnerabilities that can then be used and have much broader blast radiuses. 

Dave Bittner: The customers that you're talking to, what sort of conversations are you having when it comes to being prepared on the regulatory compliance side? 

David London: So one of the things that we do a lot of is cyber crisis exercises and wargaming. And because we don't think looking at sort of compliance or kind of, you know, regulatory expectations in a vacuum as particularly helpful - we all know compliance does not equal security. And so as we kind of frame these issues around cyber crisis incident response planning and management, some of those regulatory expectations come into play. But they don't - they're not viewed, you know, outside of overall cybersecurity risk management best practices. So we're seeing a lot of demand signals for that, particularly with what's happening with the Russia-Ukraine aggression. 

Dave Bittner: Yeah. 

David London: Where we originally saw somewhat mild kind of Russian activity, what we're seeing today is - and Microsoft wrote a blog and a report on this - is kind of a broader focus around hybrid warfare, where you have kinetic attacks that are coupled by disruptive cyberattacks. And thus far, that's been relatively isolated to sort of the Ukraine and Russian domain. 

Dave Bittner: Right. 

David London: But, you know, given Russia's history, given its nation-state capabilities and tradecraft, we expect that there will be retaliatory attacks. And so our clients, our organizations are very focused on that and looking at kind of black sky, dark sky events that involve Russia or other nation-states. How do we respond? How do we identify an attack? How do we fulfill our kind of legal and regulatory obligations? But I think more importantly, to our clients, particularly those in critical infrastructure, how do we continue to have steady state operations? How do we build resilience into our program while also complying with, you know, regulatory expectations? But how are we achieving a level of execution to our customers and our clients? 

Dave Bittner: Is there overall a sense of optimism that we're in a good place in terms of meeting those goals? 

David London: I don't think you would talk to any cybersecurity expert - I think there is a level of, the work is never done. 

Dave Bittner: Yeah. 

David London: I do believe that there is a higher level of optimism with the level of sort of coordination and information sharing that is occurring with government. You know, I've worked in cyber exercises in wargaming for about a decade and a half. And the constant complaint was, we give you all the information - we, private sector - and we don't get anything back. We don't get anything enriched back. And so the latest law passed by Biden in March - this kind of CISA guidance on incident reporting within 72 hours, material incident reporting ransomware events within 24 hours - they are, the private sector is expected to provide information, including threat tactics and behaviors. But in return, the U.S. government will be providing either, you know, on the classified side, where possible, or sanitized information on not just thank you very much, but here are some very specific ways you can protect your environment based on the threat activity and the sightings that we are observing within the environment. And so I do think there's optimism there. 

David London: But I also think, given the weaponization of the supply chain, you know, the nation-state capabilities, the merging of nation-state and financially motivated capabilities, has created just additional headwinds among, you know, our clients. And there's also resourcing, you know, concern. And so particularly where you see some contraction in the economy, particularly in the tech sector, which we have observed to have significantly growing security programs given the risk exposure, they'll have to balance that resourcing of their cybersecurity priorities against the priorities of their broader enterprise. And, you know, we work a lot with organizations on building risk registers. And I think it kind of puts in sharp relief the importance of building a kind of repeatable cyber risk management program where you can take a risk, and you can begin to quantify it based on your overall inherent risk to the organization, the level of kind of countermeasures and residual risk you achieve and the impact of that attack so that you can have some way of triaging all the many and growing risks within your organization and kind of prioritize that resourcing. 
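
One simple way to picture the risk-register arithmetic London describes is the sketch below. It is not the Chertoff Group's methodology; the scoring model (residual risk as inherent expected loss reduced by an assumed control-effectiveness factor) and all of the figures are illustrative assumptions.

```python
# Illustrative sketch only (not the Chertoff Group's methodology): a tiny risk
# register that scores inherent risk as likelihood x impact, reduces it by an
# assumed control-effectiveness factor, then sorts by residual risk for triage.
risks = [
    # name,                      likelihood (0-1), impact ($),  control effectiveness (0-1)
    ("Ransomware via phishing",         0.30,       5_000_000,  0.60),
    ("Supply chain compromise",         0.10,      20_000_000,  0.40),
    ("Credential stuffing",             0.50,       1_000_000,  0.70),
]

register = []
for name, likelihood, impact, effectiveness in risks:
    inherent = likelihood * impact                 # expected loss before controls
    residual = inherent * (1 - effectiveness)      # expected loss after controls
    register.append((name, inherent, residual))

# Highest residual risk first: where additional resourcing buys the most.
for name, inherent, residual in sorted(register, key=lambda r: r[2], reverse=True):
    print(f"{name:28s} inherent=${inherent:>12,.0f} residual=${residual:>12,.0f}")
```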

Dave Bittner: David London, thanks for joining us. 

Dave Bittner: There's a lot more to this conversation. If you want to hear more, head on over to CyberWire Pro and sign up for interview selects, where you get access to this and many more extended interviews. 

Dave Bittner: Hugh Njemanze is president and CEO of security firm Anomali. At RSAC, he presented on the increasing and changing usage of intelligence to improve security. I caught up with Hugh Njemanze for an overview of his presentation. 

Hugh Njemanze: So basically, in the old days, people generated logs of everything. And those logs are all about events that are happening on your system. Eventually, it became interesting to look at the originators of activities and start assigning, if you will, reputations to them. So in other words, these IP addresses correspond to a bad actor. Maybe they use 20 different addresses that are associated with them. Over time, that evolves. Some of those become obsolete. Some new ones come into play. And so collecting that kind of information from researching activities that had already happened in the past became a thing, started to be referred to as threat intelligence. And if you compare that to the real world, it's like when you have break-ins in a house. The burglar alarm can see the window break. But if somebody gets arrested, you can start building track record. They go after safes. They go after TVs. They tend to go in this neighborhood. And so then a neighborhood watch can start to say, we saw a suspicious character that's a known malicious entity in your neighborhood. 

Dave Bittner: It matches the track record that we've seen with previous things. 

Hugh Njemanze: Exactly. So that's really what threat intelligence is about, is a neighborhood watch. It's leveraging the fact that we know somebody did something in the past, so we can infer what they're going to do. It's kind of like if a terrorist tries to get on a plane without a knife, how do you know they're a terrorist? But if you have a watchlist, then you can look at their ID and say, well, this is a known person with a record of behavior. Otherwise, you can only stop the guys that pack guns and knives in their roller boards to alert you. 

Dave Bittner: Right. Right. So what is considered the state of the art these days when it comes to threat intelligence? 

Hugh Njemanze: Well, there's different aspects to threat intelligence. So one is, how do you collect it? How do you research it? And then there's, how do you use it? And where the state of the art has been evolving most rapidly recently is applications of threat intelligence. So in other words, what can you do with it, right? So - and a lot of the changes actually come from organizations, typically large enterprise, that are finding new applications of threat intelligence on a regular basis and then sort of feeding that back into the community. And for example, at Anomali we learn a lot from what people are doing with the threat intel that they're acquiring. Now, threat intelligence can come from commercial firms that specialize in doing research and collecting intelligence. It can come from communities that do it as a service to their peer groups. It can come from things like ISACs, which are information-sharing communities... 

Dave Bittner: Right. 

Hugh Njemanze: ...That share the threat intel feeds, research feeds with their membership. 

Dave Bittner: So as an organization, are those traditional things you mentioned earlier - you know, like your logs - are those all being fed into the collection of information that's then used to form better threat intelligence? 

Hugh Njemanze: That's a great question. So those are actually two complementary sets of data. So it's kind of like if you have a phone book, that's all the people in the phone book, right? 

Dave Bittner: Right. 

Hugh Njemanze: And then when they do something - when they drive through a tollbooth, when they go through airport security - you compare them to that phone book, but you also look at maybe their IDs, their job description, etc. And so the alerts and events are what's being done on the network, the threat intel is a list of who's who on the network, and what you have to do is marry those two sets of data. So the event activity is something that's happening continuously on a daily basis. A large organization could be collecting more than a billion logs per day - maybe several billion logs per day. On the threat intelligence side, that's like the little phone book or the TSA No-Fly page. 

Dave Bittner: Right (laughter). 

Hugh Njemanze: So... 

Dave Bittner: Right. 

Hugh Njemanze: ...What you have to do is compare that list to the billions of events that are happening every day and look for matches. And so that's actually - the more threat intelligence expands, the harder it gets to do that. So 10 years ago, we were looking at maybe 100,000 active indicators that people knew about. 

Dave Bittner: OK. 

Hugh Njemanze: And a year later, it was a million. A year later, it was 10 million. Then it was 100 million. Now, we have probably the largest single repository of threat intel in the world, and it's like 5 billion threat indicators. So it's a multiplication problem because if you have a billion events and a thousand indicators, you have to make a trillion comparisons. 

Dave Bittner: Right. 

Hugh Njemanze: But if you have a billion indicators and a billion events, it's just mind-boggling. And so that's the scope of the challenge today - is doing what's known in database worlds as a join between all the activity and all the known actors. 
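
To make the scale of that join concrete, here is a minimal Python sketch - not Anomali's implementation - of the basic idea: load the indicator list into a hash set so that each event costs one constant-time lookup instead of a comparison against every indicator, which is what turns the billion-by-billion multiplication problem into a single pass over the events. The indicator values and event records below are made up.

```python
# Illustrative sketch only: matching event logs against threat indicators.
# A naive approach compares every event to every indicator (events x indicators
# comparisons); loading indicators into a set makes each event one
# constant-time membership test.

# Hypothetical indicator feed (in practice: billions of IPs, domains, hashes).
indicators = {"198.51.100.23", "evil.example.com", "203.0.113.99"}

# Hypothetical event stream (in practice: billions of log records per day).
events = [
    {"src_ip": "10.0.0.5", "dst": "198.51.100.23"},
    {"src_ip": "10.0.0.8", "dst": "intranet.local"},
    {"src_ip": "10.0.0.9", "dst": "evil.example.com"},
]

matches = [e for e in events if e["dst"] in indicators or e["src_ip"] in indicators]
for m in matches:
    print("indicator match:", m)
```

At real volumes this join is sharded and streamed rather than done in memory, but the principle is the same: index the smaller "who's who" list once, then scan the event firehose against it.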

Dave Bittner: What is the spectrum of ways that people consume threat intelligence? Like, I would imagine, you know, different sizes, different types of organizations integrate it in different ways. 

Hugh Njemanze: Absolutely. So first of all, there's a variety of tools in a security operations center. There's SIEMs. SIEMs have, I would say, the largest appetite for threat intel. So in other words, because SIEMs are receiving activity log events from all around your network - from your switches, your routers, your hosts - in addition to security tools, like firewalls, IDS and so forth, and so that's where people typically, if they're large enough to have a SIEM and a SOC, they typically direct threat intel to the SIEM. That's where the bulk of the activity happens. 

Dave Bittner: I see. 

Hugh Njemanze: But then they'll also send some to the firewall. But firewalls are designed to have block lists, and those lists are measured in thousands, not millions and billions. So what people do is they filter it down to a small set of intelligence that they think is relevant to the firewall - send that to the firewall. They send a bigger subset to the SIEM. 
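
Since firewall block lists hold thousands of entries rather than millions, the filtering Njemanze describes amounts to ranking indicators and truncating the list. A hypothetical sketch follows; the field names, confidence scale, age cutoff, and capacity cap are all assumptions, not Anomali's schema.

```python
# Hypothetical sketch of trimming a large indicator set down to a
# firewall-sized block list. Field names and limits are assumptions.
from datetime import datetime, timedelta, timezone

FIREWALL_CAPACITY = 5000          # assumed maximum entries the firewall accepts
MAX_AGE = timedelta(days=30)      # assumed freshness cutoff

def build_blocklist(indicators, now=None):
    """Keep only recent, high-confidence network indicators, best-scored first."""
    now = now or datetime.now(timezone.utc)
    candidates = [
        i for i in indicators
        if i["type"] in ("ip", "domain")           # only types a firewall can block
        and i["confidence"] >= 80                  # assumed 0-100 confidence scale
        and now - i["last_seen"] <= MAX_AGE        # drop stale indicators
    ]
    candidates.sort(key=lambda i: i["confidence"], reverse=True)
    return [i["value"] for i in candidates[:FIREWALL_CAPACITY]]

# Example with two synthetic records:
feed = [
    {"value": "198.51.100.23", "type": "ip", "confidence": 95,
     "last_seen": datetime.now(timezone.utc)},
    {"value": "old.example.net", "type": "domain", "confidence": 90,
     "last_seen": datetime.now(timezone.utc) - timedelta(days=120)},
]
print(build_blocklist(feed))   # only the fresh, high-confidence IP survives
```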

Dave Bittner: That's Hugh Njemanze from Anomali. 

Dave Bittner: And that's the CyberWire. For links to all of today's stories, check out our daily briefing at thecyberwire.com. Be sure to check out this weekend's "Research Saturday" and my conversation with ExtraHop's Edward Wu. We're discussing a technical analysis of how Spring4Shell works. That's "Research Saturday." Check it out. The CyberWire podcast is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. Our amazing CyberWire team is Rachel Gelfand, Liz Irvin, Elliott Peltzman, Tre Hester, Brandon Karpf, Eliana White, Puru Prakash, Justin Sabie, Tim Nodar, Joe Carrigan, Carole Theriault, Ben Yelin, Nick Veliky, Gina Johnson, Bennett Moe, Chris Russell, John Petrik, Jennifer Eiben, Rick Howard, Peter Kilpe, and I'm Dave Bittner. Thanks for listening. We'll see you back here next week.