Volt Typhoon goes undetected by living off the land. New gang, old ransomware. KillNet says no to slacker hackers.
Dave Bittner: China's Volt Typhoon snoops into US infrastructure, with special attention paid to Guam. Iranian cybercriminals are seen conducting ops against Israeli targets. A new gang uses recycled ransomware. A persistent Brazilian campaign targets Portuguese financial institutions. A new botnet targets the gaming industry. Phishing attempts impersonate OpenAI. Pro-Russian geolocation graffiti. Andrea Little Limbago from Interos addresses the policy implications of ChatGPT. Our guest is Jon Check from Raytheon Intelligence & Space, on cybersecurity and workforce strategy for the space community. And KillNet says no to slacker hackers.
Dave Bittner: I'm Dave Bittner with your CyberWire Intel briefing for Thursday, May 25th, 2023.
China's Volt Typhoon snoops into US infrastructure, with special attention to Guam.
Dave Bittner: A joint advisory from all Five Eyes reports a major Chinese cyberespionage operation that has succeeded in penetrating a wide range of US critical infrastructure sectors. Microsoft, in its own report on Volt Typhoon, as the threat actor is being called, says the group has been active since at least the middle of 2021. The targets of the spying have spanned a slew of sectors, including communications, manufacturing, transportation, government, IT, and education, among others. Microsoft writes that the threat actor intends to lie low and conduct cyberespionage for as long as it can. It does this, the Five Eyes stress, by carefully living off the land, exploiting existing legitimate administrative tools and privileges in its targets.
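Since the advisory's point is that these intrusions blend in with legitimate administration rather than dropping custom malware, hunting for them often starts with simply auditing how built-in admin tools get used. What follows is a minimal, hypothetical Python sketch of that idea; the CSV log format, field names, and the small watchlist of binaries are illustrative assumptions, not indicators taken from the advisory.

# Hypothetical sketch: flag process-creation events that invoke native Windows
# admin tools often abused in living-off-the-land tradecraft. The watchlist,
# input format, and field names are illustrative assumptions.
import csv
from collections import Counter

# Built-in binaries that tend to blend in with routine administration.
LOTL_WATCHLIST = {"wmic.exe", "ntdsutil.exe", "netsh.exe", "powershell.exe"}

def suspicious_events(log_path):
    # Yield process-creation rows whose executable name is on the watchlist.
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            image = row.get("image", "").rsplit("\\", 1)[-1].lower()
            if image in LOTL_WATCHLIST:
                yield row

if __name__ == "__main__":
    hits = list(suspicious_events("process_creation.csv"))
    by_host = Counter(row.get("host", "unknown") for row in hits)
    for host, count in by_host.most_common():
        print(f"{host}: {count} watchlisted admin-tool executions to review")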
Dave Bittner: Much of Volt Typhoon's activity has been directed against Guam, a US territory in the western Pacific that plays host to important US military bases. Those bases would be important to any US intervention on behalf of Taiwan should China decide to take a page from Russia's geopolitical playbook and invade what it regards as a renegade province. For its part, China dismisses the reports as American disinformation and denies its involvement in any activity the Five Eyes and Microsoft associate with Volt Typhoon.
Iranian cyber ops against Israeli targets.
Dave Bittner: Two Iranian threat actors have been observed targeting Israeli organizations. The first, Agrius, has been conducting ransomware attacks against Israeli entities, Check Point reports. What appear to be destructive ransomware attacks are actually masking influence operations, the researchers suggest. The APT group, now calling both itself and its newest ransomware strain "Moneybird," has been seen deploying its new C++ ransomware in recent attacks. While the researchers did not elaborate on which organizations were victimized, the Record writes that the techniques reflect those of Agrius. Public-facing web servers were the initial point of compromise, which, once breached, allowed for reconnaissance and data theft as the hackers moved laterally within networks.
Dave Bittner: Information Security Buzz reports that another Iranian threat group is attacking Israeli shipping and logistics companies to lift customers' data. Israeli cyber firm ClearSky says with low confidence that this may be the work of Tortoiseshell, also known as TA456 and Imperial Kitten. At least eight websites were targeted in the campaign, including SNY Cargo, logistics company Depolog, and restaurant equipment supplier SZM. Al-Monitor says what the firm calls a watering hole attack, one that infects a website frequented by a specific group, has also victimized some organizations in the financial services industry. As of mid-April, the majority of the websites had been purged of the malicious code.
Blacktail, a new ransomware group using recycled ransomware.
Dave Bittner: A new ransomware operation calling itself Buhti has been discovered by researchers at Symantec. The tool uses variants of Lockbit and Babuk ransomware, as well as a custom infostealer which is able to search for and archive specific file types. The researchers were unable to attribute this new campaign, which has been found to target both Linux and Windows machines, to any known threat actors, and so have dubbed the associated group Blacktail.
Operation Magalenha, a Brazilian persistent campaign targeting Portuguese financial institutions.
Dave Bittner: SentinelLabs released a report today regarding a campaign that they've observed targeting over 30 Portuguese financial institutions. Researchers assess with high confidence that the campaign, which they call Operation Magalenha, is being conducted by a Brazilian threat group. SentinelLabs writes that this conclusion is further supported by the presence of Brazilian Portuguese language usage within the infrastructure and malware.
Dave Bittner: The threat group's infrastructure shows features that differentiate it from other campaigns. One unique aspect was the existence of two simultaneous PeepingTitle variants on the same infected machine. The operation also uses the Russian infrastructure-as-a-service provider Timeweb Cloud, which researchers say is known for its lenient anti-abuse policies. The operation relies on multiple infection vectors, such as phishing emails, malicious websites advertising fake installers of popular software, and social engineering.
Botnet targets gaming industry.
Dave Bittner: Akamai detailed the activities of a new botnet by the name of Dark Frost that's been observed targeting the gaming industry. The Dark Frost botnet is a conglomeration of code stolen from other botnets, particularly Mirai, Gafgyt, and Qbot. The threat actor seems driven, at least in part, by a need for attention: they've been observed on social media channels not only admitting to their illicit botnet creation and use, but also sharing live recordings of their attacks. The botnet has launched DDoS attacks not only against gaming companies, but also against those that are gaming-adjacent: game server hosting companies, online streamers, and various other members of the community. While the malware is unsophisticated, it's capable of significant damage. With an ever-growing amount of source code from existing malware strains readily available, as well as access to AI code generation, threat actors face a significantly lower bar to entry.
Phishing attempts impersonate OpenAI.
Dave Bittner: INKY has detailed a new phishing attack that impersonates ChatGPT creator OpenAI for credential harvesting. The threat actors are using a multitude of techniques in this brand impersonation phishing attack, including spoofing, dynamic redirection, and malicious links. They falsify an email to appear to be from OpenAI that the researchers say looks nearly identical to the one users receive when they sign up for a new ChatGPT OpenAI account. The hackers spoof the email address to appear to come from the recipient's IT department. They swap out the safe link in the legitimate email for a malicious link that asks for a user's credentials. If credentials are entered, they're stolen.
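A simple way to see how this kind of impersonation can be caught is to compare the brand a message claims to come from with where its links actually point. Here's a minimal, hypothetical Python sketch of that heuristic; the trusted-domain list, sample message, and function names are illustrative assumptions, not INKY's detection logic.

# Hypothetical sketch: flag messages that claim a trusted brand but link to
# domains that don't belong to it. Domain list and sample text are assumptions.
import re
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"openai.com"}  # brands the message claims to represent

def link_hosts(body):
    # Collect the hostname of every http(s) URL found in the message body.
    return {urlparse(url).hostname or "" for url in re.findall(r"https?://\S+", body)}

def looks_like_impersonation(claimed_domain, body):
    # Flag the message if it claims a trusted brand but any link points elsewhere.
    if claimed_domain not in TRUSTED_DOMAINS:
        return False
    return any(
        host and host != claimed_domain and not host.endswith("." + claimed_domain)
        for host in link_hosts(body)
    )

sample = "Verify your new account: https://accounts.openai.example-login.top/verify"
print(looks_like_impersonation("openai.com", sample))  # True: link domain mismatch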
Geolocation graffiti.
Dave Bittner: The UK's Ministry of Defence this morning pointed out a geolocation spoofing stunt. They wrote that, "Analysis by Geollect indicates that since the 14th of May 2023, commercial vessels' Automatic Identification System data has been remotely spoofed to create the impression of a 65-kilometer-long Russian prowar Z symbol on the Black Sea, visible on open source tracking software." The tracks reportedly show the vessels' speeds as upward of a rather implausible 102 knots, or just under 120 miles per hour, adding further evidence that the reports were fake. Spoofing AIS, which the Defence Ministry says is used to track vessels and ensure their safety, increases maritime accident risk. The Ministry assesses that pro-Russian actors likely conducted the spoofing as an information operation, potentially in an attempt to bolster Russian morale ahead of an anticipated Ukrainian counteroffensive.
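A back-of-the-envelope check shows why the 102-knot figure gives the game away: two successive AIS position reports imply a speed, and anything far beyond a realistic hull speed points to fabricated data. Below is a small, hypothetical Python sketch of that check; the sample coordinates and the 40-knot plausibility threshold are illustrative assumptions.

# Hypothetical sketch: compute the speed implied by two AIS fixes and flag
# anything implausibly fast. Sample fixes and threshold are assumptions.
import math

def haversine_nm(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in nautical miles.
    r_nm = 3440.065  # mean Earth radius in nautical miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r_nm * math.asin(math.sqrt(a))

def implied_speed_knots(fix_a, fix_b):
    # Speed implied by two (latitude, longitude, epoch_seconds) position reports.
    (lat1, lon1, t1), (lat2, lon2, t2) = fix_a, fix_b
    hours = abs(t2 - t1) / 3600
    return haversine_nm(lat1, lon1, lat2, lon2) / hours if hours else float("inf")

# Two fabricated fixes thirty minutes apart in the Black Sea (illustrative).
earlier = (43.00, 34.00, 0)
later = (43.85, 34.00, 1800)
speed = implied_speed_knots(earlier, later)
verdict = "implausible, likely spoofed" if speed > 40 else "plausible"
print(f"Implied speed: {speed:.0f} knots ({verdict})")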
What's up with KillNet?
Dave Bittner: And finally, if you are wondering how things are in the world of cyber auxiliaries, privateers, and general no-good-niks, KillNet's boss and spokesperson KillMilk this week announced that he was firing a bunch of his hacktivists. The Russian outlet Lenta.ru reports that KillNet participants say they're clearing out groups that aren't contributing professionally enough to attacks against the West. So hacktivists, up your game or you're out.
Dave Bittner: Coming up after the break, Andrea Little Limbago from Interos addresses the policy implications of ChatGPT. Our guest is Jon Check from Raytheon Intelligence & Space on cybersecurity and workforce strategy for the space community. Stay with us.
Dave Bittner: Jon Check is Executive Director of Cyber Protection Solutions at Raytheon Intelligence & Space. In collaboration with my N2K colleagues on the T-Minus podcast, I caught up with him at the RSA Conference for insights on cybersecurity and workforce strategy for the space community. Can you give us some insights as to what the situation is on the ground? I mean, I know we talk over on the cybersecurity side about there being skills gaps, about challenges in hiring people. Is it pretty much the same in space?
Jon Check: Yes, I would say in space, the same rules apply, right? There's the skills gap. It's lack of diversity, right, something that we also need to address, because I mean, space is- it has Earth's problems. It's all the same thing.
Dave Bittner: I love it.
Jon Check: Right? Just in a different level of the atmosphere.
Dave Bittner: Right.
Jon Check: Right.
Dave Bittner: Even when we build that moon base, it'll be the same.
Jon Check: Just a little- you know, all the same rules and problems will apply to the moon, I'm sure.
Dave Bittner: So how are you and your colleagues at Raytheon coming at that to try to narrow those gaps?
Jon Check: Well, one of the key things is making sure that there's context. So to solve any problem, you really need to have the people that are deep into cyber, that are here to do all the right things around that, which would be implementing the zero-trust pillars, ensuring that you're doing all the things to secure an environment, but also marrying them up with people with deep space knowledge, people that understand how satellites work, how the communications work between ground stations and those things floating above us, how they talk between them, and eventually put those two contexts together. Ultimately, cyber is a team sport. That requires all players to be engaged and helping each other fill the gaps that they don't have in knowledge, and that's one of the critical learnings we have within Raytheon. We have a part of our business that does offensive cyber, so we've developed something we call Raytheon Offensive Labs, where we teach our defenders to think like an attacker, which means it's a totally different mindset that you approach a problem with. Versus one of the gaps we have in traditional learning, I'll say in traditional colleges and universities, they do great work, but they don't teach offensive cyber programs. You know, that's typically learned by somebody who has an interest in cyber, and they're doing that in a cyber defense competition where they're defending against a red team that's trying to attack their fictitious network supporting a company, or a CTF, or one of those other aspects where you get more of the flavor of it. You know, the greatest thing ever, my most enjoyable experiences in cyber are after you do an exercise like that, and the red teamers are out briefing the teams that they were attacking, and the conversations are the best, because they're like, "Oh, yeah, when you typed in that 100-character password, and it took you 30 seconds to do that, we'd already seen it, cut and pasted it, and we were owning everything you had at that point."
Dave Bittner: Wow.
Jon Check: So it's a great dialogue because that person, they're thinking, "I'm being super secure, because I'm doing a 100-character password-
Dave Bittner: Right.
Jon Check: -and taking the time." Meanwhile, the attacker is like, "Yeah, I could see you doing it the whole time. I was just cut and pasting and putting it where I needed to go next in your network." So it's a great- those are learnings that have to continue, and I think space will- exactly the same rules apply, and that's what we're really focused on is how do we marry the okay, here's what attackers would do in space. What does it look like in a cyber vector and how do we ensure that the defenders understand what that looks like?
Dave Bittner: And a situation like that, to see it be able to be done in a collaborative way, you know, there's an adversarial element to it, but at the end of the day, in that particular case, everybody's on the same team.
Jon Check: 100%, and it really is. I mean, people look forward to that. That's like one of the highlights, because that's when you truly get the learnings. And over time, one of the things we participate in is the National Collegiate Cyber Defense Competition, which is in its 18th year this year, and the maturity level of the teams that come to participate each year from the colleges and universities has greatly improved. Right? They've clearly learned and are way more advanced than they were when the competition started.
Dave Bittner: When it comes to the security of satellites, for example, you know, in a previous life, I worked in the television industry back in the '90s, and I remember talking to my friends in Master Control who were responsible for the uplinks and things like that, and I remember, you know, they would use phrases like "let's light this candle" and things like that, right? But I also remember asking them, how are we ensuring that we're not stepping on each other's signals? What keeps someone from, you know, lighting up an uplink and just stomping on someone else's signal? And the response I got over and over again was, "Well, we're gentlemen. We would not do that." I suspect we're probably not 100% in that mode anymore, with the dependence on satellites that we have now, the global arena, and the adversaries out there. Is that an accurate view from my point of view?
Jon Check: I would say, without a doubt people- I'm not going to say would become complacent but certainly, okay, great, that communication you have from the ground to the satellite is encrypted. Okay, but once that satellite is up there, what are the sensors it has that can receive input?
Dave Bittner: Yeah.
Jon Check: How are other outside entities trying to, you know, breach your security through those other sensors and other vectors, even outside of just what the communications link is? I think, you know, satellite manufacturers have the same challenges that everybody else does. A lot of times when people release products, there are other features, like microphones or RF capabilities, that are turned off, but they're still out there. The capability is still there, so if an attacker knows a feature that's on something, they just upload the driver, start taking advantage of it, and move laterally within that platform.
Dave Bittner: Yeah.
Jon Check: So that's really something you have to think about is it's not just the straightforward attack vectors. When you think like an attacker, okay, what comprises this? What are all the different components? How do I test each component-
Dave Bittner: Right.
Jon Check: -to figure out what is the way I would be compromised? And as a defender, that's exactly the things we need to make sure we're locked down. If you really don't need a certain sensor on a satellite, don't put it up there and shoot it up in space with it on.
Dave Bittner: Well, I was thinking along those lines that, you know, I imagine the conversation when someone walks into their boss's office and says, "Boss, I accidentally bricked the router." You know, that's a different conversation than "Boss, I accidentally bricked the satellite,"-
Jon Check: Right?
Dave Bittner: -Because you can't-
Jon Check: Absolutely.
Dave Bittner: -just swap out, you know, something that's in geosynchronous orbit.
Jon Check: Well, it's interesting, because, from my perspective, I feel like terrestrial-level OT systems and systems in space clearly have to share a lot of the same challenges, right? OT systems, a lot of them have been around for many years. There are a lot of satellites that were launched a long time ago, when cybersecurity wasn't a concern.
Dave Bittner: Right.
Jon Check: So you've got that whole aspect of it. The satellites can't be- they're not- you don't take them down for downtime. Right?
Dave Bittner: Yeah.
Jon Check: You have to swap out parts. And in OT systems, it runs-
Dave Bittner: Right.
Jon Check: -and you can't- you do not mess with it.
Dave Bittner: You're changing the oil while the engine is running.
Jon Check: Right, so there's certain aspects. I mean, obviously, you can get physical access to some OT systems.
Dave Bittner: Well, we don't have a space shuttle anymore.
Jon Check: Yeah, really, if you think about it, there are definitely some similarities. So one of the things I'm more focused on currently is how do you treat some of those same challenges, because, you know, like a smart person once told me, everybody's a unique snowflake, but human behaviors are all the same.
Dave Bittner: Hm.
Jon Check: And with cybersecurity, space is a unique environment, but all the cybersecurity challenges/opportunities exist the same there as they do here on Earth.
Dave Bittner: As you head back after a conference like this, what sort of things are on your mind? Or do you find yourself energized, a little overwhelmed? What are you going to bring back to your team and your colleagues?
Jon Check: Well, I'm a continuous reframer, so I'm a glass-is-almost-always-full type of person. I'm there to solve the challenges that come up. I'm not there to worry about them. That doesn't help anything. So one of the key things I'm going to take away from this conference is making sure the team knows we are making progress, that there are good things that are happening, right? You can be overwhelmed by all the things that aren't good, but there's a lot of goodness that's coming out. There's a lot better collaboration. There's starting to be true information sharing, not just for the purpose of, "Hey, here's my information," but people are taking action related to it. We're getting through the formative stages. We're close to the end of the beginning, to where we can really move on and truly start collaborating, because within cybersecurity, without a doubt, 100%, no one can defend on their own, unless you have an environment that you've cultivated over time, which starts with ensuring that you are doing everything you can to persist the fight as long as we can. So from my perspective, if somebody on my team finds another job at another company, I'm thrilled. I'm totally supportive, because that means there's another friend of mine out there that I can call, who will get new experiences that I will probably rely on, or they'll rely on me at some point in the future to figure out and solve some tough problems. And when we think about the goal of why you're doing things, that's what I really try to hone in on. My goal is to protect our way of life, point blank, and persist the fight. When I'm long gone, you know, sitting on a porch napping next to the cat that's also napping, I'll be sleeping soundly, because I know that there'll be a great next team focused on solving the cybersecurity problems of the day, problems that will be way beyond the ones I experienced when I was doing it.
Dave Bittner: That's Jon Check from Raytheon Intelligence & Space. You can hear more of my conversation with him on today's T-Minus daily space intel briefing.
Dave Bittner: And I am pleased to be joined once again by Andrea Little Limbago. She is Senior Vice-President for Research and Analysis at Interos. Andrea, it's always great to have you back. You know ChatGPT has been in the news a lot, and I wanted to touch base with you about these language models and what we're seeing around the world responding to this when it comes to policy.
Andrea Little Limbago: Yeah, no, thanks, Dave. There's been great discussion about the technology, and there's been a lot of, you know, sort of fun discussion about what we can do as far as making emails sound like they were written by Shakespeare and so forth. Obviously, there are some benefits that go along with it, but there's also a negative side, where we do see aspects of encouraging hate speech, false information, and so forth. There are actually some new words being coined based upon this. One is "hallucitations," because some of the citations it creates are fake, and so it adds a lot of complexity: it may say something that sounds very valid, it'll cite a Washington Post article, and it turns out the article does not exist. There's also algorithmic disgorgement, which, to the point on the policy side, is the penalty the FTC can now wield against companies when they use deceptive practices in how they obtain the data that's required for training their algorithms. Basically, and this goes back to Cambridge Analytica, so it's not just in response to ChatGPT, though we're obviously going to see growing usage of it, companies need to erase those algorithms if they were trained on data used without consent. That's why one of the bigger aspects in the US is the FTC, and there's also an AI copyright lawsuit going on, the first class action lawsuit in the US against GitHub, over some of the training and output of some of their work. But then even more globally, we're seeing Canada, Italy, Spain, and some EU working groups brought together to either review or block it. Italy has taken the stance of trying to block the use of it. Spain just announced recently that they're reviewing whether they want to block it as well, and they're going to start doing some coordination across the EU in that regard, because of some of the concerns over the false and negative information that can come from it, as well as the lack of consent required for some of the training. You can think about, for instance, facial recognition that's trained without the consent of the people, but also copyright infringement from training on articles that should be copyrighted. So there's a whole range there, everything from, you know, sort of that level to, in Australia, a mayor who is suing because ChatGPT's output said he had been in jail, when he was actually the whistleblower who put someone in jail. And things along those lines continue to happen. There are defamation suits, where information about someone may be false. You know, there's an example recently where one professor basically asked, "Who are all the professors that have had sexual harassment claims against them?" and the list was not accurate. And that can be, as you can imagine, really harmful to someone if their name were to show up on that list and be taken as truth.
Dave Bittner: Yeah.
Andrea Little Limbago: So there's a lot of issues starting to go along with it. They're going to start, I think, imposing some guardrails going forward, and it's interesting that it's really been quite quick on the policy side, you know, much faster than we've seen in some other areas.
Dave Bittner: Why do you suppose it's been so quick? Is it just- I mean, is it the amount of attention that it's gotten?
Andrea Little Limbago: I think that helps with it, and I think also just the accessibility of it. If you think about it, I mean, kids are able to use it to help explore and write papers at this point. So it's the user experience that goes along with it that makes it very easy for anyone to use, and I think that alone makes it much more omnipresent than something that would require someone to, you know, have a computer science PhD to leverage the algorithm. So I think the usability really played a big role in it. I think that, coupled with- you know, there was a data breach that occurred a little while ago where some of the search history was leaked, along with some payment information. I think that also added to it. But the biggest factor was probably the usability and how quickly it spread.
Dave Bittner: Do we have any elections coming up where there's concern? I mean, obviously, the US in 2024, but anything closer to the horizon where folks have raised concerns?
Andrea Little Limbago: You know, that's a good point. I think more broadly, there's this growing concern around all elections as far as deep fakes, and even, you know, voice mimicry, where it sounds like a politician is saying something that they never actually said. We saw a fair amount of that in the last presidential election, and there have been different aspects of that popping up across the globe. So that, for sure, is something to be concerned about. There were some instances in Nigeria around their recent election and some protests, where a lot of disinformation was spread that then led to, you know, ethnic conflict in the area. So it's a whole range, and some of that can be automated, and that's where, you know, you get the connection of bots with this information to help it spread. It's, again, where some of the algorithms come in and really get to the widespread nature of it.
Dave Bittner: Are you optimistic that there's going to be policy solutions to this? As you say, I mean, the response has been quick, which I suppose is refreshing and good, but is this something we're equipped to handle?
Andrea Little Limbago: I'm not sure, and that's where I think we're going to learn a lot over the next year about what may be working and what doesn't, because on the one hand, you know, it's interesting. Under Wassenaar, at one point, which covers dual-use technologies, there was a discussion on how to treat encryption, and then a little bit of discussion on algorithms. And, you know, part of the pushback was, well, how do you ban math? And that does make it very hard. So I think there's going to be sort of that tendency, coupled with a tendency for, ideally, more of a guardrail approach that basically provides guidance on how to properly get training data and consent and so forth. And that could actually help move it forward quite well and progressively. I do worry about the all-out blocking and banning of it, because I do think that when you ban very capable technologies for some and not others, that puts you at a disadvantage. So I'm optimistic that we can find ways to leverage generative AI in a way that can be beneficial and provide the force-multiplying power that it could, while still protecting and preserving people's profiles and guarding against defamation, and help train it in a better way. But I think there's a long way to go. There's going to be a lot of trial and error, I think, over the next few years.
Dave Bittner: Yeah. All right. Well, Andrea Little Limbago, thanks for joining us.
Dave Bittner: And that's the CyberWire. For links to all of today's stories, check out our daily briefing at thecyberwire.com. We're privileged that N2K and podcasts like the CyberWire are part of the daily intelligence routine of many of the most influential leaders and operators in the public and private sector, as well as the critical security teams supporting the Fortune 500 and many of the world's preeminent intelligence and law enforcement agencies. N2K's strategic workforce intelligence optimizes the value of your biggest investment, your people. We make you smarter about your team while making your team smarter. Learn more at n2k.com. This episode was produced by Liz Irvin and Senior Producer Jennifer Eiben. Our mixer is Tre Hester, with original music by Elliott Peltzman. The show was written by Rachel Gelfand. Our Executive Editor is Peter Kilpe, and I'm Dave Bittner. Thanks for listening. We'll see you back here tomorrow.