New Zealand stock exchange sustains DDoS attacks. Flash alert on GoldenSpy. Cyber mercenaries and industrial espionage. Lèse-majesté online. Offering $1 million to a potential co-conspirator?
Dave Bittner: New Zealand's stock exchange has sustained two distributed denial-of-service attacks this week. CISA and FBI issue an alert about GoldenSpy. Two cyber mercenary groups are engaged in industrial espionage for hire. Thailand decides to crack down on sites that host content the government deems illegal. Joe Carrigan looks at new types of crimes made possible by AI. Our guest is Shane Harris from The Washington Post on an elite CIA unit which failed to secure its own systems. And a Russian national faces U.S. charges of conspiracy to damage a computer.
Dave Bittner: From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Wednesday, August 26, 2020.
Dave Bittner: NZX Ltd., operator of New Zealand's stock exchange, halted trading for a few hours yesterday as it sustained a cyberattack. Reuters reports that it was the second such attack the exchange had suffered in as many days. According to Security Brief, the incident was a distributed denial-of-service attack - specifically, a volumetric distributed denial-of-service attack from offshore.
Dave Bittner: A distributed denial-of-service attack of this kind in itself doesn't put data at risk, but it does interrupt operations. In this case, as the BBC points out, it's likely that investors and brokers were unable to execute trades. The attack remains under investigation.
Dave Bittner: There's no indication in any of the reports that NZX received any threats or extortion demands before the attack hit. But CERT-NZ did warn back in November that emails styling themselves as being from Fancy Bear threatened denial-of-service attacks against financial services firms unless the companies paid a ransom. Nothing came of it at the time, though, beyond a brief flurry of 30-minute demonstrations. So November's threats were empty.
Dave Bittner: It's also a lead-pipe cinch that the threats didn't come from the real Fancy Bear, which of course is a hacking unit of Russia's GRU. Instead, they were an early instance of copycat criminals attempting to cash in on the intelligence service swank that attaches to the Bears. So, no, not Fancy Bear or any other ursine threat group, just hoods using a booter.
Dave Bittner: Infosecurity Magazine reports that CISA and the FBI have issued a joint flash alert concerning the GoldenSpy malware embedded in tax software that Beijing requires businesses operating in China to use. The alert points out that GoldenSpy is the work of a threat group that knows what it's doing.
Dave Bittner: Quote, "This reveals the actor's high level of sophistication and operational awareness. The software service providers have not provided a statement acknowledging the software supply chain compromise," the alert reads. It goes on to say that "the FBI assesses that the cyber actor's persistent attempts to silently remove the malware is not a sign of resignation, rather it is an effort to hide their capabilities. Organizations conducting business in China continue to be at risk from system vulnerabilities exploited by the tax software and similar supply chains," end quote.
Dave Bittner: Two mercenary groups are drawing attention. The first, DeathStalker - identified and named by the security firm Kaspersky - targets financial services and legal firms. DeathStalker doesn't seem to be monetizing its hacking in any obvious way. It's not demanding ransom, and its take hasn't been seen for sale in any of the usual dark web markets. This suggests that it's a hack-for-hire operation. As the report puts it, quote, "They don't deploy ransomware, steal payment information to resell it or engage in any type of activity commonly associated with the cybercrime underworld. Their interest in gathering sensitive business information leads us to believe that DeathStalker is a group of mercenaries offering hacking-for-hire services or acting as some sort of information broker in financial services," end quote.
Dave Bittner: Kaspersky says they've found that DeathStalker has been active since 2018, with some signs suggesting that the group may have been active as early as 2012. DeathStalker's signature tool is Powersing, a PowerShell-based implant. DeathStalker could be a small group or even a skilled individual taking good advantage of a reliable tool. DeathStalker appears to choose its targets either for their perceived value or because it's been tasked to hit those targets by those who've hired DeathStalker.
Dave Bittner: The other mercenary gang doesn't have a name yet, let alone one so menacing as DeathStalker. Researchers at the security firm Bitdefender this morning described the other mercenary crew as an industrial espionage outfit. The target is an unnamed luxury real estate company with a large architectural practice. The hackers used a maliciously crafted plugin for Autodesk 3D Studio Max, a widely used 3D computer graphics tool. The plugin deploys a backdoor used to scout for valuable files.
Dave Bittner: The threat group's command-and-control infrastructure is based in South Korea. Telemetry suggests to Bitdefender that there may be other unidentified victims in South Korea, the United States, Japan and South Africa. Who's behind the group is unclear. It may be a purely criminal operation. But Bitdefender points out that similar mercenary operations in the past have been connected to state-sponsored groups, perhaps moonlighters.
Dave Bittner: The Washington Post reports that Thailand is cracking down on social media critical of the country's monarchy. The minister of digital economy and society said that when the ministry deemed a web address to contain illegal material, it would obtain a court order to block access in Thailand to that address. Enforcement would then fall on the platform that carries the illegal material. The platform would have 15 days to comply with the court order or face legal action.
Dave Bittner: The decision came to general attention because Facebook was directed to take down the Royalist Marketplace group, whose posts were deemed insulting to Thailand's monarchy. Facebook complied, but it's also preparing legal action to challenge the order. A Facebook spokesperson told CNN, quote, "Requests like this are severe, contravene international human rights law and have a chilling effect on people's ability to express themselves. We work to protect and defend the rights of all internet users and are prepared to legally challenge this request," end quote.
Dave Bittner: And, finally, a Russian national, Egor Igorevich Kriuchkov, has been arrested in Los Angeles by U.S. authorities who allege that he was conspiring to intentionally damage a computer. The Las Vegas Sun reports that the FBI maintains that from about July 15 to about August 22, Kriuchkov conspired with associates to recruit an employee of an unnamed Nevada company to introduce malware into that company's computer network. The feds say that Mr. Kriuchkov was offering prospective co-conspirators up to a million dollars to help him install that malware.
Dave Bittner: Shane Harris writes for The Washington Post, and he joins us with details from his recent story on an elite CIA unit that developed hacking tools but came up short when securing its own systems.
Shane Harris: Well, we have been following this story for more than a year now. This relates to a huge leak or disclosure of CIA computer hacking tools that occurred back in March 2017, when they were published on WikiLeaks, which gave this release the name Vault 7, which your listeners may be familiar with. We were following that when it occurred, and then we later broke the story about the government arresting someone who they suspected in the leak itself, a former CIA employee. And so we've just sort of been on this for a while now, covering his trial as well. And once this report, this internal report, came to light - it was shared with us by a senator who is key on these issues as well, Senator Wyden. And it was really the first look that we had had internally to the CIA at how they believed this leak occurred and the assessment of the damage that they gave it as well.
Dave Bittner: Your article mentions that perhaps there were some misunderstandings between the folks who ran the unit and the people who maintained the network, that there might have been some problem with some contractors?
Shane Harris: Yes. One of the issues that got noted in the report is this question around whether or not this network on which the CIA employees were building these cybertools - and we should emphasize, this is a network that is separate from the larger enterprise network of the CIA. So it's kind of its own discrete little sandbox, if you will. The engineers who are working on that presumed that they had an ability to audit that network. It turns out that that actually was not as well-maintained as these offensive folks thought and that the network itself was being maintained by contractors. And this former official told us that there was this misunderstanding between the people who run the unit and the people who maintained the network.
Shane Harris: And now, of course, we see why that misunderstanding and that disconnect proved to be so disastrous. But what this person was essentially saying is, like, look - these were separate jobs, and - you know, and the offensive guys assumed that the contractors were protecting them in ways that, ultimately, they just weren't.
Dave Bittner: How is the CIA responding to this report? Has there been much pushback, or are they taking their lumps and looking at it as lessons learned?
Shane Harris: I think the latter, really. I mean, this - it's our understanding that the panel that did this review - and they're not identified in the report - are well-respected in the agency. There's a sense that, you know, they did do an adequate job. They know what they're talking about. You know, they have enough familiarity with the subject matter. And - you know, and the CIA recognizes that this was not only a huge breach, but they - the government lawyers prosecuting the alleged leaker have said in court that it was the biggest unauthorized disclosure of classified CIA information in history. You know, it led to the shutting down of operations. It exposed these tools to American adversaries. So I don't think the agency is trying to sugarcoat it. They know how bad this is, and they are very aggressively pursuing this individual who they think was the leaker.
Dave Bittner: That's Shane Harris from The Washington Post.
Dave Bittner: And joining me once again is Joe Carrigan. He's from the Johns Hopkins University Information Security Institute, and also my co-host over on the "Hacking Humans" podcast. Hello, Joe.
Joe Carrigan: Hi, Dave.
Dave Bittner: Interesting article came by from ZDNet. It has what I guess is a somewhat breathless title. It's "Evil AI: These are the 20 Most Dangerous Crimes that Artificial Intelligence will Create." But under the hood there, there's some interesting things in here. Take us through this article, Joe. What's going on?
Joe Carrigan: So what happened was there was a ranking that was put together after scientists from University College London compiled a list of 20 AI-enabled crimes. And this was kind of like a survey of these scientists. And they ranked these crimes in order of concern based on what harm they could cause, the potential for criminal profit or gain, how easy they are to carry out and how difficult they would be to stop.
Dave Bittner: Hmm. OK.
Joe Carrigan: So topping the list, not surprisingly, something that we've seen before, are deepfakes. And I've said before that I'm not really concerned about deepfakes for the 2020 election, but I am very concerned about deepfakes and the 2024 election. I think that's going to be enough time for these things to improve to the point where they may become a problem. This article on ZDNet points out that there are tools out there on many of these platforms that can detect deepfakes, but there are plenty of unmoderated or uncontrolled - I don't want to say censored - but plenty of other channels for this information to flow through, this misinformation. So that's actually - I don't actually disagree with that. Deepfakes are potentially one of the most devastating things we're going to be seeing coming out of AI.
Joe Carrigan: Also on their list of high-concern crimes is AI-authored fake news. They predict that's going to be a real problem as well, and it may very well be. This is where we're going to have to have information provenance, so we can know the history of where a piece of information came from, and there's got to be some kind of technical solution around verifiable information for this. But then, that relies on the populace to understand how this works and how to collect valid information for their own opinion-forming, and not collect this fake information.
Joe Carrigan: The other things they list here are driverless cars being weaponized. Tailored phishing - I think that's a good observation, that tailored phishing is going to become a problem with AI. Large-scale blackmail is interesting, you know, the ability to automate the collection of data on all kinds of people and then essentially threaten them with doxxing. I mean, can you imagine the amount of money you could make on just threatening to dox a million people?
Joe Carrigan: Some of the lower-concern things they have are misuse of military robots, snake oil - I have a real problem with people who sell snake oil - learning-based cyberattacks, autonomous attack drones, denial of service and online activities. And here's another one - manipulating financial or stock markets. I actually think that's a bigger threat as well; it's something that offers an opportunity for huge profits, huge profits.
Joe Carrigan: And then the AI crimes that they have a low concern here are burglar bots, AI-authored fake reviews and AI-assisted stalking. I don't know why that's so low. But burglar bots, I'm not too worried about burglar bots (laughter).
Dave Bittner: Yeah. The only one that stands out to me and leaves me scratching my head is driverless vehicles as a weapon and having that be a high concern. I don't know. I wonder about that. I mean, a driverless vehicle presumably is going to be pretty traceable. It's going to have, you know, some kind of VIN on it. Folks I've talked to about this sort of thing have said, yeah, but this is one of those things that sort of relies on social norms. It just doesn't really happen, you know.
Joe Carrigan: Right.
Dave Bittner: They grab headlines. They're interesting to think about as worst-case scenarios, but they just don't really happen that much.
Joe Carrigan: Right. I'm not too terribly concerned about that right now. I am concerned about the verifiability of autonomous systems. In fact, we have the Institute for Assured Autonomy now at JHU, where we're focusing research on making sure that autonomous systems are verifiable, among other things. The things I find most concerning are the deepfakes and the fake news from AI. I don't think that people will use those for profit, but I do believe people will use them as a means to power. And I don't know if our listeners have picked up on this, but I am very leery of people who seek power.
Dave Bittner: (Laughter) OK, fair enough. All right. Well, again, the article's title is "Evil AI: These are the 20 Most Dangerous Crimes that Artificial Intelligence will Create." It's over at ZDNet. Joe Carrigan, thanks for joining us.
Joe Carrigan: My pleasure, Dave.
Dave Bittner: And that's the CyberWire. For links to all of today's stories, check out our daily briefing at thecyberwire.com. And for professionals and cybersecurity leaders who want to stay abreast of this rapidly evolving field, sign up for CyberWire Pro. It'll save you time, keep you informed, and 4 out of 5 dentists recommend it. Listen for us on your Alexa smart speaker, too.
Dave Bittner: The CyberWire podcast is proudly produced in Maryland at the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. Our amazing CyberWire team is Elliott Peltzman, Puru Prakash, Stefan Vaziri, Kelsea Bond, Tim Nodar, Joe Carrigan, Carole Theriault, Ben Yelin, Nick Veliky, Gina Johnson, Bennett Moe, Chris Russell, John Petrik, Jennifer Eiben, Rick Howard, Peter Kilpe, and I'm Dave Bittner. Thanks for listening. We'll see you back here tomorrow.