The CyberWire Daily Podcast 8.2.22
Ep 1632 | 8.2.22

Nomad cryptocurrency bridge looted. BlackCat ransomware hits European energy company. DSIRF disputes Microsoft's report on cyber mercenaries. Are there spies under Mr. Putin’s long table?


Dave Bittner: Nomad cryptocurrency bridge is looted. The BlackCat Ransomware Gang hits a Luxembourg energy company. DSIRF disputes Microsoft's characterization of the Austrian firm as cyber mercenaries. Ben Yelin looks at privacy concerns in the education software market. Our guest is P.J. Kirner from Illumio to discuss zero-trust segmentation. And finally, are there spies under Mr. Putin's very, very long table?

Dave Bittner: From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire Summary for Tuesday, August 2, 2022.

Nomad cryptocurrency bridge looted.

Dave Bittner: Bloomberg reports that Nomad, which provides a bridge over which crypto tokens can be shifted to different blockchains, was hit yesterday by an attack that's caused the loss of nearly $200 million in cryptocurrency. PeckShield, which has been following developments over its Twitter account, is credited with noticing the caper. Apparently, there was a flaw in the platform's blockchain contract that allowed users to withdraw more than they deposited. After the initial exploit, around 40 other copycat attacks followed. 

Dave Bittner: We heard from Comparitech's head of data research, Rebecca Moody, who ranked this attack as the ninth-largest of its kind. She stated, overnight, Nomad bridge was drained for over $190 million in the third-biggest crypto heist of 2022 and the ninth-biggest of all time, according to Comparitech's worldwide cryptocurrency heist tracker. But in a unique twist, the hack on Nomad appeared to be carried out by numerous copy-and-paste actors. Experts suggest that the initial hacker found a fatal flaw in the platform's replica contract, meaning anyone, including those with zero coding knowledge, could locate a transaction that worked, use their address to replace the user's address and rebroadcast it. Over the space of a few hours, almost all of the bridge's $190.7 million was drained, with just $651.54 left. It's unclear how much, if any, of the currency lost will be recovered. Moody says there are suggestions that white-hat hackers removed some of the funds to safeguard them, but it remains to be seen just how much of the $190 million is recoverable. 
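The copy-and-paste exploitation Moody describes - find a transaction that worked, substitute your own address, rebroadcast - can be illustrated with a toy sketch. This is not Nomad's actual contract code; the class, field names, and the zero-root detail are simplifying assumptions about the reported proof-check flaw.

```python
# Illustrative sketch (not Nomad's actual code) of the reported flaw:
# a bridge whose proof check mistakenly treats unproven messages as
# valid, so anyone can copy a successful withdrawal, swap in their own
# address, and rebroadcast it. All names here are hypothetical.

class VulnerableBridge:
    def __init__(self, funds: int):
        self.funds = funds
        self.proven = {}          # message hash -> proven root
        self.TRUSTED_ROOT = 0     # flaw: the zero root is treated as valid

    def _root_of(self, message: tuple) -> int:
        # Unproven messages fall through to the default root of 0,
        # which the buggy check below happens to trust.
        return self.proven.get(hash(message), 0)

    def process(self, message: tuple) -> bool:
        recipient, amount = message
        if self._root_of(message) != self.TRUSTED_ROOT:
            return False          # intended path: reject unproven messages
        payout = min(amount, self.funds)
        self.funds -= payout
        print(f"paid {payout} to {recipient}")
        return True

bridge = VulnerableBridge(funds=190_000_000)

# A legitimate-looking withdrawal the attacker observed on-chain...
observed = ("0xVictim", 1_000_000)
bridge.process(observed)

# ...is copied, the recipient swapped, and rebroadcast. No proof, no
# coding knowledge needed - which is why copycats piled on.
copied = ("0xCopycat", 1_000_000)
bridge.process(copied)
```

Because the flawed check accepts every unproven message, each rebroadcast drains funds until nothing is left, matching the pattern of dozens of copycat withdrawals described above.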

BlackCat ransomware gang hits Luxembourgeois energy company.

Dave Bittner: The BlackCat ransomware privateers, also known as ALPHV and generally regarded as a DarkSide successor - or simply as DarkSide rebranded - claimed responsibility for an attack on Creos, a Luxembourg company that operates a major Western European gas pipeline, BleepingComputer reports. According to The Record, the group claims to have stolen 150 gigabytes of data that they said includes contracts, passports, bills and emails. They threatened to leak the data on Monday, but as of the afternoon, no data had been released. Creos's corporate parent, Encevo, said late last week that it was continuing to investigate the incident, which has affected its customer-facing portals. Like its immediate ancestor, DarkSide, responsible for last year's cyberattack against Colonial Pipeline, BlackCat is based in Russia and has shown an interest in targeting Western energy infrastructure. 

DSIRF disputes Microsoft's characterization of the Austrian firm as cyber mercenaries.

Dave Bittner: Reuters quotes a statement by the Austrian firm DSIRF, whom Microsoft had described as cyber mercenaries selling Subzero spyware to customers who abused it. DSIRF said in an emailed statement, "Subzero is a software of the Austrian DSIRF which has been developed exclusively for official use in states of the EU. It is neither offered, sold nor made available for commercial use. In view of the facts described by Microsoft, DSIRF resolutely rejects the impression that it has misused Subzero software." Reuters says it's not clear who DSIRF's legitimate European Union customers are. Microsoft identified DSIRF as the threat group it tracks as Knotweed. And we note in full disclosure, Microsoft is a CyberWire partner. 

Spies under Mr. Putin’s very, very long table?

Dave Bittner: And finally, what if there were a bunch of independent journalists under President Putin's bed? Well, all right. Maybe not under his bed, but how about under that really, really long table he likes to use when receiving foreign dignitaries? Bellingcat running its own crew of moles in the Kremlin - too ridiculous to rebut. But still, you can see why those independent journalists like the idea. So what are we talking about? Well, there's a story circulating in disinformation circles. The claim that Bellingcat has compromised the GRU comes from SouthFront, an English-language news service and Russian government front organization. The teaser for SouthFront's video report reads in part, independent journalist Dilyana Gaytandzhieva, founder of Arms Watch and a SouthFront correspondent, appealed to Russian President Vladimir Putin. She says that Putin's elite inner circle is infiltrated by NATO informants. She asked for a meeting with Ramzan Kadyrov, boss of Chechnya and one of Mr. Putin's more intemperate and brutal political allies, to give him a list of names of identified infiltrated GRU agents. Ms. Gaytandzhieva said in her video, according to my source, Ramzan Kadyrov is the only person in your circle who can be trusted. 

Dave Bittner: Why SouthFront would take pains to single out the GRU is an interesting question. It's not at all clear what advantage Russia might see in convincing foreigners that one of its main intelligence services had been so seriously compromised. The publication of this particular story may indicate that a purge of the GRU is in the offing. SouthFront is believed to operate from Crimea and is probably run by the FSB. It's been on the US Treasury Department's list of sanctioned entities since April of 2021. 

Dave Bittner: The singling out of the GRU as a source of leaks, deception and disinformation seems significant. The FSB may be preparing the ground for a purge of its sister and rival service. So far, the FSB has taken the brunt of Mr. Putin's wrath for intelligence failures in Russia's war with Ukraine. More than a hundred officers are believed to have been dismissed and arrested. The FSB may wish to share some of the heat it's feeling. Or it could be that the president wishes to ensure that no one intelligence service grows too powerful. If this proves to be so, the purge would be another throwback to the 1930s, when Stalin used the GRU and the FSB's predecessors to keep one another in check. That's speculation, but one imagines a lot of GRU officers are feeling uneasy today. 

Dave Bittner: One of Bellingcat's leading figures commented on Twitter about the claims that they were running a bunch of GRU agents, saying, the idea that Bellingcat, of all organizations, would have spies breathing down Putin's neck and at the top of the GRU, feeding him disinfo and passing personal secrets to us, as flattering as it is, is so ridiculous it doesn't even warrant a serious rebuttal. Of course, Bellingcat would say that. Who's that under Mr. Putin's very, very long table? 

Dave Bittner: P.J. Kirner is CTO and co-founder of security firm Illumio. They recently published a zero-trust segmentation impact report, and my conversation with P.J. Kirner started with a helpful analogy. 

P J Kirner: The easiest way to understand this is to sort of think about how submarines are built for resiliency, right? So they have redundant systems, and then they have small compartments. And what those compartments are for is that if there is a breach, they seal off one of those compartments so the water doesn't, you know, flood the submarine and the submarine doesn't sink. So segmentation in an IT environment is exactly the same thing. How do you sort of compartmentalize and sort of prevent a breach from becoming a giant disaster in your environment? 

Dave Bittner: Yeah, that's a great description. Can you take us through some of the highlights from the report here? What are some of the things that grabbed your attention? 

P J Kirner: So there are different elements of zero trust, right? So one element is this idea of least privilege, in the sense that rather than having everybody have access to, you know, a lot of things, only the things that are necessary and business-critical or business-defined get access. Another concept is this concept of assume breach, and an assume breach mentality is you assume the attacker is already inside - right? - already has breached the perimeter. Somebody already clicked on that phishing link. It's maybe sitting there on a laptop. And if you have that mentality, then you start thinking about, you know, cyber controls that help you prevent things from moving laterally and making it a worse disaster than it already is. 
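The least-privilege, default-deny model described above can be sketched as a simple allowlist: no workload talks to any other unless an explicit, business-defined rule permits it. This is an illustrative sketch, not Illumio's product logic; the tier and port names are hypothetical.

```python
# Minimal default-deny segmentation sketch: traffic between workloads
# is refused unless an explicit least-privilege rule allows it.
# All workload names and ports here are hypothetical.

ALLOW_RULES = {
    ("web", "app", 8080),   # web tier may call the app tier
    ("app", "db", 5432),    # app tier may query the database
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Default deny: only explicitly allowed flows pass."""
    return (src, dst, port) in ALLOW_RULES

# Legitimate, business-defined flows succeed...
print(is_allowed("web", "app", 8080))   # True

# ...but lateral movement from a compromised web host straight to the
# database is blocked - the "assume breach" payoff of segmentation.
print(is_allowed("web", "db", 5432))    # False
```

The design choice is that the empty policy blocks everything, so a breached workload can reach only what its rules name, never the whole environment.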

P J Kirner: So that mentality is important. And one thing that the research did was sort of measure - well, first of all, how many people were doing or had zero trust. And, like, one of the metrics is 90% of the people believe it's one of their top three priorities, right? So this was a set of people who believed in zero trust. But when we measured that assume breach mindset, a question that was asked was, do you think you're going to be breached? Three percent of people said no - they believed they were never going to be breached. Another 11% said they thought it highly unlikely, and a whole 31% said they weren't really sure, right? So if you take all that, that's almost 50% of people who didn't really, in my opinion, have the assume breach mindset that is necessary for doing zero trust. 

Dave Bittner: That's interesting. I mean, what is your response to that? Is that - are they being realistic? Is that, you know, whistling past the graveyard? What - how do you respond? 

P J Kirner: Well, I think it's a remnant of, like, our perimeter-based approach to security. For a long time - right? - there was the bad internet, and then we trusted everything that was kind of inside the perimeter. Like, once you walked into the building, you were fully trusted. And, you know, that's how security got done for a long time, that perimeter-based control. And I think that's a remnant people haven't yet gotten past. They'll say they're past the perimeter, they say they believe in it, but it's kind of an indicator that they don't really have the mindset - they haven't moved past that mentality yet. I'm sure this will happen again, and we'll sort of see trends. And that'll actually be an interesting conversation, like, when we do this again next year. We'll sort of compare and contrast how far that has moved. And we can talk about that then. 

Dave Bittner: And what about the segmentation itself? I mean, in terms of measuring results of that, where do we stand? 

P J Kirner: Yeah, it's interesting you ask that question because zero trust in general - I think zero trust segmentation, you know, is a journey. Right? Like, it's an architecture. It's a philosophy. It's not - there's not just a single product you buy, and then you install it and you're done. 

Dave Bittner: Right. 

P J Kirner: We were talking with some, you know, people around RSA about how the whole - like, it's an organizational change, right? Like, the organization needs to, you know, have this mentality as a whole. What we've done and what we've learned over the years is you really need to take a very step-by-step approach to things. You know, there's no boiling the ocean. Boiling the ocean is a recipe for disaster. You need to understand where your crown jewels are, understand what you want to protect, build a ring fence around those crown jewels, do some amount of segmentation, take some small steps along the way, get the organization to sort of see that success, show your board that success, and then sort of repeat that process. So this step-by-step mentality is really important to the success of these projects. 

Dave Bittner: You know, when you look at the results that you've gathered in this report, what are the take-homes for you? What do you hope people take away from it? 

P J Kirner: Yeah. Well, I hope people take away - kind of what you sort of said - that zero trust is a mainstream kind of thing and that zero trust segmentation is a key pillar of doing zero trust. The other thing that was interesting is that there is some proven business ROI around this, right? So averting a cyber disaster, or accelerating a, you know, transformation project - because a lot of what you have to do when you're doing zero trust segmentation is get visibility. You have to understand how things are connected, right? And once you understand how things are connected for your zero trust goals, that has other benefits, right? Like, you understand how your applications work. You might be able to migrate applications to the cloud because you understand their connectivity. So there are other benefits, business benefits, in addition to the security benefits. 

Dave Bittner: That's P.J. Kirner from Illumio. 

Dave Bittner: And joining me once again is Ben Yelin. He is from the University of Maryland Center for Health and Homeland Security and also my co-host over on the "Caveat" podcast. Hello, Ben. 

Ben Yelin: Good to be with you, Dave. 

Dave Bittner: Thank you very much. Interesting article from The New York Times. This is written by Natasha Singer, and it's titled "A Cyberattack Illuminates the Shaky State of Student Privacy." What's going on here, Ben? 

Ben Yelin: So there was a cyberattack against an education software provider, which is called Illuminate Education. And it is one of these pieces of software that collects data on individual students. So there's been kind of a trend of tracking the progress of students, including things like test scores, but also absenteeism, behavioral incidents. You get a lot of pretty personal information on students if you gain access to one of these very popular databases. 

Dave Bittner: OK. 

Ben Yelin: And they're used at school all across the country. 

Dave Bittner: Yeah. 

Ben Yelin: This one is used in the public school systems in New York City and Los Angeles, so certainly... 

Dave Bittner: Big systems (laughter). 

Ben Yelin: Yeah. And there's a concern now after this hack - which isn't even one of the biggest hacks in the history of education software. But there's a concern that's, I think, been illuminated by this hack, that government regulators and individuals have to be more protective of this data. What could be revealed by accessing these files is extremely personal. We talked about things like absenteeism, but also even behavioral incidents that occurred when somebody was very young that stay on your so-called permanent record. 

Dave Bittner: Right. 

Ben Yelin: That could end up being something that a college looks at as it considers one's application. So it's beyond just grades and test scores. It's also demographic information that might be sensitive or personal or something that somebody does not want to reveal. There really is this, I think, valid concern that we're just putting too much information into these databases that we now know are vulnerable to cybercrime and espionage. So there are a couple of action items that can happen here. One is that educational institutions have to be more proactive when they are contracting for the services of this type of software, making sure that whatever they're using has the most robust cybersecurity protections. That's not a fail-safe, but that's something that could certainly help the problem. But then on the broader level, I think it might be incumbent upon policymakers - it would probably start at the state level, but eventually might make its way to the federal level - to institute some type of minimum security standards for the use of this type of education software. That would have to happen first in the public school system because that's where the government would have jurisdiction. 

Dave Bittner: Right. 

Ben Yelin: But I think we could start to see that pop up in some of these larger school districts where you have to comply with certain NIST cybersecurity standards in order to sell your product to this particular school district. I think the more of these types of incidents that occur, we might see regulators be more motivated to take that type of action. 

Dave Bittner: Now, where does the FTC come in on this? I mean, they have the Children's Online Privacy Protection Act. Would that apply to a situation like this? 

Ben Yelin: Yeah. So the FTC has fined a lot of different companies based on violations of children's privacy - high-profile cases like YouTube and TikTok. But the agency has yet to enforce the industry's kind of self-policing initiative, which is the Student Privacy Pledge. In May, the FTC announced that regulators were going to try and crack down on ed tech companies that violate COPPA, the Children's Online Privacy Protection Act. 

Dave Bittner: Right. 

Ben Yelin: So they are pursuing a number of non-public investigations into these companies. That's according to an FTC spokeswoman interviewed as part of this article. But we don't know exactly where that's going to go and what will come out of these investigations. But yeah, the FTC is a major player here because they have enforcement authority under COPPA. 

Dave Bittner: I just - you know, I just wonder, like, it's - at what point does all this stuff come to a head? You know, like, this - and I mean, you could joke. I mean, it's a trope. You can say, you know, protect the children, protect the children. But in this case, we're talking about protecting the children, right? 

Ben Yelin: Right. Right. And sometimes we justify the use of this software by saying, you know, we want to keep track of things like absenteeism and behavioral problems so that we can take corrective action. 

Dave Bittner: Yeah. 

Ben Yelin: We can have an algorithm that identifies problem students, and we can do X, Y and Z once those students are identified. That sounds really good in theory, but there are certainly some negative aspects of collecting that amount of data - both for the school district, because it's more likely to become a target of cybercriminals, and for students, who might have really personal information collected that could hurt future job prospects if it winds up on the dark web, searchable. So it can have real consequences for students. So I think ed tech, while it's very promising, has these pretty pronounced downsides. And I think school districts, when they're making decisions as to how to employ this type of technology, really have to take that into consideration, at least before we have these minimum security standards, or we know that the software is capable of withstanding some of these attacks. 

Dave Bittner: Yeah. 

Ben Yelin: And it's happened so many times now that we know that this is not an isolated concern. Anybody that - any entity that maintains this type of private information is vulnerable to cyberattacks. 

Dave Bittner: Yeah. 

Ben Yelin: And that certainly does not exclude school districts from being in that category. 

Dave Bittner: I'm just imagining somebody, you know, decades later, trying to get a security clearance, or even just a job, and being asked about, you know, the time that they blew up a watermelon in the cafeteria microwave or something (laughter). 

Ben Yelin: Exactly. We have evidence that when you were 6 years old, you punched Timmy in the face and were put on timeout for 10 minutes, is that correct, sir? Yeah. No, I doubt that is actually going to happen, but I don't think that's that far off, especially if this becomes more widely accessible and searchable. I mean, some of the other big hacks that targeted OPM, something like Ashley Madison... 

Dave Bittner: Right. 

Ben Yelin: ...Like, the information is really used against people, even if it was obtained unlawfully. 

Dave Bittner: Yeah. 

Ben Yelin: And if it's between you and one other candidate, and you have this blot on your record that's discoverable on the internet, maybe that hurts your job prospects. And that's fundamentally unfair to these students. So I think it makes it certainly worthy of our attention. 

Dave Bittner: Yeah. All right, well, Ben Yelin, thanks for joining us. 

Ben Yelin: Thank you. 

Dave Bittner: And that's the CyberWire. For links to all of today's stories, check out our daily briefing at thecyberwire.com. The CyberWire podcast is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. Our amazing CyberWire team is Elliott Peltzman, Tre Hester, Brandon Karpf, Eliana White, Puru Prakash, Justin Sabie, Liz Irvin, Rachel Gelfand, Tim Nodar, Joe Carrigan, Carole Theriault, Ben Yelin, Nick Veliky, Gina Johnson, Bennett Moe, Chris Russell, John Petrik, Jennifer Eiben, Rick Howard, Peter Kilpe, and I'm Dave Bittner. Thanks for listening. We'll see you back here tomorrow.