The CyberWire Daily Podcast 12.5.23
Ep 1960 | 12.5.23

Sleeper malware denied at Sellafield nuclear site.

Transcript

Dave Bittner: The UK Government denies reports of a cyber incident at Sellafield. There's been a surge in Iranian cyberattacks on US infrastructure. Misuse of Apple's Lockdown Mode. The mysterious AeroBlade's activities in aerospace, and a clever "Disney+" scam. Plus, the latest application security trends. In our Industry Voices segment, we welcome Matt Radolec, Vice President of Incident Response and Cloud Operations at Varonis, explaining the intersection of AI, cloud, and insider threats. And insights on resilience from the UK's Deputy PM.

Dave Bittner: Today is December 5, 2023. I’m Dave Bittner. And this is your CyberWire Intel Briefing.

HM Government denies reports of cyber incident at Sellafield nuclear site.

Dave Bittner: The Guardian yesterday reported a cyberattack on the British nuclear facility at Sellafield, allegedly perpetrated by foreign threat actors linked to China and Russia. The attack reportedly involved "sleeper malware," potentially dating back to 2015, and was disclosed in a report by the Office for Nuclear Regulation (ONR), which noted security shortfalls at the facility, which is primarily engaged in nuclear waste storage and processing.

Dave Bittner: Sellafield Ltd., the facility's operator, and HM Government have strongly denied these claims. Sellafield Ltd. stated that there is no record or evidence of such an attack and has challenged The Guardian to provide evidence for its allegations. The ONR supported this denial, confirming the absence of evidence for the reported hack. Nonetheless, the ONR did acknowledge ongoing security investigations at Sellafield and noted that the facility is not meeting certain required cyber security standards, resulting in increased regulatory attention.

Dave Bittner: In response to these reports, the Labour opposition has sought clarification from the Government's ministers regarding The Guardian's claims. This development has sparked concerns and prompted political inquiry into the matter, highlighting the critical nature of cybersecurity in sensitive national infrastructure.

Reports of more Iranian cyberattacks against US infrastructure.

Dave Bittner: Since the CyberAv3ngers, linked to Iran's Islamic Revolutionary Guard Corps (IRGC), claimed attacks on a water utility and a brewery in Western Pennsylvania, citing the victims' use of Israeli-made Unitronics PLCs, three other Iranian-affiliated groups have followed suit. Haghjoyan, CyberToufan Group, and YareGomnam Team have also claimed similar attacks against users of Israeli equipment, as reported by The Register.

Dave Bittner: In a separate incident, The Record notes that Florida's St. Johns River Water Management District experienced an unspecified cyberattack, potentially ransomware, by an unknown or undisclosed threat actor. The District implemented successful containment measures after detecting suspicious activity in its IT environment.

AeroBlade prospects US aerospace industry.

Dave Bittner: BlackBerry researchers have discovered a new threat actor, named "AeroBlade," targeting the US aerospace sector through a spearphishing campaign. AeroBlade, which emerged late last year, focuses on commercial and competitive cyber espionage. The group's activities extend beyond mere information collection. BlackBerry's report suggests that AeroBlade's primary objective might be to assess the internal resources and vulnerabilities of its targets, potentially setting the stage for future ransom demands. This indicates a strategic approach to cyber espionage, where initial data gathering could lead to more aggressive financial extortion tactics.

Apple lockdown mode in the service of fraud.

Dave Bittner: Jamf has identified a post-exploitation technique where attackers can deceive users by making an already compromised iOS device appear to be in Lockdown Mode, creating a false sense of security. The researchers emphasize that while Lockdown Mode reduces the attack surface on iOS devices, it does not function as antivirus software. It cannot detect existing infections nor prevent malware from operating on a compromised device. Therefore, its effectiveness is limited to preventing attacks before they occur by reducing potential entry points for attackers. This research highlights the importance of understanding Lockdown Mode's capabilities and limitations, underlining that it cannot mitigate threats on already compromised devices.

"Disney+" scam.

Dave Bittner: Abnormal Security has reported a phishing campaign using a "Disney+" theme. The campaign sends emails with PDFs resembling invoices, using the recipient's real name and falsely claiming they will be charged $49.00 for the next month's subscription, significantly higher than the actual cost. The PDFs include a phone number for cancelling the subscription. Upon calling, victims may face two risks: they could be asked for sensitive information like banking details or login credentials, which attackers can use for fraudulent transactions or account compromises. Alternatively, they might be instructed to download software purportedly to stop the charge, but this software actually infects their computer with malware. This campaign highlights the need for vigilance against phishing attempts that use familiar brands and seemingly legitimate documentation to exploit users.

Trends in application security.

Dave Bittner: Synopsys, in its latest Building Security in Maturity Model (BSIMM) report, highlights a significant trend in software security: an increased focus on automation within the software development life cycle (SDLC). Modern toolchains are enabling organizations to integrate security testing and touchpoints throughout the SDLC, not just at the initial stages ("shifting left") but at every phase, an approach the report calls "shift everywhere."

Dave Bittner: This trend is characterized by the automation of security tasks, making them more accessible and efficient. For instance, security testing in the QA stage can now be automated, similar to static application security testing (SAST) scans conducted earlier in the development process. This allows for scripted actions in response to the outcomes of automated security tests, enhancing the efficiency and effectiveness of security measures.

Dave Bittner: Furthermore, firms are increasingly utilizing automation to gather and leverage intelligence from sensors across the SDLC. This proactive approach helps in preventing vulnerabilities before they pose significant challenges to developers, thereby strengthening software security.
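
To make the "scripted actions" idea concrete, here is a minimal, hypothetical sketch of the pattern: a CI step runs a SAST scan, parses the machine-readable report, and gates the build on blocking findings. The semgrep invocation and JSON field names are illustrative assumptions rather than a reference to any particular vendor's pipeline; any scanner with JSON output fits the same shape.

```python
# Hypothetical "shift everywhere" automation: run a SAST scan in CI and
# script a response to its outcome. The scanner name, flags, and JSON
# shape below are illustrative assumptions, not a vendor's documented API.
import json
import subprocess
import sys

def run_sast_scan(target_dir: str) -> list[dict]:
    """Run a SAST scanner that emits JSON and return its findings."""
    proc = subprocess.run(
        ["semgrep", "--config", "auto", "--json", target_dir],
        capture_output=True, text=True,
    )
    report = json.loads(proc.stdout)
    return report.get("results", [])

def gate_build(findings: list[dict], blocking_severity: str = "ERROR") -> int:
    """Scripted response: fail the pipeline when blocking findings appear."""
    blocking = [f for f in findings
                if f.get("extra", {}).get("severity") == blocking_severity]
    for f in blocking:
        print(f"BLOCKING: {f.get('check_id')} in {f.get('path')}")
    # Lower-severity findings could be routed to a ticketing system here
    # instead of failing the build.
    return 1 if blocking else 0

if __name__ == "__main__":
    sys.exit(gate_build(run_sast_scan(".")))
```

The same gate could wrap a QA-stage test just as easily; the point of the report's observation is that the response to a finding is scripted rather than manual.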

Dave Bittner: Coming up after the break in our "Industry Voices" segment, we welcome Matt Radolec, Vice President of Incident Response and Cloud Operations at Varonis. He's explaining the intersection of AI, cloud, and insider threats. Stay with us. [ Music ] In today's sponsored "Industry Voices" segment, my conversation with Matt Radolec, Vice President of Incident Response and Cloud Operations at Varonis. Our conversation centers on the intersection of AI, cloud, and insider threats.

Matt Radolec: To start thinking about like the insider threat, you've got different categories, right? You have on one end of the spectrum your malicious insider, this is your, you know, Edward Snowden and the like, they have a clear motivation for doing whatever things they're going to do. And they're going to carry them out that way, you know, with high impact. Then you have your -- you know, you have your more -- lesser severity insiders like your, you know -- your person who's knowingly violating a policy or the person who's not knowingly violating a policy, that end user that makes a mistake. And I think when we look at AI, right, AI it's -- at least generative AI, it's enabling our workforce to access and amass information at a higher productivity rate, as in faster and potentially of greater efficacy, than they did before. So, it's going to take these three different scenarios, and you're going to give a tool to people to make them that much worse. So, if we go through those three different insiders again, right, you've got your malicious insider now instead of needing -- and we'll use this Snowden example, instead of needing administrative privileges and the ability to walk in and out of classified rooms with storage drives, you can just chat up your friendly AI bot and ask it to search and amass all this really interesting data for you. So, it makes their job easier in terms of getting the data and potentially getting it out. And then I think where accidents happen, and I'm not sure, Dave, if you use Microsoft 365, but I'm sure many of our listeners do, it's built, you know, Teams and the like, it's built to be so incredibly easy to collaborate, to share. You know, you can't really realize the value of data unless you can share it with someone. And so, Copilot, which is Microsoft's generative AI, is going to leverage all the data that a person has access to when it returns results. So, that means that you'll be able to query in and search a large dataset, create a new file, and share that new file with someone at a speed that I don't think people can keep up with. And so, it's going to make all these little mistakes a lot more apparent or a lot -- what we talk about a lot at Varonis is what we call the blast radius or kind of how bad is it, it's going to make the average thing worse because people are going to get access to and be able to generate data off of a larger dataset than they realize they had before.

Dave Bittner: So, where's the balance here between the utility of these tools, which I think a lot of people think is legit, versus limiting what they have access to?

Matt Radolec: You know, for a lot of organizations, they either think that they've already done that or they don't realize just how much someone has access to. So, I think the reality is that you have to figure that out. You have to determine for your own organization do we have an open access issue, do we need to try to work on that a little bit before we go to full speed with AI? Or is our data fairly locked down? Either way would benefit from trying to figure that out. And are we in a pretty good place in order to enable our workforce to use AI? Because I don't think going against it or not using it is the answer. It's such an innovation. It's just a huge innovation for mankind that even me as a person who sits at the front of, you know, organizations that are having a crisis, right, they're having an incident, they're having a breach, and they need help, I can't look at AI and say don't do this because of the productivity gains and the innovation that it's going to afford us. So, instead, I want to look to organizations, use all my knowledge skills and experience, and encourage them to do this but to think about that critical question are we giving people this ability to access and exfiltrate data at a much higher velocity than we did before? Is our blast radius too big? Should we get a handle on that before we go full speed with generative AI on these large datasets? That's really the point I'd want to challenge organizations on.

Dave Bittner: Is it fair to say that there's a good bit of crossover between insider threat and shadow IT, you know, where a lot of this could be a cultural issue where people are just trying to get their work done and they feel as though there's some friction being put in place by the team who's trying to manage security?

Matt Radolec: Absolutely. I think that's really well stated. I always think about it in terms of balancing, you know, productivity or usability with security. That is the nature of any security team in the ebb and flow that happens over the lifecycle of a security practitioner's career. To an extent, we've got to have security and security needs to be there, but we can't have things be so secure that they're not usable. And so, this tension I think it only gets multiplied when we think of AI because an organization that has these policies in place but never enforced them might not have thought that it was going to be that easy for employees to violate those policies. Well, now it's definitely gotten easier like objectively it's going to be easier for people that don't have policies to do whatever they want, and people that do have policies to likely be able to violate those policies if the organization hasn't mapped their security controls to be just as smart and just as powerful as their AI-enabled workforce now is. I really like that point you made which is like does it change the insider threat? Well, I personally predict it makes those two non-malicious insider threats, it makes them a lot more likely. Because, you know, malicious insider, I mean, it doesn't matter what technology stack you put in front of them, they're going to try to carry out their mission, they have motivation that surpasses means, right? They're going to try to find the means to carry out the mission that they have. But these accidental insiders, these people that are going to -- you know, don't know that they're creating a new spreadsheet with lots and lots of personal data in it don't realize when they go to share it with their group of coworkers that they also added their personal Gmail account on it and don't realize that that personal Gmail account is tied to another app that scrapes the emails and ingests them for searching. And now, there's a copy of that data that exists in a way that isn't protected as it's supposed to be. And I think these are the real challenges that are going to come from this AI-enabled workforce that we're all, you know, at the forefront of where, you know, people are going to be creating and storing really sensitive content in places that aren't as protected as they should be to house that data. And I think we'll see more of these, I call them more routine breaches, these are the accidental breaches, the, you know, mishandling of information, I predict we'll see more of that, not less of that with an AI-enabled workforce.

Dave Bittner: And so, what are your recommendations then? I mean, how should organizations come at this?

Matt Radolec: Yeah, this concept -- and I've said it a couple of times and I'll probably say it again, Dave, is this concept of a blast radius, right? If you pick up a person from their computer, from their chair, I always like to say, and you try to figure out how much do they have access to, how much data, how many systems, you know, how much of your crown jewels can this person get to from their day 1 of employment. What we find is that, you know, somewhere between 25 and 50 percent of that data is too much. So, if you're one of those organizations where when you go and you do that exercise, you realize that the access is too vast, you need to go through an exercise, you know, just some of the basics of security. You need to do some of that least privilege, you need to use a lot of automation in order to get there, you need to limit what people can have access to. And there are a lot of ways that you can do that to like a high degree of automation and high amount of effectiveness without taking your environment from, you know, what you think it is today to like, you know, government-grade or Fort Knox-grade security where everything is locked down and you need multiple layers of access to get through it. Sometimes just getting access control right is a really strong security control. So, just limiting the basics like, you know, kind of getting rid of data that's open to every employee or data that's shared with everyone in the company, you know, kind of limiting those pockets can have a lot of effect and a lot of success in trying to protect that data when you're trying to scale out a program where people end up ultimately getting access to more data with something like generative AI.
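
As a concrete illustration of the exercise Radolec describes, here is a minimal sketch of a blast-radius audit, assuming a hypothetical flat CSV export of user-to-resource grants (real platforms would expose this through their own admin tooling): it measures how much of the data estate each user can reach and flags resources open to everyone.

```python
# Hypothetical "blast radius" audit sketch: given a permissions export
# with "user" and "resource" columns, measure how much each user can
# reach and flag data open to the entire company. The CSV format is an
# assumed stand-in for whatever your identity/storage platform exports.
import csv
from collections import defaultdict

def load_grants(path: str) -> dict[str, set[str]]:
    """Map each user to the set of resources they can access."""
    grants: dict[str, set[str]] = defaultdict(set)
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):  # expects "user" and "resource" columns
            grants[row["user"]].add(row["resource"])
    return grants

def blast_radius_report(grants: dict[str, set[str]], headcount: int) -> None:
    all_resources = set().union(*grants.values()) if grants else set()
    total = len(all_resources)
    # Resources reachable by everyone are the first cleanup target.
    open_to_all = {r for r in all_resources
                   if sum(r in s for s in grants.values()) >= headcount}
    print(f"{len(open_to_all)} of {total} resources are open to every user")
    for user, resources in sorted(grants.items()):
        pct = 100 * len(resources) / total if total else 0
        print(f"{user}: can reach {len(resources)}/{total} resources ({pct:.0f}%)")

if __name__ == "__main__":
    grants = load_grants("permissions_export.csv")
    blast_radius_report(grants, headcount=len(grants))
```

The flagged open-to-all resources are the natural first target for the least-privilege cleanup described above.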

Dave Bittner: Our thanks to Matt Radolec, Vice President of Incident Response and Cloud Operations at Varonis for joining us. [ Music ] And finally, in the UK's Deputy Prime Minister's Annual Resilience Statement to Parliament, a significant emphasis was placed on the importance of resilience, especially in the context of cybersecurity. The statement highlighted the necessity for people to be prepared to revert to analog technologies in the case of a cyberattack that disrupts critical infrastructure like the power grid and communication systems. As reported by the Telegraph, the Deputy Prime Minister advised citizens to consider the essentials stored under their stairs, suggesting that items such as a battery-operated radio, candles, and a torch -- flashlight, that is -- are fundamental. For the Atlantic's western cousins, this list might extend to a bug-out bag, canned goods, bottled water, gold coins, and perhaps a feisty dog for added security. So, it seems in the digital age, it's still wise to keep one foot in the analog world just in case you need to tune in, light up, and bug out. [ Music ] And that's the CyberWire. For links to all of today's stories, check out our daily briefing at the cyberwire.com. We'd love to know what you think of this podcast. You can email us at cyberwire@n2k.com. We're privileged that N2K and podcasts like the CyberWire are part of the daily intelligence routine of many of the most influential leaders and operators in the public and private sector, as well as the critical security teams supporting the Fortune 500, and many of the world's preeminent intelligence and law enforcement agencies. N2K strategic workforce intelligence optimizes the value of your biggest investment, your people. We make you smarter about your team while making your team smarter. Learn more at n2k.com. This episode was produced by Liz Irvin. Our mixer is Tré Hester with original music by Elliott Peltzman. Our Executive Producers are Jennifer Eiben and Brandon Karpf. Our Executive Editor is Peter Kilpe. And I'm Dave Bittner. Thanks for listening. We'll see you back here tomorrow. [ Music ]