Notes from the cyber phases of two hybrid wars. Alerts on Cisco, Atlassian vulnerability exploitation. Updated guidance on security by design.
Dave Bittner: A bogus RedAlert app delivered spyware as well as panic. BloodAlchemy backdoors ASEAN targets. A serious Cisco zero-day is being exploited. Valve implements additional security measures for Steam. A warning on Atlassian vulnerability exploitation. Allies update their security-by-design guide. Ukrainian telecommunications providers hit by cyberattack. A Russian credential-harvesting campaign. Russian hacktivist auxiliaries hit Belgian websites. Ben Yelin explains attempts to tamp down pornographic deepfakes. Our guest is Ashley Rose from Living Security with a look at measuring human risk. And, as always, criminals see misery as opportunity.
Dave Bittner: I’m Dave Bittner with your CyberWire intel briefing for Tuesday, October 17th, 2023.
Bogus RedAlert app delivered spyware as well as panic.
Dave Bittner: Cloudflare looked into the compromised RedAlert app that served false alarms of rocket attacks against Israeli users. They traced it to a knock-off of the legitimate RedAlert app, and they found that it had spyware functionality as well as the obvious panic-inducing disinformation. Cloudflare wrote, "The malicious RedAlert version imitates the legitimate rocket alert application but simultaneously collects sensitive user data. Additional permissions requested by the malicious app include access to contacts, call logs, SMS, account information, as well as an overview of all installed apps."
Dave Bittner: The researchers also found that the bogus app was flacked using domain impersonation. The bogus website ("redalerts[dot]me") differed by the single letter "s" from the legitimate RedAlert site ("redalert[dot]me"). The site directed Apple users to the real RedAlert source, but Android users were sent to a site that served a malicious version of the app. Roger Grimes, Data-Driven Defense Evangelist at KnowBe4, urged users of any apps to use only official app stores. The official stores are imperfect, but what is not? While not perfect, they're far less risky than going off-brand. A viper may sometimes make its way into those official walled gardens, but it's usually swiftly ejected, and the unofficial sources of apps are a regular reptile house.
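The single-letter trick that snared Android users here is mechanically detectable. As a rough illustration (the function names are invented for this sketch, not anything Cloudflare or KnowBe4 ships), a defender could flag candidate domains that sit within a small edit distance of a trusted domain:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def looks_like_typosquat(candidate: str, trusted: list[str]) -> bool:
    """Flag a domain that is close to, but not identical with, a
    trusted domain -- e.g. one inserted or swapped character."""
    return any(0 < edit_distance(candidate, t) <= 2 for t in trusted)
```

With this heuristic, `looks_like_typosquat("redalerts.me", ["redalert.me"])` flags the lure described above, while the legitimate domain itself passes.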
BloodAlchemy backdoors ASEAN targets.
Dave Bittner: Researchers at Elastic Security Labs are tracking a new backdoor they’re calling “BLOODALCHEMY” that’s being used to conduct cyberespionage against governments and organizations in the Association of Southeast Asian Nations (ASEAN). BLOODALCHEMY is part of the REF5961 intrusion set described by Elastic earlier this month. The researchers believe the activity is “state-sponsored and espionage-motivated,” launched by a threat actor aligned with the Chinese government.
Dave Bittner: The researchers note that, “BLOODALCHEMY is a backdoor shellcode containing only original code (no statically linked libraries). This code appears to be crafted by experienced malware developers. The backdoor contains modular capabilities based on its configuration. These capabilities include multiple persistence, C2, and execution mechanisms. While unconfirmed, the presence of so few effective commands indicates that the malware may be a subfeature of a larger intrusion set or malware package, still in development, or an extremely focused piece of malware for a specific tactical usage.”
Cisco IOS XE zero-day exploited.
Dave Bittner: Cisco has disclosed an actively exploited zero-day vulnerability (CVE-2023-20198) in the Web User Interface feature of Cisco IOS XE software when exposed to the Internet or untrusted networks. Cisco states, “Successful exploitation of this vulnerability allows an attacker to create an account on the affected device with privilege level 15 access, effectively granting them full control of the compromised device and allowing possible subsequent unauthorized activity.”
Dave Bittner: Cisco says a threat actor has been exploiting the vulnerability since at least September 18th, with broader activity observed in October: “We assess that these clusters of activity were likely carried out by the same actor. Both clusters appeared close together, with the October activity appearing to build off the September activity. The first cluster was possibly the actor’s initial attempt and testing their code, while the October activity seems to show the actor expanding their operation to include establishing persistent access via deployment of the implant.”
Dave Bittner: Cisco strongly recommends that “organizations that may be affected by this activity immediately implement the guidance outlined in Cisco’s Product Security Incident Response Team (PSIRT) advisory.”
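The vulnerable surface is the web UI, which is only reachable when the HTTP Server feature is on; Cisco's advisory points to the `ip http server` and `ip http secure-server` configuration commands as the switches that control it. As a hedged sketch (a hypothetical helper, and no substitute for the PSIRT guidance), one could scan a saved `show running-config` dump for those lines:

```python
def webui_exposed(running_config: str) -> bool:
    """Heuristic: True if either the HTTP or HTTPS server line appears
    in its enabled (non-'no') form in a 'show running-config' dump."""
    flags = {"ip http server": False, "ip http secure-server": False}
    for raw in running_config.splitlines():
        line = raw.strip()
        if line in flags:
            flags[line] = True                 # feature enabled
        elif line.startswith("no ") and line[3:] in flags:
            flags[line[3:]] = False            # feature explicitly disabled
    return any(flags.values())
```

A config containing `ip http secure-server` would be flagged for review, while one carrying only the `no` forms would not; actual remediation should follow the advisory, not this heuristic.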
Valve implements additional security measures for Steam.
Dave Bittner: Valve will require additional security measures for game developers on Steam in an attempt to prevent compromised developer accounts from being used to push malicious updates, BleepingComputer reports. On October 24th, Valve will begin enforcing SMS-based security prompts for new updates to games’ default release branches. BleepingComputer notes that the move follows a spike in the use of compromised Steamworks accounts to distribute malware over the past few months.
Warning on Atlassian vulnerability exploitation.
Dave Bittner: Yesterday the US Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), and the Multi-State Information Sharing and Analysis Center (MS-ISAC) issued a joint Cybersecurity Advisory (CSA) on the active exploitation of CVE-2023-22515, a vulnerability in Atlassian Confluence Data Center and Server, a widely used collaboration platform. Exploitation enables a malicious actor to create unauthorized Confluence administrator accounts, with the attendant possibility of data exfiltration. The Advisory recommends immediately upgrading to a patched version of the vulnerable product. Organizations detecting exploitation of CVE-2023-22515 should, in addition to collecting relevant artifacts and reporting the compromise to responsible authorities:
"Quarantine and take offline potentially affected hosts."
"Provision new account credentials," and
"Reimage compromised hosts."
Dave Bittner: The Advisory doesn't offer attribution of the ongoing exploitation, but various security firm researchers credibly point to China's Ministry of State Security as the probable responsible threat actor.
Allies update their security-by-design guide.
Dave Bittner: The allies who produced the original guide to security by design, “Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Secure by Design Software” (the Five Eyes plus Germany and the Netherlands) have been joined by their counterparts in the Czech Republic, Israel, Japan, the Republic of Korea, Norway, the Organization of American States, and Singapore in updating the guidelines. CISA described the goal of the updated version, made available yesterday: "This guidance is intended to further catalyze progress toward investments and cultural shifts necessary for measurable improvements in customer safety; expanded international conversation about key priorities, investments, and decisions; and a future where technology is safe, secure, and resilient by design."
Ukrainian telecommunications providers hit by cyberattack.
Dave Bittner: There’s some minor skirmishing in the cyberspace surrounding Russia’s hybrid war against Ukraine.
Dave Bittner: CERT-UA reported Sunday that eleven telecommunications providers in Ukraine had experienced interference by "an organized group of criminals tracked by the identifier UAC-0165." The goal of the attacks seems to be disruption as opposed to theft or extortion. The Hacker News says that "A successful breach is followed by attempts to disable network and server equipment, specifically Mikrotik equipment, as well as data storage systems."
A Russian credential-harvesting campaign.
Dave Bittner: Researchers at Cluster25 are tracking attacks by what they characterize as a "Russia-nexus nation-State threat actor." The campaign aims at harvesting credentials, and it involves phishing with a baited PDF that carries an exploit for CVE-2023-38831, a vulnerability in WinRAR compression software versions prior to 6.23. The phishbait is a PDF that purports to share indicators of compromise associated with malware strains that include SmokeLoader, Nanocore RAT, Crimson RAT, and AgentTesla. Cluster25 offers no more specific attribution than "Russia-nexus," but The Hacker News speculates that the activity may be run by the SVR foreign intelligence service.
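CVE-2023-38831 abuses archives in which a decoy file and a same-named directory (typically distinguished by a trailing space) coexist, tricking vulnerable WinRAR versions into launching a payload from the directory instead of opening the decoy. A minimal heuristic sketch of that name collision in a ZIP archive (hypothetical function name; real triage should rely on updating WinRAR to 6.23 and on vendor signatures, not this check alone):

```python
import zipfile

def suspicious_cve_2023_38831(archive) -> bool:
    """Heuristic: flag archives where a top-level file name (modulo
    trailing spaces) also appears as a directory name -- the layout
    used by CVE-2023-38831 lures."""
    with zipfile.ZipFile(archive) as zf:
        names = zf.namelist()
    files, dirs = set(), set()
    for name in names:
        if name.endswith("/"):
            dirs.add(name[:-1].rstrip())
        elif "/" in name:
            dirs.add(name.split("/", 1)[0].rstrip())
        else:
            files.add(name.rstrip())
    return bool(files & dirs)
```

An archive holding `invoice.pdf ` alongside `invoice.pdf /invoice.pdf .cmd` trips the check; an ordinary archive with distinct file and folder names does not.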
Russian hacktivist auxiliaries hit Belgian websites.
Dave Bittner: Russian hacktivist auxiliaries have hit Belgian government websites in what they've declared to be retaliation for Belgian support for Ukraine, the Brussels Times reports. Websites belonging to the Belgian Senate, Federal Public Service Finance, the Prime Minister’s Chancellery, and the monarchy were affected last Sunday. Service had returned to normal on all but the Senate's site by early Monday morning. The hacktivists posted a message to the Senate's site complaining of Belgium’s commitment last week to supply Ukraine with F-16 fighters by 2025.
Fraud sees opportunity in misery.
Dave Bittner: Finally, returning to the other major ongoing hybrid war, the one between Hamas and Israel, there’s a surge in scams seeking to steal from people moved to donate to humanitarian relief in the Middle Eastern conflict zone.
Dave Bittner: Financially motivated criminals are using opportunities for charitable donations as phishbait. Last week Bitdefender’s Antispam Lab saw an increase in such fraudulent appeals. Some of them are cast as appeals on behalf of humanitarian organizations, with the look, more-or-less, of a relief agency site. Others are cast as personal appeals, with the diction and false intimacy usually associated with people claiming to be the widow of a Nigerian prince. In any case, be wary, and donate only to organizations you know, and whose activity you can at least to some extent verify. A big flashing whoopie light of warning, Bitdefender points out, is asking for money in certain specific forms. “Donation requests in crypto, wire transfers, and gift cards are a big red flag to be avoided at all costs.”
Dave Bittner: Resist, too, the temptation to let a scammer know that you’re on to them, and what you think of them. That just confirms that your email is being read, and that there’s someone with strong feelings behind your keyboard. They’ll be back with more chum and other phishbait. Here–we’ll do it for you: you, sirrah, are a loser and a base thief. Leave decent people alone.
Dave Bittner: And, of course, donate safely and securely where you think your charity is most needed.
Dave Bittner: Coming up after the break, Ben Yelin explains attempts to tamp down pornographic deepfakes. Our guest is Ashley Rose from Living Security with a look at measuring human risk. Stay with us. [ Music ] Ashley Rose is CEO of Living Security, a firm that specializes in the quantification of human risk. I spoke with her on how to measure human risk as a component of overall cyber risk.
Ashley Rose: Security leaders, organizations, are spending, you know, as much as over $170 billion on IT security, but we're seeing breaches continuing to rise at an unprecedented rate. The Verizon DBIR -- so they talk about human risk and the percentage of breaches that humans are responsible for. As much as 74% of these breaches are caused by some sort of, you know, human behavior or human risk. Yet, we're only spending $2.7 billion on the training or the human problem. So one of the most important things to note is that cybersecurity is a human problem, and we've been trying to solve it through, basically, improper investment in other technology. And here we still are 10 years later.
Dave Bittner: You know, I hear folks talk a lot about insider risk. Is there a nuanced difference between that and human risk?
Ashley Rose: So we look at human risk as an expansion on insider risk because I think oftentimes when we think about insider risk people align it more closely to, you know, insider threat. And when you think about insider threat, oftentimes there's the notion that it's, you know, from a malicious intent perspective. And, obviously, that's, you know, not accurate in its true description. But what we really want to understand with human risk, and specifically human risk management, is how do we take a more proactive view at the behaviors that are causing risk to the organization. So it's really this notion of being able to shift left from a predict-and-prevent perspective versus a detection and response, which is where I see most of the sort of insider threat or insider risk tools, you know, fitting into the security tech stack today.
Dave Bittner: Well, can you take us through some of the primary elements here that encompass human risk? What are some of the things that you all track?
Ashley Rose: For Living Security, specifically, we think about our human risk and that's our human risk score. We're actually looking at three different components that make up risk. So the easiest one, or the one that people are most familiar with, would just be the behaviors themselves. Some examples of behaviors would be, you know, a user observed using a [inaudible 00:13:49] browser, repeat phishing offenders, you know, phishing followed by an incident or malware, password management adoption, MFA adoption, sharing sensitive data against policy. So there's a number of different behaviors that could cause risk to the organization. But when we think about true holistic risk management, you know, risk to the company exists beyond just the vulnerability. We also need to think about the threat. You know, when we think of our risk model, we have to combine those behaviors with also the events that could be causing risk to that individual or to the organization. So if someone is highly targeted by a lot of malicious or spam-based emails, they're going to be at a higher probability to falling susceptible to phishing, for instance. And then the third component -- so we have our threat, we have our vulnerabilities or our behaviors, and then we also have to think about the impact. So who is that person? What is their job title? What is their role? What kind of data or sensitive data do they have access to? Right? What's the impact if that person is compromised or if there is a breach? And so when we think about human risk, we're actually looking at all three components and then combining it to create this sort of view or quantification of human cyber risk for companies.
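Rose's three components map onto the classic risk formulation of threat times vulnerability times impact. A toy sketch of that structure (the field names, weights, and 0-100 scale are invented for illustration; Living Security's actual scoring model isn't described in this episode):

```python
from dataclasses import dataclass

@dataclass
class UserRiskInputs:
    behavior_scores: list[float]  # vulnerability: risky behaviors, each 0.0-1.0
    targeting_level: float        # threat: how heavily the user is targeted, 0.0-1.0
    access_sensitivity: float     # impact: blast radius if compromised, 0.0-1.0

def human_risk_score(u: UserRiskInputs) -> float:
    """Combine vulnerability, threat, and impact into one 0-100 score.
    The multiplicative form mirrors risk = threat x vulnerability x
    impact; the exact combination here is illustrative only."""
    vulnerability = (sum(u.behavior_scores) / len(u.behavior_scores)
                     if u.behavior_scores else 0.0)
    return round(100 * vulnerability * u.targeting_level * u.access_sensitivity, 1)
```

The multiplicative form captures the point made above: a heavily targeted admin with risky habits scores far higher than a low-access user with the same habits, and a user with no observed risky behavior scores zero regardless of targeting.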
Dave Bittner: To what degree do things like security awareness training come into play here?
Ashley Rose: Yeah, so security awareness and training is really where we got our start. And most of the companies that we come into -- the way that they're measuring and monitoring the human side of risk is through traditional security awareness and training metrics. And so those are things like, you know, phishing click rates, phishing report rates, quiz scores on training. Those are the traditional, like, compliance or training metrics that we see, you know, kind of the earliest companies start with. Our goal is actually to expand beyond just those phishing -- the simulated phishing metrics for companies and to take more of a holistic view of risk. And so you had asked earlier, you know, if I think from a categorical perspective what matters. Training and compliance is something that we do track and monitor, but we're also looking at things like account compromise. We're looking at data loss. We're looking at malware. We're looking at phishing in email. And so you can think about human risk management as an expansion of the security awareness and training program, where those metrics are important, but they're only one piece of the overall pie.
Dave Bittner: You know, I think cybersecurity is so focused on a lot of the technical aspects here. I'm wondering, do you find that there are areas that people mistakenly assign a technical side when it really is a human element?
Ashley Rose: Absolutely. And, you know, I think oftentimes, you know, as we've seen even most recently, you know, with some of the social engineering attacks and this [inaudible 00:16:45] are hitting the hospitality industry. The human is traditionally, like, the first point in, right? The first part of the attack. And then there's this kind of kill chain, this -- you know, the technical controls then start failing, right, beyond the human component. And so then I think what we're seeing is, you know, CISOs are becoming disillusioned by the opportunity of being able to affect that initial point of entry. Because security awareness and training of phishing for so long have failed to mitigate that and change behavior, and so there is an overemphasis on what happens next. And I think the fact of the matter is, as we've seen in the last 10 to 15 years, no matter how many controls that we have in place, what different types of technologies we have in place, the human is still a major vulnerability and point of attack. And we need to be effectively addressing it and thinking about a different way to do that.
Dave Bittner: That's Ashley Rose from Living Security. [ Music ] And joining me once again is Ben Yelin. He's from the University of Maryland Center for Health and Homeland Security, and also my co-host on the "Caveat" podcast. Hello, Ben.
Ben Yelin: Good to be with you, Dave.
Dave Bittner: So, interesting article from WIRED. This is written by Matt Burgess and the title is "Deepfake Porn is Out of Control." And it really highlights some of the issues that folks are facing here. It's certainly a policy issue. We've talked about deepfakes over on "Caveat" quite a bit. And as the tools become more readily available, this trend of people using deepfake technology to generate pornography -- and this article specifically is talking about nonconsensual imagery and videos and things like that that are vastly disproportionately used in abuse and harassment of women, and the issues there. Before we dig into some of the details here, is that a decent description of what they're talking about here, Ben?
Ben Yelin: Yeah, I mean, I think there have been a couple of factors. One is the improvement in AI technology; it makes it easier not only to create deepfakes but to make them more realistic. And then there's the proliferation of websites either exclusively devoted to deepfakes or partially devoted to deepfakes, to the extent that you can use search engines, Microsoft or Google, to find specific websites dedicated to hosting these images. So there's some responsibility, obviously, for the website makers themselves, but also potentially for these search engines which are directing people to these websites. And I think deepfakes have been a problem for about a half decade, but the problem is growing exponentially because of these factors.
Dave Bittner: And so what are some of the potential policy solutions to something like this?
Ben Yelin: So it is really hard to target policies against deepfakes. I know here in the State of Maryland we've had long conversations about how, just on a practical level, we can start to regulate it. What California has done is to provide a cause of action in limited circumstances for people who feel that they've been the victim of deepfake porn videos. That is a solution that's going to be opposed by the industry; they don't want to be held liable. Especially some of these search engines. It's certainly hard at the federal level when trying to hold the search engines accountable. They are protected by Section 230 of the Communications Decency Act. Public pressure on the search engines is certainly something that's achievable. I think both Google and Microsoft expressed in this article that it is not their intention to facilitate the distribution of deepfakes. For Microsoft, they said that this violates their policies on what can be displayed in a search engine query and that any result containing deepfakes should be reported. And I think Google said something similar, as well. So there's sort of the rely on the private sector or try to regulate this at the federal or state level. The problem is just jurisdictional. I mean, I think we've seen with a lot of state laws that are targeting any type of internet activity, it's just very difficult to enforce. You only have jurisdiction over your own state. And then there become a bunch of jurisdictional questions. What counts as a deepfake being posted within this state? Are you banning residents of your state from accessing deepfake videos, or are you simply banning people from posting deepfake videos? Which you can only do if they are within the jurisdiction of your state. So this is not an exclusive problem to deepfake videos. We've seen this with states, for example, trying to ban TikTok in app stores that operate within the State of Montana, as one example. And I think that same struggle is manifesting itself on this issue.
Dave Bittner: You know, I was puzzling through this in my own mind and wondering, could this go the way of CSAM, you know, Child Sexual Abuse Materials. But I think because you could make a case where, for example, you know, this is something that consenting adults could enjoy, you'd have trouble with a universal ban of something like this.
Ben Yelin: Yeah, I mean, it's very difficult because CSAM is very clearly unprotected First Amendment activity. There's really a carve-out in the First Amendment for CSAM. It's more complicated here. I think when we're talking about -- what this article is really referring to, which is the nonconsensual use of these deepfake images or videos, that to me is more of a clearcut case where there is no First Amendment public policy rationale for allowing that material. The risks certainly outweigh any of the benefits. But when we're not talking about nonconsensual images, I think however disgusted you are, you have to recognize that the First Amendment comes into play. And there could be some artistic value or political value or just, kind of, any value adding something to the public square of conversation in some of these images that are going to trigger First Amendment protections. I think any challenge to both consensual and nonconsensual deepfake videos are going to run into those First Amendment challenges because it is an inhibition on First Amendment-protected activity. You know, we have decided that a bunch of things that are technically -- or restrictions that are not allowed under our First Amendment jurisprudence should nonetheless be allowed for public policy reasons. We've done that in a number of circumstances, including certain types of obscenity, false advertising. So I think it's possible for us to make a societal choice that this type of nonconsensual pornography with deepfakes is unacceptable and falls outside of First Amendment-protected activity. We have not made that decision yet as a society. So I think it's going to be part of our national conversation.
Dave Bittner: Is this another example of the technology perhaps outstripping the policy's ability to deal with it?
Ben Yelin: Yeah, it always does. You know, we've now gone about a half-decade with deepfakes being an issue. I think, you know, there have been Congressional hearings on deepfakes and the deleterious impact of them. A lot of social psychology experts from many of our best universities have been writing about the harmful mental health effects of deepfakes on mainly the women being depicted in them. So it's certainly entered into the zeitgeist, but we have not seen a lot of concrete -- outside of California, we haven't seen a lot of concrete policy changes in this area. So it is true that technology always outpaces the ability of our legal system to respond. And I think that's definitely the case here.
Dave Bittner: All right, we'll point you back to the article here again, written by Matt Burgess, "Deepfake Porn is Out of Control." That's over on WIRED. Ben Yelin, thanks so much for joining us.
Ben Yelin: Thank you. [ Music ]
Dave Bittner: And that's the "CyberWire." For links to all of today's stories, check out our "Daily Briefing" at thecyberwire.com. We'd love to know what you think of this podcast. You can email us at email@example.com. Your feedback helps us ensure we're delivering the information and insights that help keep you a step ahead in the rapidly changing world of cybersecurity. We're privileged that N2K and podcasts like the "CyberWire" are part of the daily intelligence routine of many of the most influential leaders and operators in the public and private sector. As well as the critical security teams supporting the Fortune 500 and many of the world's preeminent intelligence and law enforcement agencies. N2K's Strategic Workforce Intelligence optimizes the value of your biggest investment, your people. We make you smarter about your team while making your team smarter. Learn more at n2k.com. This episode was produced by Liz Irvin and senior producer Jennifer Eiben. Our mixer is Tre Hester, with original music by Elliot Peltzman. The show was written by our editorial staff. Our executive editor is Peter Kilpe and I'm Dave Bittner. Thanks for listening. We'll see you back here tomorrow. [ Music ]