The CyberWire Daily Podcast 10.23.23
Ep 1932 | 10.23.23

How people get over on the content moderators.

Transcript

Dave Bittner: Okta discloses a data exposure incident. Cisco works to fix a zero-day. DPRK threat actors pose as IT workers. The Five Eyes warn of AI-enabled Chinese espionage. Job posting as phishbait. The risk of first-party fraud. Hacktivists trouble humanitarian organizations with nuisance attacks. Content moderation during wartime. Malek Ben Salem of Accenture describes code models. Our guest is Joe Oregon from CISA, discussing the tabletop exercise that CISA, the NFL, and local partners conducted in preparation for the next Super Bowl. And the International Criminal Court confirms that it’s sustained a cyberespionage incident.

Dave Bittner: I’m Dave Bittner with your CyberWire intel briefing for Monday, October 23rd, 2023.

Okta discloses a data breach.

Dave Bittner: Identity and access management company Okta has disclosed a data breach affecting some of the company’s customers. The company stated, “The threat actor was able to view files uploaded by certain Okta customers as part of recent support cases. It should be noted that the Okta support case management system is separate from the production Okta service, which is fully operational and has not been impacted. In addition, the Auth0/CIC case management system is not impacted by this incident.”

Dave Bittner: KrebsOnSecurity notes that “it appears the hackers responsible had access to Okta’s support platform for at least two weeks before the company fully contained the intrusion.”

Cisco works to fix zero-day.

Dave Bittner: Cisco has disclosed a new zero-day vulnerability (CVE-2023-20273) that was used to deploy malware on IOS XE devices compromised via CVE-2023-20198, another zero-day the company disclosed last week, BleepingComputer reports. According to data from Censys, as of October 18th nearly 42,000 Cisco devices had been compromised by the backdoor, though that number is steadily falling. Cisco said in an update on Friday that “[f]ixes for both CVE-2023-20198 and CVE-2023-20273 are estimated to be available on October 22.”
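Cisco Talos's guidance for these CVEs included a quick self-check administrators could run: an unauthenticated POST to one web UI endpoint returns a bare hexadecimal string on an implanted device. What follows is a minimal Python sketch of that published check, not an official Cisco tool; the address is a documentation placeholder, it should only be pointed at devices you administer, and Cisco's current advisory remains the authority.

import re
import sys

import requests
import urllib3

# Management interfaces commonly use self-signed certificates.
urllib3.disable_warnings()

def check_for_implant(host: str) -> bool:
    # Endpoint taken from Cisco Talos's published detection guidance
    # for the IOS XE web UI implant.
    url = f"https://{host}/webui/logoutconfirm.html?logon_hash=1"
    try:
        resp = requests.post(url, verify=False, timeout=10)
    except requests.RequestException as exc:
        print(f"{host}: unreachable ({exc})")
        return False
    body = resp.text.strip()
    # A clean device returns an ordinary web UI page; per Talos, an
    # implanted one answers with a bare hexadecimal string.
    if re.fullmatch(r"[0-9a-fA-F]{16,64}", body):
        print(f"{host}: hexadecimal response -- implant likely present")
        return True
    print(f"{host}: no implant indicator in the response")
    return False

if __name__ == "__main__":
    # 192.0.2.1 is a documentation address; substitute a real device IP.
    check_for_implant(sys.argv[1] if len(sys.argv) > 1 else "192.0.2.1")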

DPRK threat actors pose as IT workers.

Dave Bittner: The FBI has issued a public service announcement offering “guidance to the international community, the private sector, and the public to better understand and guard against the inadvertent recruitment, hiring, and facilitation” of North Korean IT workers. The Bureau notes that “[t]he hiring or supporting of DPRK IT workers continues to pose many risks, ranging from theft of intellectual property, data, and funds, to reputational harm and legal consequences, including sanctions under U.S., ROK, and United Nations (UN) authorities.”

Five Eyes warn of AI-enabled Chinese espionage.

Dave Bittner: In a joint call last Tuesday, Five Eyes counterintelligence leaders called out Beijing for what they characterized as theft of intellectual property on an “unprecedented” scale. The Five Eyes (Australia, Canada, New Zealand, the United Kingdom, and the United States) called on industry and universities to help counter this threat of Chinese espionage.

Dave Bittner: Such espionage is nothing new, but what the Five Eyes find particularly unsettling is the use of artificial intelligence in these campaigns. AI can amplify and augment an already serious threat. 

Dave Bittner: The Five Eyes' counterintelligence leads have been unusually open in their assessment of the Chinese espionage threat. They took their concerns to the broader public in an unprecedented joint appearance on CBS News' "60 Minutes" yesterday evening. They clearly want as many people as possible to get the message.

Job posting as phishbait.

Dave Bittner: WithSecure is tracking a cluster of Vietnamese cybercriminal groups that are using phony job postings to distribute malware-laden documents. The researchers say, “The WithSecure Detection and Response Team (DRT) detected and identified multiple DarkGate malware infection attempts against WithSecure Managed Detection and Response (MDR) customers in the UK, US, and India. It rapidly became apparent that the lure documents and targeting were very similar to recent DuckTail infostealer campaigns, and it was possible to pivot through open source data from the DarkGate campaign to multiple other infostealers which are very likely being used by the same actor/group.”

Dave Bittner: The criminals are primarily interested in stealing information and hijacking Facebook Business accounts.

The risk of first-party fraud.

Dave Bittner: Hey, America: we hear that about a third of you are thieves. Socure has published a report finding that first-party fraud costs US financial institutions more than $100 billion per year. First-party fraud sounds exotic, but it’s just fraud where those who commit it use their own identity.

Dave Bittner: Additionally, the survey found that “more than one in three Americans (35%) admit to committing first-party fraud themselves.” The researchers explain, “This includes requesting a refund on an online purchase by falsely claiming that a delivery has been lost, choosing not to pay off credit card bills indefinitely, making a purchase through a ‘Buy Now Pay Later’ (BNPL) loan or maxing out a credit card with no intention of paying it off, or disputing a legitimate financial transaction.”

Dave Bittner: We hate this stuff. Shame on you, Mr. and Mrs. United States.

Hacktivists trouble humanitarian organizations with nuisance attacks.

Dave Bittner: Pro-Hamas (or at least anti-Israeli) hacktivists disrupted some online services in an unspecified cyberattack against Tel Aviv’s Sheba Medical Center at Tel Hashomer. The hospital took itself offline and reverted to manual operations, but patient care has continued. The Jerusalem Post reports that the Israeli health ministry has disconnected several other hospitals from the Internet as a precautionary measure. The Jerusalem Post also reports that the website of the Israeli Chevra Kadisha (Jewish Burial Society) was defaced Saturday with anti-semitic slurs and images. The defaced pages displayed the coup-counting claim "hacked by x7root."

Dave Bittner: These incidents appear to be instances of a larger trend. It’s important to note that Palestinian as well as Israeli organizations have been affected. The Wall Street Journal reports that humanitarian organizations serving people on both sides of the conflict have increasingly come under hacktivist attack.

Content moderation during conflict.

Dave Bittner: The European Commission is waiting for satisfactory responses from X (the platform formerly known as Twitter), TikTok, and Meta (corporate parent of Facebook and Instagram) to allegations that they're out of compliance with the anti-disinformation and anti-hate speech provisions of the EU's Digital Services Act.

Dave Bittner: The European Commission's inquiries are directed principally against disinformation and hate speech aligned with Hamas. But content moderation, ineffectual as it may have been, has apparently had adverse effects on the Palestinian population in Gaza. WIRED describes some of the ways in which moderation amounts to shadow banning. Reports say that it can make it difficult for Palestinians to share warnings, information about basic necessities, and personal news concerning family members.

Dave Bittner: Eastern Europe and the Middle East aren't the only regions where conflict is outrunning platforms' content moderation capabilities. Bellingcat describes how Hindu nationalists are taking advantage of YouTube's Art Tracks autogeneration functionality to produce Hindutva Pop. The genre is associated, Bellingcat says, with incitement to violence against Muslims, and with calls for Muslims' expulsion from India.

Dave Bittner: Content moderation has remained notoriously labor-intensive and difficult. It becomes more so as people determined to communicate come up with codewords, slang, typographic substitutions, and the like. Their hope is to slip past automated gatekeepers. Suppose you wanted to talk about your pooch on a determinedly anti-canine platform. Calling your dog a “doge” isn’t going to fool an actual cryptographer, but you might well put that one over on the screening tools. They don’t have the street smarts an actual anti-canine human moderator would have.
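As a toy illustration of why that works, here is a sketch of the general technique, not any platform's actual pipeline: a naive keyword-blocklist moderator in Python that stops "dog" cold but waves "doge" and "d0g" right through.

import re

BLOCKLIST = {"dog", "canine", "puppy"}

def naive_moderator(post: str) -> bool:
    """Return True if the post should be blocked."""
    # Exact-token matching: the filter only sees words that appear
    # verbatim on the blocklist.
    tokens = re.findall(r"[a-z0-9]+", post.lower())
    return any(token in BLOCKLIST for token in tokens)

posts = [
    "Look at my dog!",   # blocked: exact match on the blocklist
    "Look at my doge!",  # allowed: one added letter defeats the match
    "Look at my d0g!",   # allowed: digit-for-letter substitution
]
for post in posts:
    verdict = "BLOCKED" if naive_moderator(post) else "allowed"
    print(f"{verdict}: {post}")

A human moderator reads "d0g" the way its author intended; the exact-match filter never does, which is exactly the gap the codewords exploit.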

Dave Bittner: The Washington Post has an account of how, for better or for worse, pro-Palestinian social media users are employing such measures to circumvent platforms' content moderation.

Cyberespionage at the ICC.

Dave Bittner: And, finally, TechCrunch reports that the International Criminal Court (ICC) has confirmed that a cyberattack it sustained last month was indeed cyberespionage. The ICC said, “The attack can therefore be interpreted as a serious attempt to undermine the court’s mandate.”  

Dave Bittner: It looks like a government-sponsored operation. The ICC hasn't determined what government is behind the attack, but it's almost certainly Russia. Moscow has been determinedly hostile to the court since the ICC issued a warrant for President Putin's arrest. (Russia retaliated by issuing its own arrest warrants for the court's president, deputy, chief prosecutor, and one judge.) The ICC expects to be the target of disinformation campaigns designed to destroy its legitimacy. It views September's cyberespionage as preparatory work for that disinformation. 

Dave Bittner: The ICC has briefly outlined the steps it's taken to mitigate the attack, and says that Dutch police are investigating. Good hunting to them.

Dave Bittner: Coming up after the break, Malek Ben Salem from Accenture describes code models. Our guest is Joe Oregon from CISA, discussing the tabletop exercise that CISA, the NFL, and local partners conducted in preparation for the next Super Bowl. Stay with us. Joseph Oregon is chief of cybersecurity for CISA Region Nine, where they recently collaborated with security professionals from the NFL as well as local partners for a tabletop exercise exploring potential vulnerabilities around this season's Super Bowl. For Joseph Oregon, it's a prime example of the type of partnering that CISA hopes to promote.

Joseph Oregon: A tabletop exercise, in a nutshell, is an informational, discussion-based walk-through of different scenarios, and they're created or customized by us, by CISA, to help stakeholders address their roles and responsibilities during a specific incident. So as an example, we may help stakeholders by creating a scenario which helps them walk through how they would respond to a ransomware incident, or maybe even exercise an incident response plan, or respond to a physical incident at their location. I'll take a moment just to highlight that this resource is free, right; in fact, our regional offices and our headquarters elements have dedicated professionals who help craft tabletop exercises for our partners, and it's something that a lot of organizations, whether they're public or private, leverage, because it comes with a lot of benefits. We have an actual team that will work with organizations, that will actually deploy out to a location and help them walk through the scenario. We try to look at it from a humble approach, so in what we facilitate, we take our cues from those partners. The NFL is one such partner who reached out to CISA. And because of their involvement with the Super Bowl and other various events, they partnered with CISA to put on a tabletop exercise that not only covers what they do within the NFL to manage particular incidents, but also looks at how private sector and public sector entities in the location of their event manage an incident. So really, it's this huge collaboration, an example of private and public sector entities coming together and walking through, you know, this tabletop exercise. And to your initial point, David, with regards to, you know, why they approached CISA: they look at it as a collaborative relationship, right. They know that we are the government agency that's the operational lead for federal cybersecurity and the national coordinator for critical infrastructure security and resilience. Knowing that, they want to make sure that, you know, they're checking the boxes as well and understand the processes from a federal government perspective. So they reach out and they work with us, and work with the local partners there, to get involved and provide that assistance -- or not assistance, rather, but provide that awareness of their events and what they look for as it pertains to security and the security scenarios that they can walk through with both public and private sector partners.

Dave Bittner: What's your message to folks who aren't operating at the scale or level of someone like the NFL? You know, an organization that's in, you know, one of the 50 states and perhaps has a manufacturing facility or, you know, something of moderate scale, and thinks that they may want to reach out and start a relationship with CISA. Is that something that you're looking to encourage?

Joseph Oregon: Oh, we encourage it all the time. And in fact, beyond the example that we used earlier with the NFL, we work with organizations of all kinds of sizes, whether they're private or public. We work with K-12 and cities and counties. We work with critical infrastructure such as water and wastewater. We work with a number of state partners as well as private sector partners. So as we look at smaller organizations that are looking to leverage resources that the federal government provides for free, as in the case of tabletop exercises, we facilitate those resources to our partner sets across the board. So we heavily encourage our partners, if they're interested, to definitely reach out to the CISA reps that we do have in the field, or they can go to our website at cisa.gov and identify who those points of contact might be in their respective state. I'd like to make a quick note, as we're going into Cybersecurity Awareness Month. So on September 29th, today, CISA officially kicks off our 20th Cybersecurity Awareness Month. So throughout the month of October, CISA and our cooperative agreement recipient, the National Cybersecurity Alliance, will focus on ways to Secure Our World. We educate individuals and organizations on how to stay safe online. So this is a collaborative effort between government and industry to enhance cybersecurity awareness on a national and global scale. We're trying to build off of last year's initiatives, that is, using strong passwords and password managers, turning on multifactor authentication, recognizing and reporting phishing, and finally, updating software. So we're building off that strong message. And as we look at CISA, what we're trying to do is help shape behavioral change by encouraging people to adopt and improve ongoing cybersecurity habits that reduce risk while online or on a connected device.

Dave Bittner: That's Joseph Oregon, chief of cybersecurity for CISA Region Nine. [ Music ] And joining me once again is Malek Ben Salem. She is the managing director for Security and Emerging Technology at Accenture. Malek, it's always great to welcome you back. I want to talk today about code models. There's been a lot of excitement with AI and some of the tools that can help people here. Can you unpack this for us? What are we talking about?

Malek Ben Salem: Yeah, absolutely. I mean, over the past year, we've seen a lot of large language models being published or announced. Some of those large language models that generate text also generate code, right, source code -- so Java code, Python code, etcetera. Some are even dedicated to generating code, so they don't generate regular text but are focused just on code. Some of them are open source, others are proprietary. But many of my clients are really interested in deploying, or at least experimenting with, these code models, potentially deploying them to help with application development. And the numbers that Gartner has published do support that. Gartner, for example, expects that 15% of new applications will be automatically generated by AI without any human in the loop by 2027. And they expect that 30% of enterprises will have implemented an AI-augmented development and testing strategy by 2025, so just in two years. But what I wanted to share with the audience today, for those who are considering these code models, is, you know, to think through some of the potential risks and considerations as they select the right code model. So as I mentioned, some of these code models have been trained using open source data, others have been trained using proprietary data. And those two different types of training approaches, or data sets, come with, you know, potential liability risks and IP ownership risks. You're probably aware of certain lawsuits going on against certain models where, you know, open source repo contributors are claiming ownership, or at least some IP ownership, or copyright infringement of the code that they have contributed to those repos. So that may carry some liability for the end users of these models and the organizations developing these models. And that question of IP ownership is not clear. Suppose you're deploying a code model within your organization: the code generated by that model, is that owned by you as the organization; is that owned by the vendor who is providing that code model for you; or is it owned by the developers who contributed the training data for that model? You know, that's a grey area. So that's something to keep in mind. I mean, I'm not discouraging clients from experimenting and thinking through their use cases. I think there are tremendous benefits in terms of developer productivity, but I'd like to highlight some of the risks. The other thing I'd like to point out is, you know, definitely there are improvements in efficiency, but I think at this point, at least, these code models work well with developers. They're wonderful pair programmers. But I don't think they're ready for completely, you know, generating code on their own. The capability is not there. But also, there are security risks associated with that. It has been shown that these models generate code that may work functionally but carry some security vulnerabilities. And that's not really surprising, because they've been trained with code that's out there in the public, you know, open source code that may be riddled with security vulnerabilities, and they're mimicking or regenerating those types of vulnerabilities. So if you're considering deploying these code models, I think it's critical to double down on your security scanning processes; make sure that you perform, you know, SAST scans, source code scans, to discover these types of vulnerabilities.
The other thing, you know, to consider is the long term. I'm sure the performance of these code models will improve over time. But something to keep in mind is that when new zero-days get discovered, the time to retrain those code models so that they generate source code that is secure, that is not exploitable through those zero-days, is probably much longer than the time it would take, you know, the security scanning companies, if you will, to be able to detect that type of zero-day attack. So again, that's another consideration to think through as you're, you know, assessing the value and risk of the use and deployment of these code models.
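To make that failure mode concrete, here is a hypothetical Python sketch of the pattern Malek describes, not code from any particular model: a query built by string formatting that works on friendly input but is injectable, next to the parameterized version that a SAST pass (Bandit's B608 check for string-built SQL, for instance) would steer you toward.

import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, name: str):
    # Functionally fine on friendly input, but string formatting lets
    # an attacker smuggle SQL in through `name`.
    cur = conn.execute(f"SELECT id, name FROM users WHERE name = '{name}'")
    return cur.fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str):
    # Parameterized query: the driver treats `name` as data, not SQL.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (name,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "' OR '1'='1"
print(find_user_unsafe(conn, payload))  # returns every row in the table
print(find_user_safe(conn, payload))    # returns nothing

Both functions pass a casual functional test with a name like "alice", which is why scanning, not eyeballing, is the recommended backstop.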

Dave Bittner: Yeah. It really strikes me as being -- I mean, is it fair to say it's a supply chain risk here? I think about open source software and how we've seen examples of, you know, people inserting bad things into popular libraries and so on and so forth. But, you know, that has the eyes of the community on it, where we think about LLMs as being kind of a black box here. It seems to me that's a significant difference.

Malek Ben Salem: Yeah, absolutely. I think that's one piece of it, that there is a supply chain risk as well, or, you know, in this case, a data poisoning risk if the security vulnerabilities are inserted on purpose; that's definitely one risk. In other cases, you can opt for proprietary models, or models that have been trained with proprietary data. But it's important to understand, you know, the trade-offs. It's important to also compare these models with respect to the quality of their output. And they vary significantly, right; their performance varies significantly. And luckily, there have been some data sets published for benchmarking these models, so organizations can do that as part of the due diligence as they're selecting the right code model for their organization. But overall, I think what I'd recommend is: use them as pair programmers; use them for tasks like, you know, quick code translation or explanation. I don't think they're ready for independent code generation. And definitely, you know, focus on your source code security testing and other types of application testing to deploy or to adopt these models safely.
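As a rough sketch of how the published benchmarks Malek mentions tend to work (HumanEval-style functional-correctness suites are one common example; the tasks below are toy stand-ins, not a real data set), a harness executes each model-generated candidate against the task's unit tests and reports the fraction of tasks solved.

from typing import Callable, List, Tuple

# Each task pairs a name, a candidate solution as generated by a
# hypothetical model, and a test function over the compiled candidate.
TASKS: List[Tuple[str, str, Callable[[Callable], bool]]] = [
    (
        "add",
        "def solution(a, b):\n    return a + b\n",
        lambda f: f(2, 3) == 5 and f(-1, 1) == 0,
    ),
    (
        "is_palindrome",
        # A plausible model slip: forgets to normalize case.
        "def solution(s):\n    return s == s[::-1]\n",
        lambda f: f("level") and f("Level"),
    ),
]

def run_candidate(source: str, tests) -> bool:
    namespace: dict = {}
    try:
        exec(source, namespace)          # compile the generated code
        return bool(tests(namespace["solution"]))
    except Exception:
        return False                     # crashes count as failures

solved = sum(run_candidate(src, tests) for _, src, tests in TASKS)
print(f"pass rate: {solved}/{len(TASKS)}")

Real harnesses sandbox the execution step; this sketch trusts its hardcoded candidates, which is fine for a demo and unsafe for untrusted model output.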

Dave Bittner: Kind of think of them as your junior partner, right?

Malek Ben Salem: Exactly, yes.

Dave Bittner: Someone who can help you, but you've got to keep an eye on them, yeah.

Malek Ben Salem: Yep, absolutely.

Dave Bittner: All right, Malek Ben Salem, thank you so much for joining us.

Malek Ben Salem: Thanks for having me.

Dave Bittner: And that's the CyberWire. For links to all of today's stories, check out our Daily Briefing at the cyberwire.com. We'd love to know what you think of this podcast. You can email us at cyberwire@n2k.com. Your feedback helps us ensure we're delivering the information and insights that help keep you a step ahead in the rapidly changing world of cybersecurity. We're privileged that N2K and podcasts like the CyberWire are part of the daily intelligence routine of many of the most influential leaders and operators in the public and private sector, as well as the critical security teams supporting the Fortune 500 and many of the world's preeminent intelligence and law enforcement agencies. N2K strategic workforce intelligence optimizes the value of your biggest investment, your people. We make you smarter about your team, while making your team smarter. Learn more at n2k.com. This episode was produced by Liz Irvin and senior producer Jennifer Eiben. Our mixer is Tré Hester, with original music by Elliott Peltzman. The show was written by our editorial staff. Our executive editor is Peter Kilpe, and I'm Dave Bittner. Thanks for listening. We'll see you back here tomorrow.