A high-stakes swap.
Notorious Russian cybercriminals head home after a historic prisoner exchange. An Israeli hacktivist group claims responsibility for a cyberattack that disrupted internet access in Iran. The U.S. Copyright Office calls for federal legislation to combat deepfakes. Cybercriminals are using a Cloudflare testing service for malware campaigns. The GAO instructs the EPA to address rising cyber threats to water and wastewater systems. Claroty reports a vulnerability in Rockwell Automation’s ControlLogix devices. Apple has open-sourced its homomorphic encryption (HE) library. CISA warns of a high severity vulnerability in Avtech Security cameras, and the agency appoints its first Chief AI Officer. We welcome Tim Starks of CyberScoop back to the show today to discuss President Biden's cybersecurity legacy. Can an AI chatbot recognize its own reflection?
Today is Friday, August 2nd, 2024. I’m Dave Bittner. And this is your CyberWire Intel Briefing.
Notorious Russian cybercriminals head home after a historic prisoner exchange.
A significant prisoner exchange took place between the United States, Russia, and Germany, involving the release of prominent cybercriminals and others. This exchange included Wall Street Journal reporter Evan Gershkovich and former U.S. Marine Paul Whelan from Russia. The U.S. released Russian cybercriminals Roman Seleznev and Vladislav Klyushin.
Seleznev, a notorious hacker, was sentenced in 2017 to 27 years for his role in major credit card fraud. Known by aliases like "Track2," he operated large-scale cybercrime operations and sold stolen credit card data online. Klyushin was extradited to the U.S. for a hack-to-trade scheme that earned $93 million by trading on confidential information. Convicted in February 2023, he was sentenced to nine years.
President Biden described the swap, which involved several countries, as a diplomatic achievement. Experts consider it the largest such exchange since the Cold War. Both Seleznev and Klyushin are now returning home as part of this extensive diplomatic effort.
Ransomware attacks rising in healthcare with Errol Weiss
The American Hospital Association (AHA) and Health-ISAC yesterday issued a joint threat bulletin regarding ransomware attacks in the healthcare industry, citing recent attacks against Octapharma, Synnovis, and OneBlood. While these attacks "appear to be unrelated and have been conducted by separate Russian-speaking ransomware groups," the report states that "the unique nature and proximity of these ransomware attacks - targeting aspects of the medical blood supply chain within a relatively short time frame, is concerning."
An Israeli hacktivist group claims responsibility for a cyberattack that disrupted internet access in Iran.
An Israeli hacktivist group called WeRedEvils claimed responsibility for a cyberattack that disrupted internet access in parts of Iran, including Tehran. The group announced the attack on Telegram, warning of imminent disruptions to Iranian internet services. Reports confirmed internet outages in Iran, though the extent is unclear.
WeRedEvils stated they accessed Iran's communications system and shared information with Israeli security forces. The group has launched multiple attacks since the October 2023 Hamas attack on Israel, escalating tensions with Iran. Their actions coincide with increased hostilities following the Israeli assassination of Ismail Haniyeh, Hamas’ political leader, in Tehran.
The Biden Administration is preparing for potential Iranian retaliation, with expectations of involvement from Hezbollah. WeRedEvils previously claimed responsibility for hacking Iran's oil infrastructure and disabling Tehran's electrical grid, highlighting their ongoing cyber warfare efforts.
The U.S. Copyright Office calls for federal legislation to combat deepfakes.
The U.S. Copyright Office has released the first part of a comprehensive report examining the impact of artificial intelligence, focusing initially on the issue of "digital replicas" or deepfakes. The report highlights the rapid advancements in AI that enable the creation of sophisticated deepfakes, which can include AI-generated music, impersonations of political figures, and pornographic videos. It stresses the urgent need for federal legislation to address the challenges posed by these technologies.
The NO FAKES Act, recently introduced in the Senate, aims to provide individuals the right to control the use of their likeness in digital replicas. The report supports the bill, emphasizing the importance of protecting artists, individuals' dignity, and public security from fraud. Future reports from the Copyright Office will explore other AI-related issues, including copyrightability and liability.
U.S. Register of Copyrights Shira Perlmutter underscores the transformative impact of AI on creativity, raising questions about the role of human authorship and the balance between technological innovation and copyright protection. The report acknowledges AI's potential to amplify creativity while also presenting existential challenges to copyright law and policy.
Cybercriminals are using a Cloudflare testing service for malware campaigns.
Researchers at Proofpoint have identified a rise in cybercriminals using Cloudflare Tunnel's TryCloudflare service for malware campaigns delivering remote access trojans (RATs) like AsyncRAT and Remcos RAT. Detected since February, these campaigns exploit TryCloudflare's ability to create temporary encrypted tunnels, which mask IP addresses and avoid detection. Threat actors target sectors like law and finance, distributing malware via tax-themed emails. Proofpoint observed over 1,500 malicious emails sent since July 11, highlighting the service's exploitation for large-scale operations due to its free and reliable infrastructure.
The GAO instructs the EPA to address rising cyber threats to water and wastewater systems.
The US Government Accountability Office (GAO) reports that the Environmental Protection Agency (EPA) must address rising cyber threats to water and wastewater systems. These systems face increased risks from nation-state actors, including Iran's Islamic Revolutionary Guard Corps and Chinese group Volt Typhoon. The EPA has not conducted a comprehensive risk assessment or developed a risk-informed strategy, limiting its ability to tackle the most significant risks. Challenges include aging technology, increased automation, and workforce skills gaps. Many operators underestimate their vulnerability, especially in smaller or rural areas. The GAO recommends that the EPA conduct a sector-wide risk assessment, develop a cybersecurity strategy, evaluate its legal authorities, and revise the Vulnerability Self-Assessment Tool (VSAT). The EPA has accepted these recommendations, with plans to implement them by 2025.
Claroty reports a vulnerability in Rockwell Automation’s ControlLogix devices.
On August 1, Claroty reported a vulnerability (CVE-2024-6242) in Rockwell Automation’s ControlLogix 1756 devices, affecting GuardLogix and other controllers. This flaw allows attackers to bypass the Trusted Slot feature, enabling them to execute CIP commands that could alter user projects or device configurations. Claroty found that attackers could exploit this by jumping between slots in the 1756 chassis via CIP routing, bypassing security barriers. Rockwell and CISA have issued advisories, and patches are available. Exploitation requires network access to the device.
Apple has open-sourced its homomorphic encryption (HE) library.
Apple has open-sourced its homomorphic encryption (HE) library under the Apache 2.0 license, providing Swift libraries and executables for developers. Homomorphic encryption allows computations on encrypted data without revealing the underlying information, enhancing privacy across various applications. Historically, HE implementations were complex and resource-intensive, but recent advancements have made them more practical for production use. Apple's implementation in iOS 18 for Live Caller ID Lookup enables encrypted queries for caller ID and spam blocking without exposing user data. The library implements the Brakerski-Fan-Vercauteren (BFV) scheme, a lattice-based construction considered resistant to quantum attacks.
Homomorphic encryption, a key privacy-enhancing technology, holds potential for securely leveraging data across jurisdictions. While companies like Microsoft and IBM offer HE libraries, Apple's open-source initiative is a notable step in expanding HE's practical applications. Experts like Enveil CEO Ellison Anne Williams emphasize the transformative power of HE for secure data utilization and its role in the privacy-enhancing technology ecosystem.
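For listeners who want to see the core idea in action, here is a minimal toy sketch in Python using the classic Paillier cryptosystem, which is additively homomorphic (this is for illustration of the concept, not the BFV scheme in Apple's library, and the tiny primes offer no real security): multiplying two ciphertexts yields an encryption of the sum of their plaintexts, so the addition happens without ever decrypting the inputs.

```python
import math
import random

# Toy Paillier cryptosystem: additively homomorphic, meaning multiplying
# two ciphertexts yields an encryption of the SUM of their plaintexts.
# Key sizes here are far too small for real use; illustration only.

def keygen(p, q):
    n = p * q
    lam = math.lcm(p - 1, q - 1)   # Carmichael function of n
    g = n + 1                      # standard simple choice of generator
    mu = pow(lam, -1, n)           # modular inverse of lambda mod n
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:     # blinding factor r must be invertible mod n
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    n2 = n * n
    # (c^lambda mod n^2 - 1) // n recovers m * lambda mod n
    return ((pow(c, lam, n2) - 1) // n * mu) % n

pub, priv = keygen(1009, 1013)
c1, c2 = encrypt(pub, 12), encrypt(pub, 30)
c_sum = (c1 * c2) % (pub[0] ** 2)  # homomorphic addition of 12 and 30
print(decrypt(pub, priv, c_sum))   # prints 42, computed on ciphertexts
```

Whoever performs the ciphertext multiplication never learns the values 12 or 30, which is the same property that lets Apple's servers answer Live Caller ID Lookup queries without seeing the phone number being looked up.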
CISA warns of a high severity vulnerability in Avtech Security cameras, and the agency appoints its first Chief AI Officer.
The US Cybersecurity and Infrastructure Security Agency (CISA) has issued an advisory regarding a high-severity vulnerability, CVE-2024-7029, found in Avtech Security cameras. This flaw affects Avtech AVM1203 IP cameras with specific firmware versions, allowing remote, unauthenticated command injection. CISA reports active exploitation but notes Avtech's lack of response to address the issue, leaving the vulnerability unpatched. Discovered by Akamai and confirmed by a third party, the vulnerability could impact various sectors, including healthcare and finance. Despite the critical nature, CISA has not yet included it in its Known Exploited Vulnerabilities Catalog. Avtech cameras have previously been targeted by IoT botnets like Hide ‘N Seek and Mirai.
Unrelated, CISA has appointed Lisa Einstein as its first Chief Artificial Intelligence Officer. Einstein, previously CISA’s Senior Advisor for AI and Executive Director of the Cybersecurity Advisory Committee, has been instrumental in shaping CISA's AI initiatives. Her appointment aims to strengthen the agency's AI expertise and ensure safe AI adoption for critical infrastructure. CISA Director Jen Easterly praised Einstein’s leadership and vision in advancing AI efforts. Einstein emphasized her commitment to enhancing cybersecurity and infrastructure reliability through AI. Her achievements include developing CISA’s AI roadmap and leading a pilot program for testing AI cybersecurity tools, with findings recently shared with the White House.
Next, we welcome our friend Tim Starks back to the show. Tim joins us from CyberScoop to discuss President Biden’s cybersecurity legacy. We’ll be right back.
Welcome back. You can find links to Tim’s piece on the Biden Administration’s cybersecurity legacy and to the official cybersecurity strategy in our show notes.
Can an AI chatbot recognize its own reflection?
And finally, our HAL 9000 appreciation desk tells us a team of Swiss researchers is delving into a question straight out of a sci-fi movie: Could chatbots become self-aware? While this sounds like the setup for a blockbuster thriller, the researchers are taking it seriously, given the potential security implications. They devised a clever test to see if AI models can recognize their own outputs, akin to finding their reflection in a sea of digital doppelgängers.
Historically, the notion of AI self-awareness was dismissed by experts. Despite the skepticism, recent chatter around Anthropic's Claude 3 Opus being able to detect trick questions has reignited the debate. A majority of ChatGPT users even believe in some form of chatbot consciousness.
The research team, led by Tim Davidson from the École Polytechnique Fédérale de Lausanne, discovered that some AI models can identify their own responses from a lineup with better than 50% accuracy. This might suggest some self-recognition, but the reality is a tad more mundane. The models, it turns out, are merely selecting what they perceive as the “best” answer, not necessarily their own. It’s like asking a dog to find its reflection and having it pick the shiniest bowl instead.
Despite the models’ penchant for vanity, Davidson highlights the importance of this line of inquiry. If AI models eventually become capable of true self-recognition, it could lead to intriguing scenarios. Imagine AI-powered lawyers negotiating with one another; if one model recognizes it's sparring with a twin, it could gain an unfair advantage by predicting its counterpart's moves.
While this may seem like a far-off dystopian tale, Davidson advises cautious optimism. After all, as he puts it, "You start fireproofing your house before there’s a fire." Keeping an eye on these developments ensures we're prepared for whatever AI’s digital evolution brings, even if it’s just making sure our chatbots don’t outsmart us at their own game.
And that’s the CyberWire.
For links to all of today’s stories, check out our Daily Briefing at the cyberwire dot com.
We’d love to know what you think of this podcast. Your feedback ensures we deliver the insights that keep you a step ahead in the rapidly changing world of cybersecurity. If you like the show, please share a rating and review in your podcast app. Please also fill out the survey in the show notes or send an email to cyberwire@n2k.com.
We’re privileged that N2K CyberWire is part of the daily routine of the most influential leaders and operators in the public and private sector, from the Fortune 500 to many of the world’s preeminent intelligence and law enforcement agencies.
N2K makes it easy for companies to optimize your biggest investment, your people. We make you smarter about your teams, while making your teams smarter. Learn how at n2k.com.
This episode was produced by Liz Stokes. Our mixer is Tré Hester, with original music and sound design by Elliott Peltzman. Our executive producer is Jennifer Eiben. Our executive editor is Brandon Karpf. Simone Petrella is our president. Peter Kilpe is our publisher, and I’m Dave Bittner. Thanks for listening.