Welcome to the CAVEAT Weekly Newsletter, where we break down the major developments happening worldwide in cybersecurity, privacy, digital surveillance, and technology policy.
At 1,850 words, this briefing is about an 8-minute read.
At a glance.
- Federal agencies are considering banning a popular Wi-Fi router.
- Getty Images loses London lawsuit against Stability AI.
Several US agencies support banning a popular home router.
The news.
Last Thursday, more than half a dozen federal agencies and departments announced their support for banning the future sales of one of the most popular home routers within the United States (US). The proposal was originally introduced by the Department of Commerce because the domestic vendor has ties to mainland China, creating a potential national security risk.
More specifically, the proposal calls for blocking sales of networking devices manufactured by TP-Link Systems of Irvine, California, which is connected to the Chinese company TP-Link Technologies. The proposal was originally submitted over the summer following a months-long risk assessment process.
Since proposing the ban and undergoing an interagency review process, the Commerce Department has not taken any formal action to impose or withdraw the proposed ban.
Ricca Silverio, a spokeswoman for TP-Link Systems, responded to this proposal, stating:
“TP-Link vigorously denies any allegation that its products present national security risks to the [US]. TP-Link is a US company committed to supplying high-quality and secure products to the US market and beyond.”
The knowledge.
In their risk assessment, Commerce officials found that TP-Link Systems products posed a heightened risk, given the sensitive nature of the data routers handle and officials' belief that the company remains subject to influence by the Chinese government. Notably, in the Commerce Department’s original proposal, the agency stated that it would allow the company to offer a deal to satisfy the government’s concerns and forestall the ban.
Given TP-Link Systems' substantial share of the home router market, which some congressional testimony estimates at above 50%, an enforced ban could significantly disrupt the market.
This would not be the first time such a ban would be implemented over concerns related to potential foreign influence. In 2024, former Commerce Secretary Gina Raimondo blocked Kaspersky Lab’s antivirus programs, citing concerns about the company’s ties to Russia. At the time, former Secretary Raimondo stated that “Russia has shown it has the capacity…to exploit Russian companies like Kaspersky to collect and weaponize the personal information of Americans.”
Alongside the Commerce Department having the power to ban these potentially vulnerable products, the US government has been paying closer attention to Chinese influence among American consumers. One of the most impactful efforts was the move against TikTok. For context, in 2024, former President Biden signed a law that gave ByteDance, TikTok’s parent company, several months to divest from the application or face a ban. While the Trump administration extended this deadline several times, the threat of a ban appears to have worked, as a deal to sell the social media company has reportedly been reached.
These efforts to close potential vulnerabilities in US markets continue to gain traction.
The impact.
While the Commerce Department has announced no formal ban, it does have the legal authority to institute one. If a ban were to be enacted, it would have significant impacts on the router market within the US. Additionally, if the Commerce Department were to ban these routers, this could be indicative of similar future efforts that seek to counter potential foreign influence.
Technology companies that have strong ties to China should understand the implications of this potential ban. By understanding the US government’s reasoning and what remediation solutions it is willing to accept, potentially impacted businesses can avoid any unnecessary risks to their operations.
Getty Images mostly loses its London AI lawsuit.
The news.
On Tuesday, Getty Images lost most of its United Kingdom (UK) lawsuit against Stability AI regarding the company’s image generator tool. In its original lawsuit, Getty alleged that Stability AI used its images to train the Stable Diffusion system, which Stability uses to generate images from text inputs.
Alongside this complaint, Getty also claimed that the content produced by Stability’s generative system was reproducing Getty’s copyrighted images. However, Getty dropped this claim mid-trial, partially due to a lack of evidence.
Throughout the case, Stability’s lawyers argued that the lawsuit posed “an overt threat to…the wider generative AI industry,” whereas Getty claimed that it was protecting its intellectual property rights.
In this ruling, Judge Joanna Smith stated that while Getty had succeeded “in part” on trademark infringement, her findings were “both historic and extremely limited in scope.” Judge Smith also stated that “Stable Diffusion…does not store or reproduce any copyright works.”
In response to the ruling, Getty Images released a statement, writing:
“We urge governments, including the UK, to establish stronger transparency rules which are essential to prevent costly legal battles and to allow creators to protect their rights.”
The knowledge.
This case, like many similar ones in both the UK and the US, centers on allegations that artificial intelligence (AI) developers trained their advanced models on copyrighted material. While many of these other lawsuits are still being resolved, this latest ruling marks a significant setback for copyright protections within the UK. UK lawyers commenting on the ruling emphasized that it has exposed inadequacies in current UK copyright law and that the law needs to be reevaluated to better account for AI.
Gill Dennis, a lawyer at Pinsent Masons, noted how the government needs to provide “clear, timely policy guidance” regarding whether training AI on copyrighted material constitutes infringement. Rebecca Newman, a lawyer at Addleshaw Goddard, added:
“Today’s findings means that copyright owners’ exclusive rights to reap what they have sown have been avoided on a technicality.”
Meanwhile, the cases within the US have seen greater progress. In September 2025, Anthropic settled its case with a group of authors and publishers. Similar to Getty’s case, these authors and publishers were suing the AI developer for copyright infringement over the use of their works. While the settlement did not establish any legal precedent, it does act as a guide for resolving similar cases: Anthropic agreed to pay $1.5 billion, which works out to roughly $3,000 per book across the estimated 500,000 books covered.
The impact.
While this ruling is a setback for content creators in the UK, it does expose a significant gap in the UK’s current copyright laws. Closing that gap will take time, but doing so should eventually create greater certainty for creators and AI developers alike on how copyrighted material is to be treated.
Given the mounting pressure to address these concerns, businesses should understand these cases and their potential impacts, especially within the AI marketplace. As these lawsuits are settled or ruled upon, they will shape how AI businesses operate, consume copyrighted material, and deliver their solutions to clients. Understanding these implications will be critical for many businesses beyond those directly affected by these rulings.
Highlighting key conversations.
In this week’s Caveat Podcast, our team sat down with Dr. Sasha O’Connell, Senior Director for Cybersecurity Programs at Aspen Digital and former FBI Chief Policy Advisor for Science & Technology. During this conversation, our team and Dr. O’Connell discussed how cyberattacks on critical infrastructure have impacted national security priorities and how modern solutions need to extend beyond the federal government to better serve state, local, tribal, and territorial systems.
Like what you read, and curious about the conversation? Head over to the Caveat Podcast for the full scoop and additional compelling insights. Our Caveat Podcast is a weekly show where we discuss topics related to surveillance, digital privacy, cybersecurity law, and policy. Got a question you'd like us to answer on our show? You can send your audio file to caveat@thecyberwire.com. Hope to hear from you.
Other noteworthy stories.
Microsoft secures US export licenses for Nvidia chips.
What: Microsoft has received approval to export Nvidia chips to its data centers in the United Arab Emirates (UAE).
Why: On Monday, Microsoft announced that it had secured the export licenses and committed to investing $15 billion in the UAE by the end of 2029. The approved chips have not yet shipped but are expected to be transported within several months. Microsoft has already invested $7.3 billion in the UAE since 2023 and expects to invest an additional $7.9 billion by the end of 2029, funding its ongoing AI expansion and cloud infrastructure efforts.
With this announcement, Microsoft Vice Chair and President Brad Smith stated:
“The biggest share of (the investment), by far, both looking back and looking forward, is the expansion of AI data centres across the UAE. From our perspective, it’s an investment that is critical to meet the demand here for the use of AI.”
EU aims to engage online platforms to combat hybrid threats.
What: The European Union (EU) is considering requiring online platforms to help detect and tackle hybrid threats.
Why: Last Thursday, a new proposal was announced that would require regulated online platforms to assist with managing hybrid threats. These threats could include disinformation campaigns, coordinated attacks, and the use of social media to influence political narratives. Alongside enlisting platform support, the document would also urge companies to engage in threat analysis related to both deepfakes and AI to help identify effective countermeasures.
The proposal, labeled the European Democracy Shield, is part of a larger initiative within the EU to better counter foreign interference. The official proposal is expected to be announced on November 13th.
Meta rejects French watchdog ruling.
What: Meta Platforms rejected a French rights watchdog ruling against its algorithm.
Why: On Tuesday, Meta rejected a ruling by the independent rights watchdog Defenseur des Droits, which alleged that the company’s algorithm delivered job advertisements in a discriminatory way. The ruling, originally published on October 10th, 2025, alleged that Meta’s systems treated users differently based on gender. The watchdog recommended that Meta Ireland and Facebook France take steps to ensure job advertisements are non-discriminatory.
In response to this report, a Meta spokesperson stated that “we disagree with this decision and are assessing our options.”
US and UAE sign new agreement on AI and energy.
What: The US and the UAE signed a memorandum of understanding to expand cooperation efforts related to AI and energy.
Why: Over the weekend, Interior Secretary Doug Burgum and Sultan Ahmed Al Jaber signed this new agreement in the Emirati capital during the ADIPEC energy conference. With this agreement, the two nations look to boost “advanced industrial capabilities” and promote the adoption of “future-ready smart manufacturing technologies.”
The release stated:
“The collaboration aims to deliver a major leap in industrial processes, production planning, and logistics - strengthening long-term competitiveness and resilience by tapping into the potential of [AI], improving energy efficiency, managing smart grids, enabling predictive maintenance, and enhancing energy storage systems.”
