Welcome to the CAVEAT Weekly Newsletter, where we break down some of the major developments and happenings occurring worldwide when discussing cybersecurity, privacy, digital surveillance, and technology policy.
At 1,650 words, this briefing is about a 7-minute read.
At a glance.
- HPE-Juniper merger settlement raises concerns.
- The UK’s Online Safety Act now requires increased ID verification.
Senators concerned over HPE-Juniper merger.
The news.
On Friday, Senate Democrats began calling for an investigation into the recent settlement between the Department of Justice (DOJ) and Hewlett Packard Enterprise (HPE) over HPE’s acquisition of Juniper Networks. In a letter to the DOJ’s Inspector General, William Blier, Senators Blumenthal, Booker, Warren, and Klobuchar raised their concerns about the settlement. The senators sent this letter after the DOJ fired Roger Alford, the principal deputy assistant attorney general, and Bill Rinner, the deputy assistant attorney general and head of merger enforcement.
In their letter, the Senators wrote:
“In all, these events reflect a concerning pattern of behavior within the DOJ and point to possible politicization of the process by which the DOJ analyzes proposed mergers and acquisitions, as well as undertakes and resolves enforcement actions. We are concerned that, in addition to improper interference in the enforcement of our laws, the full extent and parties involved in this coercive campaign are not known and that other improper conduct could have occurred.”
The knowledge.
This news emerged after the DOJ announced that it was settling its lawsuit related to the merger at the end of June. While the settlement still needs court approval, this settlement clears the way for the transaction to close. When announcing the settlement, HPE’s president and CEO, Antonio Neri, stated:
“Our agreement with the DOJ paves the way to close HPE’s acquisition of Juniper Networks and preserves the intended benefits of this deal for our customers and shareholders, while creating greater competition in the global networking market.”
More specifically, the settlement requires HPE to divest its “Instant On” business and mandates that the merged firm license critical Juniper software to independent competitors. The divestiture must be completed within 180 days of the settlement’s approval and go to a DOJ-approved buyer.
The DOJ’s antitrust lawsuit was originally filed in January 2025 over concerns that the merger would eliminate competition, raise prices, and diminish innovation. When filing that lawsuit, Acting Assistant Attorney General Omeed A. Assefi stated:
“HPE and Juniper are successful companies. But rather than continue to compete as rivals in the WLAN marketplace, they seek to consolidate, increasing concentration in an already concentrated market. The threat this merger poses is not theoretical…This proposed merger would significantly reduce competition and weaken innovation, resulting in large segments of the American economy paying more for less from wireless technology providers.”
The settlement signals a potential softening of the strong antitrust stance the administration has taken so far this year. Outside of this case, the Trump administration has been actively pursuing significant antitrust cases against Google, Meta, and Amazon, seeking major divestitures to resolve its monopoly concerns.
The impact.
While it is unlikely that these senators’ concerns will derail the settlement, the controversy has raised broader questions about how the DOJ will approach future merger cases.
This controversy reflects growing concern among lawmakers about whether antitrust enforcement is remaining consistent and independent. As the DOJ continues its high-profile cases against Big Tech companies, those efforts could either deepen these concerns or prove this settlement a notable exception.
Those who use HPE or Juniper’s services should take time to understand what the merger will affect, what its timeline is, and how the settlement’s conditions will impact existing services.
UK’s Online Safety Act continues rollout.
The news.
As the United Kingdom’s (UK) Online Safety Act has continued its phased rollout, critics have raised concerns about the law’s reach and effectiveness. When originally passed, the act aimed to make the UK “the safest place” in the world for online activities. However, in this effort, the UK government has drawn significant pushback from privacy and civil liberties advocates.
Critics have taken aim at the age verification requirements and their data collection implications. The law mandates that online platforms, including social media sites, search engines, and adult content providers, uphold a duty of care to protect their users from harmful content by enforcing age verification checks. Companies that fail to comply could face fines of up to ten percent of their global revenue or service blockages.
These requirements took effect at the end of July, with impacted companies already beginning to institute ID checks. Required age verification data can include identification card photos, email addresses, and face scans.
The Electronic Frontier Foundation (EFF) criticized these efforts as both ineffective and dangerous. The EFF wrote:
“If the UK really wants to achieve its goal of being the safest place in the world to go online, it must lead the way in introducing policies that actually protect all users - including children - rather than pushing the enforcement of legislation that harms the very people it was meant to protect.”
The knowledge.
The Online Safety Act classifies harmful content into three categories:
- Primary priority content: Content that is pornographic or promotes suicide, self-harm, or eating disorders.
- Priority content: Content that is related to bullying, encourages violence, incites abuse or hate against specific people, or depicts violence.
- Non-designated content: Content that presents a material risk to a considerable number of children.
Platforms that may host this content must take measures to address these risks, such as implementing:
- Robust age verification systems
- Safer content recommendation algorithms
- Effective content moderation
Alongside the UK’s efforts to impose stronger restrictions on online sites, other governments have also followed suit. In July, the United States (US) Supreme Court ruled on the Free Speech Coalition v. Paxton case, writing that “no person - adult or child - has a First Amendment right to access speech that is obscene to minors without first submitting proof of age.” The decision upheld these limited age verification requirements but stopped short of endorsing sweeping government age requirement mandates.
Additionally, the European Commission is pushing forward with an age-verification application, and Australia is preparing a new ban that would bar those under sixteen from accessing social media sites.
Together, these instances are examples of a global trend to tighten access to online content, especially for minors. However, these efforts have also reignited a longstanding debate about free speech, data privacy, and government overreach.
The impact.
As this law and similar ones continue to roll out and enter enforcement, they revive an important question: how to protect children online without infringing on adults’ rights or privacy. Given the global movement to restrict minors’ access to social media platforms and adult sites, stricter regulation appears to be becoming the default solution.
Parents should take time to understand the implications of these laws, how they will restrict their children’s access to specific sites, and how enforcement will work. For adults, it means being aware that visiting certain sites may now require submitting sensitive personal information.
By staying informed, users can make deliberate decisions about what sites they use, what data they share, and what trade-offs they are willing to accept.
Highlighting key conversations.
In this week’s Caveat Podcast, our team discusses the UK’s Online Safety Act and its impacts on the nation. Additionally, our team also looks into Flock Safety, a company known for its automatic license plate reader cameras, and its intentions to expand into schools. During this effort, Flock is partnering with Raptor Technologies to integrate its cameras into dismissal management systems. The goal of this project is to create “safe corridors” to monitor bus stops, walking routes, and nearby roads.
Like what you read, and curious about the conversation? Head over to the Caveat Podcast for the full scoop and additional compelling insights. Our Caveat Podcast is a weekly show where we discuss topics related to surveillance, digital privacy, cybersecurity law, and policy. Got a question you'd like us to answer on our show? You can send your audio file to caveat@thecyberwire.com. Hope to hear from you.
Other noteworthy stories.
Google agrees to curb power use for AI data centers.
What: Google signed agreements with US electric utilities to reduce power consumption.
Why: On Monday, Google signed agreements with two electric utilities, Indiana Michigan Power and the Tennessee Valley Authority, under which Google will scale back its power usage when grids become strained.
These are the first formal agreements under which Google has committed to curtailing machine learning workloads. In a blog post, Google wrote:
“It allows large electricity loads like data centers to be interconnected more quickly, helps reduce the need to build new transmission and power plants, and helps grid operators more effectively and efficiently manage power grids.”
US approves federal use of several major AI providers.
What: The General Services Administration (GSA) has allowed federal agencies to adopt major AI tools.
Why: On Tuesday, the GSA approved the use of ChatGPT, Gemini, and Claude as approved AI models to be used by government agencies. When making this announcement, the GSA stated that these providers “are committed to responsible use and compliance with federal standards.”
With the GSA’s approval, federal agencies can establish contracts with these providers to adopt their tools. This move continues to accelerate the federal government’s adoption of AI.
