
Europe clamps down on global hackers.
The EU imposes sanctions after cyberattacks. DHS boosts surveillance spending. AI firms recruit weapons-risk experts. Stryker disruption, no patient impact. LeakNet leans on ClickFix. Sears chatbot data spills. A Chinese security firm leaks a private key. Tech giants team up on scams. Teens sue xAI over alleged AI-generated abuse. On today’s Threat Vector segment, David Moulton and guest Erica L. Shoemate, founder of The EN Strategy Group, explore how AI is fundamentally reshaping the security landscape. Cyber crooks cause a complimentary curbside convenience.
Today is Tuesday, March 17th, 2026. I’m Dave Bittner. And this is your CyberWire Intel Briefing.
The EU imposes sanctions in response to cyberattacks.
The European Union has imposed targeted sanctions on three foreign companies and two individuals linked to cyberattacks against its member states. The measures affect China-based Integrity Technology Group and Anxun Information Technology, along with Iran-based Emennet Pasargad. EU officials say Integrity facilitated the compromise of more than 65,000 devices across six countries between 2022 and 2023. Anxun allegedly provided hacking services targeting critical infrastructure, while its co-founders were also designated. Emennet Pasargad is accused of breaching a French database, selling the data on the dark web, and conducting disinformation operations during the 2024 Paris Olympics.
The sanctions prohibit EU entities from providing financial resources and impose travel bans on individuals. The EU’s cyber sanctions regime now covers 19 individuals and 7 entities, reflecting a broader response to escalating global cyber threats.
DHS ramps up surveillance spending.
The Department of Homeland Security is preparing a major expansion of surveillance technology spending in 2026, with contract forecasts outlining hundreds of millions of dollars for enhanced detection and tracking systems. This includes a $1 billion ceiling agreement with Palantir and additional investments in AI-driven platforms, mobile surveillance tools, and data extraction technologies. Officials and advocacy groups say increased funding, including a $191 billion package passed in 2025, has significantly accelerated these efforts.
Critics argue oversight has not kept pace. Lawmakers and watchdogs have raised concerns about civil liberties risks tied to tools capable of facial recognition, phone data extraction, and large-scale monitoring. Questions have also emerged about transparency, as Privacy Impact Assessments declined sharply and none have been filed this year.
Internal tensions are also surfacing. The DHS inspector general alleges the agency has obstructed oversight efforts, while lawmakers continue to push for investigations and limits on surveillance authorities.
AI giants seek to hire kinetic experts.
Anthropic is seeking a chemical weapons and explosives expert to help prevent what it calls “catastrophic misuse” of its AI tools, amid concerns they could reveal how to build dangerous weapons. The role requires experience in weapons defense and knowledge of radiological devices. OpenAI has posted a similar position, reflecting a broader industry trend. 
While companies frame these hires as safety measures, some experts warn they may introduce new risks by exposing AI systems to sensitive weapons knowledge. Critics also highlight the lack of international regulation governing AI and weapons-related information, raising concerns about oversight as the technology continues to advance.
Stryker says a recent cyberattack disrupted operations but not patient safety.
Stryker says a recent cyberattack was contained to its internal Microsoft environment and triggered a mass device wipe, disrupting operations but not products or patient safety.
The company reports that tens of thousands of employee devices were remotely erased after attackers gained administrator access and used Microsoft Intune to issue wipe commands. Investigators found no evidence of malware deployment or data exfiltration, despite claims by the Handala group that it destroyed over 200,000 systems and stole data. Electronic ordering remains offline, forcing manual processing, while restoration efforts continue.
The incident shows how compromised identity and cloud management tools can cause large-scale disruption without ransomware. According to Stryker and investigators, medical devices were unaffected and recovery is underway.
LeakNet ransomware is using a ClickFix social engineering lure.
LeakNet ransomware is using a ClickFix social engineering lure and a legitimate Deno runtime to gain initial access and execute malware in memory, reducing detection.
Researchers at ReliaQuest report that victims are tricked into running malicious scripts, which deploy Deno, a signed JavaScript runtime, to execute payloads directly in memory. This “bring your own runtime” approach helps bypass security controls and leaves minimal forensic evidence. Once active, the malware fingerprints the system, connects to command-and-control infrastructure, and enables follow-on actions like credential theft, lateral movement, and data exfiltration via Amazon S3.
Attackers are increasingly abusing trusted tools to evade defenses. According to ReliaQuest, consistent behaviors like unusual Deno use or abnormal PsExec activity may help defenders detect these attacks.
A Sears chatbot overshares in publicly accessible databases.
Millions of customer interactions with Sears Home Services’ AI chatbot, “Samantha,” were exposed in publicly accessible databases, according to security researcher Jeremiah Fowler.
The data included 3.7 million chat logs, 1.4 million audio files, and transcripts containing sensitive customer details such as names, addresses, phone numbers, and appliance information. Some recordings captured hours of ambient audio after calls ended, potentially exposing private conversations. The databases, owned by Transformco, were secured after disclosure, but it remains unclear how long they were exposed or if others accessed them.
This matters because exposed service data can enable targeted phishing and fraud. Researchers warn that rapid AI adoption without strong data protections increases privacy and reputational risks for companies handling large volumes of customer interactions.
A Chinese security firm exposed a sensitive private key inside a public installer.
Chinese security firm Qihoo 360 reportedly exposed a sensitive wildcard SSL private key inside the public installer for its 360 Security Claw AI assistant, creating serious security risks.
Researcher Lukasz Olejnik found the key embedded in an uncompressed archive, allowing anyone to extract it and potentially authenticate as the company’s servers. The certificate, valid until 2027, covers all subdomains, meaning attackers could impersonate services, intercept traffic, or launch convincing phishing campaigns. The issue is notable given Qihoo 360’s role as a major cybersecurity provider with hundreds of millions of users.
Leaked private keys undermine core internet trust mechanisms. According to available reports, the company has not yet revoked the certificate or issued a public response, leaving potential exposure unresolved.
Major tech companies join forces against online scams and fraud.
Google and major tech companies have signed the Industry Accord Against Online Scams and Fraud at the UN Global Fraud Summit, aiming to coordinate defenses against increasingly sophisticated global scam networks.
The agreement brings together firms like Amazon, Microsoft, and Meta to share threat intelligence and align efforts. Google also plans to expand its $15 million investment with AI-driven detection tools, increased collaboration with law enforcement, and initiatives like the Global Signal Exchange.
Scams are becoming more organized and cross-border, requiring unified industry and government responses to reduce financial and emotional harm.
Teenage girls sue xAI over alleged AI-generated CSAM.
Three teenage girls have filed a lawsuit against Elon Musk’s xAI, alleging its Grok image generator was used to create and distribute AI-generated child sexual abuse material (CSAM) using their photos.
The complaint says altered nude images of the minors were shared on platforms like Discord and Telegram without consent, with one case leading to a suspect’s arrest after CSAM was found on his device. Plaintiffs allege the content was generated through a third-party app using Grok’s technology, arguing xAI still bears responsibility because it licenses and powers the system.
The case highlights growing risks of AI-generated exploitation and questions platform accountability. According to the lawsuit, xAI failed to prevent misuse despite known risks, contributing to reputational and psychological harm. The company has not publicly responded.
Cyber crooks cause a complimentary curbside convenience.
Drivers in Perm, Russia, got an unexpected perk this week: free parking, courtesy of a cyberattack rather than civic generosity.
A “large-scale” distributed denial-of-service, or DDoS, attack overwhelmed the city’s parking payment systems, knocking the permparking.ru portal offline and making it impossible to pay. Officials responded pragmatically, suspending enforcement from March 10 to 13 and effectively turning paid zones into a temporary free-for-all, with hopes of restoring service by March 16.
The incident is a reminder that when attackers flood systems with traffic, even routine services grind to a halt. This matters because disruptions like this can ripple into daily life, sometimes with oddly welcome side effects. According to local authorities, the outage was caused by a massive DDoS attack, though drivers may remember it more fondly than most cybersecurity incidents.
And that’s the CyberWire.
For links to all of today’s stories, check out our Daily Briefing at the cyberwire dot com.
We’d love to know what you think of this podcast. Your feedback ensures we deliver the insights that keep you a step ahead in the rapidly changing world of cybersecurity. If you like the show, please share a rating and review in your podcast app. Please also fill out the survey in the show notes or send an email to cyberwire@n2k.com.
We’re proud that N2K CyberWire is part of the daily routine of the most influential leaders and operators in the public and private sector, from the Fortune 500 to many of the world’s preeminent intelligence and law enforcement agencies.
N2K helps cybersecurity professionals and organizations grow, learn, and stay ahead. We’re the nexus for discovering the people, tech, and ideas shaping the industry. Learn how at n2k.com.
N2K’s lead producer is Liz Stokes. We’re mixed by Tré Hester, with original music and sound design by Elliott Peltzman. Our contributing host is Maria Varmazis. Our executive producer is Jennifer Eiben. Peter Kilpe is our publisher. And I’m Dave Bittner. Thanks for listening.
