At a glance.
- China responds to EU 5G bans and a critical US report.
- New head of NECC named.
- More on the EU's AI legislation.
China responds to EU 5G bans and a critical US report.
As we noted last week, EU industry chief Thierry Breton is urging EU member states to ban Chinese-owned companies Huawei and ZTE from their 5G telecoms networks. Ten member states have made the move so far, and Breton says the others should follow suit as soon as possible. Reuters reports that the two companies have responded to Breton's comments, and they're less than pleased. A Huawei spokesperson said, "As an economic operator in the EU, Huawei holds procedural and substantial rights and should be protected under the EU and Member States' laws as well as their international commitments." As Computer Weekly notes, Huawei, citing an Oxford Economics report, said excluding the company's tech could increase 5G investment costs by tens of billions of euros. In an email, ZTE stated that it simply wants to be treated like its competitors: "ZTE's only request is to be treated fairly and objectively by regulators and legislators - just like any other vendor…We welcome external assessment and scrutiny of our products by regulators and technical supervisory bodies at any time."
The US was also on the receiving end of strong words regarding Chinese relations, these coming directly from Chinese government officials. American cybersecurity firm Mandiant recently released a report revealing that Beijing-backed hackers targeted the email accounts of government workers in Southeast Asian foreign ministries in order to steal confidential information for the Chinese government. On Friday, China issued a statement calling the report "far-fetched and unprofessional." As Business World explains, the report was released after US Secretary of State Antony Blinken's recent visit to Beijing, which was focused on repairing strained relations between the two countries. The Washington Post reports that Blinken's visit seems to have had little success in easing cyber tensions, and officials did not even say whether cybersecurity was on the agenda. Javed Ali, a former U.S. national security official, said of the talks, "Despite U.S. warnings and moves like economic sanctions, diplomatic demarches, and criminal indictments against Chinese cyber hackers and intelligence officers, these have not yet deterred China from engaging in these attacks, and they will likely continue well into the future — especially with Beijing's anger over expanding U.S. military and economic ties with Taiwan." However, Annie Fixler, director of the Center on Cyber and Technology Innovation at the think tank the Foundation for Defense of Democracies, says the talks could have a positive effect. "What we could see is maybe a quieting down of the worst of the most overt kind of hacking and intrusion," Fixler stated.
New head of NECC named.
James Babbage, the head of the UK's National Cyber Force (NCF), has been named the new director general of the National Economic Crime Centre (NECC), the National Crime Agency's (NCA) directorate for economic and organized crime threats. Babbage has nearly three decades of experience serving with Britain's cyber and signals intelligence agency, Government Communications Headquarters (GCHQ). He has been at the helm of the NCF since its creation in 2020, working to take down some of the same ransomware groups the NECC is focused on combating. As the Record explains, Babbage's appointment follows the demotion of Steve Rodhouse, the agency's director general for operations, after a scandal over his handling of an investigation into false child abuse allegations.
More on the EU’s AI legislation.
As we previously noted, last week the European Parliament voted to approve its draft rules for the AI Act, which is poised to be the world's first comprehensive legislation regulating artificial intelligence. The MIT Technology Review offers an overview of the measure's key points, noting that some of them are less straightforward than they might seem. The AI Act calls for a ban on emotion-recognition AI, as well as on real-time biometrics and predictive policing in public spaces like schools and airports, but how the ban will be enforced in law is unclear, especially since some police forces say biometric tech is essential for modern policing. Using AI for social scoring will also be prohibited, and new restrictions on generative AI would require developers to be more transparent about the material used to train large language models and to clearly label AI-generated content. Furthermore, the AI Act places social media recommender systems in a “high-risk” category, which means tech companies could be held accountable for the impact of user-generated content.