At a glance.
- More on the challenges of regulating AI.
- Proposed developments in international law for cyberspace.
- Industry comment on the SEC's proposed cybersecurity rules.
More on the challenges of regulating AI.
World governments continue to debate how best to regulate artificial intelligence without stifling the benefits of this relatively new, ever-expanding technology, and ComputerWorld offers an overview of the challenges lawmakers face. While each country will be responsible for crafting its own measures, a degree of cooperation is expected to minimize conflicting legislation. Sophie Goossens, a partner at law firm Reed Smith who specializes in AI, explains, “[When it comes to] tech issues, even though every country is free to make its own rules, in the past what we have seen is there’s been some form of harmonization between the US, EU, and most Western countries. It's rare to see legislation that completely contradicts the legislation of someone else.” The European Commission began discussing AI in 2019 and drafted its first attempt at an AI Act in April 2021, and as a blog post on Medium explains, the EU’s Artificial Intelligence Act has set a precedent for AI regulation. However, experts predict that US legislators, historically reluctant to regulate unless absolutely necessary, will likely take a slower approach.
The breakneck speed of AI’s growth further complicates matters, leaving lawmakers struggling to stay abreast of new developments and the risks they pose. As Reuters notes, AI-powered chatbot ChatGPT has become the fastest-growing consumer app of all time, and with the increased use of similar tools Bing Chat and Bard, European Commission deputy head Vera Jourova yesterday issued a statement urging companies to be more transparent about their use of generative AI. "Signatories who integrate generative AI into their services like Bingchat for Microsoft, Bard for Google should build in necessary safeguards that these services cannot be used by malicious actors to generate disinformation," Jourova stated. "Signatories who have services with a potential to disseminate AI generated disinformation should in turn put in place technology to recognise such content and clearly label this to users.” She added that companies that have agreed to adhere to the EU Code of Practice, which include Google, Microsoft, and Meta, will need to demonstrate how they’ve implemented such labels by July.
Proposed developments in international law for cyberspace.
Addressing a conference on cyber conflict last week, Estonian President Alar Karis urged the International Criminal Court to ensure it’s adequately penalizing those who commit digital war crimes. “This is about ensuring justice, but also strengthening deterrence by punishing those who violate the most sacred international laws and norms,” President Karis stated. As the Record explains, Russia’s invasion of Ukraine has demonstrated just how debilitating cyberattacks can be in times of war, and President Karis referenced this conflict in his remarks. “In Ukraine, as in other armed conflicts, we should not think of cyberattacks during armed conflict as something separate from the rest of the military campaign.”
Also addressing state-backed cyberaggression, India’s National Cybersecurity Coordinator Dr. Rajesh Pant on Monday issued an outline draft of the Delhi Declaration, a series of commitments to “responsible state behaviour in cyberspace” for G20 member countries. As The Hindu reports, the responsibilities stem from non-binding norms already agreed upon at the United Nations. Speaking at the B20 Conference on Cyber Security, Dr. Pant stated, “Vulnerabilities [in cyberspace] will continue to exist as long as we depend on systems that are based on hardware and software, and increasing software-isation. If vulnerabilities continue, then cyber attacks will continue to take place at a pace faster than what they’re doing now, because of various reasons, including the latest generative Artificial Intelligence (AI).” The draft asks G20 countries to refrain from damaging critical infrastructure, to properly mitigate and investigate cyberattacks (especially ransomware), to adhere to international law in cyberspace, and to commit to protecting the humanitarian sector. ABP Live adds that Pant also underlined the need for governance structures and training to improve cyber hygiene, adhere to standard operating procedures, and implement cyber crisis management plans. The Delhi Declaration will now be debated further by the G20 nations.
Industry comment on the SEC’s proposed cybersecurity rules.
In March the US Securities and Exchange Commission (SEC) issued proposed new cybersecurity rules for broker-dealers, investment advisors, and asset managers that could require them to notify individuals impacted by certain types of data breaches. If adopted, the proposal would update Regulation S-P, which was adopted in 2000, before major developments in the financial sector's use of technology. The comment period for the proposed rules ended yesterday, and ThinkAdvisor shares some of the comments submitted by industry experts. Nonprofit Better Markets submitted an official comment letter in support of the new rules, with legal director Stephen Hall saying the SEC “has rightly proposed a rule that requires market participants to notify affected individuals. Notification can make the difference between identity theft that inflicts major financial losses and a swift response that results in minimal harm.” In his comments, North American Securities Administrators Association President Andrew Harnett said the term “cyberattack” should be included as an event that “could give rise to the customer notice obligation.” And David Bellaire, general counsel for the Financial Services Institute in Washington, recommended that the SEC allow an extended implementation period of two years, or three for small firms, to give smaller broker-dealers adequate time to comply with the new rules.