At a Glance.
- ICC launched probes into cyberattacks that targeted Ukraine as potential war crimes.
- Policy recommendations for securing AI and managing its governance.
ICC probes cyberattacks in Ukraine as possible war crimes.
The News.
On Friday, the International Criminal Court (ICC) began investigating a series of cyberattacks against Ukraine allegedly carried out by Russia, to assess whether war crimes were committed. More specifically, the probe is examining attacks that endangered lives by disrupting water and power systems, interfering with emergency responders, and disabling data services tied to air raid warnings. ICC officials will work with Ukrainian teams throughout the investigation. Additionally, two other sources confirmed that ICC prosecutors are looking into cyberattacks in Ukraine dating back to 2015.
In previous responses to these accusations, Russia has denied engaging in these cyberattacks and has stated that the accusations are attempts to incite anti-Russian sentiment.
The Knowledge.
One of the core pillars of international law covering armed conflict explicitly bans attacks on civilian objects. As cyberattacks are increasingly used against civilian targets, such as energy and water infrastructure, legal authorities have begun paying closer attention to these incidents and how to resolve them legally. While this is not the first time legal scholars have examined the relationship between cyberattacks and humanitarian law, this ICC case has the potential to set a significant precedent for how international law handles cyberattacks.
Until this point, legal scholars have had difficulty resolving cyberattack cases because it remains unclear whether data qualifies as an “object” of attack banned under international humanitarian law and whether its destruction would constitute a war crime. If this case holds Russia accountable for its cyberattacks as war crimes, the international landscape for assessing such cases would change dramatically. Professor Michael Schmitt, who leads the Tallinn Manual process on international law and cyber operations, stated that “if the court takes on this issue, that would create great clarity for us.” Schmitt added that, of all the cyberattacks allegedly conducted by Russia, the hack of Kyivstar would meet the criteria for a war crime, as the consequences foreseeable to the attackers clearly put human lives at risk. For context, the Kyivstar attack took place in late 2023, when Russian hackers allegedly targeted Ukraine’s largest telecoms operator, which serves over twenty-four million users a day. The attack left much of Ukraine's population without mobile service, damaged IT infrastructure, and disrupted air raid alarm systems.
The Impact.
While the ICC’s investigation has only just begun, this case has the potential to dramatically change how cyber incidents are handled and prosecuted at the international level. As the initial probe and any subsequent legal case play out over the coming months and years, the impact on nations and large organizations could be substantial. While the case would likely have little direct effect on the average person, it would give nations and large organizations affected by state-sponsored cyberattacks an avenue to hold hostile nations more accountable, especially when attacks target critical infrastructure in healthcare, water, energy, or similar sectors. As this case develops, we will continue to highlight and discuss critical information and updates.
A Policy Roadmap for AI Governance.
The Publication.
In a report published by the Data & Trust Alliance, Camille Stewart Gloster analyzes the current state of artificial intelligence (AI) and what policies must be implemented to better secure AI’s future. Throughout this report, Gloster discusses a series of different policy solutions that could be implemented to help manage and secure AI over the coming years. Of these policies, Gloster highlighted four key policy areas:
- Regulate AI risk, not AI algorithms.
- Use existing sector-specific regulatory authorities, which are best positioned to regulate AI use, and rely on effective existing regulations.
- Differentiate the compliance responsibilities of developers and deployers, including data collection and usage practices.
- Mitigate risks by investing in R&D, education, and workforce development.
Gloster discusses each of these policy recommendations by breaking each one down further and highlighting more specific requirements.
Regarding regulating AI risk rather than AI algorithms, Gloster highlights how future regulation must encourage AI innovation while simultaneously managing risks based on intended use cases and purposes. She argues that regulations should define explicit categories of high-risk AI use cases, mandate impact assessments and bias testing, require greater transparency, and prevent harm. While Gloster elaborates on other key areas within this recommendation, she repeatedly emphasizes that overregulating AI with burdensome obligations, such as licenses to operate, would hamper AI growth, stifle competition, and incur significant economic costs for developers. Furthermore, she emphasizes that any AI regulation must account for factors such as the application, the end user, how reliant end users are on the technology, and how much human oversight is required.
The other major policy recommendation Gloster emphasizes involves using existing sector-specific regulatory authorities to regulate AI use, relying on existing regulations as much as possible. She stresses that policymakers need to recognize that AI will be used across fields that already have regulations applicable to its use cases, such as the PTO working with the Copyright Office to manage intellectual property and copyright issues. By using existing regulations, Gloster argues, agencies can address AI risks within their established areas of expertise, enabling a more agile, collaborative, and consistent approach to AI.
Gloster concludes her publication by emphasizing that the document is not meant to be doctrine, but rather “a helpful departure point” to help governments around the world navigate critical discussions, especially on cross-sector issues.
The Impact.
While this document is only a policy roadmap and not official policy proposed in Congress, it is representative of a growing worldwide consensus on how AI should be approached. The document has already garnered support from influential stakeholders, including Rob Thomas, IBM’s Senior Vice President, Software and Chief Commercial Officer, and Glen Tullman, CEO of Transcarent.
Comparing this publication with the Senate’s AI policy roadmap released in early May, parallels can already be drawn: both emphasize the critical need to support AI innovation while simultaneously securing it, so that transparency is maintained and copyright and intellectual property rights are protected. Both roadmaps also emphasize using existing laws and regulations to manage AI, especially for high-impact use cases.
However, one of the most notable differences between the two concerns the need for future, more comprehensive legislation. While both documents stress using existing regulatory frameworks to manage AI, the Senate’s roadmap notes that it will look to create new legislation and regulations as needed for areas not adequately addressed. Gloster, by contrast, raises concerns about the government and its agencies becoming too involved in AI usage, citing proposals such as licenses to operate, and argues that such oversteps would stifle innovation without properly addressing the underlying concerns. While it is unclear exactly what regulations or new legislation the Senate would propose, it is clear that both private and public organizations continue to pay greater attention to, and invest in, AI’s development and usage. As new policies are passed, organizations involved in AI need to take the time to understand the associated risks and regulations to ensure that any pitfalls or concerns are appropriately mitigated.
Other noteworthy stories.
Meta pauses AI model launch in Europe due to Irish request.
What: Meta announces it will delay the launch of its AI models in Europe over data privacy concerns.
Why: On Friday, Meta announced that it would pause the launch of its new AI models in Europe after Irish privacy regulators requested a delay. The announcement follows numerous complaints filed by the advocacy group None Of Your Business (NOYB) over concerns about how Meta would train its AI models on user data without seeking consent. Announcing the delay, Meta wrote that it was “disappointed by the request from the Irish Data Protection Commission (DPC)... particularly since we incorporated regulatory feedback and the European DPAs have been informed since March.”
It is unclear when Meta intends to move forward with launching its AI models in Europe.
Lawmakers hold hearing with Microsoft over recent cyber lapses.
What: House Representatives met with Microsoft Vice Chair and President Brad Smith over the company’s recent security issues.
Why: On Thursday, Microsoft’s Vice Chair and President Brad Smith met with House Representatives over concerns regarding the company’s recent security breaches. During the hearing, Representatives discussed the Cyber Safety Review Board report released in April, which investigated a 2023 attack targeting the company and found that a “cascade of failures at Microsoft” allowed the breach to occur. Throughout the hearing, Representatives repeatedly stressed that Microsoft needed to be held accountable, especially given how reliant the government is on Microsoft services.
During the hearing, Smith stated that Microsoft was committed to “making the changes we need to make, learning the lessons we need to learn, [and] holding ourselves accountable.”
Cisco plans to create new cybersecurity center in Taiwan.
What: Cisco has announced its plans to invest in a new cybersecurity center in Taipei, Taiwan.
Why: On Monday, Cisco revealed its intention to invest in a new center in Taipei as part of its Taiwan Digital Acceleration 3.0 plan. The plan centers on cybersecurity, with Cisco intending to partner with the government to implement training programs and address talent shortages. In a statement, Cisco said it aims “to collaborate with relevant tech associations to establish a security centre in Taiwan for enhanced threat intelligence and cyber readiness.” Additionally, Taiwan’s Vice President, Hsiao Bi-khim, stated that she was grateful for Cisco’s continued partnership with Taiwan.