At a glance.
- JCDC lays out plans for the new year.
- Dutch government limits use of TikTok.
- NIST’s new guidance on artificial intelligence.
JCDC lays out plans for the new year.
The US Cybersecurity and Infrastructure Security Agency (CISA) formed the Joint Cyber Defense Collaborative (JCDC) in 2021 as a partnership between the government and the private sector to address cybersecurity planning, defense, and response. CISA yesterday released the JCDC’s 2023 agenda, which aims to “strengthen protection of civil society organizations who are at higher risk of being targeted by foreign state actors through collaborative planning with key government and industry stakeholders.” The agenda will focus on three main areas: systemic risk, collective cyber response, and high-risk communities. The plan also incorporates a new collaboration process, “a new multidirectional real-time information sharing initiative—which is built on trust and a willingness to work together.” As Eric Goldstein, Executive Assistant Director for Cybersecurity, explains, in 2022 the JCDC focused mainly on emergent threats, but in 2023 the group hopes to take a more proactive approach. Goldstein states, “We must also look over the horizon to collaboratively plan against the most significant cyber risks that may manifest in the future. This proactive planning is foundational to JCDC, as first envisioned by the Cyberspace Solarium Commission and then codified by Congress.”
Dutch government limits use of TikTok.
We’ve been following the ever-growing list of US states and other entities that have blocked the use of TikTok over concerns that the app poses a risk to national security. Politico reports that the Netherlands appears to be heading in the same direction. The popular video-sharing app has 3.5 million Dutch users, but two government officials say Dutch ministries and agencies are following a recommendation issued by the general affairs ministry in November to “suspend the use of TikTok for the government until TikTok has adjusted its data protection policy.” The Dutch government’s recommendation is more limited in scope and enforcement than measures seen in the US. It’s more of a pause than a ban, mainly focused on preventing the use of TikTok for media and advertising. The move comes as Dutch officials work to strengthen the country’s relationship with the US, where the White House is seeking to limit the sale of sensitive tech to China, including equipment made by Dutch chipmaking-tool manufacturer ASML. Dutch Prime Minister Mark Rutte met with US President Joe Biden this month to discuss security and trade concerns linked to China. TikTok has responded by saying it’s open to engaging with the Dutch government “to debunk misconceptions and explain how we keep both our community and their data safe and secure.”
NIST’s new guidance on artificial intelligence.
Earlier this month the US Department of Commerce’s National Institute of Standards and Technology (NIST) released its Artificial Intelligence Risk Management Framework (AI RMF 1.0), which offers voluntary guidance for organizations designing, developing, deploying, or using AI systems. The result of a directive from Congress, the AI RMF was created in partnership with the private sector and is intended to evolve alongside the ever-changing world of AI tech. Deputy Commerce Secretary Don Graves states, “This voluntary framework will help develop and deploy AI technologies in ways that enable the United States, other nations and organizations to enhance AI trustworthiness while managing risks based on our democratic values. It should accelerate AI innovation and growth while advancing — rather than restricting or damaging — civil rights, civil liberties and equity for all.”
As NIST explains, the framework gives organizations a flexible yet structured approach to managing the risks of AI, while encouraging them to realistically assess AI’s positive and negative impacts on society with a view to cultivating societal trust in AI tech. The first part of the guidance outlines the characteristics of trustworthy AI systems, and the second details four specific functions for addressing the risks of AI systems. As the AI landscape continues to develop, NIST will periodically update the framework’s companion playbook with input from the AI community, and an updated version of the framework incorporating community comments will be released in spring 2023.