At a glance.
- EU becomes AI regulation pioneer.
- Senate Judiciary Committee considers reauthorization of Section 702.
- Texas passes data privacy law.
EU becomes AI regulation pioneer.
EU lawmakers have become the first in the world to pass comprehensive legislation regulating the development and use of artificial intelligence. As AP News explains, the rules classify AI tech into four levels of risk, with the riskier products facing more stringent regulations. The legislation will crack down on products that threaten health and safety, infringe on human rights, or put vulnerable individuals like children at risk. This means that predictive policing tools and social scoring will be prohibited. To address concerns about AI-powered chatbots like ChatGPT, developers will have to document any copyrighted material used to train AI systems to better resemble human work. Violators could face fines of up to 40 million euros ($43 million) or 7% of a company’s annual global revenue. Noting that when it comes to AI other countries have offered guidance and recommendations rather than regulations, Kris Shrishak, a technologist and senior fellow at the Irish Council for Civil Liberties, stated, “The fact this is regulation that can be enforced and companies will be held liable is significant…Other countries might want to adapt and copy.” That said, the rules won’t go into effect until negotiations among EU member countries, the Parliament, and the European Commission are complete. Once final approval is reached, anticipated at the end of this year, there will be a years-long grace period for organizations to adjust to the new rules.
Before the vote was taken, Edward Machin, a senior lawyer in Ropes & Gray's data, privacy, and cybersecurity team, commented on some of the law's probable effects. “Despite the significant hype around generative AI," he wrote, "the legislation has always been intended to focus on a broad range of high-risk uses beyond chatbots, such as facial recognition technologies and profiling systems. Whatever the result of today’s vote, the AI Act is shaping up to be the world’s strictest law on artificial intelligence and will be the benchmark against which other legislation is judged. It remains to be seen whether the UK will have second thoughts about its light-touch approach to regulation in the face of growing public concern around AI, but in any event the AI Act will continue to influence lawmakers in Europe and beyond for the foreseeable future.”
After the measure passed this morning, Craig Burland, CISO at Inversion6, said he expects the regulatory regime to have an impact comparable to that of GDPR. He wrote, "Let the debate begin! Similar to data privacy years ago, the EU has just taken a position at the far end of the spectrum to frame the parameters of the discussion. Putting aside the many challenges of enforcement as well as the ubiquitous use of AI in modern technology projects, the EU has documented intriguing concepts centered on ensuring the validity of the content and proper use cases. Contrast this with Google’s pronouncement last week that focused primarily on protecting the technology itself. What was announced today will shift and transition as the debate plays out in the media and behind closed doors. But, in planting this flag, the EU has started what will be a fascinating dialog that affects businesses and individuals alike."
Eduardo Azanza, CEO of Veridas, also expects the measures to have far-reaching effects, and strongly cautions organizations inclined to think the law won't affect them. “The passing of the Artificial Intelligence Act is a significant moment and should not be underestimated at all. For technologies such as AI and biometrics to ever be successful, it is essential that there is trust from businesses and the wider public," he wrote. "It’s critical that we have established agreed standards and deliverables to ensure that AI and collected biometric data are used responsibly and ethically. There must be clearly defined responsibilities and chains of accountability for all parties, as well as a high degree of transparency for the processes involved. As the UK and US look to introduce their own Artificial Intelligence Act, it is essential they work with the EU to define minimum global standards – only then can we guarantee the ethical use of AI and biometrics. Ultimately, it’s businesses’ duty to responsibly and ethically use AI technology, as its capability to replicate human abilities raises huge concerns. Organizations need to be conducting periodic diagnoses on the ethical principles of AI. Confidence in AI security technology must be based on transparency and compliance with legal, technical, and ethical standards.”
Christopher "Tito" Sestito, co-founder and CEO of HiddenLayer, sees a major role for the cybersecurity sector in the development and implementation of the standards the EU envisions. "The EU AI Act is one step in what we expect to be continued AI regulatory actions globally," he said in an email. "HiddenLayer believes the Cybersecurity community is uniquely prepared to play a key role in the continued development of regulations for AI. We have decades of experience with new technologies, their unique attack types and development of appropriate controls to balance security and technology adoption. We are prepared to work with private and public institutions to understand the impact and to ensure compliance all while maximizing the benefits of AI in their organizations."
Senate Judiciary Committee considers reauthorization of Section 702.
The US Senate Judiciary Committee yesterday convened for a hearing on renewal of Section 702 of the Foreign Intelligence Surveillance Act, which authorizes the warrantless collection of intelligence on foreign targets. Calling for reauthorization, Biden administration officials revealed newly declassified details to demonstrate how Section 702 helped identify the culprit responsible for the 2021 ransomware attack on the Colonial Pipeline, the Washington Post reports. Supporters of 702 offered additional instances in which the ability to search foreign communications swiftly and without a warrant was essential to mitigating the damage of cyberattacks against critical US infrastructure. Paul Abbate, deputy director of the Federal Bureau of Investigation, testified, “As crucial as 702 authority is now, it will only become more important over the next five years, as foreign cyberattacks continue to escalate in sophistication and frequency.” Still, some lawmakers remain unpersuaded, holding firm in their stance that 702 infringes on Fourth Amendment rights and should not be reinstated without modification. Senate Judiciary Chairman Richard J. Durbin, a Democrat of Illinois, stated, “I will only support the reauthorization of Section 702 if there are significant, significant reforms. And that means first and foremost, addressing the warrantless surveillance of Americans in violation of the Fourth Amendment.”
Texas passes data privacy law.
At the end of May, Texas became the latest US state to pass data privacy legislation, joining states like Iowa, Indiana, and Tennessee, which also passed legislation this year. Once signed by Governor Greg Abbott, the Texas Data Privacy and Security Act will go into effect on March 1, 2024, becoming an additional thread in the complicated web that is state-level data protection regulation. As SHRM explains, the bill will apply to entities that conduct business in Texas, produce a product or service consumed by Texas residents, or process the personal data of Texas residents, but it will not apply to small businesses as defined by the US Small Business Administration. That said, all businesses, regardless of size, will be prohibited from selling sensitive data without consent. The legislation gives consumers the right to confirm that a data controller is processing their data, access that personal data, correct any inaccuracies, and delete it if they choose. They also have the right to opt out of having their data collected for targeted advertising, sale, or profiling. The legislation also lays out rules for data collectors, including a requirement that covered entities publish a privacy notice clearly outlining how they process personal data. As well, a data controller must conduct a data protection assessment before undertaking certain higher-risk types of data processing, including targeted advertising or profiling that could lead to unfair treatment. Consumers will be able to submit complaints via an official online mechanism, and the Attorney General will notify the alleged offender, giving them thirty days to cure the alleged violation.
Eric Andrews, VP of Marketing at Securiti, sees the law as an instance of a larger trend in which US states enact their own regulatory regimes in the absence of federal legislation. “Privacy continues to be a key concern for consumers across the nation, and in lieu of a national policy, we continue to see a series of state-specific regulations being proposed and adopted. These rules are key to protecting an individual's rights to their data and ensuring companies handle personal data responsibly," Andrews writes. But this trend renders compliance more challenging. "However, the proliferation of these new data privacy laws also presents a challenging and complex compliance landscape for businesses, as they need to adapt to the requirements of each successive law. Navigating which laws apply, factoring in user residency, data location, global and statewide obligations, and more can be daunting. Failure to comply can result in huge loss such as consumer trust, class-action lawsuits, and hefty fines. Organizations must implement a solution capable of complying with a myriad of data privacy legislations efficiently. To keep up with the ever-growing complexity of managing data risk in this digital era, businesses must strive for an integrated, automated approach to data privacy, rooted in a profound understanding of the data owner and their personal data. This will streamline data protection obligations and automate incident responses to make decisions following applicable laws.”