At a glance.
- More on AI regulation, in the EU and elsewhere.
- Two US Senators say Section 230 does not apply to AI.
More on AI regulation, in the EU and elsewhere.
As we discussed yesterday, on Wednesday the European Parliament voted overwhelmingly in favor of the European Union Artificial Intelligence Act, the world’s first comprehensive legislation regulating AI. As government officials and tech experts alike warn that advances in AI could lead to increased surveillance, the spread of misinformation, and discriminatory profiling, the EU is taking the lead in advancing legislation to rein in the powerful technology. The Washington Post explains that the AI Act will categorize AI products by risk, subjecting the riskiest applications to the most stringent regulatory scrutiny. Developers of generative AI, like the overwhelmingly popular chatbot ChatGPT, will also be required to be more transparent about their training processes and to disclose the use of copyrighted material.
Francine Bennett, acting director of London’s Ada Lovelace Institute, called the proposed legislation an “important landmark.” Bennett told the New York Times, “Fast-moving and rapidly repurposable technology is of course hard to regulate, when not even the companies building the technology are completely clear on how things will play out. But it would definitely be worse for us all to continue operating with no adequate regulation at all.” The BBC spoke with Margrethe Vestager, the EU's competition chief and European Commission's executive vice president, who warned that the potential for discriminatory outputs from AI tech is at the top of her list of concerns. "If it's a bank using it to decide whether I can get a mortgage or not, or if it's social services on your municipality, then you want to make sure that you're not being discriminated [against] because of your gender or your colour or your postal code," she said. She also said a global approach to AI regulation is necessary, though it might be a long time coming. "Let's start working on a UN approach. But we shouldn't hold our breath," she said. "We should do what we can here and now." She also warned of the perils of AI-fueled misinformation campaigns, especially when it comes to elections. “If your social feed can be scanned to get a thorough profile of you, the risk of being manipulated is just enormous," she said, "and if we end up in a situation where we believe nothing, then we have undermined our society completely."
Ani Chaudhuri, CEO of Dasera, joins other experts who see the regulatory move as having global significance. "European Union lawmakers have taken a decisive step in shaping the future of artificial intelligence by adopting the E.U. AI Act. This landmark legislation challenges the power of American tech giants and sets unprecedented restrictions on AI usage. This move is long overdue as it prioritizes data security and protects individuals from potential harm caused by unchecked AI systems," Chaudhuri writes. "The E.U. AI Act introduces essential guardrails to prevent deploying AI systems that pose an 'unacceptable level of risk.' By banning tools like predictive policing and social scoring systems, the legislation safeguards against intrusive and discriminatory practices. Furthermore, it limits high-risk AI applications, such as those that could influence elections or jeopardize people's health."
The attention paid to generative AI is particularly noteworthy. "One significant aspect of the legislation is its focus on generative AI, including systems like ChatGPT. Requiring content generated by such systems to be labeled and mandating the publication of summaries of copyrighted data used for training promotes transparency and protects intellectual property rights. These measures address growing concerns and ensure responsible AI development. While some voices express concern over the potential impact on AI development and adoption, the European Parliament's determination to lead the global dialogue on responsible AI should be applauded. European lawmakers have proactively developed comprehensive AI legislation that accounts for evolving technologies and potential risks. The E.U.'s commitment to data privacy, tech competition, and social media regulation aligns with its ambitious AI regulations. This cohesive framework ensures that European companies adhere to high standards, promoting consumer trust and privacy. It also strengthens Europe's position as the global tech regulator, setting precedents that will shape international tech policies."
The EU's regulations, Chaudhuri believes, should spur US action. "As Europe leads in establishing AI standards, the United States must step up its efforts to keep pace. Congress must pass comprehensive legislation addressing AI and online privacy. Falling behind Europe risks hindering innovation and surrendering the opportunity to lead the global debate on AI governance. We believe that responsible AI development should be a global endeavor. As Europe sets the bar, it is incumbent upon the United States to catch up and play an active role in shaping AI policies. We can strike the right balance and ensure AI benefits society by fostering innovation while safeguarding individual rights. While concerns and challenges exist, the E.U. AI Act represents a significant step toward building a responsible and secure AI ecosystem. Europe's commitment to protecting individuals and upholding data security sets an example for the world. As the AI landscape continues to evolve, we must embrace robust regulations that foster trust, innovation, and global cooperation."
Two US Senators say Section 230 does not apply to AI.
Remaining on the topic of artificial intelligence, Axios takes a look at the ongoing debate among US lawmakers over whether AI-created material should qualify for legal immunity under Section 230 of the Communications Decency Act. Section 230 shields platforms from lawsuits over third-party content, and without it social media likely wouldn't exist. Whether the section also applies to AI could have a far-reaching impact on the swiftly growing technology. Some legal experts say denying AI developers protection under Section 230 would hamper innovation, as fear of litigation would stifle AI makers' ability to build effective products. Against that backdrop, US Senators Josh Hawley and Richard Blumenthal yesterday introduced the succinctly named "No Section 230 Immunity for AI Act," a bipartisan bill stating that Section 230 does not apply to AI-generated work. As Hawley’s office explains, the measure would amend Section 230 "by adding a clause that strips immunity from AI companies in civil claims or criminal prosecutions involving the use or provision of generative AI.” Blumenthal told Axios, "AI companies should be forced to take responsibility for business decisions as they’re developing products—without any Section 230 legal shield. This legislation is the first step in our effort to write the rules of AI and establish safeguards as we enter this new era."