At a glance.
- European Parliament debates AI regulation.
- What to expect from Australia's new privacy law.
- The challenges of online age verification.
European Parliament debates AI regulation.
With the swift growth of artificial intelligence, governments across the globe have turned their attention to regulating the powerful technology. Wired offers a look at the EU’s efforts to establish new laws that rein in AI while allowing for innovation and benefiting from the technology’s growth. Dan Nechita, head of cabinet for one of the two rapporteurs leading negotiations over the EU’s proposed new AI law, says, “We had an ideological divide between those who would want almost everything to be considered high-risk and those who would prefer to keep the list as small and precise as possible.” As the General Data Protection Regulation has demonstrated, the EU stands as a role model for world governments developing data legislation, and many are looking to Europe for guidance on how to approach AI regulation. So far, MEPs have developed a list of AI practices that will likely be banned: social scoring, predictive policing, indiscriminate scraping of internet images, and real-time biometric recognition in public spaces (though some members of the conservative European People’s Party argue biometrics should not be prohibited). Other activities, like using AI in migration processes, have been labeled high-risk. The proposal also calls for transparency from companies on how they train their AI, and a new body will be established to enforce the rules. With big tech companies lobbying for their own interests, insiders say the law will be a compromise. German MEP Sergey Lagodinsky of the left-wing Greens group says, “Of course, there are people who think the less regulation the better for innovation in the industry. I beg to differ. We want it to be a good, productive regulation, which would be innovation-friendly but would also address the issues our societies are worried about.” On May 11, the European Parliament will vote on the new AI rules, which will then go to EU member states for negotiation.
What to expect from Australia’s new privacy law.
In February the Australian Attorney-General’s Department released the Privacy Act Review Report, an examination of the country’s current privacy law, the Privacy Act 1988. Through 116 proposals, the report recommends an overhaul of the decades-old legislation to bring it into the digital era and make it more comparable to the EU’s General Data Protection Regulation (GDPR). JDSupra discusses the proposals most likely to impact businesses; while some of the changes would streamline privacy practices, others would result in increased regulation. The report recommends expanding the scope of the Privacy Act by changing the definition of the term “personal information” to bring it more in line with the GDPR’s definition of “personal data.” It also calls for strengthening privacy protections by requiring that all data collection, use, and disclosure be “fair and reasonable” and that all entities carry out privacy impact assessments before engaging in higher-risk activities such as handling the data of minors or employing geolocation tracking. Other proposed measures focus on protecting children and regulating the use of automated decision-making. As the reforms near finalization, companies should prepare for changes to their global privacy assessments, new rules on cross-border data transfers, and an increased focus on enforcement. Consultation on the report closed on 31 March 2023, but some proposals have been flagged for further consideration.
The challenges of online age verification.
In order to ensure that social media users under the age of 18 have parental consent, the US states of Utah and Arkansas have already passed laws requiring platforms to verify user ages, and at least seven other states are considering similar measures. As well, a bipartisan group of senators has introduced federal legislation that would require social media platforms to use age-verification technology. However, some experts say the limitations of current age-verification technology make such rules difficult to implement. Bailey Sanchez, policy counsel at the Future of Privacy Forum, told CyberScoop, “From a technical standpoint, there’s just not a lot of options out there that get companies in line with what lawmakers would like to be done right now.” Many platforms already set age limits in their terms of service, but these are often ignored, so age must be determined through self-declaration, age estimation, or age verification. Julie Dawson, director of regulatory and policy at UK-based identity verification platform Yoti, says age estimation software, which uses biometrics to approximate a user’s age, is her clients’ most popular choice, and big tech companies like Meta have already started experimenting with such software. Experts also warn that a secure, trusted age-verification infrastructure does not currently exist and that many age-verification methods rely on collecting user data like government IDs, making them an inherent privacy risk. Zack Martin, senior policy adviser at the law firm Venable, said, “Without the digital identity infrastructure to enable this in a privacy-enhancing way, this proposed legislation is putting a lot out there that is going to lead to bigger privacy issues down the line.”