At a glance.
- Australia decides not to require age verification software for adult sites.
- Clean Code could be key to safer software.
- UK officials issue report on AI regulation.
- NSA selects new Deputy Director.
Australia decides not to require age verification software for adult sites.
Earlier this week the Australian federal government released its roadmap for age verification for online pornographic material, and lawmakers have decided not to force sites to employ age verification technology as anticipated. Instead, the Guardian reports, eSafety commissioner Julie Inman Grant will collaborate with industry leaders to develop an industry code aimed at better educating parents about filtering software that limits the kind of content minors can access. The government explained the choice by saying that the various types of age verification technology it had reviewed raised significant concerns around privacy, security, effectiveness, and implementation.
“The roadmap makes clear that a decision to mandate age assurance is not yet ready to be taken,” the government’s response stated. While larger sites like Pornhub might be equipped to support such age assurance tech, smaller, local creators might not have the resources to do so. “Between these two ends of the spectrum are a variety of businesses with different business models and levels of size, maturity, capacity, and capability to adopt technological measures to promote children’s safety,” the report states. “What constitutes appropriate steps for one provider might create an undue burden for another.” The government also announced it will launch an independent statutory review of the Online Safety Act in 2024, and that it will look at the UK’s approach to age assurance as a potential model for future actions. The report further noted that many minors were using pornographic web content to make up for a lack of formal sex education, and officials stated the government was devoting $83.5m to supporting age-appropriate, evidence-based sex education.
Clean Code could be key to safer software.
Washington has been working to make software companies take greater responsibility for vulnerabilities in their products, rather than passing the burden on to consumers. The Biden administration’s National Cybersecurity Strategy, released earlier this year, states, “Too many vendors ignore best practices for secure development, ship products with insecure default configurations or known vulnerabilities, and integrate third-party software of unvetted or unknown provenance.” The EU’s Cyber Resilience Act likewise calls for more liability to be placed on software vendors, meaning such companies need to prioritize products that are secure by design. The SD Times offers a look at what vendors can do to adhere to the new regulations. Developers should exercise caution when relying on artificial intelligence to generate code, as doing so can unintentionally introduce security flaws. In addition, adherence to Clean Code principles, an approach that hinges on collaboration between developers, can help teams produce reliable, safe software by identifying and addressing potential vulnerabilities early in the development lifecycle.
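To make the “insecure default configurations” point concrete, here is a minimal illustrative sketch (the `ServerConfig` type, its fields, and `load_config` are hypothetical, not from any cited product or regulation) of what a secure-by-default configuration object can look like: a consumer who changes nothing still gets the safe behavior, and misconfigurations are rejected early rather than silently ignored.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServerConfig:
    # Secure defaults: doing nothing yields the safe configuration.
    require_tls: bool = True        # encrypted transport unless explicitly disabled
    debug_mode: bool = False        # no verbose error pages in production
    allowed_origins: tuple = ()     # deny cross-origin requests by default

def load_config(overrides: dict) -> ServerConfig:
    """Apply user overrides, rejecting unknown keys up front.

    Silently ignoring a misspelled key (e.g. 'debugmode') is a classic
    source of accidental insecure deployments.
    """
    unknown = set(overrides) - set(ServerConfig.__dataclass_fields__)
    if unknown:
        raise ValueError(f"unknown config keys: {sorted(unknown)}")
    return ServerConfig(**overrides)
```

The design choice illustrated is simply that safety should be the path of least resistance: an operator must opt out of TLS explicitly, and typos in configuration fail loudly at load time instead of at incident time.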
UK officials issue report on AI regulation.
A new report from the UK House of Commons Science, Innovation, and Technology Committee (SITC) recommends that the government speed up its implementation of legislation regulating the use of artificial intelligence. As CSO Online explains, the interim report lists twelve challenges that must be addressed in order to develop an effective AI framework that minimizes harm without inhibiting the technology’s inherent benefits. Among them are AI’s perpetuation of bias and misrepresentation, privacy issues, reliance on large datasets and significant computing power, and a lack of transparency.
The report states, "Without a serious, rapid, and effective effort to establish the right governance frameworks - and to ensure a leading role in international initiatives - other jurisdictions will steal a march and the frameworks that they lay down may become the default even if they are less effective than what the UK can offer." To promote an international understanding of AI, the report recommends establishing a forum of countries with similar liberal, democratic foundations to defend against cyberthreats, including those from foreign actors who might exploit AI technology.
NSA selects new Deputy Director.
The Record reports that Wendy Noble has been chosen as the new Deputy Director of the US National Security Agency (NSA). Noble, who will replace outgoing Deputy Director George Barnes, comes from a role as a senior Department of Defense official focused on international partnerships, and previously served as NSA’s executive director. Current NSA Director General Paul Nakasone, who is set to retire in the coming months, stated, "I am confident in Wendy's ability to lead NSA as the next Deputy Director. She has consistently been recognized for her outstanding contributions and dedication to our mission."