At a glance.
- The European Parliament’s position on the AI Act.
- CISA and MITRE release this year’s who’s-who of software weaknesses.
The European Parliament’s position on the AI Act.
If all goes according to plan, the EU will be the first jurisdiction to adopt legislation focused on the regulation of artificial intelligence. Earlier this month the European Parliament adopted its negotiating position on the Artificial Intelligence Act, taking the measure one step closer to becoming law. Parliament's stance also offers a better understanding of what the final measure could look like, and Cyber/Data/Privacy Insights offers an overview. Parliament has expanded its initial list of AI practices designated as posing an unacceptable level of risk, which now includes real-time remote biometric identification systems in public areas, biometric systems that categorize sensitive characteristics like race or religion, predictive policing systems, emotion recognition systems in schools or workplaces, and untargeted scraping of facial images. AI foundation models will be required to mitigate potential risks to health, safety, or fundamental rights before release, and generative AI systems must adhere to a set of transparency requirements which will, among other things, make it clearer when content is AI-generated. Parliament would like to allow exemptions for research activities and AI components provided under open-source licenses. The position also calls for the establishment of at least one regulatory body in every EU member state to provide supervision and testing before AI products are offered to the public. The AI Act is expected to go into effect by the beginning of 2024.
CISA and MITRE release this year’s who’s-who of software weaknesses.
The US Cybersecurity and Infrastructure Security Agency (CISA) reports that the 2023 Common Weakness Enumeration (CWE) Top 25 Most Dangerous Software Weaknesses list was released yesterday. Developed by the Homeland Security Systems Engineering and Development Institute, which is sponsored by the Department of Homeland Security and operated by MITRE, the list is calculated by analyzing CWE weaknesses discovered and reported over the past two years. As Bleeping Computer explains, rankings are based on each weakness's severity, frequency, and the significance of its impact. CISA warns, “These weaknesses lead to serious vulnerabilities in software. An attacker can often exploit these vulnerabilities to take control of an affected system, steal data, or prevent applications from working.” The agency goes on to urge developers to review the list in order to determine what mitigations should be adopted. The CWE program will be issuing a series of articles detailing how the list can be used to shift the balance of cybersecurity risk from the public to developers.
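To make the severity-and-frequency ranking described above concrete, here is a minimal sketch of the kind of scoring MITRE's published CWE Top 25 methodology describes: each weakness's frequency (how many recent CVEs map to it) and its average CVSS severity are normalized against the dataset's minimum and maximum, then multiplied and scaled. The CVE counts and CVSS averages below are made-up illustrative numbers, not figures from the actual 2023 dataset.

```python
def cwe_scores(data):
    """Sketch of a CWE Top 25-style score.

    data: dict mapping CWE id -> (cve_count, avg_cvss)
    Returns: dict mapping CWE id -> score in [0, 100].
    """
    counts = [count for count, _ in data.values()]
    sevs = [sev for _, sev in data.values()]
    cmin, cmax = min(counts), max(counts)
    smin, smax = min(sevs), max(sevs)

    scores = {}
    for cwe, (count, sev) in data.items():
        # Min-max normalize frequency and average severity, then combine.
        freq = (count - cmin) / (cmax - cmin) if cmax > cmin else 0.0
        severity = (sev - smin) / (smax - smin) if smax > smin else 0.0
        scores[cwe] = round(freq * severity * 100, 2)
    return scores

# Illustrative (fabricated) inputs: count of mapped CVEs, average CVSS.
sample = {
    "CWE-787": (4000, 8.5),  # out-of-bounds write
    "CWE-79":  (3500, 5.8),  # cross-site scripting
    "CWE-89":  (1200, 9.1),  # SQL injection
}
print(cwe_scores(sample))
```

One consequence of multiplying the two normalized factors, visible even in this toy data, is the bias Jeff Williams notes below: a weakness that is very common but comparatively low-severity (or vice versa) can score near zero, so the list rewards weaknesses that are both frequent and severe in the CVE dataset.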
Jeff Williams, co-founder and CTO at Contrast Security, appreciates the effort, but thinks the research that leads to the compilation of such lists tends to be influenced by certain biases:
“First off, I appreciate the hard work done by the CWE-Top-25 team to gather the data and do the analysis. I’m generally skeptical of the results obtained by looking at CVEs, because the dataset is so small – only 7,466 CVEs in this study. By comparison, the OWASP Top Ten dataset includes vulnerabilities from over 500,000 applications and APIs – roughly 20 million vulnerabilities overall. Because most of the CVE data comes from incredibly talented volunteer researchers, there are biases towards easy to find issues like XSS and against harder to find issues like authorization, weak crypto, etc.
“But generally, the CWE-Top-25 list is in alignment with OWASP and other similar lists. We’ve had similar lists since I wrote the first OWASP Top Ten in 2002. And that’s the real problem with these lists. Neither the average number of vulnerabilities in a piece of software nor the types of those vulnerabilities has significantly changed in 20 years. After 20 years, maybe people aren’t thinking of these vulnerabilities as res ipsa loquitur that would allow the assumption of negligence simply based on their presence. Instead, these oopsies seem to be tolerated and frequently ignored.
“The original idea with these Top-N vulnerability lists was to stamp out some of these items and ‘raise the bar’ every few years to improve the industry. Unfortunately, that doesn’t seem to be working. Perhaps the idea of Top-N lists inadvertently creates a ceiling that nobody seriously tries to meet instead of a floor that nobody should allow.”