At a glance.
- New year, new post-quantum cryptography standards.
- OSS and SBOM guidance from the National Security Agency.
- Trusting misinformation.
- Is effective altruism as altruistic as it claims to be?
- The impact of the European Data Act.
New year, new post-quantum cryptography standards.
As we welcome the new year, the digital community also welcomes a new era in data encryption. At the end of 2023, the US National Institute of Standards and Technology (NIST) began the process of finalizing three Post-Quantum Cryptography (PQC) algorithms designed to replace RSA and other long-standing algorithms used to encrypt digital communications. As quantum computing technology advances, it’s only a matter of time before these older encryption techniques become useless in the face of quantum decryption.
Now, as Breaking Defense explains, it’s up to government agencies and private companies to remove the outdated algorithms and replace them with the NIST-approved PQC protocols. And it’s a race against time, as rivals might already be harvesting data that’s unprotected by PQC and holding onto it until a quantum computer can crack the code. In fact, NIST senior cybersecurity engineer Bill Newhouse warns that any data that’s already fallen prey to this “harvest now, decrypt later” strategy may be impossible to protect after the fact. At a recent event hosted by the Advanced Technology Academic Research Center, Newhouse stated, “This migration [to PQC] should be the biggest one ever undertaken.”
That said, organizations technically cannot implement the new protocols yet, at least not until they’re officially finalized. The algorithms must still undergo a slate of adjudication and validation processes that, according to Newhouse, could take “months or years.” In the meantime, organizations are strongly encouraged to take inventory of their software and create a detailed list of the applications where RSA or other outdated encryption algorithms are in use. And that could be a very long list. Wanda Jones-Heath, principal cyber advisor for the Air Force, explains, “It impacts everything we do, from switches to routers to our most prized possessions, our critical weapons systems. If we had not started this two years ago, we would be even further behind.”
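For teams starting that inventory, a first pass can be as simple as scanning source trees for mentions of quantum-vulnerable primitives. The Python sketch below is purely illustrative, with hypothetical patterns and a placeholder path; a real audit would also have to cover binaries, certificates, TLS configurations, and vendor dependencies.

```python
import re
from pathlib import Path

# Illustrative patterns for quantum-vulnerable primitives; a real
# inventory would cover many more algorithms, file types, and configs.
LEGACY_PATTERNS = {
    "RSA": re.compile(r"\bRSA\b"),
    "ECDSA": re.compile(r"\bECDSA\b"),
    "Diffie-Hellman": re.compile(r"\bDiffie[- ]?Hellman\b", re.IGNORECASE),
}

def inventory_legacy_crypto(root):
    """Map each legacy algorithm name to the source files that mention it."""
    findings = {name: [] for name in LEGACY_PATTERNS}
    for path in Path(root).rglob("*.py"):  # extend to other languages as needed
        text = path.read_text(errors="ignore")
        for name, pattern in LEGACY_PATTERNS.items():
            if pattern.search(text):
                findings[name].append(str(path))
    return findings

if __name__ == "__main__":
    # "src" is a placeholder for the root of the codebase being audited.
    for algo, files in inventory_legacy_crypto("src").items():
        print(f"{algo}: {len(files)} file(s) reference it")
```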
OSS and SBOM guidance from the National Security Agency.
Last month the US National Security Agency (NSA) released guidance on software supply chain security, focusing on best practices for open-source software (OSS) and software bills of materials, or SBOMs. While this publication is not the first to concentrate on securing the software supply chain – an increasingly attractive target for malicious actors – it builds on guidance previously issued by the White House and requirements from federal agencies like the Office of Management and Budget.
CSO Online provides an overview of the guidelines, which are broken down into four main areas: open-source software management; creating and maintaining a company-internal secure open-source repository; open-source software maintenance, support, and crisis management; and SBOM creation, validation, and artifacts. Highlights include primary considerations for using OSS, such as evaluating OSS components for vulnerabilities, ensuring that vulnerable components aren’t included in products, and staying abreast of licensing considerations and export controls, especially given the continued evolution of EU regulations. The publication also notes that SBOMs not only serve as a way of inventorying OSS components but can also provide increased transparency for downstream consumers. (NSA urges organizations to use the minimum element requirements documented in the National Telecommunications and Information Administration’s “Minimum Elements for an SBOM.”) Regarding SBOM creation, NSA acknowledges that SBOMs can be created at various phases of the software development lifecycle, and as such, the guidance breaks SBOM tools into four categories: source, binary, package, and runtime extractors.
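As a rough illustration of what those minimum elements cover, the sketch below models a single SBOM component record in Python. The field names follow NTIA’s seven minimum data fields, but the record format itself is hypothetical; production tooling would emit a standard format such as SPDX or CycloneDX.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class SbomComponent:
    """One component record covering NTIA's seven minimum data fields."""
    supplier: str            # Supplier Name
    name: str                # Component Name
    version: str             # Version of the Component
    unique_ids: list         # Other Unique Identifiers (e.g., a package URL)
    dependencies: list       # Dependency Relationship (what this depends on)
    author: str              # Author of SBOM Data
    timestamp: str = field(  # Timestamp of when the SBOM data was assembled
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical component, for illustration only.
component = SbomComponent(
    supplier="Example Corp",
    name="libexample",
    version="2.4.1",
    unique_ids=["pkg:pypi/libexample@2.4.1"],
    dependencies=["zlib@1.3"],
    author="build-pipeline",
)
print(json.dumps(asdict(component), indent=2))
```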
To better maintain and protect these OSS components, NSA recommends adherence to secure code signing practices: signing all code, using proven cryptography, and securing the code signing infrastructure. Building on previous guidance like NIST's Incident Handling Guide, NSA also calls for organizations to have a crisis management plan at the ready.
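To make the code signing recommendation concrete, here is a minimal sketch of the sign-then-verify flow using the widely used Python `cryptography` package with Ed25519 keys. It illustrates the concept only, not the NSA’s prescribed setup; in practice the private key would live in an HSM or a managed signing service rather than being generated next to the artifact it signs.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Sketch only: real signing keys belong in an HSM or managed signing
# service, never generated ad hoc alongside the build.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

artifact = b"contents of the release artifact"
signature = private_key.sign(artifact)

# Consumers verify before installing; verify() raises InvalidSignature
# if either the artifact or the signature has been tampered with.
public_key.verify(signature, artifact)
print("signature verified")
```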
Trusting misinformation.
As artificial intelligence, social media “experts,” and political rumors become more prevalent than genuine news on the internet, a plague of misinformation has descended upon the digital world. The New York Times offers an in-depth look at the spread of fake news through the lens of an element that simply can’t be defined in digital terms: trust. At the New York Times’s DealBook Summit in November, Elon Musk, the billionaire owner of Twitter (ahem, X), was asked why the public should trust him, especially given that the social media platform has devolved into a haven for misinformation and even hate speech since he took over.
His response? “You could not trust me. It is irrelevant.” Asked about his goals for X, Musk gave an even more ambiguous reply: “My aspiration for the X platform is that it is the best source of truth, or the least inaccurate source of truth.”
Thomas Rid, a political scientist at Johns Hopkins and expert on political disinformation, says the internet has made the spread of falsehoods “harder to control, harder to steer and harder to isolate engineered effects.” Indeed, over the past few years there's been a running battle over who or what is the ultimate holder of the “truth,” even as the web makes the word harder to define.
Journalist Joseph Bernstein recently wrote about the system of think tanks, media companies, and academic centers that has surfaced since the Trump era to separate fact from fiction. “Big Disinfo,” as he calls it, has received support from Silicon Valley, where tech companies benefit from searching for a tangible fix to something that, the author posits, is far too ethereal to pin down. “Recognizing how deep this crisis goes leaves us in a difficult place. Getting people to reject demonstrable lies isn’t simply a matter of bludgeoning them with facts,” the writer states.
Is effective altruism as altruistic as it claims to be?
As advances in artificial intelligence have made the technology simultaneously more accessible and more powerful than ever before, an interest group is vying to influence how Washington approaches AI regulation. Politico discusses the rise of “effective altruism,” or EA, a movement out of Silicon Valley focused on ensuring that the power of AI is not used for evil. Some insiders say these self-proclaimed good guys have a cult-like fixation on the idea that AI will lead to the downfall of society, and soon.
EA advocates like Eliezer Yudkowsky believe an AI superintelligence capable of outsmarting humans is lurking just around the corner and could bring about the extinction of humanity, perhaps through the creation of a super bioweapon. Emilia Javorsky, director of the futures program at the EA-founded Future of Life Institute, states, “If we don’t start drawing the lines now, the genie’s out of the bottle — and it will be almost impossible to put it back in.”
However, some experts say the EA movement is taking its good intentions too far. While regulators are concerned with more tangible issues, like the likelihood that AI could promote racial profiling or spread disinformation, EA crusaders are focused on AI as an existential crisis. Recovered EA advocate Robin Hanson, an economist at George Mason University, explains, “The EA people stand out as talking about a whole different topic, in a whole different style. They’re giving pretty abstract arguments about a pretty abstract concern, and they’re ratcheting up the stakes to the max.” The regulations EAs are calling for include stricter reporting rules for advanced AI models, tighter licensing requirements for AI companies, restrictions on open-source models, and even a total pause on certain large-scale AI projects.
And they have the Big Tech funding to back up their movement. EA nonprofit the Future of Life Institute is funded in part by a foundation financed by tech billionaire Elon Musk. And Facebook co-founder Dustin Moskovitz and his wife Cari Tuna are the benefactors behind Open Philanthropy, a research and grantmaking foundation that has helped put EA proponents in congressional offices and federal agencies. Furthermore, some insiders say the EA movement’s real motive is to distract the government from AI’s more realistic, immediate negative impacts and decrease competition in the tech field. Hanson explains, “Many [EAs] do think that fewer players who are more carefully watched is safer, from their point of view. So they are not that eager to reduce concentration in this industry, or the centralization of power in this industry.”
Whether the EA movement will have a real impact on government regulation remains to be seen. One AI and biosecurity researcher in Washington stated, “You can wow a policymaker in your first two meetings with scary hyperbole. But if you never show up with something they can do about it, then it falls off their mind.” An opposing movement, the self-described “effective accelerationists,” has already cropped up and is looking to limit EA’s influence in Washington. And last month Meta and IBM joined forces to establish an international consortium focused on promoting the development of open-source AI as a foil to the EA movement.
The impact of the European Data Act.
The European Data Act, which was published in the Official Journal of the European Union just before Christmas, is scheduled to come into effect on January 11. By facilitating data sharing in order to establish a fair and competitive data market, the law stands to protect consumers and their data while also benefiting data-collecting businesses and aftermarket service providers alike. cyber/data/privacy insights provides a primer on what the new law means for the EU’s data strategy. To make data accessible to users, third parties, and public sector bodies under certain conditions, the European Data Act imposes obligations on manufacturers of connected devices and providers of related services.
The Data Act also addresses unfair contractual terms and lays down new rules making it easier for customers to switch between data processing providers without undue delay or cost. Additionally, the law extends the international data transfer obligations established under the General Data Protection Regulation (GDPR) and the Schrems II ruling to data processing service providers, requiring them to provide the safeguards necessary to protect data from incompatible or unlawful access by third-country governments.
For instance, third parties receiving data at the request of a user will be allowed to use the data only for the agreed-upon purposes and will be required to erase it when it is no longer needed. The Data Act’s third chapter sets out the conditions, and the compensation, under which data holders can make data available to data recipients, and stipulates that data holders cannot discriminate between comparable categories of data recipients when arranging to make data available. The European Commission predicts that by promoting innovative services and healthier competition in aftermarket services, the Data Act will add 270 billion euros to EU gross domestic product by 2028. Although the law comes into effect later this month, most of its rules will begin to apply in September 2025, with some taking effect as late as 2027.