At a glance.
- Canada launches probe into OpenAI.
- The dangers of the AI race.
- A proposed Cyber Service.
Canada launches probe into OpenAI.
As we noted earlier this week, Italy’s privacy regulator has banned the artificial intelligence chatbot ChatGPT over concerns surrounding the collection and storage of user data without consent. The Office of the Privacy Commissioner of Canada (OPC) announced yesterday that Canada shares those concerns and is launching an investigation into ChatGPT developer OpenAI, BetaKit reports. Germany and Ireland are reportedly considering similar action. As Wired explains, to power ChatGPT’s generative text system, OpenAI scrapes data from social media posts, books, and other sources on the open web, and some of that data can include personal information users have shared online. Privacy regulators are now questioning the legality of such data collection. Tobias Judin, head of international at Norway’s data protection authority, stated, “If the business model has just been to scrape the internet for whatever you could find, then there might be a really significant issue here.” US data privacy rules are murky, but the EU’s General Data Protection Regulation clearly protects personal data even when it’s publicly available online. Italy’s regulator also points out that ChatGPT’s data sweeps include information on minors, that the data collected might be inaccurate, and, perhaps most importantly, that OpenAI has no legal basis for gathering such data to train its technology. Jessica Lee, a partner at the law firm Loeb & Loeb, comments, “How to collect data lawfully for training data sets for use in everything from just regular algorithms to some really sophisticated AI is a critical issue that needs to be solved now, as we’re kind of on the tipping point for this sort of technology taking over.”
The dangers of the AI race.
Remaining on the topic of artificial intelligence: AI is evolving at such a rapid pace that some experts say it has the potential to become smarter than humans, and governments around the world are competing to be the first to fully capitalize on its power. The US currently leads the race in AI equipment production, but China is hot on its heels and is arguably moving faster on AI adoption. Moreover, US immigration and chip export restrictions might inadvertently give China an edge by helping it retain AI talent and making the country less dependent on US hardware. American academics are struggling to keep up with AI’s rapid development, and Foreign Affairs posits that if the US wants to maintain its lead in the field, the federal government will need to devote funding to AI research. The recently released 2023 AI Index indicates that AI development is entering a new phase as powerful tools like ChatGPT and Midjourney become readily available to the general public. The report, a collaborative effort between Stanford University and industry leaders like Google, Anthropic, and Hugging Face, shows that while academia used to be the driving force behind AI, industry is rapidly taking the reins. “In 2022, there were 32 significant industry-produced machine learning models compared to just three produced by academia,” the report states. As The Verge explains, powering AI requires substantial financial resources, and this gives industry the edge over academia.
The report also notes that as AI expands, so do cases of misuse: between 2012 and 2021, incidents of ethical misuse increased twenty-six-fold. AI’s advances come with their share of dangers, and industry and government leaders alike are taking notice. On Tuesday US President Joe Biden told science and technology advisers that the responsibility lies with tech developers to make sure their products are safe before release. He noted that AI could have a positive impact on many world issues, from climate change to disease, but cautioned that its dangers remain to be seen. Reuters says President Biden likened AI to social media, stating, “Absent safeguards, we see the impact on the mental health and self-images and feelings and hopelessness, especially among young people.” CSO Online adds that if AI draws conclusions from inaccurate input, such technologies could spread false information; for instance, a query about a recent Microsoft 365 bug produced vastly different answers from ChatGPT and the AI-powered search engine Bing. Last month the Future of Life Institute drafted an open letter calling for a six-month pause on AI development, arguing that the field is moving too fast to rein in “dangerous” advancements. Wired questions the effectiveness of such a pause, noting that AI research is perhaps too expansive a beast to control, and that there’s no reliable way to limit dangerous progress while still allowing safe advancement.
A proposed Cyber Service.
The Military Cyber Professionals Association (MCPA) is a leading voice among those calling for the US to establish a seventh military Service devoted to cyberspace. Cyberspace, the MCPA argues, is the only operational domain without an aligned Service, and it is sufficiently distinctive to warrant the special attention a dedicated Service would bring. Federal News Network summarizes Congressional sentiment on the question (much of it along the lines expressed by the MCPA) as well as reservations within the Department of Defense, which already has a joint command dedicated to cyberspace with components drawn from multiple Services.
Avishai Avivi, CISO of SafeBreach, wrote to offer some thoughts on the matter:
“When considering this question, we need to first evaluate whether the new proposed branch is dedicated to a specific role and a unique mission. The quick answer is yes. Like the other branches, the Cyber Force would focus on a unique environment/domain – cyberspace. We should also consider that, unlike the other six branches of the military, this domain does not abide by the traditional kinetic properties of conventional warfare. Although cyberattacks can certainly result in kinetic impact, e.g., a power plant exploding due to a malicious computer code injected into the power-generating turbines, or a fuel shortage due to a cyberattack crippling the supply chain…
“Additionally, cyberattacks against the U.S. do not happen in the conventional domain – not by land, not by air, not by sea, and technically not by space. So while the transit medium may be telephone lines, radio communications, underwater communication lines, or satellites, the attack itself is not confined to these mediums. This also brings up the issue of attribution. In the kinetic domains, the attribution of the attack to the enemy is fairly simple. You can see where the tanks are, you can detect airborne threats and track them to their origin, and you can locate enemy combatant vessels in the water. These domains also have the notion of borders and, more importantly, international borders/spaces.
“Cyberspace does not operate by the same rules. Adversaries can, and often do, go through several hops and initiate attacks from a ‘friendly’ space, making it much harder to trace these attacks back to their origin.
“These differences also require a different mindset and approach to defensive and offensive capabilities. It should be noted that there is quite a bit of affinity between this and the military intelligence’s electronic warfare branch and the Navy’s cyber warfare. There might be room to discuss moving the latter into the new Cyber Force.
“While the current U.S. Cyber Command can be viewed as addressing these aspects, it still comes across as an overlay command rather than a force dedicated to a unique domain. A newly created Cyber Force would require a different command structure, reflecting its unique challenges and opportunities.”