Welcome to the CAVEAT Weekly Newsletter, where we break down some of the major developments happening worldwide in cybersecurity, privacy, digital surveillance, and technology policy.
At 1,850 words, this briefing is about an 8-minute read.
At a Glance.
- Chinese researchers developing a military-focused AI model using Meta’s Llama.
- With Trump’s election, expect antitrust efforts to shift directions.
Chinese researchers are using Meta’s Llama to develop an AI model for military use.
The News.
Over the weekend, reports emerged that several top Chinese research institutions have used Meta’s publicly available Llama model to create an artificial intelligence (AI) tool for military purposes. The model, known as “ChatBIT,” was built on an early version of Llama by six Chinese researchers from three different institutions. These reports cited how ChatBIT was specifically “optimized for dialogue and question-answering tasks in the military field.” Additionally, the research found that ChatBIT outperformed some other AI models that were roughly 90% as capable as ChatGPT-4.
Sunny Cheung, an associate fellow at the Jamestown Foundation, said of this research that “it’s the first time there has been substantial evidence that PLA military experts in China have been systematically researching and trying to leverage the power of open-source LLMs…for military purposes.” While this development is concerning, researchers did note that ChatBIT incorporated only 100,000 military dialogue records, a relatively small number compared to other models.
In response to these research findings, Meta’s director of public policy, Molly Montgomery, stated that “any use of our models by the People’s Liberation Army is unauthorized and contrary to our acceptable use policy.” Another Meta spokesperson stated that “in the global competition on AI, the alleged role of a single, and outdated, version of an American open-source model is irrelevant when we know China is already investing more than a trillion dollars to surpass the US on AI.”
The Knowledge.
This research is representative of a problem that both the Biden administration and Congress have been attempting to address over the past several months. With AI’s rapid proliferation and advancement, United States (US) officials have been scrambling to craft comprehensive legislation that ensures the technology is safely developed and deployed while also ensuring that the US maintains its competitive advantage over other nations. While the federal government has been unable to pass any comprehensive AI legislation, federal agencies have implemented numerous export rules explicitly aimed at limiting the export of sensitive technologies and equipment.
Since 2023, the Biden administration has steadily implemented numerous export rules and closed trade loopholes to keep sensitive technologies within the nation. For example, this past September, the Department of Commerce released export rules aimed at controlling exports of quantum computers and components, advanced chipmaking tools, and high-bandwidth chips, among other technologies. In another instance, in 2023, the administration addressed several loopholes that had allowed chipmakers to sell advanced semiconductors to China. These are just two instances of the administration’s efforts to limit technology exports. Aside from creating new rules and closing loopholes, the administration and Congress have also used their influence to adjust significant business deals.
Earlier this year, Microsoft announced a partnership with a Middle Eastern technology firm, G42, under which the two companies planned to work together to develop emerging technologies. However, the deal was heavily criticized by both political parties, which cited concerns about G42’s relationships with China as well as the lack of security measures surrounding the critical technologies involved. After the deal faced significant pressure, Microsoft announced in August that it would scale back the arrangement to increase technology security, and G42 cut many of its ties with China to appease US concerns.
The Impact.
While no formal actions have been taken to better manage open-source AI models at this time, this research reflects many of the concerns that US officials have voiced for years. Although the US currently maintains its competitive edge in AI over many other nations, critics have been vocal about hostile actors gaining access to these technologies, often through legal means, and using them to undermine US advancement or potentially threaten national security.
AI developers and policymakers need to understand that this emerging technology has become a critical focus for all nation-states. While the administration is likely to continue implementing export restrictions, until comprehensive legislation is passed, open-source models represent a vulnerability that hostile actors can exploit for their own ends. Even in this example, Chinese researchers used an outdated open-source model to create a military-focused AI that yielded notable results. Developers and companies that create and distribute open-source AI models need to ensure their acceptable use policies are properly enforced not just on their latest and most popular models, but also on older models that can still be exploited for malicious use.
Given the US election, future US antitrust efforts remain unclear.
The Knowledge.
With Trump’s reelection, many of the current administration’s policies are expected to shift once he takes office again in early 2025. One of the key areas likely to change is how the administration pursues antitrust policy. For greater context, over the past several years, the Biden administration has pursued several antitrust lawsuits against major technology corporations and has implemented new policies aimed at better enforcing antitrust laws.
Regarding existing lawsuits, the Biden administration has several major cases pending against companies including Google, Apple, and Amazon. In Google’s case, two lawsuits have been filed; in one of them, Google was found to have violated antitrust law and is now awaiting a remedies hearing. While experts do believe that the incoming Trump administration will continue these lawsuits, some of which originally began during his first administration, it is unclear what outcomes the incoming administration will aim to achieve.
Taking a deeper dive into the case Google lost: while a remedies trial is planned, experts had believed that, under a Harris administration, a likely outcome would have been breaking up the company to some degree due to its dominance in online search. That outcome already seems uncertain, as last month Trump commented on the case, asking, “if you do that, are you going to destroy the company?” He continued, stating, “What you can do without breaking it up is make sure it’s more fair.” William Kovacic, a law professor at George Washington University, noted that the Trump administration will have time to change course, since the trial is likely to begin in April 2025 with a final ruling most likely coming in August.
The Impact.
With Trump’s reelection, the nation will likely see a notable policy shift across many sectors, with antitrust enforcement likely returning to how it was handled previously. While it is unclear at the moment how these policy changes will affect the pending lawsuits, what is clear is that the incoming administration will change not only how it approaches these lawsuits but also how it addresses mergers and acquisitions and antitrust policy in general.
One likely change involves the merger review guidelines crafted by the Department of Justice’s Antitrust Division in 2023. Under the incoming administration, these guidelines will likely be scrapped, given that they gave agencies greater ability to pursue more aggressive merger enforcement. Another key impact area revolves around noncompete agreements. With over twenty percent of US workers having signed a noncompete clause, these contracts faced greater scrutiny under the current administration, and earlier this year the Federal Trade Commission (FTC) announced a rule that would ban them. While the FTC is currently appealing a court ruling blocking this rule, under the Trump administration the rule could effectively be abandoned if the appointed Chair decides to drop the case.
During the Biden administration, efforts to better enforce antitrust laws made significant progress as the administration pursued lawsuits and crafted new policies. While the Trump administration has not expressly detailed its plans for these efforts, businesses and citizens alike should expect some of these policies to continue. Given the previous Trump administration’s efforts to bring merger cases, it is likely that many of these lawsuits will continue, with the notable difference being how the cases could be settled and what remedies could be imposed. Regarding policies, it is likely that the incoming administration will rewrite or completely remove them, given Trump’s favorability toward large businesses.
Highlighting key conversations.
In this week’s Caveat Podcast, our team sat down with Brad Auerbach from Outside GC. During this conversation, our team and Brad discussed a first-of-its-kind state law, known as the ELVIS Act. Centering on a recent blog post, titled “Trailblazing Tennessee Legislation - the ELVIS Act,” this conversation discusses how the act protects an individual’s voice and likeness against unauthorized clones and fakes generated using AI. For context, this law took effect on July 1st of this year and was created in response to emerging AI technologies enabling people to create unauthorized fake works using the images and voices of others.
Like what you read and curious about the conversation? Head over to the Caveat Podcast for the full scoop and additional compelling insights. Our Caveat Podcast is a weekly show where we discuss topics related to surveillance, digital privacy, cybersecurity law, and policy. Got a question you'd like us to answer on our show? You can send your audio file to caveat@thecyberwire.com. Hope to hear from you.
Other noteworthy stories.
EU assesses if Apple’s iPad OS complies with tech rules.
What: European Union (EU) antitrust regulators will assess if Apple’s iPads comply with the region’s rules after Apple published its compliance report for its iPad’s operating system (OS).
Why: On Monday, the European Commission announced that its regulators will investigate Apple’s iPad OS after the company published the device’s compliance report. With this announcement, the EU antitrust watchdog stated that “the Commission will now carefully assess whether the measures adopted for iPad OS are effective in complying with the DMA obligations.” Additionally, the watchdog stated that “the Commission’s assessment will also be based on the input of interested stakeholders.”
For context, the DMA, or Digital Markets Act, came into force earlier this year and requires Apple to let users change their default web browser, permit alternative app stores, and meet other similar requirements. DMA breaches can result in fines of as much as 10% of the company’s global turnover.
OpenAI in talks with California to become for-profit company.
What: OpenAI has entered early talks with the California attorney general’s office to change its corporate structure from a non-profit model to a for-profit business.
Why: On Monday, reports emerged that OpenAI is beginning the process to become a for-profit company. If this restructuring were approved, OpenAI would partially remain a non-profit organization that would own a minority stake in the for-profit version of the company. Aside from the company owning a stake in the for-profit version, Sam Altman, the company’s chief executive, would also receive equity in the company. When this plan was originally announced in September, an OpenAI spokesperson stated, "we remain focused on building AI that benefits everyone, and we’re working with our board to ensure that we’re best positioned to succeed in our mission.” This spokesperson also stated, "the non-profit is core to our mission and will continue to exist.”