At a Glance.
- The FTC is investigating relationships between AI startups and tech giants.
- Italian watchdog says ChatGPT breaches privacy rules.
- The NSA has been buying web browsing data without warrants.
The FTC is investigating relationships between AI startups and tech giants.
The News.
The Federal Trade Commission (FTC) has launched an inquiry into several multibillion-dollar investments made by major technology companies in generative AI firms. Specifically, the FTC said it is inquiring further into the partnerships between Microsoft and OpenAI, Amazon and Anthropic, and Google and Anthropic. The FTC’s chair, Lina Khan, announced that the agency hopes its inquiry into these five companies will show whether these big tech partnerships distort innovation or undermine fair competition.
A Google spokesperson responded that the company supports the inquiry and hopes it will “shine a bright light on companies that don’t offer the openness of Google Cloud or have a long history of locking-in customers.” Rima Alaily, corporate vice president of the Competition and Market Regulation Group at Microsoft, also commented that Microsoft's partnership with OpenAI is critical for “promoting competition and accelerating innovation.” The companies involved in the inquiry have forty-five days to respond.
The Knowledge.
This new inquiry focuses specifically on the partnerships between AI developers and their financial backers and continues the trend of governmental institutions becoming increasingly involved with both AI and big tech firms. With it, the FTC is seeking information on these business deals' impacts on market share, competition, and potential for sales growth. While the FTC has not said what it will do with the collected information, it has stated that it is looking specifically to prevent companies from forming monopolies on developing and monetizing AI.
The FTC issued its inquiry under Section 6(b) of the FTC Act, which authorizes the agency to conduct studies that allow its enforcers to better understand market trends and business practices. The FTC stated that this inquiry aims to discover more about the “competitive dynamics” surrounding the key products and services needed for generative AI.
The Impact.
This new inquiry reflects the growing pressure that federal regulators have been placing on big tech companies, like Microsoft, over their ability to control and influence various markets. While previous federal involvement in AI has primarily focused on controlling AI’s development and safeguarding its usage, this new inquiry focuses on ensuring fair competition in the creation of AI. While no regulations or legislation have been created or proposed, this inquiry seeks to uncover any monopolistic practices that currently exist or could develop.
For people involved in creating generative AI, this inquiry demonstrates a new focus for the federal government in overseeing the technology’s development: ensuring that generative AI developers have a healthy and fair market to compete in. Depending on the FTC’s findings, regulatory or legislative action could follow in the form of antitrust lawsuits, fines, or comprehensive legislation. Companies involved in creating generative AI should understand this new involvement and the potential legal outcomes that could come from federal oversight.
Italian watchdog says ChatGPT breaches privacy rules.
The News.
On Monday, Italy’s data protection authority, Garante, told OpenAI, the creator of ChatGPT, that the AI application breaches data protection rules. This announcement comes after the regulatory agency opened an investigation into the AI application last year. While the investigation has not fully concluded at this time, the announcement is particularly notable given that this same regulatory agency temporarily banned the AI tool last year.
With this announcement, Garante has given OpenAI thirty days to present a defense to these claims or face regulatory penalties. OpenAI has released a statement claiming that its practices align with the EU’s privacy laws and that the company “plans to continue to work constructively with the Garante.” At this time, Garante has not specified which data privacy rules it believes OpenAI violated.
The Knowledge.
This recent announcement reflects the EU’s more proactive approach to overseeing generative AI developers. This is not the first time that Garante has addressed ChatGPT: last year, Italy became the first Western European country to temporarily ban the tool, citing numerous privacy concerns for its citizens. Garante pointed to a notable “absence of any legal basis that justifies [its] massive collection and storage of personal data” to train the AI. The ban was lifted after OpenAI gave users the ability to decline or consent to having their data used to train the AI algorithm.
The Impact.
As interest in AI has only continued to grow over the past year, numerous governmental agencies have launched inquiries into the emerging technology to better understand both its uses and its associated risks. As governments continue these investigations over the coming months, AI users should expect similar announcements and further regulatory implications for the technology. Additionally, AI users should take care to protect themselves and their data when using AI applications.
The National Security Agency (NSA) buys web browsing data without a warrant.
The News.
The US NSA has been buying Americans' web browsing information from commercial brokers without a warrant, according to a letter from the agency's director, Paul Nakasone, to Democratic Senator Ron Wyden. Senator Wyden released the letter while calling for US intelligence officials to stop buying the personal information of Americans without their consent or express knowledge.
In this letter, Paul Nakasone confirmed these data purchases to Wyden, saying that the data could “include information associated with electronic devices being used outside and in certain cases, inside the US.” Wyden claimed that the collected data revealed what websites Americans visited and what apps they used.
The Knowledge.
While the NSA has defended its actions, saying that it followed its compliance regime and did not buy phone location data without a court order, this letter marks the second time in recent years that disclosures have revealed US intelligence agencies purchasing data that may contain US citizens' information. In 2021, the Defense Intelligence Agency (DIA) was found to have bought and used domestic smartphone location data acquired from data brokers.
The NSA’s data purchases also come as the FTC has been increasing its oversight of data brokers. In the past few weeks, the FTC banned both InMarket and Outlogic from selling location data without express user consent. With Senator Wyden releasing this letter and the FTC stepping up its efforts to regulate data brokers, a new paradigm is developing in the US around how citizen data is collected and used.
The Impact.
Collecting, selling, and using US citizens' data has been a significant topic of debate for many years, and this letter demonstrates how several key policymakers are renewing their interest in protecting privacy. While no comprehensive privacy legislation has been passed at the federal level, these efforts add to the calls for the administration and Congress to pass a modern, comprehensive data protection and privacy act that clearly outlines what data can be collected, sold, and used within the US.
US citizens should understand what data is currently being collected about them, who it is being sold to, and how it is being used. Additionally, those who sell or buy citizen data should expect increased governmental oversight and regulation as calls for more comprehensive data security continue to grow.
Other Noteworthy Stories.
Alleged ISIS cyber work prompts US sanctions on two Egyptian nationals.
What: The US Treasury Department has sanctioned two Egyptian nationals for allegedly training ISIS members in cybersecurity and overseeing the terrorist group's funding efforts. The department sanctioned both Mu’min Al-Mawji Mahmud Salim and Sarah Jamal Muhammad Al-Sayyid for their involvement with the Electronic Horizons Foundation (EHF) platform, which is believed to be connected with the terrorist group. The FBI has also offered a reward of $20,000 for information on their whereabouts.
Why: In sanctioning the two Egyptian nationals, the Treasury Department accused them of helping ISIS establish a violent anti-West propaganda outlet and of providing support and guidance in using cryptocurrencies. The sanctions mark a notable case of US agencies targeting terrorist affiliates for assisting in cyberwarfare efforts, as well as a step toward monitoring international cryptocurrency exchanges.
Germany is set to approve the European Union’s (EU) planned artificial intelligence act.
What: Germany has announced that it will approve the EU’s new AI act after a compromise was reached between the EU and Germany’s digital issues minister. This new act aims to establish a regulatory framework for the development of AI.
Why: With this compromise, the German minister sought more innovation-friendly rules and secured improvements for small and medium-sized businesses so they can avoid disproportionate requirements. Beyond outlining these details, the minister noted that the compromise “lays the foundations for the development of trustworthy AI.” As AI continues to grow more popular across various industries, governmental authorities worldwide will continue to increase their oversight to ensure the technology is safely developed and used.
Tech CEOs set to testify on children's safety.
What: Yesterday, the CEOs of several major social media platforms testified before the Senate to discuss their alleged failures to remove child abuse material from their platforms. The session featured testimony from the CEOs of Meta, TikTok, Snap, Discord, and X.
Why: On Wednesday, the US Senate met with the CEOs of several major social media platforms in the hope of building momentum to pass new federal safeguards addressing the rise of child sexual abuse material (CSAM) online. During the hearing, senators heavily criticized the CEOs, blaming their platforms as key contributors to the problem. The hearing also made clear that while each of these platforms has solutions in place to address CSAM, each uses a different approach with varying degrees of success.
Additionally, the hearing resulted in strong bipartisan support for taking broad action to secure these social media platforms. While no legislation has been put forward at this time, US citizens should expect renewed efforts by both Congress and the administration to pass new legislation aimed at directly addressing the rise of CSAM online.