Welcome to the CAVEAT Weekly Newsletter, where we break down some of the major developments and happenings occurring worldwide in cybersecurity, privacy, digital surveillance, and technology policy.
At 1750 words, this briefing is about a 7-minute read.
At a Glance.
- Justice Department files lawsuit against RealPage.
- California AI bill generates notable regulatory debate.
Justice Department sues RealPage, accusing the company of violating antitrust laws.
The News.
Last Friday, the Department of Justice (DOJ) and eight states filed a lawsuit against RealPage, a property management software company. In this lawsuit, the DOJ and states allege that RealPage utilized a pricing algorithm that required landlords to share sensitive pricing information and dictated rental pricing structures. With this announcement, Attorney General Merrick Garland stated that “Americans should not have to pay more in rent because a company has found a new way to scheme with landlords to break the law.” Garland continued stating that “using software as the sharing mechanism does not immunize this scheme from Sherman Act liability.”
In this lawsuit, the DOJ alleges that RealPage has formed contracts with various landlords that require them to share “nonpublic competitively sensitive information” about details such as rental rates. This data is then allegedly fed into an algorithm that generates recommendations on pricing and other contract terms based on the data collected from other landlords. The DOJ and states allege that RealPage then used this information to help landlords collude and consistently raise rents.
The states joining the DOJ in this lawsuit are North Carolina, California, Colorado, Connecticut, Minnesota, Oregon, Tennessee, and Washington.
The Knowledge.
This lawsuit marks another major instance of the Biden administration’s efforts to more aggressively enforce antitrust laws against Big Tech. Aside from targeting RealPage, the administration has also opened several other large antitrust investigations into major technology companies like Google, Apple, Meta, Microsoft, Nvidia, and Amazon. Of these investigations, the DOJ’s case against Google has generated the biggest shockwaves, as the Department has brought a lawsuit against the company. Similar to the DOJ’s case against RealPage, the Google lawsuit also alleges that the company has engaged in monopolistic behavior.
While the case still awaits a second trial to determine remedies, the lawsuit against Google made major progress earlier this month when Judge Amit Mehta ruled that Google had abused a monopoly over the search business and violated Section 2 of the Sherman Act. Aside from its potential to significantly disrupt the technology industry, the case also stands as a significant development in US antitrust enforcement. Rebecca Allensworth, a professor at Vanderbilt University’s law school, stated that “this [case] is the most important antitrust case of the century, and it's the first of a big slate of cases to come down against Big Tech.”
Microsoft has also come under the federal government’s eye due to its involvement with the prominent artificial intelligence (AI) firm OpenAI. Federal agencies have already opened investigations into this partnership to determine whether any concerning monopolistic behaviors are taking place. While no formal lawsuit has been filed against Microsoft, pressure is mounting on the company. For example, Microsoft relinquished its non-voting seat on OpenAI’s board in July, a move many viewed as a response to the growing regulatory scrutiny of the relationship. While many of these cases are still playing out, if this momentum continues, it could signal a drastic change in how the technology industry behaves and delivers its services to the general population.
The Impact.
As the DOJ proceeds with its case against RealPage, the lawsuit reflects the Biden administration’s growing effort to target monopolistic technology companies. If this pressure continues to mount, people should expect many of these large technology companies to face significant regulatory enforcement that could change how existing services are delivered, how they are priced, and how competitive various technology markets are.
Given the administration’s initial success against Google and the new lawsuit against RealPage, both US citizens and businesses should expect the administration to continue pressuring similar companies, especially if Vice President Harris wins the upcoming November election. Given her background and voiced support for these actions, a Harris administration would likely continue to expand regulatory efforts to monitor and control tech company behaviors. While these cases are unlikely to affect the average US citizen for some time, their results could fundamentally change how consumers engage with the market.
California AI bill generates significant debate.
The News.
California Senate Bill 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, has generated significant debate amongst both tech figures and policymakers. If passed, the bill would require safety testing of AI models before they are released to the public and would hold AI developers accountable for severe harm caused by their models. More specifically, the bill would require large companies to test their AI models for hazards, ensure shutdown capabilities, protect whistleblowers, and implement risk management structures.
While many agree that new policies need to be implemented to better handle AI, the bill has created significant debate over what level of regulatory oversight ensures safety without sacrificing innovation. Prominent figures like Elon Musk have announced their support for the bill, while others, like Nancy Pelosi and OpenAI, have expressed concerns that the bill would demand too much from companies and stifle their growth.
California State Senator Scott Wiener, the bill’s author, responded to critics stating that “when companies promise to perform safety testing and then balk at oversight of that safety test, it makes one think hard about how well self-regulation will work.” Senator Wiener also highlighted that legislators amended the original bill to incorporate industry feedback, limiting the state attorney general’s ability to sue developers and creating a threshold for fine-tuned open-source models. These amendments won over some initial critics, with Anthropic CEO Dario Amodei writing that “the new SB1047 is substantially improved, to the point where we believe its benefits likely outweigh its costs.” To become law, the bill must pass the state legislature by Friday and then be signed by Governor Gavin Newsom.
The Knowledge.
While many prominent figures are now debating the merits of SB 1047, this debate is not new. For months, policymakers and businesses have gone back and forth over what modern AI legislation should look like. This debate has played out at both the federal and state levels as policymakers have attempted to determine what safeguards should be mandated, who bears the burden of responsibility for AI harm, and so on. Even individual states cannot agree on specific bills: California alone introduced sixty-five AI bills, including SB 1047, during this legislative session, each attempting to address the issue.
However, despite the constant debate surrounding AI legislation, there is an underlying agreement amongst many that AI needs to be regulated in some form sooner rather than later. While it is unclear what that eventual legislation will look like, bills like SB 1047 will likely play a significant role whether they pass or not. These more prominent AI bills will set the foundation for future legislation, which can incorporate the parts that were well received and remove the tension points that hindered a given bill's success.
The Impact.
While the future of AI legislation remains murky, both businesses and individuals should be aware that pressure is mounting to address the emerging technology. Given how fast AI has developed and expanded in just the past year, it is understandable why passing legislation has been so challenging thus far. However, despite these challenges, progress is still being made. While SB 1047’s future remains uncertain, it is clear that legislatures are closing in on the legal frameworks that will dictate how AI is handled for years to come.
In the meantime, both businesses and individuals should continue to exercise due diligence when handling and implementing AI. While AI offers significant advantages, the technology still carries several significant security risks that need to be properly accounted for and mitigated. Additionally, AI developers should continually examine these more prominent AI bills to assess how they could impact the AI landscape and understand how the proposed regulations could affect day-to-day business operations.
Highlighting Key Conversations.
In this week’s Caveat Podcast, our team took time to discuss the recent spread of false narratives and how social media has amplified these issues. During these conversations, Caveat sat down with Adam Darrah, the Vice President of Intelligence at ZeroFox, to further discuss these concepts and the potential impacts of the recent Supreme Court decision, Murthy v. Missouri. For greater context, this case centered on communications between government officials and social media companies over censoring specific content.
Like what you read and curious about the conversation? Head over to the Caveat Podcast for the full scoop and additional compelling insights. Our Caveat Podcast is a weekly show where we discuss topics related to surveillance, digital privacy, cybersecurity law, and policy. Got a question you'd like us to answer on our show? You can send your audio file to caveat@thecyberwire.com. Hope to hear from you.
Other Noteworthy Stories.
AT&T pays $950,000 to resolve its 911 outage investigation.
What: AT&T agrees to pay $950,000 after the Federal Communications Commission (FCC) found the company failed to deliver 911 calls to emergency call centers back in August 2023.
Why: On Monday, the FCC found that AT&T failed to deliver emergency calls to call centers and did not notify officials during a company outage in August 2023. The outage, which lasted over an hour, impacted calls across Illinois, Kansas, Texas, and Wisconsin and resulted in over 400 failed 911 calls.
With this settlement, AT&T has agreed to also implement a three-year plan that will ensure future compliance with the FCC’s 911 notification rules.
Telecom company set to pay $1 million fine for Biden deepfake robocalls.
What: The telecommunications company that transmitted the deepfake Biden robocalls will be forced to pay a $1 million fine.
Why: Last week, the FCC announced that Lingo Telecom would be required to pay this fine after the AI robocalls were used during the New Hampshire primary to try to convince voters not to participate. Aside from paying the fine, Lingo Telecom has also agreed to implement a compliance plan that would require “strict adherence” to the FCC’s framework for caller ID verification.
With this announcement, Jessica Rosenworcel, the FCC Chair, stated that “if AI is being used, that should be made clear to any consumer, citizen, and voter who encounters it.” Rosenworcel continued by stating that “the FCC will act when trust in our communications networks is on the line.”