Welcome to the CAVEAT Weekly Newsletter, where we break down major developments happening worldwide in cybersecurity, privacy, digital surveillance, and technology policy.
At 1,950 words, this briefing is about a 10-minute read.
At a Glance.
- FDIC proposes new banking recordkeeping requirements for fintech companies.
- US looks to convene a new AI safety summit in November.
FDIC proposes new fintech recordkeeping requirements.
The News.
On Tuesday, the Federal Deposit Insurance Corporation (FDIC) proposed a new rule that would mandate that banks increase their recordkeeping requirements for any account managed by a fintech company. More specifically, these proposed rules aim to provide greater clarity for accounts managed by fintech companies to ensure that account owners are still able to have timely access to their funds, even if a bank or fintech institution fails. Additionally, the FDIC has finalized a new policy that looks to increase its scrutiny during any bank merger that would combine asset values of $100 billion or more.
If implemented, these rules would require banks working with fintech companies to strengthen their recordkeeping practices by identifying the beneficial owners of each account and recording account balances. If these requirements were met and maintained, then third parties, like fintech companies, would be allowed to manage these accounts. Agency officials stated that these proposed rules were created to increase the banking sector's stability and better protect consumers.
The Knowledge.
While these newly proposed rules may seem minor at first, they were likely created in response to the Synapse banking incident that occurred earlier this year. For context, Synapse Financial Technologies was a fintech company that designed and developed banking software solutions related to lending, credit, and online banking. However, in April 2024, Synapse filed for Chapter 11 bankruptcy protection and rapidly shut down many of its fintech and banking services. While it is still unclear exactly how many people were frozen out of their accounts, regulators believe that tens of thousands of people were negatively impacted by this bankruptcy and that there is an $85 million shortfall between Synapse’s partner banks and depositors.
While the Synapse bankruptcy created significant shockwaves earlier this year, the incident reflects growing concerns about the fintech industry as a whole. For example, in recent years, many “early wage access programs” have emerged that enable consumers to access their paychecks earlier than normal, often for a substantial fee. Although these programs may seem trivial at first, regulators have raised concerns that they resemble consumer loans without being subject to the same disclosure requirements, such as annual percentage rates (APR). Notably, these program fees can amount to over 100% APR despite the programs being advertised as free or low-cost.
Aside from legal concerns, regulators have also expressed concerns related to fintech institutions managing consumer information. Since it remains unclear which compliance regimes apply to fintech companies, they are oftentimes not held accountable to key compliance security standards, despite these companies having access to highly sensitive information. These unclear circumstances have left consumers vulnerable to data breaches as many banks are intrinsically linked to these fintech providers.
The Impact.
Fintech represents a significant legislative challenge that has been inadequately addressed in recent years. Whether the challenges concern fintech’s security, stability, or legality, legislators have struggled to get many of their rules and bills enacted, as proposed changes have faced substantial legal pushback. Furthermore, fintech’s constantly changing environment has made it difficult for legislators to make impactful changes, as new programs, companies, and services are sometimes introduced monthly.
Given both the industry's rapid proliferation and its entrenchment in society, these institutions represent an unavoidable risk for businesses and consumers alike. While waiting for greater accountability, people should understand the risks associated with fintech and ensure that their financial accounts are secured to avoid consequences like those seen in the Synapse incident earlier this year. Additionally, consumers should monitor proposed rules so that, if they are impacted, they can understand their rights as consumers and hold institutions accountable.
US targets new AI safety summit.
The News.
On Wednesday, the Biden administration announced its plans to convene a new global safety summit centered on artificial intelligence (AI) in November. The summit will be hosted by both the Secretary of Commerce, Gina Raimondo, and the Secretary of State, Antony Blinken. The conference will be held from November 20th to November 21st in San Francisco and will be the first meeting of the International Network of AI Safety Institutes. With this announcement, the administration stated that the goal of this conference would be to “advance global cooperation toward the safe, secure, and trustworthy development of artificial intelligence.” Aside from nations attending the conference, technical experts will also be invited to discuss key work areas and promote knowledge sharing on AI safety.
With this announcement, Secretary Raimondo stated that this conference aims to have “close, thoughtful coordination with our allies and like-minded partners.” Secretary Raimondo continued that this body “wants the rules of the road on AI to be underpinned by safety, security, and trust.”
Aside from the United States, other members likely to attend the conference include Australia, the European Union, Canada, Japan, South Korea, the United Kingdom, Kenya, and Singapore.
The Knowledge.
Originally founded in May at the AI Seoul Summit, the International Network of AI Safety Institutes was created for nations who agreed that AI safety, innovation, and inclusivity needed to be prioritized. This next conference is seen as the launching point for this technical collaboration before the next major AI Action Summit is held in Paris in February 2025. However, aside from promoting international cooperation, this conference also comes as domestic legislative AI action has stalled out.
Despite AI becoming further entrenched in society, and despite initial legislative efforts earlier this year that heavily focused on managing the technology, legislative action has continued to decline as debates over what is required have gone in circles for months. Even at the state level, bills have seen substantial pushback from critics claiming that proposed legislation is either too restrictive or not impactful enough. For example, California's SB 1047 was passed by the state legislature in August and has been widely debated, with some arguing that the bill would overly restrict AI developers and hamper innovation, and others contending that its restrictions are necessary to better protect citizens from AI misuse. SB 1047 joins over thirty-five other bills in California alone that center on addressing AI. While many state and federal lawmakers have attempted to pass legislation, no true consensus has been reached at any level, leaving an uncertain future for AI as both developers and citizens do not know how lawmakers intend to manage the technology. While it is unclear what this new summit will entail and what topics will emerge as focal points, it is clear that the Biden administration is using this conference as a means to reignite this conversation within the US.
The Impact.
As nations, agencies, and private organizations prepare for this conference over the coming weeks, these efforts will hopefully spur both lawmakers and developers to reignite conversations about how to better manage AI. While the administration has signed multiple executive orders over the past several months and various agencies have published guidelines, these efforts are ephemeral if the next administration chooses to revoke or change these initiatives.
This dynamic has demonstrated that lawmakers need to come together at both the federal and state levels to pass legislation that ensures consumer safety and promotes technological innovation. In the meantime, both AI developers and consumers should monitor this conference as it approaches to understand what the key topics will be and how these nations intend to manage them. While much can still change, this conference will likely not only set the stage for the upcoming Paris AI summit but also serve as a guide for the administration's AI policy plans for the next several months.
Highlighting Key Conversations.
In this week’s Caveat Podcast, our team sat down with Dmitri Alperovitch, the author and Chairman of Silverado Policy Accelerator, to discuss his book, “World on the Brink: How America Can Beat China in the Race for the Twenty-First Century.” Throughout this conversation, Ben Yelin talked with Dmitri as he laid out his argument for why Xi Jinping is preparing to conquer Taiwan in the coming years and the associated fallout if this were to come true. During this episode, you will be able to listen in as Alperovitch explains how we can play to our strengths, manage our weaknesses, and leverage our global position over the coming years.
Like what you read and curious about the conversation? Head over to the Caveat Podcast for the full scoop and additional compelling insights. Our Caveat Podcast is a weekly show where we discuss topics related to surveillance, digital privacy, cybersecurity law, and policy. Got a question you'd like us to answer on our show? You can send your audio file to caveat@thecyberwire.com. Hope to hear from you.
Other Noteworthy Stories.
OpenAI launched new AI models that have “reasoning” abilities.
What: OpenAI announced the launch of a new series of AI models capable of solving more difficult problems.
Why: Last week, OpenAI launched its new “Strawberry” AI model series, which is capable of more complex reasoning and solving more challenging problems related to science, math, and coding. With this announcement, OpenAI’s Noam Brown stated, “I’m excited to share with you all the fruit of our effort at OpenAI to create AI models capable of truly general reasoning.” OpenAI also stated that this new model scored eighty-three percent on the qualifying exam for the International Mathematics Olympiad, compared with the thirteen percent scored by the previous model, GPT-4o.
The official name of these new models will be o1 and o1-mini.
Instagram rolls out new changes for teen users.
What: Instagram announced that it will be releasing new changes to teen profiles aimed at boosting online safety and giving greater parental control.
Why: On Tuesday, Instagram announced that it will roll out its new “Instagram Teen Accounts,” which will be set to private by default and will require users under the age of sixteen to have parental permission to change any built-in protections. Users aged sixteen and seventeen will be able to change their privacy settings themselves unless parents have configured the “parental supervision” settings.
With these new changes, Instagram also acknowledged that teens can lie about their age when making new accounts. The company stated that it will require users to verify their ages in more places and that it is building new technology that will “proactively find accounts [that belong] to teens, even if the account lists an adult birthday.” Instagram intends to test this new technology in early 2025.
23andMe settles data breach lawsuit for $30 million.
What: 23andMe has agreed to pay a $30 million settlement and provide three years of security monitoring to settle a lawsuit that accused the company of failing to protect its customers' personal information.
Why: Last week, 23andMe settled a class action lawsuit that accused the company of failing to tell customers that a hacker had specifically targeted users of Chinese and Ashkenazi Jewish descent and sold their information on the dark web.
23andMe stated that this settlement was “fair, adequate, and reasonable.” The breach occurred in April 2023 and lasted roughly five months, impacting nearly fourteen million customers.