At a glance.
- The FCC passes update to data breach rules.
- The SEC’s new incident disclosure rules take effect.
- An inside look at the National Initiative for Cybersecurity Careers and Studies.
- Fighting election misinformation in the age of AI.
- Lawmakers call on Biden to ensure EU law is fair to US companies.
The FCC passes update to data breach rules.
Last week the US Federal Communications Commission (FCC) officially approved the anticipated update to its privacy protection rules and data breach notification requirements. As GovInfoSecurity explains, the requirements now cover all customer personally identifiable information collected by carriers and telecommunications relay service providers. Under the new rules, organizations will be required to notify individuals of breaches impacting upwards of five hundred customers “no later than seven business days after reasonable determination of a breach.” This covers malicious hacks as well as accidental cyberincidents, like those caused by employee error or oversight. An FCC statement released after the vote explained, “Today’s action would hold phone companies accountable for protecting sensitive customer information, while enabling customers to protect themselves in the event that their data is compromised.” As Media Post notes, the FCC voted along party lines, with the commission’s three Democrats voting for the update and the two Republicans against. Just one day before the decision, four senators including Ted Cruz (a Republican representing Texas) spoke out against the update, claiming the FCC lost its authority to issue data privacy rules in 2017, when its power was curtailed under the Trump administration. The two Republicans on the commission agreed, but they were outvoted by their Democratic counterparts.
The SEC’s new incident disclosure rules take effect.
The US Securities and Exchange Commission (SEC) also recently finalized new rules regarding cyberincident reporting that went into effect on December 18. In preparation, the Department of Justice (DOJ) and the Federal Bureau of Investigation (FBI) have issued guidance for companies covered by these rules that would like to request a breach disclosure delay. As we discussed last week, the rules require that victim organizations determine without “unreasonable delay” whether a cybersecurity incident is “material,” and report material incidents on SEC Form 8-K within four business days of that determination. However, registrants are allowed to delay reporting these incidents if the DOJ determines that “a public filing would pose a substantial threat to public safety or national security.” As JDSupra explains, the DOJ offers some examples of when a delay might be warranted, which include an incident caused by an exploit technique for which there is not yet a well-known mitigation, or an incident impacting a system containing sensitive US government information that might be further endangered by disclosure of the incident. A deferral could also be granted in a case where disclosure could impair remediation efforts being conducted for critical systems.
The FBI, which will be receiving these delay requests, has released guidelines outlining the delay request procedures and determination process. There has been much debate over the issue of determining the materiality of a cyberincident, and the FBI’s guidelines state that, to avoid immediate denial, any delay request must be submitted concurrently with the materiality determination. The government is prepping for the rollout of the new policy while allowing for flexibility until it becomes clearer how many exemption requests will need to be processed. One FBI official stated, “It’s something that has kept us very much front of mind in terms of needing to remain flexible in terms of our processes, to be loyal and allegiant to what we're committing to victims, but also to not understanding volume and how that can drive resource demands throughout the U.S. government.”
As the Record notes, the industry response to the reporting rules has been less than positive, and Republican lawmakers have proposed legislation to reverse them altogether. One argument is that disclosure could put organizations in harm’s way. But as one senior official at the Cybersecurity and Infrastructure Security Agency (CISA) explains, many experts feel the benefits outweigh the potential dangers. “We know that there is ubiquitous underreporting of cybersecurity incidents, and that diminishes our ability to help victims, our ability to provide effective guidance, our ability to understand adversary trends and drive broader risk reduction at scale,” the CISA official stated.
An inside look at the National Initiative for Cybersecurity Careers and Studies.
As we discussed last week, the US government is implementing several initiatives aimed at strengthening the nation’s cybersecurity workforce, and ClearanceJobs offers an interview with an official at the head of one such initiative. Antonio Scurlock serves as Deputy Chief Learning Officer of CISA’s National Initiative for Cybersecurity Careers and Studies (NICCS) and oversees the initiative’s website, a free online platform that offers resources for training, career planning, job opportunities, and scholarships. Scurlock explains, “It’s my job to try and find efficient, effective ways to provide leadership development, professional development, training, education, and hopefully, eventually, career pathways for all CISA employees. They’re our principal customer, our primary stakeholder and partner in what we do going forward, mission wise.” Scurlock emphasizes that the initiative offers a plethora of learning materials and over 12,000 courses, all free of charge, in an effort to support current and future cybersecurity professionals. He states, “So any way that we can enrich and enhance a person with knowledge, we make them stronger, right? You can give somebody something to eat and you feed them. Once you teach them how to cook, you teach them how to grow. They’ve fed for life. And we have a core value here. It’s just a commitment to a lifetime of learning, which I lead.” The materials are not only focused on professionals looking to learn new cyber skills, but are also geared toward hopefuls as early as high school age who are considering future careers in cybersecurity. Scurlock also discusses the National Initiative for Cybersecurity Education (NICE) Framework, a system for evaluating cybersecurity candidates to determine the roles best suited to their skill sets.
He explains, “The beauty of that is it speaks in plain language about what those knowledge, skills, and abilities are, and it also in this tool can map you to certifications and training that would help facilitate knowledge in that arena if you’re designated as one of those professionals or if you’re thinking about going into that arena, you can see, ‘Okay, what is a person who does cyber network analysis do?’”
Fighting election misinformation in the age of AI.
The new year is just around the corner, and 2024 will see big elections in not only the US, but also Taiwan, India, and Indonesia. With such major government decisions on the line, concerns about election-related disinformation are growing. Election campaigns are already using artificial intelligence to enhance or even completely fabricate news, and the launch of ChatGPT a year ago demonstrated just how easy it is to create deepfakes. With Sam Altman back in the driver’s seat at OpenAI, maker of ChatGPT, the company on Monday released its new preparedness framework, a plan for content moderation aimed at addressing the potential risks of AI. As Reuters explains, a special risk assessment group will be tasked with reviewing safety reports before sending them to the company's executives and board. Executives will make decisions on the reports, but the board will have the authority to reverse those decisions if it disagrees. In other words, the board can prevent the release of an AI product even if company leadership considers it safe. The company tweeted, “We are systemizing our safety thinking with our Preparedness Framework, a living document (currently in beta) which details the technical and operational investments we are adopting to guide the safety of our frontier model development.” Technology Magazine reports that the new preparedness team will be led by MIT Professor Aleksander Madry and will be composed of AI researchers, computer scientists, national security experts, and policy professionals. The company will also invest in more data-driven research to better predict potential risks, and conduct regular evaluations of its products to minimize harm.
Remaining on the topic of election disinformation, former Google CEO Eric Schmidt has his own strategy for combating digital deception, with the hope that tech companies and lawmakers alike will take notice. Schmidt writes in the MIT Technology Review, “Regulations and laws will play a crucial role in incentivizing or mandating many of these actions. And while these reforms won’t solve all the problems of mis- and disinformation, they can help stem the tide ahead of elections next year.” His six-point plan includes creating better tools to determine which users are human and which are bots and to verify the source of content. In order to better identify deepfakes, Schmidt recommends platforms scan an existing database of images to determine if an image has no history (and is therefore likely AI-generated), and even using AI itself to learn and pinpoint the signatures of faked images in order to automatically alert users of AI-generated content. Of course, experience has shown us that technology isn’t foolproof, so Schmidt says it’s important to rely more heavily on flesh-and-blood humans to catch what might slip past digital detectors. He also suggests creating a list of approved advertisers who adhere to safe standards, and investing in AI research for long-term solutions, especially as the tech continues to evolve.
Lawmakers call on Biden to ensure EU law is fair to US companies.
Twenty-one members of the US House of Representatives have submitted a letter to President Joe Biden claiming that the EU’s Digital Markets Act (DMA) unfairly targets US firms over Chinese and European companies. As Reuters explains, the DMA designates American Big Tech firms Alphabet, Amazon, Apple, Meta, and Microsoft as "gatekeeper" service providers. In other words, as of March 2024, these companies will be mandated to make their messaging apps compatible with their competition and allow users to have the final say on which apps will come pre-installed on their devices. In the letter, the bipartisan group of lawmakers says that the new law will negatively impact the US economy and customer security, and they’re urging Biden to secure a pledge from the EU that the rules will be implemented fairly. The letter reads, "The designation of leading U.S. companies as 'gatekeepers' threatens to upend the U.S. economy, diminish our global leadership in the digital sphere, and jeopardize the security of consumers." The lawmakers also note that Chinese tech heavyweights like Alibaba, Huawei, and Tencent somehow avoided the gatekeeper designation, which could give them an unfair advantage. The letter adds, "The EU inexplicably failed to designate any European retailers, content-sharing platforms, payment firms, and telcos."