At a glance.
- Lawmakers push for reauthorization of Section 702.
- DoD advises on responsible military use of AI.
- International guidance on AI system development.
- US Navy’s cyber strategy focuses on non-kinetic measures.
- CISA resurrects cyber insurance working group.
- Could “Eyes on the Board” do more harm than good?
- Mr. Cooper in legal hot water after data breach.
Lawmakers push for reauthorization of Section 702.
As the end of the year looms, the debate over the renewal of Section 702 of the Foreign Intelligence Surveillance Act (FISA), which authorizes US intelligence agencies’ warrantless collection of intelligence on foreign targets, is reaching a fever pitch. Due to expire on New Year’s Eve, Section 702 is lauded by the Biden administration as essential in preventing and mitigating the damage of cyberattacks against critical US infrastructure, but opponents of the surveillance tool claim it violates citizens’ privacy rights, and there’s evidence that 702 has been abused by intelligence officials to query information on Americans. Insiders say House Speaker Mike Johnson and Senate Majority Leader Chuck Schumer are planning a Hail Mary attempt to save the program by including it in a last-minute provision in the National Defense Authorization Act (NDAA). Congressional leaders are scheduled to present the final text of the NDAA by the end of this week, and since the bill determines the Pentagon’s funding for the year, it’s a must-pass piece of legislation. As Wired explains, if a bid to extend the life of Section 702 is included in the NDAA, there would be little time for legislators to make changes before it goes to the House and Senate floor for a final vote.
Meanwhile, Senate Intelligence Chairman Mark R. Warner, a Democrat from Virginia, introduced a bipartisan bill on Tuesday that aims to strike a middle ground between proponents and opponents of Section 702. The bill boasts co-sponsors Marco Rubio, a Florida Republican, and Lindsey Graham, the top Republican on the Judiciary Committee, which shares oversight of the FISA statute. Drafted in collaboration with the Biden administration and both leaders of the House Intelligence Committee, the bill would rein in the Federal Bureau of Investigation’s (FBI) authority to conduct backdoor queries of Americans’ data by prohibiting searches conducted solely to find evidence of a crime, while simultaneously enhancing the FBI’s compliance requirements. However, the Record explains, the proposal does not go so far as to require a warrant for intelligence searches. Warner told The Washington Post, “I think that the compromise product that we’ve got is pretty darn good, and it pushes the administration further than I think they want to go, but that’s what they can live with.” Despite Warner’s optimism, civil liberties groups say the bill’s concessions are not enough, and a warrant requirement is still needed. Kia Hamadanchy, senior policy counsel at the American Civil Liberties Union, states, “Pretty much all the stories of abuse we’ve seen were from queries done for foreign intelligence purposes,” and adds that Warner’s bill “makes mostly cosmetic changes and doesn’t actually get at the underlying problems we’ve seen with Section 702.” That said, the House Intelligence Committee has indicated it will introduce legislation similar to Warner’s proposal.
In an effort to further advocate for the reauthorization of Section 702, two US intelligence officials this week revealed that the Central Intelligence Agency (CIA), along with other intelligence agencies, used information gathered under Section 702 to prevent the sale of advanced weapons parts to Iran. The disclosure states that the CIA identified what US-manufactured supplies the Iranians were seeking, then searched the 702 database for those components and their manufacturers. While the officials, who have been granted anonymity, could not offer details about the operation, one of them stated, “It wasn’t one specific action. It was a number of actions. In at least one instance, if not more, specific sales were stopped either before they went or while they were en route.” They went on to say that 702 was essential in helping the administration target an individual and a foreign firm that attempted to bypass US sanctions on Iran. “Sometimes 702 is the only collection that we have on these kinds of things. So it makes it that much more critical,” the official said. Be that as it may, Politico notes that the disclosure is unlikely to make opponents of 702 change their minds. Elizabeth Goitein, the senior director of the Brennan Center for Justice’s Liberty & National Security Program, stated, “These belated and weak examples … merely underscore how out of touch the administration is with the concerns of lawmakers and the conversation that’s actually happening on the Hill.”
DoD advises on responsible military use of AI.
Just in time for Thanksgiving, the US Department of Defense (DoD) last week issued a statement on the responsible use of artificial intelligence in military operations. Forty-seven other countries have so far endorsed the government’s “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy,” released last February. As Under Secretary of Defense for Policy Sasha Baker explains, this declaration “advances international norms on responsible military use of AI and autonomy, provides a basis for building common understanding, and creates a community for all states to exchange best practices.”
Earlier this month, the DoD issued the “Data, Analytics, and AI Adoption Strategy,” ten measures that include guidance on the importance of making military AI systems auditable and clearly defining their uses. The strategy also recommends rigorous testing and evaluation throughout a system’s life cycle to detect and prevent unplanned behaviors, and that high-consequence applications be reviewed by senior-level personnel. A press release from the State Department explains, “The declaration and the measures it outlines are an important step in building an international framework of responsibility to allow states to harness the benefits of AI while mitigating the risks. The US is committed to working together with other endorsing states to build on this important development.”
International guidance on AI system development.
CISA has also joined forces with the UK National Cyber Security Centre (NCSC) to release “Guidelines for Secure AI System Development,” which details recommendations for system developers that create products incorporating artificial intelligence tech. The first of its kind, the document combines the knowledge of twenty-one agencies and ministries from across the globe including the Australian Signals Directorate’s Australian Cyber Security Centre, Germany’s Federal Office for Information Security, Japan’s National center of Incident readiness and Strategy for Cybersecurity, and Nigeria’s National Information Technology Development Agency. Secretary of Homeland Security Alejandro N. Mayorkas stated, “The guidelines jointly issued today by CISA, NCSC, and our other international partners, provide a commonsense path to designing, developing, deploying, and operating AI with cybersecurity at its core. By integrating ‘secure by design’ principles, these guidelines represent an historic agreement that developers must invest in, protecting customers at each step of a system’s design and development. Through global action like these guidelines, we can lead the world in harnessing the benefits while addressing the potential harms of this pioneering technology.”
CISA Director Jen Easterly has been vocal about her support of secure by design production principles for some time now. Easterly said of the guidance, “The domestic and international unity in advancing secure by design principles and cultivating a resilient foundation for the safe development of AI systems worldwide could not come at a more important time in our shared technology revolution. This joint effort reaffirms our mission to protect critical infrastructure and reinforces the importance of international partnership in securing our digital future.” The document’s introduction highlights the need for developers to take responsibility for the security of their products, by “taking ownership of security outcomes for customers, embracing radical transparency and accountability, and building organisational structure and leadership so secure by design is a top business priority.” The guidance spells out considerations and mitigations for each of the four key stages within the AI system development life cycle: design, development, deployment, and operation and maintenance.
Mike Britton, CISO of Abnormal Security, sees this as an early sign of incipient allied cooperation on AI. "This international collaboration is a telling sign that allied countries are looking to get on the same page about AI regulation and safety measures," he wrote. He also sees this as an instance of a more general point. "With the emphasis on a 'secure by design' model, you effectively could take out the word 'AI' here and it's a good standard of software development security principles. The emphasis on understanding where you are pulling external models and conducting due diligence aspects of this element is spot on. This is the first guidance that is addressing adversarial machine learning in a meaningful way, making this the most comprehensive 'AI centric' secure development lifecycle document to date."
This kind of cooperation will grow increasingly important as the family of technologies around AI continues to mature. Erich Kron, Security Awareness Advocate at KnowBe4, wrote, "With the growing development and popularity of AI and the obvious future integrations into business and our personal lives, laying out a common standard for development is incredibly important. Unlike traditional software, which many developers have had decades to get used to, many of the vulnerabilities unique to AI are still being learned," Kron wrote in emailed comments. "Having guidance like this can help accelerate the understanding of important security controls around AI and can give developers a common resource to reference and work from, rather than potentially introducing vulnerabilities that they are not familiar with yet."
Anurag Gurtu, Chief Product Officer at StrikeReady, also sees the guidelines as a significant advance in allied cooperation. "The recent secure AI system development guidelines released by the U.K., U.S., and other international partners are a significant move in enhancing cybersecurity in the field of artificial intelligence," he said. "These guidelines emphasize the importance of security outcomes for customers, incorporating transparency and accountability, and promoting a secure organizational structure. They focus on managing AI-related risks, requiring rigorous testing of tools before public release, and establishing measures to counteract societal harms, like bias. Additionally, the guidelines advocate a 'secure by design' approach covering all stages of AI development and deployment, and address the need to combat adversarial attacks targeting AI and machine learning systems, including prompt injection attacks and data poisoning."
Hitesh Sheth, president and CEO of Vectra AI, offered some thoughts on the ripple effect collaborative development of standards will have on the industry. “The new global AI guidelines announced today represent genuine efforts to deliver a much-needed global standard on secure AI design. With AI evolving at an unprecedented rate, and businesses increasingly keen to adopt it, it’s vital that developers fully consider the importance of cybersecurity when creating AI systems at the earliest opportunity. Therefore this ‘secure by design’ approach should be welcomed," Sheth wrote. "It’s encouraging to see the UK and US work hand in hand, and with agencies from 17 other countries confirming they will endorse and co-seal the guidelines, developers from across the globe will be empowered to make more informed cyber security decisions. Transparency is vital when it comes to AI development, so these guidelines should act as a springboard for the delivery of reliable and secure innovation that can transform how we live and work.”
US Navy’s cyber strategy focuses on non-kinetic measures.
Just as Chris Cleary, the US Department of the Navy’s (DON) first-ever principal cyber advisor, steps down, the branch is releasing its first-ever cyber strategy. The culmination of Cleary's efforts during his tenure, the document emphasizes that non-kinetic effects and defense will be the key to the DON’s approach. “The next fight against our major adversary will be like no other in prior conflicts,” the strategy states. “The use of non-kinetic effects and defense against those effects prior to and during kinetic exchanges will likely be the deciding factor in who prevails. The side that most effectively sequences and synchronizes non-kinetic effects will have a decisive advantage.” Cleary says the strategy’s focus is not just cybersecurity, but cyber as a whole: “This is a strategy that, when you look at the different lines of effort within it, goes well beyond the blocking and tackling of what the [chief information officer] is responsible for,” Cleary told Defense One.
The strategy identifies seven lines of effort supporting the DoD Cyber Strategy while maintaining the tenets of the 2020 DON Information Superiority Vision and the 2022 DON Cyberspace Superiority Vision. These lines of effort include cyber workforce and readiness; a shift from compliance to cyber readiness; defense of enterprise IT, data, and networks; protection of critical infrastructure and weapon systems; conducting cyber operations; securing the defense industrial base; and fostering collaboration and cooperation. Cleary goes on to say that fully embracing cyber as a core competency is key to the strategy’s success. He states, “Once we embrace cyber as a core competency alongside surface warfare, subsurface warfare, Marine expeditionary warfare, Navy Special Warfare—once cyber is seen in that lens as equal to the rest of these, things will then naturally begin to fall into place. All indications are we're moving in that direction.”
Troy Batterberry, CEO and founder of EchoMark, recommends new approaches to managing insider risk. “In order for the USA to achieve and maintain information superiority, we must adopt new forms of insider risk management. Nearly all major government agencies have experienced highly damaging leaks in part because the leaker (insider) felt they would never be caught. An entirely new approach is required to help change human behavior. Information watermarking is one such technology that can help keep private information private.”
Stephen Gates, Principal Security SME at Horizon3.ai, is struck by the shift from compliance to readiness:
“In the context of the Department of the Navy Cyber Strategy 2023, one line of effort stands out among the others: 2.0 Shift from Compliance to Cyber Readiness. As recent cyber events have repetitively proven, a purely defensive cyber strategy is not working and must be augmented by ‘adversarial assessments’ of your own environments.
“These adversarial assessments are not the run-of-the-mill vulnerability scans. These assessments are cyber red team exercises whereby organizations attack themselves using the same tools, tactics, and procedures (TTPs) attackers use. The reason for this is simple. If you cannot find that hidden chink in your armor, that crack in your layered walls of defense, that blind spot you didn’t even know existed, you will never be able to adequately defend yourself against a purposeful attacker with nothing but time on their side – and disruption on their mind.
“Today, autonomous assessment solutions that let you see your environments through the eyes of an attacker are readily available. Having these solutions in the hands of highly skilled red teams allows them to force-multiply, meaning they can do expansive cyber readiness exercises simultaneously, while using these solutions to accelerate their assessment analysis. Furthermore, these solutions also meet the objective of prioritizing mitigations and reassessment tracking to ensure issues have been remediated and readiness is confirmed.”
CISA resurrects cyber insurance working group.
As cyber insurance premiums continue to soar, the US Cybersecurity and Infrastructure Security Agency (CISA) is relaunching its Cybersecurity Insurance and Data Analysis Working Group (CIDAWG). Originally established in 2014, CIDAWG has been on hiatus, but CISA Deputy Director Nitin Natarajan announced in a blog post that the initiative had been resurrected last week during a conference on Catastrophic Cyber Risk and a Potential Federal Insurance Response. CIDAWG is composed of cybersecurity professionals with experience in critical infrastructure sectors and the insurance field, as well as other private sector organizations. An original focus was the consolidation and analysis of cyber incident data, and while this goal will be maintained, Natarajan said CIDAWG will also work on determining which security tools are most effective in defending against an increasingly sophisticated cyberthreat landscape. Ransomware attacks increased 60% between 2018 and 2022, and as such attacks have grown more disruptive and costly for target organizations, cyber insurance firms have raised premiums and reduced coverage.
As TechTarget explains, this has left many critical infrastructure organizations without adequate resources for recovery. Natarajan wrote in the blog post, “The working group was re-established to create a venue for collaboration and forward progress with industry on topics where we have shared interests -- specifically, understanding what security controls are working most effectively to defend against cyber incidents.” CIDAWG will collaborate with Stanford University's Empirical Security Research Group to analyze the effectiveness of various cybersecurity measures in mitigating attacks and quantifying cyber-risk. The hope is that with more data analysis, it will be easier for insurers to assess risk and set appropriate premium rates. Sezaneh Seymour, vice president and head of regulatory risk and policy at cyber insurer Coalition, stated, "Reciprocal, anonymized data sharing under CIDAWG could help strengthen insights for both insurers and the federal government by augmenting the data accessible today and by acting as a repository for longitudinal data." The group’s reboot is scheduled for December.
Could “Eyes on the Board” do more harm than good?
Protecting minors on social media is a major priority for the US government, and Senator Ted Cruz, a Republican from Texas, thinks he has a solution. Cruz’s “Eyes on the Board Act” intends to take social media out of schools entirely by cutting federal funding to any school that allows access to social media platforms on school devices. However, the Electronic Frontier Foundation says that while the intent of the bill might be positive, it would likely have a negative impact on education.
The EFF’s post states, “This bill is a brazen attempt to censor information and to control how schools and teachers educate, and it would harm marginalized communities and children the most.” Eyes on the Board would cut offending schools off from the E-Rate funding program, which subsidizes internet services for learning institutions in school districts with high rates of poverty. To receive this funding, schools must adhere to the Children’s Internet Protection Act, which already requires they install internet filters that block minors’ access to harmful or disruptive content and monitor students’ online activity. An American Library Association study found that 88% of schools were already blocking social media platforms, so the bill would likely be redundant. In effect, EFF posits, such legislation would only cut under-resourced schools off from much-needed funding.
Mr. Cooper in legal hot water after data breach.
Inman reports that loan servicing firm Mr. Cooper has been hit with five different class-action lawsuits claiming that a recent cyberattack was the result of the company's negligence. The attack in question, which was detected on October 31, was described on the company’s website as a “cyber security incident.” Mr. Cooper responded by proactively shutting down its systems, cutting off customers from their accounts for several days. A subsequent report to investors revealed “that certain customer data was exposed, however it will require additional analysis to validate this finding and quantify the scope and type of any such exposure.”
The complainants say that the company failed to adequately protect this customer data. One case states, “This unencrypted, unredacted [personal identifiable information] was compromised due to defendant’s negligent and/or careless acts and omissions and its utter failure to protect customers’ sensitive data,” and another adds that the company “has failed to provide timely, accurate, and adequate notice to Plaintiff and other Class Members that their PII had been compromised.” While Mr. Cooper hasn’t directly responded to these claims, a statement released shortly after the attack says the company stood to lose $5 million to $10 million in additional vendor costs as a result of the incident, and that the system shutdown could take a chunk out of fourth-quarter revenue and expenses. While most of the cases ask for undisclosed monetary damages, one suit specifically asks for upwards of $5 million.