At a glance.
- Should big tech be regulated like big banks?
- Regulating AI-enhanced political ads.
- NATO considers collective defense in cyberspace.
- NSA warns of China-backed attacks on US critical infrastructure.
- US publishes guidelines for secure AI development.
- New York focuses on hospital cybersecurity.
- A call to halt sales of compromised Android devices.
- Regulatory risk as criminal leverage.
Should big tech be regulated like big banks?
The US Consumer Financial Protection Bureau (CFPB) is proposing a new rule that could mean digital payment apps are treated more like their brick-and-mortar counterparts, Yahoo Finance reports. The rule would require nonbank financial companies that handle more than 5 million transactions a year to follow the same rules as the big banks already overseen by the CFPB. This would include popular mobile apps like Apple Pay and Google Pay, giving the CFPB permission to more closely scrutinize the Big Tech firms that run these platforms. CFPB Director Rohit Chopra stated, “Payment systems are critical infrastructure for our economy. Today’s rule would crack down on one avenue for regulatory arbitrage by ensuring large technology firms and other nonbank payments companies are subjected to appropriate oversight.” Just last month, Chopra expressed concerns about the access such platforms have to their customers’ personal data, warning that such access could be exploited for surveillance or even censorship. A recent CFPB report also highlighted how tech firms use their control over the mobile market to steer consumers toward their digital banking platforms. It’s worth noting that the CFPB already regulates electronic fund transfers, but the proposed rule, which would take effect next year, would greatly expand this oversight.
The New York Times notes that the CFPB’s proposal is seen as a positive move by banking trade groups, who have long felt that non-banking groups should fall under greater regulatory oversight. The Electronic Transactions Association (ETA), one of the payment industry’s largest trade groups, responded, “E.T.A. supports the C.F.P.B.’s goals of robust consumer protections for payments and a consistent regulatory environment for both banks and fintechs. It is critical that the final rule encourages continued innovation and competition in the payments space.”
Richard Bird, Chief Security Officer at Traceable, wrote to place the proposed regulations into context. "We can't look at the CFPB's announcement about digital wallets and payments in a vacuum," he said. "The CFPB has been methodically building a plan and a case to contain and regulate the behaviors of Big Tech in a way that benefits consumers for the last couple of years. It is certain that Big Tech will bring all of their lobbying resources and efforts to bear on trying to mute or diminish the CFPB's energy by requiring them to behave like the banks. Given Big Tech's continuous failure to protect our data as citizens, it's nice to have the CFPB holding Big Tech to a higher standard when it comes to our money."
Regulating AI-enhanced political ads.
As America prepares for the 2024 US presidential election, Facebook parent company Meta has announced it will require advertisers to disclose when ads for political or social issues have been created using artificial intelligence. The company explained, “Meta will add information on the ad when an advertiser discloses in the advertising flow that the content is digitally created or altered…If we determine that an advertiser doesn’t disclose as required, we will reject the ad and repeated failure to disclose may result in penalties against the advertiser.” The content in question includes not only ads that depict fake, AI-generated people or events, but also those featuring a real person saying or doing something they did not say or do, as well as faked footage claiming to be connected to real events. January’s US primary elections will be the first major opportunity to see how AI tech impacts the campaign process. As the Wall Street Journal notes, the announcement comes nearly a year after AI chatbot ChatGPT hit the public web, demonstrating just how easy it is to create ultra-realistic AI-generated content. Meta itself has released a slate of new products using generative AI, including generative ads creation tools, as well as chatbots on its popular messaging and social media platforms Messenger, WhatsApp, and Instagram. That said, Meta previously announced advertisers will not be permitted to use the company’s own generative AI ads creation tools for political advertising campaigns.
Meanwhile, the American Civil Liberties Union (ACLU) is calling into question whether AI-augmented ads can be labeled as fraudulent misrepresentation by the US Federal Election Commission (FEC). As Nextgov.com explains, political rights activist group Public Citizen recently filed a petition for rulemaking with the FEC urging the regulatory agency to call out political ads with AI-generated content as “fraudulent misrepresentation.” Currently, the FEC defines fraudulent misrepresentation as the intentional mislabeling of advertising to confuse voters about the source of a particular ad, which is prohibited by the commission. Public Citizen says AI-enhanced ads should fall under this label.
However, the ACLU on Monday submitted a letter to FEC Chair Dara Lindenbaum arguing that such ads are protected by the free speech protections of the First Amendment and therefore should be permitted. The ACLU argued, “Should the FEC move forward with a rulemaking to clarify or expand the fraudulent misrepresentation provision of [the Federal Election Campaign Act] for AI-generated campaign ads, it must draw the line between protected AI-generated speech and impermissible fraudulent misrepresentations carefully. It is unclear whether this petition seeks for the FEC to merely apply its fraudulent misrepresentation analysis to AI-generated campaign ads without adequate disclosure, or whether it wants those communications to be deemed per se fraudulent misrepresentation.” In other words, the ACLU is saying the FEC can’t categorically label all AI-generated ads as fraudulent, but should prohibit only those that clearly demonstrate an intent and likelihood to deceive. Jenna Leventoff, ACLU senior policy counsel, explained, "AI-generated content is entitled to the same First Amendment protections as speech generated in other ways. Lawmakers should apply traditional First Amendment analysis to any efforts to regulate AI-generated campaign communications."
Eduardo Azanza, CEO at Veridas, sees the trend's upside as an increase in transparency. “With Meta joining Google in requiring political ads to disclose the use of AI, we are on track to establish a more trustworthy and transparent media landscape. This move could not come at a more important time, with the 2024 US Presidential elections approaching and political campaigns ramping up," he wrote. "The free use of AI in political ads with no label or indication makes the spread of misinformation significantly easier. We’ve already seen politicians take advantage of AI and deep fakes, leaving voters confused and questioning what is true. Voters have the right to make political decisions on the truth and leaving AI-generated content unlabeled creates a powerful tool of deception, which ultimately threatens democracy. It is important for other media companies to follow the steps of Meta and place guardrails on AI. That way, we can build trust in technology and secure the sanctity of elections.”
NATO considers collective defense in cyberspace.
NATO held its first annual Cyber Defence Conference last week in Berlin, Germany, and leaders emphasized the need for collaboration among allies when it comes to defending against cyberattacks. During the public opening speeches and panel discussion, members voiced their support for the creation of a NATO Cyber Centre. However, as the Record notes, the exact goals of the body have not yet been determined. It’s possible the initiative could focus on strengthening allies’ cyber competencies, creating an information-sharing resource, or perhaps even functioning as a command center for combined tactical operations.
In her opening keynote address, German Foreign Minister Annalena Baerbock referenced recent incidents of cyber warfare in Ukraine and Israel to demonstrate the importance of defense in cyberspace. “Our commitment to prevention requires us to be able to actively defend ourselves in cyberspace if necessary,” she stated. NATO Secretary General Jens Stoltenberg also stressed the need for collaborative defense during his address: “NATO is perfectly positioned to share information, to spread innovation, and to coordinate our collective defence in cyberspace,” he stated. He referenced Russia and China as examples of authoritarian governments that threaten NATO’s values in cyberspace, underlining the need for allies to work together to defend against such regimes. As NATO notes, member nations at the Vilnius Summit earlier this year agreed to support NATO’s cyber defense politically, militarily, and technically, and to avoid depending on non-member states for digital equipment.
NSA warns of China-backed attacks on US critical infrastructure.
Speaking at the Cyberwarcon security conference held in Washington, DC last week, two members of the US National Security Agency (NSA) warned that Chinese government-backed hackers are targeting critical infrastructure in the US. As Wired explains, the message isn’t new; since May NSA has been cautioning that Beijing-sponsored threat group Volt Typhoon has its sights set on the US power grid. At the conference, however, NSA representatives focused on the novel tactics these hackers would utilize. As the Washington Post notes, NSA reminded attendees that the Chinese government goes to great effort to collect research on zero-day vulnerabilities, and the cybersecurity community should be on the lookout for attacks exploiting these novel bugs.
They also warned that, instead of deploying malware, the threat actors will likely employ a “living off the land” approach, abusing legitimate tools to infect target networks and making it more difficult for defenders to identify indicators of compromise. Josh Zaritsky, chief operations officer of the Cybersecurity Collaboration Center, explained, “They're trying to look like your normal users, trying to look like your normal administrators. They're compromising small office and home office network devices all over the country. They look like any of your remote users, which we all have a lot of now ever since the pandemic. They don't raise any alarms or trip wires into your normal environment.” Morgan Adamski, director of the NSA’s Cybersecurity Collaboration Center, stated, “The threat is extremely sophisticated and pervasive. It is not easy to find. It is pre-positioning with intent to quietly burrow into critical networks for the long haul. The fact that these actors are in critical infrastructure is unacceptable, and it is something that we are taking very seriously—something that we are concerned about.”
Microsoft’s Mark Parsons and Judy Ng also gave a presentation on Volt Typhoon’s recent activities, which include targeting universities and US Army Reserve Officers’ Training Corps programs, attacks that are largely focused on espionage. However, Adamski emphasized that going forward the threat group’s operations are expected to concentrate on critical networks in an effort to disrupt essential services. She stated, “Let me be clear: These target entities are of no intelligence value. The fact that these actors are in critical infrastructure is unacceptable.” NSA is advising the cybersecurity community to employ multifactor authentication, limit user and admin system privileges, and perhaps most importantly, closely monitor activity logs for any abnormal or potentially malicious activity.
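The log-monitoring advice above can be made concrete with a small sketch. The following Python is a hypothetical illustration, not part of the NSA guidance: the tool list, event format, and function name are our own. It baselines which accounts normally run built-in admin utilities, then flags first-time use of those tools, since "living off the land" activity often shows up as a legitimate binary run by an account that has never touched it before.

```python
# Hypothetical sketch: flag potential "living off the land" activity by
# baselining which accounts normally run built-in admin tools, then alerting
# when an account runs one of those tools for the first time.
# The tool names and (user, process) event format are illustrative.
from collections import defaultdict

ADMIN_TOOLS = {"powershell.exe", "wmic.exe", "ntdsutil.exe", "netsh.exe"}

def flag_anomalies(baseline_events, new_events):
    """baseline_events / new_events: iterables of (user, process) tuples.

    Returns the (user, process) pairs from new_events where a user runs an
    admin tool they never ran during the baseline period.
    """
    seen = defaultdict(set)
    for user, proc in baseline_events:
        seen[user].add(proc.lower())
    alerts = []
    for user, proc in new_events:
        p = proc.lower()
        if p in ADMIN_TOOLS and p not in seen[user]:
            alerts.append((user, proc))
    return alerts
```

In practice this kind of baseline-and-deviate logic would run inside a SIEM over real process-creation logs; the point of the sketch is only that anomalous use of legitimate tools, rather than malware signatures, is the signal to watch.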
US publishes guidelines for secure and “responsible” AI development.
This week the US federal government issued two sets of guidance on the secure use of artificial intelligence. Following last month’s executive order from President Joe Biden calling on the Department of Homeland Security (DHS) to promote global AI safety standards, the Cybersecurity and Infrastructure Security Agency (CISA) released a Roadmap for Artificial Intelligence (AI). CISA explains that the guidance “outlines five strategic lines of effort for CISA that will drive concrete initiatives and outline CISA’s responsible approach to AI in cybersecurity.” These initiatives include responsible use of AI to strengthen cyber defense; assessment of secure-by-design AI systems; defense of critical infrastructure from malicious use of AI; collaboration between agencies, international partners, and the public; and increased AI expertise in CISA’s workforce. CISA Director Jen Easterly explains, “Our Roadmap for AI, focused at the nexus of AI, cyber defense, and critical infrastructure, sets forth an agency-wide plan to promote the beneficial uses of AI to enhance cybersecurity capabilities; ensure AI systems are protected from cyber-based threats; and deter the malicious use of AI capabilities to threaten the critical infrastructure Americans rely on every day.”
Mike Barker, CCO of HYAS, particularly approves of the commitment to security by design. “CISA’s launch of JCDC.AI showcases a strategic commitment to fortify cyber defenses and mitigate risks associated with AI in critical infrastructure and is a tangible step toward managing AI threats with precision. This initiative aligns seamlessly with CISA's holistic approach, as evidenced by their ongoing efforts. From championing 'secure by design' AI software adoption to providing best practices and guidance, they are setting a benchmark in cybersecurity. Their dedication to red-teaming generative AI and sharing insights with interagency, international partners, and the public speaks volumes.”
On Tuesday the US Department of Defense (DoD) Chief Digital and Artificial Intelligence Office (CDAO) published the Responsible Artificial Intelligence (RAI) Toolkit. A key deliverable of the DoD RAI Strategy & Implementation Pathway, the toolkit builds upon guidance previously issued by the Defense Innovation Unit, the National Institute of Standards and Technology (NIST), and the Institute of Electrical and Electronics Engineers. The DoD explains, “The RAI Toolkit provides users a voluntary process that identifies, tracks, and improves alignment of AI projects to RAI best practices and the Department's AI Ethical Principles, while capitalizing on opportunities for innovation.” The document also offers an intuitive assessment process to cover the AI product’s life cycle, and provides an RAI standard for DoD industry partners. CDAO Craig Martell stated, "Responsible AI is foundational for anything that the DoD builds and ships…This release demonstrates our commitment to ethics, risk assessment, internal governance, and external collaboration.”
New York focuses on hospital cybersecurity.
Kathy Hochul, governor of the US state of New York, on Monday proposed a new slate of cybersecurity regulations for hospitals in an effort to protect critical network systems and the sensitive data they contain. Hochul stated, "Our interconnected world demands an interconnected defense against cyber-attacks, leveraging every resource available, especially at hospitals. These new proposed regulations set forth a nation-leading blueprint to ensure New York State stands ready and resilient in the face of cyber threats.” As StateScoop explains, the rules would require that each hospital establish its own cybersecurity program and response plan, as well as appoint a chief information security officer.
Hospitals would also have to regularly assess their cyber risks, and run tests of their response plans to make sure hospital services can continue in the event of a cyber incident. The regulations will serve as a complement to the rules of the existing Health Insurance Portability and Accountability Act, which focuses on the security of health records. The move follows a wave of cyberattacks targeting medical institutions, and Hochul’s FY 2024 budget allocates $500 million to assist health care facilities in complying with the proposed rules. New York’s Public Health and Health Planning Council will consider the proposed regulations this week, and if adopted the rules will be published in December for a sixty-day public comment period.
The regulations would mandate some well-established best practices. Emily Phelps, Director at Cyware, approves of the move as an advance in resilience. “Governor Kathy Hochul's new cybersecurity regulations proposal for New York hospitals represents a significant step in reinforcing the resilience of healthcare facilities against cyber threats. Mandating the establishment of a Chief Information Security Officer (CISO) role and enforcing Multi-Factor Authentication (MFA) aim to fortify the defenses of healthcare systems."
She also sees an important role for collective security, and for addressing third-party risk. “With our interconnected world, it is true we need interconnected defenses. A crucial aspect is a focus on collective defense and software supply chain security in healthcare. Collective defense involves leveraging shared knowledge and resources to improve the overall cybersecurity posture of all involved entities. In healthcare, where organizations deal with sensitive data across modern and legacy systems, leveraging healthcare ISACs and trusted intelligence sharing help these entities become more proactive. Furthermore, the emphasis on evaluating and testing third-party security is a proactive measure to secure the software supply chain. Healthcare organizations rely heavily on various software solutions and third-party services, making them vulnerable to supply chain attacks. Regular testing and policy establishment for third-party security will help mitigate these risks.”
Healthcare organizations have become prime targets, and the rules may also be seen as a response to this. Paul Valente, CEO and Co-Founder of VISO Trust, wrote, “The lack of funding for security within the healthcare sector has led to the industry becoming a primary target for cyber criminals. Ransomware has become endemic with healthcare organizations, more frequently leaving them with no choice but to pay the ransom, rather than risk patient safety. Third-party risks pose significant challenges for hospitals due to their complex relationships with supply chain vendors and the evolving nature of cyber threats. Understaffing and outdated and complex techniques further hinder effective cyber risk management. Governor Hochul’s funding and requirements are just a starting point in safeguarding these institutions. It’s great to see New York taking the lead and it will be intriguing to see which states follow suit.”
A call to halt sales of compromised Android devices.
The Electronic Frontier Foundation has asked the Federal Trade Commission (FTC) to stop resellers from selling set-top Android boxes and mobile devices known to be compromised with malware. The ban the EFF advocates would affect devices manufactured by AllWinner and RockChip. These devices, the EFF says, were found by HUMAN researchers to be infected with BadBox malware. "When first connected to the internet, these infected devices immediately start communicating with botnet command and control servers," the letter explains. "Then they connect to a vast click-fraud network—in which bots juice advertising revenue by producing bogus ad clicks." The infected devices can also be used to stage other attacks without their owners' knowledge, and this exposes them to legal risk as well as ordinary cyber risk. The EFF argues that this supply chain problem is a consumer protection issue, which therefore clearly lies within the FTC's remit.
Supply chains may receive more such attention in the near future, commented Javed Hasan, CEO and co-founder of Lineaje. “I expect we’ll see increasing sanctions related to hardware and supply chain attacks over the coming months," Hasan wrote in emailed comments. "CISA recently introduced the Hardware Bill of Materials Framework (HBOM) for supply chain risk management as a parallel to their SBOM initiatives. The new guidelines extend risk management to hardware components, such as Android TV set-top boxes and mobile devices and illustrates how serious of a problem supply chain attacks are on consumer-based devices. With the increase in demand for IoT products, the synergy between SBOMs and HBOMs is becoming increasingly essential to achieve a holistic supply chain risk management strategy. It means that organizations can now have a more comprehensive view of their entire supply chain, covering both software and hardware components. This integrated approach will lead to more robust and secure digital landscapes, better protection against emerging threats, and improved overall resilience.”
Jeannie Warner, Director of Product Marketing at Exabeam, noted that this kind of risk has a history. “It was 2016 and earlier that we started seeing backdoors and problems in supply chains. The ugly truth is that any software or firmware update creates the possibility of a Solarigate issue, where the core website/download site can be hacked, the binaries altered," Warner wrote. "For the end user, both Google Play and Apple Store have scans to try and protect the software being distributed on their sites. The truth is, any OS or system can be corrupted, any check bypassed. It’s a constant game of cat/mouse played by adversaries vs Security teams, and the game will continue. Software supply chain attacks are successful because they abuse the trusted relationship between user and vendor. And there are many vendors that push releases out before full testing comes back clean because of economic pressure from management or the field. The truth is that many software vendors are not security oriented because their focus is meeting their customer needs in terms of functional requirements, and security as a non-functional requirement is far too often a second or third in priority. Businesses can and should, wherever possible, have a UAT or testing environment where they evaluate the binaries and updates that come in for impact, and keep up to date on all patches wherever they can. Now, it is true that many corporations do not have such a testing environment. In a scenario where one patch may be fouled or contain malware, another system monitoring it should be able to see anomalous or new behavior visible either on the endpoint, on the network, at the active directory, or otherwise passing through their SIEM or SASE collective.”
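Warner's suggestion that businesses vet incoming binaries and updates before deployment can be illustrated with one minimal control: comparing each downloaded artifact against the digest the vendor publishes out-of-band, before promoting it beyond a staging or UAT environment. This is a hypothetical sketch; the function names and file paths are our own, and a real pipeline would also verify code signatures.

```python
# Hypothetical sketch: verify a vendor update against its published SHA-256
# digest before promoting it out of a staging/UAT environment.
# Function names and paths are illustrative, not from any vendor's tooling.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Hash the file in chunks so large update bundles aren't read into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_update(path, published_digest):
    """Return True only if the local artifact matches the vendor's digest."""
    return sha256_of(path) == published_digest.lower()
```

A digest check of this kind would not have caught an attack like Solarigate, where the vendor's own build pipeline was compromised and the published digest matched the tampered binary; it addresses only the download-site and in-transit tampering cases Warner describes, which is why she also recommends behavioral monitoring downstream.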
Using regulatory risk to pressure a ransomware victim.
BleepingComputer reports that the ALPHV/BlackCat ransomware gang has dimed out one of its claimed victims to the US Securities and Exchange Commission (SEC). Their victim, the criminals allege, failed to disclose a cyber incident that had a material impact on its business by filing a Form 8-K within the prescribed four business days. ALPHV/BlackCat claimed to have stolen data from software company MeridianLink on November 7th. MeridianLink hasn't paid, and so the gang has reported the company to the SEC.
The gang received an automated reply from the SEC ("Thank you for contacting the United States Securities and Exchange Commission," etc.) but it's unlikely their complaint will be found to have merit. For one thing, the SEC's new disclosure rule doesn't take formal effect until December 15th, even though companies are already adjusting their practices to come into compliance. (See this paper from AuditBoard for an overview of such adjustments.) And for another thing, public companies will be required to disclose attacks that have a material impact. Who made BlackCat the authority on materiality?
Dr. Ilia Kolochenko, Chief Architect at ImmuniWeb and Adjunct Professor of Cybersecurity and Cyber Law at Capitol Technology University, points out that this kind of pressure was foreseeable:
“Misuse of the new SEC rules to make additional pressure on publicly traded companies was foreseeable, moreover, ransomware actors will likely start filing complaints with other US and EU regulatory agencies when the victims fail to disclose a breach within the timeframe provided by law.
"Having said that, not all security incidents are data breaches, and not all data breaches are reportable data breaches. Therefore, regulatory agencies and authorities should carefully scrutinize such reports and probably even establish a new rule to ignore reports uncorroborated with trustworthy evidence, otherwise, exaggerated or even completely false complaints will flood their systems with noise and paralyze their work.
"Victims of data breaches should urgently consider revising their digital forensics and incident response (DFIR) strategies by inviting corporate jurists and external law firms specialized in cybersecurity to participate in the creation, testing, management and continuous improvement of their DFIR plan. Many large organizations still have only technical people managing the entire process, eventually triggering such undesirable events as criminal prosecution of CISOs and a broad spectrum of legal ramifications for the entire organization. Transparent, well-thought-out and timely response to a data breach can save millions.”
(Added, 3:15 PM ET, November 20th, 2023.) While it's been tempting to scoff at the implausibility of a criminal gang diming out a victim to the SEC (we did, for example, when we asked above who made BlackCat the authority on materiality), Avishai Avivi, CISO of SafeBreach, pointed out in an email that an increase in risk can ramp up the pressure on a victim to pay. "While this move seemed to elicit amusement from most security professionals, it does raise an intriguing development in the dynamics between ransomware groups and their victims," he wrote. "With its move to mandate disclosure of material cyber events, the SEC has inadvertently given the malicious actors another lever to extract payment from public companies that fall victim to their attacks. On the one hand, the malicious actor is extorting the executive team to pay in return for them to avoid a complaint to the SEC that may result in significant legal costs and fines. Conversely, it can also prevent the malicious actor from publicly disclosing their successful attack. If they publish their success, the proverbial cat (pun intended) is out of the bag."
So risk, of course, includes regulatory risk. "Time will tell what impact this move by ALPHV/BlackCat has on public companies. It will likely start discussions around what is considered material financial impact and whether there should be a special treatment for ongoing ransomware attacks.”
(Added, 8:45 AM ET, November 21st, 2023.) Darren Williams, CEO and Founder at BlackFog, views the incident as a foreseeable instance of attacker adaptability. “This is one of many examples of how hackers will continuously refine and adapt their strategies depending on the current cyber ecosystem. With the new SEC disclosure going into effect in mid-December, we will surely see an increase in hackers leveraging this as an extortion tactic to humiliate their victims and guarantee payment is made. The added levels of embarrassment from hackers exposing organizations’ failure to follow regulations and remain transparent with their customers and partners, should give them all the more reason to avoid delayed reporting and hopefully eliminate this new extortion tactic.”
Ferhat Dikbiyik, head of research at Black Kite, wonders whether the gang might have a foothold, in the form of affiliates, inside the US itself. "AlphV, the notorious ransomware group also known as BlackCat, has shown a new level of cyber audacity in leveraging the Securities and Exchange Commission’s (SEC) new cybersecurity disclosure rules. This puts added pressure on publicly traded MeridianLink after claiming to have breached its network and stolen unencrypted data. This move has blindsided the industry and raised questions about the effectiveness of the new SEC rules in the fight against cybercrime. It also begs the question: does AlphV have affiliates within the US?" Dikbiyik wrote in emailed comments. "It’s not a far-fetched hypothesis, considering the group’s sophisticated attack on MGM Resorts. In this case, AlphV used social engineering to infiltrate MGM’s network, disrupting a range of operations, compromising the information of many customers, and costing the company an estimated $100 million. This success suggests that it is possible that AlphV could indeed have a practical understanding and familiarity with American legal and cybersecurity systems, which should not be underestimated."
The incident also suggests to Dikbiyik that the SEC's regulations remain a work in progress. "Although the SEC rules are a step toward transparency, MeridianLink and MGM incidents reveal an uncomfortable truth: compliance alone is not sufficient. Cybersecurity is dynamic and requires robust, always-on defenses and proactive strategies. This is an industry-wide wake-up call."