Cybersecurity Awareness Month: the impact of new technologies.
Oct 2, 2023

Artificial intelligence and quantum computing represent both danger and opportunity. Experts advise getting ahead of them.

Threat actors and defenders alike benefit from technological advances. Industry experts see four new and emerging technologies as presenting defenders with particularly challenging threats and vulnerabilities.

Large language models.

Joe Regensburger, Vice President of Research Engineering at Immuta, thinks that “AI and large language models (LLMs) have the potential to significantly impact data security initiatives.” The technologies have the potential to enhance security as well as to challenge it. “Already organizations are leveraging it to build advanced solutions for fraud detection, sentiment analysis, next-best-offer, predictive maintenance, and more.”

All these are good things, but Regensburger points out that the benefits come with new risks. “At the same time, although AI offers many benefits, 71% of IT leaders feel generative AI will also introduce new data security risks. To fully realize the benefits of AI, it’s vital that organizations consider data security as a foundational component of any AI implementation.”

In particular, large language models are data-hungry, and at one level that appetite represents a protection and compliance challenge. Organizations, Regensburger argues, need to build security into any AI implementation from the outset. “To do this, they need to consider four things: (1) ‘What’ data gets used to train the AI model? (2) ‘How’ does the AI model get trained? (3) ‘What’ controls exist on deployed AI? and (4) ‘How’ can we assess the accuracy of outputs? By prioritizing data security and access control, organizations can safely harness the power of AI and LLMs while safeguarding against potential risks and ensuring responsible usage.”
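
As an illustration of the first of those four questions, a minimal sketch (the classification tags and the allowed set are a hypothetical policy, not an Immuta feature) might gate records by sensitivity before they can enter a training set:

```python
# A minimal sketch of gating training data by sensitivity, per Regensburger's
# first question ("what data gets used to train the AI model?").
# The classification tags and the allowed set are assumptions for illustration.

ALLOWED_CLASSIFICATIONS = {"public", "internal"}  # hypothetical policy

records = [
    {"text": "Q3 product roadmap summary", "classification": "internal"},
    {"text": "Customer SSN 123-45-6789", "classification": "restricted"},
    {"text": "Press release draft", "classification": "public"},
]

def gate_training_data(records):
    """Split records into those approved for training and those withheld."""
    approved = [r for r in records if r["classification"] in ALLOWED_CLASSIFICATIONS]
    withheld = [r for r in records if r["classification"] not in ALLOWED_CLASSIFICATIONS]
    return approved, withheld

approved, withheld = gate_training_data(records)
print(f"{len(approved)} records approved for training; {len(withheld)} withheld")
```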

Generative artificial intelligence (AI).

The related technologies of generative AI have already begun to show their potential for malign use. Yariv Fishman, Chief Product Officer at Deep Instinct, offered an assessment of this quickly evolving family of threats:

“This Cybersecurity Awareness Month is unlike previous years, due to the rise of generative AI within enterprises. Recent research found that 75% of security professionals witnessed an increase in attacks over the past 12 months, with 85% attributing this rise to bad actors using generative AI.

“The weaponization of AI is happening rapidly, with attackers using it to create new malware variants at an unprecedented pace. Current security mechanisms rooted in machine learning (ML) are ineffective against never-before-seen, unknown malware; they will break down in the face of AI-powered threats.”

The counter to these threats should be, Fishman argues, a more advanced form of AI: “The only way to protect yourself is with a more advanced form of AI. Specifically, Deep Learning. Any other ML-based, legacy security solution is too reactive and latent to adequately fight back. This is where EDR and NGAV fall short. What’s missing is a layer of Deep Learning-powered data security, sitting in front of your existing security controls, to predict and prevent threats before they cause damage. This Cybersecurity Awareness Month, organizations should know that prevention against cyber attacks is possible – but it requires a change to the ‘assume breach’ status quo, especially in this new era of AI.”
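
To give a sense of what deep learning on raw inputs looks like in this context, here is a minimal, untrained sketch in the spirit of published byte-level malware classifiers such as MalConv. It is illustrative only, not Deep Instinct’s architecture, and every shape and hyperparameter is an assumption:

```python
# A toy 1-D convolutional classifier over raw file bytes, in the spirit of
# byte-level models like MalConv. Untrained and illustrative only.

import torch
import torch.nn as nn

class TinyByteClassifier(nn.Module):
    def __init__(self, embed_dim=8):
        super().__init__()
        self.embed = nn.Embedding(256, embed_dim)   # one embedding per byte value
        self.conv = nn.Conv1d(embed_dim, 64, kernel_size=16, stride=8)
        self.pool = nn.AdaptiveMaxPool1d(1)         # global max pool over the file
        self.head = nn.Linear(64, 1)                # benign/malicious logit

    def forward(self, byte_ids):                    # byte_ids: (batch, seq_len)
        x = self.embed(byte_ids).transpose(1, 2)    # -> (batch, embed_dim, seq_len)
        x = torch.relu(self.conv(x))
        x = self.pool(x).squeeze(-1)                # -> (batch, 64)
        return self.head(x).squeeze(-1)             # -> (batch,)

model = TinyByteClassifier()
fake_file = torch.randint(0, 256, (1, 4096))        # stand-in for raw file bytes
score = torch.sigmoid(model(fake_file))
print(f"maliciousness score: {score.item():.3f}")   # untrained, so roughly 0.5
```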

Steve Stone, Head of Rubrik Zero Labs, finds three aspects of generative AI particularly worthy of attention:

“First, GAI can demonstrably increase the capability and bandwidth of defense teams, which are typically operating beyond capacity. We should seek out the right types of automation and support that GAI lends itself well to, so we can then reinvest the precious few cycles we have in our defense experts. Let's provide those skilled practitioners the ability to leverage their capabilities in the most impactful ways and transition years of legacy workflow to increased automation delivered via GAI.

“Second, what are the inevitable shifts in defense needed as threats pivot to using GAI as well? Traditionally, cybersecurity has leaned on attacker bottlenecks in our defensive posture. At a minimum, we used these bottlenecks to classify threat types based on resourcing and capability. GAI is undoubtedly going to shift these years-long expectations. If any attacker can quickly use GAI to overcome language limitations, gaps in coding knowledge, or quickly understand technical nuances in a victim environment, what do we need to do differently? We should work to be ahead of these pivots and find the new bottlenecks.

“Third, GAI doesn't come with a zero cost to cybersecurity. Even if we move past using GAI, the presence of GAI leaves us with two new distinct data elements to secure. The first is the GAI model itself, which is nothing more than data and code. Second, the source material for a GAI model should be secured as well. If the model and underlying data are left undefended, we could lose these tools or have them leveraged against us in different ways all without our knowledge.”
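
A minimal sketch of that third point, treating the model and its source data as artifacts whose integrity must be verifiable (the file layout below is hypothetical), might record and recheck digests of each file:

```python
# Record SHA-256 digests of model and dataset files so later tampering is
# detectable. File paths and layout are hypothetical.

import hashlib, json, pathlib

ARTIFACTS = ["model/weights.bin", "data/train_corpus.jsonl"]  # assumed layout

def digest(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

def write_manifest(paths, out="manifest.json"):
    """Digest each existing artifact and persist the results."""
    manifest = {p: digest(p) for p in paths if pathlib.Path(p).exists()}
    pathlib.Path(out).write_text(json.dumps(manifest, indent=2))
    return manifest

def verify(manifest_path="manifest.json"):
    """Return True/False per file: does its current digest match the manifest?"""
    manifest = json.loads(pathlib.Path(manifest_path).read_text())
    return {p: pathlib.Path(p).exists() and digest(p) == h
            for p, h in manifest.items()}
```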

(Added, 2:45 PM ET, October 2nd, 2023.) “This year, CISA’s new theme for Cybersecurity Awareness Month is challenging us to reflect on how we can best secure our world,” Marcus Fowler, CEO of Darktrace Federal, wrote in emailed comments. “The global threat landscape is always evolving, but AI is poised to have a significant impact on the cybersecurity industry. The tools used by attackers – and the digital environments that need to be protected – are constantly changing and increasingly complex. We expect novel attacks will become the new normal, and we’re entering an era where sophisticated attacks can adapt at machine speed and scale.”

The upside to this development, Fowler points out, is AI’s countervailing utility for defense. “Luckily, AI is already being used as a powerful tool for defenders – helping to strengthen and empower our existing cyber workers so they can keep pace with increasingly complex environments and the constant onslaught of ever-evolving cyber threats.”

Deepfakes.

One family of threats emerging from these technologies involves deepfakes, which can automate convincing deception at scale.

Ricardo Amper, CEO and Founder of Incode Technologies, called for improved methods of verifying and authenticating identities. “With the rise of deepfakes and fraudsters becoming increasingly sophisticated, verifying identities is more challenging than ever. As verifying identities becomes harder, fraud mounts,” Amper wrote. “Today, passwordless authentication is one of the top methods to deter fraud where identity means everything, for example, in banking, government, and payments processing. We’re seeing industries such as financial enterprises combat spoofing and identity fraud through biometric digital identity verification, which can prevent the use of ‘synthetic identity’ to steal customer profiles and open new accounts.”

Amper suggested that biometrics could afford an important measure of security against deepfakes. “As a means of digital identification, biometrics prevent fake digital identities by identifying documents that have been tampered with or photoshopped. Companies in a variety of key sectors are introducing digital authentication services and solutions to combat growing levels of fraud and stay ahead of cyber criminals.”

David Divitt, Senior Director, Fraud Prevention and Experience at Veriff, noted that a lot of traditional commonsense skepticism has been rendered obsolete by deepfakes. “We’ve all been taught to be on our guard about ‘suspicious’ characters as a means to avoid getting scammed,” he wrote. “But what if the criminal behind the scam looks, and sounds, exactly like someone you trust? Deepfakes, or lifelike manipulations of an assumed likeness or voice, have exploded in accessibility and sophistication, with deepfakes-as-a-service now allowing even less-advanced fraud actors to near-flawlessly impersonate a target. This progression makes all kinds of fraud, from individual blackmail to defrauding entire corporations, significantly harder to detect and defend against. With the help of Generative Adversarial Networks (GANs), even a single image of an individual can be enough for fraudsters to produce a convincing deepfake of them.”

Older forms of authentication are thus no longer up to the task. Like threats using generative AI, threats deploying deepfake technology call for a defensive response in kind. “Certain forms of user authentication can be fooled by a competent deepfake fraudster, necessitating the use of specialized AI tools to identify the subtle but telltale signs of a manipulated image or voice. AI models can also be trained to identify patterns of fraud, enabling businesses to get ahead of an attack before it hits.” 

Thus deepfakes call for an AI response. “AI is now at the forefront of fraud threats, and organizations that fail to use AI tech to defend themselves will likely find themselves the victim of it.”
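
As a generic illustration of training models to spot patterns of fraud (the features, synthetic data, and library choice below are assumptions, not Veriff’s method), an off-the-shelf anomaly detector can flag verification attempts that deviate from normal behavior:

```python
# An unsupervised anomaly detector over simple per-session features.
# Feature set and data are illustrative assumptions only.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Features per verification attempt:
# [attempts_last_hour, new_device (0/1), geo_distance_km from usual location]
normal = np.column_stack([
    rng.poisson(1, 500),        # typically few attempts per hour
    rng.integers(0, 2, 500),    # occasional new devices
    rng.exponential(5.0, 500),  # usually close to home
])
suspect = np.array([[30, 1, 4200.0]])  # burst of attempts, new device, far away

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspect))  # -1 => flagged as anomalous
```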

Quantum computing.

Philip George, Executive Technical Strategist at Merlin Cyber, urged more attention to the emerging risks of quantum technology. He commented that quantum computing poses a particular threat to cryptography. “While quantum computing is poised to enable researchers to tackle complex problems through simulation in a way that simply wasn’t possible before, it also has very serious implications for cryptography – the foundation upon which functionally all modern cybersecurity relies,” George wrote. 

“A cryptographically relevant quantum computer (CRQC) could render linear cryptography ineffective, meaning sensitive data and critical systems protected in this way will be exposed to anyone with quantum computing capabilities. The reality is that our adversaries are inching closer and closer to achieving a CRQC every day, and in the meantime they are collecting sensitive encrypted data to access later, an approach known as ‘store now, decrypt later.’”

This particular threat is still emerging, but George advises thought and action now, before the new technologies are fielded in an effective and malign form:

“Certain cryptographic standards bodies estimate that we have approximately 7-10 years before quantum cryptographic relevancy is achieved. However, we’ve already seen instances of adversaries exploiting our growing reliance on and implicit trust in current cryptography, as in the SolarWinds SUNBURST backdoor and Microsoft Storm-0558 forged-token attacks. With the executive direction to adopt zero-trust architectures (ZTA) across IT/OT portfolios, the industry cannot afford to delay the inclusion of a quantum-readiness (QR) roadmap (see the joint CISA/NSA Quantum Readiness memo) into said ZTA modernization plans, especially considering how heavily those architectures will rely upon cryptography across every facet of the maturity model. A major component of the QR roadmap is the execution of a cryptographic discovery and inventory report, which would provide valuable insight into quantum-vulnerable cryptographic dependencies as well as overall cryptographic usage. Its results would inform strategic risk management decisions for Y2Q (years to quantum) planning and operational cyber threat-hunting purposes.

“The era of implicit cryptographic trust and reliance on an iterative standards process is coming to a close; the industry needs to fully incorporate cryptographic risk into its vulnerability management and remediation programs before Y2Q. This will ensure that a more cryptographically agile and robust zero-trust ecosystem is achieved across newly modernized environments.”
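
As one concrete slice of the cryptographic discovery and inventory George describes, a minimal sketch along the following lines (the directory path is assumed, and the Python cryptography library stands in for purpose-built discovery tooling) could flag certificates whose public keys rely on quantum-vulnerable algorithms:

```python
# Flag X.509 certificates whose public keys use quantum-vulnerable
# algorithms (RSA, ECC). The certificate directory is an assumed location.

from pathlib import Path
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa, ec

QUANTUM_VULNERABLE = (rsa.RSAPublicKey, ec.EllipticCurvePublicKey)

def inventory(cert_dir: str):
    findings = []
    for pem in Path(cert_dir).glob("*.pem"):
        cert = x509.load_pem_x509_certificate(pem.read_bytes())
        key = cert.public_key()
        findings.append({
            "file": pem.name,
            "subject": cert.subject.rfc4514_string(),
            "quantum_vulnerable": isinstance(key, QUANTUM_VULNERABLE),
        })
    return findings

for item in inventory("/etc/pki/tls/certs"):  # assumed location
    print(item)
```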