Deepfakes in conflict and commerce: a conversation with AU10TIX’s Carey O’Connor Kolaja.
By Katie Aulenbacher, the CyberWire staff
May 16, 2022

Where is deepfake technology headed, and what’s to be done about it? AU10TIX CEO Carey O’Connor Kolaja warns of expanding use cases, and proposes a fix grounded in technological teamwork. 

Events are unfolding rapidly on the disinformation front, with the Biden Administration’s formation of a Disinformation Governance Board housed in the Department of Homeland Security, free-speech proponent Elon Musk’s bid to take over Twitter, and the publication of the Declaration for the Future of the Internet. The CyberWire caught up with Carey O’Connor Kolaja, CEO of AU10TIX and a former PayPal VP, on the role of deepfakes in current events and how they might shape the future of digital communication.

Fakery in Russia’s war against Ukraine.

The use of deepfakes in the Russia-Ukraine war serves as a reminder that good and bad guys alike are constantly innovating, Kolaja said: “It is important to understand that while our methods continue to evolve so do the technologies and methods used by those looking to promote chaos through misinformation.”

The lesson average citizens should take from the growing prominence of deepfakery is to exercise caution in trusting their lying eyes (and ears). Kolaja advised users to “fact check and verify everything they see on the internet,” since “[f]or the casual consumer of information, it can be difficult to identify a deepfake at first glance, and many would not know how to identify a deepfake upon deeper inspection.” Consumers must “understand that this technology exists,” she said, “and is being used to promote misinformation.”

The business cases for deepfakes: benign and malign.

Audio, image, and video deepfakes have benign and even benevolent uses in, for example, education, communications, entertainment, and commerce. On the other hand, they can undermine trust and jeopardize security. “In addition to the havoc deepfakes can wreak on governments, militaries, and consumers,” Kolaja said, “they can also cause a wide variety of problems for businesses globally.”

Deepfaked audio, she said, is “particularly concerning,” with “endless” conceivable misuses. Synthetic voices can mimic executives, defeat voice-identification tools, and mislead courts, for instance. “[E]mployees can be fooled thinking it is the actual voice of senior management,” she explained, and “these voices can also be used to fool voice verification technologies at large institutions such as banks.”

Fortunately, products exist to combat the threat of deepfakes, Kolaja noted: “businesses can invest in necessary tools to identify deepfakes that utilize the growing power [of] AI and machine learning to identify inconsistencies in these deepfakes, both audio and video.” 
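
To make the detection idea concrete, here is a minimal, hypothetical sketch in Python (PyTorch) of one common approach: score each video frame with a binary real-versus-fake classifier and aggregate the scores. The tiny network and random weights below are placeholders, not AU10TIX’s product or any production detector; a real system would use a model trained on labeled deepfake data and would examine audio and temporal inconsistencies as well.

```python
# Minimal sketch of frame-level deepfake scoring. Illustrative only:
# the classifier here is untrained and stands in for a real model.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Toy stand-in for a trained real/fake frame classifier."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 1)  # logit: higher = more likely fake

    def forward(self, frames):                # frames: (N, 3, H, W)
        x = self.features(frames).flatten(1)  # (N, 16)
        return self.head(x).squeeze(1)        # (N,)

def score_video(frames: torch.Tensor, model: nn.Module) -> float:
    """Average per-frame fake probability across a clip."""
    with torch.no_grad():
        probs = torch.sigmoid(model(frames))
    # A wide spread across frames would itself be a suspicious inconsistency.
    return probs.mean().item()

if __name__ == "__main__":
    model = FrameClassifier().eval()
    dummy_frames = torch.rand(8, 3, 224, 224)  # stand-in for decoded video frames
    print(f"fake probability: {score_video(dummy_frames, model):.2f}")
```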

What’s next for deepfakes.

For now, Kolaja said, deepfake detectors seem to be edging out deepfakers in the AI arms race, but victory is not a sure thing. “Currently, I think detectors have managed to stay ahead of deepfakers,” she said, but “the gap is narrow at best. With the speed at which technology is improving and deepfake creators are becoming more sophisticated it is important that we stay vigilant and continue to develop detection technology at the same pace.”

Deepfake technology will likely increase in range as well as quality, Kolaja predicted. “Deepfakes will become more common and will penetrate unexpected areas,” she said. “The expected uses will continue to be media, in all forms including political campaigns, social campaigns, even commercial campaigns. Other forms of impersonation that can prove to be even more problematic can range from a fraudulent person taking exams to fake doctors providing fraudulent online services.”

“In the identity space,” she continued, “fraudsters will try to fool authentication systems using synthetic images [or] videos of someone other than themselves. Fraudsters could also create videos of family relatives to try and obtain ransom money.” Threats to identity verification are already routine; AU10TIX detects fifty such threats per customer every day.

The political and social possibilities worry Kolaja most, however. “[T]here are serious concerns around the ability for deepfakes to change the trajectory of policies, incite war and social chaos,” she said. (See our Disinformation Briefing for a longer look at the meaning and effects of breaking developments in global information warfare.)

Disinformation, inconvenient truths, and locating reality together.

On a related note, we’ve seen authorities around the world invoke deepfakes and disinformation to cast doubt on real events for political advantage. Kolaja envisioned a way of distinguishing truth from fiction: cross-sector information sharing backed by technical checks and balances.

“To combat disinformation and misrepresentation, we need to unify and create consortiums to safely and legally share signals within ecosystems,” she said. “Technology allows us to do this with zero-trust architectures and cryptographic signatures but commercially we have to find a way forward. If CNN misrepresents something, then it is bad for all media agencies. If GoFundMe misrepresents then it is bad for all crowdfunding platforms,” and so forth. 
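
Kolaja’s mention of cryptographic signatures suggests one concrete mechanism: a platform signs each fraud signal it shares, and consortium peers verify the signature before trusting the data. The sketch below, using Python’s cryptography library and an invented signal schema, shows the basic sign-and-verify flow; key distribution, revocation, and the surrounding zero-trust policy layer are assumed and out of scope.

```python
# Minimal sketch of signed signal-sharing within a consortium.
# The signal schema and platform name are hypothetical.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The sharing platform holds a private key; peers hold its public key.
platform_key = Ed25519PrivateKey.generate()
public_key = platform_key.public_key()

# An illustrative shared fraud signal, serialized deterministically.
signal = json.dumps({
    "source": "example-platform",
    "type": "suspected-deepfake",
    "content_hash": "sha256:...",  # placeholder digest of the flagged content
}, sort_keys=True).encode()

signature = platform_key.sign(signal)

# A consortium peer verifies the signature before ingesting the signal.
try:
    public_key.verify(signature, signal)
    print("signal authenticated; safe to ingest")
except InvalidSignature:
    print("signature check failed; discard the signal")
```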

Cross-referencing data from diverse sources could clear up some claims, especially as each institution bolsters its own security posture. “We should also bring together unexpected data signals to determine what is authentic and what is not,” Kolaja said, “[c]ombining social signals, with local signals, with deepfake technology for example. Organizations should also be looking to institute new policies, standards and interoperability while embracing [Coalition for Content Provenance and Authenticity (C2PA)] tools, and new approaches to maintaining trust in our digital/physical world.”
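
As a toy illustration of bringing together “unexpected data signals,” the sketch below blends three hypothetical normalized signals (social, local, and a deepfake-detector score) into a single authenticity score. The signal names and weights are invented for illustration; a real system would learn or calibrate them from data rather than hard-code them.

```python
# Toy combiner for heterogeneous trust signals. All names, ranges,
# and weights are hypothetical, not any vendor's scoring model.
from dataclasses import dataclass

@dataclass
class Signals:
    social: float    # e.g., account age / network reputation, in [0, 1]
    local: float     # e.g., geolocation or device consistency, in [0, 1]
    detector: float  # 1 - fake probability from a deepfake model, in [0, 1]

WEIGHTS = {"social": 0.3, "local": 0.2, "detector": 0.5}  # illustrative only

def authenticity_score(s: Signals) -> float:
    """Weighted blend of independent trust signals; higher = more authentic."""
    return (WEIGHTS["social"] * s.social
            + WEIGHTS["local"] * s.local
            + WEIGHTS["detector"] * s.detector)

print(f"{authenticity_score(Signals(social=0.9, local=0.8, detector=0.4)):.2f}")  # 0.63
```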