Deepfakes have grown in sophistication and verisimilitude, but there are still some tells that can help an alert consumer recognize them.
The Future of Deepfakes.
Trend Micro has published a report looking at the current and future impacts of deepfakes. The researchers note that deepfakes have already been used in social engineering attacks, and these attacks will increase as the technology improves:
- “There is enough content exposed on social media to create deepfake models for millions of people. People in every country, city, village, or particular social group have their social media exposed to the world.
- “All the technological pillars are in place. Attack implementation does not require significant investment and attacks can be launched not just by national states and corporations but also by individuals and small criminal groups.
- “Actors can already impersonate and steal the identities of politicians, C-level executives, and celebrities. This could significantly increase the success rate of certain attacks such as financial schemes, short-lived disinformation campaigns, public opinion manipulation, and extortion.
- “The identities of ordinary people are available to be stolen or recreated from publicly exposed media. Cybercriminals can steal from the impersonated victims or use their identities for malicious activities.
- “The modification of deepfake models can lead to a mass appearance of identities of people who never existed. These identities can be used in different fraud schemes. Indicators of such appearances have already been spotted in the wild.”
What to look for when confronting potential deepfakes.
Trend Micro says organizations should implement multifactor authentication (preferably biometric authentication if possible), and train their employees to look for visual discrepancies that are often present in current deepfakes. Some deepfakes, however, are already indistinguishable from the real thing, so the researchers conclude: “Significant policy changes are required to address the problem on a larger scale. These policies should address the use of current and previously exposed biometric data. They must also take into account the state of cybercriminal activities now as well as prepare for the future.”
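Trend Micro's multifactor authentication recommendation can be made concrete. Most authenticator apps implement the time-based one-time password (TOTP) algorithm standardized in RFC 6238; a minimal sketch using only Python's standard library is below (the base32 secret shown is the RFC's published test secret, used here purely for illustration):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Counter is the number of time steps since the Unix epoch.
    now = time.time() if for_time is None else for_time
    counter = int(now // step)
    msg = struct.pack(">Q", counter)  # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    # Dynamic truncation per RFC 4226: low nibble of the last byte
    # selects a 4-byte window, whose top bit is masked off.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: the ASCII secret "12345678901234567890"
# (base32 below) at time T=59 yields the 8-digit code "94287082".
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59, digits=8))
```

Note that a one-time code of this kind is still phishable in real time, which is why the researchers prefer biometric factors where they are available.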
Deepfakes during election cycles.
AwareGO outlines the risks deepfakes pose for mis- and disinformation on social media, particularly during election cycles. Users should check reputable news sources to confirm what they read on social media, especially when it seems sensational or outrageous. The firm mentions three tells that often betray misinformation and disinformation. They're not dispositive evidence that a story is bogus, but they're worth keeping in mind:
- "Clickbait – A headline is more sensationalized than the article’s content warrants. This is done to gain social media engagement, but it can distort the truth."
- "Misleading headlines – Headlines that don’t look too exaggerated but are still misleading or present the story in a different light from the actual content."
- "Satire and parody – Meant to be just for fun but might look like the truth to some."
Don't assume satire and parody are obvious and harmless. It can be surprising how many people find them difficult to recognize. Just a week ago, a "sarcastic" post by a German journalist was one of the principal ways fake news about a coup in China found its way into mainstream reporting.