At a glance.
- A market for deepfakes.
- Facebook's Oversight Board and the case of former US President Trump.
- Disinformation as a threat to business.
A market for deepfakes.
Researchers at the security firm Recorded Future have discerned a growing international criminal market for deepfakes.
Why do people care about this? It’s easy to think of deepfakes as primarily useful in the more exotic forms of spoofing: faked video or photographic evidence, say, deployed to discredit a political movement or figure. Or, on a more prosaic level, they might be used for more effective social engineering: better and more convincing business email compromises, for example, or more compelling catphishing.
But there are also other, even more prosaic concerns about deepfakes. A criminal market in such deceptive stuff might undercut commonly used modes of establishing one’s identity. Traditionally, people have seen three basic ways of establishing that they are who they say they are.
You can do this through something you know, and the most common form this takes is the password. The security question is another: if you know your grandmother’s maiden name was Fifinella, or that your first pet was Blinky the chameleon or Finnegan the goldfish, or that you drove a Hillman Minx when you were at school, the assumption is that, well, you’re probably who you say you are. You can also do this through something you have, like a hardware token, or, in real life, maybe an ID card or a badge. Or, finally, you can establish your identity through something you are, that is, through one of the several biometric modalities, like your face, your fingerprint, or even your gait. Thus: something you know, something you have, or something you are.
One of the reasons a criminal market in deepfakes is troubling is that it might be used to undercut that third mode of identification: something you are. This could erode trust in the biometric technologies that organizations use online. If your fake face is out there, well, maybe some hood can use it to sign on somewhere as you, your own self.
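To make the three factors concrete, here’s a minimal sketch in Python. It’s hypothetical throughout: the function names, the toy unsalted hash, and the similarity threshold are all invented for illustration, and real systems use salted, slow password hashes, standards-based one-time codes, and liveness-checked biometrics. The comment on the last check marks the factor a convincing deepfake attacks.

```python
import hashlib
import hmac
import math

def check_knowledge(password: str, stored_sha256_hex: str) -> bool:
    """Something you know: compare a password hash in constant time."""
    digest = hashlib.sha256(password.encode()).hexdigest()
    return hmac.compare_digest(digest, stored_sha256_hex)

def check_possession(token_code: str, expected_code: str) -> bool:
    """Something you have: a one-time code from a hardware or phone token."""
    return hmac.compare_digest(token_code, expected_code)

def _norm(vector) -> float:
    return math.sqrt(sum(x * x for x in vector))

def check_inherence(probe, enrolled, threshold=0.9) -> bool:
    """Something you are: cosine similarity between two face embeddings.
    This is the check a deepfake targets: if a synthetic face yields an
    embedding close enough to the enrolled one, a matcher with no
    liveness detection will happily accept it."""
    dot = sum(a * b for a, b in zip(probe, enrolled))
    denom = _norm(probe) * _norm(enrolled)
    return denom > 0 and dot / denom >= threshold
```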
Deepfakes are, in the view of Recorded Future’s Insikt Group, “fraud’s next frontier.” The technology used to be repellent but, in most respects, less threatening. “Deepfake technology used maliciously has migrated away from the creation of pornographic-related content to more sophisticated targeting that incorporates security bypassing and releasing misinformation and disinformation,” the researchers say. “Publicly available examples of criminals successfully using visual and audio deepfakes highlights the potential for all types of fraud or crime, including blackmail, identity theft, and social engineering.”
The researchers found online souks catering especially to Anglophone and Russophone hoods, but they also found a few hawking to speakers of Spanish, Turkish, and Chinese. The deepfake products and services on offer include editing both pictures and video, how-to tips, tutorials, exchanges of best (that is, bad) practices, free software downloads and photo generators, and news on advancing criminal technology.
The Insikt Group says that much of the online chatter about deepfakes is of a relatively benign, technophile nature: people interested in the topic are chatting and swapping stories. But the researchers expect this to turn ugly as hobbyists’ curiosity gives way to an appreciation of deepfakes’ criminal potential.
De-platforming: Facebook and former US President Trump.
Facebook's Oversight Board upheld the social platform's ban of former President Trump, but with a degree of ambivalence about the consistency with which Facebook moderates its users' speech and behavior:
"The Board has upheld Facebook’s decision on January 7, 2021, to restrict then-President Donald Trump’s access to posting content on his Facebook page and Instagram account.
"However, it was not appropriate for Facebook to impose the indeterminate and standardless penalty of indefinite suspension. Facebook’s normal penalties include removing the violating content, imposing a time-bound period of suspension, or permanently disabling the page and account.
"The Board insists that Facebook review this matter to determine and justify a proportionate response that is consistent with the rules that are applied to other users of its platform. Facebook must complete its review of this matter within six months of the date of this decision. The Board also made policy recommendations for Facebook to implement in developing clear, necessary, and proportionate policies that promote public safety and respect freedom of expression."
If there were ever an object lesson in the impossibility of pleasing everyone, it can be found in l'affaire Trump. The Washington Post's media reporting denounces the decision as craven waffling, arguing that it was immediately obvious that President Trump deserved a ban for spreading misinformation and incitement, since, presumably, error has no (or at least far fewer) rights. The New York Post, on the other hand, dismisses the Oversight Board as a cabal of Trump-hating progressive censorious tools. Both articles have their points, but it's difficult to see how either side could find common ground. MIT Technology Review sees the oversight exercise as in its essence a marketing gambit unlikely to resolve the tensions many perceive between freedom of speech and public safety.
Major social media, including Facebook and Twitter, have asked for public feedback on how to apply their content moderation rules to major political leaders. Reuters notes that both platforms cut big leaders more slack than they do the ordinary Janes and Joes.
Nina Jankowicz, Disinformation Fellow with the Science and Technology Innovation Program at the Wilson Center in Washington, DC, forwarded comments on the Oversight Board's decision:
"The Facebook Oversight Board's decision to uphold the ban on former President Trump for another six months and ultimately leave the decision on the permanence of the ban in the platform's hands underlines the need for an independent, government regulatory body to provide oversight of and transparency within social media. Ultimately, the Oversight Board is still a body that was created and paid for by Facebook.
"What is more striking about the decision and related recommendations is not related to President Trump, but how Facebook will deal with other world leaders and government officials. The Board acknowledges that not just officials, but influencers have an outsized influence on politically motivated violence, and recommends that Facebook invest in local subject matter, cultural, and linguistic experts to help them monitor content and enforce policies, an area in which Facebook has been notoriously underinvested.
"The Board also emphasized the need for transparency in Facebook's content moderation decisions, dinged Facebook for its opacity in not responding to the Board's requests for material in its investigations, and urged Facebook to conduct a review of its role in the January 6 insurrection at the US Capitol.
"In short: the decision is not a "win" for any political force. It is a start toward more equitable enforcement and discourse on Facebook, but until some of these recommendations are translated into law and carry consequences, democratic discourse around the world will continue to be at the mercy of social media executives."
Commerce and disinformation.
Disinformation isn't just for information warfare. Crunchbase observes that disinformation can hit businesses, too, harming brand reputation. This can occur in the context of stock shorting, short-squeezes, pump-and-dump scams, or even unfortunate "influencer" engagements.
Crunchbase, which is of course particularly concerned with start-ups and their exits, notes that exits like IPOs have become an occasion for reputational attack through what Facebook calls “coordinated inauthentic behavior”: the creation and marshalling of bogus accounts to astroturf an illusion of grassroots opinion. The article recommends looking for early signs of “a coordinated effort of online users looking to shock the market,” of signs that “supporters of a competitor” are trying to “sow discord,” or, perhaps, that “fake profiles are building upon existing discontent in your user base.”
"From there, companies have a myriad of options when it comes to leveraging these insights. If conversations have yet to go viral, taking down and reporting fake accounts can be an option for brands. Companies can even compile the fake posts and address them directly in their messaging response. Similarly, brands might want to release preemptive campaigns focused on promoting accurate information or debunking false claims to get ahead of potentially damaging narratives. But regardless of how companies choose to respond, they need to be prepared to detect, track and trace disinformation in real time as the market adjusts to the impact of online conversations.
"Arming your teams with the data of where these conversations are happening online—and how they’re spreading—is the critical first step to understanding how to take action. With the information in hand, leadership teams can make better decisions in how to respond publicly, how to engage or not engage in the conversation online, or inform any public-facing executives as it relates to their reputation.
But there are other, more workaday forms of commercial disinformation at play, too. A report from Safety Detectives describes how an exposed Elasticsearch database revealed the workings of an organized campaign of bogus Amazon reviews:
"The server contained a treasure trove of direct messages between Amazon vendors and customers willing to provide fake reviews in exchange for free products. In total, 13,124,962 of these records (or 7 GB of data) have been exposed in the breach, potentially implicating more than 200,000 people in unethical activities.
"While it is unclear who owns the database, the breach demonstrates the inner workings of a prevalent issue affecting the online retail industry."