At a glance.
- Meta reports on adversarial networks.
- Fakes in the criminal underworld.
Meta reports on adversarial networks.
Facebook's parent Meta yesterday released its end-of-year Adversarial Threat Report. It concentrates on what Meta calls "Coordinated Inauthentic Behavior (CIB), Brigading and Mass Reporting." Coordinated inauthentic behavior is familiar, but brigading and mass reporting deserve some explanation.
Brigading involves an "adversarial network" whose participants cooperate "to mass comment, mass post or engage in other types of repetitive mass behaviors to harass others or silence them," which sounds like trolling scaled to industrial size.
Mass reporting, also characterized as involving an adversarial network, occurs when "people work together to mass-report an account or content to get it incorrectly taken down from our platform." That is, people combine to falsely allege violations of policy in an attempt to get someone banned from Facebook or any other Meta platform. The “reporting” in this case is reporting in the sense of diming someone out to the platform.
Meta took down four coordinated inauthentic behavior networks in China, Palestine, Poland and Belarus. One network in Italy and France was disabled for brigading, and one network in Vietnam was removed for mass reporting. Meta attributes the brigading in Italy and France to a conspiracy movement, and Facebook's ban is partial:
"We removed a network of accounts that originated in Italy and France and targeted medical professionals, journalists, and elected officials with mass harassment. Our investigation linked this activity to an anti-vaccination conspiracy movement called V_V, publicly reported to engage in violent online and offline behaviors. The people behind this operation relied on a combination of authentic, duplicate and fake accounts to mass comment on posts from Pages, including news entities, and individuals to intimidate them and suppress their views. While we aren’t banning all V_V content, we’re continuing to monitor the situation and will take action if we find additional violations to prevent abuse on our apps."
Thus V_V isn't obviously connected to any government. Matters are different with the mass reporting activity:
"The network coordinated to falsely report activists and other people who publicly criticized the Vietnamese government for various violations in an attempt to have these users removed from Facebook. The people behind this activity relied primarily on authentic and duplicate accounts to submit hundreds — in some cases, thousands — of complaints against their targets through our abuse reporting tools."
Some of the coordinated inauthentic behavior also looks government-directed. Meta's summaries of the four takedowns are instructive. Note the way in which much of the activity is keyed to international tension and conflict:
- "Palestine: We removed 141 Facebook accounts, 79 Pages, 13 Groups and 21 Instagram accounts from the Gaza Strip in Palestine that primarily targeted people in Palestine, and to a much lesser extent in Egypt and Israel. We found this activity as part of our internal investigation into suspected coordinated inauthentic behavior in the region and linked it to Hamas."
- "Poland: We removed 31 Facebook accounts, four Groups, two Facebook Events and four Instagram accounts that we believe originated in Poland and targeted Belarus and Iraq. We found this activity as a result of our internal investigation into suspected coordinated inauthentic behavior in the region, as we monitored the unfolding crisis at the border between Belarus and the EU."
- "Belarus: We removed 41 Facebook accounts, five Groups, and four Instagram accounts in Belarus that primarily targeted audiences in the Middle East and Europe. We found this activity as a result of our internal investigation into suspected coordinated inauthentic behavior in the region as we monitored the ongoing crisis at the border between Belarus and the EU, and we linked it to the Belarusian KGB."
- "China: We removed 524 Facebook accounts, 20 Pages, four Groups and 86 accounts on Instagram. This network originated primarily in China and targeted global English-speaking audiences in the United States and United Kingdom, and also Chinese-speaking audiences in Taiwan, Hong Kong, and Tibet. We began looking into this activity after reviewing public reporting about the single fake account at the center of this operation. Our investigation found links to individuals in mainland China, including employees of Sichuan Silence Information Technology Co, Ltd, an information security firm, and individuals associated with Chinese state infrastructure companies located around the world."
Governments, which remain responsible for a good fraction of the adversarial networks Facebook (and its parent Meta) are concerned about, are increasingly outsourcing disinformation operations of this kind to contractors. Not only may this make economic sense, but it also affords a degree of deniability and greater opportunities for amplification of messaging by state-controlled media outlets.
Fakes in the criminal underworld.
Deception has long been an essential part of criminal fraud. The technological convergence of fraud and disinformation, however, continues in cyberspace. Digital Shadows today issued a report, "When acting turns criminal: Deepfakes and voice impersonators in the cybercriminal underground," that describes the criminal side of that convergence.
The security firm is interested in vishing, or voice phishing: the familiar scam phone calls everyone gets (ours commonly offer an extended warranty, HVAC-cleaning services, or a warning that we're about to be arrested for abuse of our Social Security Number). Those scams are pretty transparent. For one thing, they come with an implausible caller ID, and they usually sound as if they're being made from a boiler room as opposed to the Headquarters of the Social Security Police (whoever that might be, we're pretty sure they'd have a better-sounding work environment).
But vishing may be growing more sophisticated:
"When cybercriminals really want to up the ante and make an impersonation appear as credible as possible, they may resort to deepfake audio or video technology. Deepfake technology can alter or clone voices in real time, resulting in the artificial simulation of a person’s voice. Cybercriminals can use deepfake videos or audio to impersonate individuals and bypass security measures to achieve their aim of, for example, authorizing a payment or gathering valuable intelligence."
Digital Shadows has found voice impersonation services for sale in Russophone criminal souks. These are tailored to both language and gender. "If you were looking for a Russian-speaking male voice," Digital Shadows writes, "there was a service waiting for you."
Some of the offerings sound relatively harebrained, not much better than one might get from a boiler-room rookie whose prose style had been heavily influenced by romance novels and reality television. Many of those offerings are deeply commodified, like simple recordings that can be deployed in an indefinitely large number of robocalls. The more sophisticated offerings, however, seem keyed to enabling more convincing business frauds: impersonating a company official in order to inveigle an employee into transferring funds to a criminal's account.
All of this is the supply push side of the criminal market. There's also demand pull, in which hoods ask participants in a forum for specific kinds of voice talent:
"One weird request on a Russian-language forum consisted of a user seeking to conduct a verbal phishing attack on a Telegram account (see Figure 3). The request became oddly specific when the user said they preferred female voices because 'females have the ability to fake emotions better than men' and 'make better social engineers.' Sorry men, you aren’t that talented!"
Apparently diversity isn't always the criminals' strength. Who knew?
"Their request then took a dark turn. They emphasized they needed the female voice actor to pretend that their son or daughter was dying because it would prompt 'the operator' on the phone to help. The rest of the details were vague, but they claimed their scheme could provide a 'quick payout' because the target had poor OPSEC. This tactic of creating a sense of urgency or panic is quite common in email phishing campaigns because it can elicit a quick response. In this case, the attacker adopted a similar strategy, with the only difference being the mode of communication."
To return to the vishing equivalent of a business email compromise scam, voice cloning may be affording criminals a more plausible approach to stealing organizations' funds. And the deepfaked voices, Digital Shadows says, may be more worrisome than the much-discussed risk of deepfake imagery:
"Despite the growing concern of manipulated images, voice cloning deepfake technology to impersonate high-profile figures is a more pressing concern. In July 2019, cybercriminals were observed impersonating the chief executive of a company in the energy sector in an attempt to receive a fraudulent money transfer of approximately USD 243,000. The threat actors used a voice-cloning tool to request the transfer from an employee, claiming that the payment was to be sent to a third-party supplier based in Hungary. The attackers then moved the money to an account in a second country and distributed it across several states from there."
And, of course, deepfake voice tech has obvious applications to disinformation. In one discussion Digital Shadows found on an Anglophone criminal forum, a user was asking for "voice-changing software that they could use to promote misleading content on social media channels." In this case they were looking for a bandwagon effect: the creation of the bogus impression that a lot of people were on board. That particular request was an advertising gimmick, but it's easy to see how it could be adapted to serve political ends. Some of those political purposes are better served when a propaganda line can be attributed to a fake persona, hiding the natural person behind the imposture.
So what advice is there? As always, urgent requests should be treated with appropriate skepticism. And don't hesitate to hang up on an obvious robot. One of us, years and years ago, actually received a phone call (in Lawton, Oklahoma, of all places) that began, "Good afternoon. Although I am a recording, I hope you will have the courtesy not to hang up on me." As if.