At a glance.
- Legislating content moderation.
- Tactical spoofing.
- A market for deepfakes.
Misinformation has lately seemed equally impervious to content moderation and rumor control.
Rest of World has a story, discouraging from the point of view of countering misinformation (although arguably encouraging, in a grim sort of way, to any remaining free speech absolutists out there), of how an influencer in Indonesia continues to attract adherents to crackpot theories about COVID-19:
"As the drummer of the Indonesian punk band Superman Is Dead, I Gede Ari Astina, better known as Jerinx, always relished making headlines for rebellious acts. His most recent, and most controversial: becoming one of Indonesia’s leading anti-vaxxers.
“'Those who don’t believe that this covid is just a business scheme may still believe that America has landed on the moon and 9/11 is [initiated] by Muslims,' he wrote to his over 1 million Instagram followers in April 2020, as the country’s case count began to pick up speed."
Jerinx was jailed under Indonesian anti-defamation laws last November and released in June. Since then he's lost his Twitter account and established a new Instagram account, but the drummer's riff of nonsense seems not to have missed a beat. It's particularly discouraging that the two claims he counts on his audience reading as obvious falsehoods are the success of the Apollo Program and the role of radical Muslims in the 9/11 attacks.
Singapore, which has enacted laws of its own in an attempt to control misinformation, has likewise seen indifferent success.
Other attempts to develop workable legal constraints on falsehood (or other categories of objectionable content) will continue to be tried. The Toronto Star reports that Canada's Liberal Government has comprehensive legislation pending (and out for public comment) that would restrict five categories of illegal content: "child pornography, terrorist content, incitements to violence, hate-speech and the non-consensual sharing of intimate images." The regulations would be administered by a "digital safety commissioner of Canada."
Tactical disinformation in the Black Sea.
NATO's insistence on treating the Black Sea as international waters (as does most of the rest of the world) continues to put a burr under the Kremlin's saddle. WIRED has an account of how Russian spoofing of warship AIS (automatic identification system) signals has misrepresented NATO warship positions as intrusions into Russian territorial waters. The provocative voyages never happened at all: the ships simply transited to and from the undisputed Ukrainian port of Odessa, through undisputed international and Ukrainian waters, but their AIS signals showed them in Russian-claimed (and occupied) Sevastopol.
The spoofing has evidently been more widespread than previously appreciated, occurring not only in the Black Sea but also in the Baltic, where Swedish warship positions were spoofed. As WIRED writes:
"According to analysis conducted by conservation technology nonprofit SkyTruth and Global Fishing Watch, over 100 warships from at least 14 European countries, Russia, and the US appear to have had their locations faked, sometimes for days at a time, since August 2020. Some of these tracks show the warships approaching foreign naval bases or intruding into disputed waters, activities that could escalate tension in hot spots like the Black Sea and the Baltic. Only a few of these fake tracks have previously been reported, and all share characteristics that suggest a common perpetrator."
That common perpetrator would be Russia.
AIS spoofing serves disinformation, but Russia has also engaged in GPS spoofing, which is closer to meaconing: it confuses navigation, as opposed to presenting the larger world with a false picture. Both forms of spoofing, of course, represent a hazard to navigation.
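Part of what makes AIS spoofing feasible is that position reports are broadcast in the clear with no authentication: any transmitter can emit a well-formed report placing any ship anywhere. The sketch below is illustrative only (function names are our own), encoding and decoding the handful of fields needed to put a ship on a map, following the publicly documented ITU-R M.1371 layout for a type-1 position report and its 6-bit ASCII "armoring."

```python
# Minimal sketch of an AIS type-1 position report payload (ITU-R M.1371).
# Illustrative only: nothing in the format authenticates the sender, which
# is why a spoofed report is indistinguishable from a genuine one.

def to_sixbit_ascii(bits: str) -> str:
    """Pack a bit string into the AIS 6-bit ASCII 'armored' payload."""
    bits = bits.ljust((len(bits) + 5) // 6 * 6, "0")
    out = []
    for i in range(0, len(bits), 6):
        v = int(bits[i:i + 6], 2)
        out.append(chr(v + 48 if v < 40 else v + 56))
    return "".join(out)

def from_sixbit_ascii(payload: str) -> str:
    """Unpack an armored payload back into a bit string."""
    bits = []
    for c in payload:
        v = ord(c) - 48
        if v > 40:
            v -= 8
        bits.append(format(v, "06b"))
    return "".join(bits)

def encode_position(mmsi: int, lat: float, lon: float) -> str:
    """Build a type-1 payload (partial: only the fields that place a
    ship on a map; speed, course, etc. are zeroed)."""
    lat_raw = int(round(lat * 600000)) & (2**27 - 1)  # 1/10000 arc-minute
    lon_raw = int(round(lon * 600000)) & (2**28 - 1)
    bits = (
        format(1, "06b")          # message type 1
        + format(0, "02b")        # repeat indicator
        + format(mmsi, "030b")    # MMSI (ship identifier, unauthenticated)
        + "0" * 23                # nav status, turn, speed, accuracy
        + format(lon_raw, "028b")
        + format(lat_raw, "027b")
        + "0" * 52                # course, heading, timestamp, radio status
    )
    return to_sixbit_ascii(bits)

def twos_complement(value: int, width: int) -> int:
    return value - (1 << width) if value & (1 << (width - 1)) else value

def decode_position(payload: str):
    """Recover MMSI, latitude, and longitude from a type-1 payload."""
    bits = from_sixbit_ascii(payload)
    mmsi = int(bits[8:38], 2)
    lon = twos_complement(int(bits[61:89], 2), 28) / 600000.0
    lat = twos_complement(int(bits[89:116], 2), 27) / 600000.0
    return mmsi, lat, lon
```

Round-tripping a report "placing" a ship at roughly Sevastopol's coordinates (about 44.6°N, 33.5°E) shows the point: the decoder accepts whatever the encoder asserts, with no way to tell truth from fabrication.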
Not disinformation, but creating technology likely to be turned to that end.
Why are reality shows common on TV? They're cheaper to make than, say, conventional sitcoms or dramas. The costs of paying writers and actors are low, the director hasn't got as much to do, and you'd need a producer and an editor in any case.
There's a similar trend in corporate training. Actors are expensive to hire, but if you could replace them with synthetic but convincing deepfakes, you'd have more budget-friendly training videos. The technology is still developing (WIRED calls it "imperfect") but it's getting good enough, and it's creating a market likely to sustain a trajectory of improvement.