At a glance.
- Facebook cuts off Myanmar's military.
- Myanmar's junta and abuse of lawful intercept tools.
- Equating online journalism and espionage.
- Vaccine misinformation and disinformation.
- Difficulties of content moderation.
Facebook cuts off the Tatmadaw.
On February 24th Facebook suspended Myanmar's military (the "Tatmadaw") from both Facebook and Instagram. "We’re also prohibiting Tatmadaw-linked commercial entities from advertising on the platform. We are using the UN Fact-Finding Mission on Myanmar’s 2019 report, on the economic interests of the Tatmadaw, as the basis to guide these efforts, along with the UN Guiding Principles on Business and Human Rights. These bans will remain in effect indefinitely." Facebook cited the increased risk (and reality) of violence as the reason for the ban, which the social network explained in terms of four "guiding factors:"
- "The Tatmadaw’s history of exceptionally severe human rights abuses and the clear risk of future military-initiated violence in Myanmar, where the military is operating unchecked and with wide-ranging powers."
- "The Tatmadaw’s history of on-platform content and behavior violations that led to us repeatedly enforcing our policies to protect our community."
- "Ongoing violations by the military and military-linked accounts and Pages since the February 1 coup, including efforts to reconstitute networks of Coordinated Inauthentic Behavior that we previously removed, and content that violates our violence and incitement and coordinating harm policies, which we removed."
- "The coup greatly increases the danger posed by the behaviors above, and the likelihood that online threats could lead to offline harm."
The ban is a discriminating one, in the good sense: Facebook exempted government agencies providing essential public services.
WIRED describes what it's been like to live under the junta's progressively more ambitious shutdowns of the Internet. Social media have both benign and malign uses, but it's striking how burdensome the people WIRED spoke to have found their inability to connect and share their news online.
Myanmar, surveillance, and Internet control.
Myanmar's ruling junta has extended its control over online activity, both managing content and, more worrisome to dissidents and disfavored people, expanding online surveillance. It had to buy the technology it uses for those purposes somewhere, and the New York Times this week reviewed cyber proliferation to Myanmar's junta. The report illustrates the perennial difficulty of restricting the spread of dual-use technologies: not only tech that has entirely legitimate civilian uses, but also technology that has lawful military and law enforcement uses yet should be kept away from governments likely to turn it to illicit repression.
Singled out for particular mention are field units produced by the Swedish firm MSAB that can download the contents of mobile devices and recover deleted items, and MacQuisition forensic software that extracts data from Apple devices. MacQuisition is made by BlackBag Technologies, a US company that was acquired last year by Israel’s Cellebrite.
Both companies say the tech in question appears to represent legacy systems, and that they had suspended sales to Myanmar before this year’s coup. Some of the tools may have been provided by various middlemen. The report in the Times might be considered a useful case study of the sort of problem the Atlantic Council addressed in its report on initial access brokers and cyber proliferation earlier this week.
An Atlantic Council report discusses one aspect of cyber proliferation that can be seen operating in Myanmar today, and that's the growth of what the Council calls "access-as-a-service brokers." These vendors offer “Vulnerability Research and Exploitation, Malware Payload Development, Technical Command and Control, Operational Management, and Training and Support.” The report recommends international action, specifically by the US and its allies, to:
- “Understand and partner” with like-minded governments, elevating the issue and enacting appropriate controls.
- “Shape,” by developing lists of troublesome vendors, standardizing risk assessment, incentivizing corporate ethics moves, and controlling sales and assistance to states that deal with banned vendors.
- “Limit,” by widening the scope of vulnerability disclosure, restricting post-employment activities for former government cyber operators, taking legal action against access-as-a-service business, and encouraging “technical limits on malware payload jurisdiction.”
The Atlantic Council’s proposals don’t amount to a call for a ban on corporate development, as contractors, of tools useful for cyber offensive operations. Rather the Council argues for an approach that would bring such companies’ activities under the sort of regulation now exercised over traditional, conventional, kinetic weapons. Existing approaches to cyber nonproliferation, the study’s authors argue, lack the granularity they would need to be effective, and the report’s recommendations are intended to outline how such granularity might be developed.
Moscow looks at social media and doesn't like what it sees.
The Wall Street Journal reports that Russia's communications regulator is pressing Twitter to suppress content the government finds subversive, mostly tweets by dissidents and opposition figures. Failure to delete banned content constitutes a violation of Russian law. In Twitter's case the consequences are likely to be heavy fines as opposed to a midnight knock on the door, if only because the San Francisco PD are unlikely to exercise Mr. Putin's warrants at Twitter headquarters.
An essay in Foreign Policy argues that Russia's government has effectively equated online journalism with espionage.
Vaccine mis- and disinformation.
One might think that military organizations would have a degree of natural resistance to acting on popular delusions. After all, you can limit the effects of misinformation on the troops by simply ordering them to do what they need to do, right? Actually, not so fast. Some US military organizations are having difficulty getting their personnel to take the COVID-19 vaccine. The Marine Corps base at Camp Lejeune, North Carolina, is trying a rumor-control exercise to overcome its Marines' largely groundless misgivings about the vaccine, the Carolina Public Press reports, and the Army is trying the same at nearby Fort Bragg, North Carolina. The challenge isn't confined to those installations, either: the Guardian puts the fraction of US military personnel refusing vaccination at about a third of the total force.
As the Quint notes, Twitter is applying its "strike" system (as in, three of them and you're out) to tweets of COVID-19 vaccine misinformation. It will flag such tweets to warn users that the content they're uploading “may be misleading.”
Where do people get such ideas? Some high-profile influencers are at work spreading misinformation. He's unlikely to be primarily or even significantly responsible for military skepticism about COVID-19 vaccines, but Minister Louis Farrakhan is one such misinfluencer, and his performance is instructive. The Daily Caller reports that Minister Farrakhan is actively denouncing the vaccine as a "vial of death," and surrounding it with a predictable array of conspiracy theories.
Context is difficult for content moderation at scale.
WIRED describes the curious case of a YouTube channel devoted to chess that found itself flagged for hate speech. The AI deployed to screen for objectionable content missed that the channel was about the game, and so interpreted an innocent discussion of the King's Indian Defense as problematic, discussing as it did "White" and "Black" "attacking" and "defending."
The confusion seems a more elaborate version of the familiar Scunthorpe Problem, in which simple scans for combinations of letters found the name of a British town to contain an impermissible obscenity. AI's gotten better than that, but it still finds context difficult. How AI will cope with features of natural language that even human speakers struggle with, like intensionality, modality, and the use-mention distinction, remains unclear. But the dream of easy automation of content moderation seems likely to continue to recede into the indefinite future, right beside practical fusion power, or colonies on the moons of Saturn.
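The substring failure mode behind the Scunthorpe Problem can be sketched in a few lines of Python. The banned-term list and the word-boundary fix here are purely illustrative, not any platform's actual filter:

```python
import re

# Illustrative banned-term list: a naive filter flags any text whose
# letters merely contain the term, so innocent words like "classic"
# or "assessment" trip it -- the Scunthorpe Problem in miniature.
BANNED = ["ass"]

def naive_flag(text: str) -> bool:
    # Substring scan: no notion of word boundaries at all.
    lowered = text.lower()
    return any(term in lowered for term in BANNED)

def word_boundary_flag(text: str) -> bool:
    # Match only whole words, which eliminates substring false positives.
    lowered = text.lower()
    return any(re.search(rf"\b{re.escape(term)}\b", lowered) for term in BANNED)

print(naive_flag("a classic assessment"))          # flagged: false positive
print(word_boundary_flag("a classic assessment"))  # passes
```

The word-boundary version fixes Scunthorpe-style errors, but notice that it would do nothing for the chess channel: there the words really were "White," "Black," and "attacking," and only an understanding of context, not spelling, could have cleared them.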
Hybrid attempts to facilitate content moderation.
Thus it seems that for the near future, at least, advances in content moderation are likely to involve hybrid approaches that facilitate the work of human censors standing watch. The fact-checking firm Logically, for one, is offering a dashboard designed to augment the capability of human watchstanders. The company's platform offers:
- "A dashboard displaying all potentially problematic online activity"
- "The ability to map and assess relevant emerging narratives, themes and associations, enabling early detection of potential issues before they become widespread"
- "Insights into demographics and/or communities that are being targeted and reached by coordinated narratives and campaigns"
- "A suite of countermeasures to tackle problematic content, including priority flags / takedown notices to platforms and deep dive investigative reports into high priority issues identified by Logically's systems"
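The general shape of such a hybrid pipeline can be sketched schematically: a model scores content, obvious violations are removed automatically, the ambiguous middle band is queued for human watchstanders, and the rest is published. The thresholds and scoring here are hypothetical, not a description of Logically's actual system:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    score: float  # model-estimated probability the post is problematic

# Hypothetical thresholds: auto-remove near-certain violations, route the
# ambiguous middle band to human reviewers, publish everything else.
AUTO_REMOVE = 0.95
HUMAN_REVIEW = 0.60

def triage(posts):
    removed, review_queue, published = [], [], []
    for post in posts:
        if post.score >= AUTO_REMOVE:
            removed.append(post)
        elif post.score >= HUMAN_REVIEW:
            review_queue.append(post)
        else:
            published.append(post)
    return removed, review_queue, published

batch = [Post("clear violation", 0.99),
         Post("ambiguous chess talk", 0.72),
         Post("benign post", 0.05)]
removed, queue, published = triage(batch)
```

The point of the middle band is economic as much as technical: human review is expensive, so the automation's job is to shrink the queue to the cases where context actually matters.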
Avast also blogged some thoughts about recognizing disinformation. While firmly rooted in current events, the company's blogger offers advice that could have come straight out of Francis Bacon's Novum Organum. Beware the idols of the tribe, of the cave, of the marketplace, and of the theater.