At a glance.
- The view from Europe: setting standards for content moderation is a governmental responsibility.
- The challenges of developing principles for content moderation.
Content moderation as seen from Europe.
Bloomberg reports that German Chancellor Merkel characterized US President Trump’s social media bans as “problematic,” arguing that laws, not private whims, should determine acceptable speech. Similarly, two French ministers said voters and the government, not corporate executives and “the digital oligarchy,” should make weighty content moderation decisions, with one remarking that he was “shocked” at the President’s deplatforming, and the other describing tech giants as a threat to democracy. As we’ve seen, the EU is considering major legislation aimed at reining in Big Tech.
Principled content moderation or ad hoc improvisation?
Lawfare says US President Trump’s “Great Deplatforming” has raised questions about corporate motives and “whether they stem from political bias or commercial self-interest rather than any kind of principle,” suggesting this would be a wonderful opportunity for Facebook to make use of its Oversight Board. (The social media company has endured criticism over the Board’s delayed kickoff.) The Oversight Board’s purpose, as articulated by Zuckerberg, is to prevent Facebook from making “so many important decisions about free expression and safety on our own.” Lawfare maintains “suspending the account of the leader of the free world” probably qualifies as an important decision.
Although President Trump himself cannot refer his case to the Board under the current guidelines governing appeals, Facebook can, and could even expedite deliberations under an “exceptional circumstances” clause. Lawfare argues in favor of a referral, calling the decision “extremely controversial, polarizing and an exercise of awesome power,” and hinting that declining to refer could smack of buttering up its next set of (Democratic) regulators. The current situation is an archetype poised to recur in coming years, at home and abroad, as world leaders incite unrest and posts “no more objectionable than” usual interact with complex social contexts.
Picking up on this thread, Defense One takes a different tack, positing President Trump as a “superspreader” of “conspiracy theories,” and claiming the “path to making the internet less toxic is placing limits on…key nodes.” On this view, the Great Deplatforming represents Big Tech taking ownership of “battlefields” by moderating content with an eye to downstream and off-platform societal effects, not just posted terms of service. This line of thinking, of course, returns us to the question of whether Big Tech should oversee a battlefield that encompasses all of human society. As Defense One says, such duties are not what most had in mind “when they packed their bags for Silicon Valley,” yet they find themselves now “running information warzones,” managing “a conflict space,” and reckoning with the results of algorithms that moved “our content feeds toward extremism.”