At a glance.
- Influencers and the inspiration of unrest.
- Deplatforming and content moderation.
- Responsibility for content moderation.
- Shutting down Parler.
- Misinformation is easy.
Influencers, inspiration, and spontaneous organization.
The Wall Street Journal has an account of how last week's inexcusable riot on Capitol Hill was inspired and organized in social media. The proximate cause of the unrest was a set of facially implausible claims that the 2020 election in the US had been stolen, and that justice under the Constitution required that the electoral vote not be formally counted and certified until after all protests and challenges had been resolved to a high standard.
Unlike many, perhaps most, other cases of online incitement, the Journal reports that experts who’ve taken an early, preliminary look at the incident think that the inspiration was a lot more distributed than it’s usually been, with less top-down direction, fewer high-profile leaders, and a lot more of what we’ve come to call “virality.” As the Journal puts it, “the Capitol riot doesn’t appear to have been orchestrated by a central figure or organization.” The agitation has been in progress for weeks, and it proceeded through a large number of channels and across many platforms. One expert quoted by the Journal said, “They didn’t need central planning.” The President's own remarks to the protesters, many of whom were about to turn violent, may be read here.
The rioting was firmly rooted in the domestic political fringe, Trump-dead-ender division (encouraged as the fringe too often is by people who should have known better). But KrebsOnSecurity points out that there’s a disruptive Russian connection, too, via the Russian company DDoS-Guard. DDoS-Guard provided hosting services to Washington State-based VanwaTech, which in turn provided Internet connectivity to QAnon, 8Chan, and (of all people) Hamas. This is a minor causal factor, if indeed it amounts to any causal factor at all, but there's the possibility of unexpected legal exposure: Hamas is under US sanction.
Deplatforming and content moderation.
We noticed that the emails from President Trump we received several times a day (most were appeals for cash to support election challenge efforts) pretty much piped down after the rioting began. According to Vice, this is because they were blocked by Salesforce, which now owns ExactTarget, the email marketing firm the Trump campaign had been using.
Many large Internet companies were quick to deplatform US President Trump and various supporters in response to the President’s encouragement of demonstrations earlier in the week. Axios lists Reddit, Twitch, Shopify, Twitter, Google, YouTube, Facebook, Instagram, Snapchat, TikTok, Apple, Discord, Pinterest, and Stripe.
Responsibility for content moderation.
An op-ed in the New York Times argues that the lesson to be drawn from the deplatforming is that tech companies hold a great deal of power over online discourse, and that power tends to be exercised from the top, on the basis of “gut decisions” by executives, and not in conformity with established “quasi due process” criteria. The American Civil Liberties Union says it understands the desire to ban President Trump from Big Tech’s platforms, “But it should concern everyone when companies like Facebook and Twitter wield the unchecked power to remove people from platforms that have become indispensable for the speech of billions — especially when political realities make those decisions easier.” (The ACLU's worries about freedom of speech may not be unwarranted: we've noticed that media outlets have increasingly enclosed the words "free speech" in scare quotes that suggest those words are being invoked in bad faith. We also note that, while right-wing advocacy as represented by Parler is out, left-wing advocacy as represented by Antifa's several Twitter accounts remains up, active, and encouraging adherents to "mix it up" with the non-like-minded.)
The implications of the controversy and the ban won’t be confined to the US. Computing reports, for example, that British Health Secretary Matt Hancock has said that it seems clear that social platforms are now acting much more like publishers than a public square. He took no position on the deplatforming, nor did he offer any prescriptions for the future, but he said the companies are "choosing who should and shouldn't have a voice on their platform," and that recognizing this should inform any regulations governments might enact.
Reaction from both Germany and France has been more pointed. Bloomberg reports that German Chancellor Merkel characterized US President Trump’s social media bans as “problematic,” arguing that laws, not private-sector decisions, should shape discourse. Similarly, two French ministers said voters and the governments they elect, and not corporate executives and “the digital oligarchy,” should make significant content moderation decisions, with one remarking that he was “shocked” at the President’s deplatforming, and the other describing tech giants as a threat to democracy. European governments have historically been more dirigiste with respect to speech than have American administrations, but in this case at least the Europeans think placing boundaries on public speech is an inherently governmental responsibility.
Lawfare says US President Trump’s “Great Deplatforming” has raised questions about corporate motives and “whether they stem from political bias or commercial self-interest rather than any kind of principle,” suggesting this would be a wonderful opportunity for Facebook to make use of its Oversight Board. (The social media company has endured criticism over the Board’s delayed kickoff.) The Oversight Board’s purpose, as articulated by Zuckerberg, is to prevent Facebook from making “so many important decisions about free expression and safety on our own.” Lawfare maintains “suspending the account of the leader of the free world” probably qualifies as an important decision.
Although President Trump himself cannot refer his case to the Board due to current guidelines governing appeals, Facebook can, and could even expedite deliberations under an “exceptional circumstances” clause. Lawfare reasons in favor of doing so, calling the decision “extremely controversial, polarizing and an exercise of awesome power,” and hinting that not doing so could smack of buttering up its next set of (Democratic) regulators. The current situation is an archetype poised to recur in coming years, at home and abroad, as world leaders incite unrest and posts “no more objectionable than usual" interact with complex social contexts.
Defense One takes a different tack, classifying President Trump as a “superspreader” of “conspiracy theories,” and claiming the “path to making the internet less toxic is placing limits on…key nodes.” On this view, the Great Deplatforming represents Big Tech taking ownership of “battlefields” by moderating content with an eye to downstream and off-platform societal effects, not just posted terms of service. This line of thinking of course returns us to the question of whether Big Tech should oversee a battlefield circumscribed by all of human society. As Defense One says, such duties are not what most had in mind “when they packed their bags for Silicon Valley,” yet they find themselves now “running information warzones,” managing “a conflict space,” and reckoning with the results of algorithms that moved “our content feeds toward extremism.”
None of this is easy. Even if there weren't civil rights and liberties at stake, the sheer difficulty of controlling content shouldn't be underestimated. The old Soviet Union, uninhibited by anything resembling a functioning Bill of Rights, found itself unable to fully control Samizdat.
What happened to Parler?
The Wall Street Journal reports that both Apple and Amazon have taken action against Parler, the social platform whose declared mission is to provide a conservative alternative to what Parler characterizes as the general progressive bias of platforms like Twitter. Parler in turn is suing Amazon in the US District Court for the Western District of Washington, seeking “injunctive relief, including a temporary restraining order and preliminary injunctive relief, and damages.” Parler is claiming anti-competitive bias on Amazon's part. The company notes that Amazon provides equivalent services to both Twitter and Parler, yet only Parler was singled out for silencing on the grounds that it wasn’t filtering content that amounted to incitement to violence. The filing observes that “Friday night one of the top trending tweets on Twitter was ‘Hang Mike Pence.’ But AWS has no plans nor has it made any threats to suspend Twitter’s account.”
Parler says it does have content moderation designed to stop incitement, but Amazon says that, whatever Parler’s review boards are doing, it’s not enough.
The mob attack on the US Capitol last week remains under investigation, as investigators sort out responsibilities and identify rioters. A quasi-vigilante scraping and archiving of Parler data by private researchers has preserved much of that platform’s traffic. This is being widely reported as a hack, but that seems incorrect: apparently the data collected were all publicly posted and available. If it was a hack, it may have been the hack of an indifferently secured platform.
Matt Warner, CTO at automated threat detection and response technology shop Blumira, sent us some commentary on what happened to Parler when the outraged citizens or (take your pick) vigilantes turned their attention to it:
"Parler had several developmental issues, some long-standing and others caused by poor engineering and lack of testing. We've found mistakes and oddities with timestamps and geotags in the metadata. These failures culminated in a number of potential exposures and a full scrape of attachments on Parler. Parler improperly allowed mass collection of archived images, videos and information that were posted onto the service. This was due to an unprotected API call that was sequentially numbered, therefore allowing any attacker to iterate continuously over the endpoint and take all information available - with over 1 million videos alone. By having no security protections on who can iterate these endpoints, nor any rate-limiting protections, the internet was generally able to scrape and capture unlimited amounts of data.
"From a defensive security perspective, this is a failure of one of the Top 10 of OWASP which defines web application security best practices. Specifically this is an Insecure Direct Object Reference (IDOR) attack which enumerates across all data available. (For example, if you have a corporate website and you store your PDFs numbered at http://www.acme.com/pdfs/1[dot]pdf, that would allow an attacker to then guess for 2.pdf, 3.pdf, and so on and so forth until they are detected and stopped or they divulge all information they desire.) In the case of Parler this was URLs that looked like https://par.pw/v1/photo[?]id= and the ID could be sequentially increased to gather information from the API without direct knowledge.
"If Parler were using a Version 4 UUID, which is a universally unique identifier that is generated using random numbers, this mass scrape would have been nearly impossible. Additionally, when Twilio, a third-party service for user authentication suspended their services, Parler users were able to create accounts without having to verify their email which resulted in additional risk.
"Though less common these days, these types of attacks can occur against all organizations that expose themselves to the internet. These failures at the application security level can be prevented with simple and affordable threat detection and response tools. Changes in authentication (e.g., no longer able to access two-factor authentication (2FA), therefore all valid authentications are allowed) and anomalous behavior are easily detectable and an automated SIEM platform like Blumira can provide scheduled reporting to help surface these kinds of security trends for organizations."
C'mon, traders: read the name of the company before you buy the stock.
So a lot of people have been up in (metaphorical) arms over WhatsApp's move toward sharing more information with its corporate parent Facebook's flagship platform. WhatsApp has tried to allay such concerns, but with mixed success. But here's one way misinformation gets its boots on: with the help of influencers, and that help is inadvertent.
In response to the WhatsApp imbroglio, Elon Musk (of Tesla, SpaceX, and the Boring Company) tweeted out "Use Signal," by which he meant, of course, the competing messaging platform of that name. Now, in addition to the companies he runs, Mr. Musk (who gives off a strong Tony Stark vibe) is an influencer. Not as big, maybe, as PewDiePie or Blac Chyna, but an influencer nonetheless. CNBC reports that the tweet led to a "buying frenzy" for shares of an unrelated technology component manufacturer, Signal Advance, which is based in Texas. The Signal to which Mr. Musk alluded isn't even, as Quartz notes, a publicly traded company, but traders were gonna trade. Investors bought Signal Advance in droves, sending the small company's stock up by what Business Insider put at 11,708%. CNBC summed up: "Signal Advance, which reported receiving no revenue in 2015 and 2016, is now worth more than $3 billion."
See the problem? The medium is inherently compressed, low-context, and liable to easy misinterpretation. If people fling their money around like that, think how much more careless they're likely to be when it comes to politics, or faith, or morals. As President Trump might have tweeted back in the day, "Sad!"