At a glance.
- Diplomatic smack.
- Section 230, takedowns, and industry practices.
- Carelessness and inattention, and their contribution to the spread of misinformation.
- Inauthenticity and disinformation as phishbait.
- NFTs as investment mania.
- Lame memes and national self-regard.
Diplomatic signalling as information operation, and talking summit smack.
In an apparent expression of displeasure with Washington, Russia has recalled its ambassador to the United States for consultations. The Wall Street Journal notes that the move came the day after the US Intelligence Community released its unclassified Assessment accusing Russian President Putin of personal involvement in malign influence operations directed at the 2020 US elections. The Journal also reports a speech Russian President Putin delivered in occupied and annexed Crimea, during which he cast much US diplomacy as classical projection. Responding to US President Biden's characterization of Mr. Putin as “a killer,” the Russian President said of Mr. Biden, “How would I respond to him? I would say: be well, I wish him health.”
According to the Washington Times, Mr. Putin continued his response to Mr. Biden in a television interview. “I want to propose to President Biden to continue our discussion, but on the condition that we do it basically live, as it’s called, without any delays and directly in an open, direct discussion,” the Russian President said, adding, “it seems to me that would be interesting for the people of Russia and for the people of the United States.” This is regarded as an allusion to President Biden's reported preference for scripted, restricted appearances before friendly audiences.
Facebook outlines steps against disinformation prior to Section 230 hearings.
In an op-ed published in the Morning Consult, Facebook described the steps it's recently taken against disinformation. In the last quarter of 2020, for example, the company took down more than 1.3 billion fake accounts. This is part of the company's familiar work against inauthenticity, against users pretending to be who they're not. Facebook also described the action it's taken more directly against disinformation: "We’ve found that one of the best ways to fight this behavior is by disrupting the economic incentives structure behind it. We’ve built teams and systems to detect and enforce against inauthentic behavior tactics behind a lot of clickbait. We also use artificial intelligence to help us detect fraud and enforce our policies against inauthentic spam accounts."
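By way of illustration only, here's a minimal sketch of the kind of heuristic scoring an anti-spam system of the sort Facebook alludes to might apply. Every signal, weight, and threshold below is invented for the example; Facebook's actual models are not public.

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    """Hypothetical per-account features a spam detector might weigh."""
    account_age_days: int
    posts_per_day: float
    followers: int
    following: int
    profile_photo: bool

def inauthenticity_score(a: AccountSignals) -> float:
    """Higher score = more spam-like. Weights are purely illustrative."""
    score = 0.0
    if a.account_age_days < 7:
        score += 0.3  # brand-new accounts are riskier
    if a.posts_per_day > 50:
        score += 0.3  # superhuman posting rates
    if a.following > 10 * max(a.followers, 1):
        score += 0.2  # follow-spam pattern
    if not a.profile_photo:
        score += 0.2
    return min(score, 1.0)

bot = AccountSignals(account_age_days=2, posts_per_day=120,
                     followers=3, following=900, profile_photo=False)
print(inauthenticity_score(bot))  # 1.0 -> flag for review
```

In practice such heuristics would be one input among many to a trained classifier, not a complete detector on their own.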
Misinformation, innocently propagated falsehood, is in some respects a tougher challenge. Facebook is addressing it with teams of fact-checkers. All of this content moderation will be received with the usual animadversions about freedom of speech and its curtailment by censorship. Facebook explains its practice as follows:
"Misinformation can also be posted by regular people, even in good faith. To address this challenge, we’ve built a global network of more than 80 independent fact-checkers, who review content in more than 60 languages. When they rate something as false, we reduce its distribution so fewer people see it and add a warning label with more information for anyone who sees it. We know that when a warning screen is placed on a post, 95 percent of the time people don’t click to view it. We also notify the person who posted it and we reduce the distribution of Pages, Groups and domains that repeatedly share misinformation. For the most serious kinds of misinformation, such as false claims about COVID-19 and vaccines and content that is intended to suppress voting, we will remove the content."
The op-ed was published before Facebook CEO Zuckerberg's appearance before the US House Energy and Commerce Committee today. The Committee is holding hearings on Section 230 and the responsibility platforms have for limiting disinformation and misinformation. Mr. Zuckerberg’s prepared testimony argued that shielding companies from liability for unlawful content should be conditioned on “companies’ ability to meet best practices to combat the spread of this content.” He also asked for more guidance on managing lawful but “harmful” content.
A set of recommendations developed jointly at the Harvard Kennedy School's Mossavar-Rahmani Center for Business and Government and NYU's Stern Center for Business and Human Rights would have the Federal Government address the "societal harms" of problematic communication. "The extent of political disinformation, hate speech, and other harmful content illustrates that the social media industry has not done enough to police itself. Specifically, the leading social media companies have not developed standards and processes for addressing harmful content that recognize the broader social harms caused by their activities. As a result, this industry requires greater governmental oversight." The report sees models in the role the Federal Communications Commission plays in overseeing telecommunications, radio, and television, and in the work the Securities and Exchange Commission does to preserve the equity and transparency of securities markets. It would like to see a Digital Bureau established, possibly within the Federal Trade Commission, that would issue regulations to moderate online content. "Although the creation of a standalone agency would be ideal, political obstacles to such an initiative would be considerable and unlikely to be overcome in the short term. Instead, we recommend enhancing the authority of the Federal Trade Commission to oversee the commercial internet, including social media companies."
Such moderation wouldn't be uncontroversial. Much of that controversy would surround how "harm" was unpacked, and what would be codified as "hate." Contrast notes from US Cyber Command’s annual Legal Conference, where one of the items under discussion was whether US Government interference with foreign information operations was even Constitutional. First Amendment sensibilities may run stronger at Fort Meade than they do in either Cambridge or Greenwich Village.
The good I would do, I do not; the evil I would not do, that I do.
Sure, it's Romans 7:19, but it's also, roughly speaking, the conclusion of a study by an international team of academics, who published (in Nature) the results of a look at why people share false information. It's not that they can't distinguish, relatively easily in a rough-and-ready way, reliable information from manifest hogwash, nor is it that they want to spread falsehood. For the most part they're just careless. They're just not paying attention. Unlike St. Paul, they're not particularly troubled by their slack ways.
Disinformation as bait.
Some inauthenticity operates like fraud, in which an impostor induces someone to take an action that's against their interests. An example of this is seen in Chinese security services' use of lures to draw disfavored communities to sites where their devices can be infected with spyware.
Facebook announced yesterday that it had taken down a Chinese cyberespionage operation directed principally against "Uyghur activists, journalists & dissidents living abroad in Turkey, Kazakhstan, US, Syria, Australia, Canada & other countries." Facebook's tweet announcing the takedown cited earlier work on the threat actor by Volexity, Project Zero, and Trend Micro (which called the group "Evil Eye"). Facebook said that much of the surveillance activity was conducted "off platform," with spyware installed via maliciously crafted links that posed as news articles from outlets covering topics of interest to the Uyghur diaspora. Those links are now blocked on Facebook.
SecurityWeek reports that much of the "off-platform" activity took the form of malicious content delivered through iOS and Android apps. The Washington Post notes that the takedown shows Facebook's intelligence operations are now looking beyond Facebook itself.
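For readers curious what "blocking the links" amounts to in practice, here's a minimal sketch of the sort of outbound-link blocklist check a platform might run before rendering a shared URL. The domains are placeholders; the actual domains Facebook blocked weren't published.

```python
from urllib.parse import urlparse

# Hypothetical blocklist standing in for the unpublished real one.
BLOCKED_DOMAINS = {"uyghur-news.example", "diaspora-daily.example"}

def is_blocked(url: str) -> bool:
    """Return True if the link's host, or any parent domain of it,
    appears on the blocklist, so subdomains are caught too."""
    host = (urlparse(url).hostname or "").lower()
    parts = host.split(".")
    return any(".".join(parts[i:]) in BLOCKED_DOMAINS for i in range(len(parts)))

assert is_blocked("https://cdn.uyghur-news.example/article?id=1")
assert not is_blocked("https://legit-news.example/story")
```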
Investment mania.
Investment mania isn't disinformation, strictly speaking, but consider it a kind of first cousin. We saw the effect of social media during the GameStop short squeeze, and we're seeing another case of the madness of crowds in the current rush to acquire non-fungible tokens. The NFT market is hot right now. Jack Dorsey sold his first tweet for what the Verge calls an "oddly specific $2,915,835.47," proceeds to be donated to charity. That particular NFT rides on the Ethereum blockchain, thus amounting to a kind of convergence of poorly grasped but strongly attractive technologies. FStoppers grumps that NFT trading is just a pyramid scheme where people are already being bubbled out of their cash.
Huggy Bear, you're just adorable.
Kremlin media outlet RT sniffs that the Pentagon isn't very good at information ops, since it took Fort Fumble "more than twenty days" back in October to produce a lame meme in the form of a cartoon showing a stumblebum bear in Russian uniform dumping his load of trick-or-treat candy. "An uninspired cartoon commissioned by the Pentagon that attempted to throw shade at Moscow required weeks to complete, documents have revealed, shining a light on how the US military goes about its creative endeavors," the Moscow service says, accurately enough. RT also points out that, at the time of its writing, the tweeted cartoon bear-wannabe-meme had received just two hundred retweets and two hundred eighty-five likes, which would seem disappointing, well below a TikTok of, say, someone bopping their head to Jane Austen.
Yeah, sure, it's maybe not that good, not like a heroic nude cartoon of Mr. Musk riding a Clifford-sized doge (no, really: see the Voice for an account of how this and other "digital junk" is being hawked as NFTs), but it seems to have gotten under RT's skin. That was the point: Cyber Command thinks, not without reason, that being dismissed as cute and cuddly makes Russian operators crazy. Maybe they're on to something. Question more, Huggy Bear. And, pro tip: if you want to put a burr under Uncle Sam's saddle, don't tell him he's lame, tell him how much the Pentagon spent on the info op. Get your shots in before Section 230 reform.