At a glance.
- Trolling at scale isn't as difficult as it might appear.
- Spending money for influence, but to what end?
- Generation Z and misinformation.
- Facebook would like to know if you are now, or have ever been, an extremist.
An experiment in trolling.
Defense One has an interesting essay whose authors sought to find out how easy it might be to deploy influence operations that played on the known preferences and beliefs of large numbers of individual targets. "A surprisingly detailed psychological profile of an individual can be developed by simply analyzing a person’s 'likes' on social media posts. With data on just 70 posts that someone has liked, an algorithm can predict personal traits better than a friend, and with 300 likes it can predict better than one’s spouse." The authors suggest a three-step approach to developing a targeted influence campaign:
"First, publicly available “like histories” can be used to identify impressionable users along with their topics of interest. Not only can we sort individuals by their easily accessible preferences, but we can also employ an army of tweet-producing bots to test users’ interactions with posts that were pro-, anti-, and neutral toward our piece of test legislation.
"Second, we can create the content most likely to influence particular users based off the predictions from our model." Twitter itself provides the training model for automating such creation.
"For our campaign to be effective, however, our tweets need to appear in the targeted users’ news feeds." And again, Twitter's algorithms will help here as well.
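The first two steps above rest on a simple idea: a user's like history is a feature vector from which traits and stances can be inferred. A minimal, invented sketch of that inference follows; the page names, labels, and training examples are entirely hypothetical, and the frequency-scoring classifier here is a toy stand-in for the much larger models the essay's authors describe, not their actual method.

```python
# Toy sketch of like-based stance inference. All data is invented;
# real studies use hundreds of likes across large user samples.
from collections import defaultdict

# Training data: (set of liked pages, stance on the test legislation).
# Page names and stance labels are hypothetical.
TRAIN = [
    ({"page_a", "page_b", "page_c"}, "pro"),
    ({"page_a", "page_c"},           "pro"),
    ({"page_d", "page_e"},           "anti"),
    ({"page_b", "page_d", "page_e"}, "anti"),
]

def train(examples):
    """Count how often each liked page co-occurs with each stance."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for likes, label in examples:
        totals[label] += 1
        for page in likes:
            counts[page][label] += 1
    return counts, totals

def predict(likes, counts, totals):
    """Score each stance by summed per-page like rates (add-one smoothed)."""
    scores = {}
    for label, total in totals.items():
        scores[label] = sum(
            (counts[page][label] + 1) / (total + 2) for page in likes
        )
    return max(scores, key=scores.get)

counts, totals = train(TRAIN)
print(predict({"page_a", "page_c"}, counts, totals))  # leans "pro"
print(predict({"page_d", "page_e"}, counts, totals))  # leans "anti"
```

The point of the sketch is only that the signal is cheap to extract: once like histories are public, sorting users by predicted stance requires nothing more exotic than counting.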
Culture as a field of conflict: the use and abuse of generosity.
Foreign Policy reports that Russian money is flowing into American cultural institutions in ways calculated to have a malign effect on US civil society. Drawing on the work of the Anti-Corruption Data Collective, Foreign Policy says that seven Russian oligarchs "connected to interference efforts" have donated large amounts of money over the past twenty years to American not-for-profit cultural institutions. The report assesses the total as falling somewhere between $372 million and $435 million; more than two hundred institutions received funds. Many of the recipients are well-known and have good reputations. They include think tanks like the Brookings Institution and the Council on Foreign Relations, universities like Harvard and the University of Southern California, and cultural organizations like the Museum of Modern Art (MoMA) in Manhattan and the Kennedy Center in Washington, DC.
There are, of course, benign cultural institutions that legitimately share aspects of a nation's experience, life, art, and language with an international audience--Germany's Goethe Institute comes immediately to mind, and there are many others. Foreign money is far from automatically corrupting, especially when it's offered transparently and accepted with due diligence.
But Russian oligarchs, with their close ties to the Kremlin and their record of surrogacy on behalf of the Russian government, are a different matter. It's worth considering what the oligarchs think they might be buying with their cash. Foreign Policy suggests two goals: influencing policy (and in this case creating a climate of opinion less favorable to economic sanctions would be one immediate, self-interested goal) and "reputation laundering" (as in, how bad could Ivan Ilyich really be, since he's that nice man who donated the swimming pool to Heather's prep school, etc.).
It's worth considering how reliable the ROI might be. Buying influence may not be as easy as one thinks, particularly in the US, where one could almost weep at the naivety with which foreign money has been thrown in the general direction of Presidential families, from Libya's interest in Billy Carter to current White House ethical distancing from a Presidential son and sometime lawyer who's now turned to a career in fine art. (Such cases are almost always a trial for the incumbent.) There may appear to be a for-sale sign on American politicians, but they've historically taken a less clearly transactional view of corruption, and when bought, they don't reliably stay in the pockets of the purchasers. Caveat emptor. In this respect the money might be better spent on think tanks, whose members often serve as a kind of shadow sub-cabinet.
Why does Generation Z fall for misinformation?
Obviously because they've been educated by Baby Boomers, who've had their pampered heads up their metaphorical fourth points of contact since the Boomers began gracing this world around May of 1946. And you can take it from us: that's the straight dope.
No, seriously, that's just opinion. Don't take it from us. But you see how it works, right?
MIT Technology Review has an essay by a Stanford undergraduate, currently a research assistant at the Stanford Internet Observatory, who has tried to understand why Internet users roughly between the ages of 8 and 24 believe so much manifest nonsense. Her identification of the manifest nonsense is sound, since much of it made predictions that events quickly falsified. She traces the susceptibility of Generation Z to hogwash to a tendency that generation has to "believe and pass on misinformation if they feel a sense of common identity with the person who shared it in the first place."
Her suggestions for a remedy include, roughly speaking, what would have once been regarded as authoritarian: "Policymakers must regulate social media platforms and pass laws to address online misinformation." She also suggests that "educators can teach students to assess the credibility of sources and their claims," so take that, Boomer.
It's an interesting essay, and it suggests that Francis Bacon's Four Idols might be revisited. A few questions suggest themselves. First, the delineation of "common identity" might be worth thinking through. Where are boundaries among communities to be drawn? Are there affinities among some of those communities? (The author's examples suggest this. It's not, for example, that Generation Xers who believed that frustrated supporters of Donald Trump would unleash a genocidal pogrom against "LGBT individuals and people of color" if the Democrats won the election were themselves LGBT individuals or persons of color, but rather that they were likely to know someone who was. As the author rightly notes, the murderous rampage never happened, and really was never particularly likely to happen in any case.) Second, is such identity-based gullibility characteristic of Generation Z, or simply characteristic of adolescence? Will it persist over time? And, finally, is Generation Z indeed any more gullible in such respects than other generations have been, and continue to be? If you think the Boomers are paragons of enlightened and world-weary skepticism, we could show you a pretty cage-full of counterexamples. A lot of them tend to post on NextDoor.
Facebook is worried you may be susceptible to extremism.
CNN reports that Facebook has begun, on an experimental basis, to query some users about whether someone they know seems to be turning into an "extremist." The social medium is also notifying other users that they may have been exposed to extremist content. A Facebook representative told CNN, "This test is part of our larger work to assess ways to provide resources and support to people on Facebook who may have engaged with or were exposed to extremist content, or may know someone who is at risk. We are partnering with NGOs and academic experts in this space and hope to have more to share in the future."
CNN's story doesn't include the working definition of "extremist" or "extremism" that Facebook is using, but it does reproduce some of the messaging Facebook is deploying. "Are you concerned that someone you know is becoming an extremist?" one alert said, going on to explain, "We care about preventing extremism on Facebook. Others in your situation have received confidential support."
Slippery slope arguments are always suspect, but this test does seem to have a foot on a banana peel that will slide the incautious toward "Are you now, or have you ever been, a member of the Communist Party?" Especially in the veiled suggestion that you might want to name names. The slippery slope, it's worth pointing out, is an informal fallacy, an unsound hypothetical syllogism. Which step here is the false premise?
WIRED has an unrelated essay that calls for an international effort on the part of democratic states to take control of internet governance and fight the malign influence of authoritarian regimes. It's a call for discussion, and not a set of policy prescriptions, but developing an effective response to the authoritarians that won't itself be authoritarian may prove challenging. Destroying the liberal order to preserve it isn't an option, as the authors recognize.