At a glance.
- Al Qaeda and the Taliban, in the wake of victory.
- Technology's advance and the prospect of more widespread deepfakes.
- More challenges of content moderation.
- Conspiracy theories, political censorship and fact-checking, and the decline of expertise.
Al Qaeda's inspirations and aspirations, in the wake of the Taliban's victory.
Over the weekend, SITE Intelligence Group Director Rita Katz followed al Qaeda sympathizers writing in the online publication Wolves of Manhattan. They call for more attacks like those of 9/11, and they are emboldened by the US withdrawal from Afghanistan, which they see as a validation of al Qaeda's original strategy.
“As soon as [the] U.S. announced withdrawal from Afghanistan, al-Qaeda began transforming its media structure, emulating ISIS – creating dozens of media groups, each with a different mission, all serving the overarching goal of strengthening al-Qaeda,” Katz tweeted. How effective such online influence and inspiration will prove remains, of course, to be seen. The Taliban are generally regarded as allies of al Qaeda; ISIS is thought to be a rival, not an ally. The Taliban are expected to present as moderate a face online as is consistent with their program. Neither al Qaeda nor ISIS is likely to be so nuanced.
Technological advance and growing unease about the potential of deepfakes.
Some technological advance depends upon a kind of bottom-up craft knowledge that develops potentials not envisioned by those who first introduced the technology in question. One such is DeepFaceLive, which the Daily Dot describes as a way for users to alter their appearance in real time during Twitch, TikTok, and, presumably, Zoom sessions. It's now well on its way to commodification, and is available on GitHub. This seems to trouble people more than the introduction of virtual backgrounds into Zoom sessions did, which let participants present themselves as joining from the surface of the moon, say, or home plate at Ebbets Field, or the cockpit of a B-26, and it's instructive to ask why that might be so. Do we simply become inured to bogosity once it's embedded in familiar conventions?
Some forms of bogosity are worse than others. No one really cares if you make yourself look like an ear of corn, the way the emu's human partner does in a Liberty Mutual commercial. But putting someone's face into a pornographic video seems more invasive, more damaging, more insulting, and far less amenable to becoming normalized as an artistic convention. That's what a new service recently advertised. MIT Technology Review has an account of the service (unnamed for reasons of discretion, and to avoid driving traffic to the site), which bills itself as a way of exploring fantasies. In either a fit of conscience or a fear of litigation, the proprietors of the unnamed site appear to have taken their product down.
More on the difficulties of content moderation.
The Wall Street Journal reports that Facebook changed its news feed algorithm in 2018. The motive for doing so seems to reflect a common corporate mixture of public spirit and narrow commercial self-interest. Facebook was interested in fostering community among its users, in helping them connect with the like-minded and share their sense of the common good. It had also discerned signs that users were interacting less with its platform, which of course is not good for business. So the company tweaked the algorithms that served content to users on the basis of their perceived interests. "A proprietary algorithm controls what appears in each user’s News Feed. It takes into account who users are friends with, what kind of groups they have joined, what pages they have liked, which advertisers have paid to target them and what types of stories are popular or driving conversation," the Journal explains.
The goal of the change was to encourage more comments and more original content, and to decrease the extent to which users became passive consumers of professionally produced video (like television viewers in the 1960s, when Newton Minow denounced broadcast t.v. as "a vast wasteland," which of course t.v. was). Content was ranked on a scale of "meaningfulness," roughly by how much interaction it produced, and the news feeds were designed to privilege meaningful content.
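For a sense of how such a ranking works in the abstract, consider a toy sketch (the signal names and weights below are entirely hypothetical; Facebook's actual system is proprietary and far more elaborate) in which each post's "meaningfulness" is a weighted sum of the interactions it draws, and the feed is simply sorted by that score:

```python
# Hypothetical sketch of a "meaningfulness"-style feed ranking.
# Signal names and weights are illustrative only, not Facebook's.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    likes: int = 0
    comments: int = 0
    reshares: int = 0
    from_friend: bool = False        # posted by a friend of the user
    from_joined_group: bool = False  # posted in a group the user joined


# Hypothetical weights: effortful interactions (comments, reshares)
# count for much more than passive reactions (likes).
WEIGHTS = {
    "likes": 1.0,
    "comments": 15.0,
    "reshares": 30.0,
    "from_friend": 5.0,
    "from_joined_group": 3.0,
}


def meaningfulness_score(post: Post) -> float:
    """Combine interaction signals into a single ranking score."""
    score = (
        WEIGHTS["likes"] * post.likes
        + WEIGHTS["comments"] * post.comments
        + WEIGHTS["reshares"] * post.reshares
    )
    if post.from_friend:
        score += WEIGHTS["from_friend"]
    if post.from_joined_group:
        score += WEIGHTS["from_joined_group"]
    return score


def rank_feed(posts: list[Post]) -> list[Post]:
    """Order candidate posts so the highest-scoring appear first."""
    return sorted(posts, key=meaningfulness_score, reverse=True)


if __name__ == "__main__":
    candidates = [
        Post("calm-news", likes=200, comments=3),
        Post("heated-argument", likes=40, comments=120, reshares=25,
             from_joined_group=True),
    ]
    for p in rank_feed(candidates):
        print(p.post_id, meaningfulness_score(p))
```

Even in this toy version, the dynamic the Journal describes is visible: a post that provokes argument, with many comments and reshares, outranks a calmer one that merely collects likes, whatever its quality.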
The effect of the change, however, was less benign than Facebook had hoped. The algorithm served up polarizing, exaggerated, often demonstrably false content, and, in a vicious, self-reinforcing circle, that seems to have been what the users wanted. So interaction became more intense, communities smaller and more closed, and the quality of the experience, by most informal measures, declined, getting angrier and in general nastier. It's as if newspapers decided to cede their editorial judgment of local news entirely to letters to the editor.
It's a difficult problem, since it involves in part the ancient problem of free human beings treating one another in their usual, fallen way. Can better behavior be incentivized? Sure. But it's difficult to automate such incentives, and the curiously private quality of what in fact are extremely public performances in social media suggests why developing such incentives is likely to remain a very difficult if not intractable problem.
There are other issues surrounding content moderation at Facebook. The platform has rules about what's permissible, but, another Wall Street Journal piece reports, those rules are applied unevenly. Whitelisted V.I.P. accounts are, in general, permitted to romp freely through otherwise prohibited territory. "The program [of content moderation], known as 'cross check' or 'XCheck,'" the Journal writes, "was initially intended as a quality-control measure for actions taken against high-profile accounts, including celebrities, politicians and journalists. Today, it shields millions of VIP users from the company’s normal enforcement process, the documents show. Some users are 'whitelisted'—rendered immune from enforcement actions—while others are allowed to post rule-violating material pending Facebook employee reviews that often never come."
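How a whitelist short-circuits enforcement is easy to sketch. The following is a hypothetical illustration only (the account names, the policy check, and the review queue are invented, and this is not Facebook's implementation): violating posts from ordinary accounts are removed, while the same posts from cross-checked accounts stay up, queued for a human review that may never arrive.

```python
# Hypothetical sketch of a "cross check"-style enforcement gate.
# Account IDs, the policy check, and the queue are invented for illustration.

WHITELISTED_ACCOUNTS = {"vip_celebrity", "vip_politician"}  # hypothetical IDs
pending_human_review: list[str] = []  # reviews that "often never come"


def violates_rules(post_text: str) -> bool:
    """Stand-in for a real policy classifier."""
    banned_phrases = ("example banned phrase",)
    return any(phrase in post_text.lower() for phrase in banned_phrases)


def enforce(account_id: str, post_id: str, post_text: str) -> str:
    """Apply normal enforcement, unless the account is cross-checked."""
    if not violates_rules(post_text):
        return "published"
    if account_id in WHITELISTED_ACCOUNTS:
        # VIP path: the post stays up, pending a review that may never happen.
        pending_human_review.append(post_id)
        return "published (queued for later review)"
    # Ordinary accounts get the normal, immediate enforcement action.
    return "removed"
```

The asymmetry is the point: the same violating post is removed for an ordinary account but left standing for a whitelisted one.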
Conspiracy theories, political censorship and fact-checking, and the decline of expertise.
Patently bogus conspiracy theories are worrisome, as are debunking campaigns that push political positions under the guise (often sincerely assumed) of fact-checking. These problems aren't confined to any one culture, linguistic community, or country, but Foreign Affairs has a discussion of what it takes to be the central problem of disinformation and misinformation as they play out in the US: the decline of trust in public experts. Since most of the things we know we know on someone else's authority, when genuine expertise vanishes, becomes distrusted, or is displaced by institutional forms of nonsense, some other source of opinion will take its place.