Imperfect platforms moderate imperfect users imperfectly.
The CyberWire
By Katie Aulenbacher, the CyberWire staff
Apr 27, 2021

Hearings before the US Congress on March 25th, 2021, took up social media’s role in the decline of public discourse and ran the gamut from mental health to minority harms and ideological censorship, exposing factual disagreements about Big Tech’s business model and moral disagreements about who’s to blame. (As we so often are, we're tempted to quote Pogo Possum: "we have met the enemy and he is us.") With Congress in a regulatory mood, the social media industry received a grilling comparable to the one comic books got in the 1950s and television, periodically, from the 1960s through the 1980s. An account of the hearings is a useful look at how US political opinion stood at the end of the first quarter of 2021.

This article is a CyberWire Pro exclusive made available to the public.

Imperfect platforms moderate imperfect users imperfectly.

Representative Doyle (Democrat, Pennsylvania 18th) opened the hearing with what would become a motif of the event: an idealized origin story of social media as an Edenic venue for innocent intergenerational photo sharing—rather than one reflecting degeneracy by design, as some contend. Doyle lamented the rise of algorithms that lead people into intellectual temptation, a point of contention between the members of Congress and CEOs, who consistently denied their platforms’ intention to cultivate contemptible content for profit.   

The March 25th joint Subcommittees on Communications & Technology and Consumer Protection & Commerce virtual hearing, titled "Disinformation Nation: Social Media's Role in Promoting Extremism and Misinformation," covered the alleged harms of social media with a wary eye on the alleged beneficiaries of these harms. Facebook’s Zuckerberg proposed the most concrete regulatory reforms, while many members of Congress were satisfied to warn that the end of self-rule is near.   

Thanks for the tax dollars, but times have taken a turn.  

Several Representatives praised the platforms’ ingenuity, success, and potential (Representative O’Halleran, Democrat, Arizona 1st, called the executives “three of the most knowledgeable businesspeople in the world”) before laying the degradation of civic relations and public discourse at their feet.  

Tying the rise of child exploitation, human trafficking, cyberbullying, suicide, technology addiction, and extremism to social media, Representative Johnson (Republican, Ohio 6th) commented that the “nation’s political discourse has never been uglier,” and the country hasn’t “been this divided since the Civil War.” Representative Eshoo (Democrat, California 18th) said platforms “chase user engagement at great cost to our society.”

Representative Schrier (Democrat, Washington 8th) complained that “your ecosystem…directs a hostile sliver of society en masse to my” page. Representative Pence (Republican, Indiana 6th) added that his accounts, too, are “littered with hateful, nasty arguments between constituents that stand in complete opposition to the ideas of civil discourse that your platforms claim to uphold,” and this while the platforms “engulf our lives” and participation is no longer optional for functioning members of society. 

Doyle remarked that social media has altered how the whole planet learns and connects, often for the worse. He explained how platforms abuse peoples’ natural trust: users think they’re interacting with neighbors when they’re fighting Russians and bots, and the companies “failed to protect [them] from the worst consequences of [their] creations.”

Airing a laundry list of harms encompassing the summer and Capitol riots, discrimination against religious groups, childhoods lost to “the comparison game,” and violations of “inherent freedoms,” Representative Bilirakis (Republican, Florida 12th) concluded that “Big Tech has lost its way,” and the nation no longer trusts the platforms to be good stewards.  

Do social media radicalize users?

Members of Congress spotlighted various examples of social media-bred extremist hotbeds, from anti-vaxxers to QAnon, presenting as evidence studies that purport to show algorithms leading users away from centrism and down dark rabbit holes. Representative Kelly (Democrat, Illinois 2nd) observed that “algorithms can actively funnel users from the mainstream to the fringe, subjecting users to more extreme content.” Representative Pallone (Democrat, New Jersey 6th) repeated a former Google designer’s alleged comments that “If I’m YouTube, and I want you to watch more, I’m always going to steer you towards crazy town.”

Representative Dingell (Democrat, Michigan 12th) highlighted an ex-Facebook artificial intelligence researcher’s findings that “in study after study…models that maximize engagement, increase polarization.” Pallone pointed to another internal study that linked Facebook’s recommendation tools to 64% of the participation in certain German extremist groups.

Zuckerberg responded that Facebook doesn’t intend these patterns, which are bad for business long-term, and works hard to weed out extremism. Some forms of radicalization, he said, are promoted organically through mechanisms not under the platform’s direct control, like news sources and messaging. Furthermore, he argued, the evidence doesn’t support platform scapegoating, since US polarization was on the rise before the dawn of social networks, and has dropped in other nations that rely on social media.

Are social media deliberately designed to be addictive?

Leave aside for the moment that Pinterest isn't heroin, that TikTok isn't meth. The worry that social media can in some sense become addictive is a persistent one, and the conversation soon turned to social media’s addictive tendencies. Johnson remarked, “It is now public knowledge that former Facebook executives have admitted that they used the tobacco industry’s playbook for addictive products.” Representative Trahan (Democrat, Massachusetts 3rd) described “manipulative design features intended to keep [kids] hooked,” pulling confessions from the executives that autoplay is the default setting on YouTube Kids, and endless scroll might be the default on Instagram for kids. Eshoo asked the CEOs, “So, are you willing to redesign your products to eliminate your focus on addicting users to your platforms at all costs?”

While acknowledging that Facebook “want[s] to serve everyone,” Zuckerberg clarified that his company doesn’t set goals around time spent on the platform, but seeks to maximize the quality of social interactions, which will naturally lead to more engagement. The platform has reduced the spread of viral videos by 50 million hours per day, for example, and ceased recommending any political or civic group, two moves that run contrary to the Representatives’ hypothesis.  

Dingell elicited an admission from Zuckerberg that the company’s chief risk is its “ability to add and retain users and maintain levels of user engagement,” concluding that it’s hard to square his various statements. 

Pallone confronted Pichai with the statistic that “YouTube’s recommendation algorithm is responsible for more than seventy percent of the time users spend” on the platform. Pichai responded that engagement is not the company’s only goal, and that content responsibility is its top priority.

Harms to special, arguably particularly vulnerable, populations. 

Several Representatives called out specific and disproportionate harms to groups like Spanish language speakers, minority communities, and veterans. (All three executives conceded that their products harm minorities.) Representative Soto (Democrat, Florida 9th) worried that misinformation ran particularly rampant in Florida’s Spanish-speaking community, and Representative Rush (Democrat, Illinois 1st) was outraged that Dorsey broke a 2018 promise to deliver an independent audit of Twitter’s civil rights impacts. Representative Rice (Democrat, New York 4th) elevated the issue of bad actors targeting military service members, and Representative Clarke (Democrat, New York 9th) asked the CEOs about policies that could perpetuate discriminatory ad targeting. Representative Butterfield (Democrat, North Carolina 1st) circled back to what he described as the platforms’ hypocritical show of support for racial justice efforts while fostering racial violence. 

The executives assured Congress that they take each of these concerns very seriously, and invest heavily in addressing them, in collaboration with advocacy organizations. Facebook, for example, employs over a thousand “integrity system” engineers and roughly 35,000 content review personnel.

Protecting our children. 

Representative Rodgers (Republican, Washington 5th) said the companies’ exploitation of children was the nail in Big Tech’s coffin in her eyes, and the eyes of “millions of moms.” She shared that her household fights the “Big Tech battles” for her kids’ health and safety each day, calling social media her “biggest fear as a parent.” Following a rash of youth suicides in her community, she heard from local doctors, educators, and parents that social media is playing a role in young kids’ “deep sense of brokenness” and “despair.” The research, she claimed, is solidifying: the more time children spend on the platforms, the worse their mental health outcomes. 

Zuckerberg denied that the science is conclusive on the issue of screen time and mental health, saying some studies show positive or neutral outcomes. (He wasn’t aware of a 2019 JAMA Pediatrics study, raised by Representative Castor, Democrat, Florida 14th, linking teen depression rates to social media use.) While rejecting the claim that Facebook “harms children,” he acknowledged limiting his own kids’ use of the platform. He also emphasized that children under thirteen aren’t allowed on Facebook and Instagram, but multiple Representatives countered that this position doesn’t pass the “smell test.” 

Rodgers made her position explicit: “I do not want you defining what is true for [my kids]…I do not want their self-worth defined by [you]…I do not want their emotions and vulnerabilities taken advantage of so you can make more money and have more power.” 

Trahan added that “leading experts all acknowledge that social media sites pose risks to young people,” asking what the executives do when their kids don’t want to put down their devices, or choose social platforms over family. The CEOs responded that they’ve built good parental controls, but Trahan pressed them to build in “child-centric design” by default. 

Castor noted another study that found the majority of sampled apps used by young children illegally transmit identifying information to third parties. Bilirakis worried about Google’s and Instagram’s pivot to the under-thirteen market, saying “you’ve obviously identified a business case for targeting this age bracket”; his question about how the companies plan to monetize kids went unanswered. Johnson compared the rollout to “handing our children a lit cigarette,” while Trahan warned that the “Committee is ready to legislate to protect our children from your ambition.”

Political bias in social media. (Upper-middle-class, Silicon Valley, lifestyle progressive?) 

A number of Representatives remarked on the widespread perception that Big Tech harbors a left-wing bias. Representative Walberg (Republican, Michigan 7th) raised a Pew Research Center finding that seventy-two percent of people find it “likely that social media platforms actively censor political views that Big Tech companies find objectionable,” with a four-to-one margin of respondents saying the platforms favor liberals. Walberg said Big Tech consistently disclaims any bias, however, blaming any apparent examples on “glitches.” Zuckerberg and Dorsey acknowledged that their platforms have not studied whether their policies unfairly target conservatives.

Representatives proceeded to catalogue some glitches for the executives. Representative Joyce (Republican, Pennsylvania 13th) wondered why Facebook shut down three public accounts in his state with no warning or opportunity for appeal. Walberg listed takedowns of pro-life and Second Amendment groups, observing that even Bernie Sanders is uneasy about President Trump’s deplatforming. (Dorsey noted that Twitter is reviewing its world leaders policy and accepting comment, but Pence retorted that he’s not sure what there is to review in cases like Iran’s, where many lives have been lost in connection to the regime’s policies.) Representative Carter (Republican, Georgia 1st) asked Zuckerberg and Dorsey why they weren’t taking the “attempted theft of the certified election in Iowa” as seriously as other election claims, and the CEOs said they’d look into it.

Representative Scalise (Republican, Louisiana 1st) recalled Twitter’s two-week deplatforming of the New York Post over “a very credibly sourced” article about Hunter Biden, while a Washington Post article with election-related misinformation about President Trump remains in circulation. Dorsey said the latter article doesn’t violate Twitter policy (which includes a civic integrity focus), but the New York Post decision was “a total mistake.” Representative Crenshaw (Republican, Texas 2nd) contrasted Twitter’s censorship of a Project Veritas interview with its allowance of an “apples to apples” CNN interview.  

On the subject of media bias, Representative Duncan (Republican, South Carolina 3rd) called out the irony of Democratic Representatives “repeating disinformation” about the Atlanta shooter’s (non-racial) motives during a hearing on media misinformation. He also asked Dorsey to look into why it wasn’t against Twitter’s policies for users to refer to the “Syrian…Biden-supporting Muslim” accused in the Boulder shooting as a “white Christian terrorist.” 

Rodgers faulted social media for failing “to promote the battle of ideas.” Carter worried that Dorsey’s Birdwatch experiment, which allows the community to self-police, will recapitulate well-established patterns of mob pile-ons targeting factual but disfavored views. Dorsey replied that experimentation is good and necessary, and Carter countered that that may be true when you’re not the guinea pig. Citing disparate treatment of accounts “too numerous to be explained away,” Duncan lamented that it’s worse in the eyes of Twitter to be a conservative than “a pedophile pornographer, a woke racist, or a state sponsor of terror.”

Representative Latta (Republican, Ohio 5th) said we have a “Little Brother problem” with social media, where “private companies do for the Government things it cannot do for itself.” Politicians, journalists, and scientists are censored, he said, children are brainwashed, and those are just the visible offenses. “Good faith moderation” amounts to silencing opposing viewpoints, he continued, with only a sham appeals process. Crenshaw also worried that liberal members of Government strive to define “misinformation” as “political speech they disagree with,” intending to weaponize Big Tech to restrict First Amendment rights they themselves can’t touch. He fears the US could go the way of Canada and the EU, where citizens are investigated and jailed for speech. 

Do platforms profit from social harms?

Representative after Representative accused the CEOs of benefiting financially from the above miseries. Kelly commented, “The business model for your platforms is quite simple: keep users engaged. The more time they spend on social media, the more data harvested, and targeted ads sold.” Eshoo noted that “the most engaging posts are often those that induce fear, anxiety, anger.” 

Kelly asked each executive point blank whether they profit off damaging content. Dorsey and Zuckerberg answered in the negative, and Pichai said that wouldn’t be Google’s intention. Kelly asked them to provide “in writing” an explanation of how they avoid earning revenue from advertisements distributed alongside harmful posts. Pallone hammered home the point that platforms aren’t passive “bystanders” or non-profits: when they spread destructive content, “they make money.”

International considerations. 

Representative Schakowsky (Democrat, Illinois 9th) wondered about the CEOs’ views on expanding Section 230 internationally, as with the US-Mexico-Canada trade agreement. Dorsey dodged, but Pichai and Zuckerberg affirmed the importance of something resembling 230 protections to global business dealings. 

Representative Dunn (Republican, Florida 2nd) pressed Pichai on Google’s relationship with Beijing, and the significance of Washington’s strategic advantage in emerging technologies. Pichai denied any artificial intelligence collaborations with China, but Dunn wasn’t convinced, and sought assurances about how US data was shared in-country. He detailed the Chinese Communist Party’s misuse of AI to suppress democracy, spread disinformation, facilitate genocide, and harvest organs, encouraging Google to “end this dangerous artificial intelligence research relationship with China.”

Regulatory questions. 

Calling the platforms “hotbeds of misinformation and disinformation, despite all the promises,” Schakowsky proclaimed that “self-regulation has come to the end of its road.” She said the Capitol riots weren’t the first or worst platform failure; the 2018 Rohingya genocide in Myanmar was organized on Facebook. Schakowsky’s comments were representative, with many vowing to hold Big Tech accountable where the market has not. The details of this accountability were largely left for a later date, though several members promoted forthcoming bills.

Members of Congress variously pitched bans on surveillance advertising, ads near misinformation, protected demographic targeting, and dark patterns that dupe people into handing over information. The establishment of a Federal Trade Commission-like body to oversee social media was floated, and Zuckerberg pitched regular transparency reports and conditioning Section 230 immunity on the effectiveness of large platforms’ moderation systems. Dorsey said there’s been too much focus on outcomes, and regulators should look at the “primitives of AI.”

Dorsey also stressed that the Government shouldn’t dictate prescriptive rules or compel every business to behave identically, or the US could end up with fewer choices and limited mechanisms to question leaders, “a reality in many countries.” Twitter’s open source Bluesky project, he said, addresses issues of transparency, access, security, and innovation. 

Representative Bucshon (Republican, Indiana 8th) questioned the platforms about their market share. Pichai said Google faces vigorous competition “by category,” for example in product searches—the company’s main source of revenue—as well as software and phones. Zuckerberg pointed to the existence of Twitter, YouTube, TikTok, and Snapchat, and Dorsey said Twitter’s protocol approach would empower competition. Bucshon and Zuckerberg also raised the potential for federal privacy legislation to price start-ups out of the game. 

Where does the social-media buck stop?

The hearing left participants to grapple with the question of who is at fault for what Castor called the “toxic stew” of social media, and whose job it should be to remedy it. Walberg recalled a Founding Father’s view that “our Constitution was meant for a moral and religious people, and is wholly inadequate for any other.” He encouraged parents, parishes, and schools not to neglect their authorities and obligations, explaining that we face a choice between “conscience and the constable,” self-control or legal control.

Zuckerberg maintained that responsibility for bad behavior rests with bad actors, while seeming to acknowledge that technology can have a “gradient,” as Shannon Vallor put it in “New Social Media and the Virtues,” that encourages virtue or vice. Noting that misinformation is nothing novel, he admitted that the Internet has produced “unique challenges.” 

As for who should address these challenges, Zuckerberg said “it’s not possible to catch every piece of harmful content without infringing on people’s freedoms,” and people probably don’t want to live in a world where they “can only say things that private companies judge to be true.” Dorsey added that neither Big Tech nor Big Government should regulate public discourse. The existing situation, a tragedy of the commons that Pence said resembles a “town hall without a moderator,” was also ruled untenable.

The executives stressed that their platforms are developed and run by humans, and will never be perfect. Left unspoken was the sentiment that their platforms are also populated by imperfect humans. Pichai gestured at the scale of the problem, noting that over five hundred hours of media are uploaded to YouTube each minute. While the number of moderation mistakes might be large, Zuckerberg said, the percentage should be small. Dorsey took a conciliatory tone, admitting that Twitter will “make mistakes in prioritization and in execution.” He observed that some say Twitter does too much and some say it does too little, and “both points of view are reasonable and worth exploring.”

A reaction from industry (and specifically from the legal profession).

The CyberWire received commentary from Jenny Lee, partner at Arent Fox LLP and former Consumer Financial Protection Bureau enforcement attorney, and we'll give her the last words:

“[M]embers of Congress, from both sides, demonstrated in today’s hearing that the Congress is galvanized and ready to proceed with serious legislation.

"The true test will be what happens in the days that follow today’s hearing. It will be up to the industry, comprising both large and small Internet platforms, and lawmakers as well as agencies like the FTC, FCC, and CFPB, to put in the elbow grease to convert policy objectives into real legal proposals. 

"There are many pitfalls ahead, which merely reflect the enormity of the problems. In my view, these pitfalls include [a] misunderstanding by all stakeholders regarding how litigation works. Many members emphasized that some categories of lawsuits are so important that, while big tech can keep the Section 230 exemption from liability for some matters, they should NOT retain the exemption if the underlying lawsuits are about:  consumer protection violations. (Shakowsky for example said this.) The Senate Safe Tech Act has a similar model, but says not if the underlying lawsuit is for civil rights violations or injunctive relief. All sides seem to agree that one way to draw the line in the sand is to base it on what is legal versus illegal. 

"But the big problem with this approach is:  it misconstrues what Section 230 liability exemption is. Defendants whom we assist in court will use motions to dismiss under rule 12(b)(6) to defend the case, dispose of it before trial, by invoking exemptions in statutes. It is circular and flawed reasoning for lawmakers to base the carveout on what is deemed a consumer protection violation. You cannot know if injunctive relief was warranted, or if a consumer protection violation was ruled to have happened, until AFTER THE CASE IS DONE. But the point of a liability exemption is that you need a shield to keep the case from getting going, and to stop the case before costly discovery and trial begins. 

"Otherwise, smaller companies will wind up being stifled by the enormous costs of defending litigation – so even if the injunction never turns out to be meritorious, companies need to spend money just to reach a ruling that they are not guilty… 

"Another interesting problem that the hearing shone a spotlight on today: A taking is when the government seizes private property for public use. Today’s hearing reflects that we aren’t sure what we want governments to do versus what we want private corporations to do. Before, we relied on public services for things like legal protection, law enforcement investigation, and consumer protection. But now, Republicans and Democrats alike are imposing on the shoulders of private companies an amorphous obligation to nearly act as substitutes for government functions…

"What must businesses be looking ahead to prepare to do? The natural consequence of this is – to what degree, given the large societal questions posed by tech and Internet communications today – are we more comfortable with allowing government to do… takings of private business, than we were before? Certainly the Internet is not private property belonging to any one company. But the platforms belong to the companies that made them… 

"In this context, business leaders have an opportunity to plan ahead and develop internal compliance systems that allay the concerns of lawmakers but [are] also consistent with their own business model. Tech company leaders are also poised to develop clever solutions to multiple issues, and because confrontations with regulators and lawmakers are only likely to continue in the coming months, leaders in the industry would be well-advised to plan ahead for them.”