How To Prevent the Next Social Media-Driven Attack On Democracy—and Avoid a Big Tech Censorship Regime

February 3, 2021

Ending Targeted Advertising Will Shut Down Social Media’s Radicalization Machine

Social media networks like Facebook, YouTube, and Twitter played a significant role in the Jan. 6 attack on the Capitol Building. But having private monopolists ban conservative political groups or individual leaders not only introduces a dangerous, unaccountable censorship regime but also fails to fix the fundamental problem: these same monopolists make billions from promoting misinformation, conspiracy theories, and violence.

On Jan. 6, 2021, there were two attacks on American democracy. The first was an attack on the Capitol by a violent mob egged on by former President Donald Trump. The second was the response: the removal of certain groups, individuals, and even entire sites from the internet by a small network of Silicon Valley executives who run the dominant social media platforms and the systems that support them.

While the riots were no doubt more immediately dangerous, the long-term risk of allowing the holders of critical private sector infrastructure to silence political factions is significant. Indeed, German Chancellor Angela Merkel, no friend of Trump, called big tech’s banning of the president “problematic” because of its anti-democratic nature.

This brief will explain how the business models of Facebook and other social networks were integral to creating the conditions that led to the Capitol attack. It will then lay out why the best way to prevent the harms these corporations foist upon democracy – both in America and globally – is to end their domination of online communications networks and alter their business models, not ask them to use their own judgment to decide which political speech is and is not acceptable.

We recommend three policy changes:

  1. Ban targeted advertising by communications platforms through either Federal Trade Commission agency rulemaking or legislation.[1]
  2. Repeal or reform Section 230 of the Communications Decency Act so that traditional legal claims such as defamation, fraud, incitement, and harassment apply to platforms that profit from spreading dangerous and illegal content. One possible reform path would be to remove protections from firms that use algorithms to monetize data.[2]
  3. Implement the recommendations in the House Antitrust Subcommittee’s report on digital markets to force common carriage rules on big tech firms that operate critical private infrastructure so that anyone engaged in legal behavior has access to this infrastructure on equal terms for equal service.[3]

There’s a direct connection between increasing radicalization in society and the platforms’ profit motive. These three policy choices would break that connection, which is critical for preserving American democracy.

1. How did social media networks contribute to the Jan. 6 riot?

When the rioters attacked the Capitol, they were acting on misinformation and conspiracy theories that many of them believed to be true. Steeped in an ecosystem of disinformation, bigotry, and rage, they sincerely believed they were stopping an election from being stolen.[4] CNN’s Brian Stelter called the event an “Extremely Online Riot,” as members of the mob were not only inspired and organized by what they found online, but tweeted and broadcast the attack to their fans and followers.[5]

Nearly all of the services they used to do so, whether Facebook, YouTube, or Twitter, made money by selling ads as the riot occurred. They also made money selling ads against election conspiracies and organizing efforts that pre-dated and influenced the riot.

2. Step back: How exactly do platforms make money?

By selling ads against your attention and data. Platforms such as Facebook and Google’s YouTube video service build specialized user interfaces to keep you engaged and to collect ever more data that can be sold to specific advertisers attempting to reach people just like you: this is called targeted advertising. The longer users remain on the platform – hooked on the sensationalist content the platforms’ algorithms prioritize – the more money the platforms make, because more time means more ads served and more data collected. False or radicalizing content is not an unfortunate byproduct of the business model. It is core to these corporations’ ad-based revenue, because it keeps you hooked and enables ever more precise targeting.
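
To make the incentive concrete, here is a deliberately simplified sketch, in Python, of the loop just described. It is not real platform code: the post fields, the scoring weights, and the revenue figure are all invented for illustration. The structural point is that when revenue scales with attention, a feed ranked purely by predicted engagement will surface the most sensational content by construction.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    sensationalism: float     # hypothetical 0-1 score; stands in for whatever grabs attention
    predicted_minutes: float  # minutes a model expects a user to keep watching or scrolling

def engagement_score(post: Post) -> float:
    """Toy ranking objective: expected attention, amplified by how sensational a post is.
    Note that truth is not a variable anywhere in this function."""
    return post.predicted_minutes * (1.0 + post.sensationalism)

def rank_feed(posts: list[Post]) -> list[Post]:
    # The ranker is facially content-neutral: it only "sees" predicted engagement.
    return sorted(posts, key=engagement_score, reverse=True)

def ad_revenue(posts: list[Post], dollars_per_minute: float = 0.02) -> float:
    # Illustrative: revenue is proportional to total attention captured,
    # because more time on site means more ads served and more data collected.
    return sum(p.predicted_minutes for p in posts) * dollars_per_minute

feed = rank_feed([
    Post("Local bake sale raises funds", sensationalism=0.1, predicted_minutes=0.5),
    Post("SHOCKING election conspiracy REVEALED", sensationalism=0.9, predicted_minutes=4.0),
    Post("City council meeting recap", sensationalism=0.2, predicted_minutes=0.8),
])
print([p.text for p in feed])      # the conspiracy post ranks first
print(f"${ad_revenue(feed):.2f}")  # attention, not accuracy, drives the number that matters
```

Under an objective like this, demoting sensational content is not a neutral tweak; it directly reduces the quantity the business is optimized to maximize.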

A Wall Street Journal article citing an internal Facebook investigation found that 64 percent of people who joined extremist groups on Facebook did so because the site’s own algorithm suggested those groups to them.[6] That’s how these platforms operate: hook people on extremist content, propaganda, and conspiracy theories, enabling the constant collection of more data that can be sold to third parties for profit. They also gamify the experience, deploying Like buttons, retweets, and video view counters to keep people coming back for more.
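
The recommendation dynamic behind that statistic can be sketched the same way. The group names and the crude “similar users” logic below are invented, and real recommender systems are far more sophisticated, but the feedback loop has the same shape: suggest whatever similar users engaged with most, and treat every accepted suggestion as fresh evidence for the next one.

```python
from collections import Counter

# Hypothetical engagement log: group visits by users "similar" to the target user.
# Extreme groups are overrepresented precisely because they generate more engagement.
engagement_log = [
    "Gardening Tips", "Stop the Steal NOW", "Stop the Steal NOW",
    "Local Hiking Club", "Deep State Watch", "Stop the Steal NOW",
    "Deep State Watch", "Deep State Watch",
]

def suggest_groups(log: list[str], n: int = 2) -> list[str]:
    # "Groups you may like" reduces to: surface whatever captured the most engagement.
    return [group for group, _ in Counter(log).most_common(n)]

suggestions = suggest_groups(engagement_log)
print(suggestions)  # ['Stop the Steal NOW', 'Deep State Watch']

# The loop closes: joining a suggested group adds engagement to the log,
# making the same extreme groups even more likely to be suggested next time.
engagement_log.extend(suggestions)
print(suggest_groups(engagement_log))  # still the same two, now further entrenched
```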

This dynamic did more than help produce the Jan. 6 riot. Google, for instance, has provided ad services to 86 percent of sites carrying coronavirus conspiracies.[7] And Facebook, with its addictive user interface designed to maximize engagement, has helped foster deadly mob attacks in India, Sri Lanka, and Myanmar.[8]

3. So deplatforming dangerous actors doesn’t actually fix the underlying problem?

Right, because it doesn’t change the business model these corporations rely on. The world is a better place without Donald Trump’s Twitter feed or Facebook page inciting his followers to violently overturn an election, but keeping him or other arbitrarily chosen malignant actors off these platforms doesn’t change the incentive for Facebook or any other social network to keep pumping misinformation into users’ feeds and profiting off the ads sold against it. There’s a reason that every time Facebook says it is cracking down on a particular topic or rumor, it seems unable to fully clear its platform of that information: Truly eliminating this kind of click-bait would undermine its entire business.[9]

For years, anti-monopolists on the Democratic side of the aisle have been arguing that the power of big tech is a threat to democracy. The House Antitrust Subcommittee released a report last year summarizing a 16-month investigation into these firms, observing that they use their market power to decide who gets to succeed in the marketplace. Together, Apple, Amazon, Facebook, and Google determine what kinds of businesses and behaviors can exist online.

4. So what should we actually do?

The subcommittee recommended breaking up these firms and implementing nondiscrimination rules that mandate service for all comers on equal terms, similar to the common carrier rules used throughout American history, from railroads to cable systems to grain elevators. Prior mergers that should have been blocked can be reversed: for instance, WhatsApp can be broken away from Facebook. As Economic Liberties explained, the goal is to create “regulated competition,” where these platforms compete on the merits of their services, not on who can serve up the most salacious nonsense to addicted users.[10]

5. But wait: wouldn’t nondiscrimination rules require platforms to keep bad users around?

Yes. Seen in the light of the riots, anti-monopoly rules might seem dangerous. For instance, Apple, Google, and Amazon Web Services removed the conservative social network Parler from their facilities on the premise that Parler fosters violence. A nondiscrimination obligation would require them to carry Parler’s business, even if it causes harm.

But let’s back up a step: why are business models that allow profiting off of violent insurrections – whether Parler, Facebook, Twitter, or YouTube – legal in the first place? The answer is a law known as Section 230 of the Communications Decency Act.

6. I’ve heard a lot about Section 230. What is it and what does it do?

Section 230 is the legal framework behind the business models of social media platforms. It immunizes them from responsibility for what their users do with their services.

In 1996, lawmakers sought to encourage “Good Samaritan” behavior on the part of technologists, giving them legal immunity so they could curate their platforms to eliminate anti-social behavior. The real effect, however, has been to encourage illegal activity, such as harassment, defamation, fraud, and incitement, as long as it is done online. These platforms make money from this behavior and are immunized from any costs, under the false premise that they are merely conveying speech.

Two examples make the point. Scammers often create fake Facebook accounts impersonating military personnel and use those accounts to lure lonely women into sending them money; when the soldiers return home, these women are waiting for a romance they think is real.[11] Obviously, the scammers are committing fraud, but Facebook also profits, selling ads and collecting data as the scam happens while expending little to no effort to stop it. Facebook bears no liability for this behavior, because Section 230 immunizes it from legal claims. Similarly, Grindr knowingly enables stalkers to use its platform to harass and, in some cases, commit violence against victims, but bears no liability for doing so, because of Section 230.[12]

In short, platforms profit off of targeted ads sold against dangerous, fraudulent, and violent content, and are protected by Section 230 from facing any liability for the harms they cause.

7. Sounds bad. Can we get rid of it?

Repealing Section 230, or reforming it so that platforms that profit via targeted advertising are not covered, would reduce the incentive for social media to enable illegal behavior.[13] Were it repealed, a whole range of legal claims – incitement, intentional infliction of emotional distress, harassment, defamation, fraud, negligence – would hit the court system, and platforms would have to alter their products to make them less harmful.

There are other paths to taking on targeted advertising, like barring it through privacy legislation, a law creating a real Do Not Track list, or the Federal Trade Commission’s “unfair methods of competition” authority. But at the very least, we can stop letting platforms that enable illegal behavior offload the costs of the harms they inflict.

Ending the shield for illegal activity would in turn open the door to antitrust action and nondiscrimination rules that take away the power of tech oligarchs to choose who gets to be a part of our politics. What behavior is and isn’t illegal is a public choice, not one for tech executives to make. If Facebook, Twitter, Parler, or anyone else is engaged in facilitating violent rebellions, they must be held accountable. But that accountability must happen in a court of law, and not in an executive suite in Silicon Valley.

8. So, democracy will be saved?

Well, there are lots of other things that need to be done to rein in the ability of big corporations to ruin democracy, but the measures outlined here will certainly make it less profitable for big tech barons to radicalize those who would tear democracy down.

Endnotes

[1] Chopra, Rohit, and Lina Khan, “The Case for ‘Unfair Methods of Competition’ Rulemaking,” 87 University of Chicago Law Review 357 (2020), https://ssrn.com/abstract=3558721; Stoller, Matt, Sarah Miller, and Zephyr Teachout, “Addressing Facebook and Google’s Harms Through a Regulated Competition Approach,” American Economic Liberties Project, April 2020.

[2] See “Statement of the American Economic Liberties Project Replying to the Comments of Carrie A. Goldberg,” comment to the Federal Communications Commission in the NTIA Petition for Rulemaking to Clarify Provisions of Section 230 of the Communications Act of 1934, Sept. 17, 2020.

[3] “Investigation of Competition in Digital Markets: Majority Staff Report and Recommendations,” US House of Representatives Committee on the Judiciary, Subcommittee on Antitrust, Commercial and Administrative Law, 2020, https://judiciary.house.gov/uploadedfiles/competition_in_digital_markets.pdf

[4] Carney, Timothy P., “For tens of thousands, Trump was just something to believe in,” Washington Examiner, Jan. 7, 2021, https://www.washingtonexaminer.com/opinion/columnists/for-tens-of-thousands-trump-was-just-something-to-believe-in

[5] Stelter, Brian, “CNN’s Elle Reeve: ‘Donald Trump plus the Internet brings extremism to the masses’,” CNN, Jan. 9, 2021, https://www.cnn.com/2021/01/09/media/elle-reeve-firsthand-account-riot/index.html

[6] Horwitz, Jeff, and Deepa Seetharaman, “Facebook Executives Shut Down Efforts to Make the Site Less Divisive,” The Wall Street Journal, May 26, 2020, https://www.wsj.com/articles/facebook-knows-it-encourages-division-top-executives-nixed-solutions-11590507499

[7] “Why is Ad Tech Funding These Ads on Coronavirus Conspiracy Sites?” Global Disinformation Index, March 24, 2020, https://disinformationindex.org/2020/03/why-is-ad-tech-funding-these-ads-on-coronavirus-conspiracy-sites/

[8] Kamdar, Bansari, “Facebook’s Problematic History in South Asia,” The Diplomat, Aug. 19, 2020, https://thediplomat.com/2020/08/facebooks-problematic-history-in-south-asia/

[9] Garofalo, Pat, “Facebook spread rumors about arsonists setting fires in Oregon. It’s part of their business model,” NBC Think, Sept. 17, 2020, https://www.nbcnews.com/think/opinion/facebook-spread-rumors-about-arsonists-setting-fires-oregon-it-s-ncna1240308

[10] Stoller, Matt, Sarah Miller, and Zephyr Teachout, “Addressing Facebook and Google’s Harms Through a Regulated Competition Approach,” American Economic Liberties Project, April 10, 2020, http://www.economicliberties.us/our-work/addressing-facebook-and-googles-harms-through-a-regulated-competition-approach/

[11] Nicas, Jack, “Facebook Connected Her to a Tattooed Soldier in Iraq. Or So She Thought,” The New York Times, July 28, 2019, https://www.nytimes.com/2019/07/28/technology/facebook-military-scam.html

[12] Goldberg, Carrie, “Herrick v. Grindr: Why Section 230 of the Communications Decency Act Must be Fixed,” Lawfare, Aug. 14, 2019, https://www.lawfareblog.com/herrick-v-grindr-why-section-230-communications-decency-act-must-be-fixed

[13] “Statement of the American Economic Liberties Project Replying to the Comments of Carrie A. Goldberg,” American Economic Liberties Project, In the Matter of Petition for Rulemaking of the National Telecommunications and Information Administration to Clarify the Provisions of Section 230 of the Communications Act of 1934, RM-11862, Sept. 17, 2020.