Ending Our Click-Bait Culture: Why Progressives Must Break the Power of Facebook and Google

June 10, 2020 Tech


The harms Facebook and Google cause are directly related to how they make money – and their monopoly power immunizes them from competitive or consumer pressure – so policy change is the only path to achieving meaningful, sustainable reform.

Recently, America has been gripped by widespread protests over police brutality, with a vital debate over policing and racial and economic justice coming to the fore of the national conversation, all in the context of a pandemic. But a very different discussion was happening on the two dominant platforms that host most online speech, Google and Facebook.

A conspiracy video alleging that George Floyd’s death was faked was posted to Google’s video-sharing service YouTube, and then shared on Facebook, where it reached 1.3 million people, according to a Facebook-owned analytics tool. Other conspiracies included allegations that George Soros provided bricks to protesters and that the entire incident was organized by the CIA. These are just some of the highest-profile examples of the violent rhetoric and conspiracy theorizing that proliferated on social media following Floyd’s killing.

In the aftermath, Google’s CEO Sundar Pichai said he stood with protesters and donated money to organizations supporting racial justice, while Mark Zuckerberg sought to justify to his employees and civil rights leaders why Facebook refused to label Donald Trump’s social media communications as incitements to violence. Neither CEO, however, noted that when users posted false click-bait content designed to spread violent and divisive rhetoric, their corporations made money.

In other words, underpinning the current conversation around Google and Facebook is the ideological concession that the public must accept these dominant, unaccountable platforms, which are fueled by tens of billions of dollars in advertising revenue and business models that generate dangerous and socially harmful content.

These controversies arose as Facebook and Google are facing growing scrutiny from policymakers. Last month, the Wall Street Journal reported that the Department of Justice, along with a group of state attorneys general, is preparing to file an antitrust suit against Google. Almost every state attorney general in the country has joined New York’s Letitia James in an investigation of Facebook. The House Judiciary Committee is delving into the power of Google, Apple, Facebook, and Amazon.

That these investigations are all occurring at the same time is no coincidence. The 2016 election crystallized for many policymakers that the big tech companies – and particularly Facebook and Google – are dangerous to democracy. During that election, they played an outsized role in disseminating misinformation and propaganda, promoting conspiracy theorists, and facilitating foreign interference. Their monopolization of digital advertising markets is also at the center of the collapse of good sources of credible information, in particular local press.

Not only did these platforms facilitate these harms, but they made money doing so. They push our society toward a click-bait-driven culture because they make money selling ads against click-bait.

Preventing the same problems from affecting the 2020 election, and future elections too, should be of paramount importance. Lawmakers and other public officials can act now to mitigate the harms Facebook and Google bring to the public square, public discourse, and ultimately the very act of voting itself. The conversation must move beyond requiring that platforms engage in better or more fact-checking, or accepting symbolic gestures meant to placate stakeholders frustrated by these businesses’ limited responses. We must restructure how they make money.

Facebook and Google’s many interrelated harms trace back to their business model, and in particular, two primary causes: (1) their unregulated dominance over key communications networks, and (2) their use of those networks to engage in surveillance and user manipulation to monopolize digital advertising revenue. Addressing these harms will require policymakers to correct mistakes of the past and take two basic but essential steps.

First, they must break up Facebook and Google, so they are no longer too big to regulate. Second, policymakers must regulate the resulting market practices to protect broad, universal public access to the 21st century public square, promoting in turn the critical values underpinning our democracy: free speech and a diversity of speakers sharing ideas free of fear or discrimination. We must change how these companies generate revenue, so that they profit by serving democracy, not undermining it.

When considering how to address monopolistic corporations like Google and Facebook, we are often presented with a false choice. Should we break them up, or regulate them? To achieve a structural reorientation of the business model of Google and Facebook, the answer is: Both.

Call the approach “regulated competition.” Policymakers should enact structural separations of the current monopolies through antitrust suits, regulations, and/or statutes. Then they should set the rules of the marketplace for social networking and digital advertising to foster innovation and competition rather than monopolization, and the slimmer Google, Facebook, and any other company in the space can participate in the market under those rules. These rules should move advertising away from a model where profit comes from intrusive tracking of users and click-bait articles, and toward one where advertising is sold based on a publisher’s ability to build an audience by producing trusted content.

This memo will briefly explain how Facebook and Google have come to dominate modern communications networks, what that means for American democracy, and how to fix it.

1) What are Facebook and Google?

Though the two are popularly portrayed as simply big tech companies, they are more accurately described as conglomerates that own a series of advertising-powered communications networks. And they are vital to 21st century communication and commerce, just like railroads or phone lines were in previous centuries. Using social networking to communicate means, in large part, using a Facebook network, through its main website and app, or its ownership of Instagram and WhatsApp. Using search, online video, or mapping means using a Google network, such as YouTube or Google Maps.

Facebook and Google use their dominant position as gatekeepers to the internet to surveil users and businesses, amass unrivalled stores of data, and rent out targeting services to third parties, who can then aim content – from ads for shoes to racist propaganda – at users with a precision no other entity can match. That is why they have siphoned up about 60 percent of all digital ad revenue.

2) Why should I complain about free products?

Two reasons. First, they’re not actually free. Instead of paying with your dollars, you pay for Facebook and Google with your attention.

These platforms create specialized user interfaces to keep you engaged. The longer users remain on the platform – hooked on sensationalist content, which the platforms’ algorithms prioritize – the more money Facebook and Google make. The false content, surveillance, addiction, and so forth are not unfortunate byproducts of the business model; they are core characteristics of it, essential to these corporations’ ad-based revenue models.
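To make that incentive concrete, here is a minimal, hypothetical sketch of an engagement-maximizing feed ranker. The class, function names, and scoring weights are illustrative assumptions, not Facebook’s or Google’s actual code; the point is only that when ranking optimizes for predicted clicks and time on site, sensational content naturally rises.

```python
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    predicted_click_rate: float     # how likely a user is to click (hypothetical signal)
    predicted_watch_seconds: float  # how long the post holds attention (hypothetical signal)
    outrage_score: float            # sensationalism proxy; inflammatory posts score high

def engagement_score(post: Post) -> float:
    """Hypothetical objective: more clicks and more time on site mean more ad impressions.
    Note there is no term here for accuracy, civility, or social harm."""
    return (0.6 * post.predicted_click_rate
            + 0.3 * (post.predicted_watch_seconds / 60.0)
            + 0.1 * post.outrage_score)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Sort purely by predicted engagement, so the most sensational posts float to the top.
    return sorted(posts, key=engagement_score, reverse=True)
```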

A recent Wall Street Journal article found that 64 percent of people who joined extremist groups on Facebook did so because the site’s own algorithm suggested those groups. That’s how these platforms operate: Hook people on extremist content, propaganda, and conspiracy theories, enabling the constant collection of more data that can be sold to third parties for profit. And they do it while pretending to provide something free.

Second, these monopolies undermine institutions you care about, notably the publishers who create the art, entertainment, local news, and other content on which we all rely. These corporations direct traffic wherever they wish, which insulates them from basic market-based accountability mechanisms, like competition from rivals for users and advertisers. The net result is that they capture an ever-larger share of ad revenue, denying it to rivals who must compete with Facebook and Google for ad money while remaining dependent on the distribution channels those same companies control.

3) Why should I care about this? What’s the real harm?

There are many reasons, which we’ll briefly summarize here. This is by no means an exhaustive list.

  • Facebook and Google supercharge the spread of misinformation, which is especially troublesome during a pandemic or an election; that misinformation can also harm physical and mental health, promoting addiction and targeting users with ads for sham medications.
  • They invade user and business privacy via surveillance, manipulation, and data breaches.
  • They reduce innovation by acquiring or undermining start-ups that might compete with their products, creating what venture capitalists call a “kill zone” around sectors they control.
  • They impose a tax on small businesses by giving preferential ad placement to those that pay the highest rates.
  • And they’ve destroyed local independent journalism by dominating ad markets, siphoning up about 60 percent of all digital ad revenue. That especially harms local elections, since fewer local journalists provide coverage, and it enables political corruption, since fewer watchdogs mean more misdeeds occur away from the spotlight.

There are more harms, of course. A product based on constant surveillance and user manipulation that never pays any price for either is bound to cause a host of problems for society.

4) That sounds bad! Why don’t lawmakers or regulators do something about it?

Historically, lawmakers have protected vital communications networks such as telephones or television airwaves from being commandeered by monopolists who exact a toll from everyone who wants to use their systems. But there was a break in the late 1970s, when the so-called Chicago School successfully reframed antitrust to focus narrowly on the prices consumers pay.

Both Republican and Democratic administrations adopted this framework. They sought to allow corporate concentration within and across markets, often under the assumption that market dominance signaled nothing more than efficiency.

As part of that change in mindset, regulators largely did not consider that, as companies accumulated data on their users, they would be able to structure bargaining power among different agents in the marketplace, including publishers, platforms, and users, in order to water down or eliminate competition. Instead, the problem of data-driven market power was inappropriately shoe-horned into the framework of privacy law, where it remains today.

Antitrust enforcers also weakened merger law, viewing mergers in terms of their presumed impact on consumer prices instead of their impact on the competitive process. This pivot set the stage for corporations like Facebook and Google, which offered tools at no monetary cost, to go on merger sprees unencumbered by meaningful challenges.

From 2004 to 2014, Google spent at least $23 billion buying 145 companies, including the companies behind Maps, Analytics, YouTube, and Android, and, critically, DoubleClick in 2007. Google’s DoubleClick acquisition, which the Federal Trade Commission approved, gave Google control over the plumbing used to deliver ads from advertisers to publishers in the display ad market. Google then combined DoubleClick data with its search and Gmail data to give ad buyers unrivalled information with which to reach potential customers.

Facebook, too, acquired competitors without regulatory intervention. Most notably, in 2012, the FTC unanimously approved Facebook’s $1 billion purchase of Instagram, which Facebook itself viewed as a competitive threat. A year later, Facebook acquired a company called Onavo, giving it access to granular data on how people used rival apps so it could monitor potential competitive threats. Then, in 2014, Facebook paid $19 billion for the secure messaging service WhatsApp.

5) What is this Section 230 everyone keeps talking about?

President Donald Trump recently signed an executive order ostensibly aimed at regulating social media companies. It has to do with something called Section 230 of the Communications Decency Act, enacted as part of the Telecommunications Act of 1996.

Prior to the rise of digital environments, publishers of content such as newspapers were subject to libel claims. They made money selling advertising, but also took responsibility for the content they published. Utilities like phone networks didn’t have liability for what customers said to one another, but they also didn’t profit via advertising.

Digital environments created a legal question: Were utility-like platforms, such as websites and blogs, legally responsible for content posted by third-party users? Section 230 gave a clear answer: no. Section 230 was a government benefit to tech platforms, allowing “interactive computer services” to avoid being held responsible for what their users say or do on the platform, even if they profit from user content via advertising, and even if the platforms engage in editing or filtering of the content. Section 512 of the Digital Millennium Copyright Act offers a somewhat parallel shield for platforms against liability for copyright violations.

Essentially, this means that tech platforms can escape liability and/or make money selling ads when users post the wrong date for Election Day, spread defamatory content or revenge porn, concoct ungrounded conspiracy theories about political candidates, and post fake news cooked up by kids in Lithuania.

6) Great, so I should delete my Facebook account?

If you want, but consumer action alone will not solve the problem. First, Facebook can still track you even after your account is gone, unless you take a series of additional steps. But more importantly, it’s a vital communication network, where, for instance, school events, neighborhood meetups, and volunteer opportunities are organized. Unless you can get everyone in your social network to switch at once, leaving Facebook just won’t matter.

7) Then what are the solutions?

Reducing Google and Facebook’s dominance means changing the rules and laws that enable their business model, and bringing anti-monopoly enforcement actions to reduce their scale and scope. One way to start is through structural separations: for instance, splitting out Google’s general search from mapping, Android, and YouTube.

The advantage of function-based structural separation is that individual divisions in Google would no longer have an incentive to privilege other divisions. General search, for instance, would no longer automatically funnel addresses or local search terms to Google Maps, but could find other partners.

Facebook and Google subsidiaries could also be separated along existing technical business lines to enhance competition among similar services. Facebook, Instagram, and WhatsApp, for example, could be separated to heighten competition over social network quality, such as improved privacy settings, reduced surveillance, the quality of their mobile applications, or the quantity of ads.

Finally, some business lines could be separated between production and distribution. For example, general search can be divided into the search engine web page Google.com, the underlying web crawl (the unseen function that indexes internet sites for searching), and the ad feed. This division would enable other firms to license the underlying web crawl, creating a new and open competitive market of search engines.
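As a purely illustrative sketch of what that separation could look like, imagine the crawl/index layer exposed as a service that any search front end can license. The interfaces and method names below are hypothetical assumptions, not a description of Google’s actual systems; the sketch only shows how the front end, the index, and the ad feed could be owned and operated by different firms.

```python
from typing import Protocol

class CrawlIndex(Protocol):
    """Hypothetical licensed web-crawl service: indexes the web and returns candidate URLs."""
    def lookup(self, query: str) -> list[str]: ...

class AdFeed(Protocol):
    """Hypothetical ad marketplace, operated separately from the index and the front end."""
    def ads_for(self, query: str) -> list[str]: ...

class SearchFrontEnd:
    """An independent search engine that licenses the crawl and buys ads at arm's length."""
    def __init__(self, index: CrawlIndex, ads: AdFeed):
        self.index = index
        self.ads = ads

    def results_page(self, query: str) -> dict:
        return {
            "organic": self.index.lookup(query),    # ranked by the front end's own logic
            "sponsored": self.ads.ads_for(query),   # clearly separated ad slots
        }
```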

Such changes would result in new, independent companies – Instagram, WhatsApp, Facebook, Messenger, Maps, Google Search technology, Google Search Engine, YouTube, Android/Play, Google Advertising, Analytics, Drive, Chrome, Gmail, and infrastructure services – that can innovate around a variety of business models.

But that can’t be the end. To turn these platforms into safe, neutral networks for communication, policymakers must focus on regulating advertising practices, and in particular on the dangers inherent in targeted advertising.

Ultimately, that means communications networks like Facebook and Google should lose their liability protections under Section 230 if they profit from advertising. Stripping Section 230 benefits from those who sell targeted advertising – wherein users’ specific data is used to single them out for specific advertisements – could serve as an intermediate step. Such a removal of benefits would dramatically reduce incentives to collect and store user information.

Such a modification would likely force Facebook and Google to change their business model. They would no longer profit from intrusive tracking of users, because they simply wouldn’t be able to sell advertising targeted to any specific person. Instead, they would shift to a different advertising model known as “contextual advertising,” which matches ads to the content being viewed rather than to the individual viewing it, and is premised on trust rather than click-bait. Amending Section 230 and potentially Section 512 would also help restore a level playing field for publishers, who are legally responsible for the content they publish.
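A stylized sketch of that difference, using hypothetical function and field names: targeted advertising selects an ad from a dossier built by tracking the individual user, while contextual advertising looks only at the page being viewed. This is an illustration of the two models, not any company’s actual ad-selection code.

```python
def pick_targeted_ad(user_profile: dict, ads: list[dict]) -> dict:
    """Targeted model (hypothetical): requires a stored profile of the individual user."""
    interests = set(user_profile["tracked_interests"])  # built via surveillance across sites
    return max(ads, key=lambda ad: len(interests & set(ad["keywords"])))

def pick_contextual_ad(page_text: str, ads: list[dict]) -> dict:
    """Contextual model (hypothetical): looks only at the content currently on the page."""
    words = set(page_text.lower().split())
    return max(ads, key=lambda ad: len(words & set(ad["keywords"])))

# Example: a recipe page shows cookware ads regardless of who is reading it.
ads = [
    {"name": "cast-iron skillet", "keywords": {"recipe", "skillet", "cooking"}},
    {"name": "crypto exchange", "keywords": {"bitcoin", "trading"}},
]
print(pick_contextual_ad("A weeknight skillet recipe for quick cooking", ads)["name"])
```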

8) Is this just about better privacy settings or banning political ads?

No. As long as Facebook and Google profit from a business model fueled by surveillance and data collection, strong incentives will exist to violate user privacy and evade regulations. This approach also risks falling short due to lax enforcement of privacy laws.

Banning political ads, or banning microtargeting solely for political ads, is also not a safe long-term approach. Deferring to Facebook or Google to determine what constitutes political content is inherently dangerous and subject to self-serving bias and manipulation.

9) Isn’t this unrealistic? Aren’t these companies too powerful?

No! Policymakers and enforcers around the globe are pursuing investigations and new policy approaches focused on the sources of Facebook and Google’s power and abuse, because those companies are clearly a problem. On Friday, it was reported that state attorneys general are considering filing an antitrust suit against Google because of its dominance in advertising. This would be the first such suit against a dominant technology company since the famous case against Microsoft in 1998.

These companies were once darlings of the political scene, feted by presidents and beloved by the press. But there is now a critical window of political opportunity to spur policymakers to address their harms by building consensus around regulated competition.