Three top executives from big tech are back in the hot seat on Capitol Hill as lawmakers look to find solutions for misinformation, disinformation and how it spreads. But this time, Mark Zuckerberg of Facebook, Sundar Pichai of Alphabet, and Jack Dorsey of Twitter face questions about their companies’ own responsibility in the January 6 riot at the Capitol. William Brangham reports.
So, let’s talk a little bit more about what came up today.
Sarah Miller is the executive director of American Economic Liberties Project. That’s a nonpartisan group that advocates for corporate accountability and antitrust enforcement. And part of that project includes the group Freedom From Facebook and Google.
Sarah Miller, good to have you on the “NewsHour.”
I want to pick up on the argument that we just heard from Congressman Frank Pallone, which is the argument that these companies’ business models, even the very algorithms they use to keep us on their sites, are meant to keep us glued, and that they often will feed us increasingly dubious, dangerous misinformation.
How fair is that accusation?
It is exactly right.
And I think it’s really refreshing to see members of Congress focus on the underlying financial incentives that are driving the misinformation and disinformation and toxic content that are flooding kind of our online communications ecosystem.
So, we’re beginning to focus in on the right problem, the money and the advertising dollars that are driving this toxic content, and creating this polarization and kind of anti-democratic, antisocial content that is having real-world effects on our society and on people’s lives.
One of those real-world effects that some representatives brought up today was January 6. They argued that the plotters planned their insurrection, as some call it, on these platforms, and celebrated on these platforms.
The tech CEOs said, look, when you go to assign responsibility, the plotters themselves bear more responsibility than us, the place where that plot was hatched and discussed.
What do you make of that argument?
Yes, I think policy-makers are seeing through that argument. The truth is that this sort of toxic, engaging content is actually what these platforms are designed to amplify.
So, for example, by our estimate, Facebook may have made as much as $3 billion off of keeping QAnon content on the platform. So, this isn’t an issue of trying, but failing, to capture all of the dangerous content that’s flooding through the platform. It’s actually an issue of these platforms being designed to amplify exactly that type of content.
It’s the most engaging. It’s what keeps people glued to the platform, and it’s what keeps them making money.