- As part of the Oversight Board’s decision on whether to uphold Donald Trump’s suspension, it asked Facebook 46 questions.
- Facebook refused to answer several of them, fully or in part — and the unanswered questions get to the heart of what makes the platform toxic.
- Unfortunately, they are also things Facebook will never fix on its own, because fixing them would hurt its bottom line.
- This is an opinion column. The thoughts expressed are those of the author.
If you haven’t heard by now, Facebook’s Oversight Board — an independent “Supreme Court” created by the company last year — decided to uphold former President Donald Trump’s indefinite suspension for violating the platform’s content policies. But it kicked the case back to Facebook’s executives, who must decide within six months whether to ban Trump permanently and create clearer rules around “indefinite” suspensions.
The statement the Board put out described how it came to its decision. Part of the process, the Board said, involved sending Facebook a list of 46 questions — some of which the company refused to answer. This is particularly notable because all of the unanswered questions get at the issues that make Facebook so toxic. They’re things the company refuses to change — despite having the power to — because they also make Facebook rich.
From the Oversight Board:
In this case, the Board asked Facebook 46 questions, and Facebook declined to answer seven entirely, and two partially. The questions that Facebook did not answer included questions about how Facebook’s news feed and other features impacted the visibility of Mr. Trump’s content; whether Facebook has researched, or plans to research, those design decisions in relation to the events of January 6, 2021; and information about violating content from followers of Mr. Trump’s accounts.
The Board also asked questions related to the suspension of other political figures and removal of other content; whether Facebook had been contacted by political officeholders or their staff about the suspension of Mr. Trump’s accounts; and whether account suspension or deletion impacts the ability of advertisers to target the accounts of followers.
Facebook stated that this information was not reasonably required for decision-making in accordance with the intent of the Charter; was not technically feasible to provide; was covered by attorney/client privilege; and/or could not or should not be provided because of legal, privacy, safety, or data protection concerns.
If Facebook’s reasoning for not responding to those questions made you roll your eyes, you’re not alone. Since when does Facebook have “privacy, safety, or data protection concerns”?
The first set of questions has to do with how Facebook’s design made it easier for users to see Trump’s posts that helped incite and drive the January 6 riot — which ultimately got him kicked off the platform. We know the answer to that is “yes” — Facebook’s algorithms are notorious for pushing vile content and antisocial behavior.
We know this thanks to some excellent reporting from Jeff Horwitz and Deepa Seetharaman at The Wall Street Journal. And we know Facebook knows it too, because the Journal found internal Facebook reports showing that “64% of all extremist group joins are due to our recommendation tools” and that Facebook’s “algorithms exploit the human brain’s attraction to divisiveness.”
Facebook executives, especially Mark Zuckerberg, time and time again ignored or watered down recommendations to fix these problems. Executives had to protect the company’s moneymaking, attention-seeking, antisocial algorithms — regardless of the damage they may be doing to society as a whole. The WSJ also found that Facebook was terrified of upsetting conservatives, and that this bias has been baked into the company’s technology and policy.
It’s very likely Facebook is well aware that its algorithm latched onto Trump’s content and pushed it around the platform, but it’s very unlikely the company will tell us that without a subpoena.
The Oversight Board also asked about other accounts that were amplifying Trump’s content. There are two possible reasons Facebook was unwilling to share information about that. For one, you have to wonder if a lot of those accounts are fake. According to documents filed in a lawsuit Facebook has been fighting since 2018, Facebook has been exaggerating the reach of its platform for years, and that is in part because it has included duplicate and fake accounts in its reach numbers.
So it’s quite possible that Trump’s content was carried on the wings of fake accounts, and that’s something Facebook doesn’t want you to know.
The other reason Facebook might not want to answer questions about the accounts that were amplifying Trump’s content ties into the rest of the questions it wouldn’t answer — it’s politics. Facebook has tried so hard not to look biased against conservatives that its platform has become a haven for right-wing misinformation and lies, and that’s how some people at the company (and in Washington) want to keep it.
Mark won’t fix this
Facebook has no incentive to do things that will bring down engagement, reduce the “reach” it sells to advertisers, or upset conservatives — who generate a massive share of the platform’s most engaged-with content. Dealing with these issues would hurt the company’s bottom line.
We are, however, starting to see some movement in Washington on the algorithm issue. Social media companies cannot be sued for user-created content on their platforms due to Section 230 of the Communications Decency Act, which was written back in the 1990s to protect nascent internet companies. Last month, the Senate had a hearing about this issue and there seemed to be bipartisan agreement that the problem here is social media’s business model in general — and if you want to reform that, you need to target the algorithms.
“… This advanced technology is harnessed into algorithms designed to attract our time and attention on social media, and the results can be harmful to our kids’ attention spans, to the quality of our public discourse, to our public health, and even to our democracy itself,” said Sen. Chris Coons (D-DE), chair of the Senate Judiciary’s subcommittee on privacy and tech, which held the hearing.
There are a few bills floating around Congress to address this issue, but it will take time before anything becomes law (if it ever does). And absolutely nothing is being done about Facebook’s fake user problem, or its bias toward conservatives. My advice to you is what it has always been — leave Facebook. It won’t fix anything for the whole wide world, but at least you won’t get bullied by its algorithms.