It's time to fact-check Facebook

Net Results: How much longer do we let monolithic speech arbiters like Facebook and Twitter operate with little limitation?


This week a fact-checked post appeared on my Facebook timeline: a partially obscured image of a supposed Time magazine cover illustration of Trump exiting through a door. Under the title “Time” were the words “… to go.”

In the centre of the greyed-out image were the words: “False information checked by independent fact-checkers.” Beneath was a list of related articles from three publications explaining that the image was not an actual Time magazine cover.

Facebook offered a link to explain why I was seeing all this. “The primary claims in the information are factually inaccurate,” it explained, as if I still wasn’t listening. Well, thank you, Facebook. Thank goodness I wasn’t deceived by some mildly clever illustrator’s image of Trump.

Shame about your approach to the man himself, though, who regularly emits primary, secondary and tertiary claims that are “factually inaccurate”. According to the Washington Post, Trump made 16,241 false or misleading claims in his first three years in office – and that tally only runs to January. Many of them were made on Twitter and Facebook.

Yet Facebook and its leader, Mark Zuckerberg, continue to insist that the public needs to be factually informed about, say, a satirical illustration, but not about the easily verifiable lies that can devastate lives and society, even cause sickness and death (as with people who took unprescribed forms of hydroxychloroquine, or the immediate uptick in visits to some US emergency rooms by people who had ingested bleach).

That’s before we even start to consider the indirect effect of such lies – the dog whistling to those seeking what they will hear as implicit support to discriminate, abuse, inflict violence or death.

Twitter, and not before time, decided last week to take some action against – and how bizarre it is even to write this sentence – tweets from the president of the United States that promulgated clear lies or violated the platform’s rules against “glorifying violence”.

Not that Twitter should hoist a halo any time soon. Any other person tweeting out repeated falsehoods and threats would have been removed from the platform. Yet at this point something is better than nothing, and we can hope for broader ethical conversations inside the company.

Executive order 

Meanwhile, Trump retaliated swiftly with a dubious executive order seeking to strip the legal protections platforms enjoy in the US as supposedly “neutral” carriers of others’ speech.

Yet private companies have no obligation to carry everything someone posts. And they don’t. Both Twitter and Facebook have all sorts of rules on allowable content, regularly remove items and bar those who post violating material. Facebook also uses third-party fact-checkers.

Platforms are already functional gatekeepers of news, determining what we are most likely to see with proprietary algorithms and their hidden valuations. Independent journalism and transparency suffer as we see what the algorithms decide we will see (with wholly inadequate financial recompense to the content creators) – and not necessarily what others see either.

So, no, platforms are not “neutral”, but a new shape-shifting hybrid that we as a society, and they as companies, have not adequately grappled with in order to better consider what protections and external controls are warranted.

Against Twitter’s modest gestures – to signal false information on relevant Trump tweets, or to hide them for violating guidelines barring irresponsible exploitations of a bully pulpit – Facebook and Zuckerberg look increasingly immoral and thoughtless, enabling and defending race-to-the-bottom inaction.

Zuckerberg can’t even rise to the occasion of a serious public crisis by doing more than repeating, almost word for word, the phrasing on the Facebook website about the company not being an “arbiter of truth”, and believing this “strongly”.

“I just believe strongly that Facebook shouldn’t be the arbiter of truth of everything that people say online,” he told Fox News last week. And Sheryl Sandberg said almost exactly the same in 2017.

Facebook claims we need to hear politicians’ falsehoods unscreened to judge for ourselves. Yet it is extremely difficult on the Facebook site to find the company’s stance on allowing politicians to say whatever they want.

I tried for 10 minutes, using various obvious search keywords on and off the site, and couldn’t locate its policy. The false news and fact-checking page says nothing of any such policy. The link for further information – a 2018 blogpost from a product executive – says zilch.

That leaves people free to believe that dangerous lies from world leaders must be true. And because Facebook’s and Twitter’s algorithms keep users in an information bubble of others with similar views, any lies are reaffirmed by the supposed “community discussion” Facebook tells us should expose them.

This week Facebook is coming under increasing public pressure – including from its own protesting, even resigning, employees – to do more than hide behind its tiresome, incoherent “free speech” arguments.

But we all face a larger question: for how much longer do we allow such monolithic speech arbiters – which they unquestionably already are – to operate with little limitation or responsibility?