YouTube row raises serious questions on censorship by tech giants
Do we want Google, Twitter and Facebook to be sole gatekeepers of public speech?
A still image from a recent video by Gemma O’Doherty on YouTube.
Last week Google terminated the YouTube account of Irish journalist Gemma O’Doherty, alleging that she had violated their hate speech policies. The video in question is no longer accessible, but it is widely reported that the offending comments involved “criticism of ethnic minorities”.
If, as Google alleges, some of Ms O’Doherty’s comments were indeed inclined to incite hatred toward minorities, then the decision to shut down her YouTube channel is arguably reasonable and legitimate. Who, after all, could possibly object to the censoring of online hate speech?
Upon closer examination, however, this case raises serious questions about the role of private corporations as censors of public speech. Even assuming that Google made the right call in this particular instance, that is not dispositive for the larger issue, namely whether a handful of private corporations should be in a position to monitor and control the flow of information on what is now a large part of the infrastructure of the global public sphere.
Google would presumably argue that as a private company, it is fully entitled to regulate the use of its own services. As Google’s statement put it: “All users agree to comply with our terms of service and community guidelines when they sign up to use YouTube.”
But this is too quick: Google is not just any private company offering a private service.
Imagine if a well-resourced citizen decided to set up a private debating club in a privately owned plaza, and began to make ticketed seats available to the public. Now, imagine that this private club gradually became the most coveted forum for discussing the town’s public affairs. In this instance, what was once a private debating club effectively becomes the very infrastructure of the town’s public sphere.
How would we feel if the original founder of the club, after the astonishing success of his initiative, insisted that as sole owner and manager of the service, it was up to him, and him alone, to decide who got to talk and for how long, depending on their compliance with his “community policies”? If the town acquiesced in his demand, they would be placing the power of censorship over their public sphere in the hands of one leading citizen.
Tech giants such as Google, Facebook and Twitter are similarly positioned with respect to the global public sphere. Their networks started out small, but in a relatively short period of time became the preferred channels of public debate and opinion formation for hundreds of millions of citizens across the world.
When they were niche enterprises, it made sense for social media corporations to police the usage of their own platforms, determining what constituted acceptable and unacceptable usage. They were, after all, private companies that hosted only a relatively small volume of public debate.
But once these social media platforms began to rival television and radio as channels of public debate, the tech giants that owned them assumed a new public role. Whereas before they had moderated specialised communication networks, they were now acting as gatekeepers of public speech itself. They now had the formidable power to decide who had a voice in the public sphere and who did not.
Google, Twitter and Facebook typically defend their censorship of user content as a necessary antidote to harmful forms of communication such as “hate speech”. In fairness, it is hard to dispute that some level of control is necessary, given the level of barbarity to which some social media users can sink. Indeed, one cannot help but sympathise with social media providers, who, in the absence of viable alternatives, find themselves with little choice but to police their own platforms.
On the other hand, many forms of offensive speech are not black and white. For instance, speech that contests dominant narratives about political and social authority, law, marriage, gender, religion or national identity may be deeply unpopular and offensive to many, yet offer intelligent critiques of conventional wisdom and not constitute any genuine incitement to hatred or violence.
The question is: who ought to have the authority to determine when public speech has crossed the line from civility to incivility, from vibrant critique to incitement to hatred? Private corporations with a vested interest in keeping their customers and shareholders happy are hardly promising candidates for the job.
Public speech is now effectively regulated by the diktats of a handful of self-appointed corporate watchdogs. It would be extremely naïve to believe that their gatekeeping role will not be undermined by their commercial interest in staying on the right side of public opinion and avoiding any backlash among their customer base.
The future of free speech hangs on how much control we cede to private corporations over the speech they host on their platforms. If we do not find a way to decentralise content-moderating functions and separate them from commercial incentives, then the prejudices of a handful of tech giants and their staff will determine which opinions get aired and debated on social media, and which do not.
Given the increasingly dominant role of social media in shaping public opinion and political outcomes, that is a truly frightening prospect.