Facebook argues it cannot be held in contempt of court – but its users can
Company has at times appeared unable to handle volume of harmful, unlawful content
An appearance by Facebook and Twitter representatives at the Central Criminal Court on Thursday will do little to assuage fears that social media companies are struggling to manage the use of their platforms, putting legal processes at risk.
The US companies were ordered to answer charges of contempt after users of both platforms identified the two boys convicted of murdering 14-year-old Ana Kriégel, despite an order by the trial judge prohibiting their naming by law.
An innocent boy was also wrongfully identified on social media, the court heard.
A temporary injunction had been issued against Facebook and Twitter on Wednesday restraining them from “any repeat or further publication” of material identifying Boy A and Boy B.
However, after hearing that neither company can prevent contempt of court from occurring on their platforms, Mr Justice White agreed to a change to the order’s wording. The social media giants need only remove identifying material once it is brought to their attention or they otherwise become aware of it.
Facebook Ireland Ltd and Twitter International Company successfully argued that it was not technically possible for them to completely prevent the posting of such material. As they did not know the names of the two boys, they could not be subjected to a prior restraint order.
Such identifying material can swiftly go viral. If it does so before social media platforms can remove it, this would effectively defeat the purpose of the trial judge’s order and undermine a Children Act provision that prohibits the identification of minors accused or convicted of a criminal offence.
A similar issue has previously surfaced in connection with rape trials where social media users have taken it upon themselves to name alleged victims, in breach of their right to anonymity.
Amid Mr Justice White’s “trenchant warning” to individuals not to name Boy A and Boy B, Brendan Grehan SC, for the DPP, said he wanted “some responsibility to be shouldered by the platform providers”.
But the amended order against Facebook and Twitter concedes that the companies do not have the power to stop either the identification of Boy A and Boy B or the defamatory misidentification of other children.
This is despite Facebook’s use of image-matching technology, which it began deploying on Wednesday before the court’s intervention. This technology involves the “banking” of a photograph or video previously flagged as problematic. The image or video is assigned a unique digital fingerprint. If someone tries to share the same content again, the software can automatically block the upload.
The technology has its “limitations”, Facebook Ireland’s market manager for community operations Eoin McDonald explained in an affidavit, in that it can only flag with certainty “identical versions of the same image”. Modified versions of the same image may slip through, meaning the process of removing and “banking” them has to be repeated.
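The limitation McDonald describes can be illustrated with a simple sketch. The snippet below uses an exact cryptographic hash as the “digital fingerprint” purely for illustration; Facebook’s actual system is not public and production systems typically use more robust perceptual hashing, but the core weakness is the same: an identical copy is caught, while even a trivially altered copy produces a different fingerprint and slips through. All function names here are hypothetical.

```python
import hashlib

# Fingerprints of content previously flagged ("banked") as problematic.
banked: set[str] = set()

def fingerprint(data: bytes) -> str:
    """Assign a unique digital fingerprint to content.

    For illustration only: an exact SHA-256 hash, which matches
    byte-identical copies and nothing else.
    """
    return hashlib.sha256(data).hexdigest()

def bank(data: bytes) -> None:
    """Bank a flagged photo or video so re-uploads can be blocked."""
    banked.add(fingerprint(data))

def upload_allowed(data: bytes) -> bool:
    """Block an upload if its fingerprint matches banked content."""
    return fingerprint(data) not in banked

# A previously flagged image (placeholder bytes).
original = b"<image bytes of flagged content>"
bank(original)

# An identical copy is blocked automatically...
print(upload_allowed(original))            # False: upload prevented

# ...but a slightly modified copy gets a new fingerprint and slips
# through, so it must be removed and banked all over again.
modified = original + b"\x00"
print(upload_allowed(modified))            # True: upload allowed
```

This is why, as the affidavit concedes, the removal-and-banking cycle has to be repeated for every new variant of the same image.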
This Irish hearing has not happened in isolation. A string of contempt cases recently prompted the UK to carry out an investigation into whether social media is putting legal processes at risk. The operative word in its conclusion that it “does not yet pose a serious threat to the criminal justice system” appears to be “yet”.
The incident also has echoes of broader tensions that have cropped up around the world between tech platforms and the social order, highlighting the extent to which their very efficiency can be exploited by what Silicon Valley terms “bad actors”.
Facebook, for example, has notably been criticised for its role in spreading incitements to violence against the Rohingya people in Myanmar, as well as the live-streaming on its platform of the Christchurch massacre in March this year.
Through contractor companies, it has assembled a vast workforce of human content moderators, often handling extremely sensitive content under time pressure. These moderators work alongside Facebook’s technical tools in an attempt to quell potential harm and contain reputation-damaging scandals.
But in common with its smaller social media peers, Facebook has at times appeared unable to handle the sheer volume of harmful, unlawful or fake content that is posted and shared on its platform, usually in violation of its own community standards.
McDonald’s affidavit significantly repeated Facebook’s strongly held corporate position that it is “not a publisher of user-generated content”, but an information society service provider. He noted that, as this is “a matter of European and Irish law”, the company itself could not be held in contempt.
Its users certainly can.