UK advertisers urge Facebook and Google to set up standards body
Independent body needed to enforce rules on content moderation, says industry group
Phil Smith, director-general of the Incorporated Society of British Advertisers (ISBA), said the two technology companies should adopt common policies over the detection, monitoring and removal of inappropriate content. The group’s members include Lloyds Banking Group, Unilever and Procter & Gamble.
Google and Facebook should “thrash out some common principles” over content moderation and removal that could be adopted and enforced by an independent body, which they would fund, he said.
Other social networks and tech platforms, such as Twitter and Snapchat, would also be invited to join the regulatory framework.
“At a minimum, what we’re looking for is independent oversight and reporting,” Mr Smith said. “This would build confidence in the platforms themselves and would be good for their reputations.”
Funding an independent body would also strengthen consumer and advertiser confidence and ward off the threat of government regulation, he said.
Pressure for regulatory action against technology platforms is increasing across Europe. Last year Germany introduced fines of up to €50 million for companies that do not remove hate speech or fake news within 24 hours of receiving a complaint, and Theresa May, the UK prime minister, has called on technology groups to remove terror-related content within two hours of it appearing online.
Google and Facebook declined to comment on the ISBA proposal.
Google said recently that it would increase the number of people reviewing content on its YouTube video channel to more than 10,000 – although it declined to say how many it already employs.
Facebook recently promised to double the size of its safety and community teams to 20,000 by the end of 2018 – a move that was partly in response to the furore in the United States over content associated with Russian entities that aimed to disrupt the outcome of the 2016 presidential election.
The technology companies have intensified their policing efforts amid growing public concern.
This year’s Edelman Trust Barometer found that public trust in big technology groups is declining. The fall over the past 12 months was particularly pronounced in the UK, where concerns have grown over the past year about the availability of extremist and other inappropriate content on YouTube.
Facebook said in a statement that it was taking an “aggressive approach towards illegal and inappropriate content” on its platform.
Mr Smith, a former marketing director of Kraft, said advertisers expected the big technology companies to take action because consumers were becoming sceptical of digital advertising.
“Our consumer research tells us that digital advertising is intrusive and not trusted,” he said. Consumers “know that television advertising is regulated in some way – both the advertising and the content – but they don’t believe that to be the case in any respect when it comes to digital”.
Many big UK advertisers left YouTube in March last year when it emerged that ads from well-known brands had run alongside jihadi videos and other extremist content. Then in November, Diageo, Mars and Hewlett-Packard pulled advertising from the video channel after their campaigns appeared alongside videos featuring children and sexualised comments.
“We are applying the lessons we’ve learned from our work fighting violent extremism content over the last year in order to tackle other problematic content,” Susan Wojcicki, YouTube’s chief executive, wrote in a blog post last month. – Copyright The Financial Times Limited 2018