Facebook warns Government over child protection legislation

Social media giant claims proposed law could threaten freedom of expression

Facebook has called on the Government to clarify what constitutes ‘harmful communication’ as included in proposed legislation. Photograph: Lionel Bonaventure/AFP/Getty Images

Facebook has called on the Government to clarify what constitutes “harmful communication” as included in proposed legislation to help protect children online, warning the proposal could interfere with freedom of expression.

In a statement to the Oireachtas communications committee, Facebook Ireland’s head of policy Niamh Sweeney said the social network understood the reasoning for establishing a Digital Safety Commissioner, but noted there was no clear definition of “harmful communication” in the draft legislation to achieve this.

“The exact parameters are left undefined . . . and this will lead to uncertainty and unpredictability,” she said.

“While it would clearly not be the intention of this Bill to impact on free speech in Ireland, the commissioner’s ability to issue a decision ordering the removal of ‘harmful communications’ without allowing an opportunity for the digital service undertaking to appeal, ought to be considered in light of the potential for limiting freedom of expression. Therefore, we believe it is important to have a clear definition of what constitutes a harmful communication included in the legislation.”

The Digital Safety Commissioner Bill, 2017, was published by Sinn Féin and follows on from the Law Reform Commission’s report on internet safety, which proposed the establishment of a Digital Safety Commissioner, among other reforms necessary to promote the protection of children online.

Facebook Ireland is appearing before the Oireachtas communications committee today in the wake of the Channel 4 Dispatches programme that raised serious questions about the implementation of its content review policies.

The company said it had already taken steps to remedy some of the issues raised by the programme, including a review of how its training for content reviewers is currently undertaken, and an audit of work conducted in the past six months.

Tough questioning

Facebook is likely to face tough questioning from the committee on its policing of content on the platform.

The Oireachtas hearing comes a day after the social media giant said it had discovered the first co-ordinated disinformation campaign designed to influence the US midterm elections, but stopped short of identifying Russia as being behind the attempts to interfere in US democracy.

Facebook has been working with the FBI on the attempted interference in the November vote, which follows its discovery of the Russian Internet Research Agency campaign to sow division in the US during its last presidential election campaign in 2016.

Facebook had removed 32 pages and accounts from both Facebook and Instagram because they were involved in “co-ordinated inauthentic behaviour”, the company said on Tuesday. Almost 300,000 people followed at least one of these pages, created between March 2017 and May 2018.

The pages ran about 150 ads at a cost of about $11,000, but Facebook said the operatives had not been able to run ads since the introduction of its new system for verifying political advertisers.

Sheryl Sandberg, Facebook’s chief operating officer, said the company was sharing the information now because one page was promoting a protest set to take place in Washington next week.

“Security is an arms race and it is never done,” she said.

Although the company did not accuse Russia of being behind the effort, Mark Warner, a senator and the top Democrat on the Senate intelligence committee, said it was “further evidence that the Kremlin” was using social media to “sow division and spread disinformation”.

– Additional reporting: Financial Times Service