Technology giants Google and Facebook have cautioned the Government against introducing overly restrictive rules for removing harmful online content.
The companies’ views on how best to police the internet are among 84 submissions received by the Department of Communications in response to a public consultation on the introduction of a new online safety act.
That consultation also sought views on the best way to regulate online content, with Minister for Communications Richard Bruton weighing up whether to establish a new regulator with responsibility for online safety.
Other organisations to submit proposals to the department include Apple, RTÉ, Virgin Media, Vodafone, the Data Protection Commissioner, the Broadcasting Authority of Ireland, the Advertising Standards Authority for Ireland and Three Ireland.
A number of charities, including the Irish Society for the Prevention of Cruelty to Children (ISPCC), also put forward submissions, as did other State departments and lobby groups such as the Irish Council for Civil Liberties.
In its submission, Google noted that it already has strong enforcement policies in place covering the removal of harmful content, which had led to the removal of more than 8.7 million videos that broke its rules, and the blocking of more than 261 million online comments between October and December last.
“It makes sense for a proposed regulatory body to complement rather than compete with the removal processes already operated by internet service providers. It is not likely to be an efficient use of public resources to assign a regulator for online content that replicates the processes already pursued by online platforms, nor would this deliver significant additional benefits to users of online services,” Google said.
Google warned Mr Bruton that a failure to adequately define the term “harmful content” could lead to enforcement difficulties for the company and the public.
It said that failing to address this could lead to confusion among users over what is acceptable behaviour when using online platforms.
“There will be [a] gradual chilling of online expression, with internet users being fearful of expressing their opinions or participating in online debate,” it said. The internet giant also warned that if a precise self-contained definition for harmful content is not legislated for, online platforms may take a heavy-handed approach to policing it.
“Many platforms (particularly those who are just starting up) may be forced to take the path of least resistance and delete content irrespective of whether it is obviously lawful or not,” Google said.
Facebook said in its submission that while the proposed online safety regulator should have the ability to issue sanctions on companies that fail to take down harmful content when ordered to, “these should only be applied if the platform consistently and systematically fails to comply with valid takedown orders by a regulator”.
The ISPCC said it believed that any request for the removal of harmful content should go directly to the service provider first but added that the reporting mechanisms and complaints policies that some online platforms have in place “are not always applied in the most consistent of ways”.
Separately, Cybersafe Ireland said the proposed online safety commissioner should have the power to investigate complaints of serious, harmful communications targeting an individual, and to issue takedown notices to platforms.
“This would be greatly facilitated by the use of a single point of contact process, as used in law enforcement for data retention purposes,” it said.
Mr Bruton said all submissions received by his department would be considered.
“While we are listening to all points of view, we will not accept a situation whereby online companies are permitted to regulate themselves. That day is gone. We need better controls in place,” he said.