Facebook removes 9.2m pieces of content for breaking bullying rules in Q3

Meta publishes report detailing prevalence of hate speech and harassment

Facebook included bullying and harassment metrics for the first time in its quarterly report. Photograph: iStock

Facebook owner Meta has published its latest transparency report for the social network, detailing the prevalence of hate speech, bullying and harassment on its platform.

The company, which changed its name to Meta in what it says is a bid to focus on the next generation of the internet, said hate speech on its network had fallen for the fourth consecutive quarter as its detection technology improved. It also included bullying and harassment metrics for the first time in its quarterly report, saying millions of pieces of content had been removed from Facebook for breaking its bullying and harassment rules.

The Community Standards Enforcement Report for the third quarter of the year details how Meta’s social networks, Facebook and Instagram, enforced their policies from July to September 2021.

According to the report, the prevalence of hate speech on Facebook was 0.03 per cent, or three views of hate speech per 10,000 views of content, down from five views per 10,000 in the second quarter.

On Instagram, hate speech prevalence was 0.02 per cent; it is the first time the company has reported a figure for that platform.

It said content classed as bullying and harassment was seen 14 to 15 times for every 10,000 views of content on Facebook during the three-month period, and between five and six times per 10,000 views on Instagram.

However, the numbers only document instances where the company did not need additional information, such as a report from a user, to decide if the content broke its rules.

Some 9.2 million pieces of content were removed from Facebook for breaking its bullying and harassment rules, with only 59.4 per cent found proactively.

Violence and incitement

Meanwhile, violence and incitement on Facebook accounted for four to five views per 10,000 views of content, and two views per 10,000 views of content on Instagram.

In total, the company took down 13.6 million pieces of content on Facebook for violating its policies on violence and incitement, with 96.7 per cent detected by Facebook’s own systems. Instagram saw 3.3 million pieces of similar content removed, with a proactive detection rate of 96.4 per cent.

Guy Rosen, vice-president of integrity at Meta, said the prevalence of hate speech on Facebook had decreased for the fourth quarter in a row.

“We’ve cut prevalence by more than half within the last year through improvements in our technology, and changes we made to reduce problematic content in News Feed,” he wrote in a blog post.

The company also pledged to undergo an audit by EY in the fourth quarter to validate that its metrics have been measured and reported correctly. The results of that audit will be released in the spring.

Facebook has been under fire in recent weeks following allegations that it knowingly hosted hate speech and illegal activity, while leaked documents have shed light on how the company failed to heed internal concerns over election misinformation.