Twitter apologises for its ‘racist’ image cropping algorithm

Users highlight a feature that automatically focused on white faces over black ones

Employees walk past a lighted Twitter logo as they leave the company’s headquarters in San Francisco. Photograph: Glenn Chapman / AFP

Twitter has apologised for a “racist” image cropping algorithm, after users discovered the feature was automatically focusing on white faces over black ones.

The company says it tested the service for bias before deploying it, but now accepts that its testing did not go far enough.

Twitter has long automatically cropped images to prevent them taking up too much space on the main feed, and to allow multiple pictures to be shown in the same tweet. The company uses several algorithmic tools to try to focus on the most important parts of the picture, trying to ensure that faces and text remain in the cropped part of an image.

But users began to spot flaws in the feature over the weekend. The first to flag the problem was PhD student Colin Madland, who came across it while drawing attention to a different racial bias in the video-conferencing software Zoom.

When Madland, who is white, posted an image of himself alongside a black colleague whose head had been erased from a Zoom call after Zoom’s algorithm failed to recognise his face, Twitter automatically cropped the image to show only Madland.

Others followed up with more targeted experiments, including entrepreneur Tony Arcieri, who found that the algorithm would consistently crop an image of US senator Mitch McConnell and former president Barack Obama to hide Obama.

In a statement, a Twitter spokesperson admitted the company had work to do. “Our team did test for bias before shipping the model and did not find evidence of racial or gender bias in our testing. But it’s clear from these examples that we’ve got more analysis to do. We’ll continue to share what we learn, what actions we take, and will open source our analysis so others can review and replicate.”

Twitter is by no means the first technology firm to find itself struggling to explain apparent racial bias in its algorithms. In 2018, it was revealed that Google had simply banned its Photos service from ever labelling anything as a gorilla, chimpanzee, or monkey, after the company had come under fire for repeatedly mislabelling images of black people with those racist terms. – Guardian