The bigot in the machine: Tackling big data’s inherent biases

Discrimination in statistical analysis goes deeper than algorithms written by humans


In 2015, a group of US researchers with an interest in internet user privacy developed a tool to examine Google’s advertisement settings feature, which allows users to select the type of online ads they see based on their interests. The goal was to determine what effect different browsing behaviours and user profiles had on the ads that popped up, but they were not expecting what they found: gender discrimination was baked right into the user experience.

The researchers concluded: “Setting the gender to female resulted in getting fewer instances of an ad related to high-paying jobs than setting it to male”. Similarly, a 2015 study from the University of Washington found that despite 27 per cent of chief executive titles being held by women, a search for “CEO” using Google’s images tab returned results of which only 11 per cent depicted women.

What is going on here? Is Google evil? Are there sexist advertisers at work? Or is the algorithm to blame?

Algorithms work on an “if a, then b” basis, carrying out instructions when provided with a set of rules to follow. Algorithms are, by their nature, quite literally logical; however, the results they produce can be unfair, biased, and downright discriminatory. This is why the Russian developers behind the hugely popular FaceApp were accused of creating a racist algorithm: a feature of the app that “beautifies” the user’s headshots also automatically lightens skin tone. Did the developers write their racist version of beauty into the code?
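To see how a rule that is impeccably logical on its own can still mirror skewed data, consider this deliberately simplified sketch in Python; the rule, field name and threshold here are hypothetical, not anything Google has published.

```python
# Illustrative only: an "if a, then b" rule that is logical in isolation.
def show_high_paying_job_ad(profile):
    # Hypothetical rule of the kind an ad system might derive from past click data.
    # If historical clicks skew towards men, that skew is quietly baked into the threshold.
    return profile["predicted_click_rate"] > 0.02

print(show_high_paying_job_ad({"predicted_click_rate": 0.03}))  # True: ad shown
print(show_high_paying_job_ad({"predicted_click_rate": 0.01}))  # False: ad withheld
```

The logic is flawless; the unfairness, if any, lives in where that threshold came from.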


Accusations of racist algorithms also surfaced at Google in 2015 when its Photos app tagged two African American users as gorillas. Shortly thereafter, Google Photos lead Bradley Horowitz said the incident was “one of the worst days of my professional life”, adding that Google needed a more diverse team in order to prevent something similar happening again.

‘Risk score’

This kind of bias can also occur where important decisions are made based on statistical analysis of large data sets in areas as diverse as personal finance, healthcare, job application processing and the legal system. In some US states, for example, analytics software is used to predict an individual's likelihood of re-offending and this "risk score" can be applied when setting parole conditions, bail, or the length of incarceration. The problem is that these algorithms "may exacerbate unwarranted and unjust disparities that are already far too common in [the] criminal justice system," according to former US attorney general Eric Holder.

One software company in particular, Northpointe, calculated its risk score in part using questions such as “Was one of your parents ever sent to jail or prison?” and “How many of your friends/acquaintances are taking drugs illegally?” ProPublica, a non-profit investigative journalism organisation, reported that these questions, based on demographic and socioeconomic factors, could be seen as disproportionately affecting African Americans.
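Northpointe has never published its model, so the following is purely a hypothetical sketch of how a questionnaire-weighted score of this kind could fold socioeconomic proxies into a single number; every field name and weight below is invented for illustration.

```python
# Hypothetical sketch of a questionnaire-based risk score; the questions echo those
# reported by ProPublica, but the weights and scale are invented.
answers = {
    "parent_ever_incarcerated": True,
    "friends_using_drugs": 3,
    "prior_offences": 1,
}

weights = {
    "parent_ever_incarcerated": 2.0,   # a proxy for family background, not behaviour
    "friends_using_drugs": 0.5,        # a proxy for neighbourhood and social circumstance
    "prior_offences": 1.5,
}

risk_score = (
    weights["parent_ever_incarcerated"] * int(answers["parent_ever_incarcerated"])
    + weights["friends_using_drugs"] * answers["friends_using_drugs"]
    + weights["prior_offences"] * answers["prior_offences"]
)
print(round(risk_score, 1))  # 5.0: socioeconomic proxies contribute even with a single prior offence
```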

Northpointe founder Tim Brennan has since announced that the company is working on developing a score that does not treat a disadvantaged background as a predictor of criminality.

If these algorithms are producing biased results, are biased programmers always the problem? Not necessarily. In a paper exploring this problem, researchers at Eurecat – the Catalonian Centre for Technology – in Spain and the Institute for Scientific Interchange in Italy agreed that "algorithmic bias exists even when there is no discrimination intention in the developer of the algorithm".

In the case of the US criminal justice system, there is such a large database of offenders, along with records of their re-offences, that these records themselves can be mined for patterns in order to produce parole recommendations.

"Instead of trying to come up with a set of rules that says it is, for example, a combination of demographic information, behaviours, gender, seriousness of the crime, etc, there's enough statistical patterns emerging from the raw data that there is no need to codify this manually," says Prof Alan Smeaton, director of the Insight Centre for Data Analytics at Dublin City University.

“In this situation, when statistics become your friend, when there is that volume of data, there can’t be such a thing as algorithmic bias when the data doesn’t lie. Or so we thought.”
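Smeaton's point can be sketched in a few lines: instead of writing the rules, a developer hands historical records to a model and lets it find its own patterns. The toy data below is synthetic, and the model choice (scikit-learn's logistic regression) is simply one common option, not what any real parole system uses.

```python
# A minimal sketch of the data-driven approach: no rule is written by hand.
from sklearn.linear_model import LogisticRegression

# Each row is a (synthetic) historical record: [age, number of prior offences].
# The label says whether that person re-offended (1) or not (0).
X = [[19, 3], [45, 0], [23, 1], [31, 4], [52, 0], [27, 2]]
y = [1, 0, 0, 1, 0, 1]

model = LogisticRegression().fit(X, y)

# The "rules" now live in learned coefficients rather than readable code.
print(model.coef_, model.intercept_)
print(model.predict_proba([[25, 2]])[0][1])  # estimated probability of re-offending
```

Whatever disparities sit in those historical records are learned right along with the genuine patterns.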

Underlying presumption

The underlying presumption of data-driven approaches, says Smeaton, is that all data points are created equal – and they are not.

A famous example of the mistaken belief that correlation between data sets equals causation is the tongue-in-cheek University of Edinburgh study linking higher chocolate consumption to serial-killer activity. The scientists pulled in data on chocolate consumption per capita around the world and found a significant correlation with the number of serial killers per capita. Imagine banning chocolate on the basis of that, they scoffed. Now imagine similar statistical methods being used to decide whether or not you get parole.
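The trap is easy to reproduce with entirely made-up numbers: give two unrelated quantities a shared underlying trend and they will correlate strongly, as in this synthetic sketch (none of these figures come from the Edinburgh study).

```python
# Synthetic illustration of correlation without causation.
import numpy as np

rng = np.random.default_rng(0)
trend = np.linspace(0, 10, 50)                            # a shared confounder, e.g. national wealth
chocolate = 2.0 * trend + rng.normal(0, 1.0, 50)          # invented per-capita consumption
unrelated_outcome = 0.3 * trend + rng.normal(0, 0.5, 50)  # an invented, unrelated statistic

r = np.corrcoef(chocolate, unrelated_outcome)[0, 1]
print(round(r, 2))  # close to 1.0, yet banning chocolate would change nothing
```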

So rather than thinking of an algorithm itself as biased, it is important to understand that the bias could already reside within the data sources it uses.

Smeaton points out that all of these sectors have moved from one extreme of making decisions manually to blindly using big data: “Both approaches offer distinct advantages but we haven’t yet found the sweet spot between the manual coding and the entirely data-driven approach.”

The FaceApp developers trained their algorithm on a data set that most likely only included the faces of Caucasian Russians. Google Photos was developed at the Googleplex in Mountain View, California, where staff hadn't thought to include more diverse images while training their particular object detection software. The algorithm is only as good as the data.
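One straightforward guard, and this is a sketch of mine rather than anything FaceApp or Google has described, is to audit how the training set breaks down before the model ever sees it; the labels and counts below are hypothetical.

```python
# Audit the composition of a training set before training on it.
from collections import Counter

# Hypothetical labels describing each training image.
training_labels = (
    ["light_skin"] * 9200
    + ["medium_skin"] * 490
    + ["dark_skin"] * 310
)

counts = Counter(training_labels)
total = sum(counts.values())
for group, n in counts.most_common():
    print(f"{group}: {n} images ({100 * n / total:.1f}%)")
# Any group sitting at a few per cent of the data is a group the model will serve badly.
```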

"Algorithms simply grind out their results, and it is up to humans to review and address how that data is presented to users, to ensure the proper context and application of that data," says Keith Kirkpatrick, founder of 4K Research & Consulting.

‘Flavours’ of machine learning

This means there is an onus on software developers to actively detect and work to eliminate inherent biases in the data. This responsibility extends not just to the algorithms they use but also to the particular “flavours” of machine learning or other forms of AI they may be using, says Smeaton.

“Developers need to be constantly revising and have oversight of the data sources that are then used as the input into the algorithm,” he explains.
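That oversight can be as plain as routinely comparing outcomes across groups in whatever the system produces. The check below is a generic sketch of one such measure, a demographic-parity gap, run on invented outputs; it is not a method Smeaton prescribes.

```python
# Compare the rate of a favourable outcome across two groups in a model's output.
def favourable_rate(decisions, groups, group):
    # decisions: 1 = favourable outcome (e.g. ad shown, low risk score); groups: group label per case
    relevant = [d for d, g in zip(decisions, groups) if g == group]
    return sum(relevant) / len(relevant)

decisions = [1, 0, 1, 1, 0, 1, 0, 0]           # hypothetical model outputs
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = favourable_rate(decisions, groups, "a") - favourable_rate(decisions, groups, "b")
print(f"favourable-outcome gap between groups: {gap:.2f}")  # 0.50 here, a gap worth investigating
```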

There is no silver bullet for eliminating sexist ads or racist software. On the one hand, it should be as simple as ensuring that more and better data is constantly fed into these algorithms to improve machine learning; on the other, we need to recognise what “better” means and design for it. As Horowitz says, it may involve more diversity in technology companies, because a largely homogenous workplace is less likely to spot these implicit biases.

But what if algorithms are part of the solution rather than the problem?

If we look at advertising or the legal system before the advent of computing, bias already existed in spades. Whether it is the explicit bias of an advertiser choosing which demographic to target for their product or the implicit bias of middle-class, educated, mostly white judges issuing prison sentences, it is very difficult to exclude some sort of human bias from a decision-making process, explains Smeaton.

Algorithms are a way of codifying decision-making – putting its rules into explicit code – says Smeaton, and seeing those rules laid bare before us can actually help us spot unconscious biases and come up with counter-rules against them.

Algorithmic bias may be a blessing in disguise because it puts a spotlight on all-too-human prejudices by reproducing, and even amplifying, them on a large scale. If we learn that lesson, it may be difficult but not impossible to code for a fairer world.