Una Mullally: AI comes hardwired with human bias

Unconscious prejudice pervades even the most conscientiously constructed algorithms

“We are told that technology and algorithms are somehow exempt from human flaws, are clinically unbiased and coldly fair. But that is not so.”

On Friday, at a conference on data and privacy in Rotterdam, organised by the Goethe Institute, where I was giving a talk on algorithms in daily life, I was reading about one ethical policing conundrum, the ongoing saga of Garda whistleblower Maurice McCabe, while pondering another: the growth of predictive policing.

One of the speakers was Prof Charles Raab, co-chair of the UK Independent Digital Ethics Panel for Policing, “a formal mechanism by which law-enforcement agencies can test ethical boundaries for policing an increasingly complex and digitised Britain”. There is something reassuring about the fact that such a group exists in Britain, and something daunting about how far many other nations are from that level of awareness of the impact that predictive policing, and algorithms in policing and in the criminal justice system, will have on our lives.
