AI and the ‘responsibility gap’

Artificial morality

Sir, – As far as I know, I’ve never met anyone who has been part of a firing squad. Thankfully we live in a more enlightened era. The image comes to mind reading Joe Humphreys’s Unthinkable column where he discusses the idea of a “responsibility gap” (“A philosopher foresaw the biggest problem with AI. Now he’s sounding another warning”, Opinion & Analysis, September 25).

Without any personal accounts available, internet searches suggest that the practice of loading some rifles with blank cartridges was probably followed. Thus individual members of firing squads could assuage their guilt by reflecting that they might not have actually shot the victim. It seems like a pragmatic, low-tech version of an orchestrated responsibility gap with a positive effect.

It might seem irrelevant today, but in truth the opposite challenge faces many people in modern society – that they cannot save everyone. Those who set speed limits, or license drugs or new treatments, or make them available to patients, must acknowledge that where they draw boundaries will determine life and death for unseen others. At any time there are hundreds of people awaiting organ donation, but donors arise at rates of only a few per week in Ireland. Choices must be made, ideally on the basis of the best evidence, but all people are prey to psychological biases. It would seem that artificial intelligence, by offering a more objective viewpoint, might also offset some of the moral fatigue these roles can entail. Whether society is yet prepared for artificial morality is hard to know, but given the extensive roles non-human intelligences have found already, it must be a consideration now or in the future. – Yours, etc,




Co Cork.