Can a robot sin?
Artificial intelligence raises very big ethical questions
“Can a robot sin?” the journal Christian Today asks its readers. That, in essence, is what we are asking when we consider, for example, how the artificial intelligence (AI) driving a driverless car programmed to avoid accidents should decide when faced with a choice between two unavoidable collisions.
Or how we are to control self-learning machines like the Twitter “chatbot”, which had to be deactivated last year after it started posting increasingly racist, sexist and xenophobic messages based on what it had “learnt” online.
The questions raised by the likes of HAL 9000 and Isaac Asimov are now mainstream. Worries about the ethics and control of new so-called “autonomous weapons”, which select and kill targets without human intervention, have led to calls for a ban from, among others, the billionaire Elon Musk and Stephen Hawking.
And a report to next week’s World Economic Forum in Davos highlights such concerns. Although it argues that reducing human oversight may increase efficiency and is necessary for applications such as driverless cars, it warns of “dangers in coming to depend entirely on the decisions of AI systems when we do not fully understand how the systems are making those decisions”.
“Some serious thinkers fear,” the report says, “that AI could one day pose an existential threat: a ‘superintelligence’ might pursue goals that prove not to be aligned with the continued existence of humankind.” That “one day” may be upon us.
Such issues are also addressed in a new report from the legal affairs committee of the European Parliament, which recommends the creation of “legal personhood” for robots and AI. As well as suggesting the need for new ethical codes governing their production, the report’s recommendations would be tantamount to giving human rights to an intelligent robot – for example, vesting ownership of its patentable intellectual property in the robot itself rather than in its manufacturer. A legal and ethical minefield ... not least in defining the intelligence threshold at which such rights might be vested. This one will be hugely controversial.