Cog and Kismet. They sound like characters from a kid's cartoon, but they might be more at home in the science fiction section of a bookshop, except that they really exist. They are humanoid robots which are part of a project at the Artificial Intelligence (AI) Laboratory at the Massachusetts Institute of Technology (MIT), writes Breda O'Brien.
Cog is a robot designed to learn in the way a human infant does. He or it has a body, two arms, a head, ears and eyes. Kismet is a robot who interacts with humans through her body posture and facial expressions. In online pictures, Kismet is really cute, with huge blue eyes. She looks a bit like a Furby without fur, so that all the internal workings show.
Aside from being cute, both Cog and Kismet have a serious purpose. At one time, efforts at creating artificial intelligence concentrated on very abstract aspects of intelligence, such as chess playing and proving mathematical theorems.
Now, in order to simulate or create a human-like intelligence, the researchers are concentrating on the fact that humans live in a body and are social creatures. Cog experiences the world in the way a human child does, in that he has to wrestle with gravity and balance. People interact with Kismet and Cog as if they were human, because the researchers believe socialisation is an important part of the process of becoming human.
This kind of research raises tricky questions, and not only the paranoid worry that robots will take over the world, given half a chance. More importantly, if you are trying to create human-like intelligence, what does it mean to be human? What aspects of humanity are distinctive?
For a number of years Dr Anne Foerst was part of the AI project at MIT. She is a German Lutheran minister who is also a theologian working at the interface between religion and technology. From what I have read about her on the Internet, Dr Foerst gets a bit of a kick out of the way people react to a theologian being a research scientist on an AI project.
Her explanation is that theologians explore the cultural and spiritual dimensions of the question of what it means to be human, and that the wisdom of religious studies enlarges our understanding of humans and is therefore valuable in the creation of humanoid machines.
No doubt it is also handy to have someone nearby to look at the ethical questions in trying to create human-like creatures, and how we should treat them if we succeed. Not that it is likely to happen soon.
Dr Foerst was here in Dublin during the week, as a guest at the monthly Open House seminars held by Media Lab Europe, and I am kicking myself that I missed her, and the seminar. It had the provocative title, "The soul of technology is in transition" and was also addressed by John Moriarty.
We are used to seeing a kind of stand-off between religion and science, so it is intriguing to see a seminar aimed at CEOs, researchers and government policy-makers exploring the links between spirituality and technology. Dr Ken Haase, acting director of Media Lab, who introduced the seminar, is quoted in a press release as saying that as a boy he became a "militant evangelical atheist" because he felt he had to be in order to be a scientist.
However, he changed. "As I grew older, education and experience led me to realise how the scientific orientation and the religious orientation might not only be compatible but even intimate with each other."
Human beings are always asking questions, he says, about the world, about themselves, about their family and friends, about their triumphs and tragedies, about reasons and causes and consequences. These questions lead to answers, which in turn lead to more questions.
"But there is always a point, whether for scientist or minister, expert or amateurs, when the answers stop. This place is a place of mystery and challenge upon which both the scientist and the minister and sometimes the theologian focus their attention."
A recurring theme in Dr Foerst's work is that we tend to give scientists great authority. As a result, we fail to question when they move out from their own area of expertise, and begin to make statements about the truth of the human condition.
She no longer works for the AI project, but when she did, she worked with Marvin Minsky, also a former mentor and colleague of Dr Haase and a highly influential figure in the world of AI.
On the Internet, I came across an account which she gives of a conversation she had with Minsky. At the time she was working as a counsellor at a cancer care centre for children, and the questions which parents were asking were heart-rending. Why my child? Why is this happening?
She said to Minsky that even if you could tell the parents the exact genetic, environmental and social reasons the child developed cancer, you would not be able to answer their fundamental question: why?
Minsky replied: "Yes, but if I know exactly how the human system works and I know exactly how the brain works, then I can actually erase the question of why and erase the grief from the brain."
That might seem like a chilling response to those of us who believe that the capacity to care, to grieve and to question is a human characteristic we would not want to erase.
However, this reductionist vision is just Minsky's opinion, and it reflects a belief system which is no more capable of being proven in scientific terms than any theological vision. Those working in AI often talk of humans as "meat machines" as though it were scientifically possible to say with certainty that that is all we are.
Those of us who hold other beliefs, including those of us who feel that the events at Easter 2,000 years ago provide a clearer picture of the dignity of human beings, should not recoil in horror. Neither should we feel inadequate to challenge science when it steps into the realm of belief systems.
We need to engage in a non-antagonistic way, in the spirit of Dr Haase's contention that the "scientific and religious orientation might not only be compatible, but intimate with each other."