Unthinkable: Could we make a computer trip on acid?

Computers will never have human consciousness unless they have our foibles too, argues author Andrew Smart

For such a vital human capacity, consciousness is still a mystery to us. The question "What is consciousness?" has long been explored by philosophers but traditionally shunned by scientists, "because it was considered 'spooky' or too vague or new-agey", according to author Andrew Smart.

But that is starting to change, with neuroscientists and theorists in artificial intelligence joining the quest to locate what might be called the defining characteristic of humanity.

Smart's new book Beyond Zero and One: Machines, Psychedelics and Consciousness puts forward a tantalising hypothesis: that consciousness is a type of hallucination that may have evolved through the aeons as a survival mechanism.

To answer the question “What is consciousness?” one must imagine how a computer could become human, he says. It would have to do more than pass the Turing test, the traditional benchmark for artificial intelligence. It would have to have human insights, including the capacity to crack jokes, to empathise and even to trip on acid.

Yes, Smart suggests that one way of testing for consciousness would be to see if a supercomputer could achieve a psychedelic state. He calls this the Turing-Acid Test, aimed at detecting “a kind of biological marker for consciousness in artificial minds”.

Smart explains the idea underlying his theory:

“In order for a machine to have human consciousness and its own intuitions, the computer might also have to develop human-like biases and errors, even though these are the things we wish to eliminate by using robots to reason perfectly.”

You describe consciousness as a “useful hallucination” and suggest it could have evolved to aid our survival. How would one go about proving this?

“This goes back to an ancient philosophical debate about whether how the world ‘seems’ to us really reflects how the world ‘is’. Many AI researchers think this old debate is irrelevant or resolved, but I don’t agree at all.

“Of course, if there were a huge difference between how the world is and how it seems to us, we would likely have died off. If it seems to me that there is a bus coming farther down the road, but it is actually only a few metres away, and I step in front of the bus, something bad will happen to me. However, there are many examples where this breaks down.

“I am not sure it is possible to prove that consciousness is a useful hallucination; I think we can only argue for or against it.

“For example, to dragonflies huge pools of oil seem like an ideal place to lay eggs, because dragonflies have evolved a mechanism to detect light polarisation, which usually correlates with the presence of water. However, oil polarises light much more strongly than water - thus dragonflies find asphalt roads and pools of oil much more attractive than water.

“This is not very useful for dragonflies, because they get trapped in the oil. Nor does light polarisation reflect the ‘truth’ about the presence of water. Whether or not dragonflies experience anything is another question, but at some level of description it seems to dragonflies that highly polarised light is water, when it is not always.

“While researching the book I looked at the field of hallucination research - a small subfield within neuroscience and psychology - and the current perspective is that there is no hard distinction between normal waking consciousness and hallucinating. This is in contrast to our common-sense idea that hallucinations are ‘mistakes’ or ‘erroneous’ perceptions.

“The idea is that normal waking awareness is physiologically constrained by our senses, but fundamentally our experience of the world is created by the brain using the input from our senses. Hallucinations are created by the brain in exactly the same way, only with less input from the outside world.

“Hallucinations, then, are real perceptions and real experiences; they are just not constrained in the same way as so-called normal experience. But this is a spectrum, and we constantly move around on it depending on how tired we are, our blood sugar, whether we are on drugs, or whether we have schizophrenia, for example.

“But I also wanted to argue for something that is missing from most mainstream discussions of AI - the idea of subjectivity, or a point of view. AI typically starts with this Platonic idea of ‘human intelligence’ and asks: how can we create a computer program that behaves in an intelligent way? And of course recently this approach has been very successful for domain-specific tasks like games.

“But I argue that consciousness is more fundamental than intelligence. I think intelligence is a human cultural invention; intelligence is a model of our thinking and behaviour, whereas consciousness just is. It’s a brute fact about the universe.

“So I think in order to explain or replicate human intelligence, we first have to explain and possibly replicate human consciousness. You can have behaviour that we would describe as intelligent without consciousness, but I argue that you cannot have human intelligence without human consciousness.”

You point out that “computation” is not really how the brain works. So should we replace the Turing test as a standard for AI with something like the Robin Williams test (referencing a comedian you mention in the book), namely creating a machine with a sense of humour?

“Yes, I argue that computation is not an intrinsic property of brains - or of physics, for that matter. In other words, if there were no human minds in the universe, I don’t think the universe would be computing anything; the universe computes nothing without us thinking that it does.

“Computations are also very useful - amazingly useful - models for the behaviour of things. This is still debated of course - many physicists and AI researchers argue that the universe is a computer - but it’s a question of metaphysics and epistemology.

“I would say it’s the same with intelligence. The whole concept of ‘intelligence’ is very much like Daniel Dennett’s idea of the intentional stance. We see an ant colony behaving in very sophisticated ways and we call that ‘intelligence’ - but it’s just how we model that behaviour.

“Of course, Dennett also argues that consciousness is a similar model that we have created of ourselves - in other words, there may not be any real consciousness. But I don’t agree; I think Descartes was right in this sense - the only thing that is real is our consciousness.

“This is not to say that there is no reality but I think scepticism is justified even regarding our models like intelligence and computation.

“But I agree that something like a Robin Williams test would be much more sensitive to whether the machine has human-level intelligence. Of course, Robin Williams was an extreme outlier in terms of talent, but I think humour is a fundamental part of human intelligence.

“Even an AI with an ordinary sense of humour is still far in the future. To me this would be a system that can combine logically unrelated and vague concepts into a novel concept, and verbalise this new concept in such a way that it violates certain expectations of the interlocutor; that is roughly what a joke is.”

You recommend giving computers acid - or attempting to do so - to try to make them more humanlike. But would it make them any safer? Is there anything to suggest a supercomputer on LSD would be more benevolent than one not so drugged?

“Giving a supercomputer LSD might not make it safer for us. But what was interesting to me was how this idea of LSD and AI opened up all kinds of questions about machine intelligence. Many people say, ‘Oh, that is crazy, it would never work, superintelligent AI will not work that way.’ To which I would ask: OK, what is different, then?

“Why won’t a superintelligent machine be able to experience LSD? I think this question reveals a kind of hidden assumption among many philosophers and AI researchers: namely that superintelligent AI will not have a subjective point of view; it won’t need to be conscious; it won’t need to have this vague idea of phenomenal experience.

“I would argue that it won’t be superintelligent then - at least not in the sense of being able to change and adapt to its environment better than humans can and do.

“This superintelligent AI will be a zombie. Of course, a superintelligent system could still be very dangerous and exceed human capabilities in some narrow domain like mining resources or controlling the stock market. And the risk is that once machines understand how to learn, they will keep creating smarter and smarter machines. I question this idea, unless at some point during their rise to superintelligence AIs become conscious and develop relativistic and subjective points of view.

“I don’t think it’s crazy, though, to start trying to engineer a machine to have subjective and even psychedelic states. And I do think that if the AI can experience typical psychedelic reactions, like a sense of reverence for nature and for other conscious agents, it would be a good thing.

“We might be hesitant to talk about engineering a psychedelic AI because at this point most humans do not want to alter their consciousness in such a way either. But the point is, to paraphrase the neuroscientist Christof Koch, there is no fundamental law in the universe that would prevent the existence of subjective feelings in artefacts designed by humans - and I would add to that psychedelic experiences.”

Many university administrators would baulk at the idea of bringing psychedelics into academia. What role do you think they could play?

“Indeed, since psychedelics were put on the Schedule I list of illegal drugs in the late 1960s, serious research on these molecules virtually stopped.

“However, there are groups in the UK and Switzerland, and even in the US, who are reinvigorating this field using modern neuroscientific methods, and the results are astounding.

"If you look at the work of Robin Carhart-Harris and David Nutt at Imperial College and collaborators, they are really leading the charge of using psychedelics to inform theories of human consciousness, using brain-imaging data from people on psilocybin.

"These drugs not only offer a direct way to perturb consciousness and observe the effects, but they can also reduce fear of death in terminally ill patients, as a recent study showed."

Accepting that “to err is human”, would a supercomputer with human-like consciousness be more dangerous to humanity than one without?

“In my opinion, human-like consciousness would at least be a way to ensure that the AI valued the same things that humans do. Of course, we would need to be careful that the AI is not psychotic, and again this is where I think engineering something like the psychedelic state in silicon – or whatever computing medium we ultimately use – would be very beneficial.

“Interestingly, it seems that some of the popular fear about AI is that it would develop consciousness and suddenly turn evil for some reason.

“To me, the greater danger is some unconscious superintelligent AI system that has control over safety-critical systems like hospitals or air traffic and, without any malicious intent (indeed it would have no intentions of its own), causes massive destruction because of some unintended consequence of its blind pursuit of the goals humans have programmed into it.

“If this AI had human-like consciousness, it might at least pause and think about human lives before carrying out its plans – even if the plans make complete sense relative to the AI’s goals.”

philosophy@irishtimes.com

Twitter @JoeHumphreys42