Artificial Intelligence isn't all in the mind

Interesting things are happening on the margins of Artificial Intelligence (AI). The currently dominant approach in AI is to model the knowledge and rules underlying an intelligent activity, and then construct a computer program that implements this model.

A growing minority of researchers, however, argues that we must pay more attention to results from cognitive science and psychology that emphasise the importance of the physical properties of the human body in shaping our cognitive processes.

In other words, AI needs to take account of the fact that natural intelligence, whether in chimpanzees, dolphins, or humans, is embodied, and that embodiment plays a critical role in its formation. For AI, this means focusing attention on developing intelligent programs equipped with a means of perceiving and acting upon their environment: in a word, robotics.

A definition of AI ultimately depends on what we mean by "intelligence". If, for example, I develop a robotic device with the agility of a goat, capable of successfully negotiating the narrowest of mountain tracks, am I permitted to describe this device as intelligent? Within the traditional definition of AI, probably not: it's just a clever piece of engineering.

The irony is that although we now have a computer program that can beat the reigning chess champion, AI technology has yet to produce a system with the behavioural repertoire of even a housefly. But this may be about to change.

The "I" in AI has traditionally referred to the higher-level processes involved in such activities as speech understanding, scene perception and problem solving. The development of systems (or more accurately, fragments of systems) that can deal with some of these tasks has been the preoccupation of most of AI since its emergence in the early sixties.

AI systems of this type are constructed by creating a simplified symbolic representation of the part of the world the program deals with. This work has met with some limited success: many products of AI research have found their way into applications such as stock-market trading systems and the sometimes irritating grammar checker that's currently snooping on my prose as I write this article.

But there are serious limitations to all AI applications of this type. Take, for example, the speech-to-text translator that an American colleague of mine recently purchased and spent some time training; when I use it, the result is a sequence of words bearing no relation to anything I've said. What characterises this and similar applications is that they are specific to just one problem, not generally intelligent.

So how do we deal with this difficulty? The obvious solution of adding more knowledge and more rules doesn't work: the system becomes unwieldy and its performance degrades. Another possible solution becomes apparent if we look at natural intelligence.

Current AI systems are monolithic, general-purpose reasoning engines. The human brain, on the other hand, far from being monolithic, comprises many modules that are "experts" in particular sub-domains (e.g., vision, audition, fine motor control). These modules interact and collaborate, but there is no single central controller. So some sort of modularity, like that found in real brains, may be the key. And to study how these modules have evolved and developed, we must take embodiment seriously.
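For the technically curious, here is a toy sketch in Python of how independent expert modules can produce coherent behaviour without any central controller. Everything in it is invented for illustration, loosely in the spirit of behaviour-based robotics: each module proposes an action for its own sub-domain, and a fixed arbitration rule, rather than a central reasoning engine, decides which proposal drives the motors.

```python
# Toy sketch of modular control with no central controller.
# All sensor names, thresholds, and modules are invented for illustration.

import random

def avoid_obstacles(sensors):
    """Reflex module: veer away when something is close ahead."""
    if sensors["front_distance"] < 0.3:          # metres; illustrative threshold
        return ("turn", random.choice([-1, 1]))  # swerve left or right
    return None                                  # nothing to contribute

def seek_light(sensors):
    """Goal module: steer towards the brighter side."""
    delta = sensors["light_left"] - sensors["light_right"]
    if abs(delta) > 0.1:
        return ("turn", 1 if delta > 0 else -1)
    return None

def wander(sensors):
    """Default module: keep moving when no other module objects."""
    return ("forward", 1)

# Arbitration: earlier modules simply pre-empt later ones.
MODULES = [avoid_obstacles, seek_light, wander]

def control_step(sensors):
    for module in MODULES:
        action = module(sensors)
        if action is not None:
            return action

print(control_step({"front_distance": 0.2, "light_left": 0.9, "light_right": 0.1}))
# -> ('turn', ...)  the obstacle reflex pre-empts light-seeking
```

The point of the sketch is that nothing in it builds or reasons over a global model of the world; coherent behaviour falls out of the interaction between simple specialists.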

Taking embodiment seriously means that we must take account of the environment in which an intelligent system operates. Compare the steps involved in reading a text for a computer and for a human. A computer-based scanner with a character-recognition capability will transform a whole page of text into an editable electronic form in a single pass. Human reading, by contrast, involves many interdependent activities that are circumscribed by the physical details of our sensory apparatus.

The fact that my eyes have only a limited region of high sensitivity where they can pick up the fine detail of letters (yours do as well) means that I must move them systematically in jumps across the text. These jumps, or saccades as they are called, are on average about eight letters in extent. Another constraint is that the eyes are not under precise control during saccade execution.
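As a toy illustration, here is a minimal Python sketch of reading as a series of noisy saccades. The eight-letter average jump comes from the paragraph above; the landing-site scatter and the size of the high-sensitivity window are invented figures, there only to make the simulation run.

```python
# Toy model of saccadic reading: aim about eight letters ahead,
# but land only approximately on target.

import random

TEXT = "The quick brown fox jumps over the lazy dog and trots away"
MEAN_JUMP = 8       # average saccade length in letters (from the article)
LANDING_SD = 1.5    # landing-site scatter in letters (assumed)
WINDOW = 4          # letters seen in fine detail around fixation (assumed)

position = 0
while position < len(TEXT):
    fixated = TEXT[max(0, position - WINDOW):position + WINDOW]
    print(f"fixate at {position:2d}: ...{fixated}...")
    # The eye is launched towards a point eight letters ahead,
    # but noise in the motor system scatters where it actually lands.
    position += max(1, round(random.gauss(MEAN_JUMP, LANDING_SD)))
```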

Making a saccade in reading is a bit like throwing a ball: the eye is aimed in the general direction of the target, but exactly where it lands is not precisely determined. Traditionally, AI has ignored these physical details, considering them unimportant.

Yet intelligent systems have evolved over many millions of years, and the vast majority of that evolutionary history has been spent refining the sensory and motor systems. Intelligence of the type we possess makes a very late appearance and is constructed upon these pre-existing physical structures.

Evolution's twin is development: rather than genetically wiring in a repertoire of behaviours, evolution has opted to equip the newborn of species like ourselves with a flexible learning system that can be configured by its environment.

Embodiment is not a panacea for the ills of conventional AI; it places constraints on what can be immediately achieved. Taking the embodied route means that one must adopt a bottom-up approach to the construction of AI systems: we must start with the goat before tackling the chess grandmaster. Nonetheless, in the long term, many researchers in AI believe that embodiment will provide solutions to some of the hard problems currently confronting AI researchers.

Among this group are Rodney Brooks and his colleagues at MIT. His team are currently engaged in building a human-like robot called Cog equipped with binocular vision, eyes that move, human-like hands, and many other sensory and motor features designed to correspond to those of humans (see Brooks' web page on Cog at www.ai.mit.edu/projects/cog/). Brooks' philosophy is that humanoid intelligence requires humanoid interactions with the world.

His research is still very much a work-in-progress, but one of the key findings to emerge is that the preoccupation of traditional AI with the symbolic representation of the world may be misguided.

Much intelligent behaviour in both humans and animals, according to Brooks, is the result of complex interactions between local sensory and motor sub-systems, and involves neither the creation of, nor reasoning with, abstract symbolic representations.

As a simple illustration of this point, imagine I'm placing a six-pack of beer into my fridge by swinging it at the end of my fully extended arm (don't try this at home!). If my actions were being controlled by a conventional AI program, this might entail some rather complicated "cruise missile" style calculations to get the trajectory right and ensure that the beer didn't end up on the floor.

An embodied system, on the other hand, would take advantage of the physical properties of my extended arm, such as its pendulum-like dynamics, thus avoiding the need for complex computation.

In the next AI article, I'll look at how this embodied approach is suggesting new solutions to some of the hard problems in language and vision.
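Before then, a closing sketch in Python of the beer-swinging example. The arm length, starting angle, and time step are all invented, but it shows the idea: simple pendulum dynamics carry the load, and the only "decision" the controller makes is when to let go.

```python
# Toy sketch: passive pendulum dynamics do the work of trajectory planning.
# The arm swings under gravity (theta'' = -(g / L) * sin(theta)); the
# controller merely releases the six-pack as the arm passes its low point.

import math

G, L = 9.81, 0.7          # gravity (m/s^2) and arm length (m); assumed
theta, omega = -1.2, 0.0  # arm swung back to start (radians), at rest
dt = 0.001                # integration time step (s)

t = 0.0
while theta < 0.0:        # release as the arm passes the low point
    alpha = -(G / L) * math.sin(theta)   # pendulum angular acceleration
    omega += alpha * dt
    theta += omega * dt
    t += dt

speed = abs(omega) * L    # hand speed at release (m/s)
print(f"release after {t:.2f} s at hand speed {speed:.2f} m/s")
```

No trajectory is ever computed; the physics of the arm does the computation for free.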

Ronan Reilly is at: ronan.reilly@ucd.ie