I have been reading Daniel Hutto and Erik Myin’s book Radicalizing Enactivism for a critical notice in the Canadian Journal of Philosophy. Enactivism is the view that cognition consists of a dynamic interaction between the subject and her environment, and not in any kind of contentful representation of that environment. I am struck by H&M’s reliance on a famous 1991 paper by the MIT roboticist Rodney Brooks, “Intelligence Without Representation.” Brooks’s paper is quite a romp—it has attracted the attention of a number of philosophers, including Andy Clark in his terrific book, Being There (1997). It’s worth a quick revisit today.
To soften his readers up for his main thesis, Brooks starts out his paper with an argument so daft that it cannot have been intended seriously, but which encapsulates an important strand of enactivist thinking. Here it is: Biological evolution has been going on for a very long time, but “Man arrived in his present form [only] 2.5 million years ago.” (Actually, that’s a considerable over-estimate: Homo sapiens is not more than half a million years old, if that.)
He invented agriculture a mere 19,000 years ago, writing less than 5000 years ago and “expert” knowledge only over the last few hundred years.
This suggests that problem solving behaviour, language, expert knowledge and application, and reason are all pretty simple once the essence of being and reacting are available. That essence is the ability to move around in a dynamic environment, sensing the surroundings to a degree sufficient to achieve the necessary maintenance of life and reproduction. This part of intelligence is where evolution has concentrated its time—it is much harder. (141)
The argument seems to be this:
(a) That evolution took a long time to do something shows that it is hard.
(b) Animals other than humans have intelligence only in the form of being able to react to the environment in real time.
(c) Other kinds of intelligence took a very short time to evolve because they are “pretty simple” add-ons to “being and reacting.”
Conceptual reasoning and language are just gewgaws, Brooks suggests: to focus on them in the attempt to understand cognition is like time-travelling engineers of the 1890s being taken on a flight aboard a Boeing 747. Asked to duplicate its amazing capacity for “artificial flight,” they recreate its seats and windows and suppose they have grasped the central innovation.
Now, premise (b) of this argument is dead wrong. Non-sensorimotor cognition and learning are very ancient indeed (in evolutionary terms). For example, even simple invertebrates sense and learn: Eric Kandel found conditioning in Aplysia, a very simple and ancient organism possessing only about 20,000 neurons in total. (In fact, Kandel demonstrated conditioning at the cellular level, so the complexity of the whole system was, up to a point, irrelevant.) Classical conditioning (and learning in general) consists of modifications to the internal states of an organism as a consequence of exposure to the environment. It is an outside-in mode of non-behavioural cognition: the creation of inner states corresponding to environmental regularities to which the organism has been exposed. Operant conditioning is a bit more complex: it is an internal modification that results from sensorimotor interaction with the environment. (It is worth noting that though it results from dynamic interaction, it is nonetheless an internal modification, not a mode of interaction.) The evolutionary history of conditioning and learning shows that there is a very long history of cognitive evolution that is independent of sensorimotor evolution. Language is the product of that evolutionary stream as much as of any other. It is neither discontinuous with what went before nor a simple add-on to environmental interaction.
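Just to make that outside-in picture vivid (this is my own toy illustration, not anything in Kandel, Brooks, or H&M): the textbook Rescorla–Wagner rule treats the associative strength between two stimuli as a single inner quantity that gets nudged toward whatever the environment actually delivers. The learning rate, the pairing schedule, and the variable names below are illustrative stand-ins only.

```python
# Toy Rescorla-Wagner model of classical conditioning.
# The associative weight v is an *inner state* that comes to track an
# environmental regularity (tone predicts shock) purely through exposure,
# with no sensorimotor loop anywhere in the picture.

ALPHA = 0.3   # learning rate (salience of the conditioned stimulus)
LAMBDA = 1.0  # maximum associative strength the unconditioned stimulus supports

def rescorla_wagner(trials, v=0.0, alpha=ALPHA, lam=LAMBDA):
    """Update associative strength v over a sequence of trials.

    Each trial is a pair (cs_present, us_present) of booleans:
    conditioned stimulus (e.g. tone), unconditioned stimulus (e.g. shock).
    """
    history = []
    for cs, us in trials:
        if cs:  # learning only occurs when the CS is presented
            target = lam if us else 0.0
            v += alpha * (target - v)   # move v toward what the world delivered
        history.append(v)
    return history

# Ten tone+shock pairings, then five tone-alone (extinction) trials.
schedule = [(True, True)] * 10 + [(True, False)] * 5
for i, v in enumerate(rescorla_wagner(schedule), start=1):
    print(f"trial {i:2d}: associative strength = {v:.3f}")
```

All the toy is meant to show is that the learner ends up with an inner state that mirrors a regularity outside it, acquired simply by being exposed to that regularity.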
Brooks’s flagship example is a robot (dating back to 1987) that wanders around avoiding obstacles. In his introductory description, he says:
It is necessary to build this system by decomposing it into parts, but there need be no distinction between a “perception subsystem,” a central system, and an “action system.” In fact, there may well be two independent channels connecting sensing to action (one for initiating motion, and one for emergency halts), so there is no single place where “perception” delivers a representation of the world in the traditional sense. (147, emphasis added)
The traditional idea of obstacle avoidance relied on an egocentric map of the surrounding area. Brooks found that this was not necessary. He talks repeatedly about “data” and the like, but protests:
Even at a local level we do not have traditional AI representations. We never use tokens that have semantics that can be attached to them. The best that can be said in our implementation is one number is passed from a process to another. (149)
The second sentence above sounds perversely like Fodor’s syntactic theory of mind: the machine runs on the interactions of its internal tokens without knowing its own semantics. But that is not the question. The question is: Does it have semantics? Or: Why is this number passed from one process to another? What is the significance of the transfer? The answers to such questions are embedded in Brooks’s description of his machine:
The finite state machine labelled sonar simply runs the sonar devices and every second emits an instantaneous map with the readings converted to polar coordinates. This map is passed on to the collide and feelforce finite state machine. The first of these simply watches to see if there is anything dead ahead, and if so sends a halt message . . . Simultaneously, the other finite state machine computes a repulsive force on the robot, based on an inverse square law . . . (153)
I am not suggesting that this kind of agentive talk should be taken literally. My point is that it provides a design perspective on the machine, without which you cannot comprehend the setup. In an evolutionary setting, this kind of description shows us why an organic system has the external connections that it does. In short, it tells us what environmental significance various state transitions possess. And if the machine could learn, we’d want to work out the environmental significance of its interactions, wouldn’t we? How else could we figure out what it had learned?
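For concreteness, here is a minimal sketch of the two-channel layout Brooks describes, written in ordinary Python rather than in his subsumption-architecture machinery. The angle threshold for “dead ahead,” the force constant, the data layout, and the function names are my own stand-ins, not Brooks’s implementation.

```python
import math

# Two independent sensing-to-action channels, as in Brooks's description:
# "collide" watches for anything dead ahead and issues a halt, while
# "feelforce" sums inverse-square repulsions from the sonar returns.
# Thresholds and constants below are illustrative guesses.

def collide(sonar_map, dead_ahead=0.2, min_range=0.5):
    """Channel 1: emit a halt signal if any reading is roughly dead ahead."""
    return any(abs(angle) < dead_ahead and dist < min_range
               for angle, dist in sonar_map)

def feelforce(sonar_map, k=1.0):
    """Channel 2: repulsive force from each obstacle, inverse-square in distance."""
    fx = fy = 0.0
    for angle, dist in sonar_map:
        if dist > 0:
            magnitude = k / (dist ** 2)
            fx -= magnitude * math.cos(angle)  # push away from the obstacle
            fy -= magnitude * math.sin(angle)
    return fx, fy

# One "second" of the loop: sonar readings as (bearing in radians, distance in metres).
sonar_map = [(0.0, 0.4), (0.8, 1.5), (-1.2, 2.0)]

if collide(sonar_map):
    print("halt")                                  # emergency channel wins
else:
    print("heading force:", feelforce(sonar_map))
```

Even in this trivial form, the only way to say why those particular numbers flow between those particular processes is to describe the readings as being about obstacles at certain bearings and ranges; that is the environmental significance in question.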
Two points, then. First, the evolution of cognition has cognitive starting points. Second, even Brooks's robots have cognitive states. But I am surely not saying anything new. Dan Dennett said it all, didn’t he, a decade or so earlier? I am just a bit surprised to find all of this still being taken so seriously nearly a quarter of a century later.