Because we do not understand the brain very well we are constantly tempted to use the latest technology as a model for trying to understand it. In my childhood we were always assured that the brain was a telephone switchboard. (‘What else could it be?’) I was amused to see that Sherrington, the great British neuroscientist, thought that the brain worked like a telegraph system. Freud often compared the brain to hydraulic and electro-magnetic systems. Leibniz compared it to a mill, and I am told some of the ancient Greeks thought the brain functions like a catapult. At present, obviously, the metaphor is the digital computer. (John Searle, Minds, Brains and Science, 44)
As I am now preparing my philosophy of cognitive science course, which I will start teaching in November for the first time, one of the inevitable topics on my mind is the idea of the mind (or the brain) as a computer. I am a relative newcomer to the field of philosophy of cogsci, which in a sense means that I approach it with a certain naiveté and absence of, shall we say, prior indoctrination. At the same time, I am now also reading Louise Barrett’s wonderful book Beyond the Brain, whose chapter 7 is called ‘Metaphorical mind fields’. It begins with the famous quote by Searle above, and goes on to argue that the ‘mind as a computer’ conception is a metaphor; what is more, our tendency to forget that it is a metaphor does much damage to a proper understanding of what a human brain/mind is and does (problematizing the brain=mind equation is the general topic of the whole book).
… our use of the computer metaphor is so familiar and comfortable that we sometimes forget that we are dealing only with a metaphor, and that there may be other, equally interesting (and perhaps more appropriate) ways to think about brains and nervous systems and what they do. After all, given that our metaphors for the brain and mind have changed considerably over time, there’s no reason to expect that, somehow, we’ve finally hit on the correct one, as opposed to the one that just reflects something about the times in which we live. (Barrett 2011, 114/115)
(Our tendency to take artifacts as analogies to explain natural phenomena probably stems from the fact that, since we make artifacts, we think we have privileged access to how they work and what they do. Similarly, anthropomorphism is motivated by the thought that we understand ourselves better than any other creature or thing – well, little do we know…)
Indeed, the ‘mind as a computer’ analogy may well be viewed as the central pillar of much (though perhaps not all) of what is done under the heading of cognitive science. (“The central hypothesis of cognitive science is that thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures.” P. Thagard, SEP entry on cognitive science) But the extent to which the claim is truly treated as a metaphor/analogy varies significantly; I think it is fair to say that many theorists in fact endorse the ‘strong’ view that the mind is not merely like a computer: the mind is a computer.
As a matter of fact (and here is where my being a newcomer to the field becomes relevant), one of the aspects that bother me quite a bit in the literature, both in cognitive science and in the philosophy of cognitive science, is the excessive leniency with which metaphorical language is used. Metaphorical and non-metaphorical language are superimposed, and it is often unclear whether what is being said is intended as a metaphor or as a literal description of the phenomena in question; in fact, I suspect that the authors themselves often do not know where to draw this line, so thoroughly has arguing through metaphors become the norm.
One example is Clark’s recent BBS article on predictive coding. (I do think Clark is even more lenient with his use of metaphorical language than other authors, but my choosing this paper as an example should not be understood as singling him out as the only ‘culprit’.) The paper is full of spatial metaphors (top-down, bottom-up, forward flow, backward flow), which for me at least made it increasingly difficult to grasp the exact details of the cognitive architecture he intends to describe.
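(To illustrate what the spatial talk is standing in for, here is a minimal toy sketch of a predictive coding update, written in Python. It is emphatically not Clark’s own formulation, just my own bare-bones reconstruction of the general scheme: the ‘top-down’/‘backward’ signal is a prediction generated from a higher-level estimate, the ‘bottom-up’/‘forward’ signal is the prediction error, and the estimate is revised so as to reduce that error.)

```python
# Toy, single-level predictive coding sketch (my reconstruction, not Clark's formulation).
# A higher level keeps an estimate mu of a hidden cause; it sends the prediction g(mu)
# "down" (the backward/top-down flow), receives the prediction error back "up"
# (the forward/bottom-up flow), and nudges mu so as to shrink that error.

def g(mu, weight=2.0):
    """Generative mapping: how the higher level predicts the incoming signal."""
    return weight * mu

def infer(x, steps=50, lr=0.1, weight=2.0):
    mu = 0.0                              # initial higher-level estimate
    error = x - g(mu, weight)
    for _ in range(steps):
        prediction = g(mu, weight)        # 'top-down' / 'backward' flow
        error = x - prediction            # 'bottom-up' / 'forward' flow
        mu += lr * weight * error         # gradient step on squared prediction error
    return mu, error

if __name__ == "__main__":
    mu, err = infer(x=4.0)
    print(f"estimate: {mu:.3f}, remaining prediction error: {err:.5f}")
```

Even in this caricature, the spatial vocabulary turns out to be shorthand for nothing more exotic than which term of an error-minimization scheme one is talking about; whether the full-blown architecture is meant literally or metaphorically is precisely the question.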
I am of course aware that there is an extensive literature on the use of metaphors in science (‘the selfish gene’ and other anthropomorphic metaphors are also discussed in Barrett’s book), and I certainly do not want to maintain that the use of metaphorical/analogical reasoning in science is always pernicious. But it seems to me that in cognitive science in particular, excessive use of metaphorical language is hindering rather than facilitating progress (here is a post over at Nature making a similar point, though not restricted to cogsci), especially given how easy it is for us to forget when we are talking metaphorically and when we are talking ‘for reals’. So I anticipate that I will spend a great deal of my philosophy of cogsci course reflecting on this (to my mind) somewhat problematic aspect of current practice within cognitive science. (‘The mind as a computer’ will be the first one under attack.)
I wonder, am I the only one bothered by this? I am truly interested in thoughts readers may have on the whole thing.