Last week I had a post up on metaphorical language in cognitive science, which generated a very interesting discussion in the comments. I don’t think I sufficiently made the case for the ‘too much’ claim there, but the post was mostly intended to raise the question and foster some debate. (It succeeded in that respect!)
There is one aspect of it, though, that I would like to follow up on. One commenter (Yan) pointed out that it’s not so surprising that digital computers ‘think’ like us, given that they are based on a conception of computation – the Turing machine – which was originally proposed as a formal explanans for certain cognitive activities that humans in fact perform: calculations/computations. It is important to keep in mind that before the work of Turing, Post, Church and others on the concept of computability in the 1930s, computation/effective calculation was an informal concept with no precise mathematical definition (something that has been noted by e.g. Wilfried Sieg in his ‘Gödel on computability’). To provide a mathematically precise account of this concept, which in turn corresponds to cognitive tasks that humans do engage in, was precisely the goal of these pioneers. So from this point of view, to say that digital computers are (a bit) like human minds gets the order of things right; but to say that human minds are like digital computers gets it the wrong way round.
There are two related aspects of Turing’s original conception of computation and its later developments that seem worth pointing out. The first concerns the scope of the claim: even if a Turing machine succeeds in emulating the exact cognitive processes that humans go through when performing computations/calculations (which is in itself debatable), this is in no way incompatible with the idea that there are other cognitive processes humans engage in which are of a very different nature. In other words, the original claim was that what is now known as a ‘Turing machine’ captures a fragment of a human’s cognitive life, not that everything we want to count as a manifestation of (human) cognition could be adequately explained by the same model. (Of course, Turing later said things that do seem to point in the second direction.)
In my opinion, one of the ‘mistakes’ of the computational conception of the mind is to extrapolate this model, which was originally intended to have a rather limited scope, to explain all the phenomena that we now want to count as cognitive (including perception, action, memory, etc.). Of course, this leads to the discussion of what we want to count as cognition, and the familiar demarcation debate. Still, the core of this extrapolation is the (to my mind, contentious) claim that calculation/computation serves as the paradigmatic case for understanding human cognition in its multifaceted nature.
(A historical aside to make Eric Schliesser happy. This idea goes back at least to Hobbes, who said:
By ratiocination, I mean computation. Now, to compute is either to collect the sum of many things that are added together, or to know what remains when one thing is taken out of another. Ratiocination, therefore, is the same with addition and subtraction. (Hobbes, Computatio, p. 3)
And Leibniz:
The most profound investigator of the principle of all things, Thomas Hobbes, has rightly contended that every work of the human mind consists in computation, and on this understanding, that it is effected either by adding up a sum or subtracting a difference… (Leibniz, Logical Papers, p. 194)
I owe these two passages to a great article by M. Mugnai, ‘Logic and mathematics in the Seventeenth Century’.)
The second aspect worth pointing out is that the original Turing machine is entirely compatible with (in fact, it even seems to suggest) an externalist, embodied/embedded approach to cognition. Recall that, on Turing’s original model, the computer is a person who executes the bodily procedures defined by the algorithm, on the basis of external representations. This is a point that I’ve briefly touched upon in my book that came out last year (towards the end of Chapter 1), and that Barrett also refers to in her Beyond the Brain, Chapter 7. Barrett relies in particular on the work of Andrew Wells, who defends an interpretation of Turing computation that emphasizes the role of the environment, and of the agent’s bodily interactions with it, for cognitive processes – in fact, and perhaps surprisingly, very much in the spirit of Gibson’s ecological approach (see here for a paper by Wells on Gibson’s affordances and Turing’s notion of computation). I still need to read Wells’ work more carefully (2006 book here), but it seems really exciting, and very much in line with the account of the cognitive role of notations that I defended in my book Formal Languages in Logic.
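To make this externalist reading a bit more concrete, here is a minimal sketch of a Turing machine written so that the division of labour is visible: the tape is an external, persistent environment, and the ‘computer’ is an agent with a tiny internal state that can only read, write, and move one square at a time. This is just my own toy illustration (in Python; the example machine, its state names and the function names are invented for the occasion), not Turing’s original formalism or Wells’ reconstruction of it.

```python
# A minimal Turing machine sketch. The tape plays the role of the external
# environment (Turing's paper tape, or the clerk's worksheet); the agent has
# only a single internal state label and a small table of rules.
from collections import defaultdict

BLANK = "_"

def run(program, tape_string, start_state="scan", halt_state="halt", max_steps=10_000):
    """Run a Turing machine.

    program: dict mapping (state, read_symbol) -> (write_symbol, move, next_state),
             where move is "L" or "R".
    The tape is kept as an external, persistent structure: all the 'memory'
    of the computation lives there, not in the agent.
    """
    tape = defaultdict(lambda: BLANK, enumerate(tape_string))
    head, state = 0, start_state
    for _ in range(max_steps):
        if state == halt_state:
            break
        write, move, state = program[(state, tape[head])]
        tape[head] = write                    # act on the external notation
        head += 1 if move == "R" else -1      # move one square at a time
    cells = [tape[i] for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip(BLANK)

# Unary addition: "111+11" should become "11111".
add_unary = {
    ("scan", "1"): ("1", "R", "scan"),       # walk right over the strokes
    ("scan", "+"): ("1", "R", "scan"),       # merge the two blocks by overwriting '+'
    ("scan", BLANK): (BLANK, "L", "erase"),  # end of input: step back
    ("erase", "1"): (BLANK, "L", "halt"),    # remove the surplus stroke and stop
}

print(run(add_unary, "111+11"))   # -> 11111
```

The point of the sketch is simply that almost all of the ‘memory’ of the computation lives in the external notation on the tape; the agent itself carries nothing but a single state label and a handful of rules, which is what makes the model congenial to an embedded/extended reading.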
These observations suggest that those who took the concept of a Turing machine to imply a full-blown computational (internalist) conception of the mind failed to attend to these two important aspects of Turing’s original discussion. Thus, they ended up with an impoverished conception of the human mind: one which takes some of its activities to define the nature of all of its activities, and which overlooks the role of the body and the environment for cognition.
[PS: As of tomorrow around lunchtime, I will have virtually no access to the internet for 10 days, so I won’t be able to moderate and respond to comments during this period.]