(Cross-posted at M-Phi)
In a recent paper, the eminent psychologist of reasoning P. Johnson-Laird says the following:
[T]he claim that naïve individuals can make deductions is controversial, because some logicians and some psychologists argue to the contrary (e.g., Oaksford & Chater, 2007). These arguments, however, make it much harder to understand how human beings were able to devise logic and mathematics if they were incapable of deductive reasoning beforehand.
This last claim strikes me as very odd, or at the very least as poorly formulated. (To be clear, I side with those, such as Oaksford and Chater, who think that deductive reasoning must be learned to be mastered and competently practiced by reasoners.) It looks like a doubtful inference to the best explanation: humans have in fact devised logic and mathematics, which are crucially based on the deductive method, so they must have been capable of deductive reasoning before that. Something like: birds had to have fully formed wings before they could fly – hmm, I don’t think so… Instead, the wing analogy suggests that there must be some precursors to deductive reasoning skills in untrained reasoners, but the phylogeny of the deductive method (and to be clear, I’m speaking of cultural evolution here) would have been a gradual, self-reinforcing process.
Now, the first point requiring clarification is: what do we mean by ‘deductive reasoning’? Depending on how broadly or narrowly we construe the concept of deduction, the question of whether untrained reasoners do or do not engage in deductive reasoning will receive different answers, even on the basis of the same data. And yet surprisingly few psychologists have addressed the issue of what they mean by ‘deduction’ in the first place. My own conceptualization of deduction rests on two basic (and fairly uncontroversial) components: (1) the willingness to reason from unknown or false premises; (2) the formulation of indefeasible arguments, where the premises necessitate the truth of the conclusion in that they allow for no counterexamples (i.e. situations where the premises are the case and the conclusion is not). (Oaksford and Chater also insist on the indefeasible vs. defeasible divide, which in turn can be treated formally in different ways: from a probabilistic, Bayesian perspective (as they do), or from the perspective of non-monotonic logics, as in the work of Stenning and van Lambalgen.) Given these two components, the task is now to explain how they could have emerged, both from the point of view of their cultural phylogeny and from the point of view of their ontogeny in a particular individual. The first question requires a historical approach, while the second is to be answered on the basis of research in psychology and education.
With respect to the cultural phylogeny of deduction, the hypothesis I am working on at the moment (together with the other members of my ‘Roots of Deduction’ research project) is that dialogical practices of argumentation are the actual historical precursors both for the idea of reasoning from unknown premises and for the concept of indefeasible arguments. It is now widely accepted that the debating, dialectical practices of the early Academy form the background for the emergence of ‘logic as we know it’, which finds in Aristotle’s Prior Analytics its first mature formulation. Now, granting a premise ‘for the sake of the argument’ is a familiar move in these contexts (as can be seen, for example, in Aristotle’s Topics). As for the concept of indefeasible arguments, at this point my working hypothesis is that the requirement of necessary truth preservation was initially a strategic desideratum, a powerful way to ‘beat’ your opponent in a debate. It is only at a later stage that it became a constitutive feature of the deductive method as such.
By emphasizing the argumentative, dialogical origins of deductive reasoning, my account bears some similarities to Mercier and Sperber’s recently proposed ‘argumentative theory of reasoning’, but it differs from their account in two important respects: first, my claim is restricted to deductive reasoning, thus leaving open the possibility that other forms of human reasoning do not have dialogical, argumentative origins; second, mine is a story of cultural evolution, whereas Mercier and Sperber are interested in the biological, evolutionary emergence of reasoning as an adaptation. But to pursue the evolutionary analogy, we may say that the fact that the deductive method was later co-opted as a methodology for scientific inquiry (thus aiming at truth, and not just at winning the debate) can be seen as a case of exaptation: a shift of function occurred.
With respect to the ontogeny of deductive reasoning, we are now interested in how a human reasoner may develop deductive skills upon training. Again, this training will have to be grounded in cognitive possibilities that are available to humans from the start, but we need not postulate innate deductive abilities to explain how humans can learn to reason deductively. (A helpful analogy here is with writing, as investigated in particular by S. Dehaene: it requires extensive training to be learned, but it naturally taps into skills and abilities that are part of the human neural make-up from the start.)
The first component – taking unknown or false premises to reason with – is usually taken for granted by reasoning researchers, but this is arguably a consequence of sampling bias: research on reasoning has for the most part been conducted with university undergraduates, who thus had a fair amount of formal education behind them. The few studies with unschooled participants, such as the classic study by Luria, suggest that this too is a skill that needs to be learned, and schooling is the typical context in which this happens (think of how a teacher formulates a simple arithmetic problem by giving the students some initial conditions which they must accept uncritically). But there do seem to be precursors outside the school context for the practice of taking premises at face value. As investigated by Paul L. Harris and collaborators (I am currently collaborating with Harris on a research project), situations of story-telling and pretend play can prompt children to accept premises which they know to be false, and thus to reason in closer accordance with the deductive canons.
The second component, namely the notion of indefeasible arguments, is where typical undergraduate participants deviate most clearly from the deductive canons. Here too it seems to be first and foremost a matter of training (as shown, e.g., by the results in this great paper by Morris and Sloutsky, and also in a paper by Evans et al.), especially in school settings. But again, there seems to be a clear precursor in the practice of adversarial argumentation, as suggested by the idea that necessary truth preservation was initially a strategic desideratum in the historical development of the deductive method. (This is a hypothesis to be investigated further, which I intend to do with my collaborators if our funding application is successful. It seems plausible even if one does not hold the view that ontogeny recapitulates phylogeny.)
Now, this is admittedly all somewhat sketchy; there is still a lot of work ahead to spell out these ideas in more detail and provide further corroboration for the main claims. But I hope to have at least convinced some of you that Johnson-Laird’s predicament is only an imaginary one: it is perfectly possible to come up with a coherent story for the emergence of the deductive method in logic and mathematics without having to resort to the prior existence of deductive reasoning in human reasoning practices. (I'm working on a paper on this topic at the moment, so feedback is much appreciated.)