(Cross-posted at M-Phi)
A well-known phenomenon in the empirical study of human reasoning is the so-called Modus Ponens–Modus Tollens asymmetry. In reasoning experiments, participants almost invariably ‘do well’ with MP (or at least with something that looks like MP – see below), but the success rate for MT drops considerably (from almost 100% for MP to around 70% for MT – Schroyens and Schaeken 2003). As a result, any theory purporting to describe human reasoning accurately must account for this asymmetry. Now, given that in classical logic (and in many non-classical systems) MP and MT are equally valid, plain vanilla classical logic fails rather miserably in this respect.
As noted by Oaksford and Chater (‘Probability logic and the Modus Ponens-Modus Tollens asymmetry in conditional inference’, in this 2008 book), some theories of human reasoning (mental rules, mental models) explain the asymmetry at what is known as the algorithmic level (a terminology proposed by Marr (1982)) – that is, in terms of the mental processes that (purportedly) implement deductive reasoning in a human mind. So according to these theories, performing MT is harder than performing MP (for a variety of reasons), which is why reasoners, while still trying to reason deductively, have difficulties with MT. Other theorists argue that participants are not in fact trying to reason deductively at all, so the asymmetry is not related to some presumed competence-performance gap. (Marr’s term for the general goal of the processes, rather than the processes themselves, is ‘computational level’ – the terminology is somewhat unnatural, but it has now become standard.) Oaksford and Chater are among those favoring an analysis at the computational level, in their case proposing a Bayesian, probabilistic account of human reasoning as a normative theory that not only explains but also justifies the asymmetry.
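The probabilistic reading can be made concrete with a small sketch. On a Bayesian account in the style of Oaksford and Chater, confidence in MP is modeled by P(q | p) and confidence in MT by P(not-p | not-q). The joint distribution below is invented purely for illustration (it is not their data); it shows how the first quantity can be high while the second is markedly lower, even though the conditional ‘if p then q’ is strongly supported.

```python
# Illustrative sketch of the probabilistic account of the MP-MT asymmetry.
# The joint distribution over p and q is made up for illustration only.
joint = {
    ("p", "q"): 0.47,
    ("p", "not_q"): 0.03,      # 'if p then q' rarely fails, so P(q | p) is high
    ("not_p", "q"): 0.43,      # q is common even without p
    ("not_p", "not_q"): 0.07,  # not-q cases only slightly favor not-p
}

def marginal(var, value):
    """Marginal probability that variable 'p' (index 0) or 'q' (index 1) takes the given value."""
    idx = 0 if var == "p" else 1
    return sum(prob for key, prob in joint.items() if key[idx] == value)

# MP endorsement modeled as P(q | p)
p_q_given_p = joint[("p", "q")] / marginal("p", "p")

# MT endorsement modeled as P(not-p | not-q)
p_notp_given_notq = joint[("not_p", "not_q")] / marginal("q", "not_q")

print(f"P(q | p)         = {p_q_given_p:.2f}")         # 0.94
print(f"P(not-p | not-q) = {p_notp_given_notq:.2f}")   # 0.70
```

With these (stipulated) numbers, a reasoner who tracks conditional probabilities rather than deductive validity should endorse MP at around 94% and MT at only around 70% – roughly the gap found in the experiments, and on this account a rational one.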