In my previous post on gender and philosophical intuitions I mentioned that I am quite critical of intuition-based philosophical methodology, and since then I’ve been telling myself (and others) that I should develop my criticism in more detail. So here it is!
I want to articulate my criticism of this methodology in terms of a cognitive tendency that human agents seem to have, which has been identified and studied by the experimental psychology of reasoning: the tendency to reason towards the beliefs we already hold so as to lend them additional support. One of the terms used in the literature to describe this phenomenon is ‘belief bias’, but I also like to use the less judgmental term ‘doxastic conservativeness’.
Let me briefly summarize some of these results. In several experiments, subjects have shown a strong tendency to endorse or draw inferences whose conclusions are believable, and similarly a tendency to reject or refuse to draw inferences whose conclusions are unbelievable. One typical example comes from Sá, West and Stanovich (1999): subjects were presented with the following invalid syllogism and asked to assess whether it was a correct argument:
All living things need water.
Roses need water.
Thus, roses are living things.
Only 32% of the subjects said that this was not a valid argument. They were then given a short scenario set on a different planet, involving an imaginary species, wampets, and an imaginary class, hudon, and were subsequently asked to evaluate the following syllogism:
All animals of the hudon class are ferocious.
Wampets are ferocious.
Thus, wampets are animals of the hudon class.
Interestingly, 78% of the very same subjects, the wide majority of whom had failed to give the ‘logically correct’ response in the previous task, gave the ‘logically correct’ response here, i.e. that the syllogism is invalid. Even more significantly, the two syllogisms have the exact same form, AAA-2 (the universal quantifiers are omitted in the second premise and the conclusion).
Similarly, subjects tend to reject valid arguments if their conclusion is unbelievable. In (Evans et al. 1983), two valid syllogisms with the same underlying form, one with a believable conclusion and the other with an unbelievable conclusion, were evaluated as valid by 89% and 56% of the subjects, respectively. A similar tendency is observed in conclusion production tasks, where subjects are asked to draw their own conclusions rather than to judge the correctness of fully formulated arguments (see Oakhill & Johnson-Laird 1985). Such results have been replicated numerous times, and belief bias continues to be a topic of much interest in the psychology of reasoning community.
Now, how is this relevant to philosophical methodology? My general criticism is that intuition-based philosophical methodology in effect institutionalizes our tendency (as human reasoners) towards doxastic conservativeness. Again, doxastic conservativeness has much to recommend it in many situations, but one would think that in contexts of theoretical inquiry it is not always advantageous. Here I find myself in agreement with Tim Williamson (a bit to my surprise, I must admit):
“Again, philosophy is often presented as systematizing and stabilizing our beliefs, bringing them into reflective equilibrium: the picture is that in doing philosophy what we have to go on is what our beliefs currently are, as though our epistemic access were only to those belief states and not to the states of the world they are about. The picture is wrong; we frequently have better epistemic access to our immediate physical environment than to our own psychology. A popular remark is that we have no choice but to start from where we are, with our current beliefs. But where we are is not only having various beliefs about the world; it is also having significant knowledge about the world. Starting from where we are involves starting from what we already know, and the goal is to know more […].” (The Philosophy of Philosophy, p. 5) (emphasis added)
Yes, the goal is to know more, i.e. to go beyond the beliefs we already hold, possibly replacing them with beliefs that are more accurate, more sophisticated, more encompassing, what have you. But given the empirically detected tendency we have to seek confirmation for the beliefs we already hold, shouldn’t philosophical methodology take this feature of human reasoners into account and offer mechanisms that might act as a counterbalance to this tendency, when needed? Again, I am not saying that one should never be doxastically conservative in theoretical contexts (a blanket refusal would constitute another form of bias, perhaps to be named ‘the novelty bias’). But intuition-based methodology only seems to reinforce a reasoning tendency we already have, one which should arguably be compensated for when doing philosophy.
In this respect, the conservativeness of this particular style of doing philosophy stands in stark contrast with what is taken to be good practice in other fields of inquiry. In the natural sciences as elsewhere, one very important virtue of a theory is that it makes non-trivial (preferably testable) predictions; in philosophy, however (at least in a particular blend of philosophy), theories are judged successful precisely insofar as they make as few non-trivial (read: counterintuitive) predictions as possible, i.e. insofar as they leave our so-called ‘pre-theoretical intuitions’ largely intact. But what exactly are we accomplishing when formulating theories that confirm precisely the beliefs we already hold? Have we learned anything new? I take it that one of the main functions of scientific methodology is to formulate precepts and guidelines that allow us to reason more adequately in scientific contexts, precisely because our everyday forms of reasoning are often not appropriate for science. Philosophical methodology, I submit, should play a similar role: while it undoubtedly should capitalize on the cognitive resources we already spontaneously possess as human agents, it should also help us become better reasoners. But I don’t see intuition-based philosophical methodology accomplishing anything of the sort.
(Observations on the conservativeness of philosophy when compared to other fields of inquiry and the idea that methodology should make us better reasoners are also found in the work of Bishop and Trout, which I find very inspiring.)