In the spirit of the recent dueling posts here at NewAPPS, here’s a rejoinder to posts by my esteemed co-bloggers Helen de Cruz and Mohan Matthen. I am a big fan of the work of both, but I disagree quite a bit with their take on evolutionary psychology, as I am no fan of most of the work done under this heading. (Ok, that’s a massive understatement.) What I don’t like about it is not only that it is often much too speculative for my taste (Cosmides’ ‘cheater-detection module’ being a good example), but also that it takes the ‘wrong’ conception of evolution as its starting point. I am a staunch partisan of the anti-ultra-adaptationist conception of evolution of Stephen Jay Gould and others, and thus reject both the idea that phenotypic traits in organisms are primarily adaptations and the related thesis of massive modularity. Gould emphasized in particular the constraints imposed by the internal architecture of the organism and the mutual influence of its different aspects (hence his rejection of massive modularity in favor of a more holistic conception of organisms).
A case in point is the relation between truth and fitness in evolutionary explanations of belief. Prima facie, the idea that having true beliefs about its environment would enhance the fitness of an organism is rather plausible. As the abstract of the Griffiths and Wilkins paper puts it, it can be argued that “the truth of beliefs in a certain domain is, in fact, connected to evolutionary success, so that evolution can be expected to design systems that produce true beliefs in that domain.”
The posts by Mohan and Helen tackle in particular evolutionary accounts of moral beliefs, an area that I readily admit is not really my cup of tea (I even seem to have trouble believing that there is such a thing as morality at all, but that’s a different story). I am more familiar with evolutionary accounts of reasoning and logic, most of which are, to my mind, prototypical ‘just-so stories’. (Exception: the work of Stenning and van Lambalgen, where the evolutionary framework does play an important role, but in the ‘right’ way.)
Personally, I tend to think that in the vast majority of cases and domains, fitness and truth are largely orthogonal desiderata; they may coincide at times, but they may just as well not. In fact, I can think of a few examples of evolutionarily advantageous false beliefs. Suppose, for instance, that there is no such thing as free will; arguably, it would still be evolutionarily advantageous for a human individual to believe that there is, and indeed the vast majority of humans seem to hold this view. (It is immaterial for my argument here whether there is such a thing as free will or not.) Thinking that your child is beautiful, special, awesome, etc. also offers an obvious evolutionary advantage to the genes in question, even though not all kids are beautiful, special, awesome, etc.
But anyway, here I want to offer a Gouldian argument for why fitness is most likely not truth-conducive (or, perhaps better put, why truth is not fitness-enhancing). It is inspired by a study I came across just yesterday, which investigates why a large number of people (around 40% in most studies) do not see the invisible gorilla. This is the famous experiment designed by Chabris and Simons a few years back: participants are shown a video in which two teams pass basketballs around, and are asked to count the bounce passes and aerial passes made by the black team. At some point, a person dressed as a gorilla walks in among the players. Participants are then asked for the two pass counts and whether they noticed anything unusual. Typically, only about 60% of them report seeing the gorilla, even though it is very conspicuously in view.
The new study investigated the correlation between missing the gorilla and working memory capacity. The hypothesis was that individuals with lower working memory capacity (measured by a standard test involving numerical calculations and remembering letters) would be more likely to miss the gorilla, simply because they were using most of their available working memory to keep track of the passes. The results confirmed the hypothesis:
"if you are on task and counting passes correctly, and you're good at paying attention, you are twice as likely to notice the gorilla compared with people who are not as good at paying attention," Watson says. "People who notice the gorilla are better able to focus their attention. They have a flexible focus in some sense."
What does this tell us about evolutionary arguments for the truth of our beliefs? The main point is that constraints on cognitive resources available to human individuals must be taken into account. Humans have limited cognitive resources (working memory is just one of the relevant dimensions), but the world is infinitely complicated. Clearly, many of us can't count passes and see gorillas at the same time.
Given what we’ve got to go by, the best we can expect is to form beliefs that roughly approximate how the world really is – what are often referred to as ‘quick and dirty’ solutions. A more accurate set of beliefs would arguably not enhance fitness, as it would overburden the agent’s limited, internally determined resources. Having a workable, tractable theory of the world is just as crucial as having an accurate one, but these two desiderata obviously compete with one another. (There are also interesting things to be said about the pronounced individual differences that emerge from these results, and about what they mean for evolutionary accounts of belief-forming processes, but I will leave those aside for now.)
It may be argued that, while a human cannot have accurate beliefs about all the facts of the world, she can have true beliefs about small portions of reality, and that this is why the correlation between fitness and true beliefs must be analyzed on a case-by-case basis. But here too the trade-off argument applies: it may be more advantageous to have rough theories of larger portions of reality than accurate theories of small portions.
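To make the trade-off a bit more concrete, here is a toy numerical sketch of my own (not from the post or from any of the papers mentioned, and with entirely made-up numbers and a made-up ‘fitness’ proxy): an agent with a fixed cognitive budget can either hold very accurate beliefs about a small portion of the world or rougher beliefs about a much larger portion, and under these hypothetical parameters the rough-but-broad strategy comes out ahead.

```python
# Toy sketch (my own illustration): a crude model of the claimed trade-off between
# accuracy and coverage under a fixed cognitive budget. All parameters and the
# "fitness" proxy are hypothetical.

WORLD_SIZE = 1000   # number of environmental "facts" an agent might track
BUDGET = 200.0      # fixed cognitive budget (arbitrary units of effort)

def expected_fitness(coverage: int, accuracy: float, cost_per_fact: float) -> float:
    """Fitness proxy: each tracked fact pays off with probability `accuracy`;
    untracked facts pay off at chance level (0.5). Blowing the budget scores zero."""
    if coverage * cost_per_fact > BUDGET:
        return 0.0
    return (coverage * accuracy + (WORLD_SIZE - coverage) * 0.5) / WORLD_SIZE

# Agent A: highly accurate beliefs about a small portion of reality (costly per fact).
narrow_but_accurate = expected_fitness(coverage=50, accuracy=0.99, cost_per_fact=4.0)

# Agent B: rough, "quick and dirty" beliefs about a much larger portion (cheap per fact).
broad_but_rough = expected_fitness(coverage=400, accuracy=0.75, cost_per_fact=0.5)

print(f"narrow but accurate: {narrow_but_accurate:.3f}")  # ~0.525
print(f"broad but rough:     {broad_but_rough:.3f}")      # ~0.600
```

Nothing hangs on these particular numbers, of course; the point is only that once a resource budget is in the picture, accuracy and coverage trade off against each other rather than lining up neatly with fitness.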
I realize that similar arguments based on resource-bounded conceptions of human cognition have been put forward before (though no obvious reference comes to mind right now). Rather than claiming to offer a new argument, I am simply bringing to the fore an element that I thought was missing from the posts by Helen and Mohan. As a good Gouldian, I deem it important to emphasize that, while it might be truly adaptive for an ideal organism to have highly truth-conducive belief-forming tendencies, actual organisms are always constrained by the possibilities afforded by their biological make-up, and so we may well be better off entertaining a large chunk of false beliefs about the world.
UPDATE: Let me add a plug for the work of Thomas Reydon, a philosopher of biology in Hannover whom I had the pleasure of meeting a few weeks ago. Among other things, he's been looking into "the explanatory scope of evolutionary theory (which involves asking how widely evolutionary theory can be applied in domains outside of biology proper, as well as critically examining the feasibility of Universal Darwinism)." Good stuff!