How can we combine the economic necessities of work with caring for infants? This dilemma recurs across cultures, and western culture is no exception. In a series of interviews with professors who are mothers (which I hope to put on NewApps by the end of this month), one of my respondents, whose children are now grown, remarked about their preschool years:
"I was completely stressed out. It wasn’t just that childcare was expensive—and even with two salaries it was a stretch: It was insecure. If a childcare provider decided to quit, I would be left in the lurch; if my kid wet his pants once too often he’d be kicked out of pre-school [which had strict rules about children being toilet-trained] and I’d have to make other arrangements."
This concern resonates with many parents. It is especially acute among low-income single mothers, who struggle to find last-minute childcare to fit their employers' unpredictable scheduling. Also symptomatic are heart-wrenching stories like that of a woman whose children were taken away because, unable to find childcare when she had to go on a job interview, she left them in a car, or of a woman who was arrested for allowing her nine-year-old daughter to play in a park while she worked in a nearby fast food restaurant.
Can we learn anything from how other cultures solve the working mother's dilemma?
Thomas Reid argued that the human default trust in testimony is a gift of nature, sustained by two principles that "tally with each other": the propensity to speak the truth and the tendency to trust what others tell us. Interestingly, he observed an embodied aspect of this trust:
It is the intention of nature, that we should be carried in arms before we are able to walk upon our legs; and it is likewise the intention of nature, that our belief should be guided by the authority and reason of others, before it can be guided by our own reason. The weakness of the infant, and the natural affection of the mother, plainly indicate the former; and the natural credulity of youth, and authority of age, as plainly indicate the latter. The infant, by proper nursing and care, acquires strength to walk without support (1764, Inquiry into the Human Mind, chapt VI, Of Seeing)
Reid's observations point to an intriguing possibility: to what extent is social cognition, such as trust in testimony, influenced by our bodily position, in particular the position we have as helpless infants? The Japanese primatologist Tetsuro Matsuzawa has argued that the supine position (that is, lying on the back) of human newborns has been a decisive factor in the evolution of human social cognition.
Humans and chimpanzees differ quite markedly in how much they trust others. For instance, although both chimpanzees and humans imitate, human children are more prone to overimitation than juvenile chimps: the children, but not the chimps, indiscriminately copy actions by an adult that are redundant for obtaining a desired result (see e.g., here).
There are several variants in circulation of a list of skills our grandparents had that the majority of us lack, for instance, 7 skills your grandparents had and you don't. Examples include ironing really well, sewing, knitting, crocheting, canning, cooking a meal from scratch, writing in beautiful longhand, basic DIY skills... What have the majority of us lost by no longer having these skills, which I'll call grandparent skills for short?
As Lizzie Fricker argued today in a workshop held in honor of Charlotte Coursier, trust in other people is common and is a pervasive element of human life. We defer to the knowledge of others (testimonial dependence) and to their expertise (practical dependence): we rely on experts to tell us what the weather will be like, to fix our car, to give us a new haircut. Often, this deference is shallow and dispensable (we could in principle do it ourselves), but it can also be deep and ineluctable, as when we rely on electricians and other specialists.
This division of cognitive labor provides us with enormous gains, but does an increased reliance on the testimony and expertise of others also come with costs? Fricker thinks we do not reflect enough on this question, especially as the extent of both testimonial and practical dependence seems to have increased dramatically in recent years. People increasingly rely on Google rather than on internally stored semantic knowledge, and they increasingly outsource practical skills – navigation with maps, dead reckoning, and compasses is replaced by user-friendly technologies like GPS devices.
Some time ago, I wrote a blog post defending the idea that a particular family of non-monotonic logics, called preferential logics, offered the resources to explain a number of empirical findings about human reasoning, as experimentally established. (To be clear: I am here adopting a purely descriptive perspective and leaving thorny normative questions aside. Naturally, formal models of rationality also typically include normative claims about human cognition.)
In particular, I claimed that preferential logics could explain what is known as the modus ponens-modus tollens asymmetry, i.e. the fact that in experiments, participants will readily reason following the modus ponens principle, but tend to ‘fail’ quite miserably with modus tollens reasoning – even though the two are equivalent according to classical as well as many non-classical logics. I also defended the claim (e.g. at a number of talks, including one at the Munich Center for Mathematical Philosophy which is immortalized in video here and here) that preferential logics could be applied to another well-known, robust psychological phenomenon, namely what is known as belief bias. Belief bias is the tendency of human reasoners to let the believability of a conclusion guide both their evaluation and production of arguments, rather than the validity of the argument as such.
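For readers who want the two schemas side by side, here they are in standard notation (nothing here is specific to preferential logics; it is just the textbook presentation of the asymmetry):

```latex
% Modus ponens and modus tollens over a conditional p -> q
\[
\textbf{MP:}\ \ \frac{p \rightarrow q \qquad p}{q}
\qquad\qquad
\textbf{MT:}\ \ \frac{p \rightarrow q \qquad \neg q}{\neg p}
\]
% Classically, $p \rightarrow q$ is equivalent to its contrapositive
% $\neg q \rightarrow \neg p$, so a reasoner who endorses MP but balks at MT
% is doing something the classical picture cannot readily accommodate.
```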
Well, I am now officially taking most of it back (and mostly thanks to working on these issues with my student Herman Veluwenkamp).
Readers of the Brains blog might know about a symposium there concerning a paper by Philipp Koralus. In his commentary on the paper, Felipe de Brigard mentions the problem of captured attention:
"I have a hard time understanding how ETA may account for involuntary attention. Suppose you are focused on your task—reading a book at the library, say—and you hear a ‘bang’ behind you. A natural way of describing the event is to say that one’s attention has been involuntarily captured by the sound. Now, how does ETA explain this phenomenon?"
"So, you might have been asking, as part of your task of reading the blog, 'What does the blog say?' Now, you are getting the incongruent and irrelevant answer 'There’s a loud noise behind you.' There are now two possibilities, similar to what happens in the equivalent case in a conversation. One possibility is that you accommodate the answer, adopting a new question (and thereby a new task) to which 'There’s a loud noise behind you' would be a congruent answer, maybe, 'what sort of thing going on behind me?...You could also refuse to be distracted and then exercise some top-down control on your focus assignment to bring it back to something that’s relevant to your task.'
When I coined "the problem of captured attention" in my 2012 Synthese paper, "The Subject of Attention" (not cited by Koralus/de Brigard), I took a similar line, but focused on the activity of the subject, rather than on questions and answers:
A few weeks ago I had a post on different ways of counting infinities; the main point was that two of the basic principles that hold for counting finite collections cannot both be transferred over to the case of measuring infinite collections. Now, as a matter of fact I am equally (if not more) interested in the question of counting finite collections at the most basic level, both from the point of view of the foundations of mathematics (‘but what are numbers?’) and from the point of view of how numerical cognition emerges in humans. In fact, to me, these two questions are deeply related.
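As a quick illustration of that earlier point (on my reading, the two principles in question are 'a proper part is smaller than the whole' and 'two collections have the same size just in case they can be paired off one-to-one'; the example below is the standard one, not anything new):

```latex
% The even numbers form a proper part of the natural numbers, yet the map
% $n \mapsto 2n$ pairs the two collections off exactly, with nothing left over:
\[
\begin{array}{cccccc}
0 & 1 & 2 & 3 & 4 & \cdots\\
\updownarrow & \updownarrow & \updownarrow & \updownarrow & \updownarrow & \\
0 & 2 & 4 & 6 & 8 & \cdots
\end{array}
\]
% For finite collections the part-whole principle and the one-to-one
% correspondence principle always agree; for infinite collections they clash,
% so at most one of them can be carried over.
```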
In a lecture I’ve given a couple of times to non-academic, non-philosophical audiences (so-called ‘outreach lectures’) called ‘What are numbers for people who do not count?’, my starting point is the classic Dedekindian question, ‘What are numbers?’ But instead of going metaphysical, I examine people’s actual counting habits (including among cultures that have very few number words). The idea is that Benacerraf’s (1973) challenge of how we can have epistemic access to these elusive entities, numbers, should be addressed in an empirically informed way, including data from developmental psychology and from anthropological studies (among others). There is a sense in which all there is to explain is the socially enforced practice of counting, which then gives rise to basic arithmetic (from there on, to the rest of mathematics). And here again, Wittgenstein was on the right track with the following observation in the Remarks on the Foundations of Mathematics:
This is how our children learn sums; for one makes them put down three beans and then another three beans and then count what is there. If the result at one time were 5, at another 7 (say because, as we should now say, one sometimes got added, and one sometimes vanished of itself), then the first thing we said would be that beans were no good for teaching sums. But if the same thing happened with sticks, fingers, lines and most other things, that would be the end of all sums.
“But shouldn’t we then still have 2 + 2 = 4?” – This sentence would have become unusable. (RFM, § 37)
In the recent Mind & Language workshop on cognitive science of religion, Frank Keil presented an intriguing paper entitled "Order, Order Everywhere and Not an Agent to Think: The Cognitive Compulsion to Make the Argument from Design." Keil does not believe the argument from design is inevitable - I've argued elsewhere that while teleological reasoning and creationism are common, arguing for the existence of God on the basis of perceived design is rare; it typically only happens when there are plausible non-theistic worldviews available.
Rather, Keil argues that from a very early age on, humans can recognize order, and that they prefer agents as causes for order. Taken together, this forms the cognitive basis for making the argument from design (AFD). (For similar proposals, see here and here). He proposes two very intriguing puzzles, and I'm wondering what NewApps readers think:
Some forms of orderliness give us a sense of design, others do not. What kinds of order give rise to an inference to design, or a designer?
Babies already seem to recognize ordered states from disordered states. How do they do it? What is it they recognize?
In comment #9 at this post, Susan makes a kind of canonical case I've heard from lots of assessment people.
First, I should say that I agree with 95% of the intended answers to Susan's rhetorical questions. We should be much clearer about what we want our students to get out of their degrees, and we should put in the hard work of assessing the extent to which we are successful.
But "assessment" in contemporary American bureaucracies almost always accomplishes exactly the opposite of the laudable goals that Susan and I share. And there are deep systematic reasons for this. Below, I will first explain three fallacies and then explain why everyone involved in assessment faces enormous pressure to go along with these fallacies. Along the way I hope to make it clear how this results in "assessment" making things demonstrably worse.**
[H]e is proclaiming his new project, the Wolfram Language, to be the biggest computer language of all time. It has been in the works for more than 20 years, and, while in development, formed the underlying basis of Wolfram’s popular Mathematica software. In the words of Wolfram, now 54, his new language “knows about the world” and makes the world computable.
From the point of view of the philosophical debates on artificial intelligence, the crucial bit is the claim that his new language, unlike all other computer languages, “knows about the world”. Could it be that this language does indeed constitute a convincing reply to Searle’s Chinese Room argument?
To be clear, I take Searle’s argument to be problematic in a number of ways (some of which are very aptly discussed in M. Boden’s classic paper), but the challenge posed by the Chinese Room seems to me to still stand; it remains one of the main questions in the philosophy of artificial intelligence. So if Wolfram’s new language does indeed differ from the other computer languages thus far developed specifically in this respect, it may offer us reasons to revisit the whole debate (which for now seems to have reached a stalemate).
In a recent blog entry, Laurie Santos and Tamar Gendler very nicely lay out the idea that explicit propositional knowledge is only a small part of the sort of understanding that guides action. As they say: “Recent work in cognitive science has demonstrated that knowing is a shockingly tiny portion of the battle for most real world decisions. You may know that $19.99 is pretty much the same price as $20.00, but the first still feels like a significantly better deal. …You may know that a job applicant of African descent is as likely to be qualified as one of European descent, but the negative aspects of the former's resume will still stand out.” (The post is short and really well written, go read the whole thing.) They then note, “You might think that this is old news. After all, thinkers for the last 2500 years have been pointing out that much of human action isn't under rational control.”
I would add: not only is this a point that one finds in Aristotle, but for the last 350 years it has been central to Pascal, Marx, Heidegger, Merleau-Ponty, Althusser, Foucault, and pretty much every feminist epistemologist and philosopher of science (Longino, Harding, Kukla, and on and on), and it has been forcefully developed within mainstream analytic philosophy by Dreyfus, Haugeland, and others. (I sometimes think that the only important philosopher not to accept the point is Jason Stanley. – j/k!)
I have been reading Daniel Hutto and Erik Myin’s book Radicalizing Enactivism for a critical notice in the Canadian Journal of Philosophy. Enactivism is the view that cognition consists of a dynamic interaction between the subject and her environment, and not in any kind of contentful representation of that environment. I am struck by H&M’s reliance on a famous 1991 paper by the MIT roboticist Rodney Brooks, “Intelligence Without Representation.” Brooks’s paper is quite a romp—it has attracted the attention of a number of philosophers, including Andy Clark in his terrific book, Being There (1996). It’s worth a quick revisit today.
To soften his readers up for his main thesis, Brooks starts out his paper with an argument so daft that it cannot have been intended seriously, but which encapsulates an important strand of enactivist thinking. Here it is: Biological evolution has been going on for a very long time, but “Man arrived in his present form [only] 2.5 million years ago.” (Actually, that’s a considerable over-estimate: Homo sapiens is not more than half a million years old, if that.)
He invented agriculture a mere 19,000 years ago, writing less than 5000 years ago and “expert” knowledge only over the last few hundred years.
This suggests that problem solving behaviour, language, expert knowledge and application, and reason are all pretty simple once the essence of being and reacting are available. That essence is the ability to move around in a dynamic environment, sensing the surroundings to a degree sufficient to achieve the necessary maintenance of life and reproduction. This part of intelligence is where evolution has concentrated its time—it is much harder. (141)
Evolutionary accounts of deductive reasoning have been enjoying a fair amount of popularity in recent decades. Some of those who have defended views of this kind are Cooper, Maddy, and more recently Joshua Schechter. The basic idea is that an explanation for why we have developed the ability to reason deductively (if indeed we have developed this ability!) is that it conferred a survival advantage on those individuals among our ancestors who possessed it, who in turn were reproductively more successful than those individuals in the ancestral population who did not possess this ability. In other words, deductive reasoning would have arisen as an adaptation in humans (and possibly in non-human animals too, but I will leave this question aside). Attractive though it may seem at first sight (and I confess having had a fair amount of sympathy for it for a while), this approach faces a number of difficulties, and in my opinion is ultimately untenable. (Some readers will not be surprised to hear this, if they recall a previous post where I argue that deductive reasoning is best seen as a cultural product, not as a biological, genetically encoded endowment in humans.)
In this post, I will spell out what I take to be the main flaw of such accounts, namely the fact that they seem incompatible with the empirical evidence on deductive reasoning in human reasoners as produced by experimental psychology. In this sense, these accounts fall prey to the same mistake that plagues many evolutionary accounts of female orgasm, in particular those according to which female orgasm has arisen as an adaptation in the human species. To draw the parallel between the case for deductive reasoning and the case for the female orgasm, I will rely on Elisabeth Lloyd’s fantastic book The Case of the Female Orgasm (which, as it so happens, I had the pleasure of re-reading during my vacation).
Last week I had a post up on metaphorical language in cognitive science, which generated a very interesting discussion in comments. I don’t think I’ve sufficiently made the case for the ‘too much’ claim, and the post was mostly intended to raise the question and foster some debate. (It succeeded in that respect!)
There is one aspect of it, though, which I would like to follow up on. One commenter (Yan) pointed out that it’s not so surprising that digital computers ‘think’ like us, given that they are based on a conception of computation – the Turing machine – which was originally proposed as a formal explanans for some cognitive activities that humans in fact perform: calculations/computations. It is important to keep in mind that before Turing, Post, Church and others working on the concept of computability in the 1930s, computation/effective calculation was an informal concept, with no precise mathematical definition (something that has been noted by e.g. Wilfried Sieg in his ‘Gödel on computability’). To provide a mathematically precise account of this concept, which in turn corresponds to cognitive tasks that humans do engage in, was precisely the goal of these pioneers. So from this point of view, to say that digital computers are (a bit) like human minds gets the order of things right; but to say that human minds are like digital computers goes the wrong way round.
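To make the point a bit more tangible, here is a minimal sketch of the kind of formal object Turing proposed as an explanans for human calculation. Everything in it -- the tape encoding, the state names, the choice of binary increment as the task -- is my own illustrative choice, not anything drawn from Turing, Post or Church:

```python
# A toy deterministic Turing machine: a finite transition table plus an
# unbounded tape. The example machine adds 1 to a binary numeral, a humble
# instance of the 'effective calculation' the 1930s pioneers set out to formalize.

def run_turing_machine(tape, transitions, start_state, accept_state, blank="_"):
    """Run the machine until it reaches accept_state; return the final tape contents."""
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head, state = 0, start_state
    while state != accept_state:
        symbol = cells.get(head, blank)
        new_symbol, move, state = transitions[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Increment a binary numeral: scan to the rightmost digit, then propagate a carry leftwards.
increment = {
    ("scan", "0"):  ("0", "R", "scan"),
    ("scan", "1"):  ("1", "R", "scan"),
    ("scan", "_"):  ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),   # 1 plus carry: write 0, keep carrying
    ("carry", "0"): ("1", "R", "done"),    # 0 plus carry: write 1, stop
    ("carry", "_"): ("1", "R", "done"),    # ran off the left edge: write a new leading 1
}

print(run_turing_machine("1011", increment, "scan", "done"))  # 11 + 1 -> '1100'
print(run_turing_machine("111", increment, "scan", "done"))   # 7 + 1  -> '1000'
```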
[note: this blogpost collects some scattered thoughts I hope to organize in article form sooner rather than later, for my British Academy project on religious social epistemology, see here]
There is an ongoing debate about what we should do when we are confronted with disagreement with an epistemic peer: someone who is as knowledgeable and intellectually virtuous as we are in the domain in question. Should we revise our beliefs (conciliationism), or not engage in any doxastic revision (steadfastness)? Epistemologists aim to settle this question in a principled way, hoping general principles like conciliationism and steadfastness can offer a solution not only for the toy examples that are being invoked, but also for real-world cases that we care passionately about, such as scientific, religious, political and philosophical disagreements. However, such cases have proven to be a hard nut to crack. A referee once commented on a paper I submitted on epistemic peer disagreement in science that the notion of epistemic peer in scientific practice was useless. S/he said "It works for simple cases like two spectators who disagree on which horse finished first, but when it comes to two scientists who disagree whether a fossil is a Homo floresiensis or Homo sapiens, the notion is just utterly useless."
That referee comment has always stuck in my mind as bad news for epistemology: if our principled answers in epistemology cannot be applied to real-world cases of epistemic peerage, the debate is of marginal value. There seems to be an easy escape: one common response, by both steadfasters and conciliationists, has been that we need not revise our beliefs in complex messy cases if we have reason to believe that we have access to some sort of insight that our epistemic peer lacks. Van Inwagen, for instance, muses about his disagreements about some philosophical matters with David Lewis, whom he greatly respects: they both know the arguments, and both have considered them equally carefully. But ultimately, van Inwagen thinks
I suppose my best guess is that I enjoy some sort of philosophical insight (I mean in relation to these three particular theses) that, for all his merits, is somehow denied to Lewis. And this would have to be an insight that is incommunicable -- at least I don't know how to communicate it -- for I have done all I can to communicate it to Lewis, and he has understood perfectly everything I have said, and he has not come to share my conclusions.
As one can see, the notion of epistemic peer simply dissolves here, since van Inwagen just asserted that he has insights in the domain in question that are denied to Lewis. To take another example, suppose you are a Christian faced with a seemingly equally intelligent atheist. According to Plantinga (WCB), this disagreement is not a defeater to your beliefs, as you can confidently assume your dissenting peer "has made a mistake, or has a blind spot, or hasn’t been wholly attentive, or hasn’t received some grace she has, or is blinded by ambition or pride or mother love or something else". But how do we know when we are right? Is the "feeling of knowledge", the conviction we are right, any indication that we actually are right? I will argue here that it is not, and therefore, that simply discounting the other as epistemic peer on account of this is not warranted.
[Because] we do not understand the brain very well we are constantly tempted to use the latest technology as a model for trying to understand it. In my childhood we were always assured that the brain was a telephone switchboard. (‘What else could it be?’) I was amused to see that Sherrington, the great British neuroscientist, thought that the brain worked like a telegraph system. Freud often compared the brain to hydraulic and electro-magnetic systems. Leibniz compared it to a mill, and I am told some of the ancient Greeks thought the brain functions like a catapult. At present, obviously, the metaphor is the digital computer. (John Searle, Minds, Brains and Science, 44)
As I am now preparing my philosophy of cognitive science course, which I will start teaching in November for the first time, one of the inevitable topics on my mind is the idea of the mind (or the brain) as a computer. I am a relative newcomer to the field of philosophy of cogsci, which in a sense means that I approach it with a certain naiveté and absence of, shall we say, prior indoctrination. At the same time, I am now also reading Louise Barrett’s wonderful book Beyond the Brain, whose chapter 7 is called ‘Metaphorical mind fields’. It begins with the famous quote by Searle above, and goes on to argue that the ‘mind as a computer’ conception is a metaphor; what is more, the tendency we have to forget that it is a metaphor does much damage to a proper understanding of what a human brain/mind is and does (problematizing the brain=mind equation is the general topic of the whole book).
… our use of the computer metaphor is so familiar and comfortable that we sometimes forget that we are dealing only with a metaphor, and that there may be other, equally interesting (and perhaps more appropriate) ways to think about brains and nervous systems and what they do. After all, given that our metaphors for the brain and mind have changed considerably over time, there’s no reason to expect that, somehow, we’ve finally hit on the correct one, as opposed to the one that just reflects something about the times in which we live. (Barrett 2011, 114/115)
Last week, Neil Sinhababu had a great post here at New APPS picking up on an attempted explanation for why members of the so-called Generation Y seem so dissatisfied with their lives (if indeed they are). The latter piece has been receiving a fair share of attention at the usual places (Facebook, Twitter), and though admittedly funny, it seems to suffer precisely from the limitation pointed out by Neil: it treats the problem mostly as a psychological problem pertaining to the individual sphere (including of course the parents component, as any good Freudian would have it), thus disregarding the significant economic changes that took place in recent decades. However, I do want to disagree with Neil’s quick dismissal of the non-negligible role that the article claims for new technologies such as Facebook and social media in general in the phenomenon. Neil says:

And I'm suspicious of explanations in terms of the special properties of social media -- mostly it gives you a new way to do kinds of social interaction that have been around forever.
There has been a lot of empirical and philosophical research on social cognition, in particular on our ability to share attention with others, to put ourselves in the perspective of others, and to understand other people's intentions. As Gallotti and Frith point out in their recent article "Social cognition in the we-mode", many theorists still conceptualize social cognition in terms of individual minds and their capacities. Gallotti and Frith propose that interacting minds result in an irreducible 'we' mode, a collective mode of sharing minds that expands each participating mind's potential for understanding and action. This shift in social cognition research moves away from looking at social cognition in disembodied minds and instead examines how people can augment their abilities by acting together with others.
This can shed light on the persistence and role of cognitive variability in humans, which I believe has also been studied too much in terms of individual cognition, and not enough in terms of interacting minds.
Take, for instance, people on the autistic spectrum. The prevalence of autism is quite high (although the recent high prevalence might largely be due to improved detection and early diagnosis). There have been numerous attempts to identify the underlying causes, such as (recently) older fathers, inducing labor, or genetic factors. Autism is also sometimes regarded as an extreme outlier in the normal range. In all cases, the underlying idea is that autism is abnormal, something that needs to be prevented, and failing that, treated so that the patient behaves in a way that approximates neurotypical children and adults. Terms like 'autism epidemic' and the recent scare over the alleged (now debunked) link between autism and the MMR vaccine reinforce this negative image of autism, as something to be prevented at all costs.
While people with autism and their families undeniably face many challenges, looking at cognition in the "we" mode may shed new light on the phenomenon and may help explain its prevalence.
[X-posted at Prosblogion] In the epistemology of religion, authors like Swinburne and Alston have argued influentially that mystical experience of God provides prima facie justification for some beliefs we hold about God on the basis of such experiences, e.g., that he loves us, is sovereign, etc. Belief in God, so they argue, is analogous to sense perception. If I have a mystical experience that God loves me, then prima facie I am justified in believing that God loves me.
Alston relies critically on William James' Varieties of Religious Experience (1902). This seminal, but now dated psychological study draws on self-reports by mystics to characterize mystical experience. The mystical experiences James (and others) describe are unexpected, unbidden; they immediately present something (God) to one's experience, i.e., they provide a direct, unmediated awareness of God. More recent empirical work on the phenomenology of religious experience, such as that conducted by Tanya Luhrmann and other anthropologists, suggests that ordinary sense experience is a poor and misleading analogy for religious experience.
From Plato onwards, theories of cosmic, physical, and moral sympathy (συμπάθεια--'fellow feeling') were developed in a variety of contexts (e.g., Galenic medicine, Stoic metaphysics, magnetism, moral psychology, magic, etc.). For all this variety, in most thinkers and traditions the very possibility of sympathy presupposes that sympathy takes place among things that are in one sense or another alike (sometimes within a single being/unity/organism), to be contrasted with the antipathy (ἀντιπάθεια) of un-alikes. (Here I just flag the non-trivial moral issues this raises for ethical theories that rely on sympathy/empathy.) Let's call this condition of the possibility of sympathy "The Likeness Principle" (or TLP). I learned the significance of the TLP in Plotinian and Stoic thought from Eyjólfur Emilsson and René Brouwer.
[this is cross-posted at Prosblogion] Richard Dawkins has argued several times (e.g., here) that bringing up your child religiously is a form of child abuse. I think his argument that religious upbringing in general is child abuse has little merit (after all, Dawkins himself is the product of a traditional Anglican upbringing and calls himself - rather proudly - a cultural Anglican, hardly the victim of child abuse). However, his claim in the linked article is that parents who attempt to instill things like Young Earth Creationism (henceforth YEC) in their children are doing something wrong, or are somehow overstepping their role as parents. This question, I believe, is worthy of further attention.
(This post is dedicated to my friends Marian and Jan-Willem, who last week welcomed a lovely baby girl into the world. They will most certainly talk to her an awful lot.)
Why is it that children from socioeconomically disadvantaged backgrounds tend to have lower school performance than children from wealthier environments? This may seem like a naïve question at first, but understanding the exact mechanisms in place proves to be much more challenging than one might think. Most likely, the phenomenon is due to a conjunction of factors involving level of education of primary caregiver, parental involvement, a stable environment, adequate nutrition, among others. (Some would like to see ‘genetic predisposition’ on the list. Now, while this cannot be ruled out, I take it that the currently available data are too tangled up with the above-mentioned social factors to allow for an analysis of the genetic component in isolation.)
A recent post at the Fixes blog of the New York Times (Fixes and The Stone are both members of the larger Opinionator family) highlights one specific element: how much people from different socioeconomic backgrounds actually talk to their infants. As reported in the 1995 book Meaningful Differences in the Everyday Experience of Young American Children (by Betty Hart and Todd R. Risley), it turns out that poorer parents talk considerably less to and around their babies than more affluent parents:
"The problem with panpsychism is not that it is false; it does not get up to the level of being false. It is strictly speaking meaningless because no clear notion has been given to the claim. Consciousness comes in units and panpsychism cannot specify the units." John Searle, NYRB, 10 January, 2013, 55, reviewing Christof Koch, Consciousness: Confessions of A Romatic Reductionist.
Here is an excellent interview with Jesse Prinz (H/T Markus Schlosser) on the themes of his new book, Beyond Human Nature (which I still haven’t gotten around to reading). The main idea of the book is that experience and culture, as opposed to genetics and biology, play a much larger role in determining our behavior than is often thought. Some excerpts:

“If we are interested in differences in intelligence, the thing we should be interested in is learning and culture.”

“Brazilians are super-nice.”

I find myself agreeing with pretty much everything that Prinz says in the interview (including the bit about Brazilians…), which is not so surprising, given that, like him, I am very much of a ‘nurture-culture’ person on the nature-nurture dimension. (A bit of self-promotion: here is a recent paper of mine, "A dialogical account of deductive reasoning as a case study for how culture shapes cognition", forthcoming in the Journal of Cognition and Culture.) But more importantly, to my mind he manages to set up the debate in a very subtle and informative way, so I very much recommend the interview to anyone interested in this debate. (Btw, I’ve posted on my enthusiasm for his work before.)
In the 1980s, Ruse wrote a series of important papers that revived evolutionary ethics. The debate on the implications of evolved moral intuitions for ethics remains very active today (see e.g., this conference that I'll be attending in a couple of hours, at least if the British railway system isn't disrupted by half an inch of snow!). Contemporary evolutionary ethics can build on a wealth of research, for instance in the cognitive neuroscience of morality, developmental psychology, and the study of altruism in animals. But the metaethics of the folk remains a relatively understudied area. Are people intuitive moral realists? If so, what is the connection between metaethics and behavior?
Ruse hypothesized that humans are intuitive moral realists, and that this metaethical intuition has an evolved function: "human beings function better if they are deceived by their genes into thinking that there is a disinterested objective morality binding upon them, which all should obey" (Ruse & Wilson, 1986, 179). Ruse thought that if everyone believed that morality was subjective, that it was merely a matter of taste or convention, our social systems would collapse. Intuitive moral realism was thus a key component of human altruistic behavior: our social systems are held together by moral beliefs, which in turn are cemented by intuitive moral realism. As Ruse wrote later on: "Substantive morality stays in place as an effective illusion because we think that it is no illusion but the real thing" (Ruse, 2010, 310).
When Ruse first formulated this hypothesis, it was by no means clear that humans were intuitive moral realists. Also, it was not clear to what extent intuitive moral realism, if at all, helps us to act more morally. In the meantime, there is some empirical work on this, which I'll discuss briefly below the fold.
A well-known phenomenon in the empirical study of human reasoning is the so-called Modus Ponens-Modus Tollens asymmetry. In reasoning experiments, participants almost invariably ‘do well’ with MP (or at least something that looks like MP – see below), but the rate for MT success drops considerably (from almost 100% for MP to around 70% for MT – Schroyens and Schaeken 2003). As a result, any theory purporting to describe human reasoning accurately must account for this asymmetry. Now, given that for classical logic (and many non-classical systems) MP and MT are equally valid, plain vanilla classical logic fails rather miserably in this respect.

As noted by Oaksford and Chater (‘Probability logic and the Modus Ponens-Modus Tollens asymmetry in conditional inference’, in this 2008 book), some theories of human reasoning (mental rules, mental models) explain the asymmetry at what is known as the algorithmic level (a terminology proposed by Marr (1982)) – that is, in terms of the mental processes that (purportedly) implement deductive reasoning in a human mind. So according to these theories, performing MT is harder than performing MP (for a variety of reasons), which is why reasoners, while still trying to reason deductively, have difficulties with MT. Other theorists argue that participants are not in fact trying to reason deductively at all, so the asymmetry is not related to some presumed competence-performance gap. (Marr’s term to refer to the general goal of the processes, rather than the processes themselves, is ‘computational level’ – the terminology is somewhat unnatural, but it has now become standard.) Oaksford and Chater are among those favoring an analysis at the computational level, in their case proposing a Bayesian, probabilistic account of human reasoning as a normative theory not only explaining but also justifying the asymmetry.
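To see how a computational-level, probabilistic story of this general kind can generate the asymmetry, here is a toy illustration. The joint distribution is made up purely for the example (it is not Oaksford and Chater's model or their data); the only point is that P(q|p), the natural probabilistic counterpart of endorsing MP, can be close to 1 while P(not-p|not-q), the counterpart of endorsing MT, is markedly lower:

```python
# An illustrative joint distribution over the antecedent p and consequent q of a
# conditional 'if p then q'. On a probabilistic reading, MP endorsement tracks
# P(q | p) and MT endorsement tracks P(not-p | not-q); with these made-up numbers
# the two come apart, even though MP and MT are equally valid in classical logic.
joint = {
    (True,  True):  0.47,   # p and q
    (True,  False): 0.03,   # p and not-q: rare exceptions to the conditional
    (False, True):  0.43,   # not-p and q: q also occurs for other reasons
    (False, False): 0.07,   # not-p and not-q
}

def prob(event):
    """Probability of the set of (p, q) possibilities satisfying `event`."""
    return sum(weight for (p, q), weight in joint.items() if event(p, q))

mp_endorsement = prob(lambda p, q: p and q) / prob(lambda p, q: p)
mt_endorsement = prob(lambda p, q: (not p) and (not q)) / prob(lambda p, q: not q)

print(f"P(q | p)         = {mp_endorsement:.2f}")   # 0.94 -- near-ceiling, like MP
print(f"P(not-p | not-q) = {mt_endorsement:.2f}")   # 0.70 -- noticeably lower, like MT
```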
In a recent paper, the eminent psychologist of reasoning P. Johnson-Laird says the following:
[T]he claim that naïve individuals can make deductions is controversial, because some logicians and some psychologists argue to the contrary (e.g., Oaksford & Chater, 2007). These arguments, however, make it much harder to understand how human beings were able to devise logic and mathematics if they were incapable of deductive reasoning beforehand.
This last claim strikes me as very odd, or at the very least as poorly formulated. (To be clear, I side with those, such as Oaksford and Chater, who think that deductive reasoning must be learned to be mastered and competently practiced by reasoners.) It looks like a doubtful inference to the best explanation: humans have in fact devised logic and mathematics, which are crucially based on the deductive method, so they must have been capable of deductive reasoning before that. Something like: birds had to have fully formed wings before they could fly – hum, I don’t think so… Instead, the wing analogy suggests that there must be some precursors to deductive reasoning skills in untrained reasoners, but the phylogeny of the deductive method (and to be clear, I’m speaking of cultural evolution here) would have been a gradual, self-feeding process.
The U.S. legal system gives preference to adult testimony in court cases. In 2002 Thomas Junta was accused of killing a man in a Massachusetts hockey rink quarrel in 2000. Thomas's son, 12-year-old Quinlan Junta, was a key defense witness for his father, but his testimony did not convince the jury. Thomas was found guilty and sentenced to 6 to 10 years in state prison.
In his famous paper entitled "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information", cognitive psychologist George A. Miller of Princeton University argued that our working memory, our ability to hold information in our minds for a few seconds, is limited to about seven, at most nine, items. That's fewer items than the digits of a regular out-of-town American phone number. In light of this you might wonder what to say about cases of people with extreme memory abilities. Chao Lu holds the Guinness world record in reciting Pi, a record dating back to 2005. Lu recalled 67,890 digits of Pi in 24 hours and 4 minutes, with an error at the 67,891st digit, saying it was a 5 when it was actually a 0. How is it possible to retrieve this quantity of information accurately through working memory? Is it magic? After talking to several people working in memory sports, we found out that it's not.
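One standard part of the answer goes back to Miller himself, whose limit concerns chunks rather than raw items: recode the material into larger meaningful units and far more digits fit within the same budget. The snippet below only illustrates that recoding idea with an arbitrary grouping; it is emphatically not how memory athletes like Chao Lu actually encode tens of thousands of digits.

```python
# Chunking: working-memory load is the number of chunks, not the number of raw
# digits. The grouping scheme here is arbitrary and purely illustrative.

def chunk(digits, size=4):
    """Group a digit string into consecutive chunks of `size` digits."""
    return [digits[i:i + size] for i in range(0, len(digits), size)]

phone = "2025551234"                         # 10 raw digits: above the seven-plus-or-minus-two limit
print(chunk(phone, 3))                       # ['202', '555', '123', '4'] -> 4 chunks
print(len(chunk("3141592653589793", 4)))     # 16 digits of pi -> only 4 chunks
```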
Daniel Kahneman’s Thinking, Fast and Slow is making quite a splash (the other day, I saw at Bristol airport that it is currently at the top of the bestseller list for non-fiction -- naturally, it still can’t compete with Fifty Shades of Grey). I haven’t read it yet, but people whose opinion I hold in high esteem tell me that it has been successful in striking the difficult balance between being accessible to a wider audience and scientifically accurate (for the most part at least) at the same time. The book summarizes research on cognitive and reasoning biases of the last decades, a research program in which Kahneman himself has been a major player. The conceptual cornerstone of the book is the (still) popular distinction between System 1 and System 2, the two systems which allegedly run in parallel underpinning all our cognitive processes, and which often conflict with each other.
Now, as I’ve stated a few times before (here for example), I am no fan of System 1/System 2 talk at all (not even of weaker versions, the so-called dual-process theories of cognition), even though I agree that the empirical findings on cognitive biases should be taken very seriously. (I also agree that there is something to the idea of debiasing as suppressing automatic processes.) So I was curious to see how Kahneman himself introduces the System 1/System 2 distinction, and took a quick look at the book (my husband was reading it during our holiday of a few weeks ago, after having gotten it from me as a birthday present – that’s what you get for having a nerdy wife). The first thing that struck me is that, in footnote 20, he lists some of the pioneers of dual-system theories, including Jonathan Evans, Steve Sloman and Keith Stanovich, and adds: “I borrow the terms System 1 and System 2 from early writings of Stanovich and West that greatly influenced my thinking” (he refers to their 2000 BBS article on individual differences in reasoning). But what is puzzling is that Stanovich himself now overtly rejects the conceptualization of the distinction in terms of systems, which unduly suggests reified entities, and uses the process terminology instead (same with Jonathan Evans).

But perhaps most striking is what Kahneman says in the conclusion of the book:
(OK, so it looks like I’m over-posting a bit today… Just one more!)

Between today and tomorrow, the workshop ‘Groundedness in Semantics and Beyond’ is taking place at MCMP in Munich, co-organized with the ERC project Plurals, Predicates, and Paradox led by Øystein Linnebo. The workshop’s program seems excellent across the board, but the opening talk is what really caught my attention: Patrick Suppes on ‘A neuroscience perspective on the foundations of mathematics’. The abstract:

I mainly ask and partially answer three questions. First, what is a number? Second, how does the brain process numbers? Third, what are the brain processes by which mathematicians discover new theorems about numbers? Of course, these three questions generalize immediately to mathematical objects and processes of a more general nature. Typical examples are abstract groups, high dimensional spaces or probability structures. But my emphasis is not on these mathematical structures as such, but how we think about them. For the grounding of mathematics, I argue that understanding how we think about mathematics and discover new results is as important as foundations of mathematics in the traditional sense.