Today is UNESCO’s World Philosophy Day, which is celebrated on the third Thursday of November every year. As it so happens, November 20th is also the United Nations’ Universal Children’s Day (here is a blog post I wrote for the occasion 2 years ago). I am truly delighted that these two days coincide today, as children and philosophy are two of my greatest passions. But the intimate connection between children and philosophy runs much deeper than my particular, individual passions, and so it should be celebrated.* As Wittgenstein famously (but somewhat dismissively) put it:
Philosophers are often like little children, who first scribble random lines on a piece of paper with their pencils, and now ask an adult "What is that?" (Philosophical Occasions 1912-1951)
My own favorite definition of philosophy is that philosophy is at heart the activity of asking questions about things that appear to be obvious but are not. (True enough, it also involves attempting to provide answers and giving arguments to support one’s preferred answers.) And so it is incumbent on the philosopher to ask, for example, ‘What is time, actually?’, while everybody else goes about their daily business taking the nature of time for granted. Indeed, philosophy is intimately connected with curiosity and inquisitiveness, and this idea famously goes back all the way to the roots of philosophy as we know it:
Although over half the world's population are theists (according to Pew survey results), God's existence isn't an obvious fact, not even to those who sincerely believe he exists. To put it differently, as Keith DeRose recently put it, even if God exists, we don't know that he does. This presents a puzzle for theists: why doesn't God make his existence more unambiguously known? The problem of divine hiddenness has long been recognized by theists (for instance, Psalm 22), but only fairly recently has it become the focus of debate in philosophy of religion.
In several works, J.L. Schellenberg has argued that divine hiddenness constitutes evidence against God's existence. A simple version of this argument goes as follows (Schellenberg 1993, 83):
1. If there is a God, he is perfectly loving.
2. If a perfectly loving God exists, reasonable non-belief in the existence of God does not occur.
3. Reasonable non-belief in the existence of God does occur.
4. No perfectly loving God exists. (from 2 and 3)
5. There is no God. (from 1 and 4)
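To make the inferential structure fully explicit, here is one way to formalize the argument (my own gloss, not Schellenberg's notation), reading $G$ as 'there is a God', $L$ as 'a perfectly loving God exists', and $R$ as 'reasonable non-belief in the existence of God occurs':

$$
\begin{aligned}
1.\;& G \to L && \text{(premise)}\\
2.\;& L \to \neg R && \text{(premise)}\\
3.\;& R && \text{(premise)}\\
4.\;& \neg L && \text{(from 2 and 3, by modus tollens)}\\
5.\;& \neg G && \text{(from 1 and 4, by modus tollens)}
\end{aligned}
$$

On this reading the argument is classically valid, so resistance has to target one of the premises.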
The controversial premises are 2 and 3. Authors like Swinburne and Murray have argued against premise 2: God may have reasons to make his existence less obviously known. The thought is that if we knew God existed, we wouldn't be able to make morally significant choices. This is an empirical claim. Obviously, it cannot be experimentally tested directly. However, research in the cognitive science of religion (CSR) on the relationship between belief in God and morality may indicate whether or not this is a plausible claim.
Most readers will have had at least some exposure to John Searle’s interview by Tim Crane, which was published earlier this week. It was then hotly debated in the philosophical blogosphere at large (in particular at the Leiter Reports). Together with Peter Unger’s interview published roughly around the same time, it seems that the ‘old guard’ is on a quixotic crusade to chastise the younger crowd for the allegedly misguided, sorry state of current philosophy. Now, I do think there is some truth to be found in what Searle says about the role of formal modeling in the philosophy of language, but his objections do not seem to apply at least to a growing body of research in formal semantics/philosophy of language. Moreover, it is not clear whether his own preferred methodology (judging from his seminal work on speech acts etc.) in fact does justice to what he himself views as the primary goal of philosophical analyses of language.
Here are the crucial passages from the interview (all excerpts from the passage posted by Leiter), the main bits in bold:
Well, what has happened in the subject I started out with, the philosophy of language, is that, roughly speaking, formal modeling has replaced insight. My own conception is that the formal modeling by itself does not give us any insight into the function of language.
Any account of the philosophy of language ought to stick as closely as possible to the psychology of actual human speakers and hearers. And that doesn’t happen now. What happens now is that many philosophers aim to build a formal model where they can map a puzzling element of language onto the formal model, and people think that gives you an insight. …
Some time ago, I wrote a blog post defending the idea that a particular family of non-monotonic logics, called preferential logics, offered the resources to explain a number of empirical findings about human reasoning, as experimentally established. (To be clear: I am here adopting a purely descriptive perspective and leaving thorny normative questions aside. Naturally, formal models of rationality also typically include normative claims about human cognition.)
In particular, I claimed that preferential logics could explain what is known as the modus ponens-modus tollens asymmetry, i.e. the fact that in experiments, participants will readily reason following the modus ponens principle, but tend to ‘fail’ quite miserably with modus tollens reasoning – even though the two are equivalent according to classical as well as many non-classical logics. I also defended the claim (e.g. at a number of talks, including one at the Munich Center for Mathematical Philosophy which is immortalized in video here and here) that preferential logics could be applied to another well-known, robust psychological phenomenon, namely what is known as belief bias. Belief bias is the tendency of human reasoners to let the believability of a conclusion, rather than the validity of the argument as such, guide both their evaluation and production of arguments.
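For readers who want the two schemas on the table, here they are side by side; classically they stand or fall together, since $p \to q$ is equivalent to its contrapositive $\neg q \to \neg p$, which is precisely why the experimental asymmetry is puzzling:

$$
\text{(MP)}\quad \frac{p \to q \qquad p}{q}
\qquad\qquad
\text{(MT)}\quad \frac{p \to q \qquad \neg q}{\neg p}
$$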
Well, I am now officially taking most of it back (and mostly thanks to working on these issues with my student Herman Veluwenkamp).
I live very close to Port Meadow, one of the largest meadows of open common land in the UK, already in existence in the 10th century and mentioned in the Domesday Book in 1086. I saw my first-ever live, wild oriole there. The land has never been ploughed, so it is possible to discern outlines of older archaeological remains, some going back to the Bronze Age. The consistent management of the land makes the changes predictable: it turns into a lake in winter, is sprinkled with buttercups this time of year (see pictures below the fold - both are taken at about the same place, but one in May and the other in November), and looks mysterious and misty in the fall. Whenever I walk on Port Meadow I take my camera, eager to capture any beautiful view that falls on my retina and preserve it for future memories. And, like many other parents, I take dozens of pictures of my growing children. Recently, I saw an NPR piece (no author given) that took issue with this tendency to want to preserve pictures for future memory.
The article launches a two-pronged attack against pictures. First, by worrying about capturing the moment, we lose the transience and beauty of the moment and enjoy it less. Second, the article cites psychological evidence showing that people actually remember fewer objects during a museum visit if they were allowed to take photos of them, compared to when they were only allowed to observe them. The phenomenon is known as the photo-taking-impairment effect. Linda Henkel, who discovered the effect, says: "Any time…we count on these external memory devices, we're taking away from the kind of mental cognitive processing that might help us actually remember that stuff on our own."
An important and somewhat neglected topic is what happens when biopolitics intersects with juridical power in courts of law. Today, we got a good example of one way it can happen. Several years ago, the Supreme Court ruled that states could not execute the “intellectually disabled.” They also let the states decide what that meant. Today, they specified (5-4, with the usual lineup for a “liberal” Kennedy opinion) that, although using an IQ score of 70 or below as evidence of such disability is ok, it’s not ok to draw a bright-line cutoff at a score of 70, because one has to take into account the 5-point margin of error in the test itself. In so doing, the SCOTUS spared the life of a Florida inmate with a measured IQ of 71.
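The arithmetic behind the ruling is worth making explicit: if the test carries a 5-point margin of error, then a measured score of 71 is consistent with a true score anywhere in the band

$$
71 \pm 5 \;\Longrightarrow\; \text{true score} \in [66, 76],
$$

which extends well below the 70 cutoff. A bright-line rule at 70 treats the point estimate as if it were exact, which is precisely what the Court rejected.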
There is a lot to say here (and for me, quibbling about where the IQ cutoff should be distracts from the larger point, which is that we shouldn’t be executing people. And IQ testing raises its own set of problems), but I do think it’s notable the extent to which the decision is expressly biopolitical, and not juridical. Recall Foucault’s claim that one symptom of the emergence of biopower is a decline in the death penalty (History of Sexuality 1, p. 138). Here, we see how that decline can manifest itself even within the judicial system.
Another sad loss this week: psychologist Sandra Bem, a pioneer in the empirical study of gender roles, passed away on Tuesday, May 20th. Here is the most complete obituary I could find so far, which details nicely her scientific contributions and the practical impact they had on gender policies. For example, it was largely based on her scientific work that the infamous practice of segregating classified job listings under "Male Help Wanted" and "Female Help Wanted" columns was finally abandoned, after a 1973 US Supreme Court ruling against the practice. (The case was against a particular press, but within a year all other newspapers in the country changed how their classified ads were listed.)
There are many other aspects of Sandra Bem’s life and work worth mentioning, but let me focus on two of them. As an undergraduate in 1965, she met Daryl Bem, then a young assistant professor, and a romantic relationship between them began. (Yes, there are success stories too, apparently…) Initially, she did not want to get married, as this course of events seemed to preclude the professional path she had in mind for herself. But Daryl was not deterred, and so together they agreed on an arrangement that would allow her to flourish professionally, and which would basically consist in what is now known as equally shared parenting – an ideal that many couples aspire to, but which remains a challenge to implement (speaking from personal experience!). The ‘experiment’ was largely successful, and Sandra narrates all the ups and downs of raising two children (a boy and a girl) on this model in her 1998 book An Unconventional Family. (I’ve been meaning to read the book for years, and now may well be the time to stop procrastinating.)
An article by Alla Katsnelson in Nature (28 April; doi:10.1038/nature.2014.15106; currently free) reports on new results from Jeffrey Mogil, a well-known pain researcher at McGill. Mogil and his team have shown that olfactory exposure to males (humans, rats, cats, dogs, guinea pigs) dampens pain responses in mice. In a paper published in Nature Methods (doi:10.1038/nmeth.2935), Mogil and his team report that even a T-shirt worn by a man, or the scent of chemicals from a male armpit, had the same effect. The only exception was male cage-mates of the subjects. The scent of women, on the other hand, had no effect on the mice's pain sensitivity.
A few weeks ago I had a post on different ways of counting infinities; the main point was that two of the basic principles that hold for counting finite collections cannot both be transferred over to the case of measuring infinite collections. Now, as a matter of fact I am equally (if not more) interested in the question of counting finite collections at the most basic level, both from the point of view of the foundations of mathematics (‘but what are numbers?’) and from the point of view of how numerical cognition emerges in humans. In fact, to me, these two questions are deeply related.
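To recap the earlier point with the standard example (presumably the pair of principles at issue): for finite collections, the part-whole principle ('a proper part is smaller than the whole') and the correspondence principle ('two collections are equinumerous iff they can be put in one-to-one correspondence') always agree; for infinite collections they come apart. The even numbers form a proper part of the natural numbers, and yet the map

$$
f : \mathbb{N} \to E, \qquad f(n) = 2n
$$

pairs the two collections off exhaustively, so the first principle says there are fewer evens than naturals, while the second says there are exactly as many. At most one of the two principles can be retained when we move to the infinite case.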
In a lecture I’ve given a couple of times to non-academic, non-philosophical audiences (so-called ‘outreach lectures’) called ‘What are numbers for people who do not count?’, my starting point is the classic Dedekindian question, ‘What are numbers?’ But instead of going metaphysical, I examine people’s actual counting habits (including among cultures that have very few number words). The idea is that Benacerraf’s (1973) challenge of how we can have epistemic access to these elusive entities, numbers, should be addressed in an empirically informed way, including data from developmental psychology and from anthropological studies (among others). There is a sense in which all there is to explain is the socially enforced practice of counting, which then gives rise to basic arithmetic (from there on, to the rest of mathematics). And here again, Wittgenstein was on the right track with the following observation in the Remarks on the Foundations of Mathematics:
This is how our children learn sums; for one makes them put down three beans and then another three beans and then count what is there. If the result at one time were 5, at another 7 (say because, as we should now say, one sometimes got added, and one sometimes vanished of itself), then the first thing we said would be that beans were no good for teaching sums. But if the same thing happened with sticks, fingers, lines and most other things, that would be the end of all sums.
“But shouldn’t we then still have 2 + 2 = 4?” – This sentence would have become unusable. (RFM, § 37)
For my graduate seminar on attention last night we read papers outside my usual range of expertise, on the intersection of attention and culture. We read Nisbett et al.'s Culture and Systems of Thought and Hedden et al.'s Cultural Influences on the Neural Substrates of Attentional Control. Both are fascinating and worth a read. But the Nisbett et al. article, in particular, is full of ideas that may be interesting to readers of New APPS. Here are some of the points I found most salient:
The article maintains that different cultural groups have different, opposed styles of argument. Specifically, "Westerners" are committed to avoiding the appearance of contradiction as part of an analytic style of argumentation, but "East Asians" embrace contradiction as part of "naive dialecticism." They give an example of one study that tests this claim:
In comment #9 at this post, Susan makes a kind of canonical case I've heard from lots of assessment people.
First, I should say that I agree with 95% of the intended answers to Susan's rhetorical questions. We should be much clearer about what we want our students to get out of their degrees, and we should put in the hard work of assessing the extent to which we are successful.
But "assessment" in contemporary American bureaucracies almost always accomplishes exactly the opposite of the laudable goals that Susan and I share. And there are deep systematic reasons for this. Below, I will first explain three fallacies and then explain why everyone involved in assessment faces enormous pressure to go along with these fallacies. Along the way I hope to make it clear how this results in "assessment" making things demonstrably worse.**
Whether animals can experience romantic love is unknown. But there is some evidence that they are capable of experiencing the same range of emotions that we do. The brains of many mammals are surprisingly similar to the human brain. Take as an example the brain of a cat. A cat’s brain is small compared to ours, occupying only about one percent of their body mass compared to about two percent in an average human. But size doesn't always matter. Neanderthals, the hominids that went extinct more than twenty thousand years ago, had bigger brains than Homo sapiens, but they probably weren’t smarter than the Homo sapiens who beat them in the survival game. Surface folding and brain structure matter more than brain size. The brains of cats have an impressive degree of surface folding and a structure that is about ninety percent similar to ours. This suggests that they could indeed be capable of experiencing romantic love. But we will probably never know for sure.
I'm thinking (again) about beeping people during aesthetic experiences. The idea is this. Someone is reading a story, or watching a play, or listening to music. She has been told in advance that a beep will sound at some unexpected time, and when the beep sounds, she is to immediately stop attending to the book, play, or whatever, and note what was in her stream of experience at the last undisturbed moment before the beep, as best she can tell. (See Hurlburt 2011 for extensive discussion of such "experience sampling" methods.)
This summer I learned to walk. More precisely, I learned to walk normally. My gait had gotten unsteady, and I was dragging my right foot. Work with an excellent physical therapist helped straighten me out. But balance problems, tremors, and hesitations continued.
At the beginning of August I was diagnosed with Parkinson’s. I want to describe the phenomenology of my version of it, and begin thinking through its implications for the philosophy of perception and action. But first the disease itself.
Evolutionary accounts of deductive reasoning have been enjoying a fair amount of popularity in the last decades. Some of those who have defended views of this kind are Cooper, Maddy, and more recently Joshua Schechter. The basic idea is that an explanation for why we have developed the ability to reason deductively (if indeed we have developed this ability!) is that it conferred a survival advantage on those individuals among our ancestors who possessed it, who in turn were reproductively more successful than those individuals in the ancestral population who did not possess this ability. In other words, deductive reasoning would have arisen as an adaptation in humans (and possibly in non-human animals too, but I will leave this question aside). Attractive though it may seem at first sight (and I confess to having had a fair amount of sympathy for it for a while), this approach faces a number of difficulties, and in my opinion is ultimately untenable. (Some readers will not be surprised to hear this, if they recall a previous post where I argue that deductive reasoning is best seen as a cultural product, not as a biological, genetically encoded endowment in humans.)
In this post, I will spell out what I take to be the main flaw of such accounts, namely the fact that they seem incompatible with the empirical evidence on deductive reasoning in human reasoners as produced by experimental psychology. In this sense, these accounts fall prey to the same mistake that plagues many evolutionary accounts of female orgasm, in particular those according to which female orgasm has arisen as an adaptation in the human species. To draw the parallel between the case for deductive reasoning and the case for the female orgasm, I will rely on Elisabeth Lloyd’s fantastic book The Case of the Female Orgasm (which, as it so happens, I had the pleasure of re-reading during my vacation last
The debate around the Black Pete tradition in the Netherlands rages on: while many outspoken voices have presented different arguments on why the tradition should be at the very least severely modified (I recommend in particular the pieces by Asha ten Broeke), a very large portion of the population has expressed its support and fondness for the tradition as is, in particular by ‘liking’ a Facebook page, a ‘Pete-tion’, defending the continuation of the tradition. As of now, more than 2 million Facebook users have ‘liked’ this page, and last Saturday supporters gathered for a rally in
Interestingly, in its most recent update, the Pete-tion FB page (Pietitie, in Dutch) proudly announces that it is ‘against racism, let us be clear on that’. Now, what they mean by ‘racism’ here must surely be different from what Black Pete critics mean when they describe the tradition as racist. More generally, and as is often the case, it seems that those involved in the debate may at least to some extent be talking past each other because different meanings of ‘racism’ are floating around. (To be clear, I do not think this is a merely verbal dispute; there does seem to be a core of true disagreement.) Well, one of the skills we philosophers pride ourselves on is the skill of language precisification and conceptual analysis. So in what follows I’ll attempt to distinguish some of the different meanings of racism underpinning the debate, in the hope that such a clarification may somehow contribute to its advancement. (Full disclosure: what I really want to accomplish is to convince my many intelligent, well-meaning friends who do not see the racist component of the tradition that it is there, and that it is problematic.)
In a recent post I introduced a distinction between two types of pragmatic functions corresponding to two directions of fit with social norms. An invocative function was one whereby a properly performed speech act contributed to the institution of a social norm - say, giving a warranted order - while a reflective function is one that asserts the (prior) existence of a norm. Here, I first mention two other independent dimensions of variation and develop a single example that has been on my mind
One annoying feature of re-reading other people's scholarship is the possibility of discovering that one's treasured ideas may well have been anticipated by others. Memory and self-deception can be funny like that. So, it's probably not uncommon that folk fail to attribute to others what is due to them without realizing they are in the wrong. Even when the mistakes are honest, they still involve injustices, and these may be quite large given that they may, say, reinforce gender-related unfairness, too. Such injustices are not easy to excuse or forgive when one feels that one's work or presence has been silenced or unfairly ignored. Even so, we try to cope with this kind of injustice. Yet, faking data or copying (and pasting) texts without attribution is legitimately an unpardonable sin in the Academy, especially if it is part of a pattern of such (plagiarism/faking) cases. One might be willing to give a student a second chance, but recoil from letting a confirmed fraudulent senior scholar back into the fold. Paradoxically, many of us treat such cases as a worse sin than many crimes on the 'outside.' (Coetzee's Disgrace reflects on this.)
It is, thus, understandable that the good folk at Retractionwatch react with dismay that prominent scholars, including philosophy's very own Philip Pettit, are willing to endorse Marc Hauser's forthcoming book, Evilicious. What really rankles Retractionwatch is that Hauser has not owned up to his record of misconduct and has only acknowledged “mistakes.” (As they write: "But we do prefer when those given a second chance acknowledge that they did something wrong. That might start with noting a retraction, instead of continuing to list the retracted paper among your publications.")
Synesthesia is a condition in which attributes, such as color, shape, sound, smell and taste, bind together in unusual ways, giving rise to atypical experiences, mental images or thoughts. For example, a synesthete may experience numbers and letters printed in black as having their own unique colors, or spoken words as having specific tastes normally only associated with food and drinks. People who have the condition usually have had it since early childhood, though there are also cases in which people acquire it after brain injury or disease later in life.
One hypothesis about how synesthesia develops in early childhood suggests that sometimes the brain fails to get rid of structural connections between neural regions that do not normally project to each other. In early childhood the brain develops many more neural connections than it ends up using. During development, pruning processes eliminate a large number of these structural connections. We don't know much about the principles underlying neural pruning, though some of the connections that the brain prunes away appear to be pathways that are not needed. So, one possibility is that the pruning processes in synesthetes are less effective compared to those in non-synesthetes, and that some pathways that are pruned away in most people remain active in synesthetes.
Let me here observe too, continued CLEANTHES, that this religious argument, instead of being weakened by that scepticism so much affected by you, rather acquires force from it, and becomes more firm and undisputed. To exclude all argument or reasoning of every kind, is either affectation or madness. The declared profession of every reasonable sceptic is only to reject abstruse, remote, and refined arguments; to adhere to common sense and the plain instincts of nature; and to assent, wherever any reasons strike him with so full a force that he cannot, without the greatest violence, prevent it. Now the arguments for Natural Religion are plainly of this kind; and nothing but the most perverse, obstinate metaphysics can reject them. Consider, anatomise the eye; survey its structure and contrivance; and tell me, from your own feeling, if the idea of a contriver does not immediately flow in upon you with a force like that of sensation. The most obvious conclusion, surely, is in favour of design; and it requires time, reflection, and study, to summon up those frivolous, though abstruse objections, which can support Infidelity. Who can behold the male and female of each species, the correspondence of their parts and instincts, their passions, and whole course of life before and after generation, but must be sensible, that the propagation of the species is intended by Nature? Millions and millions of such instances present themselves through every part of the universe; and no language can convey a more intelligible irresistible meaning, than the curious adjustment of final causes. To what degree, therefore, of blind dogmatism must one have attained, to reject such natural and such convincing arguments?--Hume, Dialogues 3.
In her post yesterday, Helen de Cruz asserted that Cleanthes "makes an important empirical claim, namely that belief in a designer flows spontaneously, irresistibly and non-inferentially from our consideration of order in the natural world." Because Helen only quoted the sentence with "anatomise the eye," she left me the straightforward rejoinder that according to Hume such anatomizing always presupposes expert judgment/taste/cultivation. In response, the up-and-coming Hume scholar Liz Goodnick pointed to more evidence for Helen's position. (I think it is a bit misleading to call that evidence "Later in Part III"--it is the very same paragraph, and part of a single, non-trivial argument, but strictly speaking Goodnick is correct.) I am afraid that in the larger context the claim by Helen and Liz cannot be sustained, or so I argue below the fold in some detail (apologies).
Everyone working on emotional development (see here for a previous discussion on maternal deprivation studies using monkeys) has read about the devastating toll life in the Romanian orphanage system exacts. This Aeon Magazine piece discusses the ethics of investigation, intervention, and policy advocacy in a study comparing foster care and orphanages.
The latest installment of The Stone is a piece by Gregory Currie (Nottingham) in which he critically examines the claim made by several prominent people – he mentions in particular Martha Nussbaum in Love’s Knowledge – that reading “great literature make[s] us better”. He points out that in the philosophical debates so far, proponents of this view have presented arguments on how literature and fiction might have this effect, but no compelling evidence that it actually does. He adds the parenthetical remark: [imagine] a schools inspector reported on the efficacy of our education system by listing ways that teachers might be helping students to learn; the inspector would be out of a job pretty soon.
When reading the piece, I was intrigued by the claim that there is no, or hardly any, empirical evidence on the effects of reading literature on moral traits such as empathy, kindness, etc. Currie seems correct in noting that authors such as Nussbaum and others coming from the philosophical perspective do not refer to empirical data potentially corroborating the position; but is it true that there are virtually no empirical results on the issue?
This semester, I’ve experimented with anonymous grading for the first time. Now that I think about it, it is a mystery why it took me so long to realize the obviousness of it, but better late than not at all, I suppose. Like many other countries, the Netherlands does not have a tradition of anonymous grading at all, but I recently found out that in the UK it is fairly common practice, showing that it can be done. This was one of the topics of Jennifer Saul’s recent Aspasia Lecture in Groningen, and I am happy to report that she made such a good case for it that my colleagues on the evaluation board of the Faculty are already looking into adopting anonymous grading.
Why should it be done? Well, for those of you familiar with the literature on implicit biases, the answer will not be hard to find: we inevitably rely on stereotypes and preconceptions to perceive and judge people, which serve as convenient heuristic shortcuts. This can have a negative effect on how we judge members of stigmatized groups (based on gender, ethnicity, class, geographical origin etc.), and it can also unfairly boost our judgment of privileged groups. With grading in particular, it has been noticed that anonymity significantly increases the average grades of members of these stigmatized groups, simply because their work is looked upon more objectively, without the association to a particular person. (See this informative report by the British National Union of Students.)
Aarøe Nissen is a 22-year-old math student at Aarhus University, Denmark, with extraordinary memory abilities. He has competed in memory sports for several years. He can recite the number Pi to more than 20,000 decimal places, recall thousands of names, faces and historical dates, and remember the order of a pack of cards.
(This post is dedicated to my friends Marian and Jan-Willem, who last week welcomed a lovely baby girl into the world. They will most certainly talk to her an awful lot.)
Why is it that children from socioeconomically disadvantaged backgrounds tend to have lower school performance than children from wealthier environments? This may seem like a naïve question at first, but understanding the exact mechanisms at work proves to be much more challenging than one might think. Most likely, the phenomenon is due to a conjunction of factors involving the level of education of the primary caregiver, parental involvement, a stable environment, and adequate nutrition, among others. (Some would like to see ‘genetic predisposition’ on the list. Now, while this cannot be ruled out, I take it that the currently available data are too tangled up with the above-mentioned social factors to allow for an analysis of the genetic component in isolation.)
A recent post at the Fixes blog of the New York Times (Fixes and The Stone are both members of the larger Opinionator family) highlights one specific element: how much people from different socioeconomic backgrounds actually talk to their infants. As reported in the 1995 book Meaningful Differences in the Everyday Experience of Young American Children (by Betty Hart and Todd R. Risley), it turns out that poorer parents talk considerably less to and around their babies than more affluent parents:
Here I am, back from my vacation and trying desperately to catch up with the accumulated work and all the interesting events in internet-world of the last week. At NewAPPS alone there are quite a few posts I want to react to, in particular Eric’s post on the genealogy of genealogy. But let me start by commenting on the ‘hot topic’ of the moment, at least among philosophy geeks: L.A. Paul’s draft paper on how decision theory is useless when it comes to making life-transforming decisions such as having a child. Eric and Helen already have nice posts up reacting to the paper, but I hope there is still room for one more NewAPPS post on the topic.
Perhaps the first thing to notice, which comes up only at the end of Paul’s paper, is that the very idea of having children being a matter of choice/decision is a very recent one. For the longest part of human history, and for the largest portion of the human population (excluding, for example, some of those who took up religious vows), finding a partner and procreating was simply the normal course of events, no questions asked. (Indeed, the Christian faith even views it as a moral obligation.) It is only fairly recently, possibly only towards the end of the 20th century, that having a child became a matter of choice, at least for some people, in some parts of the planet. Contributing factors are the availability of contraceptive methods and a wider range of life options which are now deemed ‘acceptable’, or at least more acceptable than before. (People who choose to remain child-free, in particular women, are still often looked at
In the supernatural thriller Memory, written by Bennett Joshua Davlin, Dr. Taylor Briggs, the leading expert on memory, examines a patient found nearly dead in the Amazon. While checking on the patient, Taylor is accidentally exposed to a psychedelic drug that unlocks memories of a killer who committed murders many years before Taylor was born. The killer turns out to be his ancestor. Taylor’s memories, despite being of events Taylor never experienced, are very detailed. They contain the point of view of his ancestor and the full visual scenario experienced by the killer. Although the story is supernatural, it brings up an interesting question. Is it possible to inherit our ancestors’ memories? The answer is not black and white. It depends on what we mean by ‘memory’. The story is far-fetched: there is no evidence or credible scientific theory suggesting that we can inherit specific episodic memories of events that our ancestors experienced. In other words, it’s highly unlikely that you will suddenly remember your great-great-grandfather’s wedding day or your great-great-grandmother’s struggle in childbirth.
Here is an excellent interview with Jesse Prinz (H/T Markus Schlosser) on the themes of his new book, Beyond Human Nature (which I still haven’t gotten around to reading). The main idea of the book is that experience and culture, as opposed to genes and biology, play a much larger role in determining our behavior than is often thought. Some excerpts:

“If we are interested in differences in intelligence, the thing we should be interested in is learning and culture.”

“Brazilians are super-nice.”

I find myself agreeing with pretty much everything that Prinz says in the interview (including the bit about Brazilians…), which is not so surprising, given that, like him, I am very much of a ‘nurture-culture’ person on the nature-nurture dimension. (A bit of self-promotion: here is a recent paper of mine, "A dialogical account of deductive reasoning as a case study for how culture shapes cognition", forthcoming in the Journal of Cognition and Culture.) But more importantly, to my mind he manages to set up the debate in a very subtle and informative way, so I very much recommend the interview to anyone interested in this debate. (Btw, I’ve posted on my enthusiasm for his work before.)
In the 1980s, Ruse wrote a series of important papers that revived evolutionary ethics. The debate on the implications of evolved moral intuitions for ethics remains very active up to today (see e.g., this conference that I'll be attending in a couple of hours, at least if the British railway system isn't disrupted by half an inch of snow!). Contemporary evolutionary ethics can build on a wealth of research, for instance, in the cognitive neuroscience of morality, developmental psychology, and the study of altruism in animals. But the metaethics of the folk remains a relatively understudied area. Are people intuitive moral realists? If so, what is the connection between metaethics and behavior?
Ruse hypothesized that humans are intuitive moral realists, and that this metaethical intuition has an evolved function: "human beings function better if they are deceived by their genes into thinking that there is a disinterested objective morality binding upon them, which all should obey" (Ruse & Wilson, 1986, 179). Ruse thought that if everyone regarded morality as subjective, as merely a matter of taste or convention, our social systems would collapse. Intuitive moral realism was thus a key component of human social life: altruistic behavior is held together by moral beliefs, which are in turn cemented by intuitive moral realism. As Ruse wrote later on: "Substantive morality stays in place as an effective illusion because we think that it is no illusion but the real thing" (Ruse, 2010, 310).
When Ruse first formulated this hypothesis, it was by no means clear that humans were intuitive moral realists. Nor was it clear to what extent intuitive moral realism, if present at all, helped us to act more morally. In the meantime, there is some empirical work on this, which I'll discuss briefly below the fold.