Evolutionary accounts of deductive reasoning have enjoyed a fair amount of popularity in recent decades. Among those who have defended views of this kind are Cooper, Maddy, and more recently Joshua Schechter. The basic idea is that an explanation for why we have developed the ability to reason deductively (if indeed we have developed this ability!) is that it conferred a survival advantage on those of our ancestors who possessed it, who in turn were reproductively more successful than those in the ancestral population who did not. In other words, deductive reasoning would have arisen as an adaptation in humans (and possibly in non-human animals too, but I will leave this question aside). Attractive though it may seem at first sight (and I confess to having had a fair amount of sympathy for it for a while), this approach faces a number of difficulties, and in my opinion is ultimately untenable. (Some readers will not be surprised to hear this, if they recall a previous post where I argue that deductive reasoning is best seen as a cultural product, not as a biological, genetically encoded endowment in humans.)
In this post, I will spell out what I take to be the main flaw of such accounts, namely the fact that they seem incompatible with the empirical evidence on deductive reasoning in human reasoners as produced by experimental psychology. In this sense, these accounts fall prey to the same mistake that plagues many evolutionary accounts of female orgasm, in particular those according to which female orgasm has arisen as an adaptation in the human species. To draw the parallel between the case for deductive reasoning and the case for the female orgasm, I will rely on Elisabeth Lloyd’s fantastic book The Case of the Female Orgasm (which, as it so happens, I had the pleasure of re-reading during my vacation last week).
The debate around the Black Pete tradition in the Netherlands rages on: while many outspoken voices have presented different arguments on why the tradition should be at the very least severely modified (I recommend in particular the pieces by Asha ten Broeke), a very large portion of the population has expressed its support and fondness for the tradition as is, in particular by ‘liking’ a Facebook page, a ‘Pete-tion’, defending the continuation of the tradition. As of now, more than 2 million Facebook users have ‘liked’ this page, and last Saturday supporters gathered for a rally in The Hague.
Interestingly, in its most recent update, the Pete-tion FB page (Pietitie, in Dutch) proudly announces that it is ‘against racism, let us be clear on that’. Now, what they mean by ‘racism’ here must surely be different from what Black Pete critics mean when they describe the tradition as racist. More generally, and as is often the case, it seems that those involved in the debate may at least to some extent be talking past each other because different meanings of ‘racism’ are floating around. (To be clear, I do not think this is a merely verbal dispute; there does seem to be a core of true disagreement.) Well, one of the skills we philosophers pride ourselves on is the skill of language precisification and conceptual analysis. So in what follows I’ll attempt to distinguish some of the different meanings of racism underpinning the debate, in the hope that such a clarification may somehow contribute to its advancement. (Full disclosure: what I really want to accomplish is to convince my many intelligent, well-meaning friends who do not see the racist component of the tradition that it is there, and that it is problematic.)
One annoying feature of re-reading other people's scholarship is the possibility of discovering that one's treasured ideas may well have been anticipated by others. Memory and self-deception can be funny like that. So it's probably not uncommon that people fail to attribute to others what is due to them without realizing they are in the wrong. Even when the mistakes are honest, they still involve injustices, and these may be quite large given that they may, say, reinforce gender-related unfairness, too. Such injustices are not easy to excuse or forgive when one feels that one's work or presence has been silenced or unfairly ignored. Even so, we try to cope with this kind of injustice. Yet faking data or copying (and pasting) texts without attribution is legitimately an unpardonable sin in the Academy, especially if it is part of a pattern of such (plagiarism/faking) cases. One might be willing to give a student a second chance, but recoil from letting a confirmed fraudulent senior scholar back into the fold. Paradoxically, many of us treat such cases as worse sins than many crimes on the 'outside.' (Coetzee's Disgrace reflects on this.)
It is, thus, understandable that the good folk at Retractionwatch react with dismay that prominent scholars, including philosophy's very own Philip Pettit, are willing to endorse Marc Hauser's forthcoming book, Evilicious. What really rankles Retractionwatch is that Hauser has not owned up to his record of misconduct and has "only acknowledged 'mistakes.'" (As they write: "But we do prefer when those given a second chance acknowledge that they did something wrong. That might start with noting a retraction, instead of continuing to list the retracted paper among your publications.")
[This post is also cross-posted at our Psychology Today blog]
Synesthesia is a condition in which attributes, such as color, shape, sound, smell and taste, bind together in unusual ways, giving rise to atypical experiences, mental images or thoughts. For example, a synesthete may experience numbers and letters printed in black as having their own unique colors, or spoken words as having specific tastes normally only associated with food and drinks. People who have the condition usually have had it since early childhood, though there are also cases in which people acquire it after brain injury or disease later in life.
One hypothesis about how synesthesia develops in early childhood suggests that sometimes the brain fails to get rid of structural connections between neural regions that do not normally project to each other. In early childhood the brain develops many more neural connections than it ends up using. During development, pruning processes eliminate a large number of these structural connections. We don't know much about the principles underlying neural pruning, though some of the connections that the brain prunes away appear to be pathways that are not needed. So, one possibility is that the pruning processes in synesthetes are less effective compared to those in non-synesthetes, and that some pathways that are pruned away in most people remain active in synesthetes.
Let me here observe too, continued CLEANTHES, that this religious argument, instead of being weakened by that scepticism so much affected by you, rather acquires force from it, and becomes more firm and undisputed. To exclude all argument or reasoning of every kind, is either affectation or madness. The declared profession of every reasonable sceptic is only to reject abstruse, remote, and refined arguments; to adhere to common sense and the plain instincts of nature; and to assent, wherever any reasons strike him with so full a force that he cannot, without the greatest violence, prevent it. Now the arguments for Natural Religion are plainly of this kind; and nothing but the most perverse, obstinate metaphysics can reject them. Consider, anatomise the eye; survey its structure and contrivance; and tell me, from your own feeling, if the idea of a contriver does not immediately flow in upon you with a force like that of sensation. The most obvious conclusion, surely, is in favour of design; and it requires time, reflection, and study, to summon up those frivolous, though abstruse objections, which can support Infidelity. Who can behold the male and female of each species, the correspondence of their parts and instincts, their passions, and whole course of life before and after generation, but must be sensible, that the propagation of the species is intended by Nature? Millions and millions of such instances present themselves through every part of the universe; and no language can convey a more intelligible irresistible meaning, than the curious adjustment of final causes. To what degree, therefore, of blind dogmatism must one have attained, to reject such natural and such convincing arguments?--Hume, Dialogues 3.
In her post yesterday, Helen de Cruz asserted that Cleanthes "makes an important empirical claim, namely that belief in a designer flows spontaneously, irresistibly and non-inferentially from our consideration of order in the natural world." Because Helen only quoted the sentence beginning with "anatomise the eye," she left me the straightforward rejoinder that according to Hume such anatomizing always presupposes expert judgment/taste/cultivation. In response, the up-and-coming Hume scholar, Liz Goodnick, pointed to more evidence for Helen's position. (I think it is a bit misleading to call that evidence "Later in Part III"--it is the very same paragraph, and part of a single, non-trivial argument, but strictly speaking Goodnick is correct.) I am afraid that in the larger context the claim by Helen and Liz cannot be sustained, or so I argue below the fold in some detail (apologies).
Everyone working on emotional development (see here for a previous discussion on maternal deprivation studies using monkeys) has read about the devastating toll life in the Romanian orphanage system exacts. This Aeon Magazine piece discusses the ethics of investigation, intervention, and policy advocacy in a study comparing foster care and orphanages.
The latest The Stone installment is a piece by Gregory Currie (Nottingham) where he critically examines the claim made by several prominent people – he mentions in particular Martha Nussbaum in Love’s Knowledge – that reading “great literature make[s] us better”. He points out that in the philosophical debates so far, proponents of this view have presented arguments on how literature and fiction might have this effect, but no compelling evidence that it actually does. He adds the parenthetical remark:
Suppose a schools inspector reported on the efficacy of our education system by listing ways that teachers might be helping students to learn; the inspector would be out of a job pretty soon.
When reading the piece, I was intrigued by the claim that there is no, or hardly any, empirical evidence on the effects of reading literature for moral traits such as empathy, kindness etc. Currie seems correct in noting that authors such as Nussbaum and others coming from the philosophical perspective do not refer to empirical data potentially corroborating the position; but is it true that there are virtually no empirical results on the issue?
This semester, I’ve experimented with anonymous grading for the first time. Now that I think about it, it is a mystery why it took me so long to see something so obvious, but better late than never, I suppose. Like many other countries, the Netherlands does not have a tradition of anonymous grading at all, but I recently found out that in the UK it is fairly common practice, showing that it can be done. This was one of the topics of Jennifer Saul’s recent Aspasia Lecture in Groningen, and I am happy to report that she made such a good case for it that my colleagues on the evaluation board of the Faculty are already looking into adopting anonymous grading systematically.
Why should it be done? Well, for those of you familiar with the literature on implicit biases, the answer will not be hard to find: we inevitably rely on stereotypes and preconceptions to perceive and judge people, which serve as convenient heuristic shortcuts. This can have a negative effect on how we judge members of stigmatized groups (based on gender, ethnicity, class, geographical origin etc.), and it can also unfairly boost our judgment of privileged groups. With grading in particular, it has been noticed that anonymity significantly increases the average grades of members of these stigmatized groups, simply because their work is judged more objectively when it is not associated with a particular person. (See this informative report by the British National Union of Students.)
[Cross-posted from our Psychology Today blog]
By Berit Brogaard and Kristian Marlow
Mark Aarøe Nissen is a 22-year-old math student at Aarhus University, Denmark, with extraordinary memory abilities. He has competed in memory sports for several years. He can recite the number Pi to more than 20,000 decimal places, recall thousands of names, faces and historical dates, and remember the order of a pack of cards.
(This post is dedicated to my friends Marian and Jan-Willem, who last week welcomed a lovely baby girl into the world. They will most certainly talk to her an awful lot.)
Why is it that children from socioeconomically disadvantaged backgrounds tend to have lower school performance than children from wealthier environments? This may seem like a naïve question at first, but understanding the exact mechanisms in place proves to be much more challenging than one might think. Most likely, the phenomenon is due to a conjunction of factors: the primary caregiver’s level of education, parental involvement, a stable home environment, and adequate nutrition, among others. (Some would like to see ‘genetic predisposition’ on the list. Now, while this cannot be ruled out, I take it that the currently available data are too tangled up with the above-mentioned social factors to allow for an analysis of the genetic component in isolation.)
A recent post at the Fixes blog of the New York Times (Fixes and The Stone are both members of the larger Opinionator family) highlights one specific element: how much people from different socioeconomic backgrounds actually talk to their infants. As reported in the 1995 book Meaningful Differences in the Everyday Experience of Young American Children (by Betty Hart and Todd R. Risley), it turns out that poorer parents talk considerably less to and around their babies than more affluent parents:
Here I am, back from my vacation and trying desperately to catch up with the accumulated work and all the interesting events in internet-world of the last week. At NewAPPS alone there are quite a few posts I want to react to, in particular Eric’s post on the genealogy of genealogy. But let me start by commenting on the ‘hot topic’ of the moment, at least among philosophy geeks: L.A. Paul’s draft paper on how decision theory is useless when it comes to making life-transforming decisions such as having a child. Eric and Helen already have nice posts up reacting to the paper, but I hope there is still room for one more NewAPPS post on the topic.
Perhaps the first thing to notice, which comes up only at the end of Paul’s paper, is that the very idea of having children being a matter of choice/decision is a very recent one. For the longest part of human history, and for the largest portion of the human population (excluding, for example, some of those who took up religious vows), finding a partner and procreating was simply the normal course of events, no questions asked. (Indeed, Christian faith even views it as a moral obligation.) It is only fairly recently, possibly only towards the end of the 20th century, that having a child became a matter of choice at least for some people, in some parts of the planet. Contributing factors are the availability of contraceptive methods, and a wider range of life options which are now deemed ‘acceptable’, or at least more acceptable than before. (People who choose to remain child-free, in particular women, are still often looked at with suspicion.)
[cross-posted from our Psychology Today blog]
In the supernatural thriller Memory, written by Bennett Joshua Davlin, Dr. Taylor Briggs, who is the leading expert on memory, examines a patient found nearly dead in the Amazon. While checking on the patient, Taylor is accidentally exposed to a psychedelic drug that unlocks memories of a killer who committed murders many years before Taylor was born. The killer turns out to be his ancestor. Taylor’s memories, despite being of events Taylor never experienced, are very detailed. They contain the point of view of his ancestor and the full visual scenario experienced by the killer.
Though the movie is supernatural, it brings up an interesting question. Is it possible to inherit our ancestors’ memories? The answer is not black and white. It depends on what we mean by ‘memory’. The story of the movie is farfetched: there is no evidence or credible scientific theory suggesting that we can inherit specific episodic memories of events that our ancestors experienced. In other words, it’s highly unlikely that you will suddenly remember your great-great-grandfather’s wedding day or your great-great-grandmother’s struggle in childbirth.
Here is an excellent interview with Jesse Prinz (H/T Markus Schlosser) on the themes of his new book, Beyond Human Nature (which I still haven’t gotten around to reading). The main idea of the book is that experience and culture, as opposed to genetics and biology, play a much larger role in determining our behavior than is often thought. Some excerpts:
“If we are interested in differences in intelligence, the thing we should be interested in is learning and culture.”
“Brazilians are super-nice.”
I find myself agreeing with pretty much everything that Prinz says in the interview (including the bit about Brazilians…), which is not so surprising, given that, like him, I am very much of a ‘nurture-culture’ person on the nature-nurture dimension. (A bit of self-promotion: here is a recent paper of mine, "A dialogical account of deductive reasoning as a case study for how culture shapes cognition", forthcoming in the Journal of Cognition and Culture.) But more importantly, to my mind he manages to set up the debate in a very subtle and informative way, so I very much recommend the interview to anyone interested in this debate. (Btw, I’ve posted on my enthusiasm for his work before.)
In the 1980s, Ruse wrote a series of important papers that revived evolutionary ethics. The debate on the implications of evolved moral intuitions for ethics remains very active up to today (see e.g., this conference that I'll be attending in a couple of hours, at least if the British railway system isn't disrupted by half an inch of snow!). Contemporary evolutionary ethics can build on a wealth of research, for instance, in the cognitive neuroscience of morality, developmental psychology, and the study of altruism in animals. But the metaethics of the folk remains a relatively understudied area. Are people intuitive moral realists? If so, what is the connection between metaethics and behavior?
Ruse hypothesized that humans are intuitive moral realists, and that this metaethical intuition has an evolved function: "human beings function better if they are deceived by their genes into thinking that there is a disinterested objective morality binding upon them, which all should obey" (Ruse & Wilson, 1986, 179). Ruse thought that if everyone believed that morality was subjective, merely a matter of taste or convention, our social systems would collapse. Intuitive moral realism was thus a key component of human altruism: altruistic behavior is held together by moral beliefs, which in turn are cemented by intuitive moral realism. As Ruse wrote later on: "Substantive morality stays in place as an effective illusion because we think that it is no illusion but the real thing" (Ruse, 2010, 310).
When Ruse first formulated this hypothesis, it was by no means clear that humans were intuitive moral realists. Also, it was not clear to what extent intuitive moral realism, if at all, helped us to act more morally. In the meantime, there is some empirical work on this, which I'll discuss briefly below the fold.
(Cross-posted at M-Phi)
A well-known phenomenon in the empirical study of human reasoning is the so-called Modus Ponens-Modus Tollens asymmetry. In reasoning experiments, participants almost invariably ‘do well’ with MP (or at least something that looks like MP – see below), but the rate for MT success drops considerably (from almost 100% for MP to around 70% for MT – Schroyens and Schaeken 2003). As a result, any theory purporting to describe human reasoning accurately must account for this asymmetry. Now, given that in classical logic (and in many non-classical systems) MP and MT are equally valid, plain vanilla classical logic fails rather miserably in this respect.
As noted by Oaksford and Chater (‘Probability logic and the Modus Ponens-Modus Tollens asymmetry in conditional inference’, in this 2008 book), some theories of human reasoning (mental rules, mental models) explain the asymmetry at what is known as the algorithmic level (a terminology proposed by Marr (1982)) – that is, in terms of the mental processes that (purportedly) implement deductive reasoning in a human mind. So according to these theories, performing MT is harder than performing MP (for a variety of reasons), which is why reasoners, while still trying to reason deductively, have difficulties with MT. Other theorists argue that participants are not in fact trying to reason deductively at all, so the asymmetry is not related to some presumed competence-performance gap. (Marr’s term for the general goal of the processes, rather than the processes themselves, is ‘computational level’ – the terminology is somewhat unnatural, but it has now become standard.) Oaksford and Chater are among those favoring an analysis at the computational level, in their case proposing a Bayesian, probabilistic account of human reasoning as a normative theory that not only explains but also justifies the asymmetry.
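To get a feel for the computational-level story, here is a toy calculation (my own illustrative sketch, not Oaksford and Chater's actual model): treat the conditional ‘if p then q’ as a high conditional probability P(q|p). Confidence in the MP conclusion then simply tracks P(q|p), while confidence in the MT conclusion tracks P(not-p|not-q), which can come out noticeably lower under plausible background beliefs:

```python
def mt_confidence(p_q_given_p, p_p, p_q):
    """P(not-p | not-q), given P(q|p) and the marginals P(p), P(q).

    Toy probabilistic reading of 'if p then q': the MP conclusion
    gets confidence P(q|p) directly, while MT requires this computation.
    """
    p_not_q = 1 - p_q
    p_not_q_and_p = (1 - p_q_given_p) * p_p      # P(not-q and p)
    p_not_q_and_not_p = p_not_q - p_not_q_and_p  # P(not-q and not-p)
    return p_not_q_and_not_p / p_not_q

# Illustrative (made-up) numbers: the conditional is highly believable,
# so MP endorsement would be ~0.9, but MT endorsement comes out lower.
print(round(mt_confidence(p_q_given_p=0.9, p_p=0.5, p_q=0.8), 2))  # → 0.75
```

With these entirely hypothetical numbers, the sketch mimics the experimental pattern: near-ceiling MP endorsement alongside markedly lower MT endorsement, without positing any performance failure on the reasoners' part.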
"So many tangles in life are ultimately hopeless that we have no appropriate sword other than laughter," said Gordon Allport, an American psychologist and one of the founders of the study of personality. Scientists have studied the effects of mirthful laughter, positive thinking and optimism on feelings of self-worth, mood disorders and depression since the 1970s.
In The Antidote: Happiness for People Who Can't Stand Positive Thinking, British author and Guardian feature writer Oliver Burkeman takes issue with "the cult of optimism," the conviction that phony smiles, jovial laughter and positive thinking are a surefire path to happiness. Positive thinking is the problem, not the solution, Burkeman teaches us. He believes people have come to trust that a "Don't worry. Be happy" attitude toward life is the only route to contentment. People seem to be of the conviction that if you have negative thoughts and see your own limits, you cannot be happy. So to be happy we must set out on a journey that changes our mindset from negative and inhibited to enthusiastic, fervent and animated. We are told to visualize our dreams and goals, eliminate the word "impossible" from our vocabulary and put a big fabricated smile on our physiognomy. All that, Burkeman says, can actually lead to unhappiness.
Diederik Stapel, also known as the ‘Lying Dutchman’, was the protagonist of one of the nastiest cases of professional misconduct in experimental psychology, amidst a recent surge of such cases. The committee in charge of investigating the extent of his fraudulent conduct has recently announced its conclusions. As could have been expected, it looks very bad, also affecting a number of his collaborators who, due to negligence, unwittingly allowed him to engage in such practices (article here in Dutch).
Stapel now says he feels ‘sadness and shame’, but in a surprising turn of events, he has also been writing a diary since the whole commotion started, parts of which he is planning to publish in book form! (Article in Dutch) Is it “a way to try to make money off of his terrible decisions”, as suggested by Bryce Huebner (to whom I owe the pointer to the article on Twitter)? Or is it a case of someone who is so used to being in the spotlight that any form of public attention is welcome? I don’t know what to make of it, but I suppose one shouldn't be too surprised by his penchant for poor judgment.
By Berit Brogaard and Kristian Marlow (Cross-posted from Psychology Today)
Whether "Lucy in the Sky with Diamonds" was a product of the Beatles' experimentation with psychedelic drugs is still a subject of great debate among Beatles fans and music experts. But it was no secret that the lyrics of many of the pop legends' famous tracks were inspired by LSD, including "I Am the Walrus," "Tomorrow Never Knows," and "What's the New Mary Jane." The Beatles' creating music during a hallucinogenic trip is not a rare case of acid-driven creation, invention or discovery. The double helix structure of DNA occurred to molecular biologist and neuroscientist Francis Crick while he was tripping on the Lucy drug, and low-level tech Kary Mullis hit on the idea behind Polymerase Chain Reaction (PCR), a now widely-used technique for amplifying a single piece of DNA by a factor of 100 billion, while cruising along the Pacific Coast Highway one night in his car on LSD.
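As a quick sanity check on that "100 billion" figure: PCR amplification is exponential, with each cycle ideally doubling the amount of DNA (real reactions are somewhat less efficient), so the number of cycles needed is just a base-2 logarithm:

```python
import math

target_fold = 100_000_000_000  # 100-billion-fold amplification
# Each cycle ideally doubles the DNA, so solve 2**n >= target_fold.
cycles = math.ceil(math.log2(target_fold))
print(cycles)       # → 37
print(2 ** cycles)  # → 137438953472, comfortably past 100 billion
```

So under the idealized doubling assumption, fewer than forty cycles already take a single DNA fragment past the 100-billion-fold mark.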
(Cross-posted at M-Phi)
In a recent paper, the eminent psychologist of reasoning P. Johnson-Laird says the following:
[T]he claim that naïve individuals can make deductions is controversial, because some logicians and some psychologists argue to the contrary (e.g., Oaksford & Chater, 2007). These arguments, however, make it much harder to understand how human beings were able to devise logic and mathematics if they were incapable of deductive reasoning beforehand.
This last claim strikes me as very odd, or at the very least as poorly formulated. (To be clear, I side with those, such as Oaksford and Chater, who think that deductive reasoning must be learned to be mastered and competently practiced by reasoners.) It looks like a doubtful inference to the best explanation: humans have in fact devised logic and mathematics, which are crucially based on the deductive method, so they must have been capable of deductive reasoning before that. Something like: birds had to have fully formed wings before they could fly – hmm, I don’t think so… Instead, the wing analogy suggests that there must be some precursors to deductive reasoning skills in untrained reasoners, but that the phylogeny of the deductive method (and to be clear, I’m speaking of cultural evolution here) would have been a gradual, self-feeding process.
[Cross-posting from Psychology Today]
The U.S. legal system gives preference to adult testimony in court cases. In 2002 Thomas Junta stood trial for killing a man in a Massachusetts hockey rink quarrel in 2000. His 12-year-old son, Quinlan Junta, was a key defense witness for his father, but his testimony did not convince the jury. Thomas was found guilty and sentenced to 6 to 10 years in state prison.
In his famous paper entitled "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information," cognitive psychologist George A. Miller of Princeton University argued that our working memory, our ability to hold information in our minds for a few seconds, is limited to about seven items, plus or minus two. That's fewer items than the ten digits of a regular out-of-town American phone number. In light of this you might wonder what to say about cases of people with extreme memory abilities. Chao Lu holds the Guinness world record in reciting Pi, a record dating back to 2005. Lu recalled 67,890 digits of Pi in 24 hours and 4 minutes, with an error at the 67,891st digit, saying it was a 5 when it was actually a 0. How is it possible to retrieve this quantity of information accurately through working memory? Is it magic? After talking to several people working in memory sports, we found out that it's not.
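One part of the standard answer is chunking: recoding long strings into a smaller number of larger units, so that working memory tracks a handful of chunks rather than dozens of raw digits. A minimal sketch of the idea (the four-digit group size is just an illustrative choice, not what any particular memory athlete uses):

```python
def chunk(digits, size=4):
    # Recode a digit string into fixed-size groups, so that memory
    # holds a few multi-digit chunks instead of many single digits.
    return [digits[i:i + size] for i in range(0, len(digits), size)]

first_20_pi_digits = "31415926535897932384"
print(chunk(first_20_pi_digits))
# → ['3141', '5926', '5358', '9793', '2384']
```

Twenty digits become five chunks, which sits comfortably within Miller's seven-plus-or-minus-two; competitive memorizers go much further by attaching vivid long-term-memory associations to each chunk.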
You are preparing for your upcoming exam, reading through thousands of pages. Suddenly you realize that you forgot to pay attention to what you actually read. You were reading along but your thoughts were elsewhere. "Good God," you think. "Hours of wasted time." You turn back the pages and start over. This time you make sure you pay close attention.
Recent research, to appear in the journal PNAS, suggests that you may be wasting even more time by doing that. You don't need attention to comprehend what you read or to do math. In fact, you may not even need consciousness. The researchers, at the Hebrew University of Jerusalem, used a technique known as Continuous Flash Suppression (CFS) to suppress conscious perception of stimuli in some 300 research participants for a short period of time. In CFS a series of rapidly changing images is presented to one eye, whereas a constant image is presented to the other. When using this technique, the constant image supposedly is not consciously perceived until after about 2 seconds.
At our lab in St. Louis we are working with several people with superhuman abilities, also known as “savant skills.” My research assistant Kristian Marlow and I are also currently finishing a book entitled The Superhuman Mind (under contract with an agency, see updates here). We are blogging about these cases almost daily over at Psychology Today. The following are four brief stories about some of the individuals we are working with.
Daniel Kahneman’s Thinking, Fast and Slow is making quite a splash (the other day, I saw at Bristol airport that it is currently at the top of the bestseller list for non-fiction -- naturally, it still can’t compete with Fifty Shades of Grey). I haven’t read it yet, but people whose opinion I hold in high esteem tell me that it has struck the difficult balance between being accessible to a wider audience and remaining scientifically accurate (for the most part at least). The book summarizes research on cognitive and reasoning biases of the last decades, a research program in which Kahneman himself has been a major player. The conceptual cornerstone of the book is the (still) popular distinction between System 1 and System 2, the two systems which allegedly run in parallel underpinning all our cognitive processes, and which often conflict with each other.
Now, as I’ve stated a few times before (here for example), I am no fan of System 1/System 2 talk at all (not even of weaker versions, the so-called dual-process theories of cognition), even though I agree that the empirical findings on cognitive biases should be taken very seriously. (I also agree that there is something to the idea of debiasing as suppressing automatic processes.) So I was curious to see how Kahneman himself introduces the System 1/System 2 distinction, and took a quick look at the book (my husband was reading it during our holiday of a few weeks ago, after having gotten it from me as a birthday present – that’s what you get for having a nerdy wife). The first thing that struck me is that, in footnote 20, he lists some of the pioneers of dual-system theories, including Jonathan Evans, Steve Sloman and Keith Stanovich, and adds: “I borrow the terms System 1 and System 2 from early writings of Stanovich and West that greatly influenced my thinking” (he refers to their 2000 BBS article on individual differences in reasoning). But what is puzzling is that Stanovich himself now overtly rejects the conceptualization of the distinction in terms of systems, which unduly suggests reified entities, and now uses the process terminology instead (same with Jonathan Evans).
But perhaps most striking is what Kahneman says in the conclusion of the book:
A few days ago Eric linked to a report by Lori Gruen (Ethics and Animals blog here; Wesleyan University website here) on the renewal of cruel maternal deprivation research on primates. The comments on Eric's post were such that we asked Lori to write a guest post for us. She graciously agreed; the post follows: [UPDATED 1:40 pm 16 Oct. See below for contact info for Madison's Provost.]
PAINFUL SCIENTIFIC FOOLISHNESS
“Major steps in scientific progress are sometimes followed closely by outbursts of foolishness. New discoveries have a way of exciting the imagination of the well-meaning and misguided, who see theoretical potentialities in new knowledge that may prove impossible to attain.” – Dr. Sherwin Nuland, Yale School of Medicine
Does the system we have in place to curtail scientific “outbursts of foolishness” and protect research subjects from “misguided” scientific curiosity work?
There was no oversight system in place back in the days when Harry Harlow’s experiments psychologically tormenting baby monkeys were making news. Surely that sort of horrible work in which infant primates are taken from their mothers to make them crazy wouldn’t be approved of today. On my recent visit to the University of Wisconsin I was shocked to learn otherwise. The oversight committee chairs told me they have never rejected a proposal. Not one.
And one of the protocols they did not reject is a renewal of maternal deprivation research. Disturbingly, it has been approved by not one, but two oversight committees. A psychiatry professor who has a distinguished record of research on anxiety disorders plans to separate more monkey babies from their mothers, leave them with wire “surrogates” covered in cloth (a practice developed by Harlow) to emulate “adverse early rearing conditions,” then pair them with another maternally deprived infant after 3-6 weeks of being alone. The infants will then be exposed to fearful conditions. The monkeys in this group and another group of young monkeys who will be reared with their mothers, will then be killed and their brains examined. (The experimental protocol is here.)
The research in question is a new type of maternal deprivation research designed to study anxiety by creating adverse early rearing conditions and then exposing the maternally deprived young monkeys to a snake and other frightening stimuli. The monkeys will be killed after the experiment is over and their brains will be studied. I believe this experiment is unethical and I also think it violates the spirit, if not the promulgated regulations, of the Animal Welfare Act which explicitly requires that the psychological well-being of primates be promoted (not intentionally destroyed).--Lori Gruen
In 2007, a study by Hamlin, Wynn and Bloom was published in Nature claiming to show that preverbal babies had what could be described as a ‘moral compass’ (not the authors’ own term in the article). From the abstract:
Here we show that 6- and 10-month-old infants take into account an individual's actions towards others in evaluating that individual as appealing or aversive: infants prefer an individual who helps another to one who hinders another, prefer a helping individual to a neutral individual, and prefer a neutral individual to a hindering individual. These findings constitute evidence that preverbal infants assess individuals on the basis of their behaviour towards others. This capacity may serve as the foundation for moral thought and action, and its early developmental emergence supports the view that social evaluation is a biological adaptation.