"So many tangles in life are ultimately hopeless that
we have no appropriate sword other than laughter," said Gordon Allport, an
American psychologist and one of the founders of the study of personality.
Scientists have studied the effects of mirthful laughter, positive thinking and
optimism on feelings of self-worth, mood disorders and depression for decades.
In The Antidote:
Happiness for People Who Can't Stand Positive Thinking, British author and Guardian feature writer Oliver
Burkeman takes issue with "the cult of optimism," the received wisdom that
phony smiles, jovial laughter and positive thinking are a surefire path to
happiness. Positive thinking is the problem, not the solution, Burkeman
argues. He believes people have come to trust that a "Don't worry. Be
happy" attitude toward life is the only route to contentment. People seem
convinced that if you have negative thoughts and see your own
limits, you cannot be happy. So to be happy, you must set out on a journey that
changes your mindset from negative and inhibited to enthusiastic, fervent and
animated. We are told to visualize our dreams and goals, eliminate the word
"impossible" from our vocabulary and put a big fabricated smile on
our faces. All of that, Burkeman says, can actually lead to unhappiness.
Diederik Stapel, also known as the ‘Lying Dutchman’, was the
protagonist of one of the nastiest cases of professional misconduct in
experimental psychology, amidst a recent surge of such cases. The committee in
charge of investigating the extent of his fraudulent conduct has recently
announced its conclusions. As was to be expected, the findings look very bad,
and they also implicate a number of his collaborators who, through negligence, unwittingly allowed him to engage in such
practices (article here, in Dutch).
Stapel now says he
feels ‘sadness and shame’, but in a surprising turn of events, he has also been writing a diary since the whole
commotion started, parts of which he is planning to publish in book form! (Article in Dutch) Is it
“a way to try to make money
off of his terrible decisions”, as suggested by Bryce Huebner (to whom I owe
the pointer to the article on Twitter)? Or is it a case of someone who is so
used to being in the spotlight that any form of public attention is welcome?
I don’t know what to make of it, but I suppose one shouldn't be too surprised by his penchant for poor judgment.
"Lucy in the Sky with Diamonds" was a product of the Beatles'
experimentation with psychedelic drugs is still a subject of great
debate among Beatles fans and music experts. But it was no secret that
the lyrics of many of the pop legend's famous tracks was inspired by
LSD, including "I am a Walrus," "Tomorrow Never Knows," and "What's The
Real Mary Jane." The Beatles' creating during a hallucinogenic trip is
not a rare case of acid-driven creation, invention or discovery. The
double helix structure of DNA occurred to geneticist and neuroscientist Francis Crick
while he was tripping on the Lucy drug and low-level tech Kari Mullis
hit on the idea behind Polymerase Chain Reaction (PCR), a now
widely-used technique for amplifying a single piece of DNA by a factor
of 100 billion, while cruising along the Pacific Coast Highway one night
in his car on LSD.
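As an aside on the arithmetic: PCR amplifies DNA by doubling it on each thermal cycle, so the 100-billion-fold figure corresponds to a concrete number of cycles. A quick back-of-the-envelope check (my own illustration, not from the original post):

```python
# PCR doubles the target DNA each thermal cycle, so amplification grows
# as 2**cycles. How many cycles give a ~100-billion-fold amplification?
import math

target_fold = 100e9
cycles = math.ceil(math.log2(target_fold))
print(cycles)        # 37
print(2 ** cycles)   # 137438953472, i.e. ~1.4e11-fold
```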
In a recent paper, the eminent psychologist of reasoning Philip Johnson-Laird says the following:
[T]he claim that naïve individuals can make deductions is controversial, because some logicians and some psychologists argue to the contrary (e.g., Oaksford & Chater, 2007). These arguments, however, make it much harder to understand how human beings were able to devise logic and mathematics if they were incapable of deductive reasoning beforehand.
This last claim strikes me as very odd, or at the very least as poorly formulated. (To be clear, I side with those, such as Oaksford and Chater, who think that deductive reasoning must be learned to be mastered and competently practiced by reasoners.) It looks like a doubtful inference to the best explanation: humans have in fact devised logic and mathematics, which are crucially based on the deductive method, so they must have been capable of deductive reasoning before that. Something like: birds had to have fully formed wings before they could fly – hum, I don’t think so… Instead, the wing analogy suggests that there must be some precursors to deductive reasoning skills in untrained reasoners, but the phylogeny of the deductive method (and to be clear, I’m speaking of cultural evolution here) would have been a gradual, self-feeding process.
The U.S. legal system gives preference to adult testimony in court
cases. In 2002, Thomas Junta stood trial for killing a man in a quarrel
at a Massachusetts hockey rink in 2000. Thomas's son, 12-year-old
Quinlan Junta, was a key defense witness for his father, but his testimony
did not convince the jury. Thomas was found guilty and sentenced to 6
to 10 years in state prison.
In his famous paper entitled "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information"
cognitive psychologist George A. Miller of Princeton University argued
that our working memory, our ability to hold information in our minds
for a few seconds, is limited to about seven items, plus or minus two.
That's fewer than the ten digits of an out-of-town American phone number. In light of this,
you might wonder what to say about cases of people with extreme memory
abilities. Chao Lu
holds the Guinness world record in reciting Pi, a record dating back to
2005. Lu recalled 67,890 digits of Pi in 24 hours and 4 minutes with an
error at the 67,891st digit, saying it was a 5, when it was actually a
0. How is it possible to retrieve this quantity of information
accurately through working memory? Is it magic? After talking to several
people working in memory sports, we found out that it's not.
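Part of the explanation is chunking: recoding a long sequence into a much smaller number of meaningful units, so the handful of working-memory slots is never exceeded. Here is a toy sketch of the idea (my own illustration; the chunk size and digits are arbitrary, and real memory athletes go further, mapping each chunk onto a vivid image placed along a memorized route, the so-called method of loci):

```python
# Toy illustration of chunking: a 20-digit string becomes 5 items
# of 4 digits each, comfortably within the 7 +/- 2 range.

def chunk(digits, size=4):
    """Group a digit string into fixed-size chunks."""
    return [digits[i:i + size] for i in range(0, len(digits), size)]

pi_digits = "14159265358979323846"  # first 20 decimals of pi
print(chunk(pi_digits))
# ['1415', '9265', '3589', '7932', '3846']
```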
You are preparing for your upcoming exam, reading through thousands of pages. Suddenly you realize that you forgot to pay attention to what you actually read. You were reading along but your thoughts were elsewhere. "Good God," you think. "Hours of wasted time." You turn back the pages and start over. This time you make sure you pay close attention.
Recent research, to appear in the journal PNAS, suggests that you may be wasting even more time by doing that. You don't need attention to comprehend what you read or to do math. In fact, you may not even need consciousness. The researchers, based at the Hebrew University of Jerusalem, used a technique known as Continuous Flash Suppression (CFS) to briefly suppress conscious perception of stimuli in some 300 research participants. In CFS, a series of rapidly changing images is presented to one eye, while a constant image is presented to the other. With this technique, the constant image is reportedly not consciously perceived until about 2 seconds have passed.
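To make the setup concrete, here is a minimal sketch of a CFS trial's timeline. It is purely illustrative: the frame rate, flicker rate and stimulus labels are my own assumptions, and a real CFS experiment requires dichoptic presentation hardware and a stimulus library.

```python
# Minimal sketch of a Continuous Flash Suppression (CFS) trial timeline.
# Illustrative only: numbers below are assumed, not taken from the study.

FRAME_RATE_HZ = 60     # assumed monitor refresh rate
MASK_FLICKER_HZ = 10   # Mondrian masks typically flash around 10 Hz
TRIAL_SECONDS = 2      # roughly how long suppression lasts before breakthrough

frames_per_mask = FRAME_RATE_HZ // MASK_FLICKER_HZ

for frame in range(FRAME_RATE_HZ * TRIAL_SECONDS):
    mask_index = frame // frames_per_mask  # a new random mask every few frames
    # Dominant eye: rapidly changing masks suppress awareness.
    dominant_eye = f"mask_{mask_index}"
    # Non-dominant eye: the same constant image on every frame.
    nondominant_eye = "constant_stimulus"
    if frame % FRAME_RATE_HZ == 0:
        print(f"t={frame / FRAME_RATE_HZ:.1f}s  "
              f"dominant={dominant_eye}  non-dominant={nondominant_eye}")
```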
At our lab in St. Louis we are working with several people with superhuman abilities, also known as “savant skills.” My research assistant Kristian Marlow and I are also currently finishing a book entitled The Superhuman Mind (under contract with an agency, see updates here). We are blogging about these cases almost daily over at Psychology Today. The following are four brief stories about some of the individuals we are working with.
Daniel Kahneman’s Thinking, Fast and Slow is making quite a splash (the other
day, I saw at Bristol airport that it is currently at the top of the bestseller list for non-fiction -- naturally, it still can’t compete with Fifty Shades of Grey). I haven’t read it
yet, but people whose opinion I hold in high esteem tell me that it has been
successful in striking the difficult balance between being accessible to a
wider audience and scientifically accurate (for the most part at least) at the
same time. The book summarizes research on cognitive and reasoning biases of
the last decades, a research program in which Kahneman himself has been a major
player. The conceptual cornerstone of the book is the (still) popular
distinction between System 1 and System 2, the two systems which allegedly run
in parallel underpinning all our cognitive processes, and which often conflict
with each other.
Now, as I’ve stated a few times before (here for example), I
am no fan of System 1/System 2 talk at all (not even of weaker versions, the
so-called dual-process theories of cognition), even though I agree that the
empirical findings on cognitive biases should be taken very seriously. (I also
agree that there is something to the idea of debiasing as suppressing automatic
processes.) So I was curious to see how Kahneman himself introduces the System
1/System 2 distinction, and took a quick look at the book (my husband was
reading it during our holiday of a few weeks ago, after having gotten it from
me as a birthday present – that’s what you get for having a nerdy wife). The
first thing that struck me is that, in footnote 20, he lists some of the
pioneers of dual-system theories, including Jonathan Evans, Steve Sloman and
Keith Stanovich, and adds: “I borrow the terms System 1 and System 2 from early writings
of Stanovich and West that greatly influenced my thinking” (he refers to their
2000 BBS article on individual differences in reasoning). But what is puzzling
is that Stanovich himself now overtly rejects the conceptualization
of the distinction in terms of systems, which unduly suggests reified entities,
and uses the process terminology
instead (as does Jonathan Evans).
But perhaps most striking is what Kahneman
says in the conclusion of the book:
A few days ago Eric linked
to a report
by Lori Gruen (Ethics and Animals blog here; Wesleyan University
website here) on the renewal
of cruel maternal deprivation research on primates. The comments on Eric's post
were such that we asked Lori to write a guest post for us. She graciously
agreed; the post follows: [UPDATED 1:40 pm 16 Oct. See below for contact info for Madison's Provost.]
“[…] steps in scientific progress are sometimes followed closely by outbursts of foolishness.
New discoveries have a way of exciting the imagination of the well-meaning and
misguided, who see theoretical potentialities in new knowledge that may prove
impossible to attain.” – Dr.
Sherwin Nuland, Yale School of Medicine
Does the system we have in place to curtail scientific
“outbursts of foolishness” and protect research subjects from “misguided”
scientific curiosity work?
There was no oversight system in place back in the
days when Harry Harlow’s experiments psychologically tormenting baby monkeys
were making news. Surely that sort of
horrible work in which infant primates are taken from their mothers to make
them crazy wouldn’t be approved of today. On my recent visit to the University
of Wisconsin I was shocked to learn otherwise.
The oversight committee chairs told me they have never rejected a
proposal. Not one.
And one of the protocols they did not reject is a renewal
of maternal deprivation research. Disturbingly, it has been approved by not
one, but two oversight committees. A
psychiatry professor who has a distinguished record of research on anxiety
disorders plans to separate more monkey babies from their mothers, leave them
with wire “surrogates” covered in cloth (a practice developed by Harlow) to
emulate “adverse early rearing conditions,” then pair them with another
maternally deprived infant after 3-6 weeks of being alone. The infants will then be exposed to fearful
conditions. The monkeys in this group,
and another group of young monkeys who will be reared with their mothers, will
then be killed and their brains examined. (The experimental protocol is here.)
The research in question is a new type of maternal deprivation research designed
to study anxiety by creating adverse early rearing conditions and then exposing
the maternally deprived young monkeys to a snake and other frightening stimuli. The monkeys will be killed after the
experiment is over and their brains will be studied. I believe this experiment
is unethical and I also think it violates the spirit, if not the promulgated
regulations, of the Animal Welfare Act which explicitly requires that the
psychological well-being of primates be promoted (not intentionally destroyed).--Lori Gruen
In 2007, a study by Hamlin, Wynn and
Bloom was published in Nature, claiming to show that preverbal babies have what
could be described as a ‘moral compass’ (not the authors’ own term in the
article). From the abstract:
Here we show that 6- and 10-month-old infants take
into account an individual's actions towards others in evaluating that
individual as appealing or aversive: infants prefer an individual who helps
another to one who hinders another, prefer a helping individual to a neutral
individual, and prefer a neutral individual to a hindering individual. These
findings constitute evidence that preverbal infants assess individuals on the
basis of their behaviour towards others. This capacity may serve as the
foundation for moral thought and action, and its early developmental emergence
supports the view that social evaluation is a biological adaptation.
Over the last week, there have been quite a few blog posts prompted
by Tim Williamson’s recent critique of experimental philosophy in his review of
J. Alexander’s Experimental Philosophy.
In particular, at NewAPPS Eric Schliesser and Berit Brogaard shared some of their
views on the debate. Here, however, I want to discuss a post by Eric
Schwitzgebel at Splintered Mind, as I think he identifies an important and
overlooked component of the whole debate. Eric puts forward the distinction
between X-Phi in a narrow and in a wide sense. The narrow conception covers
the work canonically identified as "experimental philosophy": it surveys
ordinary people's judgments (or "intuitions") about philosophical
concepts, and it does so by soliciting people's responses to questions about
hypothetical scenarios. The wide conception is more difficult to define, and Eric
basically offers a definition by exclusion:
In this broad sense, philosophers who do empirical work aimed at addressing traditionally philosophical questions are also experimental philosophers, even if they don't survey people about their intuitions.
(I’ve been through a ridiculously busy period of work-related traveling and thus scarce blogging, and in the next four weeks I’m supposed to be on holiday, so again scarce blogging. But there is still one topic I really want to discuss before the summer break, so here it is.)
Here are a few brain-teasers for your amusement on this Monday morning/afternoon (depending on your time zone):
(1) A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost? _____ cents
(2) If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets? _____ minutes
(3) In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake? _____ days
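If you want to check your answers after trying the problems, here is a minimal sketch encoding the arithmetic (the intuitive but wrong answers, famously, are 10 cents, 100 minutes and 24 days):

```python
# (1) ball + (ball + 100) = 110 cents, so the ball costs 5 cents, not 10.
ball = (110 - 100) / 2
print(ball)  # 5.0 cents

# (2) 5 machines make 5 widgets in 5 minutes, i.e. one widget per machine
# per 5 minutes; 100 machines therefore make 100 widgets in 5 minutes.
minutes = 5
print(minutes)  # 5 minutes, not 100

# (3) The patch doubles daily and covers the lake on day 48, so it covered
# half the lake the day before: day 47, not 24.
print(48 - 1)  # 47 days
```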
Just now on NPR, there was a discussion about toddlers and iPads that could really have used a Heideggerian intervention. The issue was, more or less: what is happening when you give a 2-year-old an iPad and they get completely absorbed for 5 hours straight? Is this good for them or not? And does it help them learn what they need to learn in order to mature into smart, productive kids and adults? NPR seems to love this stuff; there’s a shorter article on the same topic here.
A range of experts was consulted, most of whom said that we don’t have enough (empirical) research to answer these questions yet, but that we shouldn’t panic – we just need to make sure that kids get a balance of screen time and face-to-face interaction with other people. But the question that started the whole discussion was a father’s question about what is going on for his son when he “zones out” in front of the iPad. This question remained unaddressed, as far as I could tell from my own zoning in and out of the radio discussion. But isn’t this basically a matter of Benommenheit, or captivation, literally “being taken”: being absorbed in an object to the point where everything else fades away?
I have written about our case study of a person with acquired synesthesia and savant syndrome in an earlier post on this blog. To make a long story short, JP was hit on the head in a mugging incident and acquired traumatic brain injury.
After the incident he started experiencing the world in terms of geometrical figures. He had also lost the ability to see smooth boundaries and smooth motion. He sees objects as separated from their surroundings by tiny tangent and secant lines. He experiences motion in picture frames. When objects are moving relative to him, or he is moving relative to objects, three-dimensional geometrical figures form before his eyes.
Right after the incident he started drawing some of these images by hand. They turned into beautiful pieces of art that have received several awards. After some elementary math training following the accident, JP also experienced automatic visual imagery in response to certain mathematical formulas.
Ingrid Robeyns, professor of practical philosophy at the Erasmus University in Rotterdam, is known among other things for her work on the capability approach (see her SEP entry on the topic, and her review of Martha Nussbaum's Creating Capabilities), and as a blogger at the interdisciplinary blog Crooked Timber. This week, she will be running a series of posts on autism at Crooked Timber -- the first one is here, the second one here. Ingrid is herself the mother of an autistic child, and the combination of philosophical insight with her first-person experience is bound to yield a very interesting perspective on the topic.
Autism is a topic with many important philosophical implications, ranging from theories of cognition and philosophy of mind to ethics. So I for one look forward to the upcoming posts, and I suspect that many NewAPPS readers will be equally interested. Go check it out; in fact, today is World Autism Awareness Day, so today is as good a day as any!
With the growth of controversies conducted through blogs, the actually existing norms in various scientific disciplines can sometimes be revealed (perhaps unintentionally). In this blistering post, Yale psychologist John A. Bargh, Ph.D., criticizes a study that had not replicated his earlier results. Here I ignore the substance of his charges (for useful criticism see here). In his criticism he vehemently attacks the online journal PLoS ONE. But he follows with a most revealing, self-undermining comment: "If I'd been asked to review it (oddly for an article that purported to fail to replicate one of my past studies, I wasn't) I could have pointed out at that time the technical flaws." The parenthesis teaches us that the (once-standard?) norm among the peer-reviewed journals in his niche is that if one is targeted (and high status?) one can expect to be the referee. Perhaps the vehemence of the little spat is indicative that an old-boys' network is on the way out? [Hat-tips to Bryce Huebner and Antti Kauppinen on Facebook.]
Many readers will have already seen Jess Prinz’s recent blog post criticizing a psychological study defending the Male Warrior hypothesis, according to which men are evolved to seek out violent conflicts in order to get women. He now has a reply to the objections raised by two other bloggers, one of them one of the authors of the study (H/T Feminist Philosophers). I’m not sure this is appropriate language for blogging, but I just can’t help myself: Prinz is really kicking ass, there is no better way to describe it. Some excerpts:
One of the subjects I work with, JP, has acquired synesthesia and acquired savant syndrome. This happened as a result of a brutal assault in 2002, during which he was kicked and hit on the head. He was subsequently diagnosed with a bleeding kidney and an unspecified head injury. What the doctors didn't know was that JP no longer saw the world the way he used to. Objects suddenly did not have smooth boundaries. Things no longer moved smoothly. Motion took place in picture frames. It looked like someone paused and unpaused the flow of the world very rapidly. Even more amazing: JP was suddenly able to see vivid fractal images of objects with a fractal structure (such as broccoli).
JP's response to his new way of seeing the world was to withdraw from it. He spent the following three years in his apartment and refused to leave unless it was strictly necessary. After three years in complete isolation JP figured that he would try to draw what he saw, so he could make people understand him. He started drawing. And he continued. He drew and drew and drew, using only a pencil, a ruler and a compass. The results were beautiful hand-drawn fractal-like images. JP didn't know then that he was the first in the world to hand-draw mathematical fractals and that he would later win prizes for his drawings. He didn't even know what he was drawing, except that it was what he saw.
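For readers curious about what "mathematical fractals" involve, here is a generic sketch of the self-similar recursion that generates one, the classic Koch curve. This is my own illustration of fractal construction in general, not a reconstruction of JP's method or drawings.

```python
# Generic Koch-curve construction: each segment is recursively replaced
# by four smaller copies of itself, producing a self-similar fractal.
import math

def koch(p1, p2, depth):
    """Recursively replace a segment with the four Koch sub-segments."""
    if depth == 0:
        return [p1, p2]
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = (x2 - x1) / 3, (y2 - y1) / 3
    a = (x1 + dx, y1 + dy)          # point one third along the segment
    b = (x1 + 2 * dx, y1 + 2 * dy)  # point two thirds along
    # Apex of the equilateral "bump" between a and b.
    angle = math.atan2(dy, dx) + math.pi / 3
    length = math.hypot(dx, dy)
    peak = (a[0] + length * math.cos(angle), a[1] + length * math.sin(angle))
    return (koch(p1, a, depth - 1)[:-1] + koch(a, peak, depth - 1)[:-1]
            + koch(peak, b, depth - 1)[:-1] + koch(b, p2, depth - 1))

points = koch((0.0, 0.0), (1.0, 0.0), depth=3)
print(len(points))  # 4**3 = 64 segments -> 65 points on the curve
```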
Almost a year ago I wrote a post on the dubious scientific status of psychoanalysis. One might think that this is an old and dated Popperian question, but in view of the influential position still occupied by psychoanalysis at least in some quarters, it remains a topical issue. In effect, via the Feminist Philosophers I came across this NYT article on a documentary which heavily criticizes psychoanalytic approaches to autism in France.
According to the article, psychoanalysis remains the standard approach to autism there, but not for particularly good reasons. In fact, the results seem to be quite discouraging (for example, a much smaller percentage of children with an autism diagnosis are sufficiently autonomous to be able to attend school in France than in e.g. the UK), and yet the grip of psychoanalysis remains strong – needless to say, arguably to the disadvantage of the children in question and their caregivers.
In the Feminist Philosophers’ post there is also a link to the documentary; it is well worth watching, but also quite depressing.
Another well-worn example bites the dust? You remember that famous study in which participants primed with words connoting old age walked more slowly when leaving the lab.
A new study by the Belgian team of Stéphane Doyen, Olivier Klein, Cora-Lise Pichon, and Axel Cleeremans not only failed to replicate the effect, but also appeared to show that the effect observed in the original study was owing to the experimenters’ expectations.
This has been going around the internet over the last couple of days, but for those who have not seen it yet: The Nation has an excellent overview article of the Hauser affair, by distinguished psychology professor Charles Gross. Let me quote some of the concluding paragraphs, which discuss in particular the damaging effect of the affair for the whole field of animal cognition, and in particular of the secretive way in which the investigations have been handled.
As mentioned before, recently I read Cordelia Fine’s A Mind of its Own, a highly informative and accessible account of some of the traits of human psychology, as documented by empirical research, indicating that our cognitive and emotional apparatus is highly unreliable. From the introduction:
[…] the truth of the matter – as revealed by the quite extraordinary and fascinating research described in this book – is that your unscrupulous brain is entirely undeserving of your confidence. It has some shifty habits that leave the truth distorted and disguised. (p. 2)
The rhetoric is quite (too?) strong, and one may raise an eyebrow or two at the conflation of brain with human cognition and psychology generally speaking. Nevertheless, the evidence presented by Fine is compelling and unsettling. The chapters have the following titles: the vain brain, the emotional brain, the immoral brain, the deluded brain, the pigheaded brain, the secretive brain, the weak-willed brain, the bigoted brain, and finally the vulnerable brain. (You get the picture…) I highly recommend the book, especially for philosophers who still hold on to the idea that human cognition is for the most part reliable and truth-conducive.
As many of you have probably already seen, Rebecca Kukla has an excellent post up at Leiter’s blog on the effects of implicit biases, specifically as affecting hiring practices. However, as she is done with her job of guest-blogger over there, the post is not open for comments, and with Rebecca’s agreement, I figured it might be useful to have a discussion here.
Rebecca is making very good points about the effects of implicit biases in hiring practices, and in particular how hard (in fact, nearly impossible) it is to shield yourself from them if you are on the decision-making side of things. Now, as it turns out, one of the books I read over my vacation last week was Cordelia Fine’s A Mind of its Own (as mentioned before, co-blogger John Protevi and I are big fans of her work). One of the chapters of the book is ‘The Bigoted Brain’, and she discusses precisely some of the findings from experimental psychology (on the ways implicit biases operate) that Rebecca refers to. As she mentions, one of the surprising features of implicit biases is that, if you actively try to suppress them, they in fact re-emerge later on with additional strength. (In fact, it is not so surprising given that suppressing specific thoughts is likely to have a priming effect.) Here’s an excerpt from the book:
I’m just back from an extremely enjoyable family vacation in sunny Fuerteventura, which also means that I am swamped by a zillion work-related things that need to be attended to asap. I also want to resume blogging, and have a few posts already lined up in my head (in particular, one on the ‘climate for women’ discussion which has re-emerged), but where do I find time for all this? (One almost regrets going on holiday and forgetting about it all for a while, given the harsh conditions upon return!)
But anyway, today I came across two interesting links, via the New Scientist twitter feed, and thought it might be a good topic to resume blogging. As it turns out, Steven Pinker’s most recent interest is the history of violence, which he takes to be a privileged window for his long-standing interest in human nature (broadly construed). In his new book The Better Angels of our Nature, he claims that there has been a significant decrease in homicides and violent deaths over the centuries: ‘Humans are less violent than ever’. This becomes particularly clear if the death tolls of historical occurrences of horror are estimated on the basis of the human population at the time, and what the proportion would mean in terms of the current human population in the world. This was done by finding the per-capita death rate at the midpoint of the event's range of years, based on population estimates from McEvedy and Jones.
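Here is a minimal sketch of that per-capita scaling calculation, with made-up numbers rather than figures from the book:

```python
def equivalent_death_toll(deaths, population_then, population_now):
    """Scale a historical death toll by the per-capita rate at the time."""
    return (deaths / population_then) * population_now

# Hypothetical example: 1 million deaths when the world population was
# 400 million, re-expressed against a present population of 7 billion.
print(equivalent_death_toll(1_000_000, 400_000_000, 7_000_000_000))
# 17500000.0 -- the "equivalent" of 17.5 million deaths today
```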
A few weeks ago, Helen reported on a wonderful conversation she had with her 7 year-old daughter on the ontological status of numbers. Helen also remarked that the children of scientists and researchers are often the subject of all kinds of ‘experiments’ unbeknownst to them. I must confess that I’ve performed a wide range of cognitive ‘tests’ on my kids, but before social services are called I can assure you all that they greatly enjoyed it and saw it all as a fun game. I have in particular done the false belief task with both, at different ages, and can report that they fall squarely within the expected results!
Now, as some readers may recall, I am working quite extensively on reasoning, deductive reasoning in particular, both from a philosophical and a psychological perspective. So I’ve been through most of the voluminous literature on the psychology of reasoning (my own account of the findings can be found in chapter 4 of my forthcoming book, draft available here), and as is well known, in experiments with deductive tasks, participants overwhelmingly fail to give the ‘right’ response from the point of view of the canons of deduction as traditionally construed. And yet, these studies were almost all conducted with participants having a fairly homogeneous educational background, namely undergraduates of North-American and Western European universities. My hypothesis is that even the modicum of ‘logical competence’ that does emerge from the experiments is by and large a product of the formal education they received. To test this hypothesis, one would have to isolate the education component and thus undertake the same or similar experiments with participants with a very different educational background, in particular unschooled subjects. Unfortunately, very few studies of this kind have been conducted, but the ones which have do suggest that unschooled participants tend to engage with the task materials in *very* different ways.
Every reasonably neurologically healthy person has some fear of public speaking. How much varies hugely from individual to individual. But I suspect that it is very common among philosophers. Why? Because the majority of people who enjoy receiving a lousy salary in return for an insane amount of work have got to have some very good reasons. One good reason, I believe, is that they enjoy working in the comfort of their own home and enjoy the solitude and the control they have over their own time and the direction of their work. They are good old-fashioned introverts, who don't truly enjoy large assemblies of people but who may have adjusted to them and who may even come across as extroverts on a good day. Do introverts fear public speaking more than extroverts? I don't know. But I believe that they do. If you dislike large groups of people or prefer your own company to that of other people, it is not likely that you are by nature super-comfortable speaking to a large group of people. That said, I don't want to rule out that some people went into the profession because of the possibility of fame and attention.
As for my own case, I started out with an extreme fear of public speaking. I recall taking a large lecture class in molecular biology in my first year of college. Despite it being a large lecture class, we were all expected to do a presentation. I hadn't spoken in front of a lot of people before, so I had no idea that I had a fear of public speaking. I was assigned a topic, and I over-prepared. I made about 50 slides. This was before the age of PowerPoint, so my slides were the old-fashioned transparent kind that you put on an overhead projector. They were all lying in my lap in the correct order as I sat in the lecture hall waiting for the professor to call my name. I felt my heart pumping fast and hard even before he called my name. When he did, I stumbled down the steps to the front of the lecture hall. My hands were shaking. My legs felt like rubber. Then, as I was about to put the first slide on the overhead projector, I dropped all the slides on the floor. The 200 students in the lecture hall were not making a single noise. It was so quiet that I could hear my heart pound. I had no idea what to do. Like an idiot, I hadn't numbered the slides, and now they were all lying in a big mess on the floor. No one said anything, not even the professor. I collected the slides from the floor into a big messy pile in my arms, mumbled that I just couldn't do this, and went back to my seat. No one said anything. The professor started lecturing like nothing had happened. I felt terrible.
In several of my posts, I have mentioned the book on formal languages that I've been working on for the last few years. I now have a draft of the book ready for (moderate!) public consumption, available here. The two final chapters are still missing, but the draft is already something of a coherent whole, or so I hope.
Many people have kindly expressed their interest in checking out the material, hence my decision to make it available online at this point, despite the fact that it is still a somewhat rough draft (references are still a mess). Needless to say, comments are always welcome :)
A new paper by Nieuwenhuis, Forstmann, & Wagenmakers in Nature Neuroscience argues that roughly half of all papers in five top neuroscience journals assert differences between the effects of interventions when the most they are entitled to assert is that one intervention had a statistically significant effect and another did not. Their argument is explained very well in a Guardian article by Ben Goldacre. The authors write in their introduction:
Are all these articles wrong about their main conclusions? We do not think so. First, we counted any paper containing at least one erroneous analysis of an interaction. For a given paper, the main conclusions may not depend on the erroneous analysis. Second, in roughly one third of the error cases, we were convinced that the critical, but missing, interaction effect would have been statistically significant (consistent with the researchers’ claim), either because there was an enormous difference between the two effect sizes or because the reported methodological information allowed us to determine the approximate significance level. Nonetheless, in roughly two thirds of the error cases, the error may have had serious consequences.
So the headline should not be: “Half of Neuroscience Papers are Wrong”, but rather “Half of Neuroscience Papers are Insufficiently Well Argued/One-Third Need Fixing”. We’ll see what the headline-writers do…
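To make the statistical point concrete, here is a small simulation of the error (my own illustration with made-up data, not from the paper): declaring that two effects differ because one is significant and the other is not, instead of testing the difference, i.e. the interaction, directly.

```python
# Toy simulation of the "erroneous analysis of interactions": one effect
# comes out significant and the other does not, yet the two effects may
# not differ significantly from each other.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 25
effect_a = rng.normal(0.6, 1.0, n)  # condition A: treatment-minus-control scores
effect_b = rng.normal(0.2, 1.0, n)  # condition B: a somewhat smaller effect

# The common but incomplete analysis: test each effect against zero.
# One test may be significant while the other is not.
print(stats.ttest_1samp(effect_a, 0.0))
print(stats.ttest_1samp(effect_b, 0.0))

# The analysis the paper calls for: test whether the effects differ from
# EACH OTHER. This is often non-significant even when the two separate
# tests give a significant/non-significant split.
print(stats.ttest_ind(effect_a, effect_b))
```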
A sidelight: the authors, whose affiliations are Dutch, use “intuition” in more-or-less the philosopher’s sense. Is that use diffusing into the world outside philosophy?