A new paper by Nieuwenhuis, Forstmann, and Wagenmakers in Nature Neuroscience argues that roughly half of the relevant papers in five top neuroscience journals assert a difference between the effects of two interventions when the most they are entitled to assert is that one intervention had a statistically significant effect and the other did not. Their argument is explained very well in a Guardian article by Ben Goldacre. The authors write in their introduction:
Are all these articles wrong about their main conclusions? We do not think so. First, we counted any paper containing at least one erroneous analysis of an interaction. For a given paper, the main conclusions may not depend on the erroneous analysis. Second, in roughly one third of the error cases, we were convinced that the critical, but missing, interaction effect would have been statistically significant (consistent with the researchers’ claim), either because there was an enormous difference between the two effect sizes or because the reported methodological information allowed us to determine the approximate significance level. Nonetheless, in roughly two thirds of the error cases, the error may have had serious consequences.
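The error the authors describe can be made concrete with a small sketch. The numbers below are invented for illustration; the point, which is the paper's, is that "A is significant and B is not" does not license "A differs from B" — that claim requires a direct test of the difference (here, a standard z-comparison of two independent estimates):

```python
import math

def two_sided_p(z: float) -> float:
    """Two-sided p-value for a z statistic under the standard normal."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# Hypothetical effect estimates and standard errors for two interventions.
effect_a, se_a = 0.25, 0.10   # intervention A
effect_b, se_b = 0.15, 0.10   # intervention B

p_a = two_sided_p(effect_a / se_a)   # about 0.012: A is "significant"
p_b = two_sided_p(effect_b / se_b)   # about 0.134: B is not

# The correct test of "A differs from B" uses the difference of the two
# estimates and the standard error of that difference, not the two
# separate p-values.
se_diff = math.sqrt(se_a**2 + se_b**2)
p_diff = two_sided_p((effect_a - effect_b) / se_diff)
# about 0.48: no evidence at all that the two effects differ
```

So both faulty conclusions and the sound one can come out of the very same data: each separate test behaves as reported, yet the interaction test finds nothing.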
So the headline should not be: “Half of Neuroscience Papers are Wrong”, but rather “Half of Neuroscience Papers are Insufficiently Well Argued/One-Third Need Fixing”. We’ll see what the headline-writers do…
A sidelight: the authors, whose affiliations are Dutch, use “intuition” in more-or-less the philosopher’s sense. Is that use diffusing into the world outside philosophy?