I’ve written a few posts recently questioning the whole idea of anonymous peer review as a reliable guide to quality – in philosophy as well as elsewhere. In other disciplines, there have been numerous recent cases of ‘false positives’, i.e. papers that made it through the peer-review process but were then discovered to be fundamentally flawed after publication (leading to a very large number of retractions).
The issue with false positives is well known, but as I’ve suggested in some of my previous posts, the issue of false negatives is equally serious, or perhaps even more serious, and yet it tends to be under-appreciated. A recent piece by JP de Ruiter, a psycholinguist at the University of Bielefeld, articulates very nicely why it is serious, and why it remains essentially invisible.
The two main goals of a review system are to minimize both the number of bad studies that are accepted for publication and the number of good studies that are rejected for publication. Borrowing terminology of signal detection theory, let’s call these false positives and false negatives respectively.
It is often implicitly assumed that minimizing the number of false positives is the primary goal of APR. However, signal detection theory tells us that reducing the number of false positives inevitably leads to an increase in the rate of false negatives. I want to draw attention here to the fact that the cost of false negatives is both invisible and potentially very high. It is invisible, obviously, because we never get to see the good work that was rejected for the wrong reasons. And the cost is high, because it removes not only good papers from our scientific discourse, but also entire scientists. […] The inherent conservatism in APR means that people with new, original approaches to old problems run the risk of being shut out, humiliated, and consequently chased away from academia. In the short term, this is to the advantage of the established scientists who do not like their work to be challenged. In the long run, this is obviously very damaging for science. This is especially true of the many journals that will only accept papers that receive unanimously positive reviews. These journals are not facilitating scientific progress, because work with even the faintest hint of controversy is almost automatically rejected.
With all this in mind, it is somewhat surprising that APR also fails to keep out many obviously bad papers.
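The trade-off de Ruiter invokes from signal detection theory can be made concrete with a toy numerical illustration (my own, with made-up parameters, not from his piece): suppose reviewers perceive a noisy ‘quality score’ for each paper, and a journal accepts any paper whose score exceeds some threshold. Raising the threshold drives the false-positive rate down but the false-negative rate up.

```python
# Toy model (hypothetical parameters): perceived quality of bad papers
# ~ N(0, 1), of good papers ~ N(1.5, 1). A journal accepts a paper when
# its perceived score exceeds a threshold t.
from math import erf, sqrt

def normal_cdf(x, mu=0.0, sigma=1.0):
    """P(X <= x) for X ~ N(mu, sigma^2)."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def rates(threshold, mu_bad=0.0, mu_good=1.5):
    # False positive: a bad paper scores above the threshold and is accepted.
    fp = 1.0 - normal_cdf(threshold, mu_bad)
    # False negative: a good paper scores below the threshold and is rejected.
    fn = normal_cdf(threshold, mu_good)
    return fp, fn

for t in (0.5, 1.0, 1.5, 2.0):
    fp, fn = rates(t)
    print(f"threshold={t:.1f}  false positives={fp:.2f}  false negatives={fn:.2f}")
```

As long as the two distributions overlap – and with noisy reviewer judgments they always will – no threshold eliminates both error types at once; a stricter journal simply trades visible false positives for invisible false negatives.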
In other words, it seems that the system is producing false positives as well as false negatives, for reasons that are inherent to the system itself. (The piece proposes that peer review should not be anonymous in the referee-to-author direction, i.e. the author should be able to know who the referee was.) What’s worse, the false negatives go unnoticed, and so it is likely that a lot of good research is being ‘lost’ in this way while nobody is paying attention (not to mention the careers of researchers with great potential).
Now, as any journal editor will tell you, it is exceedingly hard to get people to accept referee assignments these days: everybody is overworked, and refereeing is the thankless, annoying job that we all try to avoid as much as possible. If, moreover, the reports were not anonymous (as proposed by the article I quote above), then presumably it would become even harder to find willing referees. One might think that academics should recognize that acting as a diligent referee from time to time simply comes with the job, on a par with other thankless activities such as serving on committees, filling in for a sick colleague, etc.: it would simply be a matter of collegiality. After all, we need good-willed referees to read the papers that we ourselves submit to journals. And yet, understandably, refereeing is the first thing we try to get out of when other commitments and obligations pile up on our desks.
Hence, as has been acknowledged by many people before me, finding good referees – ones who write fair, informative reports in a timely fashion – is the Achilles’ heel of the whole system. How can we create incentives so that people feel more inclined to take up referee assignments, and to do them conscientiously? In my experience, both as an editor and as a referee, one usually accepts such assignments out of respect for the editor (which is why having many friends in the profession is an important characteristic of a good editor!), and sometimes because the paper seems interesting and one might learn something from it. And yet, this does not seem to be doing the trick, or not enough.
At the Review of Symbolic Logic, where I am one of the editors, we recently started working with an online editorial system (yes, you heard me: recently). So far the system is still not fully reliable, but it has a number of functions that I am just starting to discover. Today I discovered that I can see all current projects being handled by all editors, as well as the authors of the submissions. (I don’t think this is ideal: I much prefer the system of the other journal of which I am an editor, Ergo, which operates with triple anonymity.) Anyway, I noticed that among the submissions currently being processed, there are many by people whom I recently asked to referee papers for the RSL, and who declined the assignment! (Some do not even bother replying to the request, which is even more annoying.) It seems a bit too rich, doesn’t it, to expect a journal to process your own submission – i.e. finding suitable referees etc. – while not being willing to act as a referee for the journal yourself…
So here’s an idea: journals could adopt a policy such that, by submitting an article to a given journal, one thereby commits to accepting at least two referee requests from the same journal within a reasonable amount of time (say, 12 months). How exactly to enforce the principle, I am not sure: the journal might refrain from publishing the paper until the author has done their refereeing ‘pay-back duty’ (but this only works in the case of accepted papers), or the journal might not accept new submissions from the same author in the meantime, etc. The idea would simply be that each submission generates the need for (usually) at least two referee reports, and so, to keep the balance, for each paper one submits one should write at least two referee reports (for the same journal or for different journals, though the latter would make the logistics much more complicated). Of course, very junior people will likely not be well placed to act as referees in the same way, so the system is not perfectly balanced; but it would at least introduce a certain level of accountability for submitting a paper. Every submission entails quite some work both for the handling editor and for the referees the editor will call upon, so it would be only natural for authors to collectively share some of the burden.
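The bookkeeping behind such a tit-for-tat policy is simple enough to sketch. The following is a minimal illustration of my own (no journal actually runs this, and the names and numbers are hypothetical): each submission puts the author two reports in debt, each completed report pays one off, and an editorial system could check the balance before accepting a new submission.

```python
# Minimal sketch of a referee-credit ledger (hypothetical, for illustration):
# each submission creates a debt of two referee reports; each completed
# report pays one back.
from dataclasses import dataclass, field

@dataclass
class RefereeLedger:
    reports_owed_per_submission: int = 2
    balances: dict = field(default_factory=dict)  # person -> reports owed

    def record_submission(self, author: str) -> None:
        # Submitting a paper increases the author's debt by two reports.
        owed = self.balances.get(author, 0)
        self.balances[author] = owed + self.reports_owed_per_submission

    def record_report(self, referee: str) -> None:
        # Completing a referee report pays off one unit of debt.
        self.balances[referee] = self.balances.get(referee, 0) - 1

    def in_good_standing(self, person: str) -> bool:
        # A journal might, e.g., decline new submissions from authors
        # who still owe reports.
        return self.balances.get(person, 0) <= 0

ledger = RefereeLedger()
ledger.record_submission("Dr. A")    # Dr. A now owes 2 reports
ledger.record_report("Dr. A")        # pays one back, still owes 1
print(ledger.in_good_standing("Dr. A"))  # prints False
```

A real system would of course need the exemptions mentioned above (e.g. for very junior authors) and some handling of declined or overdue requests, but the core ledger is no more complicated than this.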
What do readers think? Should we consider installing systems of formal incentives such as the tit-for-tat approach I am proposing here? Other ideas on how to ensure that everybody does their part so that the system runs more smoothly?