My husband recently had lunch with the editor-in-chief of an international, peer-reviewed journal (not in philosophy), who says it's getting increasingly hard to find referees. When he began as editor, he needed to contact about 3 to 4 people to find 2 referees. Now he needs to contact on average *16* people to get 2 (or, increasingly, only 1) referee(s). One result is that authors for this journal now have to wait substantially longer for a decision, even though the system is now fully automated with an online submission system. Annoyingly, there is also a growing tendency for referees to back out of commitments they made weeks earlier, which again lengthens waiting times. If this trend generalizes, referees are becoming harder to find across the board. Perhaps motivation to referee is decreasing as faculty are confronted with higher teaching loads, more pressure to publish, etc. Also, while one may feel some obligation to accept referee invitations, refereeing has limited payoffs. It doesn't count (or barely counts) on the CV, and one is not acknowledged for having refereed, except as one name in a long list of referees published in the journal (though there are exceptions: I once refereed for a cognitive science journal that publishes the names of the reviewers together with the paper if it's accepted, and reviewers even get the opportunity to comment briefly). Eric has suggested before that publishing the names of referees of accepted papers would help as well (it would also address concerns like potential cronyism in refereeing, and improve the quality of refereeing).
According to those who frequently referee, the quality of manuscripts is also going down (probably because the pressure to publish early is rising). For junior authors, submitting to a journal is often the first time an expert in the field other than their advisor looks at the paper. Papers submitted to journals are often not sufficiently pre-peer-reviewed (by colleagues, commenters at conferences, etc.). Judging from the acknowledgment sections of papers, it seems that senior, well-known people get more, and higher-quality, informal peer review, e.g., "I am very grateful to [famous philosophers X, Y, Z] for comments on this paper, and for comments from the audience at [conference venues A, B, C, D and E]". Famous folk have better informal peer-review networks and typically also roomier travel budgets to present their papers at conferences and workshops than, say, an assistant professor from an unknown community college.
One possible way to address the problems of finding referees and of improving the quality of journal submissions is to provide more venues for pre-peer review (i.e., peer review prior to journal submission). In most fields of physics, and also in mathematics, there is arXiv, where authors can self-archive papers prior to journal submission. There is a system of moderators and endorsers. Although this system is not without criticism (for one thing, it has made double-blind peer review almost impossible), it has benefits in terms of pre-peer review: errors in mathematical proofs can be detected, such as the one in Arenstorf's purported proof of the twin prime conjecture, which was retracted after several mathematicians looked at the proof on arXiv and pointed out a mistake, e.g., "Unfortunately, I have found a serious error in Arenstorf's paper. Lemma 8, page 35, is clearly false, and it is fundamental. It is possible that the proof can be repaired, but that is non-trivial." Similarly, Penrose's claim of alleged pre-Big Bang activity was severely criticized by the astrophysical community following the publication of the paper on arXiv. However, one clear limitation of this type of public peer review is that it seems to be concentrated on spectacular claims.
Another model has recently been proposed by three Finnish ecologists, who aim to provide journals with already peer-reviewed papers. From a short news blurb in the current issue of Science: "the Web site accepts paper submissions—and matches them with potential reviewers: Scientists with a potentially publishable paper can upload it to the Web site, while other members with relevant expertise, alerted by keywords in the papers, can provide reviews that scientific journals can use to decide whether to offer to publish the work. Janne-Tuomas Seppänen, a postdoc at University of Jyväskylä, came up with the idea for Peerage of Science in 2010. Scientists receive one credit for every review they finish, whereas uploading a manuscript costs two credits divided by the number of authors. The author who uploads the paper must have a positive balance. “This formalises an unwritten rule: He who wants his manuscripts reviewed, reviews other manuscripts in return,” says Seppänen."
This seems like a nice model, since it is constructive rather than a pure gate-keeping exercise: it helps authors improve their papers before submitting them to journals, and it motivates people to referee (making the model of indirect reciprocity more direct: you have to referee if you want to be refereed). One could easily implement such a system, for instance in PhilPapers.
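To make the credit mechanics from the Science blurb concrete, here is a minimal sketch in Python of how such a ledger might work, on one reading of the rule: a finished review earns one credit, uploading a manuscript charges the uploading author two credits divided by the number of authors, and uploading requires a positive balance. The class and method names are my own invention for illustration, not anything from Peerage of Science or PhilPapers.

```python
# A hypothetical sketch of the credit-ledger rule described in the blurb.
# Not the actual Peerage of Science implementation.

class CreditLedger:
    """Tracks review credits per member."""

    SUBMISSION_COST = 2.0  # total cost of uploading one manuscript

    def __init__(self):
        self.balances = {}  # member name -> credit balance

    def complete_review(self, reviewer):
        """A finished review earns the reviewer one credit."""
        self.balances[reviewer] = self.balances.get(reviewer, 0.0) + 1.0

    def submit_manuscript(self, uploading_author, n_authors):
        """Uploading costs two credits divided by the number of authors;
        the uploading author must hold a positive balance."""
        if self.balances.get(uploading_author, 0.0) <= 0:
            raise ValueError(f"{uploading_author} needs a positive balance to submit")
        self.balances[uploading_author] -= self.SUBMISSION_COST / n_authors


# Example: a member earns a credit by reviewing, then uploads a three-author paper.
ledger = CreditLedger()
ledger.complete_review("Alice")
ledger.submit_manuscript("Alice", n_authors=3)
print(ledger.balances)  # {'Alice': 0.333...}
```

The nice property of this arrangement is visible in the arithmetic: a single-author paper costs two reviews' worth of credit, so the system as a whole receives at least as many reviews as it hands out, while multi-author papers spread the cost.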