On the Nature website, Richard Van Noorden reports that French computer scientist Cyril Labbé has discovered over 120 computer-generated papers published in conference proceedings between 2008 and 2013. Over 100 of these papers were published by the Institute of Electrical and Electronics Engineers (IEEE), and 16 others appeared in Springer publications.
The papers were composed using SCIgen, which requires only author names as input and automatically generates random papers that look like computer science but are actually meaningless. Labbé has written a program that can recognize papers generated by SCIgen. (The program compares the vocabulary of a text to that of a reference corpus; in particular, it measures the inter-textual distance as the proportion of word-tokens shared by two texts. For details of the method, see Labbé's 2012 paper in Scientometrics.)
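To make the idea concrete, here is a toy sketch of a token-overlap distance in the spirit described above. This is an illustration only, not Labbé's actual implementation: the function name, tokenization, and threshold idea are all my own assumptions.

```python
from collections import Counter

def intertextual_distance(text_a: str, text_b: str) -> float:
    """Toy inter-textual distance: 1 minus the proportion of
    word-tokens the two texts share (0 = identical token usage,
    1 = no shared vocabulary). Illustrative only."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    # Count tokens shared by both texts (up to the smaller count per word).
    shared = sum(min(a[w], b[w]) for w in a.keys() & b.keys())
    total = sum(a.values()) + sum(b.values())
    return 1.0 - (2 * shared) / total
```

A detector along these lines could flag a submission whose distance to a corpus of known SCIgen output falls below some empirically chosen threshold, since SCIgen's limited grammar produces a characteristic vocabulary.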
The proceedings issues that appeared in Springer publications were (supposed to be) peer-reviewed; for the IEEE proceedings, it is less clear whether they underwent peer review. In any case, the former examples show that the peer review system is not always watertight, not just in the case of open-access journals (which was also discussed here at NewApps).
Most of the conferences took place in China and most of the authors have Chinese affiliations. Of course, it remains to be checked whether the author names correspond to real scholars and if so, whether they were aware of the submission in their name. Nature was able to contact one actual researcher: he does not know why his name appeared in the author list of such a computer-generated paper.
Below the fold, I offer a speculation on the motivation behind the submission of these fake papers.
(This post is the result of a Facebook debate started by Eric Schliesser.)
Given that what we are doing in philosophy might be footnotes to Plato all the way down, citation practices might not seem worth further discussion (that would be footnotes on footnotes in footnotes on Plato). But Kieran Healy’s data recently revealed the degree to which citation numbers cluster around certain big names. Citation practices seem to depend significantly on informal norms and expectations within the academic community. It is worth bringing these up for debate: more awareness of who is quoted, and why, could not only improve scholarship, but also help to make the hierarchies between the (perceived) centre and the (perceived) periphery of the academic community flatter.
Recently, I received two journal rejections within 4 days; it must be some kind of record. I could of course despair and take it personally, which is what I used to do at early stages of my career. But now, with sufficient publication success in the past to assure me that I am not a hopeless case when it comes to publications (or so I hope!), I try to look at rejections from a more positive, constructive angle. Readers who were interested in my post of a few weeks ago, on how to go about selecting journals to submit your papers to, may find my current thoughts on how to deal with these two rejections useful.
The first of the two rejections was somewhat frustrating. It came from a very fine, highly selective journal, but it was based on a single referee report, from a referee who seemed to misunderstand the main claim of the paper quite severely. (S/he identified an equivocation that I’m pretty sure is simply not there.) But at the very least, the report suggested that I hadn’t been clear enough concerning the main claims of the paper. The truth is that this paper defends a somewhat controversial thesis; the referee commended the paper as well written and well structured, but seemed simply not to find the main thesis particularly appealing.
One of the skills philosophers-to-be must master is how to negotiate the ins and outs of getting their papers published in journals. Of course, the main thing is learning how to write good papers in the first place, but as we all know, writing a good paper is not a sufficient condition for achieving publication. As the years go by and I move steadily from ‘young, up-and-coming philosopher’ to someone with responsibilities for training other people, I’ve found it increasingly important to guide them in the process of finding the right home for their papers. Obviously, learning to do so is a never-ending process, and we ‘old people’ are still prone to making strategic mistakes; but there is a thing or two that we learn through experience regarding how to select the right journal(s) to submit a paper to. In this post, I’ll elaborate on some of the ‘strategies’ I’ve been passing on to the people I supervise; many of them will sound obvious to more experienced members of the profession, but I hope they can be useful to those still learning to navigate the seas of the publishing process.
One well-known heuristic is to follow the order of a certain ‘hierarchy’ of journals, from top to bottom. So you start aiming as high as you can, and then go one step down the ladder if your paper is rejected. Now, while this is generally speaking a sensible approach, there is much to be said against it. For starters, it may take a very long time until the paper is finally accepted somewhere, and if you are a young professional in the job market, this is definitely something to be avoided. Moreover, some of the so-called top journals are known for taking much too long before getting back to authors, and this is a luxury that many cannot afford.
As many of you will have seen by now, it looks like Elsevier -- not content with taking down papers from academia.edu -- is now also issuing takedown notices to individual universities. Nicole Wyatt, chair of the Philosophy Department at the University of Calgary, reported on having received such a notice in comments here. The Sauropod Vertebra Picture of the Week blog, from which I had learned about the academia.edu takedown, also reported on the note received by the University of Calgary and passed on to all their staff. (Btw, did they go after other universities as well? Or is it a case of ‘pilot harassment’, as well described by the SV-POW site? So far I only know of occurrences with Calgary.)
If anyone was still looking for reasons to boycott Elsevier, this is clearly a good one. Of course, it is not too difficult for most philosophers to boycott Elsevier, which does not publish major philosophy journals. But Elsevier is very strong in some adjacent areas, psychology in particular; it publishes, for example, the flagship journal Cognition, where a number of philosophers have published.
Very nice meditation on the "necessity of generous reading" by Joy HERE. I'm happy to let Joy have the last word* on the latest imbroglio over Nathan Brown's attempted polemic.** I found Joy's post to manifest what it preaches, to be helpful to people like me who so often fall short of the explicated norms, and to be intellectually interesting in its own right (especially given that many of the citations will be new to philosophy professors).
*Along with David Wallace.
**Which as a genre is generally imbecilic independent of Brown's attempt at engaging in it. I should say that my greatest professional regret involves the overly polemical nature of a couple of my earliest publications, and that I have friends with much better CVs than mine who have the same feeling with respect to earlier pieces of their own. In a future blog post I'll expand on the imbecility of all polemic without mentioning any of the examples under current consideration.
When I was an undergraduate at the University of Texas in the late '80s, there was a huge fad of philosophers making fun of professors in other departments who had appropriated philosophical thinking for their own projects.
Honestly, it's pretty easy work for people who spend their lives just studying philosophy to beat up on our brothers and sisters in humanities departments when they enter into conversation with a philosopher. The trick is to bracket the dialectical context of the appropriation and to treat the norms relevant to those debates as if they were the same as the norms for writing good philosophy. With literature-department deconstructionism, this meant completely ignoring the context of New Criticism and the contribution that the appropriation of Derrida's and De Man's writings made against this background.
As a result of this kind of methodological stupidity, the revolution very quickly began eating its own,* culminating perhaps in the 1992 petition against awarding Jacques Derrida an honorary Cambridge doctorate. By this point it was clear that American philosophy had completely squandered a very real chance of retaining a role as queen of the humanities. If, during theory's heyday, a critical mass of us had actually taken the time (a couple of years' hard work in each case) to immerse ourselves in the relevant history and canonical texts of other departments doing "theory," philosophy would today widely be viewed as a helpful discipline, as opposed to this weird thing where we spin our own wheels.**
One of the most depressing things to me as a student of continental philosophy is to see how the worst aspects of the analytic/continental rift are now being replicated within continental philosophy.
Yet another interesting piece in the Guardian on academia: Nobel-prize winner (in medicine) Randy Schekman declares he will no longer submit papers to ‘luxury’ journals such as Nature, Science and Cell. His main argument:
These journals aggressively curate their brands, in ways more conducive to selling subscriptions than to stimulating the most important research. Like fashion designers who create limited-edition handbags or suits, they know scarcity stokes demand, so they artificially restrict the number of papers they accept. The exclusive brands are then marketed with a gimmick called "impact factor" – a score for each journal, measuring the number of times its papers are cited by subsequent research. Better papers, the theory goes, are cited more often, so better journals boast higher scores. Yet it is a deeply flawed measure, pursuing which has become an end in itself – and is as damaging to science as the bonus culture is to banking.
In an earlier post, I suggested that the journal Food and Chemical Toxicology (FCT) should not have retracted a paper that purported to show toxic effects in rats fed GM corn. Now just over 100 scientists have signed a petition protesting the retraction, stating that the retraction violated the norms of the Committee on Publication Ethics (COPE), of which FCT is a member. The scientists note concerns about the impartiality of the process (e.g., the appointment of ex-Monsanto employee Richard Goodman to the newly created post of associate editor for biotechnology at FCT) and assert, "The retraction is erasing from the public record results that are potentially of very great importance for public health. It is censorship of scientific research, knowledge, and understanding, an abuse of science striking at the very heart of science and democracy, and science for the public good."
The scientists are boycotting the journal's publisher, Elsevier; they will "decline to purchase Elsevier products, to publish, review, or do editorial work for Elsevier."
This is one more black eye for Elsevier that we can add to the removal of papers from Academia.edu, which Catarina recently blogged about, and the pre-existing boycott of Elsevier, discussed on this blog many times (see, e.g., here, here, here, and here).
It's very ugly (via many of my Twitter contacts). Go check the whole story, but here's the beginning:
Lots of researchers post PDFs of their own papers on their own web-sites. It’s always been so, because even though technically it’s in breach of the copyright transfer agreements that we blithely sign, everyone knows it’s right and proper. Preventing people from making their own work available would be insane, and the publisher that did it would be committing a PR gaffe of huge proportions.
Enter Elsevier, stage left. Bioinformatician Guy Leonard is just one of several people to have mentioned on Twitter this morning that Academia.edu took down their papers in response to a notice from Elsevier.
I thought I would make my inaugural post on NewAPPS a follow-up to Roberta's post about the retraction of the article in Food and Chemical Toxicology. I don't want to continue the debate about whether the retraction was justified; that debate can continue in the original thread. Here, I want to discuss one of the reasons why we should be paying vigilant attention to events such as these, and why their importance transcends the narrow confines of the particular scientific hypotheses being considered in the articles in question. What I worry most about is the extent to which commercial interests can apply pressure to shift the balance of “inductive risks” from producers to consumers by establishing conventional methodological standards in commercialized scientific research.
Inductive risk occurs whenever we have to accept or reject a hypothesis in the absence of certainty-conferring evidence. Suppose, for example, we have some inconclusive evidence for a hypothesis, H. Should we accept or reject H? Whether or not we should depends on our balance of inductive risks—on the importance we attach, in the ethical sense, to being right or wrong about H. In simple terms, if the risk of accepting H and being wrong outweighs the risk of rejecting H and being wrong, then we should reject H. But these risks are a function not only of the degree of belief we have in H, but also of the negative utility we attach to each of those possibilities. In the appraisal of hypotheses about the safety of drugs, foods, and other consumables, these are sometimes called “consumer risk” (the risk of saying the item is safe and being wrong) and “producer risk” (the risk of saying the item is not safe and being wrong).
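The trade-off just described can be made vivid with a toy expected-loss calculation. This is a schematic sketch of the decision rule, not anything from the inductive-risk literature itself; the function name and all numbers are illustrative assumptions.

```python
def accept_hypothesis(p_h: float, cost_consumer: float, cost_producer: float) -> bool:
    """Accept H ('the item is safe') iff the expected loss of accepting
    is lower than the expected loss of rejecting. Illustrative sketch.

    p_h           -- degree of belief that H is true
    cost_consumer -- loss if we accept H and H is false (consumer risk)
    cost_producer -- loss if we reject H and H is true (producer risk)
    """
    expected_loss_accept = (1 - p_h) * cost_consumer
    expected_loss_reject = p_h * cost_producer
    return expected_loss_accept < expected_loss_reject
```

The point of the example: with the same 90% confidence that a product is safe, inflating the producer-risk cost relative to the consumer-risk cost flips the verdict from "reject" to "accept." That is exactly the lever that conventional methodological standards can quietly move.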
My six-year-old Thomas is reading Star Wars books designed for six-year-olds. He's actually very good at it, but he does consistently misread the word "universe" as "university." Since it occurs quite a lot in these books, he's constantly telling me things like the following:
My name is Qui-Gon Jinn.
I am a Jedi.
The Jedi are a very special group of beings.
For many thousands of years, we have worked to promote peace and justice in the university.
This explains quite a bit (cf. Eric's post of earlier today).
Posted by Jon Cogburn on 21 November 2013 at 19:59 | Permalink
Try to make as short as possible the temporal distance between announcing a new idea publicly (whether via conference presentation or even just via participating Q&As at conferences) and submitting a paper with that idea to a journal.
If you wait too long, you will likely find yourself asked by reviewers to cite someone else's paper with the same or an astonishingly similar idea, published in the interregnum (between your making the idea public and submitting the paper), or you will be informed by a reviewer that the claim you are making is not very substantive because everyone already knows it.
There is no way to tell whether either of these responses arises because you yourself put the idea out there and it has since been digested by the relevant community. Surely this happens sometimes, but sometimes ideas just happen to be in the air; and it doesn't matter anyhow, because there is nothing you can do. Nothing is gained by raising it with the journal editor, who simply cannot be put into a "he said, she said" situation between you and her reviewers. The main point is that if you had submitted the paper earlier, the reviewers could not have responded that way. So if the idea was good enough to be accepted to a conference, or if noted scholars in the field encourage you to publish it after you share it, submit it to a journal as soon as possible.
I am optimistic about the potential of the powers-based approach, but I see its major barrier to success to be bridging the gap between itself and other systems, or at least, clearly situating itself with respect to the dominant dialectic. Many advocates of more traditional approaches see the powers-based system as operating within its own philosophical universe and making little contact with the existing framework. This hurts both sides: powers-based theories are only taken seriously by those antecedently friendly to them, and prevailing approaches do not benefit from the theoretical resources of the powers approach. At the same time, using the tools of the more dominant strategies would benefit powers-based theories, as some of their key concepts (properties and substances, to name a few) remain underdeveloped. Clearly connecting powers-based theories to the traditional Humean framework will open up greater theoretical resources for both sides.--Sara Bernstein reviewing at NDPR. [Letters added to facilitate discussion.]
This quoted passage is the closing paragraph of Bernstein's very informative and stimulating review. (What follows is in no sense criticism of Bernstein.) I read Bernstein as identifying the "traditional Humean framework" (i.e., Lewisian metaphysics) as the more "dominant" approach to metaphysics at present. I read her as describing the "powers-based" (i.e., a neo-Lockean or, more accurately, neo-Aristotelian) approach as the weaker party. Let's stipulate that Bernstein's judgment on the relative strength of both parties in analytical metaphysics is accurate (see also Troy Cross's recent reviews, here and here). Even so, her review raises some uncomfortable questions about the state of the discipline. Here I focus on three features: (i) the existence of sub-disciplinary echo-chambers; (ii) who gets to decide who should respond to whom; (iii) the benefits, if any, of philosophical engagement.
McKenzie's review of McGinn's book raises three distinct, larger 'issues:'
(1) How much incivility in reviewing is still acceptable?
(2) Do Oxford UP and other prestigious academic presses apply different 'rules' for 'senior' figures?
Let's ignore (3), really. When I mention him in what follows, it is only to discuss (1-2).
In my opinion (2) is the more important issue because it gets at the political economy of our profession. For, let's be clear: McGinn and OUP are not isolated cases. Here are some other examples: today I read a polite, albeit devastating, review of a book by Strevens (Harvard UP) that recounts a whole host of problems, including "failure to reference properly the remarkably rich research tradition." All the reviews of McGinn's 'triple' (see the links in discussion) make clear that he allowed himself to systematically ignore ongoing discussions in pertinent areas of scholarship. Whatever one might think of Nagel's Mind and Cosmos, however enjoyable and provocative, one cannot accuse it of generous engagement with informed, alternative views.
Colin McGinn is a famously polemical reviewer. Yet, when he gets administered a taste of his own medicine (recall), he resorts to name-calling: "absolutely hysterical, ad hominem, and completely devoid of any sense of critical decency."[HT Wayne Myrvold] I leave it to readers to judge this "hysterical."
In his rage he seems to have misread McKenzie's review. He falsely claims (without evidence) that "She pours scorn on my contention that physics is epistemologically limited in important ways—that physicists (and everyone else) are deeply ignorant of the intrinsic nature of the material world. She contrives to make it sound as if this view is an eccentricity dreamt up by me alone." If true, this claim would be astounding, because McKenzie is a product of Leeds philosophy--one of the strongholds for views that (in some of their guises) deny knowledge of the intrinsic nature of the world (see here). Of course, McKenzie does no such thing. Rather, McKenzie claims that (a) McGinn does not engage with folk who anticipate versions of such views and (b) that he misrepresents the existing discussion; moreover, she reveals that (c) his sole citation to contemporary work in the area is mangled.
Obviously, one can debate the merits of extremely polemic, uncivil reviews. But when in glass houses...
Zachary Ernst, we're sorry to see you go. But you've left us with some important issues to mull over, here.
These even include some issues that are under faculty control, like the following:
Is philosophy really so insular that we can't respect interdisciplinary work? That we can't recognize the extra effort (not less effort) that it takes to collaborate? I am afraid that I know the answers.
Furthermore, my department also considers single-authored work to be more significant than co-authored work. Frankly, I find this policy totally absurd, but it's not that uncommon. Because a lot of interdisciplinary work will appear in unfamiliar (to one's colleagues) venues, and be co-authored, that work is downgraded, not once but twice. The effect is that when it comes time to decide on salary raises, a faculty member with broad, interdisciplinary research interests is at a severe disadvantage. To put the point bluntly, interdisciplinary researchers get paid less.
Earlier this week I had a post on the 'sting' that was the topic of an article published in Science, purporting to show that many (most?) pay-to-publish open-access journals are not sufficiently selective about what they publish because they have an obvious motivation to accept as many papers for publication as they can. The presupposition seemed to be that this is not the case for subscription-only journals, which have good reasons to be selective, and thus presumably to publish higher quality articles.
Well, today I came across a blog post contesting exactly this presupposition. It's a great post, which you should read in its entirety, but the main point is that the key desideratum for subscription-only journals is to publish 'sexy' research, of the kind that generates a lot of citations, so as to increase the impact factor of the journal (and thus to increase subscription revenue). The author of the post, Michael Eisen (a biologist at Berkeley), even offers the example of a ludicrous paper published in -- yes, you guessed it right -- Science, which should never have passed any decent quality control process.
Below the fold you can read extracts from the post, which describes subscription publishers as "seasoned grifters playing a long con", among other spot-on observations. The general conclusion is again that peer review is a flawed system, and even if for the moment there doesn't seem to be any real alternative to it, it is important to keep in mind that it is a deeply problematic system. (The author is referring in particular to his field of research, but as I've argued before, there are reasons to think that the problem is equally acute for philosophy.)
Earlier this week, Science published a much-discussed article entitled ‘Who’s afraid of peer review?’ The article, however, focuses specifically on peer review in open-access journals that charge publication fees, as it reports on a little ‘experiment’ conducted by the author of the article: submitting multiple versions of the same spoof paper to a wide range of these open-access journals to see what would happen (a sort of Sokal hoax, but multiplied by 300). The hypothesis was that, given the multiplication of open-access journals following the ‘pay-to-publish’ model, the tendency is for the 'predatory journals' among them to accept pretty much anything that comes their way, and then charge authors exorbitant amounts of money as publication fees. And indeed, “by the time Science went to press, 157 of the journals had accepted the paper and 98 had rejected it.”
The article sparked a cascade of negative reactions, in particular from proponents of the open-access model who saw it as a vicious attack on the model by one of the flagship subscription journals (see here). Some have pointed out that, if the ‘study’ was intended to be methodologically sound, it should have sent an equal number of spoof submissions to non-open-access journals in order to ascertain whether there really is a significant difference between the two models when it comes to letting bad papers go through the sieve of peer review. Ultimately, what the experiment seems to indicate is above all a failure of the peer-review model, not of the open-access model. As well put by Curt Rice (who often has incisive analyses of the politics of higher education, e.g. on gender issues):
Meena Krishnamurthy has a blog post about the relative absence of political philosophy in The Philosopher’s Annual since 2002. This surprised me a bit because it seems to me a field that has been very fertile over the last decade. From afar it looks as if the grip of Rawls on the field has been loosened, and there is a lot of important and urgent work on legitimacy, international (and inter-generational) justice, democratic theory, and, of course, the role of religion today. (Of course, a lot of this is pursued in critical discussion with Rawlsian ideals.) Not to mention that the period has seen Libertarian ideals articulated and renewed with remarkable philosophical ability, and ongoing formal work in social choice theory. Anyway, go read her post.
UPDATE: Ryan Muldoon points out that formal work in political theory by Peter Vanderschraaf has been recognized!
Below the fold some further reflections by me.
One annoying feature of re-reading other people's scholarship is the possibility of discovering that one's treasured ideas may well be anticipated by others. Memory and self-deception can be funny like that. So, it's probably not uncommon that folk really fail to attribute to others what is due to them without realizing they are in the wrong. Even when the mistakes are honest, they still involve injustices, and these may be quite large given that they may, say, reinforce gender-related unfairness, too. Such injustices are not easy to excuse or forgive when one feels that one's work or presence has been silenced or unfairly ignored. Even so, we try to cope with this kind of injustice. Yet, faking data or copying (and pasting) texts without attribution is legitimately an unpardonable sin in the Academy, especially if it is part of a pattern of such (plagiarism/faking) cases. One might be willing to give a student a second chance, but recoil from letting a confirmed fraudulent senior scholar back into the fold. Paradoxically, many of us treat such cases as a worse sin than many crimes on the 'outside.' (Coetzee's Disgrace reflects on this.)
It is, thus, understandable that the good folk at Retractionwatch react with dismay that prominent scholars, including philosophy's very own Philip Pettit, are willing to endorse Marc Hauser's forthcoming book, Evilicious. What really rankles Retractionwatch is that Hauser has not owned up to his record of misconduct and has "only acknowledged 'mistakes.'" (As they write: "But we do prefer when those given a second chance acknowledge that they did something wrong. That might start with noting a retraction, instead of continuing to list the retracted paper among your publications.")
How common is this?
You write a paper that seems like the best thing you've ever worked on. You're bringing the thunder in the manner of Jack Black before Dave Grohl got his gum-chewing hands on him. You're channelling the gods, matter become spirit, a rock and roll hurricane, and it feels GREAT. Then the paper gets accepted to one or more conferences, and even in the cold, hard light of day it still rocks not only your socks, but a fair number of the conference participants who are specialists in whatever genre of philosophy your paper lives in. Then, after rewriting the thing based on conference discussion, you start sending it to journals. Very shortly thereafter, you stop feeling so blessed.
Journal after journal rejects the piece. While you might not think it's the very best thing you've written any more, it is still among the best because, rather than just raising a problem for someone else's view, it makes a non-trivial point that carves out an interesting new piece of dialectical space. Nonetheless, it just can't find a published home. After six or so tries, the period between submissions gets longer and longer until (after ten or so) it's just languishing there on the dead letter pile with the other two non-trivial articles you never managed to get published.
I have no idea how common that is (it's happened to me at least three times), but I have heard from a lot of other people at roughly my level of institutional affiliation* that the less trivial the idea, the harder it is to get something published. Many of us end up chasing book contracts precisely because of this dynamic. Books give you enough room to say something interesting next to the far away possible world counterexamples of more famous people's theories that are your journal articles' bread and butter.
If there is any general validity to my experience, the next question is why this is the case. Why are relatively trivial ideas so much easier to get published?
Biochemical and Biophysical Research Communications (BBRC) proudly advertises that it is "the fastest submission-to-print journal! Number 1 journal in the Thomson's JCR ranking for Biophysics in terms of Total Cites, Number of Articles and Eigen Factor ™ score," with a "5-Year Impact Factor: 2.500." Apparently, sometimes speed does not pay, because the journal has been the victim of a spectacular hoax (recall). Nature reports:
Ghost writing is taking on an altogether different meaning in a mysterious case of alleged scientific fraud. The authors of a paper published in July (A. Vezyraki et al. Biochem. Biophys. Res. Commun. http://doi.org/nxb; 2013), which reported significant findings in obesity research, seem to be phantoms. They are not only unknown at the institution listed on the paper, but no trace of them as researchers can be found. [HT Stefan Heßbrüggen]
Oddly enough, so far there is no evidence that the hoax was perpetrated to expose the vulnerability of scientific refereeing practices. In fact, Nature quotes a scientist (the one who alerted the editors to the hoax) who "believes that the paper was intended to hurt him and his lab."
Let's stipulate that there is genuine bullshit (see Frankfurt 1986). Let's also stipulate there is bullshit in the Humanities, even in philosophy.
A lot of people I know in philosophy are pretty confident that much of what passes in Literary Theory, and in the philosophies that influence(d) it, is bullshit. I have seen testimony from people who ardently defend this view and who have studied quite a bit of, say, Continental philosophy before reaching this conclusion. (Of course, in reality, a lot more folk are dismissive on the basis of extremely slender personal, intellectual investment.) When pressed for evidence, the Sokal Hoax is trotted out as exhibit A. It made a great splash inside the academy and the popular media that covers it. Rather than interpreting the case as an instance of bad refereeing and editorial misjudgment, quite a few people wrote off whole areas of thought.
I just learned that a paper was retracted from Journal of Physics D: Applied Physics--a very fine physics journal published by a reputable institute. It frankly reports, "The Editorial Board has investigated this and found that the XPS spectra shown in figure 3 all exhibit an identical noise pattern that is unphysical." [HT Retractionwatch] In other words, the journal published artfully presented bullshit. (It recently announced that it "is now using ScholarOne Manuscripts for submission and peer-review management.") Undoubtedly, this incident is unpleasant for all the parties involved, but nobody in their right mind will draw any inferences about physics from it.
The moral: very good journals can publish bullshit, and the refereeing institutions of all disciplines need constant maintenance.
[Rachel McKinnon solicited this post from me. She should be blamed for any insights.--ES]
Refereeing a book manuscript for a university press can be a daunting enterprise. If you don't watch out, it can be very time-consuming (some of us should be kept off the streets--you know who you are). Crucially, the norms that apply are not entirely clear. For example, if you find an invalid argument on p. 275, it might, after all, be worth repairing given all the other riches. But what if you find lots of problems (lack of citations, garden path arguments, etc.), yet judge that the book will make a major contribution? Now, book-refereeing is rarely masked--referees nearly always know the identity of the author. Is there something to be gained in seeing the -- let's stipulate, dead-wrong -- views of, say, an influential PhD supervisor in print, rather than propagated in the works of the students?
More subtly, the interests of presses and the discipline do not coincide. Here's a concrete example: whatever you think of the substance of Nagel's Mind and Cosmos, it is undeniable that it could have benefitted from more exacting refereeing. Leaving aside his engagement with the (philosophy of) sciences, it is undeniable that if Nagel had engaged with more recent analytic metaphysics he could have given a far better and more favorable account of the nature of the problem-space. But, of course, if it had been seriously revised in light of serious refereeing it would have been almost certainly less readable and, perhaps, less controversial; it might also not have read anymore as Nagel's last will to the profession.
Here follow some defeasible considerations that might inform book refereeing:
1. [A] Your main job as a referee is to help an editor -- almost never a professional philosopher -- figure out the significance of the book and anticipate how people in the field might respond to it. [B] Your duty to the profession is to uphold scholarly standards (quality, citation practices, etc.).