Brian Leiter criticizes the new Google Scholar Metrics, which uses h-index and various similar measures to assess journals. He writes that "since it doesn't control for frequency of publication, or the size of each volume, its results are worthless." Some of my friends on Facebook are wondering why he's saying this, so I'll try to offer a helpful toy example here.
Consider two journals: Philosophical Quality, which publishes 25 papers a year, all of which are good and well-cited; and the Journal of Universal Acceptance, which publishes 25 equally good and well-cited papers a year as well as 975 bad papers that nobody ever cites. Google gives both journals the same score along all its metrics. The h-index is the largest number h such that at least h of a journal's papers have at least h citations each, so tacking extra uncited papers onto a journal can never change it; JUA's additional bad papers make no impact on the h-index (or on Google's other measures defined in terms of h-index, like h-median or h-core). But if you're looking at someone's CV and they've got a paper in one of these journals, you should be more impressed by a Phil Quality paper than a JUA paper. The Phil Quality paper is likely to be good, while a JUA paper is likely to be bad. Still, Google will see them as equal.
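The toy example is easy to check directly. Here's a minimal sketch of the h-index calculation (the citation counts for the 25 good papers are made up for illustration, not real data):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break
    return h

# Hypothetical citation counts for the 25 good papers: 40, 39, ..., 16.
good_papers = [40 - i for i in range(25)]

phil_quality = good_papers
jua = good_papers + [0] * 975  # same good papers plus 975 uncited ones

print(h_index(phil_quality))  # → 20
print(h_index(jua))           # → 20: the 975 uncited papers change nothing
```

Because uncited papers sort to the bottom of the ranking, they never affect whether the top h papers clear the h-citation bar, so both journals come out identical.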