There have been lots of discussions about the PGR (e.g., here), especially about its leader, Brian Leiter, including a poll on whether the 2014 edition should be produced. Regardless of the outcome, I think we can already start considering alternative ways, independent of the PGR, to provide information for prospective philosophy graduate students.

  1. Ideally, such information should not be primarily about rankings of quality. Quality is a complex concept that is vulnerable to biases and to enforcing the status quo. We should rather provide prospective grad students with clear measures of placement rates and with information about where they could study the topic of their choice. Perhaps any type of ranking will be problematic. We could instead provide descriptive info on a wide range of topics, e.g., where there are places to study experimental philosophy, French continental philosophy, etc. One can give that info *without* giving an overall rank of perceived quality. 
  2. The methodology by which placement rates are compiled, and by which assessments of strengths within departments are made, should be empirically informed by the social sciences, e.g., in the selection of the experts who make these assessments.
  3. Collecting and disseminating this information shouldn't be in the hands of one individual, but should be a shared responsibility. I originally thought it was something the APA, or perhaps a task force consisting of people from the APA, the AAP, etc. could do, but I am now not so sure whether this is a good idea. PhilPapers seems like a good place to host the information, especially given that prospective graduate students will already be familiar with PhilPapers.
  4. It would be nice to expand information for prospective graduate students to non-Anglo-Saxon departments. There are lots of grad students outside the English-speaking world who could benefit from lists of placement records and of the specializations of faculty members outside the US, UK, etc.

16 responses to “What information should prospective graduate students get?”

  1. John Schwenkler

    Hi Helen,
    I am just going to repost here what I’ve already said elsewhere in connection with your #1 and #2, namely that I would be very interested to hear input from social scientists on how to assess placement statistics, which many people seem to think should be an important element in an alternative to the PGR. As I’ve noted before in several different contexts, I think there are so many complicating factors in this area that it is hard to know what a sound quantitative assessment would look like. Here are a few:
    – incomplete or inconsistent reporting of the data;
    – the differing relative ability of students coming in (i.e., programs that enroll better students may have disproportionately good placement rates, but this doesn’t reflect on the quality of their training);
    – the question of what to do with students who choose to leave professional philosophy even though they could perhaps have attained a TT job, but whose philosophical training benefited them in their new career; and
    – the (arguably) differing “value” of some TT jobs compared to others.
    Because of this, what I’d most like to see is a central clearinghouse, perhaps sponsored by the APA, with exhaustive descriptive records of what students have done after entering each program. If people want to compile these records into statistics or rankings, they can do it on their own time.
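
    A very rough sketch of what a single record in such a clearinghouse might look like, and of how a reader could then compute one statistic of their own; the field names, the outcome labels, and the whole schema here are hypothetical, not an actual proposal:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PlacementRecord:
    """One descriptive entry per student who entered a program (hypothetical fields)."""
    program: str
    year_entered: int
    outcome: Optional[str]  # e.g. "TT job", "postdoc", "left academia"; None if unreported

def tt_rate(records: List[PlacementRecord], program: str) -> Optional[float]:
    """Share of a program's reported outcomes that are TT jobs; None if nothing is reported."""
    reported = [r for r in records if r.program == program and r.outcome is not None]
    if not reported:
        return None  # incomplete reporting: better no number than an invented one
    return sum(r.outcome == "TT job" for r in reported) / len(reported)
```

    Even this toy version shows where the complications above bite: unreported outcomes silently drop out of the denominator, and nothing in the record distinguishes one TT job from another.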

  2. David Mathers

    ‘Quality is a complex concept that is vulnerable to biases and to enforcing the status quo.’ Is your view that any concept which is vulnerable to racist or sexist or other biases should automatically be dropped, or at least given a highly subordinate position, regardless of its importance or otherwise in our cognitive lives in a particular domain?

  3. Helen De Cruz

    David: How would we redefine quality without, for instance, the implicit judgments about which subfields are most central to the discipline (metaphysics, epistemology, philosophy of mind, and philosophy of language), or about what topics are worth thinking about (e.g., the judgment that religion, feminism and race are not serious topics of philosophical investigation)? I’m not saying it can’t be done. Jennifer Saul (I think) recently said that Sheffield consistently gets excellent ratings from the REF. The reason is that the REF outputs are judged by people who know the work. Sheffield is also PGR ranked, but because it conducts a lot of work that is not deemed of central importance, it is less well ranked there than its REF evaluations would suggest. I am wondering whether much would be lost if we just got rid of overall quality rankings, which compare apples with oranges across departments that have different strengths and weaknesses. I am not sure whether specific topics (e.g., say I am a grad student interested in learning how to conduct work in experimental philosophy) could still be ranked (as they are now). I suspect that even there, we could simply list departments that have significant output, research, and supervision ongoing in experimental philosophy, without ranking them. Students can then use that information, combined with other quantitative measures such as placement data, to decide where to apply.

  4. Lisa Herzog

    There is this maxim that one should try to make one’s measurement as exact as the subject matter under consideration allows, but not more exact (didn’t Aristotle say something along these lines?). I wonder whether what we should have is rankings from one to x, or whether we should rather have groups. Something like “here are ten good places (or: the ten best places) for studying subdiscipline x”. And then there could be a second and third group for places that are also good, but not as good. One might either have fixed group sizes, or leave this open. I guess some social scientists can tell us how best to operationalize this. In any case, it seems more appropriate to the degree of exactness that such judgments could ever have, while avoiding a false impression of precision.
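
    A minimal sketch of what that could look like, assuming one already has some specialty score per department; all the scores and cutoffs below are invented for illustration:

```python
# Hypothetical specialty scores for five departments (invented numbers).
scores = {"A": 4.6, "B": 4.5, "C": 3.9, "D": 3.8, "E": 3.1}

def group_into_tiers(scores, cutoffs):
    """Assign each department to a tier: 1 for the top group, 2 for the next, and so on."""
    def tier(score):
        for i, cutoff in enumerate(cutoffs, start=1):
            if score >= cutoff:
                return i
        return len(cutoffs) + 1
    return {dept: tier(s) for dept, s in scores.items()}

print(group_into_tiers(scores, cutoffs=[4.0, 3.5]))
# {'A': 1, 'B': 1, 'C': 2, 'D': 2, 'E': 3}
```

    Whether the cutoffs (or fixed group sizes) are defensible is exactly the operationalization question for the social scientists; the only point of the sketch is that a tier reports less precision than a strict 1-to-x ordering pretends to have.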

  5. Benny Goldberg

    I think our first job should be to seriously attempt to dissuade them from going to graduate school.

  6. Helen De Cruz

    Oh yes, definitely – or at least paint the picture in all its bleakness. Maybe the new information for prospective grad students in philosophy should start out with that, or have a section on it: a realistic picture of the (overall) placement rates, depression in grad school, attrition rates, and average debt accrued. Again, it would be good to be factual (there’s a whole literature of “Grad school: just don’t go” pieces that has become a literary form of its own); I still think it’s good to give a dry account of the data on how good one’s chances are (e.g., Carolyn’s placement data) and then let them decide if they still want to go through with it.

  7. Kenny Easwaran

    It sounds to me that the issue you are raising about “quality” concerns some supposed notion of “overall philosophy quality”. That definitely seems problematic, and reinforces issues about the centrality of certain subfields. However, it seems to me that if we move to a model of specialty rankings only, it does make sense to make the judgments within each specialty be about “quality”. I don’t think just listing how many people at a department work on some topic is that relevant – an undergrad has no way to know whether 4 people who have the topic as a secondary or tertiary area of work are going to be as useful as one person who has the topic as her primary focus. What an undergrad needs is for someone (or preferably, several people) who already work in the field they’re interested in to give them some sense of where work (and preferably new and interesting work) in this field is being done.
    We can’t hope to provide that at the scale of philosophy as a whole (especially since most of us can’t even agree about what philosophy is or what it should be), and it’s probably still problematic at moderate scales. But when we’re looking at things with a narrower scope, expert judgment of quality does seem to me to be the best thing we can have. (We can’t just use self-reports of whether an area is a focus of a department – that just means delegating judgments of quality to colleagues in different sub-fields, rather than to specialists in the same sub-field at different departments.)

  8. John Schwenkler

    Yes, yes, a hundred times yes to what Lisa Herzog says above. (And I love the reference to Aristotle — the passage is Eth. Nic. I.3.) A huge problem with the PGR — and many other such rankings, of course — is the false conceit that we can be sure that #3 really is significantly superior to #4, #16 to #17, etc.

  9. David Wallace

    I do find it surprising that various people find problematic the very concept of making an assessment or ranking of quality here. As a professional academic, I seem to spend half my life making assessments and rankings of quality – grading undergrad work, assessing and ranking undergrad and graduate applications, reviewing papers, reviewing grant proposals, shortlisting and selecting for jobs, writing tenure and post-tenure letters… Why in this one case should it become (not merely imperfect and difficult but) impossible?

  10. Jacob Archambault

    I intended the following as a comment on this post, but it grew a bit long, so I moved it to a post of my own. Those interested can see it here.

  11. Eric Winsberg

    Hi David,
    I’ll play devil’s advocate here (I’m not one of the people you find surprising):
    Most of the evaluations you spend your time doing are apples to apples. Rarely are you comparing the quality of a philosopher of physics to a political philosopher. The only exception is at the finalist stage of an open search, and in my experience, those decisions tend to be very contentious and dominated by personal, subjective preferences. But with the exception of a few departments that are incredibly strong in almost everything, ranking departments requires one to make those sorts of apples to oranges comparisons. Everyone knows by now, more or less, how the PGR happens to come down on these comparisons, and not everyone agrees with those sets of preferences.
    Second, it’s interesting to note that all of the evaluation work you are remarking on has ALREADY been done by the time the PGR surveys go out. Shouldn’t the quality of the departments supervene on the quality of the individual pieces (people, papers, books, etc.) that have already been ranked and evaluated – and ranked and evaluated more carefully, and by folks more expert in the relevant areas, than they possibly could be by PGR evaluators looking at departments as a whole? How hard is it, in this day and age, to go look for yourself at how many philosophy of physics papers have been published by folks who work at Oxford, at the prestige of the venues in which they have published, at the number of citations those works have accrued, etc., and come up with your own judgment of the quality of Oxford for philosophy of physics? The burden of proof needs to be on the defender of department rankings to show that they provide value over and above all the work that we already do in evaluating the pieces, and to show that we need AGGREGATE comparisons of quality – the kind that require me to say whether your X philosophy of physics papers are better or worse than his or her Y aesthetics papers.
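
    To make the “look for yourself” thought concrete, here is a toy aggregation of evaluations that already exist; the data, the venue weights, and the choice of formula are all invented for illustration, and choosing them is of course where the real disagreement lies:

```python
# Hypothetical publication record for one department in one specialty:
# (venue_weight, citations) per paper, where venue_weight stands in for whatever
# prestige measure one trusts. All numbers are made up.
papers = [(1.0, 40), (0.8, 12), (0.6, 5)]

def specialty_score(papers):
    """One possible aggregate: venue-weighted citations per paper."""
    if not papers:
        return 0.0
    return sum(weight * cites for weight, cites in papers) / len(papers)

print(round(specialty_score(papers), 1))  # 17.5
```

    Note how much is already smuggled into the venue weights and into treating citations linearly.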

  12. David Wallace

    Picking up quickly on just one part of this, as I’m supposed to be on vacation:
    “How hard is it, in this day and age, to go look for yourself at how many philosophy of physics papers have been published by folks who work at Oxford, at the prestige of the venues in which they have published, at the number of citations those works have accrued, etc., and come up with your own judgment of the quality of Oxford for philosophy of physics?”
    For me: pretty trivial. For a prospective grad student: not so much.

  13. Eric Winsberg

    Sure. But a rating service could aggregate this information for prospective graduate students without having to make its own independent assessments of quality, which is what I took to be at issue.

  14. David Wallace

    Okay, but now a lot of work is being done by the weighting algorithm. I’m unpersuaded that human judgement is inferior to an algorithm here. (Not that it’s hugely salient to my point: a ranking assembled algorithmically from other rankings is still a ranking.)

  15. David Wallace

    (Reposting – more or less – as the internet seems to have eaten my previous version)
    But then the algorithm used by the rating service to determine the overall ranking is doing a lot of work; I don’t have more confidence here in an algorithm than in a human judgement. (Unless the intention is for the rating service not to produce an overall ranking? But then the prospective grad student has to do it themselves, and again, they’re not in a position to.)

  16. Incredulous

    If graduates in a discipline that often conceives of itself as the most critical, deep and reflective of all disciplines (‘Just look how our majors do on the GRE!’ etc.) cannot understand, e.g., that an opinion poll is an opinion poll, or that the difference between 1st and 2nd need not be the same as the difference between 2nd and 3rd, then they should not be going to graduate school; they should be going to the library to get introductory books on statistics and critical thinking.
    Like David Wallace, I’m astonished that a discipline that virtually trades in trying to pin down complex concepts is shying away from the task now that we actually have to do something concrete, like put a number on it – even a range of numbers – that has real implications for students’ life choices (unlike wrestling with whether or not Julius Caesar is a number, or, 2,500 years into the task, what exactly counts as knowledge).
