The NRC data on Philosophy graduate schools have been criticized, but in this thoughtful guest post, Carolyn Dicey Jennings suggests a useful way to combine them with reputational surveys such as the Gourmet Report. She writes:
Finding the right graduate school can be tricky. The reputational value of the faculty is important, but so is the support one is likely to receive as a graduate student, the ultimate prospects for getting a job, and other factors that are not necessarily tied to faculty reputation. When I was looking at graduate schools, I went to the Gourmet Specialty Rankings and looked at every department mentioned under the Philosophy of Physics specialty. For each department listed, I downloaded papers by those faculty members who specialized in the philosophy of physics and sent emails to faculty I was interested in working with. I found a great fit at Boston University. Unfortunately, after all that research, I decided to change specialties a couple of years into my degree. Luckily, I was in a supportive department with some true pluralists, such as my advisor, Dan Dahlstrom. But I can imagine having been much worse off if I hadn't been in a department with good student support. And so it is important for everyone, I think, regardless of specialty and commitment, to know something about these other factors. This is where the National Research Council comes in.
Unfortunately, in its effort to make a point about the lack of precision in ranking systems and the difficulty of even choosing a single set of qualities with which to rank departments, the NRC has left us with an unwieldy amount of information. The chart below is an attempt to visualize some of this information, to make it more “user-friendly”: I took the two NRC ranking measures, the “S” and “R” rankings, and plotted them against each other for each ranked department.
The “S” ranking reflects the outcome of multiplying the actual features of each department by the relative value that experts in the field grant those features. For example, if a department has a relatively high level of faculty diversity and experts in the field think that faculty diversity is relatively important, then that department will have a higher S rank. I put the mean S rank on the Y-axis, so departments with higher S rank will be closer to the top of the graph.
The “R” ranking reflects the outcome of multiplying the actual features of each department by the value that experts in the field seem to grant those features when they rank a sample set of departments. For example, if experts in the field consistently give high rankings to those departments with high levels of faculty diversity, all those departments with relatively high faculty diversity will be given high R ranks. I put the mean R rank on the X-axis, so departments with higher R rank will be closer to the right-hand side of the graph.
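The weighted-sum idea behind both rankings can be sketched in a few lines of code. The department names, feature values, and weights below are entirely hypothetical, made up for illustration; they are not the actual NRC measures or weightings.

```python
# Hypothetical sketch of an NRC-style weighted sum: each department's
# measured characteristics are multiplied by the weights experts assign
# to those characteristics, and the products are summed into one score.
# All values below are illustrative, not real NRC data.

# Per-department feature values (e.g. publications per faculty member,
# faculty diversity, completion rate), scaled 0-1 for illustration.
departments = {
    "Dept A": [0.9, 0.4, 0.7],
    "Dept B": [0.6, 0.8, 0.5],
}

# S-style weights: how important experts *say* each feature is.
s_weights = [0.5, 0.3, 0.2]

def weighted_score(features, weights):
    """Multiply each feature by its weight and sum the results."""
    return sum(f * w for f, w in zip(features, weights))

scores = {name: weighted_score(f, s_weights)
          for name, f in departments.items()}

# Departments are then ordered by score, highest first.
ranking = sorted(scores, key=scores.get, reverse=True)
```

An R-style score would use the same `weighted_score` function, but with weights inferred statistically from how experts actually rated a sample of departments, rather than from what they said mattered.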
I compare the outcomes of the two rankings because I think both have value, and because there are some notable discrepancies between these values for some departments. You will notice that I added a “best fit” line for each chart: this line is put there so that you can see which departments have higher S than R rank (those above the line) and which have higher R than S rank (those below the line), relative to other departments in the data set.
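The “best fit” comparison above can be sketched as an ordinary least-squares fit over (R, S) pairs, with each department flagged by which side of the line it falls on. The departments and rank values here are made up for demonstration and are not the real NRC numbers.

```python
# Illustrative sketch: fit a least-squares line to (R, S) rank pairs,
# then flag which departments plot above it. All numbers are invented.

pairs = {  # hypothetical department -> (mean R rank, mean S rank)
    "Dept A": (5.0, 3.0),
    "Dept B": (10.0, 12.0),
    "Dept C": (15.0, 14.0),
}

def fit_line(points):
    """Ordinary least-squares slope and intercept for (x, y) points."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    slope = (sum((x - mx) * (y - my) for x, y in points)
             / sum((x - mx) ** 2 for x, _ in points))
    return slope, my - slope * mx

slope, intercept = fit_line(list(pairs.values()))

# A department plots "above the line" when its S value exceeds the
# line's prediction at its R value.
above_line = {name: s > slope * r + intercept
              for name, (r, s) in pairs.items()}
```

In the actual charts the same comparison is done visually: the fitted line splits the scatter plot into departments whose S standing outpaces their R standing and vice versa.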
Students who would like to see how the NRC data interacts with faculty reputation can take a look at the two other charts below. These two charts look at departments in the United States that are ranked by both the NRC and the Gourmet Report. I took the numerical ranking for each department from the 2006 Gourmet Report and placed it at the location of the department's name in one of the two charts (with the department's name in the other).
One can quickly see that some departments given high reputational rankings don't have high NRC (S or R) rankings, and vice versa. There are multiple reasons for this, but a major factor is that the NRC looked at mostly quantitative measures of research, favoring departments with high publication output, whereas the Gourmet Report favors departments with highly cited or otherwise well-received research. Both systems have their potential points of weakness, which is why I think it is important for students to be aware of both sets of information.
How can anyone use this data? I would recommend using the NRC data as a back-up to the reputational data. That is, I would recommend starting with the Gourmet specialty rankings and then looking at how your favored departments stack up according to the NRC (for departments not covered by the NRC, look directly at the department's website to find the relevant data). I would then go to the departments' websites as well as to the full set of NRC data to check whether any points of weakness uncovered by the NRC a) are unimportant to you, b) are due to inaccurately recorded data, or c) have changed since 2006. Once you have identified which departments are likely both to be a good fit for your favored specialties and to offer good student support, apply away!