I find Carolyn's post on the 2006 NRC data quite fascinating. (Remember to click on her diagrams to bring up larger versions.)
I take it that the S-ranking reflects actual performance (however that might be measured) on the factors that are explicitly valued, while the R-ranking reflects not reputation itself, but actual performance on the factors found to influence reputation. So to some extent, R is appearance (or rather predicted appearance: high performance in what makes for reputation) and S is reality. Of course, the measurements and analysis may be rubbish, so one has to take the whole thing with a grain of salt.
Still and all: there are some strange results . . .
What’s just as interesting is the discrepancy between the factors that explain reputation and actual reputation (as measured by PGR). For example, CUNY (I think), which is 50th in reputation-making factors, comes in at 23rd in actual PGR rank. (And don't forget it is 80th in the explicitly valued factors.) UCLA, which comes 30th in reputation-predicting factors, earns a 7th-place finish in the PGR. USC, predicted at worse than 50th, actually comes in at 16th. The four departments all predicted to come in 10th (or so) actually come in at 5th, 7th, 16th, and 50th. Same prediction, very different results.
I would not mind seeing the R rank plotted against PGR rank to see how good the fit is. But to repeat, there are certainly some big discrepancies.
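For what it's worth, here is a minimal sketch of how one might make that plot and put a number on the fit, assuming one had the two rankings as parallel lists keyed by department. The figures below are placeholders loosely echoing the examples mentioned above, not the actual NRC or PGR numbers.

```python
# Sketch only: placeholder data, NOT the real NRC R-ranks or PGR ranks.
import matplotlib.pyplot as plt
from scipy.stats import spearmanr

# Hypothetical example data: department -> (R rank, PGR rank)
ranks = {
    "Dept A": (10, 5),
    "Dept B": (10, 7),
    "Dept C": (10, 16),
    "Dept D": (10, 50),
    "Dept E": (30, 7),
    "Dept F": (50, 23),
}

r_rank = [v[0] for v in ranks.values()]
pgr_rank = [v[1] for v in ranks.values()]

# Spearman's rho is a natural measure of agreement between two rankings.
rho, p_value = spearmanr(r_rank, pgr_rank)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")

plt.scatter(r_rank, pgr_rank)
plt.xlabel("R rank (reputation-predicting factors)")
plt.ylabel("PGR rank")
plt.title("R rank vs. PGR rank")
plt.show()
```

A tight fit would show the points hugging a rising diagonal and a Spearman rho near 1; the sort of discrepancies noted above would show up as points far off that line.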