Nostalgia

Sunday, February 18, 2007

Rankings

In previous blog posts, John Capps has argued that the Philosophical Gourmet Report (PGR) gives us a somewhat inaccurate picture of the quality of philosophy programs. He now appeals to the Faculty Scholarly Productivity Index (FSPI) to back up this claim. The FSPI measures productivity by the number of books and articles published and the number of citations they receive. Given these parameters, the 10 highest-ranking philosophy programs are:

1. Michigan State
2. CUNY
3. Princeton
4. U. of Virginia
5. Rutgers
6. UC San Diego
7. Penn State
8. UT Austin
9. SUNY Stony Brook
10. Rice

Capps notes that the list "doesn't look anything like the Leiter list". He also anticipates an objection: the PGR measures quality, whereas the FSPI measures quantity. Capps replies that citations are normally taken to be a good measure of quality. That's a good point. But I wonder whether the numbers cited by the FSPI are really accurate. According to the FSPI, only 20% of the faculty at Princeton and 28% of the faculty at Rutgers published an article in 2003-2005. Though I haven't checked, that seems unlikely. On the other hand, if there are mistakes, everyone is probably affected equally, and so that would not explain why the FSPI looks so different from the PGR.

So why does it look different? One possible answer is that there are numerous ways to measure the quality of a program. The FSPI clearly cannot measure the originality or quality of articles as well as the PGR can. Still, these data are quite interesting, and it might be a good idea for prospective Ph.D. students to compare the two lists before deciding where to go. But students should also keep in mind that some prospective employers might consult the PGR rather than the FSPI before deciding whether or not to make an offer.
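To get a feel for how sensitive an index like this is to what gets counted, here is a minimal sketch of a per-capita publication-and-citation score. The departments, venues, numbers, and weighting below are made up for illustration, and this is not the FSPI's actual formula; the point is just that if the tracked venues miss where a department's faculty actually publish, its score drops no matter how prolific they are.

```python
# Toy per-capita productivity score (illustrative only, not the FSPI's method):
# count each article published in a tracked venue, add its citations,
# and divide the department's total by its faculty size.

TRACKED_VENUES = {"Ethics", "Review of Metaphysics"}

# Hypothetical records: (department, venue, citation count)
records = [
    ("Dept A", "Ethics", 12),
    ("Dept A", "Philosophical Review", 40),  # untracked venue, so it is ignored
    ("Dept B", "Review of Metaphysics", 5),
    ("Dept B", "Ethics", 7),
]

faculty_size = {"Dept A": 20, "Dept B": 8}

def productivity_index(records, faculty_size, tracked):
    """Per-faculty score: one point per tracked article, plus its citations."""
    totals = {dept: 0.0 for dept in faculty_size}
    for dept, venue, cites in records:
        if venue in tracked:
            totals[dept] += 1 + cites
    return {dept: total / faculty_size[dept] for dept, total in totals.items()}

print(productivity_index(records, faculty_size, TRACKED_VENUES))
# {'Dept A': 0.65, 'Dept B': 1.75} -- Dept A's most-cited article falls
# outside the tracked venues, so the smaller Dept B comes out ahead.
```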

10 comments:

Richard Zach said...

That weird ranking is easily explained: the database they use for article publications includes only Open Access journals. Hence, it includes very few good philosophy journals. The list of journals classified as Arts and Humanities/Philosophy is:
Journal of Mundane Behavior
Philosophy of the Social Sciences
Ethik in der Medizin
Journal of Consciousness Studies
Ethique et Sante
Library Philosophy and Practice
Journal of the History of Ideas
Social Philosophy and Policy
Studies in East European Thought
Review of Metaphysics
Kennedy Institute of Ethics Journal
Ethics
Philosophical Magazine
Journal for General Philosophy of Science
Journal of Value Inquiry
Pastoral Psychology
There are a couple of philosophical journals classified in other areas; the Southern Journal of Philosophy, for some reason, is listed under Medicine.

khadimir said...

Would you please give some justification for the statement "The FSPI clearly cannot measure originality or quality of articles as well as the PGR"? I think such an assertion needs more discussion in this context.

Aidan said...

A quick follow-up on Richard's comment. It doesn't look like any provision has been made to take invited book chapters into account; it's mentioned that book publications in that period are recorded via Amazon and peer-reviewed journal articles via Scopus, but I couldn't see any suggestion that they were tracking book chapters. If I'm right, that would presumably also help to explain why some departments seem so strangely un-prolific.

Brit Brogaard said...

The fact that very few philosophy journals are tracked explains a lot. The fact that book chapters are not tracked explains less, as that should affect everyone equally. When I said "the FSPI clearly cannot measure originality or quality of articles as well as the PGR", I wasn't aware that they tracked so few of the good journals. In fact, I was assuming that they had taken the mainstream journals into account. What I meant was this: the PGR is based on the opinions of a large number of experts in the field, and expert advice is usually more accurate than random stats.

Anonymous said...

On the other hand, as has frequently been mentioned as a criticism of the PGR, expert advice is limited to the expertise of the experts. If few of the people consulted work on Husserl or even Deleuze, then the PGR is simply not capable of giving any expert advice whatsoever on the quality of departments where a good deal of work on those figures takes place. While quantity measures seem like a bad idea, the "quality" measure is just as questionable.

Brit Brogaard said...

Yes, the expert advice is indeed limited. But LOTS of experts are consulted, and so only very few areas are under-represented. Of course, the ideal would be to expand the FSPI to take account of mainstream journals, book chapters, etc. We would then have two comparable lists. If they were still very different, we could create a super-list based on Leiter and the expanded version of the FSPI.

Anonymous said...

Didn't the Penn State department only recently come out of receivership? Ranking such a program in the top 10 would seem to be prima facie evidence against the accuracy of the list.

Anonymous said...

Anonymous, I don't see how. The ranking purports to tell us about faculty scholarly productivity by looking to specific publication venues. Accuracy of the list should be judged according to whether it accurately does that, no?

Anonymous said...

"Anonymous, I don't see how. The ranking purports to tell us about faculty scholarly productivity by looking to specific publication venues. Accuracy of the list should be judged according to whether it accurately does that, no?"

Of course, ranking a department which was recently in receivership in the top 10 doesn't in itself undermine the accuracy of the ranking qua ranking of faculty scholarly productivity. But I take it that one of the chief reasons one might care about such a ranking is that one takes faculty scholarly productivity to correlate reasonably well with program quality. Including a department which was recently in receivership in the top 10 seems to me to cast serious doubt on either (i) this correlation or (ii) the accuracy of the ranking qua ranking of faculty scholarly productivity. The more one likes the correlation (and it does seem there's something to it), the more likely one will be to conclude that the ranking isn't really accurate. I guess I was too brief in my previous post. Apologies.

Anonymous said...

But I am still wondering why the fact that Penn State was recently in receivership should indicate that it is not now a good place to receive an education and training in at least some areas of philosophy. Something is missing. Maybe it is the consequences that having been in receivership has had on the program? Or some continuity between the reasons the department went into receivership and what is going on in the program now?

I'm also skeptical about the strength of the correlation between a high ranking and the quality of the education and training one receives. I think mentoring is a very important aspect of graduate education, and none of these lists speaks to that in the slightest.