In the coming weeks I hope to be updating you with more details and analyses, but for now I am simply announcing that the final report for APDA is complete. Feel free to ask questions or comment below.
*Update: we noticed an error in one of the charts and some potentially confusing language in that section, so we have updated the report at the link.
A few weeks ago I posted some details about a new project: Academic Placement Data and Analysis (APDA) here. Readers may be interested in some updates to that project. Note: We are sending out emails to program representatives over the next few hours with much of this information, including an extended collection goal date of July 22nd, 2015. The original blogpost is quoted below.
1) Total Placement Records
"There are approximately 2300 total entries, with several categories of data."
As of noon on July 13th, we had 3078 placement records for 2444 people--that is 573 more placed candidates than we had in the database on June 23rd. (In comparison, PhilJobs, the next most comprehensive database, had 2307 placement records for junior hires at that same time.)
Academic Placement Data and Analysis (APDA) is a new, collaborative research project on placement into academic jobs in philosophy. The current project members include myself, Patrice Cobb (psychology, UC Merced), Angelo Kyrilov (computer science, UC Merced), David Vinson (cognitive science, UC Merced), and Justin Vlasits (philosophy, UC Berkeley). This project grew out of earlier work on placement that was posted here and elsewhere over the past few years. Funding for this project by the American Philosophical Association has so far provided for the development of a website and database that can host the data for this project (thanks to the work of Angelo Kyrilov over the past two months). There are approximately 2300 total entries, with several categories of data. Most of these categories of data have been made publicly available, whereas any categories that have not been made public (e.g. name, gender, race/ethnicity) will be provided to researchers with IRB approval from their home institutions. You can see the website and database so far here:
This is a moderated thread. So there can be no question that Leiter at least had to deliberately press ‘publish’ on this comment. It is less clear, as his own comment further down indicates, that he had fully thought through the implications of doing so.
Brian Leiter said...
Yes, I suppose I should not have approved #2, but I've been approving almost everything. On the other hand, Johnson is a very public and rather noxious presence in philosophy cyberspace, so I'm not surprised there is interest.
I’m sure we’re all glad to know that Brian has some standards (he didn’t approve everything, after all). Still, what he did approve seems to merit some comment.
The speculation about the reasons for Leigh’s ability to secure a second job in professional philosophy is untoward, given that a) she is non-tenured, b) she has not in any way been credibly accused or even suspected of professional misconduct, and c) the comment’s characterization of her current position is inaccurate. Publishing this comment and thereby generating a public sense that Leigh does not deserve her current employment is at the very least an obvious instance of bullying on Brian’s part (and fits his by now well established pattern of directing this sort of attention toward junior, precariously employed members of the profession).
In what has to be one of the great whoppers of his entire blogging career, Brian goes on to justify leaving such a comment up by validating a more general interest in the question of why someone who is, in his view, “a very public and rather noxious presence in philosophy cyberspace” should have a job.
That the implicit standard in #2 risks implicating Brian himself is rather obvious. More interestingly, it seems to be perhaps as candid an admission as we are likely to get from Brian that he sees nothing wrong with harassing people he doesn’t like if he can possibly pull it off. And so we find him abusing the pretext of discussing ‘issues in the profession’ to pursue his own petty little vendetta.
When I first looked at placement statistics at the Philosophy Smoker I performed some analyses that I shouldn't have. First, I performed too many analyses. Second, I used the wrong kinds of analyses for some of the data. I did not imagine that these statistics would take off as they did and I was overworked*, which contributed to some mistakes on my part. One of these mistakes was running correlation analyses over gender:
I also found a negative correlation between PhD granting institution and number of publications (-.17: the lower your PhD granting institution is ranked the more peer-reviewed publications you have) and between gender and number of publications (-.21: if you are a man you likely have more publications than if you are a woman).
While at the time I suspected that this negative correlation had to do with the increased difficulty women have in publishing their work, others worried that women had an upper hand on the job market. I brushed off this latter worry because the proportion of women who found tenure-track jobs was about the same as the proportion of women who obtain PhDs in philosophy. In fact, in the 2011-2014 data set I found that there is not a significant difference between the proportion of women who graduate from each department and the proportion that find tenure-track jobs from each department (but there is a significant difference for postdoctoral/VAP/instructor positions, which are awarded to a smaller proportion of women relative to women graduates). But this worry regularly comes up in comments and I feel a responsibility for having possibly led people astray with analyses I shouldn't have used in the first place. For that reason, I want to provide some more appropriate analyses here, as clarification on the relationship between gender and publications in the placement data from 2011-2012 and 2012-2013. Those who want to check this work can use the spreadsheet at the bottom of the post here, which is the one I used. (I do not use the more recent data because I decided not to collect publication data in this last round, due to time constraints.)
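The proportion comparison described above can be sketched as a two-proportion z-test. This is a minimal illustration only: the counts below are invented for the example and are not the actual APDA/placement figures, and the normal approximation assumes reasonably large samples.

```python
import math

# Hypothetical counts (invented for illustration; not the actual data)
women_grads, total_grads = 85, 300  # women among PhD graduates
women_tt, total_tt = 30, 105        # women among tenure-track hires

p1 = women_grads / total_grads      # share of women among graduates
p2 = women_tt / total_tt            # share of women among TT hires

# Two-proportion z-test: is the share of women among TT hires
# significantly different from their share among graduates?
p_pool = (women_grads + women_tt) / (total_grads + total_tt)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_grads + 1 / total_tt))
z = (p1 - p2) / se

print(f"women among grads: {p1:.1%}, among TT hires: {p2:.1%}, z = {z:.2f}")
```

A test of this kind directly compares the two proportions at issue, rather than correlating a binary gender variable against counts, which is the pitfall discussed above.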
I note here the existence of the October Statement, which 111 philosophers have signed to demonstrate their resistance to all ranking systems. (I have not signed this statement. As I say here, I favor a user-created ranking system.)
In addition, Brian Leiter released a list of those who will serve on the board of the PGR for 2014. I checked this list against the board of the 2011 PGR and an earlier announcement and found 7 missing names. I do not presume to know why all 7 of these people appear to have stepped down from the board, but Brian notes at his blog that "Five Board members resigned over the past two weeks, some because of the controversy, and some because of unrelated concerns about the PGR methodology." Here are the missing names: Alex Byrne, Craig Callender, Crispin Wright, David Brink, Graham Priest, Lisa Shapiro, and Samantha Brennan.
--for all of the departments in the top 50 of the 2011 worldwide PGR, a mean of 17% of faculty signed the document**. (I am attaching the Excel spreadsheet I used here.)
--there is little to no correlation between PGR rating and the percentage of faculty who signed for departments in the top 50 of the 2011 worldwide ranking (-.11). Of these departments, those with greater than 17% faculty signatures include: ANU, CUNY, Duke, Georgetown, Harvard, Indiana, King's College London, MIT, Northwestern, Rutgers, Syracuse, UCL, UCSD, Cambridge, Edinburgh, Leeds, U Mass Amherst, Michigan, Oxford, UPenn, Sheffield, USC, St Andrews/Stirling, UVA, Wisconsin.
*I updated the list at approximately 2:45 p.m. PDT, October 10th, 2014.
**I did not match the names of the signers to the names of members of faculty, but compared the number of people who signed the document claiming a particular affiliation to the number of faculty listed in the current PGR faculty lists. It is possible that persons not included in the PGR list for a department signed the document with that department's affiliation, which would potentially lower this percentage as well as the percentage for that particular department.
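The footnoted method amounts to a simple per-department ratio. Here is a minimal sketch of that calculation; the department names and counts are made up for illustration, not taken from the actual spreadsheet.

```python
# Signers claiming each affiliation (hypothetical counts)
signers_by_dept = {"Dept A": 6, "Dept B": 2, "Dept C": 9}
# Faculty listed for each department in the PGR lists (hypothetical counts)
pgr_faculty_count = {"Dept A": 30, "Dept B": 25, "Dept C": 36}

# Percentage signed = signers claiming the affiliation / listed faculty
pct = {d: signers_by_dept[d] / pgr_faculty_count[d] for d in signers_by_dept}
mean_pct = sum(pct.values()) / len(pct)

for d, p in sorted(pct.items()):
    print(f"{d}: {p:.0%} of listed faculty signed")
print(f"mean across departments: {mean_pct:.0%}")
```

As the footnote notes, if a signer is not on the PGR faculty list for the affiliation they claim, the numerator is inflated relative to the denominator, so these percentages are upper bounds.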
Update (October 6th, 2014): Sheffield is the second department to announce that it is not cooperating with the PGR this year. The October Statement has 111 signatures, as of October 4th. (I have not worked out how much overlap exists between these statements, so it would not be correct to say that these statements together constitute 745 signatures--the number is smaller than this, but I don't yet know by how much.)
Update (October 10th, 2014): Signatures on the September Statement have closed, and an announcement has been added, as below.
"The September Statement, signed by twenty-one philosophers on September 24, 2014, and its addendum, signed by six hundred twenty-four philosophers in the weeks following, was a pledge not to provide volunteer work for the Philosophical Gourmet Report under the control of Brian Leiter.
On October 10, Leiter publicly committed to stepping down from the PGR following the publication of the 2014 edition, which will be produced with Leiter and Berit Brogaard as co-editors. After its publication, Leiter will resign as editor, and become a member of the PGR's advisory board. (See Daily Nous's account here.)
The September Statement did not specify the conditions under which the PGR is considered to be "under the control of Brian Leiter". It is up to each individual signatory to decide whether it is consistent with the pledge to assist with the 2014 PGR with Leiter as a co-editor, or with future editions with Leiter as a board member.
We are grateful for the support of the philosophers who signed the September Statement, as well as that of those who worked in other ways to make clear that this kind of bullying behaviour is unacceptable in professional philosophy."
I have read in several places this description of my placement post and my response to Brian Leiter's criticisms of that post (most recently, in comments posted yesterday at Philosophical Comment):
"July 1: I posted a sharp critique of some utterly misleading rankings produced by Carolyn Jennings, a tenure-stream faculty member at UC Merced. She quickly started revising it after I called her out."
For the record, this does not strike me as an accurate representation of those events.
First, while I did post a ranking, I made it clear that I did this as an exercise: (from the original post, bold original) "As discussed here in the comments, one of the advantages of comparative data on placement is that they help fill in gaps left over by the PGR...To illustrate this, I below rank the top 50 departments by tenure-track placement rate**, providing for comparison these departments' ranks from the 2011 "Ranking Of Top 50 Faculties In The English-Speaking World" by the Philosophical Gourmet Report. Please note that this placement ranking is provided only to demonstrate the potential utility of these data."
Second, while Brian Leiter did find the rankings misleading, many others did not, and even commended the clarity of language in my post. Take these quotes from David Marshall Miller, who has also worked on placement data: "Andrew Carson and, especially, Carolyn Dicey Jennings have developed analyses that now strike me as very robust." and "I will say, to again quote Leiter, that “all such exercises are of very limited value.” Nevertheless, they are of some use, and should be made available, so long as the methodology and limitations of the analysis are made clear. I think the PGR and the placement rankings by Jennings, Carson, and myself all meet this standard."
Third, Brian did post criticisms of the ranking, but I did not make any substantial revisions to the ranking based on his criticisms, since I did not find those criticisms to have merit. Brian's way of characterizing my response at the time was "Prof. Jennings digs in her heels."
Over the past three years I have collected and reported on placement data for positions in academic philosophy. (Interested readers can find past posts here at New APPS under the "placement data" category, two of which have been updated with the new data, several posts at ProPhilosophy, or the very first post on placement at the Philosophy Smoker.) This year, placement data will be gathered, organized, and reported on by the following committee of volunteers (listed in alphabetical order):
Over the next academic year, we aim to create a website, which will be parked at placementdata.com. This website will include a form for gathering data, a searchable database, and reports on placement data. Until that time, I am suspending updates to the Excel spreadsheet, which contains much of the data used in the past few years, plus the updates I have received over the past few months. (Many thanks to Justin Lillge for incorporating the bulk of these updates into the spreadsheet!) When the website is ready, departments will be able to update their placement data through an embeddable form. Stay tuned for these links in the coming months!
Marcus Arvan, of The Philosophers' Cocoon, had the idea of running a graduate student survey. This was something that the five of us had already talked about (and Justin Lillge had some preliminary work on this), so we have invited Marcus to join us in this project. He has posted some initial ideas here. Please do contribute to the discussion if you have insight!
The following ideas and arguments were central to my dissertation work, and are now published as an article in Philosophical Studies. I include them below in a much shortened format for those readers short on time, but high on interest (but hopefully not literally).
The ultimate claim of this work is that top-down attention is necessary for conscious perception. (I argue elsewhere that attention is not necessary for conscious experience, in general.) That is, we might ask the question: what is the contribution of attention to perceptual experience? Within cognitive science, attention is known to contribute to the organization of sensory features into perceptual objects, or object-based organization. I argue something else: that attention enables the perceptual system to achieve the most fundamental form of perceptual organization: subject-based organization. That is, I argue that subject-based organization is brought about and maintained through top-down attention. Thus, top-down attention is necessary for conscious perception in so far as it is necessary for bringing about and maintaining the subject-based organization of perceptual experience.
New APPS readers probably remember Helen De Cruz's excellent post on the polarized debate surrounding evolutionary science (which was picked up by NPR), as well as Roberta Millstein's follow-up post on the perhaps equally polarized debate concerning climate change. Both posts cite the work of Dan Kahan, who has a distinct take on these issues:
"I study risk perception and science communication. I’m going to tell you what I regard as the single most consequential insight you can learn from empirical research in these fields if your goal is to promote constructive public engagement with climate science in American society. It's this: What people “believe” about global warming doesn’t reflect what they know; it expresses who they are."
I just attended a talk by Michael Ranney, who opposes Kahan's position. In Ranney's view, communicating the mechanism of global climate change is enough to change the minds of people on both sides of the political spectrum. (Check out the videos!) Ranney shows, surprisingly, that just about no one understands the mechanism of climate change (Study 1). Further, he shows that revealing that mechanism changes participants' minds about climate change (Study 2).
An excellent article about Mary Beard, the famous classicist, is in this week's New Yorker. It is informative to have a prominent academic give an account of her life experiences like this. I want to encourage others to read the original article, but will pull out one salient and topical point. Beard is not only a very capable scholar, she is also "an avid user of social media," including regular postings at a blog. Despite the sexist reactions to her online presence, Beard has reacted with surprising generosity and patience: "In another highly publicized incident, Beard retweeted a message that she had received from a twenty-year-old university student: 'You filthy old slut. I bet your vagina is disgusting.'...The university student, after apologizing online, came to Cambridge and took Beard out to lunch; she has remained in touch with him, and is even writing letters of reference for him. 'He is going to find it hard to get a job, because as soon as you Google his name that is what comes up,' she said. 'And although he was a very silly, injudicious, and at that moment not very pleasant young guy, I don’t actually think one tweet should ruin your job prospects.'" Beard is an admirable and remarkable person, and learning about this new side of her makes her all the more so, in my mind. Check it out!
After reading some discussion at the Daily Nous about the Ferguson situation (also addressed in this post by Leigh Johnson), it struck me that it might be helpful to open a forum dedicated to discussing steps for improvement and change. Some ideas for improvement and change may reasonably focus on specific issues at the intersection of race, law, and legal force. One article linked in the comments goes in a more general direction, targeting economic inequality and economic reparation:
But this story is neither old nor unfamiliar. Rather than asking “why,” let’s focus on the banal laws and policies needed to redirect the distribution of wealth — stolen from black Americans, such that whites can no longer summon police, law or politicians on their behalf to erase or suppress black Americans, and other minorities. That will require more than revealing the name of the police officer who shot Michael Brown; it will require asking who, in the next round of city council elections, state elections and, of course, presidential elections, is ready to compromise their political career in order to work toward redirecting wealth, jobs, opportunities toward black and Latino populations that constitute the majority of the United States. Only when wealth changes hands will black Americans have a fighting chance to resist police power and violence.
This is a powerful suggestion that leads me to wonder about how economic change might address the problems of racial injustice we have seen in Ferguson and elsewhere. Although racial injustice and economic inequality are no doubt related, the former is a distinct problem from the latter, as was noted during the Occupy Movement. In January of this year, the Pew Research Center presented data showing that not only has economic inequality worsened since 1967 but that "the black-white income gap in the U.S. has persisted" since that time. Thus, although it is possible that "narrowing the gap" of economic inequality may partially and indirectly improve the problem of racial injustice, we ought not forget the specific issue of racial inequality in seeking economic change. To improve economic inequality, Standard & Poor's recommends investment in education. Here are some bullet points from the overview of a recent report:
A few days ago I posted a list of features that I take to be essential to an ideal report on placement, seeking comments and suggestions. One of the features I mention there is recency. All departments are likely to place more candidates given more time, but this slope is steeper for certain departments. Moreover, placement varies year to year. Thus, one's choice of time frame can substantially alter data on placement. This is the reason that Brian Leiter's numbers for NYU look better than mine (here and here)--I looked at the years 2012 to 2014 (3 years in the recent past), whereas he looked at the years 2005 to 2010 (6 years in the distant past).* Looking at NYU's placement page, one can easily see that the percentage of graduates placed in tenure-track jobs drops as one reaches the present. As I said, this is likely true for all departments. This means that if you look at data in the distant past, it might not matter what the length of the time frame is, but if you look at data ending in the recent past, the length of time frame makes an impact. That is, for NYU for the years starting in 2005, a 6-year time frame has 87% TT placement, a 5-year time frame has 90% TT placement, a 4-year time frame has 88% TT placement, and a 3-year time frame has 90% TT placement. But for the years ending in 2013, a 6-year time frame has 69% TT placement, a 5-year time frame has 65% TT placement, a 4-year time frame has 56% TT placement, and a 3-year time frame has 56% TT placement. Note that even the 6-year window ending in 2013 is associated with much lower placement than any of the windows starting in 2005. It seems obvious to me that we should favor more recent data, since they reveal which departments place students more quickly than others and since they are more relevant to students looking at graduate programs. Beyond that, it is not obvious just what length of time we should choose (3, 4, 5, or 6 years) or just which year we should use as the endpoint.
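The windowing point above can be made concrete with a small sketch. The per-year graduate and tenure-track counts below are invented for illustration (they are not NYU's actual numbers); the point is only that the same department's rate shifts with the window chosen.

```python
# Hypothetical per-year counts of PhD graduates and tenure-track placements
grads = {2005: 10, 2006: 9, 2007: 11, 2008: 10, 2009: 12,
         2010: 13, 2011: 12, 2012: 14, 2013: 15}
tt = {2005: 9, 2006: 8, 2007: 10, 2008: 9, 2009: 9,
      2010: 9, 2011: 8, 2012: 8, 2013: 8}

def tt_rate(start, end):
    """Tenure-track placement rate over the window [start, end], inclusive."""
    g = sum(grads[y] for y in range(start, end + 1))
    t = sum(tt[y] for y in range(start, end + 1))
    return t / g

# The same department looks very different depending on the window chosen
print(f"2005-2010: {tt_rate(2005, 2010):.0%}")
print(f"2010-2013: {tt_rate(2010, 2013):.0%}")
```

In these invented numbers, as in the NYU case described above, a window in the distant past produces a much higher rate than a recent one, because recent graduates have had less time to place.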
Yet, one's choice of time frame has a large impact on comparative placement data. Let's compare NYU's placement page to the placement pages of those departments that I found with these methods to have the highest tenure-track placement rates: Berkeley, Princeton, Pittsburgh HPS, and UCLA. If we look at NYU's worst time frame it comes out behind all the others (2010-2013: NYU 56%, UCLA 59%, Berkeley 63%, Princeton 65%, and Pittsburgh HPS 88%). If we look at NYU's best time frame it comes out ahead of all the others (2006-2009: NYU 94%, UCLA 67%, Berkeley 78%, Princeton 86%, and Pittsburgh HPS 93%). If, on the other hand, we look at multiple time frames then a new type of comparison is possible. We can determine, for example, which department has the least low value for tenure-track placement, given any time frame in the period from 2005 to 2013 (with a 3-year minimum time frame and a 6-year maximum time frame). In that case, Pittsburgh HPS comes out on top. Its lowest value is 85%. In comparison, the lowest value for Princeton is 65% (2010-2013), the lowest value for Berkeley is 59% (2009-2012), the lowest value for UCLA is 52% (2009-2012), and the lowest value for NYU is 56% (2010-2013). So if we look at the least low placement for all of these time frames, NYU comes out second to last. Finally, if we look at the full range, from 2005 to 2013, NYU comes out in the middle (Pittsburgh HPS 93%, Princeton 76%, NYU 74%, Berkeley 70%, UCLA 65%).
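The "least low value" comparison enumerates every admissible window and takes the minimum. A minimal sketch, again with invented per-year counts rather than any department's actual data:

```python
def min_window_rate(grads, tt, first=2005, last=2013, min_len=3, max_len=6):
    """Minimum tenure-track rate over all windows of min_len..max_len years."""
    rates = []
    for length in range(min_len, max_len + 1):
        for start in range(first, last - length + 2):
            end = start + length - 1
            g = sum(grads[y] for y in range(start, end + 1))
            t = sum(tt[y] for y in range(start, end + 1))
            rates.append(t / g)
    return min(rates)

# Hypothetical department: steady numbers except one weak year
grads = {y: 10 for y in range(2005, 2014)}
tt = dict.fromkeys(range(2005, 2014), 8)
tt[2012] = 4  # a single weak year drags down every window containing it

print(f"least low value: {min_window_rate(grads, tt):.0%}")
```

This measure rewards departments whose placement holds up in every window, rather than those that happen to look good in one favorable slice of years.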
Suffice it to say, these decisions make a substantial impact on one's results. For that reason, one should attend carefully to justifications on recency and time frame. I will remove the links to Brian Leiter's two posts on placement data here, since I am concerned that they will mislead students. If I had written those posts, I would certainly take them down knowing what I have made clear in this post (i.e. that the numbers for NYU are inflated for the very time frame that Brian Leiter chose to look at, relative to other departments). I have emailed Brian a link to this post.
As for my data, I use the years 2012 to 2014 because those are the most recent years and the years for which I have large data sets. (ProPhilosophy was kind enough to email departments directly in 2012 and 2013, which substantially increased the number of reported hires for those two years.) To go prior to 2012 I would have to either look at individual placement pages for all 118 departments, many of which do not have data of the sort I need, or use what I know to be a skewed sample from the Leiter Reports blog. I have made clear that any rankings I produce are a work in progress and should not be taken as authoritative. (That is one reason I post them to blogs, and not an independent website.) But as time goes on and this process is improved I will have to start making decisions about which time frames matter. I may well follow the lead of David Marshall Miller in reporting multiple time frames, since this might be helpful for students. Suggestions on this point are welcome. (The data that I used for this post are after the break. Feel free to suggest corrections where needed.)
*I hope that this does not need saying, but I am not picking on NYU here. One of my dissertation advisors was at NYU and one of my best friends is currently a student there. I am looking at NYU because it appears to be a focal point in Brian Leiter's criticism of my work. If one were to look at other measures beyond just tenure-track placement, NYU may well fare better than it does here.
Update (7/14/14): In order to satisfy the worry that NYU is particularly burdened by graduates of the JD/PhD program in this measure (2 graduates from NYU left academia for law in this time period, compared to 1 from Princeton, 3 from Berkeley, and perhaps 2 from UCLA), I compared NYU to these other programs while leaving out all those graduates who left academia. In that case, as I point out in the comment below, it is still clear that time frame matters and, in particular, that the time frame of 2005-2010 overly inflates NYU's record (2008-2013 puts NYU in the middle of the group, at 80%, whereas 2005-2010 puts it at 95%, square with Berkeley and Pittsburgh HPS, ahead of UCLA and Princeton. It might be worth noting that with the same methods Fordham University placed 69% of its graduates into tenure-track jobs between 2008 and 2013). See my comment below for details.
I applaud Brian Leiter's efforts to examine placement data in the past few days *Update 6/13/14: I have removed these links because I think that Brian Leiter's posts have the potential to mislead students. See my new post here*, as well as the efforts of David Marshall Miller and Andy Carson over the past few years. All of this is an effort to improve the profession and deserves recognition as such. I plan to continue reporting placement data next year and will likely post the report to an independent website. Below is a list of features that I take to be essential to an ideal report on placement, together with some ideas for improvement on my own work. Please comment below!
1) the original data: as far as I know this is missing from both Brian Leiter's and Andy Carson's efforts. This is important because it keeps the analyses honest by opening them up to public scrutiny. I have provided links to my data and will continue to do so. Recommendations on format are welcome here.
2) the methods: key information is missing in Brian Leiter's presentation, such as the criteria for determining which placements are to "research universities and selective liberal arts colleges," but as far as I can tell David Marshall Miller and Andy Carson are clear and up front about their methods. I have tried to be clear about my methods, but I have received some emails that reveal shortcomings here. Recommendations welcome.
3) completeness: Brian Leiter's efforts, as of this moment, include only a few departments (that were not selected at random). An ideal report should include all the philosophy departments that have made placements of the type in question, which is something David Marshall Miller, Andy Carson, and I have all tried to do. What is missing from all of our reports is complete placement data. PhilAppointments is not a complete source, for example, but neither are placement pages. Further, placement pages are often missing key data points on placement (such as names, which help to identify duplicate candidates). Next year I aim to cross-reference PhilAppointments with individual placement pages. Recommendations on how to efficiently improve completeness are welcome.
4) recency: since these efforts are in their infancy, it is currently unknown what time frames are relevant. Recent data are ideal, so long as recency is balanced with completeness. Brian Leiter chose a 5-year time frame between 2005 and 2010, which I see as a drawback of his report. Although David Marshall Miller, Andy Carson, and I have all used the most up-to-date data, David Marshall Miller also looked at different time frames. In the future, with more data, the use of time frames should help us to determine how recent our data needs to be. Recommendations on how to proceed with time frames are welcome here, since next year the data set I have will be in its fourth year (2011-2015).
5) neutrality: Those collecting, analyzing, and reporting the data should be as neutral as possible with respect to hypotheses and results. I have concerns about this with respect to Brian Leiter's report, especially given the absence of 1 and 2. The fact that David Marshall Miller, Andy Carson, and I have performed this work on our own is also potentially problematic, even with the inclusion of the original data and methods. Over the next year I plan to form a task force to work on placement data, composed of several people who have reached out to me over the past week or so (but others are welcome). Having more people on the project should help with neutrality. Recommendations on this point are welcome.
When the NewAPPS bloggers first invited me to submit a guest post on my attention research as a graduate student, I decided to submit a post on the term "genius" instead. In the case that it was the only post I would write, I wanted the post to have maximum utility. After some thought, I decided to target the obsession with genius, thinking it a pernicious problem easily deflated. I am not alone in finding it to be a problem. In fact, I may well have been alerted to the problem by Eric Schwitzgebel's blog post on "seeming smart." Commentators on the problem have looked at everything from its impact on women and racial/ethnic minorities to its impact on child prodigies, some of whom have written against it in favor of work-based praise (and for good reason). So, I was half-right: I was right to think it is a problem, but I was wrong, of course, in thinking the problem could be easily deflated. I am going to give it another stab, this time aiming closer to the heart of what I find to be the problem--the way that the terms "genius" and "smart" are used to silence minorities. I know about this first hand--just last week Brian Leiter implied that I was not smart enough to understand a particular distinction that he felt I had overlooked.
Update (6/9/2014): I urge skeptical readers to examine these much more respectful posts, where there is no mention of intelligence, for sake of comparison: on David Marshall Miller, on Andy Carson, and again on Andy Carson. These job market analyses were performed after my first analysis in April 2012 and have many similar elements. Furthermore, the content of Brian Leiter's criticisms of these analyses is much the same, but without the damaging remarks about mental capacity, intention, etc.
I recently signed a pledge with the aim of being more respectful toward my colleagues and of trying to uphold a culture of respectfulness in our profession. Following conversation over a previous post, I have come to the belief that I should provide a safe space for people to discuss departmental rankings in philosophy. When, as a graduate student in 2011, I made critical comments at the Leiter Blog on the inclusion of women among the rankers of the PGR, I felt shut down. My comments were edited without permission in a way that made me appear less reasonable, while the original post and other comments were edited to make my interlocutors appear more reasonable. I think that it is healthy to evaluate ranking methodologies critically and openly and I think that there must be a public space for this. Since I have already earned the ire of those who appear to be opposed to a public discussion, I am a good candidate for putting forward a post that will allow for discussion. I will thus allow anonymous postings and will aim to respect that anonymity both privately and publicly (except when required by law or conscience to do otherwise).
I will start with some of my own thoughts: I think that reputational information is helpful and important, but that it would be better to combine this information with data on placement, publications, and other such objective measures. (With this in mind, I sent my original findings on the job market to Brian Leiter and Kieran Healy in April 2012 without response.) An ideal ranking, in my mind, would be customizable. The viewer would have to choose metrics before a ranking would be created. I am open on what the relevant metrics might be. This is where you come in. Should we have rankings at all? What metrics do prospective graduate students care about (a variety of voices is of value here)? How should this work be completed, and by whom? Comments that appear to violate the norm of respectfulness will not be admitted as is, but anonymity is both welcomed and encouraged. Update: commentators should feel free to leave off their email addresses when posting comments.
Update: Creating (or updating) a ranking of this kind, with multiple objective values, is beyond my current capabilities. I fully and wholeheartedly welcome someone with more time and competence than me to take on this task. Better yet, I think, would be a task force involving those familiar with the PGR, since they already have lots of expertise. I am welcoming discussion here not because I plan to create a new ranking, but because I think it is important to have a discussion about all such rankings in the open. I am limiting my personal contribution to the placement data for now.
Most readers have probably been following the controversy involving Carolyn Dicey Jennings and Brian Leiter concerning the job placement data post where Carolyn Dicey Jennings compares her analysis of the data she has assembled with the PGR Rank. A number of people have reacted to what many perceived as Brian Leiter’s excessively personalized attack on Carolyn Dicey Jennings’s analysis, such as in Daily Nous, and this post by UBC’s Carrie Ichikawa Jenkins on guidelines for academic professional conduct (the latter is not an explicit defense of Carolyn Dicey Jennings, but the message is clear enough, I think). UPDATE: supportive post also at the Feminist Philosophers.
It goes without saying (but I’ll say it anyway) that we, the NewAPPS bloggers, fully support Carolyn’s right to post her important analyses of job placement data, and deplore the tone and words adopted by Brian Leiter to voice his objections to her methodology. (This is not the first time that episodes of this kind involving Brian Leiter and junior, untenured colleagues have occurred; I for one deem such episodes to be inadmissible.)
As promised, here is the link to the data set I have been using in the placement posts. Most of you will probably be most interested in the "Department Trends" tab. If you find that data should be added, please email me with the following information, preferably in order and separated by commas OR add the relevant information to PhilAppointments, which I will use to update this data set from time to time:
1) Year of placement
2) Name of placed candidate
3) PhD-granting institution of placed candidate (and department, where relevant)
4) Type of placement and name of hiring institution
As discussed here in the comments, one of the advantages of comparative data on placement is that it helps fill in gaps left by the PGR. That is, the PGR aims to measure the collective reputation of a department's faculty, but faculty reputation does not necessarily predict the likelihood of placement by that department, perhaps because it does not necessarily predict the overall quality of education in that department nor the quality of preparation for the job market by that department. Comparative data on placement has the potential to provide insight on these factors. To illustrate this, I below bracket the top 50 departments by tenure-track placement rate** (Note: I removed three universities from the top 50 that reported fewer than 2 graduates per year, since small numbers may yield misleading placement rates), providing for comparison these departments' ranks from the 2011 "Ranking Of Top 50 Faculties In The English-Speaking World" by the Philosophical Gourmet Report. Please note that placement brackets are provided only to demonstrate the potential utility of these data. Since the data set is not yet complete, I do not recommend viewing these as authoritative brackets. Update: Please see this post for an idea of how I envision this project developing. I have released the spreadsheet containing the raw data and methods I have been using to compute these results, and welcome any/all corrections. As a reminder, I do not have data on the yearly graduates from many departments, listed below. (Those departments are welcome to send me their data, if available.)
Update 7/1/2014: It has come to my attention that Brian Leiter has aired some criticisms of this post on his blog and has publicly suggested that it (this post, not his blog) be taken down. I respond to these criticisms below.
I changed some wording above from "ranking" to "brackets" and added a link to the spreadsheet. I have also changed the numbers in the below ranking to a grouping by bracket (where departments are listed in alphabetical order within brackets). This was a suggestion of Ned Block's. We have been corresponding on statistical significance and I decided that his suggestion would help avoid making small differences between placement rates appear more important than they are. I have left in the PGR rank for comparison, although the difference in rank has been omitted for the reasons provided above.
I have also added updates to my responses to Brian, based on some new statistical tests.
I am adding a link to a chart that will help readers to visualize the total number of reported tenure-track placements and estimated graduates from each department, rather than just percentage of tenure-track placements.
Update 7/6/2014: I ran a completeness test for 5 departments selected at random using a random number generator. The tenure-track numbers for these 5 departments appear to be accurate. More below.
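For readers curious how a spot check like this can be drawn, here is a minimal sketch. The department names and the fixed seed are placeholders of my own, not the actual data set:

```python
import random

# Hypothetical stand-in for the 94 departments in the data set.
departments = [f"Department {i}" for i in range(1, 95)]

random.seed(42)  # fixed seed so the draw can be reproduced
# Draw 5 distinct departments at random for the completeness check.
sample = random.sample(departments, 5)
print(sample)
```

Each sampled department's reported tenure-track numbers would then be checked by hand against an outside source.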
As discussed in the comments at a previous post, I have been looking at department-specific placement rates. "Placement rate" is the number of reported placements*** divided by the number of graduates. I looked at reported placements between 2011 and 2014 and graduates between 2009 and 2013. I do not have data on many departments that reported placements in this time frame**, but of those 94 departments for which I do have data, 32 appear to have placement rates higher than 50% for tenure-track jobs and 51 appear to have placement rates higher than 50% for a combination of tenure-track, postdoctoral, VAP, and instructor jobs (both sets are listed below).****
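The placement rate defined above is simply reported placements divided by graduates. A short sketch, with made-up numbers for illustration:

```python
def placement_rate(placements: int, graduates: int) -> float:
    """Placement rate: reported placements divided by graduates."""
    if graduates <= 0:
        raise ValueError("graduates must be positive")
    return placements / graduates

# Hypothetical department: 12 reported tenure-track placements
# (2011-2014) against 20 graduates (2009-2013).
rate = placement_rate(12, 20)
print(f"{rate:.0%}")  # a rate above the 50% threshold used below
```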
Update: I have removed the following departments from both lists because I do not have updated graduation data from them: University of Chicago, University of Pennsylvania, and Yale University. These departments may well have placement rates as high as these others, but the graduation data I have from them comes from the 2012 APA Graduate Guide, since they did not complete the 2013 APA Graduate Guide. If the department chairs respond to my email of June 10th with updated information, I will update their status.
In two previous posts I have provided data on gender and AOS for placements reported at ProPhilosophy (2011-2012 and 2012-2013) and PhilAppointments (2013-2014). As of today, I have data on 729 placed candidates. In this post I aim to use this and other data to estimate the total number of candidates seeking employment and to calculate an approximate overall placement rate.
In keeping with the earlier post on gender, this is an overview post on the distribution of (first-listed) areas of specialization among placed candidates. I now have data on 722 candidates who have been placed in tenure-track, postdoctoral, VAP, or instructor positions between late 2011 and mid 2014 (ending today), drawn from ProPhilosophy (2011-2012 and 2012-2013) and PhilAppointments (2013-2014). I aim to make the spreadsheet with this data available by around July 1st (I will continue to add new data until that date).
I have data on 715 candidates who have been placed in tenure-track, postdoctoral, VAP, or instructor positions between late 2011 and mid 2014 (ending today), drawn from ProPhilosophy (2011-2012 and 2012-2013) and PhilAppointments (2013-2014). I aim to make the spreadsheet with this data available by around July 1st (I will add any new data available by that date). Until then, I will report some initial findings, starting with gender.
Yesterday I posted data for tenure-track placement from this past year. The data below include postdoctoral, VAP, and instructor hires sourced from PhilAppointments. Please check the data and make corrections in comments or by email (cjennings3 at ucmerced dot edu).
Last year I posted some statistics on tenure-track, postdoctoral, and VAP placements between 2011 and 2013. I aim to continue these analyses for a third year. Along the way, I will post progress on data collection, in case corrections are in order. The data below include tenure-track or equivalent hires sourced from PhilAppointments (I will provide a new post with postdoctoral and VAP data soon). Please check the data and make corrections in comments or by email (cjennings3 at ucmerced dot edu).
Readers of the Brains blog might know about a symposium there concerning a paper by Philipp Koralus. In his commentary on the paper, Felipe de Brigard mentions the problem of captured attention:
"I have a hard time understanding how ETA may account for involuntary attention. Suppose you are focused on your task—reading a book at the library, say—and you hear a ‘bang’ behind you. A natural way of describing the event is to say that one’s attention has been involuntarily captured by the sound. Now, how does ETA explain this phenomenon?"
"So, you might have been asking, as part of your task of reading the blog, 'What does the blog say?' Now, you are getting the incongruent and irrelevant answer 'There’s a loud noise behind you.' There are now two possibilities, similar to what happens in the equivalent case in a conversation. One possibility is that you accommodate the answer, adopting a new question (and thereby a new task) to which 'There’s a loud noise behind you' would be a congruent answer, maybe, 'what sort of thing going on behind me?...You could also refuse to be distracted and then exercise some top-down control on your focus assignment to bring it back to something that’s relevant to your task.'
When I coined "the problem of captured attention" in my 2012 Synthese paper, "The Subject of Attention" (not cited by Koralus/de Brigard), I took a similar line, but focused on the activity of the subject, rather than on questions and answers:
In my role as instructional faculty, I aim to grade everything anonymously, which is a provision I enjoyed as an undergraduate. My current method is to ask students to write their names on the back of their papers and exams, which also helps me to return them. One of my students remarked that I must do this because I am particularly biased. She may be right. But there is reason to believe that we are all biased against minority groups in our grading practices. Take this publication on the perception of grammatical and spelling errors by partners at 22 law firms: "The exact same memo averaged a 3.2/5.0 rating under our hypothetical 'African American' Thomas Meyer and a 4.1/5.0 rating under hypothetical 'Caucasian' Thomas Meyer. The qualitative comments on memos, consistently, were also more positive for the 'Caucasian' Thomas Meyer than our 'African American' Thomas Meyer." It seems obvious to me that these effects could have an impact on the grading of philosophy papers and exams. (It may be worth noting that the gender/race/ethnicity of the partner did not affect these findings, although "female partners generally found more errors and wrote longer narratives"). And take this publication on faculty assessment of a student applicant, mentioned a couple of years ago here at NewAPPS: "Our results revealed that both male and female faculty judged a female student to be less competent and less worthy of being hired than an identical male student, and also offered her a smaller starting salary and less career mentoring." The difference in mean rated competence, hireability, and mentor-worthiness was on the order of 10%. Again, it seems obvious to me that these effects could have an impact on the grading of philosophy papers and exams, which could be a grade-letter difference (i.e. the difference between a B and a C).
Since perceived differences in grading standards could have an impact on whether students choose to stay in philosophy, it seems to me that anonymous grading would both be more just and would encourage a more diverse range of participants in philosophy (see other suggestions on this over at Daily Nous). What does everyone else think? Do you grade anonymously? If not, why not?
Update: Other posts on this topic are here and here.
This blog officially has 16 authors, 6 of whom are women. A quick glance at the category cloud will show you that one of the most prolific authors is a woman. So then why does a commentator at Philosophy MetaBlog characterize the blog as run by men? This is the comment linked to by Brian Leiter:
“Anonymous May 4, 2014 at 8:48 AM I can't speak for others' use of the term, but in my case the behavior over the last few years of Protevi, Schliesser, Matthen, Lance, Kazarian, et al. is what makes the term 'Nudechapps' so fitting. The boys have made a habit of prancing around in condescending moral superiority over so many things that one is reminded of a person engaging in a shameless display of self-aggrandizement. What's worse, the Nudechapps consistently treat dissenters with derision and disgust. So the echochamber these nincompoops have created for themselves has allowed them to spread a view within their little clique that is grotesque in many of its details. And the handful of hangers-on that support their shenanigans are like nothing so much as the stupefied populace trying so hard to convince themselves that the emperor is wearing the glorious raiment of moral superiority. But of course the emperor is wearing no clothes, and he is shameless about how good he looks. Thus, Nudechapps.”
This description, like others in the comments at Philosophers’ Anonymous, seems to me an ignoble attempt to take down individuals without recourse to evidence or argument. For the most part, I do not find such expressions worthy of consideration. But this one is interesting, I think, because of what is left out. Is it the case that the commentator thinks that none of the women at NewAPPS fit the description he or she finds so apt for its men? I doubt it. A more reasonable reading of this comment is that the author has simply forgotten the women of NewAPPS, or finds them relatively unimportant. Such forgetting, together with so much vitriol about feminism in the comment stream at that blog, is striking, if not all that surprising. As one recent study found, "hostile sexists and feminists were more and less likely, respectively, to show implicit prejudice against female authorities." In this case, gender bias serves to spare our blushes, but not without reminding us that we have to work harder to be heard, especially by those who start from further away.
Update: I added text above to distance the gender bias claim for the comment in question from the claim about vitriol toward feminism found in the overall comment stream.
I noted in another post the apparent difference in impact of the Philosophical Gourmet ranking of one's PhD granting institution on tenure-track placement according to gender, following up on posts elsewhere (here, here, and here). In this post I want to follow up on a speculation that I made in comments that the apparent difference in impact is due not to a difference in the way prestige impacts women and men on the job market, but due to a difference in the way that the Philosophical Gourmet tracks prestige for areas that have a higher proportion of men versus areas that have a higher proportion of women.
You may already be familiar with work by Kieran Healy that shows that the Philosophical Gourmet ranking especially favors particular specialties: "It's clear that not all specialty areas count equally for overall reputation... Amongst the top twenty departments in 2006, MIT and the ANU had the narrowest range, relatively speaking, but their strength was concentrated in areas that are very strongly associated with overall reputation---in particular, Metaphysics, Epistemology, Language, and Philosophy of Mind."