Over the past three years I have collected and reported on placement data for positions in academic philosophy. (Interested readers can find past posts here at New APPS under the "placement data" category, two of which have been updated with the new data, several posts at ProPhilosophy, or the very first post on placement at the Philosophy Smoker.) This year, placement data will be gathered, organized, and reported on by the following committee of volunteers (listed in alphabetical order):
Over the next academic year, we aim to create a website, which will be parked at placementdata.com. This website will include a form for gathering data, a searchable database, and reports on placement data. Until that time, I am suspending updates to the Excel spreadsheet, which contains much of the data used in the past few years, plus the updates I have received over the past few months. (Many thanks to Justin Lillge for incorporating the bulk of these updates into the spreadsheet!) When the website is ready, departments will be able to update their placement data through an embeddable form. Stay tuned for these links in the coming months!
New APPS readers probably remember Helen De Cruz's excellent post on the polarized debate surrounding evolutionary science (which was picked up by NPR), as well as Roberta Millstein's follow-up post on the perhaps equally polarized debate concerning climate change. Both posts cite the work of Dan Kahan, who has a distinct take on these issues:
"I study risk perception and science communication. I’m going to tell you what I regard as the single most consequential insight you can learn from empirical research in these fields if your goal is to promote constructive public engagement with climate science in American society. It's this: What people “believe” about global warming doesn’t reflect what they know; it expresses who they are."
I just attended a talk by Michael Ranney, who opposes Kahan's position. In Ranney's view, communicating the mechanism of global climate change is enough to change the minds of people on both sides of the political spectrum. (Check out the videos!) Ranney shows, surprisingly, that just about no one understands the mechanism of climate change (Study 1). Further, he shows that revealing that mechanism changes participants' minds about climate change (Study 2).
An excellent article about Mary Beard, the famous classicist, is in this week's New Yorker. It is informative to have a prominent academic give an account of her life experiences like this. I want to encourage others to read the original article, but will pull out one salient and topical point. Beard is not only a very capable scholar, she is also "an avid user of social media," including regular postings at a blog. Despite the sexist reactions to her online presence, Beard has reacted with surprising generosity and patience: "In another highly publicized incident, Beard retweeted a message that she had received from a twenty-year-old university student: 'You filthy old slut. I bet your vagina is disgusting.'...The university student, after apologizing online, came to Cambridge and took Beard out to lunch; she has remained in touch with him, and is even writing letters of reference for him. 'He is going to find it hard to get a job, because as soon as you Google his name that is what comes up,' she said. 'And although he was a very silly, injudicious, and at that moment not very pleasant young guy, I don’t actually think one tweet should ruin your job prospects.'" Beard is an admirable and remarkable person, and learning about this new side of her makes her all the more so, in my mind. Check it out!
After reading some discussion at the Daily Nous about the Ferguson situation (also addressed in this post by Leigh Johnson), it struck me that it might be helpful to open a forum dedicated to discussing steps for improvement and change. Some ideas for improvement and change may reasonably focus on specific issues at the intersection of race, law, and legal force. One article linked in the comments goes in a more general direction, targeting economic inequality and economic reparation:
But this story is neither old nor unfamiliar. Rather than asking “why,” let’s focus on the banal laws and policies needed to redirect the distribution of wealth — stolen from black Americans, such that whites can no longer summon police, law or politicians on their behalf to erase or suppress black Americans, and other minorities. That will require more than revealing the name of the police officer who shot Michael Brown; it will require asking who, in the next round of city council elections, state elections and, of course, presidential elections, is ready to compromise their political career in order to work toward redirecting wealth, jobs, opportunities toward black and Latino populations that constitute the majority of the United States. Only when wealth changes hands will black Americans have a fighting chance to resist police power and violence.
This is a powerful suggestion that leads me to wonder about how economic change might address the problems of racial injustice we have seen in Ferguson and elsewhere. Although racial injustice and economic inequality are no doubt related, the former is a distinct problem from the latter, as was noted during the Occupy Movement. In January of this year, the Pew Research Center presented data showing that not only has economic inequality worsened since 1967 but that "the black-white income gap in the U.S. has persisted" since that time. Thus, although it is possible that "narrowing the gap" of economic inequality may partially and indirectly alleviate the problem of racial injustice, we ought not forget the specific issue of racial inequality in seeking economic change. To reduce economic inequality, Standard & Poor's recommends investment in education. Here are some bullet points from the overview of a recent report:
A few days ago I posted a list of features that I take to be essential to an ideal report on placement, seeking comments and suggestions. One of the features I mention there is recency. All departments are likely to place more candidates given more time, but this slope is steeper for certain departments. Moreover, placement varies year to year. Thus, one's choice of time frame can substantially alter data on placement. This is the reason that Brian Leiter's numbers for NYU look better than mine (here and here)--I looked at the years 2012 to 2014 (3 years in the recent past), whereas he looked at the years 2005 to 2010 (6 years in the distant past).* Looking at NYU's placement page, one can easily see that the percentage of graduates placed in tenure-track jobs drops as one reaches the present. As I said, this is likely true for all departments. This means that if you look at data in the distant past, it might not matter what the length of the time frame is, but if you look at data ending in the recent past, the length of time frame makes an impact. That is, for NYU for the years starting in 2005, a 6-year time frame has 87% TT placement, a 5-year time frame has 90% TT placement, a 4-year time frame has 88% TT placement, and a 3-year time frame has 90% TT placement. But for the years ending in 2013, a 6-year time frame has 69% TT placement, a 5-year time frame has 65% TT placement, a 4-year time frame has 56% TT placement, and a 3-year time frame has 56% TT placement. Note that even the 6-year window ending in 2013 is associated with much lower placement than any of the windows starting in 2005. It seems obvious to me that we should favor more recent data, since they reveal which departments place students more quickly than others and since they are more relevant to students looking at graduate programs. Beyond that, it is not obvious just what length of time we should choose (3, 4, 5, or 6 years) or just which year we should use as the endpoint.
Yet, one's choice of time frame has a large impact on comparative placement data. Let's compare NYU's placement page to the placement pages of those departments that I found with these methods to have the highest tenure-track placement rates: Berkeley, Princeton, Pittsburgh HPS, and UCLA. If we look at NYU's worst time frame it comes out behind all the others (2010-2013: NYU 56%, UCLA 59%, Berkeley 63%, Princeton 65%, and Pittsburgh HPS 88%). If we look at NYU's best time frame it comes out ahead of all the others (2006-2009: NYU 94%, UCLA 67%, Berkeley 78%, Princeton 86%, and Pittsburgh HPS 93%). If, on the other hand, we look at multiple time frames then a new type of comparison is possible. We can determine, for example, which department has the least low value for tenure-track placement, given any time frame in the period from 2005 to 2013 (with a 3-year minimum time frame and a 6-year maximum time frame). In that case, Pittsburgh HPS comes out on top. Its lowest value is 85%. In comparison, the lowest value for Princeton is 65% (2010-2013), the lowest value for Berkeley is 59% (2009-2012), the lowest value for UCLA is 52% (2009-2012), and the lowest value for NYU is 56% (2010-2013). So if we look at the least low placement for all of these time frames, NYU comes out second to last. Finally, if we look at the full range, from 2005 to 2013, NYU comes out in the middle (Pittsburgh HPS 93%, Princeton 76%, NYU 74%, Berkeley 70%, UCLA 65%).
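To make the window arithmetic in the last two paragraphs concrete, here is a minimal sketch of how a tenure-track placement rate shifts with the chosen time frame. The per-year counts below are hypothetical placeholders of my own, not the actual NYU figures:

```python
# Sketch: how the choice of time frame changes a placement rate.
# The per-year numbers below are hypothetical placeholders, not real data.

placed = {2005: 5, 2006: 6, 2007: 4, 2008: 3, 2009: 3,
          2010: 2, 2011: 2, 2012: 1, 2013: 1}   # TT placements per year
grads = {y: 6 for y in range(2005, 2014)}        # graduates per year

def placement_rate(start, end):
    """Percent TT placement over the inclusive window [start, end]."""
    p = sum(placed[y] for y in range(start, end + 1))
    g = sum(grads[y] for y in range(start, end + 1))
    return round(100 * p / g)

# Windows of 3 to 6 years anchored at either end of the period:
for length in range(3, 7):
    early = placement_rate(2005, 2005 + length - 1)
    late = placement_rate(2013 - length + 1, 2013)
    print(f"{length}-year window: starting 2005 -> {early}%, ending 2013 -> {late}%")
```

With placements front-loaded in the early years, as in this toy data, every window anchored at 2005 beats every window anchored at 2013, which is the pattern described above.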
Suffice it to say, these decisions make a substantial impact on one's results. For that reason, one should attend carefully to how choices of recency and time frame are justified. I will remove the links to Brian Leiter's two posts on placement data here, since I am concerned that they will mislead students. If I had written those posts, I would certainly take them down knowing what I have made clear in this post (i.e. that the numbers for NYU are inflated for the very time frame that Brian Leiter chose to look at, relative to other departments). I have emailed Brian a link to this post.
As for my data, I use the years 2012 to 2014 because those are the most recent years and the years for which I have large data sets. (ProPhilosophy was kind enough to email departments directly in 2012 and 2013, which substantially increased the number of reported hires for those two years.) To go prior to 2012 I would have to either look at individual placement pages for all 118 departments, many of which do not have data of the sort I need, or use what I know to be a skewed sample from the Leiter Reports blog. I have made clear that any rankings I produce are a work in progress and should not be taken as authoritative. (That is one reason I post them to blogs, and not an independent website.) But as time goes on and this process is improved I will have to start making decisions about which time frames matter. I may well follow the lead of David Marshall Miller in reporting multiple time frames, since this might be helpful for students. Suggestions on this point are welcome. (The data that I used for this post are after the break. Feel free to suggest corrections where needed.)
*I hope that this does not need saying, but I am not picking on NYU here. One of my dissertation advisors was at NYU and one of my best friends is currently a student there. I am looking at NYU because it appears to be a focal point in Brian Leiter's criticism of my work. If one were to look at other measures beyond just tenure-track placement, NYU may well fare better than it does here.
Update (7/14/14): In order to satisfy the worry that NYU is particularly burdened by graduates of the JD/PhD program in this measure (2 graduates from NYU left academia for law in this time period, compared to 1 from Princeton, 3 from Berkeley, and perhaps 2 from UCLA), I compared NYU to these other programs while leaving out all those graduates who left academia. In that case, as I point out in the comment below, it is still clear that time frame matters and, in particular, that the time frame of 2005-2010 overly inflates NYU's record (2008-2013 puts NYU in the middle of the group, at 80%, whereas 2005-2010 puts it at 95%, square with Berkeley and Pittsburgh HPS, ahead of UCLA and Princeton. It might be worth noting that with the same methods Fordham University placed 69% of its graduates into tenure-track jobs between 2008 and 2013). See my comment below for details.
I applaud Brian Leiter's efforts to examine placement data in the past few days *Update 6/13/14: I have removed these links because I think that Brian Leiter's posts have the potential to mislead students. See my new post here*, as well as the efforts of David Marshall Miller and Andy Carson over the past few years. All of this is effort to improve the profession and deserves recognition as such. I plan to continue reporting placement data next year and will likely post the report to an independent website. Below is a list of features that I take to be essential to an ideal report on placement, together with some ideas for improvement on my own work. Please comment below!
1) the original data: as far as I know this is missing from both Brian Leiter's and Andy Carson's efforts. This is important because it keeps the analyses honest by opening them up to public scrutiny. I have provided links to my data and will continue to do so. Recommendations on format are welcome here.
2) the methods: key information is missing in Brian Leiter's presentation, such as the criteria for determining which placements are to "research universities and selective liberal arts colleges," but as far as I can tell David Marshall Miller and Andy Carson are clear and up front about their methods. I have tried to be clear about my methods, but I have received some emails that reveal shortcomings here. Recommendations welcome.
3) completeness: Brian Leiter's efforts, as of this moment, include only a few departments (that were not selected at random). An ideal report should include all the philosophy departments that have made placements of the type in question, which is something David Marshall Miller, Andy Carson, and I have all tried to do. What is missing from all of our reports is complete placement data. PhilAppointments is not a complete source, for example, but neither are placement pages. Further, placement pages are often missing key data points on placement (such as names, which help to identify duplicate candidates). Next year I aim to cross-reference PhilAppointments with individual placement pages. Recommendations on how to efficiently improve completeness are welcome.
4) recency: since these efforts are in their infancy, it is currently unknown what time frames are relevant. Recent data are ideal, so long as recency is balanced with completeness. Brian Leiter chose a six-year time frame, 2005 to 2010, which I see as a drawback of his report. Although David Marshall Miller, Andy Carson, and I have all used the most up-to-date data, David Marshall Miller also looked at different time frames. In the future, with more data, the use of time frames should help us to determine how recent our data needs to be. Recommendations on how to proceed with time frames are welcome here, since next year the data set I have will be in its fourth year (2011-2015).
5) neutrality: Those collecting, analyzing, and reporting the data should be as neutral as possible with respect to hypotheses and results. I have concerns about this with respect to Brian Leiter's report, especially given the absence of 1 and 2. The fact that David Marshall Miller, Andy Carson, and I have performed this work on our own is also potentially problematic, even with the inclusion of the original data and methods. Over the next year I plan to form a task force to work on placement data, composed of several people who have reached out to me over the past week or so (but others are welcome). Having more people on the project should help with neutrality. Recommendations on this point are welcome.
When the NewAPPS bloggers first invited me to submit a guest post on my attention research as a graduate student, I decided to submit a post on the term "genius" instead. In case it was the only post I would write, I wanted the post to have maximum utility. After some thought, I decided to target the obsession with genius, thinking it a pernicious problem easily deflated. I am not alone in finding it to be a problem. In fact, I may well have been alerted to the problem by Eric Schwitzgebel's blog post on "seeming smart." Commentators on the problem have looked at everything from its impact on women and racial/ethnic minorities to its impact on child prodigies, some of whom have written against it in favor of work-based praise (and for good reason). So, I was half-right: I was right to think it is a problem, but I was wrong, of course, in thinking the problem could be easily deflated. I am going to give it another stab, this time aiming closer to the heart of what I find to be the problem--the way that the terms "genius" and "smart" are used to silence minorities. I know about this firsthand--just last week Brian Leiter implied that I was not smart enough to understand a particular distinction that he felt I had overlooked.
Update (6/9/2014): I urge skeptical readers to examine these much more respectful posts, where there is no mention of intelligence, for sake of comparison: on David Marshall Miller, on Andy Carson, and again on Andy Carson. These job market analyses were performed after my first analysis in April 2012 and have many similar elements. Furthermore, the content of Brian Leiter's criticisms of these analyses is much the same, but without the damaging remarks about mental capacity, intention, etc.
I recently signed a pledge with the aim of being more respectful toward my colleagues and of trying to uphold a culture of respectfulness in our profession. Following conversation over a previous post, I have come to the belief that I should provide a safe space for people to discuss departmental rankings in philosophy. When I made critical comments at the Leiter Blog on the inclusion of women among the rankers of the PGR in 2011 as a graduate student, I felt shut down. My comments were edited without permission in a way that made me appear less reasonable, while the original post and other comments were edited to make my interlocutors appear more reasonable.* Some professional consequences appeared to follow, and even one of my friends suggested that I was simply seeking attention. On the contrary, I think that it is healthy to evaluate ranking methodologies critically and openly and I think that there must be a public space for this. Since I have already earned the ire of those who appear to be opposed to a public discussion, I am a good candidate for putting forward a post that will allow for discussion. I will thus allow anonymous postings and will aim to respect that anonymity both privately and publicly (except when required by law or conscience to do otherwise).
I will start with some of my own thoughts: I think that reputational information is helpful and important, but that it would be better to combine this information with data on placement, publications, and other such objective measures. (With this in mind, I sent my original findings on the job market to Brian Leiter and Kieran Healy in April 2012 without response.) An ideal ranking, in my mind, would be customizable. The viewer would have to choose metrics before a ranking would be created. I am open on what the relevant metrics might be. This is where you come in. Should we have rankings at all? What metrics do prospective graduate students care about (a variety of voices is of value here)? How should this work be completed, and by whom? Comments that appear to violate the norm of respectfulness will not be admitted as is, but anonymity is both welcomed and encouraged. Update: commentators should feel free to leave off their email addresses when posting comments.
Update: Creating (or updating) a ranking of this kind, with multiple objective values, is beyond my current capabilities. I fully and wholeheartedly welcome someone with more time and competence than me to take on this task. Better yet, I think, would be a task force involving those familiar with the PGR, since they already have lots of expertise. I am welcoming discussion here not because I plan to create a new ranking, but because I think it is important to have a discussion about all such rankings in the open. I am limiting my personal contribution to the placement data for now.
*Update (7/8/14): I should note that these events occurred under the editorial hand of Rebecca Kukla. I believe that Rebecca is one of very few philosophers who were junior faculty members at the age of 24, and so I wouldn't be surprised if it is she who uses the cloak of anonymity to criticize me here. Since this anonymous person also sees fit to question my intelligence in a public forum, I refer readers to this post, where I discuss why I think these sorts of charges are problematic, whether made by a man or by a woman, whether under the cloak of anonymity or under the gaze of public scrutiny.
Most readers have probably been following the controversy involving Carolyn Dicey Jennings and Brian Leiter concerning the job placement data post where Carolyn Dicey Jennings compares her analysis of the data she has assembled with the PGR Rank. A number of people have reacted to what many perceived as Brian Leiter’s excessively personalized attack on Carolyn Dicey Jennings’s analysis, such as in Daily Nous, and this post by UBC’s Carrie Ichikawa Jenkins on guidelines for academic professional conduct (the latter is not an explicit defense of Carolyn Dicey Jennings, but the message is clear enough, I think). UPDATE: supportive post also at the Feminist Philosophers.
It goes without saying (but I’ll say it anyway) that we, NewAPPS bloggers, fully support Carolyn’s right to post her important analyses of job placement data, and deplore the tone and words adopted by Brian Leiter to voice his objections to her methodology. (This is not the first time that episodes of this kind involving Brian Leiter and junior, untenured colleagues have occurred; I for one deem such episodes to be inadmissible.)
As promised, here is the link to the data set I have been using in the placement posts. Most of you will probably be most interested in the "Department Trends" tab. If you find that data should be added, please email me with the following information, preferably in order and separated by commas OR add the relevant information to PhilAppointments, which I will use to update this data set from time to time:
1) Year of placement
2) Name of placed candidate
3) PhD-granting institution of placed candidate (and department, where relevant)
4) Type of placement and name of hiring institution
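As a sketch of how a comma-separated submission of the four fields requested above might be parsed into a record (the field names and the example hire here are my own illustrative inventions, not real data):

```python
import csv
from io import StringIO

# The four requested fields, in order. Field names are illustrative.
FIELDS = ["year", "candidate", "phd_institution", "placement"]

# A hypothetical example submission (not a real hire):
submission = "2014,Jane Doe,Example University (Philosophy),TT at Sample College"

# csv.reader handles the comma-separated format; zip pairs fields with values.
record = dict(zip(FIELDS, next(csv.reader(StringIO(submission)))))
print(record["year"], record["candidate"])
```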
As discussed here in the comments, one of the advantages of comparative data on placement is that they help fill in gaps left over by the PGR. That is, the PGR aims to measure the collective reputation of a department's faculty, but faculty reputation does not necessarily predict the likelihood of placement by that department, perhaps because it does not necessarily predict the overall quality of education in that department nor the quality of preparation for the job market by that department. Comparative data on placement have the potential to provide insight on these factors. To illustrate this, below I bracket the top 50 departments by tenure-track placement rate** (Note: I removed three universities from the top 50 that reported fewer than 2 graduates per year, since small numbers may yield misleading placement rates), providing for comparison these departments' ranks from the 2011 "Ranking Of Top 50 Faculties In The English-Speaking World" by the Philosophical Gourmet Report. Please note that placement brackets are provided only to demonstrate the potential utility of these data. Since the data set is not yet complete, I do not recommend viewing these as authoritative brackets. Update: Please see this post for an idea of how I envision this project developing. I have released the spreadsheet containing the raw data and methods I have been using to compute these results, and welcome any/all corrections. As a reminder, I do not have data on the yearly graduates from many departments, listed below. (Those departments are welcome to send me their data, if available.)
Update 7/1/2014: It has come to my attention that Brian Leiter has aired some criticisms of this post on his blog and has publicly suggested that it (this post, not his blog) be taken down. I respond to these criticisms below.
I changed some wording above from "ranking" to "brackets" and added a link to the spreadsheet. I have also changed the numbers in the below ranking to a grouping by bracket (where departments are listed in alphabetical order within brackets). This was a suggestion of Ned Block's. We have been corresponding on statistical significance and I decided that his suggestion would help avoid making small differences between placement rates appear more important than they are. I have left in the PGR rank for comparison, although the difference in rank has been omitted for the reasons provided above.
I have also added updates to my responses to Brian, based on some new statistical tests.
I am adding a link to a chart that will help readers to visualize the total number of reported tenure-track placements and estimated graduates from each department, rather than just percentage of tenure-track placements.
Update 7/6/2014: I ran a completeness test for 5 departments selected at random using a random number generator. The tenure-track numbers for these 5 departments appear to be accurate. More below.
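The spot check just described, selecting 5 departments at random with a random number generator, can be sketched as follows (the department list is a hypothetical placeholder; the real set has roughly 94 to 118 entries):

```python
import random

# Hypothetical stand-in for the list of department names.
departments = [f"Department {i}" for i in range(1, 95)]

rng = random.Random(2014)            # fixed seed so the check is reproducible
sample = rng.sample(departments, 5)  # 5 departments, chosen without replacement
print(sample)
```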
As discussed in the comments at a previous post, I have been looking at department-specific placement rates. "Placement rate" is the number of reported placements*** divided by the number of graduates. I looked at reported placements between 2011 and 2014 and graduates between 2009 and 2013. I do not have data on many departments that reported placements in this time frame**, but of those 94 departments for which I do have data, 32 appear to have placement rates higher than 50% for tenure-track jobs and 51 appear to have placement rates higher than 50% for a combination of tenure-track, postdoctoral, VAP, and instructor jobs (both sets are listed below).****
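The rate just defined is simple division; here is a minimal sketch, including the over-50% filter, using hypothetical counts of my own rather than the actual data set:

```python
# "Placement rate" = reported placements / graduates.
# Counts below are hypothetical placeholders, not the actual data set.
departments = {
    "Dept A": {"tt_placements": 12, "graduates": 20},
    "Dept B": {"tt_placements": 4,  "graduates": 15},
    "Dept C": {"tt_placements": 9,  "graduates": 10},
}

def tt_rate(d):
    """Tenure-track placement rate for one department's counts."""
    return d["tt_placements"] / d["graduates"]

# Departments whose TT placement rate exceeds 50%, in alphabetical order:
over_half = sorted(name for name, d in departments.items() if tt_rate(d) > 0.5)
print(over_half)
```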
Update: I have removed the following departments from both lists because I do not have updated graduation data from them: University of Chicago, University of Pennsylvania, and Yale University. These departments may well have placement rates as high as these others, but the graduation data I have from them comes from the 2012 APA Graduate Guide, since they did not complete the 2013 APA Graduate Guide. If the department chairs respond to my email of June 10th with updated information, I will update their status.
In two previous posts I have provided data on gender and AOS for placements reported at ProPhilosophy (2011-2012 and 2012-2013) and PhilAppointments (2013-2014). As of today, I have data on 729 placed candidates. In this post I aim to use this and other data to estimate the total number of candidates seeking employment and to calculate an approximate overall placement rate.
In keeping with the earlier post on gender, this is an overview post on the distribution of (first-listed) areas of specialization among placed candidates. I now have data on 722 candidates who have been placed in tenure-track, postdoctoral, VAP, or instructor positions between late 2011 and mid 2014 (ending today), drawn from ProPhilosophy (2011-2012 and 2012-2013) and PhilAppointments (2013-2014). I aim to make the spreadsheet with this data available by around July 1st (I will continue to add new data until that date).
I have data on 715 candidates who have been placed in tenure-track, postdoctoral, VAP, or instructor positions between late 2011 and mid 2014 (ending today), drawn from ProPhilosophy (2011-2012 and 2012-2013) and PhilAppointments (2013-2014). I aim to make the spreadsheet with this data available by around July 1st (I will add any new data available by that date). Until then, I will report some initial findings, starting with gender.
Yesterday I posted data for tenure-track placement from this past year. The data below include postdoctoral, VAP, and instructor hires sourced from PhilAppointments. Please check the data and make corrections in comments or by email (cjennings3 at ucmerced dot edu).
Last year I posted some statistics on tenure-track, postdoctoral, and VAP placements between 2011 and 2013. I aim to continue these analyses for a third year. Along the way, I will post progress on data collection, in case corrections are in order. The data below include tenure-track or equivalent hires sourced from PhilAppointments (I will provide a new post with postdoctoral and VAP data soon). Please check the data and make corrections in comments or by email (cjennings3 at ucmerced dot edu).
Readers of the Brains blog might know about a symposium there concerning a paper by Philipp Koralus. In his commentary on the paper, Felipe de Brigard mentions the problem of captured attention:
"I have a hard time understanding how ETA may account for involuntary attention. Suppose you are focused on your task—reading a book at the library, say—and you hear a ‘bang’ behind you. A natural way of describing the event is to say that one’s attention has been involuntarily captured by the sound. Now, how does ETA explain this phenomenon?"
"So, you might have been asking, as part of your task of reading the blog, 'What does the blog say?' Now, you are getting the incongruent and irrelevant answer 'There’s a loud noise behind you.' There are now two possibilities, similar to what happens in the equivalent case in a conversation. One possibility is that you accommodate the answer, adopting a new question (and thereby a new task) to which 'There’s a loud noise behind you' would be a congruent answer, maybe, 'what sort of thing going on behind me?...You could also refuse to be distracted and then exercise some top-down control on your focus assignment to bring it back to something that’s relevant to your task.'
When I coined "the problem of captured attention" in my 2012 Synthese paper, "The Subject of Attention" (not cited by Koralus/de Brigard), I took a similar line, but focused on the activity of the subject, rather than on questions and answers:
In my role as instructional faculty, I aim to grade everything anonymously, which is a provision I enjoyed as an undergraduate. My current method is to ask students to write their names on the back of their papers and exams, which also helps me to return them. One of my students remarked that I must do this because I am particularly biased. She may be right. But there is reason to believe that we are all biased against minority groups in our grading practices. Take this publication on the perception of grammatical and spelling errors by partners at 22 law firms: "The exact same memo averaged a 3.2/5.0 rating under our hypothetical 'African American' Thomas Meyer and a 4.1/5.0 rating under hypothetical 'Caucasian' Thomas Meyer. The qualitative comments on memos, consistently, were also more positive for the 'Caucasian' Thomas Meyer than our 'African American' Thomas Meyer." It seems obvious to me that these effects could have an impact on the grading of philosophy papers and exams. (It may be worth noting that the gender/race/ethnicity of the partner did not affect these findings, although "female partners generally found more errors and wrote longer narratives"). And take this publication on faculty assessment of a student applicant, mentioned a couple of years ago here at NewAPPS: "Our results revealed that both male and female faculty judged a female student to be less competent and less worthy of being hired than an identical male student, and also offered her a smaller starting salary and less career mentoring." The difference in mean rated competence, hireability, and mentor-worthiness was of the order of 10%. Again, it seems obvious to me that these effects could have an impact on the grading of philosophy papers and exams, which could be a grade-letter difference (i.e. the difference between a B and a C).
Since perceived differences in grading standards could have an impact on whether students choose to stay in philosophy, it seems to me that anonymous grading would be both more just and more likely to encourage a diverse range of participants in philosophy (see other suggestions on this over at Daily Nous). What does everyone else think? Do you grade anonymously? If not, why not?
Update: Other posts on this topic are here and here.
This blog officially has 16 authors, 6 of whom are women. A quick glance at the category cloud will show you that one of the most prolific authors is a woman. Why, then, does a commentator at Philosophy MetaBlog characterize the blog as run by men? This is the comment linked to by Brian Leiter:
“Anonymous May 4, 2014 at 8:48 AM I can't speak for others' use of the term, but in my case the behavior over the last few years of Protevi, Schliesser, Matthen, Lance, Kazarian, et al. is what makes the term 'Nudechapps' so fitting. The boys have made a habit of prancing around in condescending moral superiority over so many things that one is reminded of a person engaging in a shameless display of self-aggrandizement. What's worse, the Nudechapps consistently treat dissenters with derision and disgust. So the echochamber these nincompoops have created for themselves has allowed them to spread a view within their little clique that is grotesque in many of its details. And the handful of hangers-on that support their shenanigans are like nothing so much as the stupefied populace trying so hard to convince themselves that the emperor is wearing the glorious raiment of moral superiority. But of course the emperor is wearing no clothes, and he is shameless about how good he looks. Thus, Nudechapps.”
This description, and others in the comments at Philosophers’ Anonymous, seems to me an ignoble attempt to take down individuals without recourse to evidence or argument. For the most part, I do not find such expressions worthy of consideration. But this one is interesting, I think, because of what is left out. Is it the case that the commentator thinks that none of the women at NewAPPS fit the description he or she finds so apt for its men? I doubt it. A more reasonable reading of this comment is that the author has simply forgotten the women of NewAPPS, or finds them relatively unimportant. Such forgetting, together with so much vitriol about feminism in the comment stream at that blog, is striking, if not all that surprising. As one recent study found, "hostile sexists and feminists were more and less likely, respectively, to show implicit prejudice against female authorities." In this case, gender bias serves to spare our blushes, but not without reminding us that we have to work harder to be heard, especially by those who start from further away.
Update: I added text above to distance the gender bias claim for the comment in question from the claim about vitriol toward feminism found in the overall comment stream.
I noted in another post the apparent difference, by gender, in the impact that the Philosophical Gourmet ranking of one's PhD-granting institution has on tenure-track placement, following up on posts elsewhere (here, here, and here). In this post I want to follow up on a speculation I made in comments: that the apparent difference is due not to a difference in the way prestige affects women and men on the job market, but to a difference in the way the Philosophical Gourmet tracks prestige for areas dominated by men versus areas with a higher proportion of women.
You may already be familiar with work by Kieran Healy showing that the Philosophical Gourmet ranking especially favors particular specialties: "It's clear that not all specialty areas count equally for overall reputation... Amongst the top twenty departments in 2006, MIT and the ANU had the narrowest range, relatively speaking, but their strength was concentrated in areas that are very strongly associated with overall reputation---in particular, Metaphysics, Epistemology, Language, and Philosophy of Mind."
Marcus Arvan at the Philosophers' Cocoon posted sample data from the new appointments site at PhilJobs, which is discussed in a great post by Helen de Cruz here at New APPS. In comments at de Cruz's post and in a new post, Arvan discusses the impact of Gourmet ranking on women and men seeking tenure-track jobs. I wanted to follow up on Arvan's post by looking at the full set of data currently available at PhilJobs. I did this in part because I knew that the sample Arvan collected was skewed with respect to gender, due to an earlier analysis of gender I performed for a comment on a post at the Philosophy Smoker. With that convoluted introduction aside, here is a summary of the findings, in keeping with the findings by Arvan: the Gourmet rank of one's PhD-granting institution appears to have a greater impact on men seeking tenure-track jobs than on women seeking tenure-track jobs. Although I cannot yet speak to the source of this discrepancy, I (like Arvan) find the difference troubling. I welcome comments on the source of the difference below, although any comments will be subject to moderation. Let's look more closely at the data (Note: the linked spreadsheet was updated on May 14th):
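For readers curious how a comparison like this can be run, here is a minimal sketch. The records below are invented for illustration only (the real figures are in the linked PhilJobs spreadsheet); the idea is simply to compute, separately for each gender, the correlation between Gourmet rank of the PhD program and a binary placed/not-placed outcome, and then compare the two coefficients.

```python
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

# Hypothetical records for illustration: (gender, Gourmet rank of
# PhD program, placed in a tenure-track job). Lower rank number
# means a more prestigious program.
records = [
    ("M", 3, 1), ("M", 8, 1), ("M", 15, 0), ("M", 40, 0), ("M", 22, 0),
    ("F", 5, 1), ("F", 12, 0), ("F", 30, 1), ("F", 18, 1), ("F", 45, 0),
]

for gender in ("M", "F"):
    ranks = [r for g, r, p in records if g == gender]
    placed = [p for g, r, p in records if g == gender]
    # With a binary outcome this is a point-biserial correlation:
    # a more negative value means better (lower) rank goes more
    # strongly with placement.
    print(gender, round(pearson(ranks, placed), 2))
```

A more negative coefficient for men than for women in data like these would mirror the pattern described above, though of course a correlation alone says nothing about its source.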
The much anticipated appointments page at PhilJobs is now live (see this announcement from the APA). To encourage the use of this service, we will be suspending the hiring thread on NewAPPS. I want to commend this effort by the APA, David Bourget, and David Chalmers, which will certainly be a helpful addition to the profession.
Following an excellent post on cochlear implants by Teresa Blankmeyer Burke over at Feminist Philosophers is a comment pointing the reader to this interview, which may be of interest to NewAPPS readers. Of particular interest is William Mager's attempt to describe his experience of sound with the new implants. Here are a few key passages:
“It’s not sound. It’s beeping. But it doesn’t feel like sound. It feels like some kind of electronic trigger is going on in your brain.” (at around 4:27)
For my graduate seminar on attention last night we read papers outside my usual range of expertise, on the intersection of attention and culture. We read Nisbett et al.'s Culture and Systems of Thought and Hedden et al.'s Cultural Influences on the Neural Substrates of Attentional Control. Both are fascinating and worth a read. But the Nisbett et al. article, in particular, is full of ideas that may be interesting to readers of New APPS. Here are some of the points I found most salient:
The article maintains that different cultural groups have different, opposed styles of argument. Specifically, "Westerners" are committed to avoiding the appearance of contradiction as part of an analytic style of argumentation, but "East Asians" embrace contradiction as part of "naive dialecticism." They give an example of one study that tests this claim:
This is just to note that the links for reporting tenure-track, postdoctoral, and VAP hires from 2013-2014 have been placed in the upper-right sidebar of this blog. This should facilitate the reporting and monitoring of this information. Further, both Daily Nous and ProPhilosophy have plans to integrate the information into their sites in an easier-to-view format. Thank you to all of the commenters at the original posting and to all those who have already stepped up to help with this effort.
If you would like to report hiring information from 2013-2014, please fill in the form at this link; the data entered there feeds into a spreadsheet available here. Quite a bit of hiring information is already available at Leiter Reports, here.
UPDATE 8 March 10:30 am CDT: This form and spreadsheet need not be limited to this NewAPPS post. If any other blog would like to link to it, they are welcome. In that case, I would be happy to make the relevant bloggers co-owners of the Google documents in question. Ideally, the information would be available in a neutral location, but having the links posted to several different blogs would come close to that.
The Gendered Conference Campaign "aims to raise awareness of the prevalence of all-male conferences (and volumes, and summer schools), of the harm that they do." In keeping with that aim, I call your attention to a (so-far) gendered speaker series that raises awareness for this issue in a different way. The University of California at Merced (disclaimer: my place of work) started a Philosophy Speaker Series this year that has so far organized talks for three speakers, all of whom are women (see the calendar and archive here). This was not intentional, but the fact that it is striking to have this sort of line-up reveals that we have some way to go to reach gender parity. Has anyone else come across conferences, speaker series, or summer schools with all-woman line-ups?
By clicking the link posted below you can download an Excel spreadsheet with placement data from the last two years, 2011-2012 and 2012-2013, together with a “how-to” guide for future years of data gathering and analysis. The data is sourced from ProPhilosophy, the link for which you can find here.
Many readers of this blog are probably aware that I ran some basic hiring statistics last year, both at The Philosophy Smoker and ProPhilosophy. I first ran those analyses out of personal curiosity, but soon found that others considered them useful. I thus aim to run them each year, so long as placement directors are willing to supply the required information. I will cross-post the results here and at ProPhilosophy.
A Plea: ProPhilosophy has sent out emails to a number of departments, so far hearing back from just 19 of them. (You can find the collected responses here.) Last year ProPhilosophy heard back from 64 departments. I hope that many more placement directors/DGSs/department chairs will find time to respond to ProPhilosophy with the available information (by sending an email to email@example.com).