APDA has released this year's APA report and has added an application to the website (we are still working on its auto-update feature, so the data it represents are a few days old as of today). In keeping with our program-specific reports released in April, here are some basic charts on the programs we covered at that time. These are raw numbers for graduates between 2012 and 2016: the first two graphs (here and here) reflect only APDA database numbers, whereas the third graph (here) makes use of external graduation data in its "unknown" category (see the note on the 4th graph for details). At the bottom of the page is a sortable chart with percentages for these categories. We have not yet started checking the new data (program representatives have added over 400 graduates since August 15th), so there may be some errors (including those noted here). We are currently writing up some results from the survey into one or more papers, which should be available in early 2017. Feedback is welcome!
In order to update my post from January, I contacted Mark Fiegener of the NSF (National Center for Science and Engineering Statistics) who was kind enough to supply me with information from the Survey of Earned Doctorates on gender for graduates of doctoral programs in philosophy using a shorter time scale: 2004-2014. Using this information, I can now provide a new list of programs with an above average percentage of women graduates in philosophy. Only 86 programs had sufficient data in this time period, and 35 had an above average percentage of women graduates between 2004 and 2014 (information from the other programs was suppressed by the NSF for reasons of small numbers/privacy). Comparing these 35 to the previous list of 39 programs with an above average percentage of women graduates 1973-2014, 11 of the 39 do not make the more recent list (CUNY, Emory, Harvard, Illinois-Chicago, Maryland, NYU, Pittsburgh, Rice, Rutgers, Stanford, and UMass Amherst), and an additional 2 did not have sufficient data to be included (Claremont and Tennessee), but 26 of the 39 show up on this new list. Update: Note that some of these 11 do have above average percentages of women in the APDA data between 2012 and 2015 (namely, Emory, Harvard, Illinois-Chicago, Maryland, and Pittsburgh). I will aim to do a full comparison with the APDA data soon. Of the 11 programs that became a focal point for my previous post (because of what I took to be an unwarranted call for their closure), 1 did not have sufficient data to be included, but the other 10 had an average 36.93% women graduates (compared to an overall average of 29.31% women graduates for the 86 programs included). Note: I did not attempt to obtain shorter time scale data for racial and ethnic minorities simply because of the small numbers involved, which would have meant suppressed information for most programs. Here is the list of 35 programs with a greater than mean percentage of women graduates for 2004-2014:
Eric Schwitzgebel alerted me to a post at the Leiter Reports blog on the work of Jonathan Strassfeld (University of Rochester), who has compiled a document with philosophers appointed at 11 doctoral programs in the United States between 1930 and 1979: Berkeley, Chicago, Columbia, Cornell, Harvard, Michigan, Princeton, Stanford, UCLA, U Penn, and Yale. I was curious whether appointments in this period could predict present day diversity for these programs. My prediction was that a higher percentage of women among those appointed in this period would predict a higher percentage of women among faculty and graduate students today. I also wondered, given my work with Eric Schwitzgebel, whether area of specialization would interact with this effect (in that work, women were shown to be more likely to specialize in Value Theory). Although this is not a formal analysis, it appears as though programs that appointed a higher percentage of women in this period do have a higher percentage of women and non-white graduates today, and that there is some interaction with area of specialization such that programs with more faculty in LEMM/analytic fields tend to correspond with lower percentages of women, and historical fields tend to correspond with higher percentages of women. Given this first pass look at Strassfeld’s data, I think it would be useful to attempt to collect this data for a larger set of programs, to more formally explore these connections. More details on my first pass look at Strassfeld's data below. (Numbers updated on 5/29/16 to reflect a change made to Strassfeld's data. Namely, I had incorrectly removed one woman faculty member from the analysis, which Strassfeld pointed out to me.)
As noted in the APDA update posted over a week ago, we are in the middle of two important projects:
We are adding individual editing to the website in May 2016. Up to March 2016, placement data were edited by project personnel, placement officers, or department chairs. In the future, individual graduates will have the option to claim their entry. To do this, we require a contact email for the graduates in our database. We currently have email addresses for roughly one quarter of the database. For graduates: to ensure that you are included among those who have access to individual editing, please provide your email address here: http://goo.gl/forms/mXUbpeH5ic
Along with individual editing, in May 2016 we will add a brief qualitative survey for graduates. We will use linguistic analysis to compare these responses across graduates, connecting them to metadata on graduating institution, gender, graduation year, area of specialization, and placement type. Participants will be compensated for their time. Again, to do this, we require a contact email for the graduates in our database. For graduates: to ensure that you are sent the qualitative survey, please provide your email address here: http://goo.gl/forms/mXUbpeH5ic
Please feel free to send the form to past philosophy graduates you know who may want to be included! As it says above, time they spend filling out the qualitative survey in May will be compensated (by a $50 Amazon gift card raffle for every 50 participants). And note: it is our policy to treat the email addresses as private and accessible only by project personnel.
This is a brief notice that APDA has finalized its update for the 2015 report. Here is the report from 2015 and here is the update. Please contact me (cjennings3 at ucmerced.edu) with feedback or leave comments and suggestions below.
Update: I replaced one of the links as I noticed that the AOS table had been mismatched to the gender table.
Update (4/15/16): I will list here errors that are discovered in the data/report:
University of Washington--4 grads (2 from 2012, 2 from 2014) should be listed as temporary academic, but are currently listed as permanent (but non-TT) academic.
University of Texas, Austin--placement records are missing several graduates and should be checked against the placement page (the placement page was down when we attempted to check it in November).
University of Arkansas--this program was not contacted for data and should be included in future reports.
As I noted in a previous post, APDA is in the middle of finalizing data for a new report. This will be a follow up to the report released in August 2015. We hope to include data on graduates with no listed placements and Carnegie Classifications, among other improvements. It is our aim to release the new report by April 15th, so that it can be useful to those who have applied to graduate programs this year. (Until that time, editing on the site has been turned off so that we can verify and analyze the data. We will turn back on editing in May when we turn on a new feature to allow for individual editing.)
In preparation for that report, I have been trying to determine the best way of displaying our data. I am attaching four DRAFT images that present data for 104 universities using pie charts (on gender, AOS, job type, and graduation year: gender and AOS use data from APDA alone, whereas job type and graduation year also use graduation information from outside APDA, discussed in this post). I used pie charts because they are visually intuitive and I want the data to be as accessible as possible. I used suggestions from this post to help avoid some common criticisms of pie charts. (Note: I tend to analyze data in R, using ggplot2 for graphs, so the code I provide below is in that language for anyone with expertise in this area.) At the top left of each image are the data for the full set of 104 universities. (Universities are included only if we have both an external source of graduation data and placement records for that university with recorded graduation years in this time period.)
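For illustration, here is a minimal sketch of the data-preparation step behind charts like these. The post's own code is in R/ggplot2; this is a Python analogue, with hypothetical placement-type counts (not APDA figures), showing how raw counts become the labeled percentage slices a pie chart displays.

```python
# Sketch: turn raw category counts into labeled percentage slices.
# Category names and counts below are hypothetical, not APDA data.

def pie_labels(counts):
    """Return (label, share) pairs, largest slice first, with shares
    rounded to whole percent in the label."""
    total = sum(counts.values())
    return [(f"{name} ({100 * n / total:.0f}%)", n / total)
            for name, n in sorted(counts.items(), key=lambda kv: -kv[1])]

example = {"Permanent academic": 45, "Temporary academic": 35, "Nonacademic": 20}
for label, share in pie_labels(example):
    print(label)
```

The same pairs could then be fed to any plotting library; only the counts-to-shares step is shown here.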
I am looking for feedback on these charts. Are these easy to understand? Are there alterations that would be beneficial? Two other options, with images below: 1) Replace pie charts with bar graphs (one sample version below). 2) Make university-specific sets of charts. (This is more time-intensive than 1.)
Note also: We aim to release tables and regression analyses, as we did last time, and any images we release will be in addition to that work. Your input is welcome!
The Academic Placement Data and Analysis project (APDA) hopes to release program specific placement rates in the next week or two (before April 15th). These placement rates compare placement data to graduation data, so good graduation data are crucial. Yet, finding consistent graduation data is surprisingly tricky. The project currently uses the following external sources:
We gather data from multiple sources because each data set is incomplete, and for different reasons. For instance, the Survey of Earned Doctorates gathers data from programs in the United States alone, while the American Philosophical Association collects data from programs in the United States and Canada. Since the Review of Metaphysics publication supplied names, we were able to integrate this information into APDA. For the other three sources we compiled the number of graduates for the years 2012-2015 into a single spreadsheet, assuming the later of the two years when a range was provided (e.g. 2011-2012). If I remove the programs that had missing data from all three remaining sources (SED, PhilJobs, APA), then we have data on 105 universities. How do these sources compare to one another and to the data contained in APDA?
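The two conventions just described (take the later year when a range like "2011-2012" is reported, and drop programs with no data in any source) are easy to make explicit. A minimal sketch, with made-up universities and counts rather than the real SED/PhilJobs/APA figures:

```python
def grad_year(raw):
    """Parse a reported graduation year, taking the later year when a
    range such as '2011-2012' is given."""
    return int(str(raw).split("-")[-1])

def with_any_data(sources):
    """Keep universities that have data in at least one source;
    programs missing from every source are dropped."""
    return set().union(*sources.values())

# Hypothetical graduate counts per university from three sources.
sources = {
    "SED":      {"U1": 10, "U2": 7},
    "PhilJobs": {"U1": 9, "U2": 6, "U3": 4},
    "APA":      {"U1": 11, "U2": 8},
}
```

Here `U1`, `U2`, and `U3` are placeholder names; with real data the result of `with_any_data` would be the 105-university set.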
When I first took philosophy of mind at St Andrews in 2002 as an undergrad, we discussed the mind-body problem, behaviorism, identity theory, functionalism, modularity, and qualia. I wrote my term paper on anomalous monism and strong supervenience, entitled: "Is it possible for someone to be in a particular mental state without having any propensity to manifest this in behaviour?" I answered "yes" (!) citing 16 articles, including works by Armstrong, Child, Crane, Davidson, Heil, Kim, Moore, and Quine. I argued that Davidson's arguments for strong supervenience ignored the possibility of circular causality and of acausal mental events, which I admitted might be undetectable. My closing remarks: "Anomalous Monists hold that it is impossible for a person to be in a particular mental state without having any propensity to manifest it in behaviour...I have shown that it is possible, where possibility includes unobservables, that a person be in a mental state without having any propensity to manifest this in behaviour. Whether the person in question can discover this mental state is a question of practicality: a matter for psychologists."
Now, almost 15 years later, I am planning to teach my first course in philosophy of mind. But the field has changed, as have my intellectual leanings. In 2008 Joshua Knobe and Shaun Nichols published their "Experimental Philosophy Manifesto." In that same year I presented my first poster at the Association for the Scientific Study of Consciousness meeting in Taipei, which officially changed my research trajectory from philosophy of physics to empirically-informed philosophy of mind. In 2010 I presented a poster at my first Vision Science Society meeting, a meeting only rarely attended by philosophers. I now teach in an interdisciplinary program with neuroscientists, psychologists, linguists, computer scientists, and philosophers. In short, I have come a long way from boundary policing. Moreover, the field has come a long way from the metaphysical debates that caused so much excitement in the early aughts. So where is philosophy of mind now? What should we be teaching our students in philosophy of mind courses? This is where you come in.
Eric Schwitzgebel and Carolyn Dicey Jennings
This article brings together lots of data that we have been gathering and posting about over the past several years, here and at The Splintered Mind. Considered jointly, these data tell a very interesting story about the continuing gender disparity in the discipline.
Here's the abstract:
We present several quantitative analyses of the prevalence and visibility of women in moral, political, and social philosophy, compared to other areas of philosophy, and how the situation has changed over time. Measures include faculty lists from the Philosophical Gourmet Report, PhD job placement data from the Academic Placement Data and Analysis project, the National Science Foundation’s Survey of Earned Doctorates, conference programs of the American Philosophical Association, authorship in elite philosophy journals, citation in the Stanford Encyclopedia of Philosophy, and extended discussion in abstracts from the Philosopher’s Index. Our data strongly support three conclusions: (1) Gender disparity remains large in mainstream Anglophone philosophy; (2) ethics, construed broadly to include social and political philosophy, is closer to gender parity than are other fields in philosophy; and (3) women’s involvement in philosophy has increased since the 1970s. However, by most measures, women’s involvement and visibility in mainstream Anglophone philosophy has increased only slowly; and by some measures there has been virtually no gain since the 1990s. We find mixed evidence on the question of whether gender disparity is even more pronounced at the highest level of visibility or prestige than at more moderate levels of visibility or prestige.
Full paper here.
As always, comments, corrections, and objections welcome, either on this post or by email.
At the suggestion of Lionel McPherson in comments at this post, I am disaggregating the non-white category of this previous post into three lists: "Hispanic," "Asian or Pacific Islander," and "Black" graduates of PhD programs in philosophy, per graduating institution. Importantly, the data only cover permanent residents and citizens of the United States (thanks to Brian Weatherson for pointing this out). Because of that fact I use data from the United States census as a point of comparison above each list.
Note that the data on graduates were provided by the National Center for Science and Engineering Statistics thanks to Eric Schwitzgebel's efforts (see here and here). Specifically, the NCSES supplied the number of racial and ethnic minority graduates from doctoral philosophy programs in the United States between the years 1973 and 2014 (but not broken down by year).
Below, I list the programs in the United States with a higher than average (mean) percentage of graduates from each of these categories, where the mean is taken for 96 programs in the United States (I omitted institutions from the NCSES data that no longer offer doctoral degrees in philosophy)...
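The listing criterion above (programs whose percentage exceeds the mean across the included programs) can be sketched as follows, with hypothetical program percentages standing in for the NCSES figures:

```python
def above_mean_programs(percent_by_program):
    """Return (program, percent) pairs above the mean, highest first.
    The program names and percentages used below are hypothetical."""
    mean = sum(percent_by_program.values()) / len(percent_by_program)
    winners = [(p, v) for p, v in percent_by_program.items() if v > mean]
    return sorted(winners, key=lambda pv: -pv[1])

example = {"A": 36.0, "B": 25.0, "C": 31.0, "D": 20.0}
```

With the example above the mean is 28.0, so programs A and C make the list.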
Most of us know about efforts to sort philosophy programs according to placement rate or prestige, but what of the percentage of PhD graduates from each program who are women or other underrepresented minorities? Thanks to Eric Schwitzgebel's efforts in contacting the National Center for Science and Engineering Statistics (see here and here), we have access to some numbers on this issue. Specifically, the NCSES supplied the number of women and minority graduates from doctoral philosophy programs in the United States between the years 1973 and 2014 (but not broken down by year). Below, I provide the top programs in the United States from this list of 96 programs in terms of % of women graduates in this period, as well as the top programs in terms of % of non-white graduates, where for "non-white" I am aggregating the NCSES categories of "Hispanic," "Asian," "Asian or Pacific Islander," "Black" and "two or more races." (I omitted institutions from the NCSES data that no longer offer doctoral degrees in philosophy.) One striking feature of these lists is how many of the programs show up on Brian Leiter's list of PhD programs "whose existence is not easy to explain." A provocative rhetorical question follows: Should we be closing PhD programs that better serve women and minorities in philosophy? I welcome discussion below.
I recently joined Twitter and uploaded some quick attempts to sum up what has been happening with job ads on PhilJobs this year compared to a couple of past years. I noticed, first, that there are fewer job ads this year so far than in previous years, at least on PhilJobs (with some nice caveats provided in comments here). Second, looking at first AOS, the most sought-after area of specialization this year differs from previous years. While in my initial tweet on this I said that value theory appeared better off than other areas of specialization this year, that was based on a mistake. (You can check out the Excel file I used for 2 and 3 if you want to help me identify other potential mistakes. 1 is based on PhilJobs searches, not a csv file.) In terms of percentages, all areas of specialization are down this year since open searches are up, relative to last year. I take this increase in open searches to be a good thing, in terms of potentially increasing the intellectual diversity of philosophy, but I am interested in what others think about this. Third, if you look at the full AOS listing for job ads, certain words are more frequent this year than you might expect, given the first AOS listing, such as "science." Finally, if you look at the first-reported AOS of the bulk of the placed candidates in the APDA database, the AOS balance is different yet again (favoring LEMM over history and traditions, for instance). (In the future, I can break this down by TT placement year, but I didn't have time to do that for this post.) These are initial numbers, and the season just started, but I think this is a space worth watching. Here are some numbers and images (with 2015 highlighted in yellow):
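The word-frequency comparison in the third point can be reproduced with a simple tokenizer over the full AOS strings. A minimal sketch; the sample listings are hypothetical, not this year's ads:

```python
from collections import Counter
import re

def aos_word_counts(listings):
    """Count lowercase word frequencies across full AOS listings."""
    counts = Counter()
    for entry in listings:
        counts.update(re.findall(r"[a-z]+", entry.lower()))
    return counts

sample = ["Philosophy of Science", "Ethics", "Philosophy of Cognitive Science"]
```

Comparing `aos_word_counts` output across years would surface words like "science" that are over-represented relative to the first-listed AOS alone.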
In the coming weeks I hope to be updating you with more details and analyses, but for now I am simply announcing that the final report for APDA is complete. Feel free to ask questions or comment below.
*Update: we noticed an error in one of the charts and some potentially confusing language in that section, so we have updated the report at the link.
A few weeks ago I posted some details about a new project: Academic Placement Data and Analysis (APDA) here. Readers may be interested in some updates to that project. Note: We are sending out emails to program representatives over the next few hours with much of this information, including an extended collection goal date of July 22nd, 2015. The original blogpost is quoted below.
1) Total Placement Records
"There are approximately 2300 total entries, with several categories of data."
As of noon on July 13th, we had 3078 placement records for 2444 people--that is 573 more placed candidates than we had in the database on June 23rd. (In comparison, PhilJobs, the next most comprehensive database, had 2307 placement records for junior hires at that same time.)
Academic Placement Data and Analysis (APDA) is a new, collaborative research project on placement into academic jobs in philosophy. The current project members include myself, Patrice Cobb (psychology, UC Merced), Angelo Kyrilov (computer science, UC Merced), David Vinson (cognitive science, UC Merced), and Justin Vlasits (philosophy, UC Berkeley). This project grew out of earlier work on placement that was posted here and elsewhere over the past few years. Funding for this project by the American Philosophical Association has so far provided for the development of a website and database that can host the data for this project (thanks to the work of Angelo Kyrilov over the past two months). There are approximately 2300 total entries, with several categories of data. Most of these categories of data have been made publicly available, whereas any categories that have not been made public (e.g. name, gender, race/ethnicity) will be provided to researchers with IRB approval from their home institutions. You can see the website and database so far here:
This is a moderated thread. So there can be no question that Leiter at least had to deliberately press ‘publish’ on this comment. It is less clear, as his own comment further down indicates, that he had fully thought through the implications of doing so.
Brian Leiter said...
Yes, I suppose I should not have approved #2, but I've been approving almost everything. On the other hand, Johnson is a very public and rather noxious presence in philosophy cyberspace, so I'm not surprised there is interest.
I’m sure we’re all glad to know that Brian has some standards (he didn’t approve everything, after all). Still, what he did approve seems to merit some comment.
The speculation about the reasons for Leigh’s ability to secure a second job in professional philosophy is untoward, given that a) she is non-tenured, b) she has not in any way been credibly accused or even suspected of professional misconduct, and c) the characterization of her current position is inaccurate. Publishing this comment and thereby generating a public sense that Leigh does not deserve her current employment is at the very least an obvious instance of bullying on Brian’s part (and fits his by now well-established pattern of directing this sort of attention toward junior, precariously employed members of the profession).
In what has to be one of the great whoppers of his entire blogging career, Brian goes on to justify leaving such a comment up by validating a more general interest in the question of why someone who is, in his view, “a very public and rather noxious presence in philosophy cyberspace” should have a job.
To say that the implicit standard in 2) risks implicating Brian himself is rather obvious. More interestingly, it seems to be perhaps as candid an admission as we are likely to get from Brian that he sees nothing wrong with harassing people he doesn’t like if he can possibly pull it off. And so we find him abusing the pretext of discussing ‘issues in the profession’ to pursue his own petty little vendetta.
When I first looked at placement statistics at the Philosophy Smoker I performed some analyses that I shouldn't have. First, I performed too many analyses. Second, I used the wrong kinds of analyses for some of the data. I did not imagine that these statistics would take off as they did and I was overworked*, which contributed to some mistakes on my part. One of these mistakes was running correlation analyses over gender:
I also found a negative correlation between PhD granting institution and number of publications (-.17: the lower your PhD granting institution is ranked the more peer-reviewed publications you have) and between gender and number of publications (-.21: if you are a man you likely have more publications than if you are a woman).
While at the time I suspected that this negative correlation had to do with the increased difficulty women have in publishing their work, others worried that women had an upper hand on the job market. I brushed off this latter worry because the proportion of women who found tenure-track jobs was about the same as the proportion of women who obtain PhDs in philosophy. In fact, in the 2011-2014 data set I found that there is not a significant difference between the proportion of women who graduate from each department and the proportion that find tenure-track jobs from each department (but there is a significant difference for postdoctoral/VAP/instructor positions, which are awarded to a smaller proportion of women relative to women graduates). But this worry regularly comes up in comments and I feel a responsibility for having possibly led people astray with analyses I shouldn't have used in the first place. For that reason, I want to provide some more appropriate analyses here, as clarification on the relationship between gender and publications in the placement data from 2011-2012 and 2012-2013. Those who want to check this work can use the spreadsheet at the bottom of the post here, which is the one I used. (I do not use the more recent data because I decided not to collect publication data in this last round, due to time constraints.)
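For readers who want to check the proportion comparison themselves: whether women's share of tenure-track placements matches their share of graduates can be assessed with a standard two-proportion z-test. A minimal stdlib-only sketch; the counts in the usage line are made up, not the 2011-2014 figures:

```python
from math import erf, sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two proportions,
    using the pooled standard error and a normal approximation."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value
```

For example, `two_proportion_z(30, 100, 27, 90)` compares a 30% rate with a 30% rate and yields z = 0 (no difference), while a large |z| (roughly above 1.96) would indicate a significant difference at the 5% level.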
I note here the existence of the October Statement, which 111 philosophers have signed to demonstrate their resistance to all ranking systems. (I have not signed this statement. As I say here, I favor a user-created ranking system.)
In addition, Brian Leiter released a list of those who will serve on the board of the PGR for 2014. I checked this list against the board of the 2011 PGR and an earlier announcement and found 7 missing names. I do not presume to know why all 7 of these people appear to have stepped down from the board, but Brian notes at his blog that "Five Board members resigned over the past two weeks, some because of the controversy, and some because of unrelated concerns about the PGR methodology." Here are the missing names: Alex Byrne, Craig Callender, Crispin Wright, David Brink, Graham Priest, Lisa Shapiro, and Samantha Brennan.
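The roster comparison here is just a set difference. A tiny sketch with placeholder names (not the actual board members):

```python
def missing_members(earlier_board, current_board):
    """Names on the earlier roster absent from the current one."""
    return sorted(set(earlier_board) - set(current_board))

earlier = ["A. Adams", "B. Brown", "C. Clark"]
current = ["B. Brown", "D. Davis"]
```

Running this over the 2011 and 2014 lists is how the seven missing names above were identified.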
--for all of the departments in the top 50 of the 2011 worldwide PGR, a mean 17% faculty signed the document**. (I am attaching the Excel spreadsheet I used here.)
--there is little to no correlation between PGR rating and the percentage of faculty who signed for departments in the top 50 of the 2011 worldwide ranking (-.11). Of these departments, those with greater than 17% faculty signatures include: ANU, CUNY, Duke, Georgetown, Harvard, Indiana, King's College London, MIT, Northwestern, Rutgers, Syracuse, UCL, UCSD, Cambridge, Edinburgh, Leeds, U Mass Amherst, Michigan, Oxford, UPenn, Sheffield, USC, St Andrews/Stirling, UVA, Wisconsin.
*I updated the list at approximately 2:45 p.m. PDT, October 10th, 2014.
**I did not match the names of the signers to the names of members of faculty, but compared the number of people who signed the document claiming a particular affiliation to the number of faculty listed in the current PGR faculty lists. It is possible that persons not included in the PGR list for a department signed the document with that department's affiliation, which would potentially lower this percentage as well as the percentage for that particular department.
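The two computations in this post (the per-department signature percentage described in the second footnote, and the correlation between PGR rating and that percentage) can be sketched in a few lines; the numbers below are placeholders, not the spreadsheet data:

```python
from math import sqrt

def signature_pct(signers, faculty_count):
    """Signers claiming a department's affiliation, as a percentage of
    that department's PGR faculty list."""
    return 100 * signers / faculty_count

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

With the real PGR ratings and signature percentages, `pearson` returns the roughly -.11 figure reported above.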
Update (October 6th, 2014): Sheffield is the second department to announce that it is not cooperating with the PGR this year. The October Statement has 111 signatures, as of October 4th. (I have not worked out how much overlap exists between these statements, so it would not be correct to say that these statements together constitute 745 signatures--the number is smaller than this, but I don't yet know by how much.)
Update (October 10th, 2014): Signatures on the September Statement have closed, and an announcement has been added, as below.
"The September Statement, signed by twenty-one philosophers on September 24, 2014, and its addendum, signed by six hundred twenty-four philosophers in the weeks following, was a pledge not to provide volunteer work for the Philosophical Gourmet Report under the control of Brian Leiter.
On October 10, Leiter publicly committed to stepping down from the PGR following the publication of the 2014 edition, which will be produced with Leiter and Berit Brogaard as co-editors. After its publication, Leiter will resign as editor, and become a member of the PGR's advisory board. (See Daily Nous's account here.)
The September Statement did not specify the conditions under which the PGR is considered to be "under the control of Brian Leiter". It is up to each individual signatory to decide whether it is consistent with the pledge to assist with the 2014 PGR with Leiter as a co-editor, or with future editions with Leiter as a board member.
We are grateful for the support of the philosophers who signed the September Statement, as well as that of those who worked in other ways to make clear that this kind of bullying behaviour is unacceptable in professional philosophy."
I have read in several places this description of my placement post and my response to Brian Leiter's criticisms of that post (most recently, in comments posted yesterday at Philosophical Comment):
"July 1: I posted a sharp critique of some utterly misleading rankings produced by Carolyn Jennings, a tenure-stream faculty member at UC Merced. She quickly started revising it after I called her out."
For the record, this does not strike me as an accurate representation of those events.
First, while I did post a ranking, I made it clear that I did this as an exercise: (from the original post, bold original) "As discussed here in the comments, one of the advantages of comparative data on placement is that they help fill in gaps left over by the PGR...To illustrate this, I below rank the top 50 departments by tenure-track placement rate**, providing for comparison these department's ranks from the 2011 "Ranking Of Top 50 Faculties In The English-Speaking World" by the Philosophical Gourmet Report. Please note that this placement ranking is provided only to demonstrate the potential utility of these data."
Second, while Brian Leiter did find the rankings misleading, many others did not, and even commended the clarity of language in my post. Take these quotes from David Marshall Miller, who has also worked on placement data: "Andrew Carson and, especially, Carolyn Dicey Jennings have developed analyses that now strike me as very robust." and "I will say, to again quote Leiter, that “all such exercises are of very limited value.” Nevertheless, they are of some use, and should be made available, so long as the methodology and limitations of the analysis are made clear. I think the PGR and the placement rankings by Jennings, Carson, and myself all meet this standard."
Third, Brian did post criticisms of the ranking, but I did not make any substantial revisions to the ranking based on his criticisms, since I did not find those criticisms to have merit. Brian's way of characterizing my response at the time was "Prof. Jennings digs in her heels."
Over the past three years I have collected and reported on placement data for positions in academic philosophy. (Interested readers can find past posts here at New APPS under the "placement data" category, several posts at ProPhilosophy, or the very first post on placement at the Philosophy Smoker.) This year, placement data will be gathered, organized, and reported on by the following committee of volunteers (listed in alphabetical order):
Over the next academic year, we aim to create a website, which will be parked at placementdata.com. This website will include a form for gathering data, a searchable database, and reports on placement data. Until that time, I am suspending updates to the Excel spreadsheet, which contains much of the data used in the past few years, plus the updates I have received over the past few months. (Many thanks to Justin Lillge for incorporating the bulk of these updates into the spreadsheet!) When the website is ready, departments will be able to update their placement data through an embeddable form. Stay tuned for these links in the coming months!
Marcus Arvan, of The Philosophers' Cocoon, had the idea of running a graduate student survey. This was something that the five of us had already talked about (and Justin Lillge had some preliminary work on this), so we have invited Marcus to join us in this project. He has posted some initial ideas here. Please do contribute to the discussion if you have insight!
The following ideas and arguments were central to my dissertation work, and are now published as an article in Philosophical Studies. I include them below in a much shortened format for those readers short on time, but high on interest (but hopefully not literally).
The ultimate claim of this work is that top-down attention is necessary for conscious perception. (I argue elsewhere that attention is not necessary for conscious experience, in general.) That is, we might ask the question: what is the contribution of attention to perceptual experience? Within cognitive science, attention is known to contribute to the organization of sensory features into perceptual objects, or object-based organization. I argue something else: that attention enables the perceptual system to achieve the most fundamental form of perceptual organization: subject-based organization. That is, I argue that subject-based organization is brought about and maintained through top-down attention. Thus, top-down attention is necessary for conscious perception in so far as it is necessary for bringing about and maintaining the subject-based organization of perceptual experience.
New APPS readers probably remember Helen De Cruz's excellent post on the polarized debate surrounding evolutionary science (which was picked up by NPR), as well as Roberta Millstein's follow-up post on the perhaps equally polarized debate concerning climate change. Both posts cite the work of Dan Kahan, who has a distinct take on these issues:
"I study risk perception and science communication. I’m going to tell you what I regard as the single most consequential insight you can learn from empirical research in these fields if your goal is to promote constructive public engagement with climate science in American society. It's this: What people “believe” about global warming doesn’t reflect what they know; it expresses who they are."
I just attended a talk by Michael Ranney, who opposes Kahan's position. In Ranney's view, communicating the mechanism of global climate change is enough to change the minds of people on both sides of the political spectrum. (Check out the videos!) Ranney shows, surprisingly, that just about no one understands the mechanism of climate change (Study 1). Further, he shows that revealing that mechanism changes participants' minds about climate change (Study 2).
An excellent article about Mary Beard, the famous classicist, is in this week's New Yorker. It is informative to have a prominent academic give an account of her life experiences like this. I want to encourage others to read the original article, but will pull out one salient and topical point. Beard is not only a very capable scholar, she is also "an avid user of social media," including regular postings at a blog. Despite the sexist reactions to her online presence, Beard has reacted with surprising generosity and patience: "In another highly publicized incident, Beard retweeted a message that she had received from a twenty-year-old university student: 'You filthy old slut. I bet your vagina is disgusting.'...The university student, after apologizing online, came to Cambridge and took Beard out to lunch; she has remained in touch with him, and is even writing letters of reference for him. 'He is going to find it hard to get a job, because as soon as you Google his name that is what comes up,' she said. 'And although he was a very silly, injudicious, and at that moment not very pleasant young guy, I don’t actually think one tweet should ruin your job prospects.'" Beard is an admirable and remarkable person, and learning about this new side of her makes her all the more so, in my mind. Check it out!
After reading some discussion at the Daily Nous about the Ferguson situation (also addressed in this post by Leigh Johnson), it struck me that it might be helpful to open a forum dedicated to discussing steps for improvement and change. Some ideas for improvement and change may reasonably focus on specific issues at the intersection of race, law, and legal force. One article linked in the comments goes in a more general direction, targeting economic inequality and economic reparation:
But this story is neither old nor unfamiliar. Rather than asking “why,” let’s focus on the banal laws and policies needed to redirect the distribution of wealth — stolen from black Americans, such that whites can no longer summon police, law or politicians on their behalf to erase or suppress black Americans, and other minorities. That will require more than revealing the name of the police officer who shot Michael Brown; it will require asking who, in the next round of city council elections, state elections and, of course, presidential elections, is ready to compromise their political career in order to work toward redirecting wealth, jobs, opportunities toward black and Latino populations that constitute the majority of the United States. Only when wealth changes hands will black Americans have a fighting chance to resist police power and violence.
This is a powerful suggestion that leads me to wonder about how economic change might address the problems of racial injustice we have seen in Ferguson and elsewhere. Although racial injustice and economic inequality are no doubt related, the former is a distinct problem from the latter, as was noted during the Occupy Movement. In January of this year, the Pew Research Center presented data showing that not only has economic inequality worsened since 1967 but that "the black-white income gap in the U.S. has persisted" since that time. Thus, although it is possible that "narrowing the gap" of economic inequality may partially and indirectly improve the problem of racial injustice, we ought not forget the specific issue of racial inequality in seeking economic change. To improve economic inequality, Standard & Poor's recommends investment in education. Here are some bullet points from the overview of a recent report:
A few days ago I posted a list of features that I take to be essential to an ideal report on placement, seeking comments and suggestions. One of the features I mention there is recency. All departments are likely to place more candidates given more time, but this slope is steeper for certain departments. Moreover, placement varies year to year. Thus, one's choice of time frame can substantially alter data on placement. This is the reason that Brian Leiter's numbers for NYU look better than mine (here and here)--I looked at the years 2012 to 2014 (3 years in the recent past), whereas he looked at the years 2005 to 2010 (6 years in the distant past).* Looking at NYU's placement page, one can easily see that the percentage of graduates placed in tenure-track jobs drops as one reaches the present. As I said, this is likely true for all departments. This means that if you look at data in the distant past, it might not matter what the length of the time frame is, but if you look at data ending in the recent past, the length of time frame makes an impact. That is, for NYU for the years starting in 2005, a 6-year time frame has 87% TT placement, a 5-year time frame has 90% TT placement, a 4-year time frame has 88% TT placement, and a 3-year time frame has 90% TT placement. But for the years ending in 2013, a 6-year time frame has 69% TT placement, a 5-year time frame has 65% TT placement, a 4-year time frame has 56% TT placement, and a 3-year time frame has 56% TT placement. Note that even the 6-year window ending in 2013 is associated with much lower placement than any of the windows starting in 2005. It seems obvious to me that we should favor more recent data, since they reveal which departments place students more quickly than others and since they are more relevant to students looking at graduate programs. Beyond that, it is not obvious just what length of time we should choose (3, 4, 5, or 6 years) or just which year we should use as the endpoint.
Yet, one's choice of time frame has a large impact on comparative placement data. Let's compare NYU's placement page to the placement pages of those departments that I found with these methods to have the highest tenure-track placement rates: Berkeley, Princeton, Pittsburgh HPS, and UCLA. If we look at NYU's worst time frame it comes out behind all the others (2010-2013: NYU 56%, UCLA 59%, Berkeley 63%, Princeton 65%, and Pittsburgh HPS 88%). If we look at NYU's best time frame it comes out ahead of all the others (2006-2009: NYU 94%, UCLA 67%, Berkeley 78%, Princeton 86%, and Pittsburgh HPS 93%). If, on the other hand, we look at multiple time frames, then a new type of comparison is possible. We can determine, for example, which department has the least low value for tenure-track placement, given any time frame in the period from 2005 to 2013 (with a 3-year minimum time frame and a 6-year maximum time frame). In that case, Pittsburgh HPS comes out on top. Its lowest value is 85%. In comparison, the lowest value for Princeton is 65% (2010-2013), the lowest value for Berkeley is 59% (2009-2012), the lowest value for UCLA is 52% (2009-2012), and the lowest value for NYU is 56% (2010-2013). So if we look at the least low placement for all of these time frames, NYU comes out second to last. Finally, if we look at the full range, from 2005 to 2013, NYU comes out in the middle (Pittsburgh HPS 93%, Princeton 76%, NYU 74%, Berkeley 70%, UCLA 65%).
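The window comparison above can be sketched in a few lines of Python. The yearly counts below are hypothetical placeholders, not the actual figures from any department's placement page; the point is only to show how the same department's tenure-track rate shifts with the choice of window.

```python
# Sketch of the rolling-window comparison described above.
# NOTE: the yearly counts are hypothetical placeholders, not real data.

def tt_rate(placements, start, end):
    """Percent of graduates in [start, end] (inclusive) placed in TT jobs."""
    grads = sum(total for year, (tt, total) in placements.items()
                if start <= year <= end)
    placed = sum(tt for year, (tt, total) in placements.items()
                 if start <= year <= end)
    return 100 * placed / grads if grads else None

# year -> (tenure-track placements, total graduates); hypothetical data
dept = {2005: (5, 6), 2006: (7, 7), 2007: (6, 7), 2008: (5, 6),
        2009: (4, 6), 2010: (5, 7), 2011: (4, 7), 2012: (3, 6),
        2013: (3, 7)}

# Every window of length 3-6 years within 2005-2013
for length in range(3, 7):
    for start in range(2005, 2014 - length + 1):
        end = start + length - 1
        print(f"{start}-{end}: {tt_rate(dept, start, end):.0f}% TT")
```

Even with made-up numbers, the output illustrates the pattern in the post: windows anchored in the earlier years yield noticeably higher rates than windows ending in 2013, so the "least low value" across all windows is a more robust comparison than any single window.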
Suffice it to say, these decisions make a substantial impact on one's results. For that reason, one should attend carefully to justifications on recency and time frame. I will remove the links to Brian Leiter's two posts on placement data here, since I am concerned that they will mislead students. If I had written those posts, I would certainly take them down knowing what I have made clear in this post (i.e. that the numbers for NYU are inflated for the very time frame that Brian Leiter chose to look at, relative to other departments). I have emailed Brian a link to this post.
As for my data, I use the years 2012 to 2014 because those are the most recent years and the years for which I have large data sets. (ProPhilosophy was kind enough to email departments directly in 2012 and 2013, which substantially increased the number of reported hires for those two years.) To go prior to 2012 I would have to either look at individual placement pages for all 118 departments, many of which do not have data of the sort I need, or use what I know to be a skewed sample from the Leiter Reports blog. I have made clear that any rankings I produce are a work in progress and should not be taken as authoritative. (That is one reason I post them to blogs, and not an independent website.) But as time goes on and this process is improved I will have to start making decisions about which time frames matter. I may well follow the lead of David Marshall Miller in reporting multiple time frames, since this might be helpful for students. Suggestions on this point are welcome. (The data that I used for this post are after the break. Feel free to suggest corrections where needed.)
*I hope that this does not need saying, but I am not picking on NYU here. One of my dissertation advisors was at NYU and one of my best friends is currently a student there. I am looking at NYU because it appears to be a focal point in Brian Leiter's criticism of my work. If one were to look at other measures beyond just tenure-track placement, NYU may well fare better than it does here.
Update (7/14/14): In order to satisfy the worry that NYU is particularly burdened by graduates of the JD/PhD program in this measure (2 graduates from NYU left academia for law in this time period, compared to 1 from Princeton, 3 from Berkeley, and perhaps 2 from UCLA), I compared NYU to these other programs while leaving out all those graduates who left academia. In that case, as I point out in the comment below, it is still clear that time frame matters and, in particular, that the time frame of 2005-2010 overly inflates NYU's record (2008-2013 puts NYU in the middle of the group, at 80%, whereas 2005-2010 puts it at 95%, square with Berkeley and Pittsburgh HPS, ahead of UCLA and Princeton). (It might be worth noting that with the same methods Fordham University placed 69% of its graduates into tenure-track jobs between 2008 and 2013.) See my comment below for details.
I applaud Brian Leiter's efforts to examine placement data in the past few days *Update 6/13/14: I have removed these links because I think that Brian Leiter's posts have the potential to mislead students. See my new post here*, as well as the efforts of David Marshall Miller and Andy Carson over the past few years. All of this is an effort to improve the profession and deserves recognition as such. I plan to continue reporting placement data next year and will likely post the report to an independent website. Below is a list of features that I take to be essential to an ideal report on placement, together with some ideas for improvement on my own work. Please comment below!
1) the original data: as far as I know this is missing from both Brian Leiter and Andy Carson's efforts. This is important because it keeps the analyses honest by opening them up to public scrutiny. I have provided links to my data and will continue to do so. Recommendations on format are welcome here.
2) the methods: key information is missing in Brian Leiter's presentation, such as the criteria for determining which placements are to "research universities and selective liberal arts colleges," but as far as I can tell David Marshall Miller and Andy Carson are clear and up front about their methods. I have tried to be clear about my methods, but I have received some emails that reveal shortcomings here. Recommendations welcome.
3) completeness: Brian Leiter's efforts, as of this moment, include only a few departments (that were not selected at random). An ideal report should include all the philosophy departments that have made placements of the type in question, which is something David Marshall Miller, Andy Carson, and I have all tried to do. What is missing from all of our reports is complete placement data. PhilAppointments is not a complete source, for example, but neither are placement pages. Further, placement pages are often missing key data points on placement (such as names, which help to identify duplicate candidates). Next year I aim to cross-reference PhilAppointments with individual placement pages. Recommendations on how to efficiently improve completeness are welcome.
4) recency: since these efforts are in their infancy, it is currently unknown what time frames are relevant. Recent data are ideal, so long as recency is balanced with completeness. Brian Leiter chose a time frame of 2005 to 2010, which I see as a drawback of his report. Although David Marshall Miller, Andy Carson, and I have all used the most up-to-date data, David Marshall Miller also looked at different time frames. In the future, with more data, the use of time frames should help us to determine how recent our data needs to be. Recommendations on how to proceed with time frames are welcome here, since next year the data set I have will be in its fourth year (2011-2015).
5) neutrality: Those collecting, analyzing, and reporting the data should be as neutral as possible with respect to hypotheses and results. I have concerns about this with respect to Brian Leiter's report, especially given the absence of 1 and 2. The fact that David Marshall Miller, Andy Carson, and I have performed this work on our own is also potentially problematic, even with the inclusion of the original data and methods. Over the next year I plan to form a task force to work on placement data, composed of several people who have reached out to me over the past week or so (but others are welcome). Having more people on the project should help with neutrality. Recommendations on this point are welcome.
When the NewAPPS bloggers first invited me to submit a guest post on my attention research as a graduate student, I decided to submit a post on the term "genius" instead. In the case that it was the only post I would write, I wanted the post to have maximum utility. After some thought, I decided to target the obsession with genius, thinking it a pernicious problem easily deflated. I am not alone in finding it to be a problem. In fact, I may well have been alerted to the problem by Eric Schwitzgebel's blog post on "seeming smart." Commentators on the problem have looked at everything from its impact on women and racial/ethnic minorities to its impact on child prodigies, some of whom have written against it in favor of work-based praise (and for good reason). So, I was half-right: I was right to think it is a problem, but I was wrong, of course, in thinking the problem could be easily deflated. I am going to give it another stab, this time aiming closer to the heart of what I find to be the problem--the way that the terms "genius" and "smart" are used to silence minorities. I know about this first hand--just last week Brian Leiter implied that I was not smart enough to understand a particular distinction that he felt I had overlooked.
Update (6/9/2014): I urge skeptical readers to examine these much more respectful posts, where there is no mention of intelligence, for the sake of comparison: on David Marshall Miller, on Andy Carson, and again on Andy Carson. These job market analyses were performed after my first analysis in April 2012 and have many similar elements. Furthermore, the content of Brian Leiter's criticisms of these analyses is much the same, but without the damaging remarks about mental capacity, intention, etc.
I recently signed a pledge with the aim of being more respectful toward my colleagues and of trying to uphold a culture of respectfulness in our profession. Following conversation over a previous post, I have come to the belief that I should provide a safe space for people to discuss departmental rankings in philosophy. When I made critical comments at the Leiter Blog on the inclusion of women among the rankers of the PGR in 2011 as a graduate student, I felt shut down. My comments were edited without permission in a way that made me appear less reasonable, while the original post and other comments were edited to make my interlocutors appear more reasonable. I think that it is healthy to evaluate ranking methodologies critically and openly and I think that there must be a public space for this. Since I have already earned the ire of those who appear to be opposed to a public discussion, I am a good candidate for putting forward a post that will allow for discussion. I will thus allow anonymous postings and will aim to respect that anonymity both privately and publicly (except when required by law or conscience to do otherwise).
I will start with some of my own thoughts: I think that reputational information is helpful and important, but that it would be better to combine this information with data on placement, publications, and other such objective measures. (With this in mind, I sent my original findings on the job market to Brian Leiter and Kieran Healy in April 2012 without response.) An ideal ranking, in my mind, would be customizable. The viewer would have to choose metrics before a ranking would be created. I am open on what the relevant metrics might be. This is where you come in. Should we have rankings at all? What metrics do prospective graduate students care about (a variety of voices is of value here)? How should this work be completed, and by whom? Comments that appear to violate the norm of respectfulness will not be admitted as is, but anonymity is both welcomed and encouraged. Update: commentators should feel free to leave off their email addresses when posting comments.
Update: Creating (or updating) a ranking of this kind, with multiple objective values, is beyond my current capabilities. I fully and wholeheartedly welcome someone with more time and competence than me to take on this task. Better yet, I think, would be a task force involving those familiar with the PGR, since they already have lots of expertise. I am welcoming discussion here not because I plan to create a new ranking, but because I think it is important to have a discussion about all such rankings in the open. I am limiting my personal contribution to the placement data for now.
Most readers have probably been following the controversy involving Carolyn Dicey Jennings and Brian Leiter concerning the job placement data post where Carolyn Dicey Jennings compares her analysis of the data she has assembled with the PGR Rank. There have been a number of people reacting to what many perceived as Brian Leiter’s excessively personalized attack of Carolyn Dicey Jennings’s analysis, such as in Daily Nous, and this post by UBC’s Carrie Ichikawa Jenkins on guidelines for academic professional conduct (the latter is not an explicit defense of Carolyn Dicey Jennings, but the message is clear enough, I think). UPDATE: supportive post also at the Feminist Philosophers.
It goes without saying (but I’ll say it anyway) that we, NewAPPS bloggers, fully support Carolyn’s right to post her important analyses of job placement data, and deplore the tone and words adopted by Brian Leiter to voice his objections to her methodology. (This is not the first time that episodes of this kind involving Brian Leiter and junior, untenured colleagues have occurred; I for one deem such episodes to be inadmissible.)