
10 March 2013


Comments on "Are philosophers more biased than other academics?":



Neil Levy

Are you committed to the prediction that reading this very post will tend to lead to worse behavior? Consciousness raising more generally?

John Protevi

Hello Helen, I agree with your "structure the environment" suggestion as a way to overcome philosophers' assumption that solitary reflection is enough. (It's especially important here, as not only is solitary reflection not enough, it actually makes things worse!)

Here's a question about the state of the research: are we better at detecting implicit bias in the behavior of others? If so, then simply getting in the habit of asking other folks to check your work might be very helpful. Perhaps impractical in the case of grading, given the volume. But in the case of invites to conferences and essay collections and reference works, and in the case of admissions and hiring, then this should be helpful, I think.

Helen De Cruz

Hi Neil: I do not think that becoming aware of biases is necessarily a bad thing. It depends on how we react. If we react by saying "Well, others may have such biases, but not me", that is clearly a bad situation (perhaps worse than if we hadn't been aware of the biases in the first place). But if we react by taking measures that structure our environment, such as the Gendered Conference Campaign, the effects will be more positive.

Helen De Cruz

Hi John: It turns out that people can learn to spot biases quite easily in others. In the work I cited, participants who learned about biases (e.g., the above-average bias) could readily spot them in others. And indeed, spotting bias in others seems to come naturally to us (e.g., the LoR writer who asked readers to forgive his or her political incorrectness prompted a burst of outrage). But it remains difficult to see it in ourselves. This is why vetting of letters of recommendation by placement directors is so important: things like doubt raisers (which are more prevalent in LoRs for women than in those for men) can be spotted more easily by others. But if the person who double-checks shares the same biases as the person who does the grading etc., it will remain difficult to counter them.

Neil Levy

Here was the thought. I read, say, the post about the now infamous reference letter. I get outraged. Unconsciously, I congratulate myself on how enlightened I am. This makes it more likely that I act on my implicit biases (in the short term). Couldn't an analogous phenomenon occur on reading your post?

Helen De Cruz

I see what you're saying, Neil. It is quite possible, and I have seen such things in action. It depends on what you do subsequent to your reaction. You can be outraged (I think most people are), congratulate yourself, and go on as usual. Or you can be outraged and take some measures to make sure you don't fall victim to a similar thing, for instance by letting someone else go over your letters of reference for male and female applicants.
One reason I wrote the post is that I hope we can see that merely recognizing biases is not enough to overcome them. I don't think they are insurmountable, but just knowing about them is not enough (hence the piece about structuring our environment). In fact, I thought the author of the blogpost on What's it like exhibited a rare instance of self-knowledge by acknowledging that s/he found her/himself biased by the letter of recommendation.

Neil Levy

So consciousness raising makes things worse unless it leads to the kinds of alterations in practices that John advocates. Would that be fair?

Helen De Cruz

While I would like to see more empirical work before I would argue for such a strong conclusion, I think that just raising awareness by itself, without action, can indeed lead to worse outcomes, given that people are bad at spotting their own biases. However, what it *can* accomplish is to show that philosophy as a discipline has a problem. Before the What's it like blog, I think few of us had any idea that sexual harassment is so prevalent in our discipline. I've witnessed (indirectly) a few incidents, but I thought they were isolated and that such cases were the exception. Now I think we realize that there is a problem in the profession. But again, our awareness of this will accomplish little if we do not couple it with action (e.g., policies in schools that make sure sexual harassment will be penalized without too much collateral damage for the victims).

Eric Schliesser

Helen, before the "What's it like blog" many (of us) were aware of the prevalence of the problem. The blog just gave us a place to point to when questioned by others.

Neil Levy

I must say, prior to "what it's like" I was well aware that philosophy had a problem, but did not seriously entertain the idea that harassment was a big part of the cause. I thought it was likely to be far more subtle kinds of behavior. Now I take much more seriously the possibility that crude harassment is a big part of the story.

Eric Schliesser

Fair enough.

Tenured Associate Professor

While my department was involved in a very ugly tenure case (mine), they were sure to have Sally Haslanger out to give a talk. They were very excited to have her on campus and were very moved by her talk. Thus, in the course of our two job searches this year, they were careful to interview and bring to campus a number of very accomplished female candidates. *Unintentionally,* all the female candidates were invited to campus during the week and all the male candidates were invited to visit during our regular colloquium series on Fridays. This meant that faculty and graduate students were guaranteed to have the time to meet with all and only the male candidates. As a result, the male candidates' talks were much better attended, as were their meetings with faculty and graduate students. Of course, this was a mere oversight. But when I mentioned this to the department, it was quickly dismissed by one of my colleagues since -- after all -- we [had earned moral license] by having offered positions to several female candidates. I should add that we have been turned down by two female candidates thus far.

Ingrid Robeyns

Helen, Eric and Neil:

I wrote on Crooked Timber a few days back that I've not seen cases of sexual harassment in academia, nor can I recall that anyone has ever mentioned anything to me (students, colleagues, etc.). I am with both Helen and Neil in that I was badly surprised and shocked at the stories at 'What it is like'. I try to understand why I can't remember any cases, especially since it may entail a gross naivety on my part. Yet I've also wondered whether my definition of SH may be narrower than it should be. I understand SH as only harassment whereby one makes sexual advances, or remarks about looks, beauty, sex appeal, etc. General remarks that are about women only (e.g., statements of statistical discrimination, such as 'surely women will not be interested in/capable of doing XYZ'), or remarks that for example portray female workers as (bad) mothers, I have not included under sexual harassment. Should these be included under 'sexual harassment' too? Thanks for clarifying.

Helen De Cruz

Hi Ingrid: the cases in What's it like I was referring to could really be classified as sexual harassment in the narrow sense of the term. The other cases you refer to are clear examples of bias (e.g., statements about women in general), but I wouldn't classify them as SH.

Philip Kremer

In dealing with your or your teaching assistants' potential biases against students (based on gender, ethnicity, how they dress, who their friends are, etc.), anonymous grading should be standard. It is easy simply to instruct your students to put their student numbers, but not their names, on their papers. This won't always work: sometimes, if a student has repeatedly come to office hours to discuss an assignment, you can tell straight away who wrote some particular finished product. But, in my experience, that's the exception rather than the rule. I've often found myself surprised, when handing the assignments back, at which student wrote the best essay. A related trick is to require the same font, line spacing, and margins for the paper, with a uniform citation style. I explain to the students that I have no special affection for Times Roman 12 pt double-spaced with one-inch margins, but that it helps me to evaluate a paper based on the content if it is submitted anonymously in a completely standardized format.

It occurs to me now that one way to address bias in job candidate files -- this is obviously much harder -- would be to have someone anonymize the writing samples, which would then be read by a committee that knows nothing about the authors. (I'd have to think about how actually to implement this.)
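For what it's worth, the mechanical part of such a scheme is straightforward. Here is a minimal sketch (the function name, file layout, and CSV key format are all hypothetical, not anyone's actual procedure) in which someone outside the committee assigns each writing sample a random ID and keeps the key to themselves:

```python
import csv
import pathlib
import secrets
import shutil

def anonymize_samples(src_dir, dst_dir, key_file):
    """Copy each writing sample under a random ID; keep the name mapping private."""
    src, dst = pathlib.Path(src_dir), pathlib.Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    with open(key_file, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["anonymous_id", "original_file"])
        for sample in sorted(src.iterdir()):
            if not sample.is_file():
                continue
            anon_id = secrets.token_hex(4)  # random 8-character hex label
            shutil.copy(sample, dst / (anon_id + sample.suffix))
            # The key file stays with the non-committee member, never the readers.
            writer.writerow([anon_id, sample.name])
```

The committee would then see only the relabeled files; deanonymization at decision time goes through whoever holds the key file. (Scrubbing identifying information *inside* the documents themselves is, of course, the genuinely hard part.)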

Tenured Associate Professor

I have had undergraduates, graduate students and faculty confide in me over the years. I myself was told repeatedly that I didn't need a raise because I didn't have children. I was told that I didn't need a raise because I didn't have a family. I was told that I would not 'grow up,' 'act like an adult' or 'dress appropriately' until I had real responsibilities -- children. One of my colleagues was told that my teaching evaluations should be discounted because the students thought that I was 'good looking' NOT that I was a good teacher. But my male colleagues were not told such things. I consider these examples to be part of a long pattern of gender discrimination, and whether they are is determined, in part, by whether a reasonable person would count these things as such.

Tenured Associate Professor

Reply above is to Ingrid Robeyns.

Taylor Murphy

It may even be worthwhile to consider preprocessing applicant documents. Thus phrases that call attention to the gender of the applicant (or other such issues) could be modified. "She is smart and pretty" —> "The applicant is smart" etc.

Now, it appears to me that this would be able to help in some cases, and would remove issues with stereotypes being flagged in the process. Would there be anything objectionable to such a procedure?
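A crude sketch of the kind of rewriting Taylor describes might look like the following (the word lists and substitution rules are purely illustrative assumptions; a real tool would need proper natural-language processing rather than regular expressions, since e.g. 'her' is ambiguous between possessive and object uses):

```python
import re

# Illustrative (and deliberately naive) rewrite rules: gendered pronouns
# become a neutral referent, and a few appearance terms are dropped outright.
PRONOUNS = {
    r"\bshe\b": "the applicant",
    r"\bhe\b": "the applicant",
    r"\bher\b": "the applicant's",
    r"\bhis\b": "the applicant's",
}
APPEARANCE_TERMS = [r"\bpretty\b", r"\bbeautiful\b", r"\bhandsome\b"]

def _neutral(replacement):
    # Preserve sentence-initial capitalization when substituting.
    def repl(match):
        if match.group(0)[0].isupper():
            return replacement[0].upper() + replacement[1:]
        return replacement
    return repl

def preprocess(text):
    for pattern, replacement in PRONOUNS.items():
        text = re.sub(pattern, _neutral(replacement), text, flags=re.IGNORECASE)
    for pattern in APPEARANCE_TERMS:
        # Also swallow a preceding "and" or comma so the sentence stays readable.
        text = re.sub(r"(,?\s*(and\s+)?)" + pattern, "", text, flags=re.IGNORECASE)
    return text
```

On the example above, `preprocess("She is smart and pretty")` yields `"The applicant is smart"`.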

Recent Job Marketeer

A centralized system (akin to the common application for colleges) could serve the standardizing function that Phil Kremer is talking about (you paste the text into a window instead of uploading a document) as well as saving applicants the ridiculous trial of filling out 50 to 100 separate applications. I realize this would take a significant concerted effort across departments, the APA, and HR folks, but this is the kind of structured change that could actually make a big difference. The current system is a tremendous waste of everyone's time, and is structured so as to allow the cues for implicit biases to persist.

Ed Kazarian

I want to second John Protevi's point re: the importance of Helen's emphasis on 'structuring the environment' and maybe extend it a little bit (apologies if all of this turns out to have been obvious to everyone but me).

In my own experience with relevant sorts of circumstances (not necessarily involving philosophers, and not involving bias directed at me), the observation that cognitive sophistication has not decreased but rather increased bias blind spots seems dead on. One way that I have described what seems to be happening is to say that people (and I think in many cases well-intentioned, sincere people) have convinced themselves that they have become 'enlightened' on issues of bias, and thus felt that their evaluations cannot be biased but rather must be purely motivated by (what they take to be) real, substantive reasons that they put forward in order to justify them.* All of this while their evaluations and their actions may well be replaying common patterns of bias in ways that are easily recognizable to someone who does not share those biases. Unfortunately, because of the conviction I refer to above, pointing this out does not always or easily lead to a recognition of the problem on the part of the person in question.

By contrast, I think that any success that I've had in learning to see past my own biases (and I take that extent to be, at best, very limited and provisional) has been the result of a number of often very uncomfortable and difficult conversations with people who have perspectives sufficiently different from mine to show me what I'm missing. I emphasize that these conversations have been difficult and uncomfortable. They have not, at least not in my case and I suspect not in anyone's, been easy or felt good. And the net effect of such conversations certainly isn't to have convinced me that I'm now 'enlightened,' but rather that I'm likely to still be missing a lot, and to reinforce for me the need to keep seeking out different, balancing perspectives. So if I've gotten any better at avoiding bias, I would say that it's almost entirely because of this reinforced tendency to seek other views -- and thus I take the point that it cannot be an individual and solitary project to be absolutely correct.

And this is why I think the point that the progress of the field as a whole towards a less biased state depends on the cultivation of environmental structures that keep a plurality of voices in play is crucial. But I also want to add that I fear functional inclusiveness is going to be a much harder thing to achieve than representation. In fact, and granting that we're a long way from even having adequate representation, I think attempts to go beyond bare representation will often meet a great deal of resistance because of the discomfort that they are likely to provoke.** Given that, it seems to me that a crucial part of the project of 'structuring the environment' against bias will probably be working toward redistributing positions of power in a way that will make us rather more compelled to listen to a variety of perspectives than we are at present.

*I think the situation I'm describing is probably just a longer-winded version of what Helen meant in the example in her reply to Neil Levy, but I'd add that it may well take the form of someone recognizing that they once had biases, but have gotten over them.

**I think some of the things that Helen says in the original post about 'moral self-licensing' and that 'Tenured Associate Professor' says in her replies illustrate the kinds of resistance I have in mind: narrowly 'justified' (or at least reasoned) dismissal, deemphasis or evasion of the concerns or interests of those on the 'wrong side' of bias—for instance, the 'retreat to proceduralism' wherein we are told that a certain procedural 'non-bias' is the best that one can do, since one doesn't want to 'lower' one's 'standards' and so on. This kind of thing is why I would say that any effort at 'structuring the environment' needs to have pretty robust aims. Or in other words, as long as the point of view of institutional authority remains largely structured by the kinds of biases currently found there, major problems will persist.

Bruno Verbeek

I read somewhere that the level of confidence in one's own judgments, including the belief that one's judgments are free of bias, is highest among...
white middle-class males between 35 and 55. Given that the hiring committees of most departments to a large degree consist of exactly that constituency, you can predict the results.


Tverb

What is the basis for your hypothesis that philosophers have higher cognitive sophistication than, say, biologists/physicists/social scientists (especially sociologists, psychologists, or political scientists, who think about these things all the time)? The old philosophical "it seems to me that" is not quite sufficient....

Helen De Cruz

Hi Tverb: As I mentioned in the blogpost, philosophy majors seem to score well in verbal and analytic skills, and those skills are precisely needed to solve the tasks that measure cognitive sophistication. Perhaps the determinant is not so much cognitive sophistication, but epistemic humility. The idea that others, but not oneself, are subject to bias could be more easily exhibited by those who are in the business of thinking. I agree we need more empirical data, and I'm very curious about all the empirical work that will come out about implicit biases in philosophy, by Jennifer Saul and others.

Neil Levy

Old data, but one study found that while 84% of physicians believed that their colleagues were influenced by commercial samples and gifts, 61% believed that they were themselves immune. I wonder whether we would really find a similarly large bias blind spot in philosophers. Here's a hypothesis that might (also) play a part in explaining why philosophy is stuck in the past, while almost every other discipline has made substantial progress on gender: it is due to the low status of philosophy. Highly motivated women rightly saw that changing social norms required changing perceptions of the law and medicine and, to a lesser extent, science. They therefore set out to conquer these areas despite the strong resistance they encountered. They succeeded, to a greater or lesser extent. At least they succeeded in changing the face (literally) of these disciplines. Due to the low status of philosophy, women did not have this additional motivation to succeed against resistance.

Helen De Cruz

It might be, but that doesn't explain why gender balance is substantially better in other low-prestige disciplines. I'm looking at these figures here collected by Kieran Healy on the percentage of doctorates awarded in different disciplines:
There is nothing particularly prestigious about, say, art education or nursing science, yet women flock to these disciplines to get PhDs. We could say that the high percentage of women in those fields is explained by the fact that they historically attracted more women, whereas philosophy did not, but this does not explain why disciplines like art criticism (originally quite male-dominated) have improved more in terms of gender balance.

Neil Levy

Art criticism (say) may be less identified with maleness in the minds of people, so that women feel less stereotype threat when they study it. So the modified hypothesis is this: relative prestige might help to explain why women overcome internal and external resistance to conquer some male-identified areas and not others.

Eric Schwitzgebel

Interesting post! I wonder if there's a way to test this idea empirically. First pass thought: Are departments more likely to hire a man after having made an offer to a woman than after having made an offer to a man? And if so, do philosophy departments show a larger effect size in this direction than do other departments in the humanities or social sciences?

Neil Levy

If it is a dispositional question - if philosophers, due to their cognitive sophistication, are more prone to the bias blind spot - it should be an easy matter to compare PhD philosophers against people in other disciplines. Of course, blinding them to the hypothesis is going to be the hard part. If, though, it is not a question of a stable characteristic, and it might not be, then some kind of discourse analysis will have to be used. My thought about how it might not be dispositional is this: we might get philosophers giving themselves moral permission more frequently because they more frequently reflect on relevant questions, and as a result of so reflecting give themselves permission to be biased. Some kind of beeper study might be done to get data on the content of what philosophers think about, as compared to people in other disciplines.

Michael Brownstein

Great post! I agree that the emphasis on "structuring the environment" is crucial. Alongside this, there are useful and effective attitude change strategies that individuals can use to help regulate the activation and expression of their own implicit biases. While changing the world is paramount, we can "work on" ourselves too.

For example, you can adopt an "implementation intention," which simply involves committing to an “if-then” plan that specifies a cue and a response. This form of planning is much more powerful than merely committing to a goal. It works for dieting, exercising, recycling, fighting addiction, maintaining focus (e.g., in sports), avoiding binge drinking, performing well on memory and Stroop tasks, and regulating implicit biases. See, for instance, Gollwitzer et al., 2005; the Gollwitzer and Sheeran, 2006 meta-analysis; Stewart and Payne, 2008; Mendoza et al., 2010; and Webb et al., 2010.

There is also evidence to suggest that merely being exposed to counter-stereotypical images of members of socially stigmatized groups has strong effects on attitudes and behavior. See Dasgupta and Greenwald, 2001; Wittenbrink et al., 2001; and Gawronski et al., 2008.

And while the controlled experiments involve a lot of trials, Kerry Kawakami's work on "approach training" is also very promising. Kawakami et al., 2005, 2007, 2008; Phills et al., 2011.

These are valuable techniques that don't take away anything from the effort to change one's environment. But they are things we can do with virtually no cost to ourselves while trying to change our environments. I don't know of any evidence about the use of these or other attitude change strategies specifically with philosophers, but I would love to see the results of that! And while there isn't a lot of evidence yet about the durability of the effectiveness of these interventions, there is some promising evidence that the effects are in fact durable (Webb et al., 2010; Dasgupta and Asgari, 2004; Devine et al., 2012), and more research is (hopefully, pending funding!) on the way.

Helen De Cruz

Hi Neil: that looks like a promising idea. Rather than being more cognitively sophisticated, philosophers might have more reasons to believe (or at least think they have reasons to believe) that they are unbiased, and this could in effect increase the bias blind spot. Perhaps being regularly involved in the reflective practice of philosophy facilitates the sort of moral self-licensing I described in the post. In fact, I think there is some empirical evidence for this within the discipline, from Eric Schwitzgebel's work on ethicists, who do not do better in actual ethical practice than non-ethicists (one of the studies even suggested they do worse; e.g., more ethics books go missing from libraries than books from other areas of philosophy).

Neil Levy

Michael, JD Trout has given persuasive arguments why the kind of debiasing strategies you recommend have very significant limitations. You need to identify when these strategies should be implemented, and to what degree, and that seems as difficult as the original question they are designed to solve. See

Helen De Cruz

Hi Michael: That is an interesting angle. I think it is important to contextualize West et al.'s conclusion that there is "no evidence whatsoever for the notion that people who are more aware of their own biases are better able to overcome them". That conclusion, by itself, would be overwhelmingly pessimistic, and it would feed into Neil's earlier worry that calling attention to biases might be detrimental, because they can increase as a result of moral self-licensing.
Rather, I think their conclusion should be phrased as follows: "being aware of bias, *by itself*, does not attenuate bias", just as being aware that I should exercise more does not, by itself, lead me to exercise more, and being aware that refined sugar is unhealthy does not, by itself, help me to avoid it. However, awareness is an important precondition for implementing effective strategies that can help to lessen it.
The techniques you hint at have proven to be very effective in many domains. I think that such self-training techniques are in effect also a form of extending cognition, but in this case the extension is achieved not by structuring the environment but by restructuring our minds. We know from lots of other contexts that this form of restructuring (e.g., learning to play a musical instrument) effects significant changes in cognition over the long term.

Michael Brownstein

Hi Neil,

Thanks for the Trout paper. I'll look forward to reading it more carefully, but from a quick read, I couldn't find any arguments against attitude change strategies that are very persuasive in the context of implicit racial or gender biases. From what I could tell, Trout argues:

(1) So-called "inside" strategies for debiasing (i.e. attitude change interventions, I presume) are very demanding. One must be (a) motivated to change; (b) aware of one's biases; and (c) aware of the influence and magnitude of the effects of those biases on one's judgment and behavior. Inside strategies are costly because one "must also invest effort in generating specific alternative outcomes, and in order to do so they must have the cognitive capacity, attentional focus, and undistracting environment to carry it out." (419)

(2) Inside strategies are also socially costly, because deploying them effectively would require "that the government unleash a veritable army of teachers into our schools and businesses, in the hopes of reaping even the modest benefits that inside debiasing strategies offer." (420) So Trout concludes, "anyone who assumes the adequate efficiency of debiasing through individual training is either ignoring the magnitude of institutional intervention required for such educational programs, or ignoring the cognitive costs to the individual of correcting such biases" (433).

And finally, you add a third claim:

(3) Debiasing strategies are significantly limited because one needs to identify when to use them, and to what degree.

Just to keep things simple, let's focus on using implementation intentions.

(1) is not persuasive because using implementation intentions is not demanding in any of these senses. One of the main problems with implicit biases is that they persist in and continue to affect the behavior of individuals who are (a) motivated to be egalitarian; (b) aware of being biased; and (c) aware of the impact of their biases. So while it is true that (a), (b) and (c) are conditions for effectively using an attitude change strategy, it is also true that in many cases these conditions are already met. If we are talking about philosophers who know about their own biases and want to change them, it is no argument against using an "inside" strategy to say that individuals can only use these strategies if they know about and want to change their biases! Trout's other claim about the costs of these kinds of interventions--that they require the investment of effort and attention--is, at least in the case of if-then planning, false. If-then planning is not cognitively depleting and is effective even when individuals are already depleted or distracted (Cohen et al., 2008; Gollwitzer et al., 2008).

(2) does not look persuasive to me compared with the kinds of resources that would be required to restructure our environments. I also think it's important to keep two things in mind here. First, I never proposed an either/or. We should try to restructure our environments AND regulate our own biased attitudes "from the inside." Second, many universities and governments are ALREADY putting tons of resources into anti-discrimination programs. The problem is that there is little evidence that these programs work. (See Paluck and Green, 2009 for a good review.) But there is lots of promising evidence that attitude strategies do work.

(3) is right, though I do not think the problem of knowing when and how to use an attitude change strategy is tantamount to the problem of combating implicit biases. The point, I think, is that one must have a fair degree of practical wisdom w/r/t knowing how to use the strategy. In the case of if-then planning, empirical research can help with this concern. For example, in order to adopt a plan that is applicable to a broad array of situations, one can use one's own feelings as the critical cue specified in the if-then plan (e.g. "if I feel anxious in an interracial interaction, I will think 'friendly'"). It is an open question, for example, how specific or general the critical cue must be in an implementation intention. But there's no evidence that I know of to suggest that the cues cannot be very broad, and thus applicable across a wide range of situations.

Michael Brownstein

Hi Helen,

I agree! I think awareness is analogous to motivation in this context. Being motivated to change one's biases by itself does not attenuate bias. But being motivated is a condition for self-training, as is awareness.

I would be curious to know more about how you think self-training is a form of "extending" cognition.

Neil Levy

Hi Michael,

I agree that for some purposes implementation intentions can bypass these worries. But I need to be able to form an implementation intention with the right content, such that by acting on that content I act in ways that avoid my implicit biases. So far as I can tell, though I can automatise behaviours, I can't automatise bypassing implicit biases. Implementation intentions trigger overlearned behaviours; the associations involved in implicit attitudes are overlearned.

Michael Brownstein

Thanks for the reply Neil. I'm not sure I'm totally following your concern.

Is it that specifying the right if-then plan is hard/impossible because we do not know which features of our situations act as cues for our implicit biases? This strikes me as an empirical question. And it seems to me that we do know something about which features of our situations act as cues for our implicit biases.

Is it that if-then planning has limited value because one would have to adopt ad hoc if-then plans for each anticipated self-regulation demanding situation? I agree that this limits the scope of if-then planning. One plan won't cover all situations, and many self-regulation demanding situations are unanticipated. But some plans will have effects across a variety of situations, even if the particular situation isn't anticipated.

Is it that there is an important difference between self-regulating the activation of implicit biases and eliminating/changing the biases themselves? I agree that this is an important difference. It's hard to tell from the literature whether any of the interventions I mentioned are actually changing one's biases themselves. From a practical point of view, though, I don't think this speaks against using any of these self-regulatory techniques.

Or, finally, is it that one can't automatize one's use of attitude change strategies? One can't become a "habitual egalitarian," in other words. I think this is a key question, but I don't see why, with practice, agents can't automatize their desired egalitarian responses to others, or why using psychological tricks like if-then planning can't be part of their way of practicing. FWIW, Gollwitzer calls implementation intentions "instant habits," though perhaps this is too strong.

Neil Levy

It's a worry about the content of the II. What do I resolve to do, when given the cue? Be even-handed? I don't know how to do that! Or rather, I don't know how to make an II which will help. I might resolve to think about my implicit biases given cue c, or to try to be even-handed given cue c. But I doubt that resolving these things will help. I'm really not seeing what helpful II to form.

Dan Dennis

It is worth noting that the fact that some STEM subjects have significantly increased the proportion of women taking PhDs, and have a higher proportion of women taking PhDs than philosophy, does not necessarily mean that those in power in those subjects are less biased against women than those in philosophy. It may simply be that in STEM subjects there is less scope for bias to operate – because it is more common for there to be right and wrong answers. If a woman takes a chemistry exam which is marked blind and she gets better marks than her male peers – well, when deciding who to admit to your PhD program you would have to be very heavily biased to admit those male peers ahead of her. You would have a hard time justifying your decision to colleagues etc. In comparison, a biased person reading a woman’s writing sample in a PhD application or job application might be able to simply claim he/she prefers the male applicant’s essay, and it would be tough to prove this preference was down to bias.

Perhaps the most interesting comparison is between psychology and philosophy. One would imagine that psychology is a subject with scope for a fair amount of judgement, and also the subject where those in power are most aware of implicit bias. So if the bias blind spot had a significant effect on the distribution of women students between subjects, one would expect psychology to have a low proportion of women. However, psychology has a large proportion of women students.

Of course it might simply be that the reason a much higher proportion of women study psychology than philosophy is that a much higher proportion of talented women are interested in psychology than in philosophy – just as the fact that a much higher proportion of women study ethics than metaphysics is most likely due to the preferences of women.

So whilst we should clearly employ methods such as those mentioned above (marking anonymously, sifting PhD and job applications anonymously etc) it may be that the greatest thing we could do would be to enthuse women about our subject.

Helen De Cruz

Hi Michael: There are several ways in which cognition can be scaffolded or extended. The best-known way is by restructuring the environment, as in e.g., the use of measuring devices, calculators etc (the active externalism Clark & Chalmers talk about). I think policies that clearly aim at results, such as trying to avoid all-male conferences (in the GCC) are a good example of this.
But second, there is also extended cognition in the sense in which, for instance, Merlin Donald uses it. According to Donald, we can, through simple routines and strategies, alter normal patterns of cognition. Playing a musical instrument is a good example: by instilling good habits, we improve our motor and musical abilities. I think Jonathan Haidt says something similar about our moral behavior, in the sense that he uses Aristotelian virtue ethics as a guideline for thinking about how we could cultivate our moral beliefs. (Obviously, you and Neil know more about this domain than I do.)

Michael Brownstein

Got it. Here I think you would have to go on a case-by-case basis, though again, I think some cases might have pretty wide applications.

For example, if you are worried that implicit gender biases are affecting your class lectures (e.g. the way you assess student comments), you might try the plan, "if a female student speaks in class, I will think 'competent!'" (This is one I have tried.) Or if you are worried that you might interrupt female colleagues more than male colleagues, you might try a plan like, "if a woman is talking, I won't!" (I heard this one from Louise Antony.)

Other plans are tailored to other goals. If you were a cop and were worried that implicit associations between black men and violence might affect your impulsive reactions in high-pressure situations, you might adopt a plan of the kind that has been used to combat shooter bias in controlled studies, such as "if I see a black face, I will think 'safe!'" Jack Glaser at Berkeley has been working with police departments along these lines.

These are just a couple of examples from the (now pretty voluminous) literature. Also, while specifying the right plan (or the content of the right plan) is not always easy, I don't think this concern arises in the case of other attitude-change techniques (like increasing one's exposure to counter-stereotypes).

Michael Brownstein

I hadn't thought of that second sense of "extending" cognition. Thanks Helen. I don't know Donald's work, but I'll check it out.

I'm not sure where I stand on the Clark/Chalmers view of extended cognition, but I certainly agree that some cases in this context will effectively blur the line between restructuring our minds and restructuring our environments. Over time, at least, I agree that the GCC is a good example of this.

Neil Levy

It seems I was suffering from a failure of imagination. The suggestions you make wrt the content of II's are plausible. Effectiveness will be limited to recurring situations, but there are lots of these. Thanks!

Michael Brownstein

Sure! There are lots of open questions, of course, but I think it's a pretty exciting area of research.

Eric Schwitzgebel

Neil: Interesting suggestion about beeping philosophers. Very time intensive, though. And in my experience, philosophers don't report a lot of thinking about philosophy, or other academic matters, when you beep them -- even when you beep them during philosophy talks!

Eric Schliesser

Dan Dennis, what is your evidence for your claim that the much higher proportion of women studying ethics than metaphysics "is most likely due to the preferences of women"?


Is it possible that women tend naturally to be more interested in art history or nursing than analytic epistemology or whatever? After all, that might just be what explains the fact that these subjects "historically were more attracting to women". Or are we to assume that the sexes must be naturally exactly alike in every relevant way – exactly alike in terms of motivations, interests, abilities, and so on? It would be nice to know what evidence there is for this very unobvious psychological claim, if that is what motivates all this posturing and denigration of philosophers (or some of them). Don't we need first to establish that the sexes really are interchangeable in some respect before trying to figure out how to eliminate the largely invisible "biases" that are supposed to make them behave differently?

Neil Levy

"Men make their own history, but they do not make it just as they please; they do not make it under circumstances chosen by themselves, but under circumstances directly encountered, given and transmitted from the past".
We know a lot about preferences and the causal mechanisms involved in shaping them. We also know a lot about the differences, actual and potential, between male and female brains. So we can answer your question. No, it is not possible (in the relevant sense of possible). Thank you for your contribution.

Dan Dennis

Hi Eric

The women philosophers that I know well are working in areas of philosophy that they have chosen to work in - rather than sexual discrimination having pressured them into working in those areas of philosophy. And I am not aware of studies showing sexual discrimination to vary between disciplines within philosophy.

However I am happy to be corrected if you have evidence to the contrary.

Of course, now that there is a significant proportion of women philosophers working in ethics, that might affect the amount of sexual discrimination there now is in ethics (and affect things like the atmosphere of group discussions, the likelihood of students studying the texts of women philosophers, etc.) – however, that does not explain why it has come about that a significant proportion of women philosophers work in ethics rather than in metaphysics, or why women philosophers are not distributed evenly across disciplines within philosophy.

I think ethics is the most important and interesting discipline within philosophy, so I don't see why it would be problematic that a higher proportion of women philosophers than men philosophers think likewise.

Neil Levy

Would you think it problematic if women chose ethics, in part, because they have been told (implicitly or explicitly) that metaphysics/logic/epistemology/what have you is too difficult or somehow inappropriate for women? There is a large empirical literature on how preferences are shaped (recall the OP is about what shapes perceptions). In the light of this literature, and of what we know about gender differences in the brain, it is very probable that preferences are shaped in this way. It is probable, too, that the same forces that shape preferences in philosophy play a role in women's subordination in the wider society.

Helen De Cruz

Jasper, Neil and Dan: I am well aware of evolutionary and other genealogical accounts of women's preferences, which purport to explain why women would be less interested in fields like metaphysics or formal epistemology. As someone who is very sympathetic to genealogical accounts and not opposed to positing something like innate gender differences, I find such explanations for the most part ad hoc and unconvincing. For one thing, there is significant cultural variability in the performance gap between girls and boys in science and maths. In many countries, girls outperform boys in science, but not in the US, see this graph:
If it's the result of some sort of inborn propensity, why would this vary across cultures? Why would girls fall prey to innate gender differences in the US but not in most other countries?
I blogged about this topic a while ago, suggesting that stereotype threat and other culture-specific factors are much better in explaining why women are underrepresented in philosophy, in particular in some of its areas:
To give an illustration (context: I have an undergraduate degree in art history): art criticism and art history *used to be seen* as something that men did - but now the field has become much more inclusive. We now tend to think of art as something feminine, but keep in mind that it used to be very prestigious, and many art academies did not even allow women to attend until well into the second half of the 19th century. By then the tide was turning, as painting and sculpture became less tied to the workshop and academy environment, but still there was resistance to recognizing women as competent painters, sculptors, art historians and art critics.

Taylor Murphy

Forgive me if I somehow missed it above, but there seems to be a mechanism that would explain why being reminded of biases would disproportionately affect philosophers and make things worse. I cannot find the papers on my tablet, but I recall a series of studies showing that when people were presented with agreeable false facts and were later exposed to corrections, they became more confident in the falsehood (especially if it was politically loaded). These made some news, so perhaps others remember them.

One possible mechanism was that when participants became aware of the conflict, they had to explain or justify the initial belief in spite of a possible error; by coming up with arguments and reasons, participants could become more confident in their initial decision.

If philosophers are constantly trained to provide reasons and arguments to justify conclusions (for a broad range of things), they'll be even better at this, and perhaps more affected by it too. Being aware of implicit biases just means that one ought to have even better reasons to reject a candidate than otherwise, and so begins a more intensive search for ways of justifying one's initial decision as a way of ruling out the bias. More threat of bias —> more extensive search for justification —> greater confidence in conclusions originally caused by implicit bias.

It is just that the process of looking for reasons backfires when the confound is causal and not justificatory. That is to say, it works well when one is concerned that one's inference from P to Q may have been faulty – just check whether Q would be justified even if ~P, or whether Q really follows from P. However, when dealing with confounding causes of one's more general judgments, the search for justification just amplifies the effects of the confounding cause.

Dan Dennis

Hi Helen, Neil and Jasper

In reply to Neil’s question, yes, naturally I would “think it problematic if women chose ethics, in part, because they have been told (implicitly or explicitly) that metaphysics/logic/epistemology/what have you is too difficult or somehow inappropriate for women.”

I can only speak for England, but here I don't think girls in society generally are commonly "told (implicitly or explicitly) that metaphysics/logic/epistemology is too difficult or somehow inappropriate for women," or more difficult than ethics, because metaphysics/logic/epistemology and their difficulty are not usually subjects of discussion in society at large. I would be surprised if other countries were different.

Once at university it is possible that some individuals express this view. Just as it is obviously wrong to claim that "metaphysics/logic/epistemology is too difficult or somehow inappropriate for women," it is also wrong to claim that metaphysics/logic/epistemology are more difficult than ethics. In many ways they are easier: the positions are often more clearly delineated, the subdisciplines within them are more compartmentalised so it is easier to specialise in a particular subdiscipline, and it is easier to come up with a clever argument or counter-argument which can then be encapsulated in a short paper. Ethics tends to be rather amorphous and all-encompassing, and coming up with new arguments is more difficult.

Moving on to the subject of Helen's original post, 'Are philosophers more biased than other academics?': the fact that fewer women study philosophy than many other subjects does not necessarily imply that philosophers are more biased than academics in those other subjects.

We should not assume that more women study in the arts, medicine, psychology and so on than philosophy entirely because they are repelled by philosophy and philosophers or their confidence is undermined etc. It *may* be – at least to some extent – because women are more attracted to those other subjects.

For instance, feminist care ethicists commonly see caring and empathy as (for whatever reason) more important to women than to men, and commonly see women as currently, on average, better at these activities than men. If they are correct, then this would provide an explanation for why women commonly choose ethics over metaphysics, and psychology and medicine over philosophy. If, or to the extent that, this is the case, it is not something to be bemoaned.

Obviously we should do whatever we can to eliminate bias, sexism, and whatever facets of philosophical discussion and teaching repel women. This may increase the proportion of women studying philosophy. It will not necessarily, though, have the result that as many women study philosophy as study psychology, medicine, art history, etc.

Neil Levy

Dan, the OP is about implicit processes; it is not necessary for communication of these attitudes that anyone say anything. Indeed, a mere statistical association is enough to affect implicit attitudes – hence the importance of role models. Implicit processes work associatively, so the fact that people don't have attitudes about epistemology may not prevent the formation of implicit attitudes about epistemology once it is encountered: it is sufficient that it be associated with things regarding which the person has such attitudes.

Neil Levy

I think the paper you half remember is this one:

The mechanism you suggest is along the lines of the one Eric Schwitzgebel has proposed to explain the apparent fact that ethicists behave no better than non-ethicists.

Neil Levy

Now that I think about it, it would be staggering if there were not gendered implicit attitudes with regard to knowledge, given the cultural stereotype that associates "female" with feeling and intuition, rather than justified belief. I don't know of an IAT measuring this, but I am willing to bet on the results.

Dan Dennis

Hi Neil

I took the title of the OP to be the topic of the OP: 'Are philosophers more biased than other academics?' I took the content of the first paragraph to be suggesting that the lower proportion of women in philosophy is evidence that philosophers are more biased than other academics. And then I took the claim that 'philosophers are less effective than academics from other fields in their ability to counter their own biases' as an explanation for why philosophers are more biased than other academics.

My replies question whether the lower proportion of women in philosophy is evidence that philosophers are more biased than other academics. If they are not more biased then there is no need to find an explanation for why they are.

I don’t doubt that academics of all stripes are prone to implicit bias.
