Next week, I will be speaking at a career development workshop for female graduate and master's students at Oxford. One of the things I want to focus on is the importance of building a broad, strong, supportive professional network.
Academia is built on trust and personal relationships. Rarely are people invited as speakers at conferences, workshops, etc. purely on the basis of merit. Merit is an important consideration, but people want additional information (e.g., is she a good speaker, will she turn up?) that they can acquire through their network, either by directly knowing the potential invitee or by knowing others who know her. People from one's network can alert one to opportunities, including job opportunities. Without a professional network, one has no letter writers (except the advisor and readers of the dissertation), and one is excluded from many aspects of academic life that thrive on trust and personal relationships, such as being a keynote speaker or contributing to an edited volume. Moreover, people from one's network provide opportunities for mentoring, friendship and mutual support in the very competitive environment that is academia. If one has to move state or country and leave friends and family behind, the ability to fall back on a network of professional comrades for support and friendship is very valuable. Therefore, I will advise the students to work on their networks early on, and to nurture them.
But there are problematic aspects to networking. Ned Dobos has argued that career networking is ‘an immoral attempt to gain an illegitimate advantage over others’. He makes clear that he doesn’t target emotional networking - plain old socialising - but specifically career networking, networking in the context of advancing one’s career, especially, but not uniquely, one’s job prospects.
It does not seem clear to me, however, that we can make a clean separation between career networking and emotional networking, especially in academia, where (for the reasons I outlined above) one's professional network and one's emotional (friend) network overlap to some extent. Dobos offers several arguments against the legitimacy of career networking. Insofar as the search process is meritocratic, he argues, career networking is morally objectionable because it attempts to distort the meritocratic allocation of positions, in a process analogous to bribery, or to 'earwigging' (attempting to persuade judges outside of the formal process). In both cases, the career networker obtains an unfair advantage. Is it possible to engage in ethical career networking?
If Elisabeth Lloyd's take on the female orgasm is correct—i.e., if it is homologous to the male orgasm—then FEMALE ORGASM is not a proper evolutionary category. Homology is sameness. Hence, male and female orgasms belong to the same category. The orgasm is an adaptation, whether male or female (and Lloyd should agree). It is not a spandrel or by-product.
I’ll get back to this in a moment, but first some background. There are five NewAPPSers who have a particular interest in the philosophy of biology: Roberta Millstein, Helen De Cruz, Catarina Dutilh Novaes, John Protevi, and myself. Aside from Roberta, each of us comes at it from a related area in which biological insight is important. For me, that area is perception. I have written quite a bit about biology, but my mind has always been at least half on the eye (and the ear, and the nose, and the tongue, . . .).
There is a divide among us with respect to a leading controversy in the field. Catarina is strongly anti-adaptationist and I am strongly adaptationist (perhaps because of my motivating interest in perception, which is exquisitely adaptive). Roberta, Helen, and John are somewhere in between, but likely closer to Catarina than to me. You can gauge where I stand when I tell you that, in my view, Gould and Lewontin’s 1979 anti-adaptationist manifesto, “The Spandrels of San Marco and the Panglossian Paradigm,” is one of the worst, and certainly one of the most mendacious, papers I have ever read in any field. Among the five of us, I am sure I am alone in this. Given all of this, my take on adaptationism with regard to the orgasm may get a hotly negative response from my co-bloggers. Nevertheless, I’ll get on with it.
[this post originally appeared in Aesthetics for Birds as a guest post] Hayao Miyazaki's animated film Ponyo features a tsunami. The tsunami is shown in its full threatening and destructive power, yet is rendered with great aesthetic sensibility. On several occasions, Miyazaki expressed his aesthetic delight in natural disasters, and defended it as follows:
There are many typhoons and earthquakes in Japan. There is no point in portraying these natural disasters as evil events. They are one of the givens in the world in which we live. I am always moved when I visit Venice to see that in this city which is sinking into the sea, people carry on living regardless. It is one of the givens of their life. In the same way people in Japan have a different perception of natural disasters.
Miyazaki is not the only artist to find inspiration in natural disasters. William Turner depicted with gusto a hapless steamboat struggling in a snowstorm. That we find aesthetic delight in natural disasters is puzzling. Why do we sometimes delight in natural disasters? And is it morally appropriate to do so? These questions have not often been addressed, because both aesthetics and psychology have tended to focus on positive and pleasurable aesthetic properties of nature, such as the delicacy of a flower, the slow twirling of autumn leaves, the majesty of a waterfall. But we are not only moved by nature (as Noël Carroll describes our intuitive and visceral response to it) in its delicate, pretty form, but also in its destructive form.
[note: this blogpost collects some scattered thoughts I hope to organize in article form sooner rather than later, for my British Academy project on religious social epistemology, see here]
There is an ongoing debate about what we should do when we are confronted with disagreement with an epistemic peer: someone who is as knowledgeable and intellectually virtuous in the domain in question as we are. Should we revise our beliefs (conciliationism), or not engage in any doxastic revision (steadfastness)? Epistemologists aim to settle this question in a principled way, hoping that general principles like conciliationism and steadfastness can offer a solution not only for the toy examples that are being invoked, but also for real-world cases that we care passionately about, such as scientific, religious, political and philosophical disagreements. However, such cases have proven to be a hard nut to crack. A referee once commented on a paper I submitted on epistemic peer disagreement in science that the notion of epistemic peer in scientific practice was useless. S/he said, "It works for simple cases like two spectators who disagree on which horse finished first, but when it comes to two scientists who disagree over whether a fossil is Homo floresiensis or Homo sapiens, the notion is just utterly useless."
That referee comment has always stuck in my mind as bad news for epistemology: if our principled answers in epistemology cannot be applied to real-world cases of epistemic peerage, the debate is of marginal value. There seems to be an easy escape: one common response, by both steadfasters and conciliationists, has been that we need not revise our beliefs in complex, messy cases if we have reason to believe that we have access to some sort of insight that our epistemic peer lacks. Van Inwagen, for instance, muses about his disagreements on some philosophical matters with David Lewis, whom he greatly respects: they both know the arguments, and both have considered them equally carefully. But ultimately, van Inwagen thinks:
I suppose my best guess is that I enjoy some sort of philosophical insight (I mean in relation to these three particular theses) that, for all his merits, is somehow denied to Lewis. And this would have to be an insight that is incommunicable--at least I don't know how to communicate it--for I have done all I can to communicate it to Lewis, and he has understood perfectly everything I have said, and he has not come to share my conclusions.
As one can see, the notion of epistemic peer simply dissolves here, since van Inwagen asserts that he has insights in the domain in question that are denied to Lewis. To take another example, suppose you are a Christian faced with a seemingly equally intelligent atheist. According to Plantinga (WCB), this disagreement is not a defeater to your beliefs, as you can confidently assume your dissenting peer "has made a mistake, or has a blind spot, or hasn’t been wholly attentive, or hasn’t received some grace she has, or is blinded by ambition or pride or mother love or something else". But how do we know when we are right? Is the "feeling of knowledge," the conviction that we are right, any indication that we actually are right? I will argue here that it is not, and therefore that simply discounting the other as an epistemic peer on this account is not warranted.
If you are a professional philosopher, it is likely that at some point you will have to write a grant proposal. There are many types of grants: small intra-university grants, large grants funded by the government, grants by philanthropic organizations. In some countries, like Belgium or the Netherlands, grants are the chief means of academic survival for young academics, as it takes at least five years before (if at all) one manages to obtain a permanent position. Earlier I wrote about how frustrating grants are, and how they pose the problems of the Red Queen effect and the tragedy of the commons.
I stand by this: collectively, grants have significant costs for the profession. But for an individual philosopher who wants to break into a new research area, and doesn't have loads of institutional funding already, projects are a great way to get in the game, to do the research you have always dreamed about doing, and to get the funding and time to actually do it!
How do you write a grant proposal? I've attended workshops on how to write them, talked to research facilitators, and consulted colleagues who have served on grant boards. I have also been an external referee for two granting agencies, so I have a sense of what makes a project look good. And I have received several grants myself. The following tips (below the fold) are distilled from these experiences:
A while ago I read "The New Theist," a particularly thorough CHE article on WL Craig's natural theology as apologetics. Together with Eric's recent blogpost on religion and changing epistemological fashions, it got me thinking about the role of natural theology in contemporary analytic philosophy of religion, and its wider role in apologetics. What I am wondering is whether analytic philosophy of religion (henceforth aPoR) really is as intellectually respectable as its proponents think it is, and how this connects to the role of natural theology within current aPoR as apologetics. I think these questions are related, somehow, although I would have to think more about how they relate. As someone who does aPoR and who has received Templeton funding, I am obviously not a neutral observer, but I hope to provide some balanced observations nonetheless.
Let me here observe too, continued CLEANTHES, that this religious argument, instead of being weakened by that scepticism so much affected by you, rather acquires force from it, and becomes more firm and undisputed. To exclude all argument or reasoning of every kind, is either affectation or madness. The declared profession of every reasonable sceptic is only to reject abstruse, remote, and refined arguments; to adhere to common sense and the plain instincts of nature; and to assent, wherever any reasons strike him with so full a force that he cannot, without the greatest violence, prevent it. Now the arguments for Natural Religion are plainly of this kind; and nothing but the most perverse, obstinate metaphysics can reject them. Consider, anatomise the eye; survey its structure and contrivance; and tell me, from your own feeling, if the idea of a contriver does not immediately flow in upon you with a force like that of sensation. The most obvious conclusion, surely, is in favour of design; and it requires time, reflection, and study, to summon up those frivolous, though abstruse objections, which can support Infidelity. Who can behold the male and female of each species, the correspondence of their parts and instincts, their passions, and whole course of life before and after generation, but must be sensible, that the propagation of the species is intended by Nature? Millions and millions of such instances present themselves through every part of the universe; and no language can convey a more intelligible irresistible meaning, than the curious adjustment of final causes. To what degree, therefore, of blind dogmatism must one have attained, to reject such natural and such convincing arguments?--Hume, Dialogues 3.
In her post yesterday, Helen de Cruz asserted that Cleanthes "makes an important empirical claim, namely that belief in a designer flows spontaneously, irresistibly and non-inferentially from our consideration of order in the natural world." Because Helen only quoted the sentence beginning with "anatomise the eye," she left me the straightforward rejoinder that according to Hume such anatomizing always presupposes expert judgment/taste/cultivation. In response, the up-and-coming Hume scholar Liz Goodnick pointed to more evidence for Helen's position. (I think it is a bit misleading to call that evidence "Later in Part III"--it is the very same paragraph, and part of a single, non-trivial argument--but strictly speaking Goodnick is correct.) I am afraid that in larger context the claim by Helen and Liz cannot be sustained, or so I argue below the fold in some detail (apologies).
In many respects, Hume was a cognitive scientist of religion avant la lettre: his Natural History of Religion, Enquiry, and Dialogues concerning Natural Religion contain bold hypotheses about the origins of religion in human nature (NHR), the reasons why people believe in and transmit miracle stories (Enquiry, "Of Miracles"), and the intuitiveness of intelligent design/creationism (NHR and Dialogues). Many of these hypotheses are still being explored by current cognitive scientists of religion (CSR for short), who share Hume’s taste for making bold conjectures about the cognitive, historical and cultural factors that underlie widespread religious beliefs and practices. Recent Hume scholarship asks whether Hume thought that belief in creationism/intelligent design is a natural belief. The answer is not at all obvious, since Hume voices several seemingly conflicting opinions. In this blogpost I want to argue that Hume’s ideas about the intuitiveness of creationism/IDC are very relevant to cognitive science today, and that belief in intelligent design is not a natural belief, but that some of its constituent beliefs are.
The ideal of a pure language in which a pure, pared-down, unambiguous translation of the truths of pure mathematics can be effected deserves a more extended discussion than I have given it here. But I will limit myself to pointing out that this ideal language is very far indeed from the languages of man as conceived by Whorf; for to Whorf the least visible structures of a language, those that seem most natural to its speakers, are the structures most likely to embody the metaphysical preconceptions of the language community. On the other hand, the case of gravitational attraction does not at all demonstrate what Whorf asserts about Newtonian cosmology as a system, namely that the key concepts of the cosmology emerge smoothly from, or fit smoothly into, the structures of Newton's own language(s). Instead we find in Newton a real struggle, a struggle sometimes--e.g., in the General Scholium to Book III of the Principia--carried out in awareness of the issues involved, to bridge the gap between the non-referential symbolism of mathematics and a language too protean to be tied down to single, pure meanings.--J.M. Coetzee (1982), "Newton and the Ideal of a Transparent Scientific Language," Journal of Literary Semantics.
In recent philosophy the Whorf hypothesis is primarily an object of curiosity as background to Kuhn's Structure (and maybe Quine's Word and Object), although two of my favorite philosophers, Lieven Decock and our very own Helen de Cruz (and a few others), work on it. (Undoubtedly part of the lack of interest is the recent philosophical abhorrence of relativism, but the thesis has not disappeared from linguistics and psychology.)* A charismatic economist, Keith Chen, rediscovers a version of it in economics by focusing on the surprising impact of linguistic structure on financial activity (saving rates)--here's a popular video. (HT Hülya Eraslan; I ignore my methodological qualms today.) In the article quoted in the epigraph above (it's his conclusion), Coetzee is interested in the version--he attributes it directly to Whorf--that "we see nature along lines laid down by our native languages." I call this version the "narrow Whorf thesis" (to distinguish it from broader claims about linguistic/cultural relativism, and also from Whorf's explanation for the narrow Whorf thesis).
Now, what does the narrow Whorf thesis have to do with Newton and Coetzee?
There has been a lot of empirical and philosophical research on social cognition, in particular, on our ability to share attention with others, to put ourselves in the perspective of others, and to understand other people's intentions. As Gallotti and Frith point out in their recent article "Social cognition in the we-mode", many theorists still conceptualize social cognition in terms of individual minds and their capacities. Gallotti and Frith propose that interacting minds result in an irreducible 'we' mode, a collective mode of sharing minds that expands each of the participating minds' potential for understanding and action. This shift in social cognition research moves away from looking at social cognition in disembodied, individual minds, and instead examines how people can augment their abilities by acting together with others.
This can shed light on the persistence and role of cognitive variability in humans, which I believe has also been studied too much in terms of individual cognition, and not enough in terms of interacting minds.
Take, for instance, people on the autistic spectrum. The prevalence of autism is quite high (although the recent high prevalence might largely be due to improved detection and early diagnosis). There have been numerous attempts to identify the underlying causes, such as (recently) older fathers, inducing labor, or genetic factors. Autism is also sometimes regarded as an extreme of the normal range. In all cases, the underlying idea is that autism is abnormal, something that needs to be prevented and, failing that, treated so that the patient behaves in a way that approximates neurotypical children and adults. Terms like 'autism epidemic' and the recent scare over the alleged (now debunked) link between autism and the MMR vaccine reinforce this negative image of autism as something to be prevented at all costs.
While people with autism and their families undeniably face many challenges, looking at cognition in the "we" mode may shed new light on the phenomenon and may help explain its prevalence.
One can’t help but share in Chagnon’s frustration at the hasty decision of the majority of his disciplinary peers to disown its historical connection to any branch of the complex and variegated scientific tradition. After all, until very recently (and to some extent to this day still in languages such as French and German), a ‘science’ was any relatively systematic body of knowledge, anything the goal or product of which was scientia, and it is only in the very most recent times that the notion has been reduced to the figure of somber men seeking to run the world on the basis of claims of unassailable expertise. Yet the cartoon version of science that Chagnon proposes in response, in its total failure to recognize that there might be special problems of theory-ladenness, power inequality, looping effects, prejudice--in a word, all those factors that make the scientific study of humans a more delicate matter than the study of other domains of nature--can easily make one wish to take the ‘postmodern’ turn oneself, if only to get away from this astoundingly simplistic pretense of scientificity.--Justin Smith, writing about Napoleon Chagnon’s book Noble Savages: My Life among Two Dangerous Tribes - The Yanomamö and the Anthropologists (Simon & Schuster, 2013).
Justin is one of the leading historians of philosophy of my generation. He is also a staunch defender of the claim that "one can in fact approach the subject matter of anthropology naturalistically, using the conceptual tools of European traditions of thought, and still come up with theoretically sophisticated accounts of indigenous beliefs that remain nonetheless sensitive to the actual concerns, to the ‘voices’, of the people being studied." (He also wants to bring some anthropological methods into the history of philosophy.)
Following up on Jon's beautiful and thoughtful post, I here offer some reflections on the role of free and undirected play in academic and non-academic creativity. Jon remarked that being focused on a narrow and highly specialized research field may contribute to a sense of unhappiness and lack of satisfaction for academics. We have increasing knowledge of how creativity works cognitively: it is a stochastic process that thrives not only on focused research efforts (although these have their place), but also on the serendipitous coming together of ideas from different sources. It also thrives on an overproduction of ideas, from which we can pick and select (this is why some regard creativity as a Darwinian process). Some degree of variation (deviation from one's specialized activity) helps the creative process, especially if the other activities are indirectly related to one's main tasks.
I am currently reading a book on the Inklings, an informal reading group of Oxford dons between the 1930s and the late 1940s. What struck me was that CS Lewis, Tolkien and other members of this group engaged in activities that we would now find completely unproductive, even for a graduate student, postdoc or someone on sabbatical. For instance, some members of the Inklings came together in a reading group where they would translate stories from Old Icelandic into English. There were those who could already read Icelandic fairly well (like Tolkien, who had taught himself) - these members of the reading group translated several pages at a time. Others were absolute beginners, like CS Lewis. When he started in the reading group, he couldn't read a word of Icelandic and had to use a dictionary for every word. He translated maybe two or three paragraphs at a time, with help and coaching from his peers who were better versed. The Inklings also wrote their own verse based on Icelandic narratives, and their own stories, which would ultimately form the basis of The Lord of the Rings.
This form of free play would be unthinkable in the current UK climate of the REF and other assessment tools, which encourage academics to put their energy into placing their work with high-prestige publishers. Would Tolkien be able to write The Lord of the Rings if he were an academic today? I don't think so.
[X-posted at Prosblogion] In the epistemology of religion, authors like Swinburne and Alston have argued influentially that mystical experience of God provides prima facie justification for some beliefs we hold about God on the basis of such experiences, e.g., that he loves us, is sovereign, etc. Belief in God, so they argue, is analogous to sense perception. If I have a mystical experience that God loves me, then, prima facie, I am justified in believing that God loves me.
Alston relies critically on William James' Varieties of Religious Experience (1902). This seminal, but now dated, psychological study draws on self-reports by mystics to characterize mystical experience. The mystical experiences James (and others) describe are unexpected and unbidden; they immediately present something (God) to one's experience, i.e., they provide a direct, unmediated awareness of God. More recent empirical work on the phenomenology of religious experience, such as that conducted by Tanya Luhrmann and other anthropologists, suggests that ordinary sense experience is a poor and misleading analogy for religious experience.
You can find a bunch of useful lists of advice for academics, lists that tell you, for instance, what you have to do to get into a good grad program, land a tenure track position, earn tenure, or get promoted. I will not provide an extensive list of such lists here, but they are easy to find, for instance here at NewApps. Even though the advice offered there is often sound, such lists sometimes have the precise opposite of their intended effect (i.e., fostering success) by inducing anxiety in their readers. Especially when combined, they create a formidable to-do list that nobody who has something of a life next to their work could possibly complete, and they ultimately lead to the question of whether academics should perhaps turn back to that monastic, celibate life that leaves no time for a partner or children. And, at the end of one's life, one wonders whether an exclusive focus on work has been worth it (intriguingly, many commenters on the piece by Brit Brogaard I just linked to seemed to think not).
One heuristic that Radhika Nagpal offers, and that I find very attractive, is not to gauge your productivity, output, and efforts against standardized lists, but rather against your ideal(ized) self. Nagpal calls this heuristic "I try to be the best 'whole' person I can". This is not a compromise, she stresses, but
That *is* me giving it my very best. I’m pretty sure that the best scientists by the above definition are not in the running for most dedicated parent or most supportive spouse, and vice versa. And I’m not interested in either of those one-sided lives. I am obsessively dedicated to being the best whole person I can be.
Our very own Helen de Cruz called my attention to this fascinating post by Keith DeRose. Consider his claim [A]:
[A] But as it generally goes with philosophical arguments, they don't produce knowledge of their controversial conclusions about substantive
Initially I thought that in [A] DeRose was relying on a claim about the nature of philosophical argument; let's call it "a no knowledge producing property of philosophical argument." This has a Humean flavor to it. I was (unintentionally, perhaps) led in this hermeneutic direction by DeRose's quote [B] from David Lewis' (1996) "Elusive Knowledge:"
[B] We have all sorts of everyday knowledge, and we have it in abundance. To doubt that would be absurd. At any rate, to doubt it in any serious and lasting way would be absurd; and even philosophical and temporary doubt, under the influence of argument, is more than a little peculiar. It is a Moorean fact that we know a lot.
This recent paper by Jean-Michel Fortin and David J. Currie in PLOS ONE argues that funding does have a positive impact on scientific productivity and impact, as measured by number of publications and by citations to grant-funded papers. However, surprisingly, the correlation is quite weak, and large grants had a lower impact per dollar than small grants. From the abstract:
Impact was generally a decelerating function of funding. Impact per dollar was therefore lower for large grant-holders. This is inconsistent with the hypothesis that larger grants lead to larger discoveries. Further, the impact of researchers who received increases in funding did not predictably increase. We conclude that scientific impact (as reflected by publications) is only weakly limited by funding. We suggest that funding strategies that target diversity, rather than "excellence", are likely to prove to be more productive.
In other words, funding more researchers with a smaller budget may be more conducive to scientific progress than funding only the most "excellent" researchers. Indeed, in an earlier blog post I suggested that it might increase productivity even further if grants were dealt out at random, or distributed evenly among those who request them. At least then, researchers wouldn't spend disproportionate amounts of time writing grant applications.
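The link between a decelerating impact function and lower impact per dollar can be made concrete with a toy calculation. The concave exponent below is an illustrative assumption, not a figure from the Fortin and Currie paper:

```python
# Toy illustration of a decelerating (concave) impact function.
# The exponent 0.6 is a hypothetical choice for illustration only.

def impact(funding, exponent=0.6):
    """Hypothetical scientific impact as a concave function of funding."""
    return funding ** exponent

small, large = 100_000, 1_000_000  # ten $100k grants vs. one $1M grant

# Impact per dollar falls as grant size grows...
assert impact(small) / small > impact(large) / large

# ...so ten small grants yield more total impact than one large grant
# of the same combined budget.
assert 10 * impact(small) > impact(large)
```

Any concave impact function gives the same qualitative result, which is why the abstract's "decelerating function" finding supports spreading funding across more researchers.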
Unfortunately, as the authors note, the model in grant-making, both through national funding agencies and private, philanthropic organizations, is veering towards large grants that reward mainly "excellence", which also creates a Matthew effect, whereby a researcher's or lab's success in obtaining past grants is one of the measures used in deciding whether or not to award a grant.
Following up on Catarina's post on Tania Lombrozo's article: one of the reasons offered for why women leave philosophy early on is the lack of women philosophers in introductory syllabi. So how few women are there?
Meghan Masto, assistant professor at Lafayette College, conducted a survey and gave me the opportunity to share her results on NewApps:
We gathered syllabi from introductory level philosophy courses taught at the top 40 small liberal arts schools in the country (as ranked by U.S. News & World Report). The results of this research confirmed our suspicions. So far we have collected 57 introductory level philosophy syllabi from 22 of the schools contacted. By our count there are a total of 739 philosophers included on the syllabi and only 60 of these are female. So a mere eight percent of the readings covered in introductory level philosophy courses are written by women. Thirty-one of the 57 courses include no female authors at all (courses included an average of thirteen philosophers on their syllabi). Further, if we were to exclude the syllabi from the ethics courses and include the syllabi only from the 43 non-ethics courses, the percentage of female philosophers discussed is even lower. By our count, these non-ethics syllabi include 602 philosophers on the reading lists, with only 36 of these being female—so a mere 6 percent of philosophers discussed in non-ethics courses are female. Twenty-four of the 46 non-ethics courses included no female authors at all.
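As a quick sanity check, the percentages in the quoted results do follow from the raw counts given there:

```python
# Recomputing the survey percentages from the raw counts quoted above.
total, female = 739, 60
nonethics_total, nonethics_female = 602, 36

pct_all = 100 * female / total              # about 8.1%
pct_nonethics = 100 * nonethics_female / nonethics_total  # about 6.0%

assert round(pct_all) == 8        # "a mere eight percent"
assert round(pct_nonethics) == 6  # "a mere 6 percent"

# Average number of philosophers per syllabus across the 57 courses:
assert round(total / 57) == 13    # "an average of thirteen philosophers"
```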
Now that the third season of Game of Thrones has ended, this interesting reflection, written by Adam Brereton, contends that A Song of Ice and Fire by G.R.R. Martin, and the TV series based on it, simply don't work, because they do not obey what Chesterton has termed "elfin ethics":
according to elfin ethics all virtue is in an 'if'. The note of the fairy utterance always is, 'You may live in a palace of gold and sapphire, if you do not say the word "cow"'; or 'You may live happily with the King's daughter, if you do not show her an onion.' The vision always hangs upon a veto. All the dizzy and colossal things conceded depend upon one small thing withheld. All the wild and whirling things that are let loose depend upon one thing that is forbidden.
In GOT, however, this rule doesn't apply: people who break oaths (like Robb Stark) get killed in a horrible way, but people who are honorable, try to do the right thing, and don't break oaths (like Eddard Stark) also get killed in a horrible way. In this, Martin differs from other fantasy writers, like H.P. Lovecraft or J.R.R. Tolkien. We can expect something like the massacre of the Starks at the Red Wedding to occur on a biweekly basis. So, Brereton concludes:
Westeros just doesn't work. Unlike Tolkien, Lovecraft and Peake, it is not a consistent creation. Where does the good exist?...In Martin's broken world, good only resides in individual acts, only as long they don't get you killed, which more often than not they do.
The intuition that works of fantasy should have some moral compass, or indeed that fantasy universes should ultimately be just worlds, is compelling. As Mitch Hodge argues in this draft paper, we even have a strong intuition that the real world, au fond, is a morally just place: the good prosper, the wicked suffer.
The case of McGinn's departure from Miami over sexually explicit e-mails to his female research assistant, and the many testimonies about sexual harassment on the What's it like to be a woman in philosophy blog, make one wonder: is there more sexual harassment in philosophy than in other fields with a low ratio of women to men, such as, say, economics, physics, or mathematics? Jennifer Saul writes here:
I am firmly convinced that there are multiple factors involved in causing the under-representation of women, factors that interact with and compound each other. One important one is the likelihood that women in philosophy experience an unusually high level of sexual harassment. It is very hard to get good data, comparative or otherwise on prevalence of sexual harassment due to very low rates of official reporting. However, many have been shocked by the stories reported at What is it Like to Be a Woman in Philosophy (beingawomaninphilosophy.wordpress.com). As the editor of this blog, I have been even more shocked by the large number of cases I have been contacted about which never appeared on the blog due to fear of identification.
As far as I'm aware, there is no What's it like to be a woman in physics blog, but perhaps this is because no one has taken the initiative to start such a blog in physics, not because female physicists do not get harassed or discriminated against. Perhaps there are fewer feminist physicists than feminist philosophers? Perhaps it takes feminist philosophers to start a blog like What's it like to be a woman in philosophy? If that is the case, sexual harassment might well be a significant contributing factor in many disciplines that have low percentages of women.
But alternatively, there might be reasons to assume that philosophy in particular has a problem, that, as Saul puts it "women in philosophy experience an unusually high level of sexual harassment." If this is the case (and - I want to stress - we don't know whether it is the case), what might be the causes?
As the mother of a newborn infant, I am struck by the normative ladenness of speech and imagery about breastfeeding. Breastfeeding is a surprisingly difficult technique to master, especially considering that it is a universal mammalian behavior. Were it not for lactation experts, midwives, and volunteers, I and many other new mothers would never have been able to establish successful nursing.
Humans, to a larger extent than other primates, rely on imitation and teaching to transmit a variety of "natural" behaviors, such as foraging. Without being able to observe other women who are nursing, it is very taxing and difficult to get a newborn to latch on and feed successfully.
Interestingly, Katie Hinde, a primatologist with expertise in lactation, has pointed out that, contrary to common belief, not nursing was not necessarily a death sentence. "Hundreds of years before halfway-decent formula, infants were fed gruesome substitutes for breast milk (mushed bread and beer, say)—and although many more died than those who were nursed, many also survived." So it turns out that a choice about whether or not to breastfeed has existed for longer than we commonly acknowledge.
Ostensibly, information directed at new mothers is supposed to help them make an informed decision about whether to nurse or give formula. This information is often couched in agent-neutral, medicalized terms, such as "studies suggest that breastfeeding lowers the risk of sudden infant death syndrome and middle ear infection in infants, and breast cancer in mothers" and "the American Academy of Pediatrics recommends exclusive breastfeeding for the first 6 months of life". But such statements are also prescriptive, directed at young mothers. They make truth claims, obviously, about the benefits of breastfeeding and the recommendations of the AAP, but they have an important social function as well, namely to make new mothers recognize the distinctive weight the claims have for them in particular. I am here drawing on Rebecca Kukla and fellow blogger Mark Lance's 'Yo!' and 'Lo!'. They single out a group of people (new mothers) and in effect say "You ought to breastfeed if you have your baby's best interests in mind".
Feminist philosophers drew attention to this THE article on gender equality in academia. The article highlights striking differences between countries in women's participation in academia, with a 47% female participation rate in Turkey and an abysmal 12.7% in Japan as the two extremes (see the map through the link). For most of my academic career, I have studied and worked in Belgium, where women's participation is very poor (it's one of the red countries on the map). Only 13% of full professors in Belgium are women; in the EU, only Cyprus and Luxembourg do worse. In this post, I want to examine the causes of this disparity (the high percentage in Turkey, the low percentage in Belgium), drawing on personal experience, among other sources, and on this highly relevant article on Turkish academia.
[this is cross-posted at Prosblogion] Richard Dawkins has argued several times (e.g., here) that bringing up your child religiously is a form of child abuse. I think his argument that religious upbringing in general is child abuse has little merit (after all, Dawkins himself is the product of a traditional Anglican upbringing and calls himself - rather proudly - a cultural Anglican, hardly the victim of child abuse). However, his claim in the linked article is that parents who attempt to instill things like Young Earth Creationism (henceforth YEC) in their children are doing something wrong, or are somehow overstepping their role as parents. This question, I believe, is worthy of further attention.
I read this paper by David McNaughton on why philosophy is so tedious (recent link at Leiter's blog). Of the many interesting strands in this paper, I'd like to highlight this concern:
There is now so much to read that "keeping up with the current literature" could occupy every waking moment. But to what end? Do we really want to create a profession where, to get recognition and to advance one's career, one has no time to do anything except philosophy? That is not good news for philosophers. It is neither sensible nor humane to encourage this work-centered monomania in anyone … Moreover, it is inimical to one of the traditional justifications of philosophy that sees it as a reflection on life, a discipline that trains you to understand the world in which you live better and so enables you, and others, to live better. But we are in danger of abandoning that conception and leaving professional philosophers no time and no incentive to put that wisdom into practice, to engage in other worthwhile activities. Is philosophical training a preparation for doing philosophy, and nothing more? … Nor is this degree of absorption in philosophy good for philosophy itself. It is (predominantly) a liberal discipline, and the best philosophy (especially in my own subjects, ethics and the philosophy of religion) is enriched by a wide, reflective, and imaginative experience of literature, politics, art, and science (McNaughton, p. 7).
The author is right: if philosophy is indeed the love of wisdom, and its practice is embedded in a richer social, cultural, artistic, and political context, it would be very strange if the only thing that could contribute to our work as philosophers were reading papers and books by other philosophers.
Non-philosophical activities and concerns could enrich philosophical practice. By this I mean a wide variety of things, for instance, being a parent, a musician, someone who actively engages with a religious tradition, someone who is involved in political activism, etc. I would like to hear from readers how their non-philosophical activities have influenced and enriched their philosophy.
It would be valuable to get an idea of this, as I think there is increasing pressure, even on people who are not on a tenure track, to work incessantly, as if work alone makes a good philosopher and one's personal life were mainly an impediment to flourishing as a philosopher. This is, of course, not a problem unique to philosophy (it pervades academia), but it does strike me as something our discipline needs to address.
I was recently having tea with a philosopher who heads an interdisciplinary research group. We talked about grant proposals. My interlocutor said he devoted a large share of his working hours (at least a third, by his estimation) to writing grant proposals. He also personally knew someone from an empirical discipline who devoted as much as 70% of his time to grant writing. That person even said that he can now scarcely keep up with the literature in his highly specialized field, let alone contribute original research. But given that his research group (comprising many PhD students and postdocs) depended on his ability to secure grants, there was no option but to devote more and more time to the grant-writing process.
Since grant schemes often ask for unrealistically elaborate timetables and detailed projected results, many experienced grant writers have turned to this heuristic (they have admitted this freely to me, and are unabashed about it - I haven't tried this for myself, but the practice is widespread):
Write a grant proposal that describes the work you have recently done (let's call this research project X).
If your proposal gets funded, you start doing the research you really want to do (research project Y).
If asked for a report of results, you simply mention the papers that are now in press, undergoing review or are recently published from project X; you do not mention the actual work that is now going on in your centre or lab, project Y.
About a year before the completion of your current grant, you start developing a new grant proposal, this time detailing how you will carry out project Y (which, of course, is already nearly complete), allowing you in the future to pursue project Z.
And so on. This practice illustrates, I believe, that there is something deeply wrong with the grant making process as it is currently practiced.
I've been recently reading some work by theistic philosophers and theologians who accept evolutionary theory. They seek to interpret scripture in such a way that it is compatible with the evolution of humans and other animals. One promising recent strategy is to read Genesis 1-3 through the theology of Irenaeus rather than through Augustine, trading one patristic author for another. Here, I want to examine whether this is a reasonable strategy for the empirically-informed theist.
It is still very common for students to encounter only readings by male authors in their introductory philosophy classes. This contributes to the image of philosophy as a boys-only discipline. It would therefore be useful to have a list of readings written by women that are suitable for philosophy courses, such as general introductions to philosophy, philosophy of science, ethics, and epistemology.
I would like to invite readers to contribute their favorite pieces, written by women philosophers, to the following Google spreadsheet (please fill out the spreadsheet, rather than using the comment section, except if you experience difficulties with the spreadsheet).
In the first instance, the focus is on papers and book excerpts that are not overly specialist or technical, and so are suitable for intro-level or intermediate courses. Ideally, they should have made a significant impact on their field. They should be readings you have either already used successfully in class, or envisage using.
As is well known, philosophy is a very male-dominated (and white, straight, etc) field, when compared to all other humanities, social sciences, and even several STEM disciplines. Even if we take into account the difficulties that minorities face in academia, we cannot explain why philosophy does worse than most other academic fields. I'd like to put a slightly controversial idea on the table: there are good reasons to believe that philosophers are less effective than academics from other fields in their ability to counter their own biases, i.e., they exhibit a larger bias blind spot.
I regret to inform you that Awesome Bigname Philosophy Journal cannot accept your paper for publication. After googling the title of your paper, and failing that, lines from your abstract and paper, our referee discovered your identity. He found that you are a nobody from a lackluster university, without a tenured or tenure-track position but only a lowly [adjunct teacher, grad student, postdoc, etc.], and [a woman, black, non-English speaker, etc.] to boot. Therefore, after a perfunctory glance at your paper, the referee has decided that your paper is not of high enough quality to be published in ABPJ.
We pass on referees' comments in the hope that they may prove useful. We receive over n submissions each year, and must reject many very competent papers, especially those written by people at the bottom of the academic ladder. We hope that your work will find a home in another journal, though obviously one not as highly regarded as ABPJ.
Eric has recently drawn attention to this wonderful paper by L.A. Paul. The paper focuses on how we make decisions that can transform our lives, and whether we can ever do so rationally. Her paper examines the decision whether or not to have children, but it applies to other potentially life-transforming decisions, such as whether to go to graduate school or get romantically involved with someone.
Here, I don't want to focus on Paul's claims about the extent to which we have knowledge of what it's like to be a parent. Like Eric, I think this depends a lot on cultural context, and westerners seem to be in a particularly impoverished epistemic position because of the rarity of children and the cultural ideals that surround parenthood. Parenthood is described in unrealistically romantic language (e.g., when I was pregnant, friends and family assured me that I would be in a blissful, rosy, cloud-like state after the birth of my child; that breastfeeding would be easy and a wonderful way to connect with my baby; and that I would forget the pain of childbirth the moment I held her in my arms: all claims that turned out, at least for me, to be false, and that made me wonder whether something was wrong with me).
But I think that Paul is nevertheless right that decision theory does not provide us with the right tools for potentially life-transforming decisions. When westerners today have children, Paul observes, there is a cultural ideal to "think carefully and clearly about what they want before deciding that they want to start a family." How do we do this? According to standard decision theory, "we first partition the logical space by determining the possible states that are the outcomes of each act we might perform. After we have the space of possible outcomes, we assign each outcome a value (or utility), and determine the probability of each outcome's occurring, given the performance of the act." However, she goes on to argue, convincingly, that this model fails, as it is impossible to calculate expected value based on preferences about what it would be like to have one's own child.
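For readers unfamiliar with the formal machinery, the standard model Paul describes can be sketched in a few lines. All the outcomes, utilities, and probabilities below are invented purely for illustration; Paul's point is precisely that, for transformative choices, we have no reliable way to fill in these numbers in the first place.

```python
# A minimal sketch of the expected-utility model from standard decision
# theory. The numbers are made up for illustration only: Paul's argument
# is that for transformative experiences we cannot know the utilities
# (or even meaningfully imagine the outcomes) in advance.

def expected_utility(outcomes):
    """Sum of probability * utility over an act's possible outcomes."""
    return sum(p * u for p, u in outcomes)

# Hypothetical partition of outcomes for the act "have a child",
# as (probability, utility) pairs whose probabilities sum to 1.
have_child = [(0.6, 8), (0.4, -3)]    # e.g. flourishing vs. regret
remain_childless = [(1.0, 4)]         # one more predictable outcome

print(expected_utility(have_child))        # 0.6*8 + 0.4*(-3) = 3.6
print(expected_utility(remain_childless))  # 4.0
```

On this toy assignment the model would recommend remaining childless, but flipping a single invented utility reverses the verdict, which is exactly why the inability to ground these numbers undermines the procedure.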