To my knowledge, full book manuscripts are never reviewed anonymously. Given that the double anonymity of peer review is implemented to decrease biases, and presumably thereby to increase the focus on the quality of the writing, this is puzzling. David Chalmers wrote, in a very helpful comment on how to publish a book: "Most book refereeing is not blind, unlike journal refereeing. And when what's being reviewed is a proposal rather than a full manuscript, reputation of the author make a huge difference in reviewers' and editors' confidence that the proposal will be fleshed out well to a book."
While I can see that the reputation or renown of an author can relevantly play a role at the proposal stage, in assessing the competence of the prospective author to write a full manuscript, I don't see why it should play a role when the full manuscript is reviewed. Yet this will inevitably happen when review of full manuscripts is non-anonymous. It would be hard not to be influenced if the author of the manuscript one is reviewing works at a high-ranking institution, is very senior, and already has an excellent track record (I declined to review a book for a major press for this reason), or conversely, if the author is relatively junior and works at a teaching-focused or obscure institution.
This is part 3 of a 3-part series of interviews with philosophers who left academia, right after grad school or in some cases later. See part 1 for the jobs they hold, and part 2 for how they evaluate those jobs. This part focuses on the transferable skills of academics.
The burning question for academics who want to leave academia is: what transferable skills can they bring to the private sector? The responses of the seven people I interviewed clearly indicate that the transferable skills are broad and fairly high-level.
This is part 2 of a 3-part series of interviews I conducted with seven philosophers who went on to a non-academic career after obtaining their PhDs. For more background on these philosophers, the work they currently do, and the reasons they left academia, see part 1: How and Why do they end up there? This part will focus on the realities of having a non-academic job.
One of the main attractions of an academic job, especially that of a tenured professor, is the autonomy (intellectual and in terms of time management) it provides. However, there are downsides as well: the increasing pressure to churn out publications (which some of the respondents already alluded to in part 1), lack of support, and isolation lead to mental health problems in some academics. So how do philosophers with experience both in academia and outside of it evaluate the work atmosphere?
This is the first of a three-part series featuring in-depth interviews with philosophers who have left academia. This part (part 1) focuses on their philosophical background, the jobs they have now, and why they left academia. Part 2 examines the realities of having a non-academic job and how it compares to life in academia. In part 3, finally, the interviewees reflect on the transferable skills of a PhD in philosophy, and offer concrete advice for those who want to consider a job outside of academia.
Does having a PhD in philosophy mean your work opportunities have narrowed down to the academic job market? This assumption seems widespread; for example, a recent Guardian article declares that programs should accept fewer graduate students as there aren't enough academic jobs for all those PhDs. Yet academic skills are transferable: philosophy PhDs are independent thinkers who can synthesize and handle large bodies of complex information, write persuasively (as they do when applying for grants), and speak to diverse kinds of audiences.
How do those skills translate concretely into the non-academic job market? To get a clearer picture of this, I conducted interviews with seven philosophers who work outside of academia. They work as a consultant, software engineers, an ontologist (not in the philosophical sense of ontology), a television writer, a self-employed counselor, and a government statistician. Some were already actively considering non-academic employment as graduate students; for others the decision came later (for one informant, after he received tenure).
These are all success stories. They are not intended to be a balanced representation of the jobs former academics hold. Success stories can provide a counterweight to the steady drizzle of testimonies of academic disappointment, where the inability to land a tenure track position is invariably couched in terms of personal failure, uncertainty, unhappiness and financial precarity. In this first part, I focus on what kinds of jobs the respondents hold, and how they ended up in non-academic jobs in the public and private sector. Why did they leave academia? What steps did they concretely take to get their current position?
I hope this series of posts will empower philosophy PhDs who find their current situation less than ideal, especially, but not only, those in non-tenure track positions, to take steps toward a non-academic career that suits them. And even if one's academic job is as close to a dream job as one can conceivably get, it's still fascinating to see what a PhD in philosophy can do in the wider world.
There are several variants of a list in circulation of skills our grandparents had but the majority of us don't, for instance, 7 skills your grandparents had and you don't. Examples include ironing really well, sewing, knitting, crocheting, canning, cooking a meal from scratch, writing in beautiful longhand, basic DIY skills... What have the majority of us lost by no longer having these skills, which I'll call grandparent skills for short?
As Lizzie Fricker argued today in a workshop held in honor of Charlotte Coursier, trust in other people is common and is a pervasive element of human life. We defer to the knowledge of others (testimonial dependence) and to their expertise (practical dependence): we rely on experts to tell us what the weather will be like, to fix our car, to give us a new haircut. Often, this deference is shallow and dispensable (we could in principle do it ourselves), but it can also be deep and ineluctable, as when we rely on electricians and other specialists.
This division of cognitive labor provides us with enormous gains, but does an increased reliance on the testimony and expertise of others also come with costs? Fricker feels we do not reflect enough on this question, especially as the extent of both testimonial and practical dependence seems to have increased dramatically in recent years. People increasingly rely on Google rather than on internally stored semantic knowledge, and they increasingly outsource practical skills – navigation with maps, dead reckoning, and compasses is being replaced by user-friendly technologies like GPS devices.
I live very close to Port Meadow, one of the largest meadows of open common land in the UK, already in existence in the 10th century and mentioned in the Domesday Book in 1086. I saw my first-ever live, wild oriole there. The land has never been ploughed, so it is possible to discern the outlines of older archaeological remains, some going back to the Bronze Age. The consistent management of the land makes the changes predictable: it turns into a lake in winter, is sprinkled with buttercups this time of year (see pictures below the fold - both are taken at about the same place, but one in May and the other in November), and looks mysterious and misty in the fall. Whenever I walk on Port Meadow I take my camera, eager to capture any beautiful view that falls on my retina and preserve it for future memories. And, like many other parents, I take dozens of pictures of my growing children. Recently, I saw an NPR piece (no author given) that took issue with this tendency to want to preserve pictures for future memory.
The article launches a two-pronged attack against pictures. First, by worrying about capturing the moment, we lose the transience and beauty of the moment and enjoy it less. Second, the article cites psychological evidence showing that people actually remember fewer objects from a museum visit if they were allowed to take photos of them, compared to when they were only allowed to observe them. The phenomenon is known as the photo-taking-impairment effect. Linda Henkel, who discovered the effect, says: "Any time…we count on these external memory devices, we're taking away from the kind of mental cognitive processing that might help us actually remember that stuff on our own."
Helen De Cruz has some excellent suggestions for how to talk to creationists, given that neither debate nor denouncement is likely to be productive. She describes the way in which a religious person who is not a creationist can speak to another religious person who is a creationist, e.g., by pointing out that Biblical literalism is a recently emerged approach, one that may be impossible to apply consistently, and that for this reason, among others, may not be consistently practiced by anyone.
This article by Dan Kahan suggests that disbelief in human-caused climate change is like belief in creationism in this respect: what people "believe" about each doesn't reflect what they know, but rather expresses who they are. This supports the thesis that providing evidence against creationism isn't likely to change minds, and that providing evidence for climate change isn't likely to change minds either.
But what is the climate change equivalent, where we speak to people from their own perspective as Helen proposes that we do for religious people who are creationists?
A friend of mine is doing her DPhil in Oxford. She's American, and out of term she goes back to her home in middle America. She recently went to see the newly refurbished museum in her home town. When she was looking at the displays on human evolution, a museum guard, who had been observing her, suddenly said "So, what side are you on: the Bible or evolution?" Whereupon my friend replied "What do you mean what side am I on? This is not a football game, you know".
I am deeply troubled by incipient creationism, which treats biblical literalism as a serious intellectual contender to scientific inquiry. I want my children to grow up with normal biology textbooks, not with Of Pandas and People. If creationists win their lobbying efforts to make creationism mainstream in schools and the public sphere, that is a loss for everyone (including the creationists). Debates don't seem to do any instrumental good. If we are not going to fight creationism through debates, how can we - as public intellectuals - ensure that creationism doesn't encroach even further upon our schools and public life?
On the basis of this year’s partial hiring data, Marcus Arvan notes that the majority of tenure track hires (a whopping 88%) went to people from Leiter-ranked programs; only 12% of hires went to people from unranked programs. Also, 37% of all tenure track hires come from just 5 schools, the Leiter top 5 - this is amazing if one ponders it, and one may wonder where philosophy is heading if most of its future tenured workforce comes from just a few select programs.
This has caused a lot of debate: why would people go to grad school in unranked programs at all? Why attend an unranked program if you can’t get into a highly ranked one? But what is often overlooked are the many factors, such as class and ethnic background, that may contribute to someone not getting into (or, as I will examine in more detail below, not even applying to) top programs. In fact, going for pedigree may be a particularly effective way to screen out people who come from poorer backgrounds and from different ethnic backgrounds.
A few weeks ago I had a post on different ways of counting infinities; the main point was that two of the basic principles that hold for counting finite collections cannot both be transferred over to the case of measuring infinite collections. Now, as a matter of fact I am equally (if not more) interested in the question of counting finite collections at the most basic level, both from the point of view of the foundations of mathematics (‘but what are numbers?’) and from the point of view of how numerical cognition emerges in humans. In fact, to me, these two questions are deeply related.
In a lecture I’ve given a couple of times to non-academic, non-philosophical audiences (so-called ‘outreach lectures’), called ‘What are numbers for people who do not count?’, my starting point is the classic Dedekindian question, ‘What are numbers?’ But instead of going metaphysical, I examine people’s actual counting habits (including among cultures that have very few number words). The idea is that Benacerraf’s (1973) challenge of how we can have epistemic access to these elusive entities, numbers, should be addressed in an empirically informed way, drawing on data from developmental psychology and from anthropological studies (among others). There is a sense in which all there is to explain is the socially enforced practice of counting, which then gives rise to basic arithmetic (and from there, to the rest of mathematics). And here again, Wittgenstein was on the right track with the following observation in the Remarks on the Foundations of Mathematics:
This is how our children learn sums; for one makes them put down three beans and then another three beans and then count what is there. If the result at one time were 5, at another 7 (say because, as we should now say, one sometimes got added, and one sometimes vanished of itself), then the first thing we said would be that beans were no good for teaching sums. But if the same thing happened with sticks, fingers, lines and most other things, that would be the end of all sums.
“But shouldn’t we then still have 2 + 2 = 4?” – This sentence would have become unusable. (RFM, § 37)
I have been thinking about an analogy to the Bechdel test for philosophy papers - this in the light of recent observations that women get fewer citations even if they publish in the "top" general philosophy journals (see also here). To briefly recall: a movie passes the Bechdel test if (1) there are at least 2 women in it, (2) they talk to each other, (3) about something other than a man.
A paper passes the philosophy Bechdel test if
1. It cites at least two female authors;
2. At least one of these citations engages seriously with a female author's work (not just "but see" [followed by a long list of citations]);
3. At least one of the female authors is not cited because she discusses a man (thanks to David Chalmers for suggesting #3).
The usual cautionary notes about the Bechdel test apply here too. A paper that doesn't meet these standards is not necessarily deliberately overlooking women's work (it could be ultra-short, or it might be on a highly specialized topic that has no female authors in the field - is this common?), but on the whole, it seems like a good rule of thumb for making sure that women authors in one's field are not implicitly overlooked when citing.
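For readers who like a rule of thumb spelled out mechanically, here is a minimal sketch of the test as a check over a paper's citation list. It is purely illustrative: the data structure and field names are invented for this example, and no real bibliographic database records information in this form.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Citation:
    # Invented fields, for illustration only.
    author_is_female: bool
    engages_seriously: bool            # more than a "but see ..." aside
    cited_for_discussing_a_man: bool   # cited only because she discusses a man

def passes_philosophy_bechdel_test(citations: List[Citation]) -> bool:
    """Apply the three criteria listed above to a paper's citations."""
    female = [c for c in citations if c.author_is_female]
    return (
        len(female) >= 2                                            # criterion 1
        and any(c.engages_seriously for c in female)                # criterion 2
        and any(not c.cited_for_discussing_a_man for c in female)   # criterion 3
    )
```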
In philosophy of religion, realist theism is the dominant outlook: belief in God is similar to belief in other real things (or supposedly real things) like quarks or oxygen. There is a rather triumphalist narrative about the resurgence of realist theism since the demise of logical positivism (see, for instance, Plantinga's advice to Christian philosophers). When logical positivism and its verifiability criterion held sway, philosophers were dissuaded from talking about God in realist terms: religious beliefs were not just false, but meaningless. With the demise of logical positivism, however, theists could again defend realist positions, using a variety of sophisticated arguments.
Nevertheless, the question is whether theist philosophers of religion are not conceding too much to atheists by talking about theism mainly in terms of beliefs. To ignore practice is to ignore a large part of religious experience, and of what makes it meaningful to the theist. Such an exclusive focus can indeed be alienating, as it seems to suggest that theists believe a whole bunch of ideas that are wildly implausible, e.g., that a man was resurrected from the dead, or was born of a virgin. This picture of religious life as believing in a set of strange propositions is, as Kvanvig memorably put it, a view in which most theists will not recognize themselves:
I hardly recognize this picture of religious faith and religious life, except in the sense that one can cease to be surprised or shocked by the neighbor who jumps naked on his trampoline after having seen it for years.
This is not to deny that many theists do believe these things, even in a literal sense, but without looking at the larger picture of practices that help to instil and maintain these beliefs, our epistemology of religion remains woefully incomplete.
It is therefore refreshing to read the recent interview in The Stone with philosopher Howard Wettstein, who, coming from a Jewish background, emphasizes the practice-based aspects of a religious lifestyle. Following Maimonides, he argues that "existence" is the wrong idea for God, and that "the real question is one's relation to God, the role God plays in one’s life, the character of one’s spiritual life."
In the recent Mind & Language workshop on cognitive science of religion, Frank Keil presented an intriguing paper entitled "Order, Order Everywhere and Not an Agent to Think: The Cognitive Compulsion to Make the Argument from Design." Keil does not believe the argument from design is inevitable - I've argued elsewhere that while teleological reasoning and creationism are common, arguing for the existence of God on the basis of perceived design is rare; it typically only happens when there are plausible non-theistic worldviews available.
Rather, Keil argues that from a very early age, humans can recognize order, and that they prefer agents as causes of order. Taken together, these tendencies form the cognitive basis for making the argument from design (AFD). (For similar proposals, see here and here). He proposes two very intriguing puzzles, and I'm wondering what NewApps readers think:
Some forms of orderliness give us a sense of design, others do not. What kinds of order give rise to an inference to design, or a designer?
Babies already seem to distinguish ordered states from disordered states. How do they do it? What is it they recognize?
As teachers, mentors and colleagues, we, professional philosophers, take our tasks of teaching, research, and service to the profession very seriously. We want to create a supportive environment where fellow faculty members and students feel safe and where their concerns are heard and addressed.
In light of recent events at more than one university, we the undersigned hereby petition the Board of Officers of the American Philosophical Association to produce, by one means or another, a code of conduct and a statement of professional ethics for the academic discipline of philosophy. We particularly urge past presidents of each division of the APA to sign this petition.
In a few months, my son will get the MMR vaccine. I count myself very fortunate to live in a place and time when this amazing protection against these diseases is made available for free, and I will of course have him vaccinated. When I had my oldest child vaccinated, nearly 10 years ago, there was (at least where I lived, in Belgium) no vaccine debate. I was dimly aware there were some very religious people who refused vaccines, but they were so clearly an outgroup that people did not seriously consider them or their arguments. Not vaccinating didn't even seem like a live option to me. Now, fast-forward to post-Wakefield UK…
In Louise Antony’s thought-provoking interview, Gary Gutting asked her about the rationality of her atheism if she were confronted with a theist who is an epistemic peer: someone who is equally intelligent, who knows the arguments for and against theism, etc. This was her response:
"In the real world, there are no epistemic peers — no matter how similar our experiences and our psychological capacities, no two of us are exactly alike, and any difference in either of these respects can be rationally relevant to what we believe.” — She further clarifies “How could two epistemic peers — two equally rational, equally well-informed thinkers — fail to converge on the same opinions? But it is not a problem in the real world. In the real world, there are no epistemic peers — no matter how similar our experiences and our psychological capacities, no two of us are exactly alike, and any difference in either of these respects can be rationally relevant to what we believe…The whole notion of epistemic peers belongs only to the abstract study of knowledge, and has no role to play in real life”.
I disagree with Antony’s analysis, and think that the criteria for epistemic peerage can be very much loosened. I do agree with her that the notion, as it is outlined in epistemology, in terms of equal access to evidence, cognitive equality, etc., is quite stringent, and indeed is very rare in real life. For instance, perhaps two graduate students, trained at the same department with the same advisor and the same specialization, and who are equally smart, would count as epistemic peers with respect to that specialization. However, our philosophical concept of what an epistemic peer is should not be drawn up a priori, but should be informed by how the concept is used in everyday practices, like forensic research, two doctors or midwives discussing a patient’s circumstances, or two scholars who disagree about a key issue in their discipline. Indeed, the idea of an epistemic peer is thoroughly entrenched in scientific research, for instance in peer review and open peer commentary. If the notion of “epistemic peer” does not reflect this practice, it is not a sound philosophical notion, and would need to be replaced.
Recently I read the following story on What’s it like to be a woman in philosophy.
The poster says her partner thought the mother/daughter relationship is not a topic of meaningful or worthy philosophical investigation. She writes “It feels like I have to defend why the female experience is worthy of philosophical analysis. It feels like I am not taken seriously the moment I talk about what I want to talk about. It feels like I need to transform my thoughts into useless philosophical jargon. It feels like my relationship has tension now, because his words hurt my self-perception. It makes me second-guess my recent applications to graduate programs. It feels like I am not a philosopher–like my thoughts, feminine, worthless–will be forever excluded from the realm of the “lofty, the existential, the philosophical”.”
I am sure this perspective is not unique: the view that topics such as mother-daughter relationships, motherhood, and other aspects of female experience are somehow not deemed worthy of philosophical investigation. Yet what recent philosophical essay has received as much mainstream attention as Laurie Paul’s paper on deciding to have a child? And there are many other examples. One of my personal favorite examples is Rebecca Kukla's paper on ethics and advocacy in breastfeeding campaigns. Given the solid scientific evidence for the benefits of breastfeeding, and the tremendous pressure women experience to breastfeed (even while still pregnant), this is surely an important topic, philosophically speaking and otherwise.
Massimo Pigliucci has written an excellent piece criticizing Plantinga’s theistic arguments, recounted recently in an interview with Gary Gutting on the New York Times “Stone” blog. (See also Helen De Cruz's discussion.) Plantinga’s belief rests, by his own account, not on argument but on “experience.” We have an inborn inclination to believe in God, and like perceptual experience, this is self-validating. Theism doesn’t rest, for example, on inference to the best explanation. Denying God because science explains so much of what was once attributed to God is like denying the Moon because it is no longer needed to explain lunacy.
Fair enough. I won’t venture to oppose an argument that is credible only if you believe the conclusion. But what of Plantinga’s arguments against atheism? Here is one that will be familiar to most readers. Suppose that materialism and evolution are true. It follows (for present purposes, never mind how) that our belief-producing processes will be imperfectly reliable. Given that we have hundreds of independent beliefs, it’s virtually certain that some will be false. This means that our “overall reliability,” i.e. the probability that we have no false beliefs, is “exceedingly low.” “If you accept both materialism and evolution, you have good reason to believe that your belief-producing faculties are not reliable.”
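To see how quickly "overall reliability" in this sense shrinks, here is a minimal sketch of the arithmetic with illustrative numbers of my own choosing (Plantinga does not commit to specific figures): even if each individual belief is very likely true, the probability that every member of a large set of independent beliefs is true becomes vanishingly small.

```python
# Illustrative numbers only: suppose each belief is independently 99%
# likely to be true, and we hold 1,000 such beliefs.
per_belief_reliability = 0.99
number_of_beliefs = 1000

# "Overall reliability" in the sense above: the probability that no
# belief at all is false, i.e. that every single belief is true.
overall_reliability = per_belief_reliability ** number_of_beliefs
print(f"{overall_reliability:.6f}")  # ~0.000043 - exceedingly low indeed
```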
A recent interview in the Stone by Gary Gutting of Alvin Plantinga gave rise to expected criticisms, for instance by Massimo Pigliucci. The wide media exposure of Plantinga puts him forward as somehow representative of what Christian philosophers believe, and if his reasoning is not sound then, as Pigliucci puts it, “theology is in big trouble”.
For Plantinga, as is well known and reiterated in this interview, the properly functioning sensus divinitatis is sufficient for belief in God, and one need not have any explicit arguments at all for God’s existence. Nevertheless, Plantinga does say that such arguments, the “whole bunch taken together”, are “as strong as philosophical arguments ordinarily get”. In a brief digression on the problem of evil, Plantinga does not even fully acknowledge it as a problem (calling it the “so-called problem of evil”), although he acknowledges there is some strength to it. The problem is then quickly solved with a Fall theodicy, in which God mends the abuse of freedom by his creatures through the horrible and humiliating death of his Son, resulting in what Plantinga thinks is a “magnificent possible world”.
Overall, I found the tone of this interview somewhat placid. Eleonore Stump has termed this sort of approach toward evil "the Hobbit attitude to evil" (note and update: to clarify, she does not refer to Plantinga's work in the essay, the interpretation is mine). She writes: “Some people glance into the mirror of evil and quickly look away. They take note, shake their heads sadly, and go about their business. ... Tolkien's hobbits are people like this. There is health and strength in their ability to forget the evil they have seen. Their good cheer makes them robust.” — In fairness, Plantinga did write defenses to account for the problem of evil, but in my view, he does not take it seriously enough. Eleonore Stump does not share Plantinga’s reasons for being a religious believer, nor do other philosophers of religion who have spoken out in Morris' and Kelly Clark’s collections of spiritual autobiographies of philosophers who believe. So why do Christian philosophers of religion believe that something like Christian theism is true?
(Many thanks to Bryce Huebner for drawing my attention to this work.) There has been a lot of speculation about whether or not sexual harassment is worse in philosophy than in other disciplines. While there are few hard data on this issue, a new paper by Dana Kabat-Farr and Lilia Cortina throws new light on this problem, looking at the correlations between gender disparity and harassment in a large sample of employees in the military, academia, and the court system. Across all these fields, the authors found that low representation of women is associated with higher levels of gender harassment. Gender harassment is defined as "a broad range of verbal and nonverbal behaviors not aimed at sexual cooperation but that convey insulting, hostile, and degrading attitudes" about people of one’s gender. Concretely, "when comparing a woman who works in a gender-balanced work-group to a woman who works with almost all men, we find that the latter woman is 1.68 times as likely to encounter [gender harassment]." Remarkably, they found no correlation between sexual advance harassment (which we have been hearing a lot about recently) and underrepresentation.
I think these data are highly relevant to the recent news about harassment in our profession, and that there are lessons in them for concrete policies.
Next week, I will be teaching my first tutorials at Oxford University (the subject is philosophy of cognitive science). For those unfamiliar with the format, tutorials are one of the forms of teaching at Oxford that every undergraduate has. A lecturer and a student (or a small group of students, maximum 4) convene every week, and the student is guided and gets intensive feedback on the fruits of their independent study. A common procedure is that the student writes a brief paper each week, which they present at the start of the tutorial. The tutor suggests further reading and urges the student to think and read further on the basis of what they have said. There is no lecturing as such going on - it is rather a form of guided self-study.
Tutorials are sometimes misunderstood as a form of hand-holding or spoon-feeding the student, but in fact the format encourages independence and responsibility. The student has to make sure to do all the reading, digest it, and be able to sit the final exam on the basis of it. As tutorials are one-on-one (or at most one-on-four), it is hard to hide or to resort to shortcuts instead of actually doing the reading and the thinking. Tutors get support and training in how to guide students back on track if they slack or lose motivation; timely interventions make sure the attrition and failure rates are very low.
Oxford's vice chancellor says the system will ultimately become too expensive, as tutorials cost more per student than the yearly tuition fees, which are capped at 9,000 GBP. Educating an Oxford student costs about 16,000 GBP, which leaves a gap of 7,000 GBP that is filled by various sources of money, such as the endowments of colleges. His suggestion is to increase tuition fees - we know the outcome of unbridled tuition fee increases, and it is a grim prospect. So one may wonder whether the tutorial is an institution worth preserving, given the costs.
(X-posted on Prosblogion) My last blogpost for this year will be a preliminary report on the qualitative survey I launched last month. In this open survey, I asked professional philosophers of religion (including graduate students) about their motivations and personal belief attitudes, and how their work relates to these beliefs. I am very grateful to all who participated (an amazing 151 respondents!), and to the British Academy for funding this research.
I was looking for the source of the following picture (see below), to find out more about it. The picture seems, to the casual observer, to be an early modern engraving, perhaps from Germany or the Low Countries. It shows a man who looks out of the confines of his world (the edge of the firmament), to look beyond, a nice illustration of the work of the metaphysician, theologian or scientist. However, a bit of poking around on the internet reveals that the engraving is probably a forgery - i.e., a late 19th century wood engraving that is deliberately made to resemble an early modern drawing. It appears first in Camille Flammarion’s L'atmosphère: météorologie populaire. The engraving was probably made on commission or by Flammarion himself to accompany the following text “A missionary of the Middle Ages tells that he had found the point where the sky and the Earth touch…” This text appears in the first edition of L’atmosphère but the engraving only appeared in the second edition.
Now that I look at it again, I wonder how I could ever have believed this to be an authentic early modern illustration. The borders, for instance, do not look authentic. The anatomy and pose of the man are unusual for 16th century iconography. It seems that artistic forgeries, when they are believed to be genuine, look genuine. Once we know, however, that they are forgeries, the many inauthentic elements seem pretty obvious: how could past art critics have been fooled?
In a series of experiments, the developmental psychologists Paul Harris, Kathleen Corriveau and Melissa Koenig have shown that young children are more confident about the existence of unobservable scientific entities than they are about the existence of unobservable (semi-)religious entities. 5-year-olds in the Boston area, for example, were more sure about the existence of germs and oxygen than about the existence of God and Santa Claus. The experimenters were surprised by this finding, and replicated it in several settings, including with children from religious households in Spain who were sent to religious (Catholic) schools, and children from a Mayan community in Mexico (Santa was replaced by local spirits that people widely express belief in). As I will show below the fold, a plausible explanation for why children are less confident about religious entities is that testimony about religious entities differs from testimony about most scientific entities. If that’s true, we need to rethink how to spread and promote the acceptance of “controversial” scientific ideas like climate change, the safety of vaccines, and evolutionary theory. For, as I will argue, some well-meant efforts to promote such ideas may actually backfire and fuel skepticism.
Next week, I will be speaking at a career development workshop for female Oxford graduate and master's students. One of the things I want to focus on is the importance of building a broad, strong, supportive professional network.
Academia is built on trust and personal relationships. Rarely are people invited as speakers at conferences, workshops, etc., purely on the basis of merit. Merit is an important consideration, but people want additional information (e.g., is she a good speaker, will she turn up?) that they can acquire through their network, either by directly knowing the potential invitee, or by knowing others who know her. People from one’s network can alert one to opportunities, including job opportunities. Without a professional network, one has no letter writers (except the advisor and readers of the dissertation), and one is excluded from many aspects of academic life that thrive on trust and personal relationships, such as being a keynote speaker or contributing to an edited volume. Moreover, people from one’s network provide opportunities for mentoring, friendship and mutual support in the very competitive environment that is academia. If one has to move state or country and leave friends and family behind, the ability to fall back on a network of professional comrades for support and friendship is very valuable. Therefore, I will advise the students to work on their networks early on, and to nurture them.
But there are problematic aspects to networking. Ned Dobos has argued that career networking is ‘an immoral attempt to gain an illegitimate advantage over others’. He makes clear that he doesn’t target emotional networking - plain old socialising - but specifically career networking, networking in the context of advancing one’s career, especially, but not uniquely, one’s job prospects.
It is not clear to me, however, that we can make a clean separation between career networking and emotional networking, especially in academia, where (for reasons I outlined above) one’s professional network and one’s emotional (friend) network overlap to some extent. Dobos offers several arguments against the legitimacy of career networking. Insofar as the search process is meritocratic, career networking is morally objectionable because it attempts to distort the meritocratic allocation of positions, in a process analogous to bribery, or to ‘earwigging’ - attempting to persuade judges outside of the formal process. In both cases, the career networker obtains an unfair advantage. Is it possible to engage in ethical career networking?
If Elisabeth Lloyd’s take on the female orgasm is correct—i.e. if it is homologous to the male orgasm—then FEMALE ORGASM is not a proper evolutionary category. Homology is sameness. Hence, male and female orgasms belong to the same category. The orgasm is an adaptation, whether male or female (and Lloyd should agree). It is not a spandrel or by-product.

I’ll get back to this in a moment, but first some background. There are five NewAPPSers who have a particular interest in the philosophy of biology: Roberta Millstein, Helen De Cruz, Catarina Dutilh Novaes, John Protevi, and myself. Aside from Roberta, each of us comes at it from a related area in which biological insight is important. For me, that area is perception. I have written quite a bit about biology, but my mind has always been at least half on the eye (and the ear, and the nose, and the tongue, . . .).

There is a divide among us with respect to a leading controversy in the field. Catarina is strongly anti-adaptationist and I am strongly adaptationist (perhaps because of my motivating interest in perception, which is exquisitely adaptive). Roberta, Helen, and John are somewhere in between, but likely closer to Catarina than to me. You can gauge where I stand when I tell you that in my view, Gould and Lewontin’s 1979 anti-adaptationist manifesto, “The Spandrels of San Marco and the Panglossian Paradigm”, is one of the worst, and certainly one of the most mendacious, papers I have ever read in any field. Among the five of us, I am sure I am alone in this. Given all of this, my take on adaptationism with regard to the orgasm may get a hotly negative response from my co-bloggers. Nevertheless, I’ll get on with it.
[this post originally appeared in Aesthetics for Birds as a guest post] Hayao Miyazaki's animated movie Ponyo features a tsunami. The tsunami is shown in its full threatening and destructive power, yet is rendered with great aesthetic sensibility. On several occasions, Miyazaki expressed his aesthetic delight in natural disasters, and defended it as follows:
There are many typhoons and earthquakes in Japan. There is no point in portraying these natural disasters as evil events. They are one of the givens in the world in which we live. I am always moved when I visit Venice to see that in this city which is sinking into the sea, people carry on living regardless. It is one of the givens of their life. In the same way people in Japan have a different perception of natural disasters.
Miyazaki is not the only artist to find inspiration in natural disasters. William Turner depicted with gusto a hapless steamboat struggling in a snowstorm. That we find aesthetic delight in natural disasters is puzzling. Why do we sometimes delight in natural disasters? And is it morally appropriate to do so? These questions have not often been addressed, because both aesthetics and psychology have tended to focus on positive and pleasurable aesthetic properties of nature, such as the delicacy of a flower, the slow twirling of autumn leaves, the majesty of a waterfall. But we are not only moved by nature (as Noël Carroll describes our intuitive and visceral response to nature) in its delicate, pretty form, but also in its destructive form.
[note: this blogpost collects some scattered thoughts I hope to organize in article form sooner rather than later, for my British Academy project on religious social epistemology, see here]
There is an ongoing debate about what we should do when we are confronted with disagreement with an epistemic peer: someone who is as knowledgeable and intellectually virtuous as we are in the domain in question. Should we revise our beliefs (conciliationism), or not engage in any doxastic revision (steadfastness)? Epistemologists aim to settle this question in a principled way, hoping that general principles like conciliationism and steadfastness can offer a solution not only for the toy examples that are being invoked, but also for real-world cases that we care passionately about, such as scientific, religious, political and philosophical disagreements. However, such cases have proven to be a hard nut to crack. A referee once commented, on a paper I submitted on epistemic peer disagreement in science, that the notion of epistemic peer in scientific practice was useless. S/he said "It works for simple cases like two spectators who disagree on which horse finished first, but when it comes to two scientists who disagree whether a fossil is a Homo floresiensis or Homo sapiens, the notion is just utterly useless."
That referee comment has always stuck in my mind as bad news for epistemology: if we can't apply our principled answers in epistemology to real-world cases of epistemic peerage, the debate is of marginal value. There seems to be an easy escape: one common response, by both steadfasters and conciliationists, has been that we need not revise our beliefs in complex, messy cases if we have reason to believe that we have access to some sort of insight that our epistemic peer lacks. van Inwagen, for instance, muses about his disagreements on some philosophical matters with David Lewis, whom he greatly respects: they both know the arguments, and both have considered them equally carefully. But ultimately, van Inwagen thinks:
I suppose my best guess is that I enjoy some sort of philosophical insight (I mean in relation to these three particular theses) that, for all his merits, is somehow denied to Lewis. And this would have to be an insight that is incommunicable - at least I don't know how to communicate it - for I have done all I can to communicate it to Lewis, and he has understood perfectly everything I have said, and he has not come to share my conclusions.
As one can see, the notion of epistemic peer simply dissolves here, since van Inwagen just asserts that he has insights in the domain in question that are denied to Lewis. To take another example, suppose you are a Christian faced with a seemingly equally intelligent atheist. According to Plantinga (WCB), this disagreement is not a defeater for your beliefs, as you can confidently assume your dissenting peer "has made a mistake, or has a blind spot, or hasn’t been wholly attentive, or hasn’t received some grace she has, or is blinded by ambition or pride or mother love or something else". But how do we know when we are right? Is the "feeling of knowledge", the conviction that we are right, any indication that we actually are right? I will argue here that it is not, and therefore, that simply discounting the other as an epistemic peer on this basis is not warranted.
If you are a professional philosopher, it is likely that at some point you will have to write a grant proposal. There are many types of grants: small intra-university grants, large grants funded by the government, grants by philanthropic organizations. In some countries, like Belgium or the Netherlands, grants are the chief means of academic survival for young academics, as it takes at least five years, often more, before one manages (if at all) to obtain a permanent position. Earlier I wrote about how frustrating grants are and how they pose the problems of the red queen effect and the tragedy of the commons.
I stand by this: collectively, grants have significant costs for the profession. But for an individual philosopher who wants to break into a new research area, and doesn't have loads of institutional funding already, projects are a great way to get in the game, to do the research you have always dreamed about doing, and to get the funding and time to actually do it!
How do you write a grant proposal? I've attended workshops on how to write them, talked to research facilitators, and consulted colleagues who have served on grant boards. I have also been an external referee for two granting agencies, so I have a sense of what makes a project look good. And I have also received several grants. The following tips (below the fold) are distilled from these experiences: