Feminist Philosophers drew attention to this Times Higher Education (THE) article on gender equality in academia. The article highlights striking differences between countries in women's participation in academia, with a 47% female participation rate in Turkey and an abysmal 12.7% in Japan as the two extremes (see the map through the link). For most of my academic career, I have studied and worked in Belgium, where female participation is very poor (it's one of the red countries on the map). Only 13% of full professors in Belgium are women; in the EU, only Cyprus and Luxembourg do worse. In this post, I want to examine causes of the disparity (the high percentage in Turkey, the low one in Belgium), drawing among other things on personal experience, and on this highly relevant article on Turkish academia.
[this is cross-posted at Prosblogion] Richard Dawkins has argued several times (e.g., here) that bringing up your child religiously is a form of child abuse. I think his argument that religious upbringing in general is child abuse has little merit (after all, Dawkins himself is the product of a traditional Anglican upbringing and calls himself - rather proudly - a cultural Anglican, hardly the victim of child abuse). However, his claim in the linked article is that parents who attempt to instill things like Young Earth Creationism (henceforth YEC) in their children are doing something wrong, or are somehow overstepping their role as parents. This question, I believe, is worthy of further attention.
I read this paper by David McNaughton on why philosophy is so tedious (recent link at Leiter's blog). Of the many interesting strands in this paper, I'd like to highlight this concern:
There is now so much to read that “keeping up with the current literature” could occupy every waking moment. But to what end? Do we really want to create a profession where, to get recognition and to advance one’s career, one has no time to do anything except philosophy? That is not good news for philosophers. It is neither sensible nor humane to encourage this work-centered monomania in anyone … Moreover, it is inimical to one of the traditional justifications of philosophy that sees it as a reflection on life, a discipline that trains you to understand the world in which you live better and so enables you, and others, to live better. But we are in danger of abandoning that conception and leaving professional philosophers no time and no incentive to put that wisdom into practice, to engage in other worthwhile activities. Is philosophical training a preparation for doing philosophy, and nothing more? … Nor is this degree of absorption in philosophy good for philosophy itself. It is (predominantly) a liberal discipline, and the best philosophy (especially in my own subjects, ethics and the philosophy of religion) is enriched by a wide, reflective, and imaginative experience of literature, politics, art, and science (McNaughton, p. 7).
The author here is right: if philosophy is indeed the love of wisdom, and its practice is embedded in a richer social, cultural, artistic, political, etc. context, it would be very strange that the only thing that could contribute to our work as philosophers would be reading papers and books by other philosophers.
Non-philosophical activities and concerns could enrich philosophical practice. By this I mean a wide variety of things, for instance, being a parent, a musician, someone who actively engages with a religious tradition, someone who is involved in political activism, etc. I would like to hear from readers how their non-philosophical activities have influenced and enriched their philosophy.
It would be valuable to get an idea of this, as I think there is increasing pressure, even on people who are not on a tenure track, to work incessantly, as if work alone makes a good philosopher, and as if one's personal life is mainly an impediment to being a blossoming philosopher. This is, of course, not a problem unique to philosophy (it pervades academia), but it does strike me as something our discipline needs to address.
I was recently having tea with a philosopher who is the head of an interdisciplinary research group. We talked about grant proposals. My interlocutor said he devoted a lot of his working hours (at least a third, by his estimation) to writing grant proposals. He also personally knew someone, not a philosopher but a researcher in an empirical discipline, who devoted as much as 70% of his time to grant writing. That person even said that he can now scarcely keep up with the literature in his highly specialized field, let alone contribute original research. But, given that his research group (comprising many PhD students and postdocs) depended on his ability to secure grants, there was no other option but to devote more and more time to the grant writing process.
Since grant schemes often ask for unrealistically elaborate timetables and detailed projected results, many experienced grant writers have turned to this heuristic (they have admitted this freely to me, and are unabashed about it - I haven't tried this for myself, but the practice is widespread):
Write a grant proposal that describes the work you have recently done (let's call this research project X).
If your proposal gets funded, you start doing the research you really want to do (research project Y).
If asked for a report of results, you simply mention the papers that are now in press, undergoing review or are recently published from project X; you do not mention the actual work that is now going on in your centre or lab, project Y.
About a year before the completion of your current grant, you start developing a new grant proposal, this time detailing how you will carry out project Y (which, of course, is already all but completed), allowing you to pursue project Z in the future.
And so on. This practice illustrates, I believe, that there is something deeply wrong with the grant making process as it is currently practiced.
I've been recently reading some work by theistic philosophers and theologians who accept evolutionary theory. They seek to interpret scripture in such a way that it is compatible with the evolution of humans and other animals. One promising recent strategy is to read Genesis 1-3 through the theology of Irenaeus rather than through Augustine, trading one patristic author for another. Here, I want to examine whether this is a reasonable strategy for the empirically-informed theist.
It is still very common for students to get readings only by male authors in their introductory classes to philosophy. This contributes to the image of philosophy as a boys-only discipline. It would therefore be useful to have a list of readings written by women that are suitable for philosophy courses, such as general introduction to philosophy, philosophy of science, ethics, and epistemology.
I would like to invite readers to contribute their favorite pieces, written by women philosophers, to the following Google spreadsheet (please fill out the spreadsheet, rather than using the comment section, except if you experience difficulties with the spreadsheet).
In the first instance, the focus would be on papers and book excerpts that are not overly specialist or technical, suitable for intro-level or intermediate courses. Ideally, they should have made a significant impact on their field. They should be readings you have either already successfully used in a class context, or envisage using.
As is well known, philosophy is a very male-dominated (and white, straight, etc) field, when compared to all other humanities, social sciences, and even several STEM disciplines. Even if we take into account the difficulties that minorities face in academia, we cannot explain why philosophy does worse than most other academic fields. I'd like to put a slightly controversial idea on the table: there are good reasons to believe that philosophers are less effective than academics from other fields in their ability to counter their own biases, i.e., they exhibit a larger bias blind spot.
I regret to inform you that Awesome Bigname Philosophy Journal cannot accept your paper for publication. After googling the title of your paper, and failing that, lines from your abstract and paper, our referee discovered your identity. He found that you are a nobody from a lackluster university, without a tenured or tenure-track position, a mere [adjunct teacher, grad student, postdoc, etc.], and [a woman, black, non-English speaker, etc.] to boot. Therefore, after a perfunctory glance at your paper, the referee has decided that your paper is not of high enough quality to be published in ABPJ.
We pass on the referees' comments in the hope that they may prove useful. We receive over n submissions each year, and must reject many very competent papers, especially those written by people at the bottom of the academic ladder. We hope that your work will find a home in another journal, though obviously one not as highly regarded as ABPJ.
Eric has recently drawn attention to this wonderful paper by L.A. Paul. The paper focuses on the question of how we make decisions that can transform our lives, and whether we can ever do so rationally. Her paper looks at the decision whether or not to have children, but it applies to other potentially life-transforming decisions, such as whether or not to go to graduate school or get involved romantically with someone.
Here, I don't want to focus on Paul's claims about the extent to which we have knowledge about what it's like to be a parent. Like Eric, I think this depends a lot on cultural context, and westerners seem to be in a particularly impoverished epistemic position because of the rarity of children and the cultural ideals that surround parenthood. Parenthood is described in unrealistically romantic language (e.g., when I was pregnant, friends and family assured me that I would be in a blissful, rosy, cloud-like state after the birth of my child; that breastfeeding would be easy and a wonderful way to connect to my baby; that I would forget the pain of childbirth the moment I held her in my arms; all claims that turned out, at least for me, to be false, and made me wonder if something was wrong with me).
But I think that Paul is nevertheless right that decision theory does not provide us with the right tools to make potentially life-transforming decisions. When westerners today have children, Paul observes, there is a cultural ideal to "think carefully and clearly about what they want before deciding that they want to start a family." How do we do this? According to standard decision theory, "we first partition the logical space by determining the possible states that are the outcomes of each act we might perform. After we have the space of possible outcomes, we assign each outcome a value (or utility), and determine the probability of each outcome’s occurring, given the performance of the act." However, she goes on to argue, convincingly, that this model fails, as it is impossible to calculate expected value based on preferences about what it would be like to have one's own child.
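The expected-value machinery Paul describes can be made concrete in a few lines. The probabilities and utilities below are invented placeholders for illustration only; her point is precisely that no such numbers are available to a prospective parent:

```python
# Standard decision theory: partition the outcomes of each act, assign each
# outcome a utility, weight by its probability, and pick the act with the
# highest expected value. All numbers here are made up for illustration.

def expected_value(outcomes):
    """Sum of probability-weighted utilities over a partition of outcomes."""
    return sum(p * u for p, u in outcomes.values())

# Outcomes of the act "have a child": {outcome: (probability, utility)}
have_child = {"flourish as a parent": (0.6, 10), "struggle": (0.4, -5)}
# Outcomes of the act "remain childless"
childless = {"fulfilling life": (0.7, 6), "regret": (0.3, -2)}

print(expected_value(have_child))           # 0.6*10 + 0.4*(-5) = 4.0
print(round(expected_value(childless), 2))  # 0.7*6 + 0.3*(-2) = 3.6
```

Paul's argument is that for transformative experiences the inputs to this calculation, especially the utilities, cannot be known in advance, so the formalism cannot get a grip.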
There is a very useful thread up at Feminist Philosophers on the unwritten rules of the game of being a professional philosopher, as they apply to publishing, collegiality, teaching etc. A lot of knowledge we acquire as academics is tacit, not systematically taught or collected, and we have to discover it piecemeal over time. Some supervisors and programs help to make some of the tacit knowledge explicit (e.g., by organizing workshops on how to put together a good job application), but many do not. In any case, even if we take that into account, I think most knowledge transmission about academia is still informal.
Yet such knowledge is vital to thrive as an academic. Is it OK for a grad student to approach a specialist in her field whom she has never met to ask for feedback on an unpublished paper? Is it acceptable to use someone else's syllabus as a basis for your own course? When is it appropriate to contact a journal editor to gently remind them about your paper?
The thread contains a lot of useful knowing-how information, but alongside that there is also a lot of tacit knowing-that information that more experienced academics have.
For instance, as an undergrad I did not appreciate the difference between professors and various forms of adjunct faculty and postdocs. I simply saw them all as professors, cozily tenured until retirement. And this is a common mistake: an author for the Chronicle of Higher Education recounts how students at a liberal arts college thought her title 'visiting assistant professor' meant she was a distinguished tenured philosopher, visiting from another institution. They assumed that after her contract ended she would safely return to her home institution. The bleakness of the job market often only becomes apparent to people in their final years of graduate school.
These reflections are inspired by my reading of Howard Wettstein's book "The significance of religious experience" (OUP), Gutting's piece in the Stone on agnosticism, and a recent BBC report on an atheist church in London.
I am deeply intrigued by atheist religious practice. An atheist church in North London opened last month. It has proven very popular; as a matter of fact, it vastly outstrips the neighboring Anglican evangelical church in congregation size. The ca. 300 members of this church congregate to sing secular songs, celebrate life and the natural world, hear readings from secular texts, like Alice in Wonderland, and listen to secular sermons, on topics like "life is all too brief and nothing comes after it." The atheist church fits into a broader tendency among atheists to incorporate aspects of religious practice, including Alain de Botton's temples for atheists. Is there any point for an atheist who is attracted to religious practice to attend atheist ceremonies, structured in ways similar to traditional religions?
There are several measures to indicate the quality of a journal. In philosophy, there seems to be some degree of consensus on the relative prestige of journals, e.g., most would probably agree that Philosophical Review is a better journal to publish in than Unknown Local Journal of Philosophy. Here is a list for general philosophy that is, I take it, not too controversial. Here is a list of both general and specialist journals.
In Google Scholar you can find citation metrics that measure the influence of a journal by counting its citations (see below the fold for the top 10 philosophy journals according to citation count).
In the 1980s, Ruse wrote a series of important papers that revived evolutionary ethics. The debate on the implications of evolved moral intuitions for ethics remains very active up to today (see e.g., this conference that I'll be attending in a couple of hours, at least if the British railway system isn't disrupted by half an inch of snow!). Contemporary evolutionary ethics can build on a wealth of research, for instance, in the cognitive neuroscience of morality, developmental psychology, and the study of altruism in animals. But the metaethics of the folk remains a relatively understudied area. Are people intuitive moral realists? If so, what is the connection between metaethics and behavior?
Ruse hypothesized that humans are intuitive moral realists, and that this metaethical intuition has an evolved function: "human beings function better if they are deceived by their genes into thinking that there is a disinterested objective morality binding upon them, which all should obey" (Ruse & Wilson, 1986, 179). Ruse thought that if everyone thought that morality was subjective, merely a matter of taste or convention, our social systems would collapse. On this view, intuitive moral realism is a key component of human altruistic behavior: social life is held together by moral beliefs, which are in turn cemented by intuitive moral realism. As Ruse wrote later on: "Substantive morality stays in place as an effective illusion because we think that it is no illusion but the real thing" (Ruse, 2010, 310).
When Ruse first formulated this hypothesis, it was by no means clear that humans were intuitive moral realists. Nor was it clear to what extent intuitive moral realism, if we indeed have it, helps us to act more morally. In the meantime, there is some empirical work on this, which I'll discuss briefly below the fold.
Evolutionary naturalism provides an account of our capacities that undermines their reliability, and in doing so undermines itself...I agree with Alvin Plantinga that...the application of evolutionary theory to the understanding of our own cognitive capacities should undermine, though it need not completely destroy, our confidence in them. Mechanisms of belief formation that have selective advantage in the everyday struggle for existence do not warrant our confidence in the construction of theoretical accounts of the world as a whole. I think the evolutionary hypothesis would imply that though our cognitive capacities could be reliable, we do not have the kind of reason to rely on them that we ordinarily take ourselves to have in using them directly--as we do in science. Thomas Nagel, Mind and Cosmos, 27-28 (emphasis in original)
A non-trivial (albeit not the most fundamental) feature of Nagel's book (recall my here, here, here; see Feser's response to me and also Mohan's posts: here, here, here and here) is his reliance on Plantinga's so-called evolutionary argument against naturalism (hereafter EAAN; see also pp. 74-78). Let's leave aside the fact that Nagel pretends in his book that this (evolving) EAAN argument has not been subject to significant criticism. (It must be convenient to think that one is obliged to engage only with one's referee [Sober, although even his criticism of EAAN is ignored], one's colleague [Street], one's cheerleader [Plantinga], and one's deus ex machina [Hawthorne & Nolan].) Here I explore a response to this style of argument that is overlooked by Nagel and, I think, not explored in the literature (but I would love to learn otherwise; it's not my field). So, let's grant -- for the sake of argument -- the claim that "Mechanisms of belief formation that have selective advantage in the everyday struggle for existence do not warrant our confidence in the construction of theoretical accounts of the world as a whole." What follows from this?
My quick and dirty answer is: nothing. For the crucial parts of science really do not rely on such mechanisms of belief formation. Much of scientific reasoning is or can be performed by machines; as I have argued before, ordinary cognition, perception, and locution do not really matter epistemically in the sciences.
There is a recent interesting Prosblogion blogpost on the question of whether theodicy can ever be successful, and if so, what success conditions a theodicy must meet. I want to consider a related, yet distinct question: can theodicies be convincing in the light of specific instances of evil, and the immediate sense these provoke: "God, if he exists, would not have allowed this"? In the wake of the tragic shooting incident at Newtown, I have been thinking a lot about the problem of evil and classical theodicies and defenses, such as John Hick's soul-building theodicy and various forms of free will theodicies/defenses (e.g., Plantinga's; Augustine's).
One way to approach the problem of evil is to look at it as an abstract puzzle to be solved. Wielding modal logic and other tools that analytic philosophy offers, we can argue that evil is unavoidable even for a loving, powerful, and omniscient God, if he wishes specific goods like free will to obtain. A different option is to focus on concrete, vivid examples. William Rowe presented the case of a fawn trapped in a forest fire caused by lightning: the fawn suffers horrible burns, and lies in dreadful agony for days until its death. This is a pointless instance of suffering that, Rowe argues, God could have prevented.
Now for cases like Newtown we could invoke the free will defense, since - unlike the forest fire in Rowe's example - the incident was caused by a human agent, exercising his free will, and it was made possible by other instances of free will, such as American policies on gun ownership. But it still seems to me quite a different thing to argue in the face of particular, vivid instances like this that suffering is outweighed by the greater good of the unbridled exercise of free will by moral agents. When confronted with concrete evil like this, theodicy, or indeed any theistic response to the problem of evil, becomes a formidable task indeed.
Hypatia, a leading journal in feminist philosophy, has decided to place a moratorium on new submissions until July 2013. There have been moratoria in other journals, for instance in Noûs and PPR, but these are perhaps a bit less problematic, as they are general philosophy journals, and so you could find an alternative home for your paper in epistemology, metaphysics, philosophy of language etc. if there is a moratorium in those journals. By contrast, while Hypatia is not the only venue for feminist philosophy, it is the most prominent one, and it still seems less straightforward to publish feminist philosophy in general philosophy journals (I see no principled reason against this, but it still remains rare).
Moratoria on new submissions are often motivated by long publication backlogs and increased pressure on editors and referees, both caused by a rise in submissions. For Hypatia, the editors attribute the steep rise in submissions to an increased interest in feminist philosophy. For Noûs and PPR, the reason for increased submissions might be that these journals combine relatively decent refereeing turnaround times with an excellent reputation, which leads young researchers to seek out these venues rather than, say, Mind, which has a very long average review time (Cullison's journal surveys provide comparative data). It seems to me (although I cannot immediately find quantitative data to back this up) that increased submissions by graduate students and other junior philosophers contribute to the problem.
While the decision of Hypatia's editors is understandable, I think that moratoria, in the long run, are not a good solution. They disadvantage untenured people, who need good publications on their CVs for jobs or tenure evaluations, and who often have to get such papers out on a relatively tight schedule. If the primary venues for their work happen to have a moratorium, this introduces an unacceptable bad-luck factor. So what are the alternatives?
Mohan wrote a very interesting blogpost, in which he argues that Obama's belief that his "belief in evolution" is compatible with faith is "an utterly false platitude". Mohan writes "Whether or not science is literally "incompatible" with religion, it seems to me obvious that belief in evolution should decrease one's faith inasmuch as it takes away one main reason for believing in God. Certainly, I can't see how knowledge of evolution could possibly strengthen anybody's rationally based faith." I disagree with this view, and would like to put forward some of the reasons for why I think this.
The more general question behind Mohan's is: to what extent is religious faith dependent on natural theology? I take natural theology not only in the narrow sense, as in e.g., the cosmological argument, design argument, and moral argument, but also in a broader sense of theology that is based not on personal experience or revelation but on a reasoned consideration of natural phenomena. One strand of natural theology, espoused for instance by Swinburne, holds that God provides a good explanation of some natural phenomena (e.g., the existence of the universe, fine-tuning, etc.). Now, there seems to be a tendency among American philosophers, even those whose position towards natural theology is ambivalent (Plantinga is a good example), to hold that natural theology in this sense provides additional warrant for faith. If natural science can't explain something, we can appeal to God, so the idea goes, and this easily gives rise to the view that science and natural theology are somehow in competition. However, this view is strongly tied to a very specific perspective on natural theology, one which has not been endorsed in much of the history of theology: namely, that natural theology is somehow prior to faith, a view also endorsed by some recent atheist writers on the topic (e.g., Philipse). Let me stress that this is not in line with the way the relationship between faith and science has been regarded throughout history.
Like every mild chocolate addict, I am always keen to read studies arguing that chocolate is somehow beneficial to one's health. The most recent exemplar in this long line of studies is one recently published in the New England Journal of Medicine, which indicated that the number of Nobel laureates of a country, controlling for population size, correlates positively with annual chocolate consumption per capita. As always in this sort of correlational study involving chocolate, the results are probably spurious. They crumble on closer scrutiny, and at present seem to provoke mainly snickering in the scientific community. The bitter lesson is that we should raise the bar in our research, in particular in controlling for lurking variables, or just freaky results (in a large sample, so a statistical dictum goes, outrageous results are bound to happen; moreover, the author "reports regular daily chocolate consumption, mostly but not exclusively in the form of Lindt's dark varieties.")
Even if we do not take this pessimistic outlook, a lot of findings reported in the medical literature (and, it increasingly appears, in the social sciences too) are false positives. So given the high probability of false positives, why is news about the adverse or beneficial effects of food, even if published in relatively obscure journals, headline material?
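The "outrageous results are bound to happen" dictum is easy to demonstrate with a quick simulation (my own sketch, not from the NEJM paper): test enough unrelated "food" variables against an outcome and roughly 5% will clear the conventional significance bar by chance alone.

```python
import numpy as np

rng = np.random.default_rng(0)
n_countries, n_foods = 20, 1000

# One outcome (say, Nobel laureates per capita) and many "exposures"
# (per-capita consumption of various foods), all independent by construction.
outcome = rng.normal(size=n_countries)
exposures = rng.normal(size=(n_foods, n_countries))

# Pearson correlation of each unrelated exposure with the outcome
r = np.array([np.corrcoef(x, outcome)[0, 1] for x in exposures])

# With n = 20, |r| > ~0.444 corresponds to p < 0.05 (two-sided); since every
# null hypothesis is true here, each "significant" hit is a false positive.
false_positives = int((np.abs(r) > 0.444).sum())
print(false_positives)  # expect on the order of 0.05 * 1000 = 50 spurious "food effects"
```

Every one of those hits could, in principle, become a headline about the adverse or beneficial effects of some food.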
It's hard not to be fascinated by hand axes, those enigmatic guide fossils of the Lower Paleolithic. Hand axes were symmetrical tools produced by diverse species of hominids from about 1.7 million years ago. A small proportion of them are highly symmetrical, finely finished, and polished, far beyond what one would expect for purely practical purposes. The beauty of such extreme hand axes has made some people muse about whether they could be considered artworks. For instance, the philosopher of art Gregory Currie describes a British hand axe as follows:
a piece of worked stone, shaped as an elongated tear drop, roughly symmetrical in two dimensions, with a twist to the symmetry which has retained an embedded fossil. In size and shape it would not have been a useful butchery implement, and is worked on to a degree out of proportion to any likely use. While it may be too much to call it an “early work of art,” it is at least suggestive of an aesthetic sensibility (Currie, 2009, 1).
Archaeologists have voiced similar sentiments:
If one stresses aesthetics […] at least a borderline case of art before modern humans is provided by a tiny proportion of the billions of Acheulean handaxes produced in Africa and, subsequently, Eurasia from about 1.5 million to 35,000 years ago (if the Mousterian of Acheulean Tradition is included). An estimated 1 in 100, or perhaps even 1 per 50 (which is an enormous number, given the total amount of handaxes) shows up symmetry and regularity seemingly beyond practical requirements (Corbey, Layton & Tanner, 2008).
So can we go a bit further and argue that these hand axes are, or could be, works of art?
But all the appetites which take their origin from a certain state of the body, seem to suggest the means of their own gratification; and even long before experience, some anticipation or preconception of the pleasure which attends that gratification. In the appetite for sex, which frequently, I am disposed to believe almost always, comes a long time before the age of puberty, this is perfectly and distinctly evident. The appetite for food suggests to the new-born infant the operation of sucking, the only means by which it can possibly gratify that appetite. It is continually sucking. It sucks whatever is presented to its mouth. It sucks even when there is nothing presented to its mouth, and some anticipation or preconception of the pleasure which it is to enjoy in sucking, seems to make it delight in putting its mouth in the shape and configuration by which it alone can enjoy that pleasure. There are other appetites in which the most unexperienced imagination produces a similar effect upon the organs which Nature has provided for their gratification.--Adam Smith (“Of the External Senses,” 79, p. 165)
Smith clearly commits himself to the existence of what (in my book-manuscript-in-progress) I label "proto-passions." In fact, the two examples he offers (appetite for sex and appetite for food) are two of the original passions (not unlike the natural sentiment of resentment).

Here, Smith is adamant that such proto-passions are innate (“long before experience…the new-born infant…the most unexperienced imagination.”) So, while it is, of course, possible that some proto-passions are themselves a consequence of habitual experience (this is implied, perhaps, by a passage that I have discussed here), Smith appears to think that a group (“there are other appetites”) of the proto-passions are innate (in his terminology: they are provided by Nature, not experience). Of course, that proto-passions are innate is compatible with the further claim that they require environmental cues or triggering objects to be activated. While below I describe some pre-conceptions that according to Smith do require such triggering objects, Smith clearly thinks some of those innate proto-passions are self-activating: the infant “sucks even when there is nothing presented to its mouth.” Even though Smith was a life-long, childless bachelor, he clearly showed an active interest in child development (and I think this makes him so insightful on the mutual emotional regulation that Helen calls attention to).
This is X-posted from Prosblogion. Let me be clear from the outset: the majority of work in analytic philosophy of religion (PoR) does not aim to proselytize, but is concerned with fairly technical topics, such as the possibility of creaturely free will in heaven, the compatibility of specific divine attributes, or the evidential problem of evil. But some portion of PoR is clearly aimed at convincing the reader that religious belief (usually, Christianity, given the demographics of academic philosophy) is reasonable. To this end, philosophers construct sophisticated arguments, for instance, to show that religious belief does not require evidence, that religious faith is also, or even primarily, a matter of practical rationality, that the evidence is overwhelmingly in favor of theism, etc. Plantinga and Swinburne are good examples.
Such philosophy of religion can be plausibly regarded as a form of proselytism--I'm using a wider term than the usual apologetics, as apologetics is the more narrow notion of systematically defending a particular religious position. But I'm not entirely happy with the term proselytism either, since I also think that some of this PoR is aimed at those people who have religious faith, but who are wavering, for instance, because others tell them their faith is not rational. So I'll settle for proselytism cum apologetics as a not entirely satisfactory term for this type of PoR. Is it acceptable for philosophers of religion to engage in proselytism/apologetics?
Mohan's recent post on looming theocracy in America made me think more about why Americans would not want an atheist as president, or indeed in any other important political position. Research by Azim Shariff, for instance Gervais, Shariff, and Norenzayan (2011), indicates that prejudice against atheists is widespread, and primarily fueled by a distrust of atheists. Remarkably, this research was carried out in Canada, with undergraduates from the University of British Columbia, not a particularly religiously zealous population. Nevertheless, the research indicates a pervasive distrust of atheists. Just to give a flavor of this, one experiment had participants read a story about a 31-year-old man, Richard, who does some morally questionable stuff. "Richard is 31 years old. On his way to work one day, he accidentally backed his car into a parked van. Because pedestrians were watching, he got out of his car. He pretended to write down his insurance information. He then tucked the blank note into the van’s window before getting back into his car and driving away. Later the same day, Richard found a wallet on the sidewalk. Nobody was looking, so he took all of the money out of the wallet. He then threw the wallet in a trash can."
Students were then asked whether they thought it more probable that Richard was a teacher, or that Richard was a teacher AND xxx, where xxx was - between subjects - "Christian", "Muslim", "rapist", or "atheist". Remarkably (stunningly!), students thought it more likely that Richard was a teacher AND an atheist, or a teacher AND a rapist, than that he was a teacher AND a Muslim or a Christian. They committed the conjunction fallacy least with Christians, a bit more with Muslims, and most with rapists and atheists. The difference between atheist and rapist was not statistically significant….
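The conjunction rule the students violated is elementary probability: for any two events A and B, P(A and B) can never exceed P(A), whatever the base rates. A minimal sketch with made-up numbers (nothing here reflects real demographics; the figures are purely illustrative):

```python
# Conjunction rule: P(A and B) <= P(A), for any events A and B.
# The numbers below are arbitrary illustrative assumptions.
p_teacher = 0.02                 # P(Richard is a teacher), assumed
p_atheist_given_teacher = 0.10   # P(atheist | teacher), assumed

# The joint probability is a fraction of the marginal, so it is always smaller:
p_teacher_and_atheist = p_teacher * p_atheist_given_teacher

assert p_teacher_and_atheist <= p_teacher
print(p_teacher, p_teacher_and_atheist)  # 0.02 0.002
```

Whatever values we plug in, "teacher AND atheist" comes out no more probable than "teacher" alone, which is exactly what the participants' rankings violated.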
This brief musing is prompted by Catarina's excellent post on the scarcity of women in Mind. In the interesting discussion that developed, it turned out that Mind does (or at least did in the recent past) practice triple-blind refereeing. But lack of triple-blind refereeing is not the only factor that might account for these low percentages. I believe that long review times combined with tiny acceptance rates might also help explain why women do not publish much in top philosophy journals. Mind has, according to Cullison's journal surveys, a long review time (7 months). Mind is not alone in this: the Journal of Philosophy, for instance, has an average review time of 10.44 months. Not all top philosophy journals are slow; Noûs and PPR are faster. But the tiny acceptance rate remains an additional barrier for people who need to get their first publications out, or to beef up their tenure dossier. People in TT positions, or postdocs and grad students trying to get TT positions, cannot afford to wait a very long time for what will in all likelihood be a rejection. By contrast, people with tenure can afford to wait for a decision. Very long decision times thus give tenured faculty members an advantage, at the expense of (male and female) junior faculty, graduate students, and postdocs.
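To see how these numbers bite, consider a crude back-of-the-envelope model: submit, wait the full review time for a decision, and resubmit elsewhere on rejection. Under a geometric model with assumed figures (the 7 months echoes the Mind survey above; the 5% acceptance rate is an illustrative guess, not a reported statistic), the expected wait to a first acceptance is striking:

```python
# Expected time to a first acceptance under a crude geometric model.
# Both figures below are illustrative assumptions, not journal statistics.
review_time_months = 7.0   # one full review round, Mind-like
acceptance_rate = 0.05     # assumed tiny acceptance rate

# With independent rounds, expected number of submissions until acceptance = 1/p.
expected_rounds = 1 / acceptance_rate
expected_wait_months = expected_rounds * review_time_months

print(expected_rounds, expected_wait_months)  # 20.0 140.0
```

Twenty rounds and well over a decade of expected waiting: a tenured author can ride this out, but it is a non-starter for anyone on the tenure clock.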
[this is cross-posted from the Cognition & Culture blog] In the public sphere, religious beliefs are often considered a matter of private sentiment or preference, not a matter of fact. While this may be helpful for the maintenance of a pluralistic society, religious individuals often regard their beliefs as true in an objective sense. Attempts to incorporate fictionalism into religious practice, such as the Anglican Sea of Faith, have met with only limited success. There is thus a tension between the large diversity of religious beliefs, which prompts a more subjectivist understanding, and the appraisal of individual religious believers, who seem to have a more fact-like understanding. How do we intuitively conceptualize religious beliefs?
After Darwin's complete works went online, Alfred Russel Wallace's complete work is now also available online. Surveying this online collection, it's impressive to see the diversity of Wallace's work and his capacity to start entire new fields of enquiry. I particularly admire Wallace because, unlike Darwin, he did not come from a privileged social environment. His family was relatively poor, and Wallace could not enjoy a university education; he was largely self-taught, reading books by Malthus, Lyell, and Chambers (the then anonymously published Vestiges).
Unlike Darwin, who married into wealth, Wallace struggled financially all his life. This led Wallace to engage in endeavors that his richer gentleman-scholar colleagues frowned upon, such as a £500 wager against a flat-Earth theorist (John Hampden) that he could prove the flat-Earth theory incorrect. Wallace was also a controversial figure who was, amongst other things, into spiritualism, and a proponent of intelligent design to explain the human intellect.
Below the fold, I'd like to highlight just two of Wallace's lesser-known intellectual achievements, which still have a large impact on scientific practice today:
[X-posted at Prosblogion] I've just re-read Paul Griffiths' and John Wilkins' inspiring paper on evolutionary debunking arguments (EDAs) for religion (a very influential paper on the cognitive science of religion and evolutionary debunking, despite its not having appeared in print yet) for a chapter of a monograph I'm writing. Using Guy Kahane's genealogical framework for debunking arguments, they argue that natural selection is an off-track process, i.e., one that does not track truth: it produces beliefs in a manner that is insensitive to the truth of those beliefs. From this, the debunker concludes that the beliefs that are the outputs of evolved systems are unjustified:
Causal premise. S's belief that p is explained by X.
Epistemic premise. X is an off-track process.
Therefore, S's belief that p is unjustified.
When we apply this argument in a generalized manner, where X stands for "natural selection", it looks like a bad strategy for the naturalist: ultimately, it leads to self-defeat in a Plantingesque manner that most proponents of EDAs would like to avoid. G&W's position is more subtle: they don't want to treat truth-tracking and fitness-tracking as competing explanations; instead, they argue that fitness-tracking and truth-tracking operate at different explanatory levels. In many cases, tracking truth *is* the best way of tracking fitness, especially given (1) that cognition is costly (brains consume a lot of energy), (2) that your beliefs influence how you behave, and (3) that your behavior influences your fitness. They propose "Milvian bridges", which link truth-tracking and fitness-tracking, in order to salvage commonsense and scientific beliefs.
Having presented his famous wager, Pascal considers the atheist who sincerely wants to bet on God but who is psychologically unable to. He urges "Endeavour, then, to convince yourself, not by increase of proofs of God, but by the abatement of your passions. You would like to attain faith and do not know the way; you would like to cure yourself of unbelief and ask the remedy for it. Learn of those who have been bound like you […] Follow the way by which they began; by acting as if they believed, taking the holy water, having masses said, etc. Even this will naturally make you believe, and deaden your acuteness."
Tim Mawson (member of the Oxford philosophy faculty) has a similar suggestion in a paper published in International Journal for Philosophy of Religion. Suppose you are an atheist but consider it possible (with a non-negligible probability) that God exists. According to Mawson, you are then under an epistemic obligation to pray to God, asking him to stop you from being an atheist. He writes "the person who prays that God help him or her to believe in Him is as reasonable as someone who finds himself or herself shouting ‘Is anyone there?’ in a darkened room about which he or she has various reasonable prior beliefs." If you hear something, that's prima facie evidence that there is in fact a person in the dark room. If you don't hear anything, this is evidence too: "When one shouts into a darkened room, ‘Is there anyone there?’ and hears nothing by way of reply, this is in itself evidence that there’s no-one there, all other things being equal."
The problem is that the evidence is defeasible (especially if it is positive), and I think it is even more so than Mawson allows for.
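The defeasibility point can be made precise with a toy Bayesian update. The sketch below is my own illustration, not Mawson's model, and all the probabilities in it are assumptions: silence is strong evidence of absence only when a reply would be likely given presence, and a theist can reasonably hold that God often does not answer.

```python
# Toy Bayesian update for the 'darkened room' analogy.
# All probabilities are illustrative assumptions, not Mawson's numbers.
def posterior_given_silence(prior, p_reply_if_present):
    """P(present | silence), assuming silence is certain if no one is there."""
    p_silence_if_present = 1 - p_reply_if_present
    numerator = prior * p_silence_if_present
    denominator = numerator + (1 - prior) * 1.0  # P(silence | absent) = 1
    return numerator / denominator

# If a person in the room would very likely reply, silence is strong
# evidence of absence:
print(posterior_given_silence(0.5, 0.9))  # ~0.09

# But if a reply is unlikely even given presence (as a theist may hold
# about God), silence barely moves the posterior -- the evidence is weak:
print(posterior_given_silence(0.5, 0.1))  # ~0.47
```

The same asymmetry holds for the positive case: a "heard" answer is strong evidence only if it would be unlikely absent God, which is precisely what is contested.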
Following the high-profile cases of Diederik Stapel, Dirk Smeesters, and Marc Hauser, there has been a lot of blogging on NewAPPs and elsewhere about academic fraud in experimental psychology. Between outright fraud (fabricating data, omitting data, etc.) and doing clean experiments, there is a large grey zone.
How do we delineate what is and isn't ethical in experimental psychology?
Richard Feynman compared grey-zone practices in psychology and other fields to Oceanic cargo cults, whose adherents believe the ancestors will come and shower them with cargo if only they perform the right rituals. Feynman admonished scientists that they were in effect behaving like cargo cultists. To avoid cargo cult science, we need a rigorous ethics of conducting and reporting experiments. Here is a recent take on this for psychology.
However, the practice of publishing in experimental psychology makes it difficult to adopt these simple principles. In reporting a series of experiments, a team of psychologists is not merely reporting the testing of a hypothesis, but constructing a compelling narrative. Below the fold are some common practices (as I gather from speaking with experimental psychologists).
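One of these grey-zone practices, optional stopping (peeking at the data after every batch of participants and stopping as soon as the result is "significant"), can be simulated directly. The sketch below is purely illustrative: the batch size, number of looks, and simulation count are arbitrary choices of mine, not figures from any study. Even with no real effect, repeated peeking pushes the false-positive rate well above the nominal 5%.

```python
import math
import random

# Simulate optional stopping under the null hypothesis (no effect at all).
# At each 'look' we run a z-test on the accumulated data (known sd = 1)
# and stop as soon as it crosses |z| > 1.96, i.e., p < .05 two-sided.
def false_positive_rate(n_sims=2000, batch=20, looks=10, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        data = []
        for _ in range(looks):
            data.extend(rng.gauss(0, 1) for _ in range(batch))
            n = len(data)
            mean = sum(data) / n
            se = 1 / math.sqrt(n)
            if abs(mean) > 1.96 * se:  # 'significant' at this look -> publish!
                hits += 1
                break
    return hits / n_sims

rate = false_positive_rate()
print(rate)  # well above the nominal 0.05
```

With ten looks, roughly a fifth of null experiments come out "significant": no single step involves fabrication, yet the reported narrative is badly misleading.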
An interesting article on the BBC discusses the rising numbers of so-called "nones" in America: people who would in the past have ticked the Christian box when asked for their religious affiliation, and who now tick "none". Most of them are not atheists, i.e., explicit and firm in their rejection of a supernatural worldview, though those numbers are rising too, from 1% to 5%. The larger part of those 40% who do not self-identify as religiously affiliated are "spiritual but not religious". Weekly church attendance in the United States went down from about 40% in 1970 to 33% in 2010. There is a clear downward trend in church membership. Mainline Protestants are hit hardest, but evangelicals are not doing well either. Catholics should not be complacent: their steep decline in membership is masked by the influx of Hispanics.
It is not easy to find out why this is happening. I was reminded of a lovely lecture by the gerontologist Vern Bengtson at Oxford some months ago. He examined how aging correlates with religiosity. In particular: is the fact that we see mainly older people in church a cohort effect (i.e., younger generations are less churchy) or an age effect (i.e., when one gets older, one thinks "Oh no, I'm going to die, I'd better go to church then!")?
As a girl growing up in the late 1970s, I played with a lot of gender-neutral toys like Lego. I had dolls as well as cars. Now the world has changed. Toy shops are divided into distinctly girl and boy areas, easily distinguished by their color codes: pink areas with some pastel colors for the girls, and areas with lots of other colors (mainly blue, black, and green, with some dashes of red and orange) for the boys. And not only are toys gendered, but also items for adults, such as laptops, phones, and now - via Feminist Philosophers - even the Bic for Her.
A brief selection of reviews for this item: "The days of confusion over unisex pens are finally over. I was sick and tired of my naïve and, quite frankly, deluded girlfriend believing that she had the right to use my male products."; "For years I've been working as a science and technology writer and wondering why so many of my colleagues are differently anatomically-configured than I am."; "Where are the "For Him" pens? How can I embrace my masculinity, when there is no pen for me?"; "How can I extol these miraculous gifts to Womankind enough? All of my writing experiences up until now have consisted of trying to wrap my dainty lady hands around robust, manly pens and failing miserably. Their harsh, heavy colors blinded my delicate lady eyes, and their mighty weight was too tremendous for my weak lady constitution. At times, I was able to practice my literacy on my small pink laptop, but only for short periods before I was overwhelmed by the complexity of the technology." Etc.
Now, the following is just a qualitative and perhaps incorrect observation: there seems to be more opposition to female-gendered products than to male-gendered ones. Male-gendered products are seen as fun, exciting, and interesting. I'm thinking of a shop devoted exclusively to male novelty items and toys for grown men, such as a tie with a functional piano or miniature remote-controlled helicopters. The clientele there loves the toys (many of which you can try out), and never have I heard anyone say they are demeaning to men. By contrast, judging from the Amazon reviews, female-gendered products are perceived as demeaning and an insult to women.