Argumentation gets a bad press. It’s often portrayed as futile: people are so ridden with cognitive biases—less technically, they are pigheaded—that they barely ever change their mind, even in the face of strong arguments. In her last post, Helen points to some successes of argumentation in laboratory experiments with logical tasks, but she doubts whether these successes would extend to other domains such as politics or morality.
I think this view of argumentation is unduly pessimistic: argumentation works much better than people generally give it credit for. Moreover, even when argumentation fails to meet some standards, the problem might lie more with the standards than with argumentation. Here are some arguments in support of a view that is both more realistic in its aspirations and more optimistic in its depiction of argumentation—we’ll see if these arguments can change Helen’s mind about the power of arguments.
It is well-attested that people are heavily biased when it comes to evaluating arguments and evidence. They tend to evaluate evidence and arguments that are in line with their beliefs more favorably, and tend to dismiss those that are not. For instance, Taber and Lodge (2006) found that people consistently rate arguments in favor of their views on gun control and affirmative action as stronger than arguments that are incongruent with their views on these matters. They also had a condition in which people could freely pick and choose information to look at, and found that most participants actively sought out sympathetic, nonthreatening sources (e.g., those pro-gun control were less likely to read the anti-gun control sources that were presented to them).
Such attitudes can frequently lead to belief polarization. When we focus on just those pieces of information that confirm what we already believe, we get further and further strengthened in our earlier convictions. That's a bad state of affairs. Or is it? The argumentative theory of reasoning, put forward by Mercier and Sperber, suggests that confirmation bias and other biases aren't bugs but design features. They look like bugs only if we consider reasoning to be a solitary process of a detached, Cartesian mind. Once we acknowledge that reasoning has a social function and origin, it makes sense to stick to one's guns and try to persuade the other.
Like an invisible hand, the joint effects of biases will lead to better overall beliefs in individual reasoners who engage in social reasoning: "in group settings, reasoning biases can become a positive force and contribute to a kind of division of cognitive labor" (p. 73). Several studies support this view. For instance, some studies indicate that, contrary to earlier views, people who are right are more likely to convince others in argumentative contexts than people who merely think they are right. In these studies, people are given a puzzle with a non-obvious solution. It turns out that those who find the right answer do a better job of convincing the others, because the arguments they can bring to the table are better. But is there any reason to assume that this finding generalizes to debates in science, politics, religion, and other things we care about? It's doubtful.
This article in Aesthetics for Birds has some interesting statistics on the percentage of papers authored or co-authored by women and minorities in the top print aesthetics journals: Journal of Aesthetics and Art Criticism and British Journal of Aesthetics. About 20% of articles in these journals are written by women in the period from 2010 onwards. When we look at memberships of professional aesthetics organizations, the percentage of female aestheticians is about 32%. So that means women are underrepresented in JAAC and BJA. What can account for this disparity? JAAC keeps a record of gender and geographic location of submissions.
Sherri Irvin notes: "It is notable that over the past three years, women authors have submitted to JAAC at a rate substantially higher than the rate at which they are published in JAAC from 2010-2014, and closer to the proportion of women members in the ASA. During 2 of the last 3 years, the acceptance rate for women has been lower than for men. Though the differences seem small (only 2-3 percentage points), another way of putting them is that in 2012-3, men were 21.4% more likely than women to have their manuscripts accepted, while in 2013-4, they were 11.6% more likely." She also writes "US submissions tend to be accepted at a rate slightly over 20%, while submissions from non-English-speaking countries tend to be accepted at far lower rates".
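The arithmetic behind translating a small percentage-point gap into a relative likelihood is worth making explicit. Here is a minimal sketch, using hypothetical acceptance rates chosen purely for illustration (a 3-point absolute gap, 17% vs. 14%, happens to reproduce the 21.4% figure quoted above; these are not JAAC's actual rates):

```python
def relative_difference(rate_a: float, rate_b: float) -> float:
    """Return how much more likely (in percent) group A is than group B
    to have a manuscript accepted, given the two acceptance rates."""
    return (rate_a / rate_b - 1) * 100

# Hypothetical rates: an absolute gap of only 3 percentage points
men_rate, women_rate = 0.17, 0.14
print(f"{relative_difference(men_rate, women_rate):.1f}% more likely")
# → 21.4% more likely
```

This is why a gap that "seems small" in percentage points can look much larger when expressed as a relative difference: the smaller the base rate, the more a fixed absolute gap amounts to in relative terms.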
JAAC practices double-anonymous refereeing. I am in the statistics, since I co-authored an article that was published in JAAC in 2011. My co-author and I were very pleased with the thoroughness of our reviewer, who is one of the few experts on the aesthetics of paleolithic art. We could guess who he was, and it turned out (as he later communicated to us) he also had an inkling as to who we were. Aesthetics is a small world. The only time I reviewed for JAAC I didn't know who the author was, so I believe I reached a verdict that was unsullied by considerations of the author's identity. But was it? Thoughts about the identity of an author can play a role in one's decision even if one doesn't want them to; that is, after all, how implicit bias works.
There have been lots of discussions on the PGR (e.g., here), especially on its leader, Brian Leiter, including a poll on whether the 2014 edition should be produced. Regardless of the outcome of this, I think we can already start considering alternative ways, independent of the PGR, to provide information for prospective philosophy graduate students.
Ideally, such information should not be primarily about rankings of quality. Quality is a complex concept that is vulnerable to bias and to entrenching the status quo. We should rather provide prospective grad students with clear measures of placement rates and of the places where they could study the topic of their choice. Perhaps any type of ranking will be problematic. We could just provide descriptive info on a wide range of topics, e.g., where one can study experimental philosophy, continental French philosophy, etc. One can give that info *without* giving an overall rank of perceived quality.
The methodology by which placement rates are compiled and by which assessments of strengths within departments are made should be empirically informed by the social sciences, e.g., in the selection of the experts who make these assessments.
Collecting and disseminating this information shouldn't be in the hands of one individual but should be a shared responsibility. I originally thought it was something the APA, or perhaps a task force consisting of people from the APA, the AAP, etc., could do, but I am now not so sure whether this is a good idea. PhilPapers+ seems like a good place to host the information, especially given that prospective graduate students will already be familiar with PhilPapers.
It would be nice to expand information for prospective graduate students to departments outside the Anglophone world. There are lots of grad students outside the English-speaking world who could benefit from lists of placement records and specializations of faculty members outside the US, UK, etc.
In a recent survey, I asked philosophers about their submissions to journals, to get a sense of what journals people submit to and also what factors might influence their decisions on where to submit papers. Specifically, I wanted to know how frequently people submit their work to the top 5 journals in philosophy, which are usually regarded (according to polls) as the best journals in the field: Philosophical Review, Journal of Philosophy, Mind, Noûs and Philosophy and Phenomenological Research. Increasingly, publications in these journals are regarded as a marker of excellence.
However, there are several hurdles to getting published in the top 5. The acceptance rates are forbidding: I don’t have exact numbers, but some journals in the top 20 have published acceptance rates as low as 5% (e.g., Australasian Journal of Philosophy, Canadian Journal of Philosophy). Presumably, the acceptance rates in the top 5 are lower still, making them more difficult to get into than Science or Nature. Also, review times at some of these journals tend to be longer than the standard 3 months. Those journals that are quicker close submissions for half the year, and unfortunately, they do so concurrently (otherwise, as a senior philosopher pointed out to me, they wouldn’t have the lower submission rates they are aiming for).
251 philosophers completed the survey. Below the fold is a summary of some results. I asked respondents to say how many papers they submitted to top-5 journals and any refereed journal over the past year (i.e., since September 2013).
Although over half the world's population are theists (according to Pew survey results), God's existence isn't an obvious fact, not even to those who sincerely believe he exists. To put it differently, as Keith DeRose recently put it, even if God exists, we don't know that he does. This presents a puzzle for theists: why doesn't God make his existence more unambiguously known? The problem of divine hiddenness has long been recognized by theists (for instance, Psalm 22), but only fairly recently has it become the focus of debate in philosophy of religion.
In several works, J.L. Schellenberg has argued that divine hiddenness constitutes evidence against God's existence. A simple version of this argument goes as follows (Schellenberg 1993, 83):
1. If there is a God, he is perfectly loving.
2. If a perfectly loving God exists, reasonable non-belief in the existence of God does not occur.
3. Reasonable non-belief in the existence of God does occur.
4. No perfectly loving God exists. (from 2 and 3)
5. There is no God. (from 1 and 4)
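The argument's validity, as opposed to the truth of its premises, can be checked mechanically. Here is a minimal sketch in Lean, with the three claims as bare propositions (the letters are my own stand-ins, not Schellenberg's notation):

```lean
-- G : there is a God
-- L : a perfectly loving God exists
-- R : reasonable non-belief in the existence of God occurs
example (G L R : Prop)
    (p1 : G → L)       -- premise 1
    (p2 : L → ¬R)      -- premise 2
    (p3 : R)           -- premise 3
    : ¬G :=
  fun hG => p2 (p1 hG) p3   -- two applications of modus tollens
```

Since the logic is airtight, any resistance to the conclusion has to be directed at the premises, which is exactly where the debate has focused.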
The controversial premises are 2 and 3. Authors like Swinburne and Murray have argued against premise 2: God may have reasons to make his existence less obviously true. Their arguments state that if we knew God existed, we wouldn't be able to make morally significant choices. This is an empirical claim. Obviously, it cannot be experimentally tested directly. However, research in the cognitive science of religion (CSR) on the relationship between belief in God and morality may indicate whether or not this is a plausible claim.
New APPS readers probably remember Helen De Cruz's excellent post on the polarized debate surrounding evolutionary science (which was picked up by NPR), as well as Roberta Millstein's follow-up post on the perhaps equally polarized debate concerning climate change. Both posts cite the work of Dan Kahan, who has a distinct take on these issues:
"I study risk perception and science communication. I’m going to tell you what I regard as the single most consequential insight you can learn from empirical research in these fields if your goal is to promote constructive public engagement with climate science in American society. It's this: What people “believe” about global warming doesn’t reflect what they know; it expresses who they are."
I just attended a talk by Michael Ranney, who opposes Kahan's position. In Ranney's view, communicating the mechanism of global climate change is enough to change the minds of people on both sides of the political spectrum. (Check out the videos!) Ranney shows, surprisingly, that just about no one understands the mechanism of climate change (Study 1). Further, he shows that revealing that mechanism changes participants' minds about climate change (Study 2).
Google the keywords “academic” and “mother” or “motherhood”, and you will find various websites with discussions about the baby penalty in academia for women. Representative of this literature is an influential Slate article by Mary Ann Mason, who writes “For men, having children is a career advantage; for women, it is a career killer. And women who do advance through the faculty ranks do so at a high price. They are far less likely to be married with children.”
As an untenured mother of two children, I find these reports unsettling. When my second child was born, several women who are junior academics approached me to ask me if it was doable, or how I managed to get anything done. They wanted children but were scared that it would kill their careers. How do children impact one’s work? This got me thinking that it would be good to hear the stories of philosophers who did manage to combine a flourishing academic career with parenthood.
To this end, I interviewed seven tenured professors who are parents. Six of them are mothers, but I decided to also include an involved father. I aimed to include some diversity of circumstance. Some of my interviewees have very young children, whereas one respondent has grown children; she had them at a time when combining motherhood with a professorship was even less common than it is now. One of my interviewees is a single mother, who had her child in graduate school. One went to a first-round APA interview when her son was six weeks old, with a sitter in the hotel room. Two of my interviewees have special-needs children, a fact that shaped their academic careers in important ways. I aimed also for geographic diversity—my respondents come from the US, the UK, Canada and The Netherlands—since countries and institutional cultures differ in the formal and informal support parents receive, such as paid leave and childcare.
I have long believed the conventional wisdom that women are not proportionately distributed through every subfield in philosophy. In my field of theoretical ethics, in particular, it is often said that more women in philosophy seem to be found here than are in the profession more widely.
I believe it a little less today, though it may still turn out to be true. Trent University student Cole Murdoch undertook a short summer research project for me, looking at the ratio of male to female authors in two leading journals of moral philosophy.
Although we've still got data to wade through, it is interesting to me that, looking at a five-year window of publications in Ethics and Journal of Moral Philosophy, the student did not find that women-authored articles appeared in numbers much greater than our numbers in the profession. I tasked him with this merely to find out who and what the journals in my field publish, for self-interested reasons, but I also expected that, as we regularly hear that women in philosophy disproportionately specialize in ethics, he'd find much more parity in JMP and Ethics, or at least higher numbers of women's names than one might find in the profession. [see below for a report of the analysis]
How can we combine the economic necessities of work with caring for infants? This dilemma recurs across cultures, and western culture is no exception. In a series of interviews with professors who are mothers (which I hope to put on NewApps by the end of this month), one of my respondents, who has grown children remarked about their preschool years:
"I was completely stressed out. It wasn’t just that childcare was expensive—and even with two salaries it was a stretch: It was insecure. If a childcare provider decided to quit, I would be left in the lurch; if my kid wet his pants once too often he’d be kicked out of pre-school [which had strict rules about children being toilet-trained] and I’d have to make other arrangements."
This concern resonates with many parents. It is especially acute among low-income, single mothers who struggle to find last-minute childcare to fit their employers' unpredictable scheduling. Also symptomatic are heart-wrenching stories about a woman whose children were taken away because she failed to find childcare when she had to go on a job interview and left them in a car, or a woman who was arrested for allowing her nine-year-old daughter to play in a park while she worked in a nearby fast food restaurant.
Can we learn anything from how other cultures solve the working mother's dilemma?
Thomas Reid argued that the human default trust in testimony is a gift of nature, which is sustained by two principles that "tally with each other", the propensity to speak the truth, and the tendency to trust what others tell us. Interestingly, he observed an embodied aspect of this trust:
It is the intention of nature, that we should be carried in arms before we are able to walk upon our legs; and it is likewise the intention of nature, that our belief should be guided by the authority and reason of others, before it can be guided by our own reason. The weakness of the infant, and the natural affection of the mother, plainly indicate the former; and the natural credulity of youth, and authority of age, as plainly indicate the latter. The infant, by proper nursing and care, acquires strength to walk without support (1764, Inquiry into the Human Mind, chapt VI, Of Seeing)
Reid's observations point to an intriguing possibility: to what extent is social cognition, such as trust in testimony, influenced by our bodily position, in particular the position we have as helpless infants? The Japanese primatologist Tetsuro Matsuzawa has argued that the supine position (that is, position on the back) of human newborns, has been a decisive factor in the evolution of human social cognition.
Humans and chimpanzees differ quite markedly in how much they trust others. For instance, although both chimpanzees and humans imitate, human children are more prone to overimitation than juvenile chimps: the children, but not the chimps, indiscriminately follow actions by an adult that are redundant to obtaining a desired result (see e.g., here).
In order to examine and address issues of participation faced by minority and underrepresented groups in academic philosophy (e.g. gender, race, native-language, sexual orientation, class, and disability minorities), a number of UK departments have recently started to build a UK network of chapters of MAP ( www.mapforthegap.com ).
With 24 active chapters to date, MAP (Minorities And Philosophy) is already a successful and widespread organization in the US and elsewhere. If you would like to have a MAP chapter at your own institution, this Call For Collaborators is for you. MAP chapters are generally run by graduate students (typically 3 or 4 per department), with some help from academic staff members; undergraduate participation is also encouraged.
At this stage we would be happy to hear especially from graduate students (groups or individuals) at UK Philosophy departments as well as from UK Philosophy academic staff who would like to coordinate graduate student interest in their institutions. Please contact Filippo Contesi (filippo.contesi at gmail dot com).
To my knowledge, full book manuscripts are never reviewed anonymously. Given that the double anonymity of peer review is implemented to decrease biases, and presumably, thereby increase the focus on the quality of the writing, this is puzzling. David Chalmers wrote, in a very helpful comment on how to publish a book "Most book refereeing is not blind, unlike journal refereeing. And when what's being reviewed is a proposal rather than a full manuscript, reputation of the author make a huge difference in reviewers' and editors' confidence that the proposal will be fleshed out well to a book."
While I can see that the reputation or renown of an author can relevantly play a role at the proposal stage, in assessing the competence of the prospective author to write a full manuscript, I don't see why it should play a role when the full manuscript is reviewed. This will inevitably happen when review of full manuscripts is non-anonymous. It would be hard not to be influenced if the author of the manuscript one is reviewing works at a high-ranking institution, is very senior, and already has an excellent track record (I declined to review a book for a major press for this reason), or conversely, if the author is relatively junior, working at a teaching-focused or obscure place.
This is part 3 of a 3-part series of interviews with philosophers who left academia right after grad school or, in some cases, later. See part 1 for the jobs they held, and part 2 for how they evaluate their jobs. This part will focus on the transferable skills of academics.
The burning question for academics who want to leave academia is: what transferable skills can they bring to the private sector? The responses of the seven people I interviewed clearly indicate that the skills that transfer are broad and fairly high-level.
This is part 2 of a 3-part series of interviews I conducted with seven philosophers who went on to a non-academic career after obtaining their PhDs. For more background on these philosophers, the work they currently do, and the reasons they left academia, see part 1: How and Why do they end up there? This part will focus on the realities of having a non-academic job.
One of the main attractions of an academic job, especially that of a tenured professor, is the autonomy (intellectual and in terms of time management) it provides. However, there are downsides as well: the increasing pressure to churn out publications (which some of the respondents already alluded to in part 1), lack of support, and isolation lead to mental health problems in some academics. So how do philosophers with experience both inside and outside academia evaluate the work atmosphere?
This is the first of a three-part series featuring in-depth interviews with philosophers who have left academia. This part (part 1) focuses on their philosophical background, the jobs they have now, and why they left academia. Part 2 examines the realities of having a non-academic job and how it compares to a life in academia. In part 3, finally, the interviewees reflect on the transferable skills of a PhD in philosophy, and offer concrete advice to those considering a job outside of academia.
Does having a PhD in philosophy mean your work opportunities have narrowed down to the academic job market? This assumption seems widespread; for example, a recent Guardian article declares that programs should accept fewer graduate students, as there aren’t enough academic jobs for all those PhDs. Yet academic skills are transferable: philosophy PhDs are independent thinkers who can synthesize and handle large bodies of complex information, write persuasively, as they do when applying for grants, and speak to diverse kinds of audiences.
How do those skills translate concretely into the non-academic job market? To get a clearer picture of this, I conducted interviews with 7 philosophers who work outside of academia. They are working as a consultant, software engineers, an ontologist (not in the philosophical sense of ontology), a television writer, a self-employed counselor, and a government statistician. Some were already actively considering non-academic employment as graduate students; for others the decision came later—for one informant, after he received tenure.
These are all success stories. They are not intended to be a balanced representation of the jobs former academics hold. Success stories can provide a counterweight to the steady drizzle of testimonies of academic disappointment, where the inability to land a tenure track position is invariably couched in terms of personal failure, uncertainty, unhappiness and financial precarity. In this first part, I focus on what kinds of jobs the respondents hold, and how they ended up in non-academic jobs in the public and private sector. Why did they leave academia? What steps did they concretely take to get their current position?
I hope this series of posts will empower philosophy PhDs who find their current situation less than ideal, especially—but not only—those in non-tenure-track positions, to take steps toward a nonacademic career that suits them. And even if one’s academic job is as close to a dream job as one can conceivably get, it’s still fascinating to see what a PhD in philosophy can do in the wider world.
There are several variants of a list in circulation of skills our grandparents had but the majority of us don't, for instance, 7 skills your grandparents had and you don't. Examples include ironing really well, sewing, knitting, crocheting, canning, cooking a meal from scratch, writing in beautiful longhand, basic DIY skills... What have the majority of us lost by no longer having these skills, which I'll call grandparent skills for short?
As Lizzie Fricker argued today in a workshop held in honor of Charlotte Coursier, trust in other people is common and is a pervasive element of human life. We defer to the knowledge of others (testimonial dependence) and to their expertise (practical dependence): we rely on experts to tell us what the weather will be like, to fix our car, to give us a new haircut. Often, this deference is shallow and dispensable (we could in principle do it ourselves), but it can also be deep and ineluctable, as when we rely on electricians and other specialists.
This division of cognitive labor provides us with enormous gains, but does an increased reliance on the testimony and expertise of others also come with costs? Fricker feels we do not reflect enough on this question, especially as the extent of both testimonial and practical dependence seems to have increased dramatically in recent years. People increasingly rely on Google rather than internally stored semantic knowledge, and they increasingly outsource practical skills – navigation with maps, dead reckoning, and compasses is replaced by user-friendly technologies like GPS devices.
I live very close to Port Meadow, one of the largest meadows of open common land in the UK, already in existence in the 10th century and mentioned in the Domesday Book in 1086. I saw my first-ever live, wild oriole there. The land has never been ploughed, so it is possible to discern outlines of older archaeological remains, some going back to the Bronze Age. The consistent management of the land makes the changes predictable: it turns into a lake in winter, is sprinkled with buttercups this time of year (see pictures below the fold - both are taken at about the same place, but one in May and the other in November), and looks mysterious and misty in the fall. Whenever I walk on Port Meadow I take my camera, anxious to preserve any beautiful view that falls on my retina, to keep it for future memories. And, like many other parents, I take dozens of pictures of my growing children. Recently, I saw an NPR piece (no author given) that took issue with this tendency to want to preserve pictures for future memory.
The article launches a two-pronged attack against pictures. First, by worrying about capturing the moment, we lose the transience and beauty of the moment and enjoy it less. Second, the article cites psychological evidence that shows that people actually remember fewer objects during a museum visit if they were allowed to take photos of them, compared to when they only were allowed to observe them. The phenomenon is known as the photo-taking-impairment effect. Linda Henkel, who discovered the effect, says: "Any time…we count on these external memory devices, we're taking away from the kind of mental cognitive processing that might help us actually remember that stuff on our own."
Helen De Cruz has some excellent suggestions for how to talk to creationists given that neither debate nor denouncement are likely to be productive. She describes the way in which a religious person who is not a creationist can speak to another religious person who is a creationist, e.g., by pointing out that Biblical literalism is a recently emerged approach, one that may be impossible to apply consistently, and for that reason among others it may not be thoroughly used by anyone.
This article by Dan Kahan suggests that disbelief in human-caused climate change is like belief in creationism in this respect: What people "believe" about each doesn't reflect what they know, but rather expresses who they are. This supports the thesis that providing evidence for creationism isn't likely to change minds and that providing evidence for climate change isn't likely to change minds, either.
But what is the climate change equivalent, where we speak to people from their own perspective as Helen proposes that we do for religious people who are creationists?
A friend of mine is doing her DPhil in Oxford. She's American, and out of term she goes back to her home in middle America. She recently went to see the newly refurbished museum in her home town. When she was looking at the displays on human evolution, a museum guard, who had been observing her, suddenly said "So, what side are you on: the Bible or evolution?" Whereupon my friend replied "What do you mean what side am I on? This is not a football game, you know".
I am deeply troubled by the incipient creationism, which treats biblical literalism as a serious intellectual contender to scientific inquiry. I want my children to grow up with normal biology textbooks, not with Of Pandas and People. If creationists win their lobbying efforts to make creationism mainstream in schools and the public sphere, that is a loss for everyone (including the creationists). Debates don't seem to do any instrumental good. If we are not going to fight creationism through debates, how can we - as public intellectuals - ensure that creationism doesn't encroach even further upon our schools and public life?
On the basis of this year’s partial hiring data, Marcus Arvan notes that the majority of tenure-track hires (a whopping 88%) are of people from Leiter-ranked programs. Only 12% of hires are of people from unranked programs. Also, 37% of all tenure-track hires come from just 5 schools, the Leiter top 5 list. This is amazing if one ponders it, and one may wonder about the direction philosophy is going in if most of its future tenured workforce comes from just a few select programs.
This has caused a lot of debate: why would people go to grad school in unranked programs at all? Why attend an unranked program if you can’t get into a highly ranked one? But what is often overlooked are the many factors, such as class and ethnic background, that may contribute to someone not getting into (or, as I will examine in more detail below, not even applying to) top programs. In fact, going for pedigree may be a particularly effective way to screen out people who come from poorer backgrounds and from different ethnicities.
A few weeks ago I had a post on different ways of counting infinities; the main point was that two of the basic principles that hold for counting finite collections cannot both be transferred over to the case of measuring infinite collections. Now, as a matter of fact I am equally (if not more) interested in the question of counting finite collections at the most basic level, both from the point of view of the foundations of mathematics (‘but what are numbers?’) and from the point of view of how numerical cognition emerges in humans. In fact, to me, these two questions are deeply related.
In a lecture I’ve given a couple of times to non-academic, non-philosophical audiences (so-called ‘outreach lectures’) called ‘What are numbers for people who do not count?’, my starting point is the classic Dedekindian question, ‘What are numbers?’ But instead of going metaphysical, I examine people’s actual counting habits (including among cultures that have very few number words). The idea is that Benacerraf’s (1973) challenge of how we can have epistemic access to these elusive entities, numbers, should be addressed in an empirically informed way, including data from developmental psychology and from anthropological studies (among others). There is a sense in which all there is to explain is the socially enforced practice of counting, which then gives rise to basic arithmetic (from there on, to the rest of mathematics). And here again, Wittgenstein was on the right track with the following observation in the Remarks on the Foundations of Mathematics:
This is how our children learn sums; for one makes them put down three beans and then another three beans and then count what is there. If the result at one time were 5, at another 7 (say because, as we should now say, one sometimes got added, and one sometimes vanished of itself), then the first thing we said would be that beans were no good for teaching sums. But if the same thing happened with sticks, fingers, lines and most other things, that would be the end of all sums.
“But shouldn’t we then still have 2 + 2 = 4?” – This sentence would have become unusable. (RFM, § 37)
I have been thinking about an analogue of the Bechdel test for philosophy papers, in light of recent observations that women get fewer citations even when they publish in the "top" general philosophy journals (see also here). To briefly recall: a movie passes the Bechdel test if (1) there are at least two women in it, (2) they talk to each other, (3) about something other than a man.
A paper passes the philosophy Bechdel test if
(1) it cites at least two female authors,
(2) at least one of these citations engages seriously with a female author's work (not just "but see" followed by a long list of citations),
(3) at least one of the female authors is not cited because she discusses a man (thanks to David Chalmers for suggesting #3).
The usual cautionary notes about the Bechdel test apply here too. A paper that doesn't meet these standards is not necessarily deliberately overlooking women's work (it could be ultra-short, or it might be on a highly specialized topic with no female authors in the field, though how common is that?), but on the whole, the test seems like a good rule of thumb for making sure that women authors in one's field are not implicitly overlooked when citing.
In philosophy of religion, realist theism is the dominant outlook: belief in God is similar to belief in other real things (or supposedly real things) like quarks or oxygen. There is a rather triumphalist narrative about the resurgence of realist theism since the demise of logical positivism (see, for instance, Plantinga's advice to Christian philosophers). When logical positivism and its verifiability criterion held sway, philosophers were dissuaded from talking about God in realist terms: religious beliefs were not just false, but meaningless. With the demise of logical positivism, however, theists could again defend realist positions, using a variety of sophisticated arguments.
Nevertheless, the question is whether theists in philosophy of religion are not conceding too much to atheists by talking about theism mainly in terms of beliefs. To ignore practice is to ignore a large part of the religious experience, and of what makes it meaningful to the theist. Such an exclusive focus can indeed be alienating, as it seems to suggest that theists believe a whole bunch of ideas that are wildly implausible, e.g., that a man was resurrected from the dead, or was born of a virgin. This picture of religious life as believing in a set of strange propositions is, as Kvanvig memorably put it, one that most theists will not recognize themselves in:
I hardly recognize this picture of religious faith and religious life, except in the sense that one can cease to be surprised or shocked by the neighbor who jumps naked on his trampoline after having seen it for years.
This is not to deny that many theists do believe these things, even in a literal sense. But without looking at the larger picture of practices that help to instil and maintain these beliefs, our epistemology of religion remains woefully incomplete.
It is therefore refreshing to read the recent interview in The Stone with philosopher Howard Wettstein, who, coming from a Jewish background, emphasizes the practice-based aspects of a religious lifestyle. Following Maimonides, he argues that "existence" is the wrong idea for God, and that instead "the real question is one's relation to God, the role God plays in one’s life, the character of one’s spiritual life."
In the recent Mind & Language workshop on the cognitive science of religion, Frank Keil presented an intriguing paper entitled "Order, Order Everywhere and Not an Agent to Think: The Cognitive Compulsion to Make the Argument from Design." Keil does not believe the argument from design is inevitable. (I've argued elsewhere that while teleological reasoning and creationism are common, arguing for the existence of God on the basis of perceived design is rare; it typically happens only when plausible non-theistic worldviews are available.)
Rather, Keil argues that from a very early age, humans can recognize order, and that they prefer agents as causes of order. Taken together, these capacities form the cognitive basis for making the argument from design (AFD). (For similar proposals, see here and here.) He proposes two very intriguing puzzles, and I'm wondering what NewApps readers think:
Some forms of orderliness give us a sense of design, others do not. What kinds of order give rise to an inference to design, or a designer?
Babies already seem to recognize ordered states from disordered states. How do they do it? What is it they recognize?
As teachers, mentors and colleagues, we, professional philosophers, take our tasks of teaching, research, and service to the profession very seriously. We want to create a supportive environment where fellow faculty members and students feel safe and where their concerns are heard and addressed.
In light of recent events at more than one university, we the undersigned hereby petition the Board of Officers of the American Philosophical Association to produce, by one means or another, a code of conduct and a statement of professional ethics for the academic discipline of philosophy. We particularly urge past presidents of each division of the APA to sign this petition.
In a few months, my son will get the MMR vaccine. I count myself very fortunate to live in a place and time when this amazing protection against measles, mumps, and rubella is made available for free, and I will of course have him vaccinated. When I had my oldest child vaccinated, nearly 10 years ago, there was (at least where I lived, in Belgium) no vaccine debate. I was dimly aware that there were some very religious people who refused vaccines, but they were so clearly an outgroup that people did not seriously consider them or their arguments. Not vaccinating didn't even seem like a live option to me. Now, fast-forward to the post-Wakefield UK…
In Louise Antony’s thought-provoking interview, Gary Gutting asked her about the rationality of her atheism were she confronted with a theist who is an epistemic peer: someone who is equally intelligent, who knows the arguments for and against theism, and so on. This was her response:
"How could two epistemic peers — two equally rational, equally well-informed thinkers — fail to converge on the same opinions? But it is not a problem in the real world. In the real world, there are no epistemic peers — no matter how similar our experiences and our psychological capacities, no two of us are exactly alike, and any difference in either of these respects can be rationally relevant to what we believe… The whole notion of epistemic peers belongs only to the abstract study of knowledge, and has no role to play in real life."
I disagree with Antony’s analysis: I think the criteria for epistemic peerage can be considerably loosened. I do agree with her that the notion as it is outlined in epistemology, in terms of equal access to evidence, cognitive equality, etc., is quite stringent, and indeed very rare in real life. Perhaps two graduate students, trained in the same department with the same advisor and the same specialization, and who are equally smart, would count as epistemic peers with respect to that specialization. However, our philosophical concept of an epistemic peer should not be drawn up a priori, but should be informed by how the concept is used in everyday practices: in forensic research, by two doctors or midwives discussing a patient’s circumstances, or by two scholars who disagree about a key issue in their discipline. Indeed, the idea of an epistemic peer is thoroughly entrenched in scientific research, for instance in peer review and open peer commentary. If the notion of “epistemic peer” does not reflect this practice, it is not a sound philosophical notion, and would need to be replaced.
Recently I read the following story on What is it like to be a woman in philosophy?
The poster says her partner thought the mother/daughter relationship was not a topic of meaningful or worthy philosophical investigation. She writes: “It feels like I have to defend why the female experience is worthy of philosophical analysis. It feels like I am not taken seriously the moment I talk about what I want to talk about. It feels like I need to transform my thoughts into useless philosophical jargon. It feels like my relationship has tension now, because his words hurt my self-perception. It makes me second-guess my recent applications to graduate programs. It feels like I am not a philosopher–like my thoughts, feminine, worthless–will be forever excluded from the realm of the ‘lofty, the existential, the philosophical.’”
I am sure that this perspective is not unique: somehow topics like mother-daughter relationships, motherhood, and other aspects of the female experience are deemed unworthy of philosophical investigation. Yet what recent philosophical essay has received as much mainstream attention as Laurie Paul’s paper on deciding to have a child? And there are many other examples. One of my personal favorite examples is Rebecca Kukla's paper on ethics and advocacy in breastfeeding campaigns. Given the solid scientific evidence for the benefits of breastfeeding, and the tremendous pressure women experience to breastfeed (even while still pregnant), this is surely an important topic, philosophically and otherwise.
Massimo Pigliucci has written an excellent piece criticizing Plantinga’s theistic arguments, recounted recently in an interview with Gary Gutting on the New York Times “Stone” blog. (See also Helen De Cruz's discussion.) Plantinga’s belief rests, according to him, not on argument but on “experience.” We have an inborn inclination to believe in God, and like perceptual experience, this is self-validating. Theism doesn’t rest, for example, on inference to the best explanation. Denying God because science explains so much of what was once attributed to God is like denying the Moon because it is no longer needed to explain lunacy.
Fair enough. I won’t venture to oppose an argument that is credible only if you already believe its conclusion. But what of Plantinga’s arguments against atheism? Here is one that will be familiar to most readers. Suppose that materialism and evolution are true. It follows (for present purposes, never mind how) that our belief-producing processes will be imperfectly reliable. Given that we have hundreds of independent beliefs, it’s virtually certain that some will be false. This means that our “overall reliability,” i.e. the probability that we have no false beliefs, is “exceedingly low.” “If you accept both materialism and evolution, you have good reason to believe that your belief-producing faculties are not reliable.”
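The arithmetic behind that "exceedingly low" overall reliability is easy to make concrete. Here is a minimal sketch in Python; the 99% per-belief reliability and the belief counts are my own illustrative assumptions, not figures Plantinga gives:

```python
# Hypothetical illustration: even a highly reliable belief-forming
# process leaves only a tiny probability of holding *no* false
# beliefs, once many independent beliefs are in play.

def overall_reliability(p_true: float, n_beliefs: int) -> float:
    """Probability that all n independent beliefs are true,
    if each is true with probability p_true."""
    return p_true ** n_beliefs

# With 99% reliability per belief:
print(overall_reliability(0.99, 100))   # ~0.37
print(overall_reliability(0.99, 1000))  # ~0.00004
```

So on these made-up numbers, a thinker with a thousand independent beliefs, each 99% likely to be true, has only about a 0.004% chance of being entirely error-free; that is the sense in which "overall reliability" is exceedingly low even when each faculty is individually quite reliable.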