I would be very grateful if NewAPPS readers who are philosophers could fill out the following brief, anonymous survey on journal submissions. The aim is to get a picture of what kinds of journals you submit to, especially the journals that are regarded as the top general philosophy journals. https://surveys.qualtrics.com/SE/?SID=SV_6lM1JE4Q88BruhD
Results will be posted when the data have been processed.
In December, I will be presenting at the Aesthetics in Mathematics conference in Norwich. The title of my talk is 'Beauty, explanation, and persuasion in mathematical proofs', and to be honest, at this point there is not much more to it than the title… However, the idea I will try to develop is that many, perhaps even most, of the features we associate with beauty in mathematical proofs can be subsumed under the ideal of explanatory persuasion, which I take to be the essence of mathematical proofs.
As some readers may recall, in my current research I adopt a dialogical perspective to raise a functionalist question: what is the point of mathematical proofs? Why do we bother formulating mathematical proofs at all? The general hypothesis is that most of the defining criteria for what counts as a mathematical proof – and in particular, a good mathematical proof – can be explained in terms of the (presumed) ultimate function of a mathematical proof, namely that of convincing an interlocutor that the conclusion of the proof is true (given the truth of the premises) by showing why that is the case. (See also this recent edited volume on argumentation in mathematics.) Thus, a proof seeks not only to force the interlocutor to grant the conclusion if she has granted the premises; it seeks also to reveal something about the mathematical concepts involved so that she also apprehends what makes the conclusion true – its causes, as it were. On this conception of proof, beauty may well play an important role, but this role will be subsumed under the ideal of explanatory persuasion.
The characters in Nevil Shute's On The Beach know that, barring natural disasters and other unforeseen circumstances, they will die in a few months' time--in September 1963--of radiation sickness, brought on by the thirty-seven-day thermonuclear war that has already wiped out life in the northern hemisphere. They know its painful and uncomfortable symptoms--diarrhea and vomiting--will resemble those of cholera; they have the option to commit suicide by using a pill--supplied by the government and made available at local chemists. All humans know they will die; these ones know when and how. (As John Osborne notes, "You've always known that you were going to die sometime. Well, now you know when.")
Perhaps unsurprisingly, last week, during a classroom discussion centered on Shute's novel, the following question slowly hove into view: Would you want to know the time and manner of your death? We live our lives with the knowledge of our certain death; would we want to further refine it in this fashion? Why or why not? (We could also introduce another twist by asking whether, if possessed of this knowledge with regard to someone else, we should tell them about it, without withholding any details. A variant of this situation occurs quite often, I think, in some medical contexts involving terminally ill patients and their doctors. Other twists include knowledge of the details of, not our deaths, but those of loved ones.)
The answers to this cluster of questions are likely to be quite revealing. Knowledge of the time and manner of death may permit a settling of affairs, a more directed planning of one's activities, a more systematic prioritization of one's objectives; it may induce an urgency into our lives that some may find currently lacking. It may have a calming effect on some. But it may also induce paralyzing anxiety in others; the fear of the manner of death--perhaps gruesome dismemberment for some, or brutal murder for others--may have such an effect.
Why is the raising and answering of this question a philosophical exercise? Perhaps because these answers reveal valuations crucial to the chosen path of conduct in our lives--and what could be more fundamental a philosophical question than 'What is the good life?' Perhaps because in answering a question about whether some item of knowledge is desirable or not, we may possibly articulate limits on what should be known by us--a puzzle that, in the past, often confronted those who worked on thermonuclear weapons, or as in these days, those who work on cloning technologies. Answering this question could be an introspective and retrospective exercise, forcing not just a look inwards at our beliefs and desires, but also a look backwards at the lives we have lived thus far, an act likely to be imbued with an ethical and moral assessment. Such an examination of our beliefs and our plans for our lives, and the manner in which we would choose to live them, seems a fairly fundamental philosophical activity, perhaps even of the kind that Socrates was always urging on us.
Although over half the world's population are theists (according to Pew survey results), God's existence isn't an obvious fact, not even to those who sincerely believe he exists. Or, as Keith DeRose recently put it, even if God exists, we don't know that he does. This presents a puzzle for theists: why doesn't God make his existence more unambiguously known? The problem of divine hiddenness has long been recognized by theists (for instance, Psalm 22), but only fairly recently has it become the focus of debate in philosophy of religion.
In several works, J.L. Schellenberg has argued that divine hiddenness constitutes evidence against God's existence. A simple version of this argument goes as follows (Schellenberg 1993, 83):
1. If there is a God, he is perfectly loving.
2. If a perfectly loving God exists, reasonable non-belief in the existence of God does not occur.
3. Reasonable non-belief in the existence of God does occur.
4. No perfectly loving God exists. (from 2 and 3)
5. There is no God. (from 1 and 4)
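Read propositionally, the argument is deductively valid, whatever one makes of its premises. A minimal sketch in Lean (the letters G, L, and R are my labels for the propositions, not Schellenberg's own notation):

```lean
-- Propositional sketch of Schellenberg's hiddenness argument.
-- G : there is a God; L : a perfectly loving God exists;
-- R : reasonable non-belief in the existence of God occurs.
example (G L R : Prop)
    (p1 : G → L)    -- premise 1
    (p2 : L → ¬R)   -- premise 2
    (p3 : R)        -- premise 3
    : ¬G :=
  fun g : G => p2 (p1 g) p3  -- 2 and 3 rule out L; 1 then rules out G
```

The proof term mirrors the informal derivation: premises 2 and 3 together yield step 4, and premise 1 then yields step 5 by modus tollens.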
The controversial premises are 2 and 3. Authors like Swinburne and Murray have argued against premise 2: God may have reasons to make his existence less obviously true. Their arguments state that if we knew God existed, we wouldn't be able to make morally significant choices. This is an empirical claim. Obviously, it cannot be experimentally tested directly. However, research in the cognitive science of religion (CSR) on the relationship between belief in God and morality may indicate whether or not this is a plausible claim.
How we ought to understand the terms "civility" and "collegiality" and to what extent they can be enforced as professional norms are dominating discussions in academic journalism and the academic blogosphere right now. (So much so, in fact, that it's practically impossible for me to select from the literally hundreds of recent articles/posts and link the most representative ones here.) Of course, the efficient cause of civility/collegiality debates' meteoric rise to prominence is the controversy surrounding Dr. Steven Salaita's firing (or de-hiring, depending on your read of the situation) by the University of Illinois only a month ago, but there are a host of longstanding, deeply contentious and previously seething-just-below-the-surface agendas that have been given just enough air now by the Salaita case to fan their smoldering duff into a blazing fire.
In the interest of full disclosure, I'll just note here at the start that I articulated my concerns about (and opposition to) policing norms of civility/collegiality or otherwise instituting "codes" to enforce such norms some months ago (March 2014) in a piece I co-authored with Edward Kazarian on this blog here (and reproduced on the NewAPPS site) entitled "Please do NOT revise your tone." My concern was then, as it remains still today, that instituting or policing norms of civility/collegiality is far more likely to protect objectionable behavior/speech by those who already possess the power to avoid sanction and, more importantly, is likely to further disempower those in vulnerable professional positions by effectively providing a back-door manner of sanctioning what may be their otherwise legitimately critical behaviors/speech. I'm particularly sympathetic to the recent piece "Civility is for Suckers" in Salon by David Palumbo-Liu (Stanford) who retraces the case-history of civility and free speech and concludes, rightly in my view, that "civility is in the eye of the powerful."
On Friday Sept. 5, Chancellor Dirks of UC Berkeley circulated an open statement to his campus community that sought to define the limits of appropriate debate at Berkeley. Issued as the campus approaches the 50th anniversary of the Free Speech Movement, Chancellor Dirks' statement, with its evocation of civility, echoes language recently used by the Chancellor of the University of Illinois, Urbana and the Board of Trustees of the University of Illinois (especially its Chair Christopher Kennedy) concerning the refused appointment of Steven Salaita. It also mirrors language in the effort by the University of Kansas Board of Regents to regulate social media speech and the Penn State administration's new statement on civility. Although each of these administrative statements has responded to specific local events, the repetitive invocation of "civil" and "civility" to set limits to acceptable speech bespeaks a broader and deeper challenge to intellectual freedom on college and university campuses.
The CUCFA Board has been gravely concerned about the rise of this discourse on civility in the past few months, but we never expected it to come from the Chancellor of UC Berkeley, the birthplace of the Free Speech Movement. To define “free speech and civility” as “two sides of the same coin,” and to distinguish between “free speech and political advocacy” as Chancellor Dirks does in his text, not only turns things upside down, but it does so in keeping with a relentless erosion of shared governance in the UC system, and the systemic downgrading of faculty’s rights and prerogatives. Chancellor Dirks errs when he conflates free speech and civility because, while civility and the exercise of free speech may coexist harmoniously, the right to free speech not only permits, but is designed to protect uncivil speech. Similarly, Chancellor Dirks is also wrong when he affirms that there exists a boundary between “free speech and political advocacy” because political advocacy is the apotheosis of free speech, and there is no “demagoguery” exception to the First Amendment.
I would be interested in hearing from other folks on their use of fiction in their class reading lists. Where and how did you do so? What was your experience like? Links to sample syllabi would be awesome.
Several months ago, I argued here that big data is going to make a big mess of privacy – primarily because of a distinction between “data,” understood as the effluvia of daily life, generated by such activities as moving around town or making phone calls, and “information,” which implies some sort of meaning. Privacy protects the disclosure of “information,” since this can be an intentional act; big data allows surveillance of areas traditionally considered private without any act of disclosure, since the analytic computers will take care of turning the data into information. My standard talking-point here is a recent study of Facebook likes, which determined that all sorts of non-trivial correlations could be deduced from what people “like”:
I am working on a paper now (together with my student Leon Geerdink, for a volume on the history of early analytic philosophy being edited by Chris Pincock and Sandra Lapointe) where I elaborate on a hypothesis first presented in a blog post more than three years ago: that the history of analytic philosophy can to a large extent be understood as the often uneasy interplay between Russellianism and Mooreanism, in particular with respect to their respective stances on the role of common sense for philosophical inquiry. In the first part of the paper, we present an (admittedly superficial and selective) overview of some recent debates on the role of intuitions and common sense in philosophical methodology; in the second part we discuss Moore and Russell specifically, and in the third part we discuss what I take to be another prominent instantiation of the opposition between Russellianism and Mooreanism: the debate between Carnap and Strawson on the notion of explication.
I am posting here a draft of the first part, i.e. the overview of recent debates. I would be very interested to hear what readers think of it: is it at least roughly correct, even if certainly partial and incomplete? Are the categories I carved up to make sense of these debates helpful? Can they be improved? Feedback would be most welcome!
UPDATE: I forgot to mention that a paper that has been extremely useful for me to organize my thoughts on this topic is Michael Della Rocca's 'The taming of philosophy', which gets quite extensively discussed in other sections of my paper with Leon. It is an excellent paper. However, there is still a substantive disagreement between Della Rocca and us, namely that we think there is a lot more tension between Russell and Moore on the question of common sense's role for philosophy than Della Rocca recognizes (he describes both Moore and Russell as fans of common sense).
The first reading in my Philosophical Issues in Literature class this semester--which focuses on the post-apocalyptic novel--is Nevil Shute's On The Beach. I expected moral, ethical, and political issues to be picked up, more often than not, in classroom discussions; I was pleasantly surprised to find out that the very first class meeting--on Monday--homed in on an epistemic issue, more specifically, one of normative epistemology: What should we believe? Are beliefs that comfort us--but that are otherwise without adequate evidentiary foundation--good ones? Can they ever be? Under what circumstances?
Dwight Towers, the American Navy submarine captain, is one of those unfortunates who have, thanks to nuclear war, lost their all--their homes, their families--in the northern hemisphere. In Towers' case, this means his home in Connecticut, and his wife and child. Indeed, this loss provokes his host in Australia, Peter Holmes, to take the precaution of arranging extra companionship--as distraction--for him when Holmes invites Towers to his home for dinner. But Towers does not seem to regard his family as lost. As he attends a church service, Shute grants us access to his thoughts about home:
He would be going back to them in September, home from his travels. He would see them all again in less than nine months time. They must not feel, when he rejoined them, that he was out of touch, or that he had forgotten things that were important in their lives. Junior must have grown quite a bit; kids did at that age.
Later, Shute does the same with Moira Davidson, his new-found female friend in Melbourne, who has seen the photographs of his family in his cabin:
She had known for some time that his wife and family were very real to him, more real by far than the half-life in a far corner of the world that had been forced upon him since the war. The devastation of the northern hemisphere was not real to him, as it was not real to her. He had seen nothing of the destruction of the war, as she had not; in thinking of his wife and his home it was impossible for him to visualise them in any other circumstances than those in which he had left them. He had little imagination, and that formed a solid core for his contentment in Australia.
Towers makes this explicit:
"I suppose you think I'm nuts," he said heavily. "But that's the way I see it, and I can't seem to think about it any other way."
These reflections bring us, as should be evident, to the Clifford-James debate. I have taught that debate before--in introductory philosophy classes and in philosophy of religion. The discussions--and judgments--it provokes are often quite illuminating; Monday's was no exception. The novelistic embedding of these attitudes in the context of a post-apocalyptic situation also enabled a segue into the broader ethics of 'coping strategies' and escapism, like, for instance, Moira Davidson's palliative heavy drinking.
I expect this issue to recur during this semester's discussions; I look forward to seeing how my students respond to the varied treatments of it that my reading list will afford them.
Especially given the attention we've paid to the case here (see our new tag, and also Samir's posts here and here, and Eric Schwitzgebel's here), it is important to note that Steven Salaita had a press conference today, at which he issued the following statement.
The full audio of the statement and the press conference is here. And in addition, there's a short video (embedded below) of Salaita addressing two of the core questions that have been raised in the affair, that of the nature of his engagements on Twitter and that of his approach in the classroom.
[Update: here is the full video of the event, including Salaita's full statement and the press conference.]
Finally, as many of you surely know, the Board of Trustees at UIUC is meeting on Thursday. This is a crucial day, and it is important to produce as many visible expressions of support as possible in advance of the Trustees' meeting. If you have not already done so, there is still time for you to email the Trustees. Corey Robin's post on how to do so is here. Also, John Protevi is managing the philosophers' boycott statement (see here for info on how to add your name).
One of the few productive things that came out of the recent kerfuffle about ableism was a useful discussion of where we should draw the line between what seem like acceptable uses of terms like "blind review," on the one hand, and obviously offensive terms like "spaz," on the other. And if we can find that line, why does it fall where we think it does?
I can think of three factors that might go into such a decision:
1. One is whether the term is being used pejoratively. So, calling an argument lame is bad because I am disparaging the argument. Saying "Justice is blind" is ok, because this is a positive characteristic of justice. (The first example was given by Keith DeRose on Facebook in response to Eric S's proposal along these lines. The second was Mohan M's in a comment in a thread here.)
2. A second is whether the term has a non-metaphorical use that is not related to disability. I don't think the word "blind" is first and foremost a word for a disability. It is a word for being obscured from sight. Blindfold is not referencing a disability at all. The disability "blindness" is only one source of blinding. So, on this view, it's ok to say that someone is blind to important considerations.
3. A third thing we might cite is a long history of detachment. Calling an idea "crazy" might seem ok to you because it has referred to a colloquial category for so long in the absence of referring to a clinical condition.
What do people think? Are any or all of these principled reasons one could use to distinguish offensive terms from acceptable ones?
New APPS readers probably remember Helen De Cruz's excellent post on the polarized debate surrounding evolutionary science (which was picked up by NPR), as well as Roberta Millstein's follow-up post on the perhaps equally polarized debate concerning climate change. Both posts cite the work of Dan Kahan, who has a distinct take on these issues:
"I study risk perception and science communication. I’m going to tell you what I regard as the single most consequential insight you can learn from empirical research in these fields if your goal is to promote constructive public engagement with climate science in American society. It's this: What people “believe” about global warming doesn’t reflect what they know; it expresses who they are."
I just attended a talk by Michael Ranney, who opposes Kahan's position. In Ranney's view, communicating the mechanism of global climate change is enough to change the minds of people on both sides of the political spectrum. (Check out the videos!) Ranney shows, surprisingly, that just about no one understands the mechanism of climate change (Study 1). Further, he shows that revealing that mechanism changes participants' minds about climate change (Study 2).
It seems apropos to introduce a small point of order: New APPS is a group blog, which means that there are many authors here and we all speak for ourselves--and only ourselves.
A case in point would be my strong disagreement with Jon Cogburn's post below. I find it to trade in a series of unfortunate false dichotomies: 1) between valuing or appreciating ability and seeking to avoid speaking in a way that may be hurtful or offensive to people with disabilities, or which marginalizes them; 2) between recognizing that illnesses (mental or physical), injuries, or other afflictions are real sources of suffering and seeking to avoid speaking about people suffering from such conditions in a way that marginalizes, delegitimates, hurts or excludes them; and more generally, 3) between being able to express oneself adequately or take joy in life and seeking to avoid harming others carelessly or thoughtlessly, especially where they may also be subject to various systems of marginalization, delegitimation, or exclusion.
I also disagree with Jon's suggestion that some of our former bloggers were wrong to push as hard as possible, in the profession and among those who engaged with us here, for a much greater degree of sensitivity and care with respect to how we speak about folks who have historically been marginalized, delegitimated, and excluded by the profession and by the history of 'Western' philosophy.
Increasingly, when I see someone accused of "ableism" because of some inartful (or perfectly fine) turn of expression, I become angry. It just strikes me as Forrest Gumpism. Everything is really peachy, as long as we confine our discourse to positive platitudes (and attacking those who don't so confine themselves).*
But all else being equal, it is better to be able. Speaking in ways that presuppose this is not bad, at least not bad merely in virtue of the presupposition (see also the Johnny Knoxville/Eddie Barbanell video below).
The place where my son gets occupational therapy (to deal with a bunch of sensory processing disabilities he inherited from me)** is called "Abilities." Good for them! I don't want my child to suffer as much as I do. The thought that I should feel guilty for that, or feel guilty for expressing something that presupposes it, just strikes me as insane. And I don't feel guilty for saying it strikes me as insane. To not be able to use "insane" as a derogation when it is appropriate would be to lose sight of the fact that it is horrible to be insane, which would in fact be extraordinarily cruel to the insane.
My friend Justin Isom dealt with his blindness and cancer with incredible dignity. He played a very bad hand extraordinarily well. But any pretense that it was not a bad hand would have been insulting and condescending (just as he would have taken, on the other side, excessive pity to be condescending). Justin thought it was hilarious when I first squirmed about saying "see you later" to him. When you have a blind friend you realize just how much language is seeded with visual metaphors. For the anti-ableist, we are supposed to police our speech in ways that would pretend otherwise. (And please read Neil Tennant's obituary for Justin below,*** which speaks to Justin's astonishingly rich ability (not just astonishingly rich for a blind guy, but all the more interesting and impressive since it's a blind guy talking) to describe experiences, such as a public street in Indonesia, in visual terms.)
But for the anti-ableist speech policer, we can't say that a good idea is "visionary" because that might have hurt Justin's feelings. No. I reject that. You don't speak for Justin and you have no right to present him as emotionally infantile enough to care about such things.
Following a suggestion from a friend that some of what’s come to light about the roles of the administration and the board in the Salaita affair might not be consistent with accrediting principles regarding shared governance, I decided to check out the specific rules that UIUC is supposed to be operating under.
The upshot of my survey, which I'll explain in detail below, is that UIUC is at least generally bound to respect principles of academic freedom and shared governance by their accreditation regime, and more specifically, that 1) the Board of Trustees is bound to remain free of undue influence by donors and other external parties where this is contrary to the interests of the university, and 2) that the Board and the Administration are bound to let the faculty oversee academic matters. These last two considerations seem to create a real problem given what we now know about the role of external donor pressure on the board and about the way in which the Trustees and the Chancellor seem to have avoided any consultation with the faculty in making the decision to 'dehire' Salaita. (For those who need an update, your best bet is to read Corey Robin's blog, especially this post.)
In constructing the analogy I noted that Professor F, like Salaita, had a distinguished academic record, that she worked in a field which often featured polemically charged debates, many of which for her, because of her personal standing and situation–Professor F has very likely experienced considerable sexism in her time–were likely to be charged emotionally, and that a few hyperbolic, intemperate responses, made in a medium not eminently suited to reasonable discourse, and featuring many crucial limitations in its affordance of sustained intellectual engagement, should not disqualify her from an academic appointment made on the basis of her well-established scholarship and pedagogy.
I could very easily have constructed another analogy, using an accomplished professor of African American studies, Professor B, who, stepping into the Ferguson debate after engaging, dispiritingly, time and again, in his personal and academic life, with not just the bare facts of racism in American life and the depressing facts pertaining to informal, day-to-day segregation but also with a daily dose of bad news pertaining to the fate of young black men in America, might finally experience the proverbial last straw on the camel's back, and respond with a few tweets as follows: