I am deeply grateful for the wonderful feedback I received from readers along the way (also in the form of comments and discussions over at Facebook). I could never have written this paper if it wasn't for all this help, given that much of the material falls outside the scope of my immediate expertise. So, again, thanks all!
(And now, on to start working on a new paper, on the definition of the syllogism in Aristotle, Ockham and Buridan. In fact, it will be an application of the conceptual genealogy method, so it all ties together in the end.)
If you haven't already, you should read yesterday's Stone article in the NYT by Justin McBrayer entitled "Why Our Children Don't Believe There Are Moral Facts." There, McBrayer bemoans the ubiquity of a certain configuration of the difference between "fact" and "opinion" assumed in most pre-college educational instruction (and, not insignificantly, endorsed by the Common Core curriculum). The basic presumption is that all value claims-- those that involve judgments of good and bad, right and wrong, better and worse-- are by definition "opinions" because they refer to what one "believes," in contradistinction to "facts," which are provable or disprovable, i.e., True or False. The consequence of this sort of instruction, McBrayer argues, is that our students come to us (post-secondary educators) not believing in moral facts, predisposed to reject moral realism out of hand. Though I may not be as quick to embrace the hard version of moral realism that McBrayer seems to advocate, I am deeply sympathetic with his concern. In my experience, students tend to be (what I have dubbed elsewhere on my own blog) "lazy relativists." It isn't the case, I find, that students do not believe their moral judgments are true-- far from it, in fact-- but rather that they've been trained to concede that the truth of value judgments, qua "beliefs," is not demonstrable or provable. What is worse, in my view, they've also been socially and institutionally conditioned to think that even attempting to demonstrate/prove/argue that their moral judgments are True-- and, correspondingly, that the opposites of their judgments are False-- is très gauche at best and, at worst, unforgivably impolitic.
I cringe, I wince, when I hear someone refer to me as a 'philosopher.' I never use that description for myself. Instead, I prefer locutions like "I teach philosophy at the City University of New York" or "I am a professor of philosophy." This is especially the case if someone asks me, "Are you a philosopher?". In that case, my reply begins, "Well, I am a professor of philosophy...". Once, one of my undergraduate students asked me, "Professor, what made you become a philosopher?" And I replied, "Well, I don't know if I would go so far as to call myself a philosopher, though I did get a Ph.D. in it, and...". You get the picture.
I've been asked to write a review of Williamson's brand new book Tetralogue for the Times Higher Education. Here is what I've come up with so far. Comments are very welcome, as I still have some time before submitting the final version. (For more background on the book, here is a short video where Williamson explains the project.)
Disagreement in debates and discussions is an interesting phenomenon. On the one hand, having to justify your views and opinions vis-à-vis those who disagree with you is perhaps one of the best ways to induce a critical reevaluation of these views. On the other hand, it is far from clear that a clash of views will eventually lead to a consensus where the parties come to hold better views than the ones they held before. This is one of the promises of rational discourse, but one that is all too often not kept. What to do in situations of discursive deadlock?
Timothy Williamson’s Tetralogue is precisely an investigation into the merits and limits of rational debate. Four people holding very different views sit across from each other on a train and discuss a wide range of topics, such as the existence of witchcraft, the superiority and fallibilism of scientific reasoning, whether anyone can ever be sure they really know anything, what it means for a statement to be true, and many others. As one of the most influential philosophers currently active, Williamson is well placed to give the reader an overview of some of the main debates in recent philosophy as his characters debate their views.
I now discuss the five main features of the historicist conception of philosophical concepts that motivates and justifies the method of conceptual genealogy for philosophical concepts. In a sense, this is the backbone of the paper and of the whole project, so I'm particularly interested in feedback from readers now.
We are now in a better position to describe in more detail what I take to be the five main characteristics of the historicist conception of philosophical concepts that I defend here, borrowing elements from Nietzsche’s conception of genealogy and Canguilhem’s concept-centered historical approach. In short, these are (each will be discussed in turn below):
Superimposition of layers of meaning
Multiple lines of influence
Connected to (extra- or intra-philosophical) practices and goals
This is the third installment of my series of posts with different sections of the paper on conceptual genealogy that I am working on. Part I is here; Part II.1 is here; a tentative abstract of 2 years ago, detailing the motivation for the project, is here.
I now turn to Canguilhem as an author exemplifying the kind of approach I have in mind when I speak of 'conceptual genealogy'. The main difference is that Canguilhem focused on scientific concepts (especially from biology and medicine), whereas I am articulating a methodology for the investigation of philosophical concepts (though of course, often the line between the two groups will be rather blurry). The same caveat of the previous installment on Nietzsche applies: this is a very brief and inevitably superficial discussion of Canguilhem's ideas, on which there is obviously much more to say.
The thesis of the relevance of historical analysis for philosophical theorizing rests crucially on a historicist conception of philosophical concepts, namely that they are not (or do not correspond to) a-historical essences or natural kinds. However, ‘historicism’ can have different meanings (Beiser 2011, Introduction), so let me now spell out in more detail in what sense I defend a historicist conception of philosophical concepts.
As some readers may recall (see this blog post with a tentative abstract -- almost 2 years ago!), I am working on a paper on the methodology of conceptual genealogy, which is the methodology that has thus far informed much of my work on the history and philosophy of logic. Since many people have expressed interest in this project, in the next couple of days I will post the sections of the paper that I've already written. Feedback is most welcome!
Today I post Part I, on the traditionally a-historical conception of philosophy among analytic philosophers. Tomorrow I will post Part II.1, on Nietzschean genealogy; on Thursday and Friday I will post Part II.2, on the historicity of philosophical concepts, in two installments.
Williams (2002) and Craig (2007) fittingly draw a distinction between genealogies that seek to expose the reprehensible origins of something and thereby decrease its value, and genealogies that seek to glorify their objects by exposing their ‘noble’ origins. The former are described as ‘subversive’, ‘shameful’ or ‘debunking’, while the latter may be dubbed ‘vindicatory’. (I will have much more to say on this distinction later on.) Nietzsche’s famous genealogical analysis of morality is the archetypal subversive genealogy, and it has given rise to a formidable tradition of deconstruction of concepts, values, views, beliefs etc. by the exposure of their pudenda origo, their shameful origins. As described by Srinivasan (2011, 1),
Nietzsche’s innovation prompted a huge cultural shift towards subversive genealogical thinking – what might be called the ‘Genealogical Turn’ – including Freudian analysis, 20th-century Marxism, Foucault’s historical epistemology, certain strands of postcolonial and feminist theory, and much of what goes by the label ‘postmodernism’. These ideological programmes operate by purporting to unmask the shameful origins – in violence, sexual repression, gender or racial hegemony and economic and social oppression – of our concepts, beliefs and political structures.
We continue awaiting the decision of a grand jury on whether or not to indict Darren Wilson, a white police officer, who shot and killed Michael Brown, an unarmed black teenager, exactly 15 weeks ago today on a suburban street in Ferguson, Missouri. News reporters from across the globe have been camped out in Ferguson for months, their expectation of an announcement teased and disappointed several times in the last week alone. On Monday, Missouri Governor Jay Nixon declared a state of emergency and activated the National Guard in advance of the grand jury's decision. Yesterday, President Barack Obama, in what can only be judged to be an anticipation of Wilson's non-indictment, preemptively urged protesters not to use Ferguson as an "excuse for violence." In the meantime, demonstrators of various ilks remain on standby, rallying their troops, refining their organizational strategies, painting their oppositional signs, standing vigilantly at the ready for whatever may come.
But what are we waiting for, really, as we wait for Ferguson?
Today is UNESCO’s World Philosophy Day, which is celebrated on the third Thursday of November every year. As it so happens, November 20th is also the United Nations’ Universal Children’s Day (here is a blog post I wrote for the occasion 2 years ago). I am truly delighted that these two days coincide today, as children and philosophy are two of my greatest passions. But the intimate connection between children and philosophy runs much deeper than my particular, individual passions, and so it should be celebrated.* As Wittgenstein famously (but somewhat dismissively) put it:
Philosophers are often like little children, who first scribble random lines on a piece of paper with their pencils, and now ask an adult "What is that?" (Philosophical Occasions 1912-1951)
My own favorite definition of philosophy is that philosophy is at heart the activity of asking questions about things that appear to be obvious but are not. (True enough, it also involves attempting to provide answers and giving arguments to support one’s preferred answers.) And so it is incumbent on the philosopher to ask for example ‘What is time, actually?’, while everybody else goes about their daily business taking the nature of time for granted. Indeed, philosophy is intimately connected with curiosity and inquisitiveness, and this idea famously goes back all the way to the roots of philosophy as we know it:
It is well-attested that people are heavily biased when it comes to evaluating arguments and evidence. They tend to evaluate evidence and arguments that are in line with their beliefs more favorably, and to dismiss those that are not. For instance, Taber and Lodge (2006) found that people consistently rate arguments in favor of their views on gun control and affirmative action as stronger than arguments that are incongruent with their views on these matters. They also had a condition in which people could freely pick and choose information to look at, and found that most participants actively sought out sympathetic, nonthreatening sources (e.g., those in favor of gun control were less likely to read the anti-gun-control sources that were presented to them).
Such attitudes can frequently lead to belief polarization. When we focus on just those pieces of information that confirm what we already believe, we become more and more entrenched in our earlier convictions. That's a bad state of affairs. Or is it? The argumentative theory of reasoning, put forward by Mercier and Sperber, suggests that confirmation bias and other biases aren't bugs but design features. They are bugs only if we consider reasoning to be a solitary process of a detached, Cartesian mind. Once we acknowledge that reasoning has a social function and origin, it makes sense to stick to one's guns and try to persuade the other.
Like an invisible hand, the joint effects of biases will lead to better overall beliefs in individual reasoners who engage in social reasoning: "in group settings, reasoning biases can become a positive force and contribute to a kind of division of cognitive labor" (p. 73). Several studies support this view. For instance, some studies indicate that, contrary to earlier views, people who are right are more likely to convince others in argumentative contexts than people who think they are right. In these studies, people are given a puzzle with a non-obvious solution. It turns out that those who find the right answer do a better job at convincing the others, because the arguments they can bring to the table are better. But is there any reason to assume that this finding generalizes to debates in science, politics, religion and other things we care about? It's doubtful.
How we ought to understand the terms "civility" and "collegiality" and to what extent they can be enforced as professional norms are dominating discussions in academic journalism and the academic blogosphere right now. (So much so, in fact, that it's practically impossible for me to select among the literally hundreds of recent articles/posts and provide for you links to the most representative here.) Of course, the efficient cause of civility/collegiality debates' meteoric rise to prominence is the controversy surrounding Dr. Steven Salaita's firing (or de-hiring, depending on your read of the situation) by the University of Illinois only a month ago, but there are a host of longstanding, deeply contentious and previously seething-just-below-the-surface agendas that have been given just enough air now by the Salaita case to fan their smoldering duff into a blazing fire.
In the interest of full disclosure, I'll just note here at the start that I articulated my concerns about (and opposition to) policing norms of civility/collegiality or otherwise instituting "codes" to enforce such norms some months ago (March 2014) in a piece I co-authored with Edward Kazarian on this blog here (and reproduced on the NewAPPS site) entitled "Please do NOT revise your tone." My concern was then, as it remains still today, that instituting or policing norms of civility/collegiality is far more likely to protect objectionable behavior/speech by those who already possess the power to avoid sanction and, more importantly, is likely to further disempower those in vulnerable professional positions by effectively providing a back-door manner of sanctioning what may be their otherwise legitimately critical behaviors/speech. I'm particularly sympathetic to the recent piece "Civility is for Suckers" in Salon by David Palumbo-Liu (Stanford) who retraces the case-history of civility and free speech and concludes, rightly in my view, that "civility is in the eye of the powerful."
I am working on a paper now (together with my student Leon Geerdink, for a volume on the history of early analytic philosophy being edited by Chris Pincock and Sandra Lapointe) where I elaborate on a hypothesis first presented in a blog post more than 3 years ago: that the history of analytic philosophy can to a large extent be understood as the often uneasy interplay between Russellianism and Mooreanism, in particular with respect to their stances on the role of common sense in philosophical inquiry. In the first part of the paper, we present an (admittedly superficial and selective) overview of some recent debates on the role of intuitions and common sense in philosophical methodology; in the second part we discuss Moore and Russell specifically; and in the third part we discuss what I take to be another prominent instantiation of the opposition between Russellianism and Mooreanism: the debate between Carnap and Strawson on the notion of explication.
I am posting here a draft of the first part, i.e. the overview of recent debates. I would be very interested to hear what readers think of it: is it at least roughly correct, even if certainly partial and incomplete? Are the categories I carved out to make sense of these debates helpful? Can they be improved? Feedback would be most welcome!
UPDATE: I forgot to mention that a paper that has been extremely useful for me to organize my thoughts on this topic is Michael Della Rocca's 'The taming of philosophy', which gets quite extensively discussed in other sections of my paper with Leon. It is an excellent paper. However, there is still a substantive disagreement between Della Rocca and us, namely that we think there is a lot more tension between Russell and Moore on the question of common sense's role for philosophy than Della Rocca recognizes (he describes both Moore and Russell as fans of common sense).
As someone who has spent the better part of her career researching, analyzing and teaching not only about the structure and nature of oppressive power regimes, but also better and worse ways to resist or transform such regimes, I've nevertheless been unable to settle in my own mind, to my own satisfaction, my position with regard to the moral or political value of revolutionary violence. I can say that my core moral intuitions (for whatever those are worth) definitely incline me toward favoring nonviolence as a principled ethical commitment... though, over the years, I have found those intuitive inclinations fading in both intensity and persuasiveness. As a philosopher, a citizen and a moral agent, I continue to be deeply unsettled by my own ambivalence on this matter.
First, a preliminary autobiographical anecdote: I spent a year between undergraduate and graduate school in the nonprofit sector, as the Director of the M.K. Gandhi Institute for the Study of Nonviolence. (That was back in 2000, when the Gandhi Institute was still housed at Christian Brothers University in Memphis, which is now my academic home, evidencing the kind of bizarro turn-of-fate that can only be credited to some particularly clever-- or ironically humorous-- supernatural bureaucrat.) I went to the Gandhi Institute initially because nonviolence was an all-but-unquestioned moral virtue for me at the time. But, after a few years in graduate school and consistently since, the many and varied until-then-unposed questions about the moral or political legitimacy of violence pressed their way to the fore of my mind. In roughly chronological order, I'd say that the combination of (1) my first real engagement with Frantz Fanon's argument in "Concerning Violence" (from his Wretched of the Earth), the arguments by Marx (and Marxists) in various texts advocating more or less violent revolution, and Noam Chomsky's considerations of the same, (2) my extensive research into human rights violations, transitional justice and transitional democracies, postcolonial theory, feminist theory and critical race theory, which collectively constituted the subject of my dissertation, (3) the radically dramatic shift in what counts as properly-speaking "political" and/or "revolutionary" violence in the post-9/11 world and (4) my own experiences, from near and afar, with the increasing number of (threatened, proto-, aborted, defeated and/or more-or-less successful) revolutions taking place in my adult lifetime (e.g., OWS, the Arab Spring and, much closer to home and far less violent, the current and ongoing academic revolution surrounding the Salaita case), all worked together to contribute to my rethinking the merits and demerits of violence as a way of resisting/combatting/correcting oppressive, exclusionary or otherwise unjust power regimes.
For they know they are not animals. And at the very moment when they discover their humanity, they begin to sharpen their weapons to secure its victory. --Frantz Fanon, The Wretched of the Earth

America has been and remains an apartheid state. That sad but increasingly undeniable fact was made apparent last night in Ferguson, Missouri to a group of peaceful protesters amidst tanks, deafening LRADs, a haze of tear gas and a firestorm of rubber (and real) bullets. The other tragic fact made apparent in Ferguson last night is that America is only ever a hair's-breadth away from a police state... if we understand by "police" not a regulated body of law-enforcement peacekeepers empowered to serve and protect the citizenry, but rather a heavily-armed, extra-constitutional, militarized cadre of domestic soldiers who provoke and terrorize with impunity. Much of the time, we are able to forget or ignore these unfortunate truths about contemporary America-- and by "we" I mean our elected officials, our bureaucrats and financiers, and a lot of self-delusionally "post-racial," though really white, people-- but the mean truth of gross inequality, both de facto and de jure, remains ever-present in spite of our disavowals, simmering steadily just below the allegedly free and fair democratic veneer of our polis.
Greg Howard, journalist and parrhesiastes, said it about as plainly as it can be said this past Tuesday in his article for Deadspin: America is not for black people. The truth of "American apartheid" should make us all ashamed, saddened, angry, deeply troubled as moral and political agents. And, what is more, it should frighten us all.
There's been a good bit of conversation recently about the merits and demerits of "public philosophy" and, as someone who considers herself committed to public philosophy (whatever that is), I'm always happy to stumble across a piece of remarkably insightful philosophical work in the public realm. Case in point: Robin James (Philosophy, UNC-Charlotte) posted a really fascinating and original short essay on the Cyborgology blog a couple of days ago entitled "An attempt at a precise & substantive definition of 'neoliberalism,' plus some thoughts on algorithms." There, she primarily aims to distinguish the sense in which we use the term "neoliberalism" to indicate an ideology from its use as a historical indicator, and she does so by employing some extremely helpful insights about algorithms, data analysis, the mathematics of music, harmony, and how we understand consonance and dissonance. I'm deeply sympathetic with James' underlying motivation for this piece, namely, her concern that our use of the term "neoliberalism" (or its corresponding descriptor "neoliberal") has become so ubiquitous that it is in danger of being evacuated of "precise and substantive" meaning altogether. I'm sympathetic, first, as a philosopher, for whom precise and substantive definitions are as essential as hammers and nails are to a carpenter. But secondly, and perhaps more importantly, I'm sympathetic with James' effort because, as Jacques Derrida once said, "the more confused the concept, the more it lends itself to opportunistic appropriation." Especially in the last decade or so, "neoliberalism" is perhaps the sine qua non term that has been, by both the Left and the Right, opportunistically appropriated.
James' definition of neoliberalism's ideological position ("everything in the universe works like a deregulated, competitive, financialized, capitalist market") ends up relying heavily on her distinction of neoliberalism as a particular type of ideology, i.e., one "in which epistemology and ontology collapse into one another, an epistemontology." In sum, James conjectures that neoliberal epistemontology purports to know what it knows (objects, beings, states of affairs, persons, the world) vis-à-vis "the general field of reference of economic analysis."
Carolyn Dicey Jennings has a post up discussing the unfortunate implications of criticizing a person’s views in terms of their presumed (lack of) intelligence. I agree with much of what she says there (though I don’t think the issue is exclusively or even predominantly about criticism of women and members of other disadvantaged groups, even if it impacts these groups to a greater extent). I want, however, to bring up another aspect of Brian Leiter’s criticism of Carolyn’s analysis, namely his use of the adjective ‘nonsense’, and connect it to what seems to be a pervasive but somewhat questionable practice among philosophers.
In fact, I was thinking of such a post even before reading Carolyn’s post. The idea was prompted by a conversation with Chris Menzel over lunch last week in Munich. Chris was telling me about some of his thoughts on Williamson’s Modal Logic as Metaphysics, and how Williamson describes the actualism vs. possibilism debate as ‘confused’, i.e. as something that he cannot make sense of. So technically, Williamson is (here) not accusing specific people of holding nonsensical positions, but according to him this is a nonsensical debate, as it were. (Chris Menzel is working on a paper on this material where he objects to Williamson's diagnosis of the debate.)
The notion of ‘nonsense’ has an interesting recent philosophical history, dating back at least to the Tractatus, and it was later appropriated by the Vienna Circle. (I’d be interested to hear of earlier systematic uses of the notion of nonsense for philosophical purposes.) So, to be sure, it is in itself a philosophically interesting notion, but I think it becomes problematic when 'this is nonsense!' counts as a legitimate, acceptable move in a philosophical debate.
This is the first of a three-part series featuring in-depth interviews with philosophers who have left academia. This part (part 1) focuses on their philosophical background, the jobs they have now, and why they left academia. Part 2 examines the realities of having a non-academic job and how it compares to a life in academia. In part 3, finally, the interviewees reflect on the transferable skills of a PhD in philosophy, and offer concrete advice for those who want to consider a job outside of academia.
Does having a PhD in philosophy mean your work opportunities have narrowed down to the academic job market? This assumption seems widespread; for example, a recent Guardian article declares that programs should accept fewer graduate students, as there aren’t enough academic jobs for all those PhDs. Yet academic skills are transferable: philosophy PhDs are independent thinkers who can synthesize and handle large bodies of complex information, write persuasively (as they do when applying for grants), and speak to diverse kinds of audiences.
How do those skills translate concretely into the non-academic job market? To get a clearer picture of this, I conducted interviews with 7 philosophers who work outside of academia. They work as a consultant, software engineers, an ontologist (not in the philosophical sense of the word), a television writer, a self-employed counselor, and a government statistician. Some were already actively considering non-academic employment as graduate students; for others the decision came later—for one informant, after he received tenure.
These are all success stories. They are not intended to be a balanced representation of the jobs former academics hold. Success stories can provide a counterweight to the steady drizzle of testimonies of academic disappointment, where the inability to land a tenure track position is invariably couched in terms of personal failure, uncertainty, unhappiness and financial precarity. In this first part, I focus on what kinds of jobs the respondents hold, and how they ended up in non-academic jobs in the public and private sector. Why did they leave academia? What steps did they concretely take to get their current position?
I hope this series of posts will empower philosophy PhDs who find their current situation less than ideal, especially—but not only—those in non-tenure-track positions, to take steps toward finding a nonacademic career that suits them. And even if one’s academic job is as close to a dream job as one can conceivably get, it’s still fascinating to see what a PhD in philosophy can do in the wider world.
Last year we announced the launch of Ergo, an Open Access Journal of Philosophy. Today is the grand day of the publication of Ergo’s very first issue, with four amazing papers. To commemorate this occasion, the Ergo editors asked four distinguished philosophers each to comment on one of the four papers by means of blog posts. These are:
Julia Jorati (OSU) on a paper in early modern philosophy by Paul Lodge (Oxford), at The Mod Squad.
Last week I was ‘touring’ in Scotland, first in St. Andrews for a workshop on medieval logic and metaphysics, and then in Edinburgh for a workshop on philosophical methodologies, organized by the Edinburgh Women in Philosophy Group. In the latter, I presented a paper entitled ‘Virtuous adversariality as a model for philosophical inquiry’, which grew out of a number of blog posts on the topic I’ve been writing in the recent past (here, here and here). Quoting from the abstract:
In my talk, I will develop a model for philosophical inquiry that I call 'virtuous adversariality', which is meant to be a response to critics from both sides [those who criticize and those who endorse adversariality in philosophy]. Its key feature is the idea that a certain form of adversariality, more specifically disagreement and debate, is indeed at the heart of philosophy, but that philosophical inquiry also has a strong cooperative, virtuous component which regulates and constrains the adversarial component. The main inspiration for this model comes from ancient Greek dialectic.
And so I gave my talk, and somewhat against the spirit of it, everybody in the audience seemed to agree with pretty much everything I had said – where are these opponents when you need them? But one person, Amia Srinivasan (Oxford), raised what is perhaps the most serious objection to any adversarial mode of inquiry, virtuous or not: it may well minimize our endorsement of false beliefs, but it does so at the risk of also minimizing our endorsement of true beliefs.
I have been thinking about an analogy to the Bechdel test for philosophy papers - this in the light of recent observations that women get fewer citations even if they publish in the "top" general philosophy journals (see also here). To briefly recall: a movie passes the Bechdel test if (1) there are at least 2 women in it, (2) they talk to each other, (3) about something other than a man.
A paper passes the philosophy Bechdel test if
It cites at least two female authors
At least one of these citations engages seriously with a female author's work (not just "but see" [followed by a long list of citations])
At least one of the female authors is not cited because she discusses a man (thanks to David Chalmers for suggesting #3).
The usual cautionary notes about the Bechdel test apply here too. A paper that doesn't meet these standards is not necessarily deliberately overlooking women's work (it could be ultra-short, it might be on a highly specialized topic that has no female authors in the field - is this common?), but on the whole, it seems like a good rule of thumb to make sure women authors in one's field are not implicitly overlooked when citing.
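Since the test is just a conjunction of three checks, it can be stated very compactly. Here is a minimal sketch in Python (my own illustration, not from the original proposal; the Citation fields are invented for the example, and judgments like what counts as "serious engagement" obviously cannot be automated):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Citation:
    author_is_woman: bool
    serious_engagement: bool          # more than an entry in a "but see" list
    cited_for_discussing_a_man: bool  # cited only because she discusses a man

def passes_philosophy_bechdel(citations: List[Citation]) -> bool:
    # A paper passes iff all three criteria hold of its citations of women.
    women = [c for c in citations if c.author_is_woman]
    return (
        len(women) >= 2                                           # criterion 1
        and any(c.serious_engagement for c in women)              # criterion 2
        and any(not c.cited_for_discussing_a_man for c in women)  # criterion 3
    )
```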
The news has just been released that Rev. Fred Phelps, founder and lifelong shepherd of the Westboro Baptist Church (in Topeka, Kansas) has died at the age of 84. I find it difficult, I confess, to summon the normal human compassion that usually accompanies news of another's death in this case, largely because Phelps dedicated his life to broadcasting his rejection of-- not to mention enlisting others, including children, to stage carnival-like circuses around his rejection of-- what most people would consider even the most minimally-decent exhibitions of human compassion. Fred Phelps was one of the most infamous, outrageous, dishonorable and genuinely despicable hatemongers of my generation. And, what is more, Fred Phelps' hate was as ferocious and vicious as it was blind. Through the prism of his delusional and evangelical abhorrence, the Westboro congregants en masse considered themselves justified in casting an unjustifiably wide net of Judgment. Caught in that net were many: ranging from bona fide innocents against whom no reasonable person could or ought to cast aspersions, like Matthew Shepard, to a whole host of other "collateral-damage" victims of Phelps' quasi-political positions who found themselves the inadvertent and inauspicious targets of his flock's detestation.
I say again: I find it very, very difficult to summon the normal human compassion that ought to accompany the news of Fred Phelps' passing.
Nevertheless, these are the moments when our inclination toward Schadenfreude, however deeply affirming and deeply satisfying indulging that sentiment may feel, ought on principle to be squelched.
I am currently supervising a student writing a paper on Wittgenstein’s notion of therapy as a metaphilosophical concept. The paper relies centrally on a very useful distinction discussed in N. Rescher’s 1985 book The Strife of Systems (though I do not know whether it was introduced there for the first time), namely the distinction between prescriptive vs. descriptive metaphilosophy (the topic of chap. 14 of the book).
The descriptive issue of how philosophy has been done is one of factual inquiry largely to be handled in terms of the history of the field. But the normative issue of how philosophy should be done – of significant questions, adequate solutions, and good arguments – is something very different. (Rescher 1985, 261)
Rescher goes on to argue that descriptive metaphilosophy is not part of philosophy at all; it is a branch of factual inquiry, namely the history of philosophy and perhaps its sociology. Prescriptive metaphilosophy, by contrast, is real philosophy: methodological claims on how philosophy should be done are themselves philosophical claims. (Full disclosure: I haven’t read the whole chapter, only what Google Books allows me to see…) Rescher’s position as described here seems to be quite widespread, encapsulating the ‘disdain’ with which not only descriptive metaphilosophy, but also the history of philosophy in general, is often viewed by ‘real philosophers’. And yet, this position seems to me to be fundamentally wrong (and this is also the claim that my student is defending in his paper).
(Notice that to discuss the status of descriptive metaphilosophy as philosophy, we need to go meta-metaphilosophical! It’s turtles all the way up, or down, depending on how you look at it.)
A recent interview in the Stone by Gary Gutting of Alvin Plantinga gave rise to expected criticisms, for instance by Massimo Pigliucci. The wide media exposure of Plantinga puts him forward as somehow representative of what Christian philosophers believe, and if his reasoning is not sound then, as Pigliucci puts it, “theology is in big trouble”.
For Plantinga, as is well known and again iterated in this interview, the properly functioning sensus divinitatis is sufficient for belief in God, and one need not have any explicit arguments at all for God’s existence. Nevertheless, Plantinga does say that such arguments, the “whole bunch taken together”, are “as strong as philosophical arguments ordinarily get”. In a brief digression on the problem of evil, Plantinga does not even fully acknowledge it as a problem (calling it the “so-called problem of evil”), although he acknowledges there is some strength to it. The problem is then quickly solved with a Fall theodicy, where God mends his creatures’ abuse of freedom through the horrible and humiliating death of his Son, which Plantinga thinks makes for a “magnificent possible world”.
Overall, I found the tone of this interview somewhat placid. Eleonore Stump has termed this sort of approach toward evil "the Hobbit attitude to evil" (note and update: to clarify, she does not refer to Plantinga's work in the essay; the interpretation is mine). She writes: “Some people glance into the mirror of evil and quickly look away. They take note, shake their heads sadly, and go about their business. ... Tolkien's hobbits are people like this. There is health and strength in their ability to forget the evil they have seen. Their good cheer makes them robust.” — In fairness, Plantinga did write defenses to account for the problem of evil, but in my view, he does not take it seriously enough. Eleonore Stump does not share Plantinga’s reasons for being a religious believer, nor do other philosophers of religion who have spoken out in Morris' and Kelly Clark’s collections of spiritual autobiographies of philosophers who believe. So why do Christian philosophers of religion believe that something like Christian theism is true?
I have a PhD student working on justification in epistemology. He just got started a few months ago, so for now we are sort of ‘sniffing around’ before we define a more precise focus. (He wrote his Master’s thesis on John Norton and the justification of induction.) Now, by a nice twist of fate, last week I received a Google Scholar citation alert which put us on a very promising track: Rawls’ notion of justification. (My book Formal Languages in Logic was cited in this Pitt dissertation, in the same section where there is a discussion of Rawls on justification. The dissertation, by Thomas V. Cunningham, looks very interesting by the way.)
Here is the crucial passage as quoted in the dissertation:
Justification is argument addressed to those who disagree with us, or to ourselves when we are of two minds. It presumes a clash of views between persons or within one person, and seeks to convince others, or ourselves, of the reasonableness of the principles upon which our claims and judgments are founded … justification proceeds from what all parties to the discussion hold in common … thus, mere proof is not justification … proofs become justification once the starting points are mutually recognized, or the conclusions so comprehensive and compelling as to persuade us of the soundness of the conception expressed by their premises…[C]onsensus…is the nature of justification. (Rawls, A Theory of Justice (1999 ed.), 508-509).
In my regular visits to Munich as an external member of the MCMP, a frequent item on my program is meeting with Peter Adamson, of ‘History of Philosophy without any Gaps’ fame, to talk about, well, the history of philosophy (there are still gaps to be filled!). So last week, after another lovely 2-hour session that felt like 10 minutes, Peter told me about a chapter of Julian Barnes’ A History of the World in 10 ½ Chapters, where everyone goes to heaven and gets to do whatever they want for however long they want. After some years of pleasurable life, almost everyone then gives up and wants to die ‘for real’, but a particular group of people is remarkably resilient: the philosophers, who are happy to go on discussing with each other for decades and decades. They are the ones who last the longest in heaven. (I haven’t read the book yet, but coincidentally I was reading another one of Barnes’ books.)
Coincidence or not, a day later I came across an article by Nigel Warburton, of ‘Philosophy Bites’ fame, on how philosophy is above all about conversation. (Those podcasters like their talking alright.) The article points out that, while the image of the philosopher as the lone thinker, associated with Descartes, Boethius, and Wittgenstein, is still influential, it is simply a very partial, if not entirely wrong, picture of philosophical practice. Warburton relies on John Stuart Mill to emphasize the importance of conversation and dissent for philosophical inquiry:
Yet another interesting piece in the Guardian on academia: Nobel Prize winner (in medicine) Randy Schekman declares he will no longer submit papers to ‘luxury’ journals such as Nature, Science and Cell. His main argument:
These journals aggressively curate their brands, in ways more conducive to selling subscriptions than to stimulating the most important research. Like fashion designers who create limited-edition handbags or suits, they know scarcity stokes demand, so they artificially restrict the number of papers they accept. The exclusive brands are then marketed with a gimmick called "impact factor" – a score for each journal, measuring the number of times its papers are cited by subsequent research. Better papers, the theory goes, are cited more often, so better journals boast higher scores. Yet it is a deeply flawed measure, pursuing which has become an end in itself – and is as damaging to science as the bonus culture is to banking.
This is my first foray into newAPPS waters-- and I thank the newAPPS coterie for the invitation!-- so I thought I’d start by tossing out a fairly straightforward philosophical claim: Tolerance is not a virtue.
When I say that tolerance is not a virtue, to be clear, I don’t mean to imply that tolerance is a vice. No reasonable moral agent, certainly no moral philosopher worth his or her salt, would concede that. Rather, I only want to point out that “being tolerant” requires little, if anything, more than refraining from being vicious. Not only do we not define any other virtue in this explicitly negative way, but we also don't generally ascribe any particular kind of moral credit to persons who are merely refraining from being vicious.
This week, we’ve had a new round of discussions on the ‘combative’ nature of philosophy as currently practiced and its implications, prompted by a remark in a column by Jonathan Wolff on the scarcity of women in the profession. (Recall the last wave of such discussions, then prompted by Rebecca Kukla’s 3AM interview.) Brian Leiter retorted that there’s nothing wrong with combativeness in philosophy (“Insofar as truth is at stake, combat seems the right posture!”). Chris Bertram in turn remarked that this is the case only if “there’s some good reason to believe that combat leads to truth more reliably than some alternative, more co-operative approach”, which he (apparently) does not think there is. Our own John Protevi pointed out the possible effects of individualized grading on the establishment of a competitive culture.
As I argued in a previous post on the topic some months ago, I am of the opinion that adversariality can have a productive, positive effect on philosophical inquiry, but not just any adversariality/combativeness will do. (In that post, I placed the discussion against the background of gender considerations; I will not do so here, even though there are obvious gender-related implications to be explored.) In fact, what I defend is a form of adversariality that combines opposition with a form of cooperation.
Whatever Schubert intended with this request, I cannot imagine a greater compliment from one composer to another.
Let's leave aside those professional philosophers for whom philosophy is primarily a job or an interesting diversion from which one can 'retire.' Let's imagine, rather, those ('the infected philosophers') for whom philosophy is a necessity. Such an infected philosopher would keep at philosophy to the very end. Yet, on her deathbed, would she turn to a work by somebody else (e.g., as Hume did with Lucian), would she keep teaching (Socrates), would she, in fact, try to complete her last work(s), would she seek consolation, or would she ask to re-read or hear one of her own results/works?
In a recent post I introduced a distinction between two types of pragmatic functions corresponding to two directions of fit with social norms. An invocative function was one whereby a properly performed speech act contributed to the institution of a social norm - say, giving a warranted order - while a reflective function was one that asserted the (prior) existence of a norm. Here, I first mention two other independent dimensions of variation and develop a single example that has been on my mind