Two points are relevant here.
As regular readers of this blog know, I hold that the claim that 'analytical philosophy is clear' is at best a useful regulative ideal by which we delude ourselves in order to promote group cohesion and at worst a barrier to self-understanding. Of course, most of the time it is neither; we use it to kick down at continentals and litcritters in order to justify our ongoing ignorance of alternative points of view. Now consider the opening lines of this book review:
In the decades following the publication of Intention, readers saw Anscombe's philosophy of action largely through a Davidsonian lens. Davidson's selective reconstruction was more accessible and less Wittgensteinian than the original. It also encouraged the hope of absorbing Anscombe's insights within a comfortable causalism about the mental. This hope could be sustained as long as relatively few philosophers made a serious study of Anscombe's book.
As the present volume shows, those days are over. We now have a critical mass of authors with the scholarly skill and the philosophical acumen to put us in direct contact with Intention.
Anscombe is, of course, a paradigmatic analytical philosopher. The very idea that one needs "scholarly skill" to read an analytical philosopher simply does not fit our self-conception. Unlike, say, Whitehead, who is also notoriously difficult and who has been written out of our tradition and assimilated by post-postmodern Continentals to theirs (recall this), Anscombe has not been dismissed (yet). If you want to see how such dismissal works in practice, one can do worse than to start with Davidson's assertion that in Whitehead's class, "Truth, or even serious argument, was irrelevant." (p. 14; I thank Stefan Koller for calling my attention to the passage.)
First, we need to note the rhetorical ploy at work in this review. There is a contrast between "accessible" and "comfortable," on the one hand, and "serious study," on the other. It is strongly implied that the Davidsonian reading isn't just mistaken; it isn't serious (by being selective). Second, without any further evidence it is baldly asserted that in earlier times there was little such "serious study" of Anscombe. Now according to Google Scholar, Intention has been cited over 2100 times (presumably not all of these subsequent to the volume under review). Anscombe's book has been continuously discussed, in fact, by nearly all the leading lights of the tradition. This "no serious study before us" ploy is as insipid as the Vlastos-move (i.e., until me nobody has understood Plato, so we can ignore everybody else), or the varieties of Spinozism-without-Spinoza move promoted by French scholars. Such ploys make it impossible to learn from earlier readers.
Note how both wrestlers break kayfabe (the sacred code of wrestling where one never admits that anything is scripted) at 1:38, just to twist the knife in Glenn Beck. I love how they condescendingly explain what a "promo" is. Then the way they effortlessly go back into character at 4:46 is wonderful.
Beck didn't show up to Raw, but WWE did broadcast a reporter failing to get an interview with him at his headquarters. Not since Dusty Rhodes have I seen left-wing positions so clearly paired with stereotypical professional wrestling virtues. Given wrestling's history as a cultural bellwether, this strikes me as a very good thing.
Pornography has been a topic of interest to feminist philosophers and feminists in general for quite some time. (It has also been an important topic in aesthetics and philosophy of art, but here I will not engage with this literature at all.) I think it is fair to say that most feminists (philosophers or otherwise) tend to be critical of pornography and to view it as yet another form of oppression of women, for a variety of reasons. (For an overview, see Watson’s 2010 review article in Philosophy Compass.) But at least some feminists (e.g. Betty Dodson) have defended more favorable views of pornography, for example by recognizing that there is wide diversity under the general heading of pornography. Some say that there is no such thing as Pornography with capital P, but rather a range of pornographies, different instantiations of the general idea.
Let me state from the outset that I (feminist or not) identify with the second camp. I do not endorse the view that pornography is in principle, by definition, degrading (to women or others). So to define pornography as “the graphic sexually explicit subordination of women, whether in pictures or words” (as MacKinnon and Dworkin did in the early 1980s) is in a sense conveniently question-begging. Moreover, it seems to exclude from the realm of pornography manifestations which most of us would not hesitate to call pornography (say, a graphic sexually explicit encounter between two men).
A helpful interpretation of Kant should do a number of things. First, it should have a clear conception of its target audience and their familiarity with Kant. Second, it should treat systematically Kant's method and strategy in the Critique, including the place of each argument in his "architectonic." Third, as background it should explain clearly the views of the philosophers to whom Kant is responding. Fourth, it should take seriously Kant's technical terminology and the positions based on it. And finally, it should demonstrate good familiarity with the Kant literature. This book misses the mark on all counts.
This seems to me to leave out something crucial, which is the reception of the work in its immediate cultural milieu. As far as I can tell, though, this isn't anomalous in contemporary history of philosophy.
While doing job searches in history, I've noticed that the norm in dissertations concerns deep reading of the figure's texts themselves, and sometimes discussion of what came before the figure. But there is almost never discussion of the immediate reception of the figure. Though I'm not a historian, this strikes me as frankly weird and potentially damaging. Is it possible to be a Kant scholar, for example, without having any expertise on German Idealism?
I'm interested in whether any historians think that the lack of this norm has harmed analytic history of philosophy. First, two possible examples.
I regret to inform you that Awesome Bigname Philosophy Journal cannot accept your paper for publication. After having googled the title of your paper, and failing that, lines from your abstract and paper, our referee discovered your identity. He found that you are a nobody from a lackluster university, without a tenured or tenure-track position but only a lowly [adjunct teacher, grad student, postdoc etc], and [a woman, black, non-English speaker etc] to boot. Therefore, after a perfunctory glance at your paper, the referee has decided that your paper is not of high enough quality to be published in ABPJ.
We pass on referees' comments in the hope that they may prove useful. We receive over n submissions each year, and must reject many very competent papers, especially those written by people on the bottom of the academic ladder. We hope that your work will find a home in another journal, though obviously one not as highly regarded as ABPJ.
Given the justified lack of popularity of the neo-cons (and Leo Strauss' purported influence on them), I suspect many of our readers will say 'yes.' Moreover, many of the best historians of philosophy have a deservedly visceral, negative response to Straussians who substitute numerological obsessions and evidence-free arguments about hidden meanings (in which everything is reduced to will to power) for serious scholarship. Let's call what's being rejected "vulgar Straussianism."
Even so, esoteric readings of our philosophical past crop up in surprising places. For example, consider two recent books emanating from Vancouver. First, in her important book, Epicureanism at the Origins of Modernity (Oxford), Catherine Wilson writes (purportedly quoting Malebranche): "that the crux of a philosopher’s doctrine is to be found in those passages where he defends an unpopular thesis; his defense of accepted theses has no informational value." (p. 148) Let's call this "Wilson's Dictum." In context Wilson is commenting on Walter Charleton's use of the dialogue format, a genre that lends itself to multiple readings and, thus, esotericism.
Note that Wilson's is not an evidence-free reading; you only get to attribute an unpopular thesis to an author if s/he asserts it at least once and would have motive not to call repeated attention to it. Primed by Wilson's focus on Epicureanism, I noticed that Adam Smith states very clearly in his own voice: “Fortune, which governs the world” (The Theory of Moral Sentiments 2.3.1 [p. 104 in the Glasgow edition]) in a chapter which according to its (perhaps now ironic) title describes the "final cause" of our psychological constitution. Of course, a sentence fragment does not settle the matter against a Providentialist interpretation of Smith, but if one takes Wilson's dictum seriously, merely piling on the Providentialist and Deist passages fails to undermine the credibility of the (shall we say) more neo-Epicurean reading of Smith, which was, by the way, Reid's (see this nice paper by David Fate Norton and J.C. Stewart-Robertson, although Reid's argument is different).
Last night, Michelle Obama presented the award for best picture at the Oscars. She said all the usual inspirational stuff about movies making us laugh and cry and teaching us something important about the human spirit. In Hollywood’s America, it doesn't matter what you look like (wink, wink - race), where you come from (wink, wink - immigration), or who you love (wink, wink - gay marriage); if you believe in yourself, you can make your dreams come true. We all know it’s bullshit, and yet… hey, it’s
But wait a second! Isn't Michelle Obama the First Lady of the United States? The wife of the President? And who are those smiling white people standing behind her in military pomp and little bow ties? Is she actually speaking from the White House? Presenting an entertainment award? I know that's kind of weird, and yet... she looks great! Her bangs are a little heavy, but it works.
At the APA Central* I went to the book session on Jesse Prinz's Beyond Human Nature.** I also went to Mohan Matthen's talk on multi-modal perception. I'll try to bring the two together in this post in discussing the ontological status of the withdrawn generative matrix which seems to fit both Prinz's view of biological mechanisms that allow cultural traits and Matthen's view of retained isotropic images that allow perspectival images.
Here I am, back from my vacation and trying desperately to catch up with the accumulated work and all the interesting events in internet-world of the last week. At NewAPPS alone there are quite a few posts I want to react to, in particular Eric’s post on the genealogy of genealogy. But let me start by commenting on the ‘hot topic’ of the moment, at least among philosophy geeks: L.A. Paul’s draft paper on how decision theory is useless when it comes to making life-transforming decisions such as having a child. Eric and Helen already have nice posts up reacting to the paper, but I hope there is still room for one more NewAPPS post on the topic.
Perhaps the first thing to notice, which comes up only at the end of Paul’s paper, is that the very idea of having children being a matter of choice/decision is a very recent one. For the longest part of human history, and for the largest portion of the human population (excluding, for example, some of those who took up religious vows), finding a partner and procreating was simply the normal course of events, no questions asked. (Indeed, Christian faith even views it as a moral obligation.) It is only fairly recently, possibly only towards the end of the 20th century, that having a child became a matter of choice at least for some people, in some parts of the planet. Contributing factors are the availability of contraceptive methods, and a wider range of life options which are now deemed ‘acceptable’, or at least more acceptable than before. (People who choose to remain child-free, in particular women, are still often looked at with suspicion.)
[cross-posted from our Psychology Today blog]
In the supernatural thriller Memory, written by Bennett Joshua Davlin, Dr. Taylor Briggs, who is the leading expert on memory, examines a patient found nearly dead in the Amazon. While checking on the patient, Taylor is accidentally exposed to a psychedelic drug that unlocks memories of a killer who committed murders many years before Taylor was born. The killer turns out to be his ancestor. Taylor’s memories, despite being of events Taylor never experienced, are very detailed. They contain the point of view of his ancestor and the full visual scenario experienced by the killer.
Though the movie is supernatural, it brings up an interesting question. Is it possible to inherit our ancestors’ memories? The answer is not black and white. It depends on what we mean by ‘memory’. The story of the movie is farfetched: there is no evidence or credible scientific theory suggesting that we can inherit specific episodic memories of events that our ancestors experienced. In other words, it’s highly unlikely that you will suddenly remember your great-great-grandfather’s wedding day or your great-great-grandmother’s struggle in childbirth.
Eric has recently called attention to this wonderful paper by L.A. Paul. The paper focuses on the question of how we make decisions that can transform our lives, and whether we can ever do so rationally. Her paper looks at the decision whether or not to have children, but it applies to other potentially life-transforming decisions, such as whether or not to go to graduate school or get involved romantically with someone.
Here, I don't want to focus on Paul's claims about the extent to which we have knowledge about what it's like to be a parent. Like Eric, I think this depends a lot on cultural context, and westerners seem to be in a particularly impoverished epistemic position because of the rarity of children and the cultural ideals that surround parenthood. Parenthood is described in unrealistic, romantic language (e.g., when I was pregnant, friends and family assured me that I would be in a blissful, rosy, cloud-like state after the birth of my child; breastfeeding would be easy and a wonderful way to connect to my baby; I would forget the pain of childbirth the moment I held her in my arms - all claims that turned out, at least for me, to be false, and that made me wonder if anything was wrong with me).
But I think that Paul is nevertheless right that decision theory does not provide us with the right tools to make potentially life-transforming decisions. When westerners today have children, Paul observes that there is a cultural ideal to "think carefully and clearly about what they want before deciding that they want to start a family." How do we do this? According to standard decision theory "we first partition the logical space by determining the possible states that are the outcomes of each act we might perform. After we have the space of possible outcomes, we assign each outcome a value (or utility), and determine the probability of each outcome’s occurring, given the performance of the act." However, she goes on to argue, convincingly, that this model fails, as it is impossible to calculate expected value based on preferences about what it would be like to have one's own child.
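The textbook procedure Paul quotes can be sketched in a few lines of Python. To be clear: the outcome labels, probabilities, and utilities below are my own illustrative stand-ins, not Paul's, and her point is precisely that the numbers for the 'have a child' act cannot be known in advance.

```python
# Standard expected-utility calculation, as in the quoted procedure:
# partition the possible outcomes of an act, assign each a utility,
# weight each utility by its probability, and sum.

def expected_value(outcomes):
    """outcomes: list of (probability, utility) pairs for one act."""
    total_p = sum(p for p, _ in outcomes)
    assert abs(total_p - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * u for p, u in outcomes)

# Hypothetical numbers for illustration only.
have_child = [(0.6, 8.0), (0.4, 2.0)]    # e.g., flourishing vs. struggling
remain_childfree = [(1.0, 5.0)]

print(expected_value(have_child))        # 0.6*8 + 0.4*2 = 5.6
print(expected_value(remain_childfree))  # 5.0
```

The arithmetic is trivial once the numbers exist; Paul's argument targets the prior step, where the utilities are supposed to come from, since a phenomenologically transformative experience makes them inaccessible in advance.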
There is a curious anonymous document being circulated at the APA Central Division, and posted even on its official notice-board (where I found it). It is entitled "The Report of the APA Committee on the Status of the Profession in 2042." In a footnote, it states: "The Committee was not created by the APA in 2013."
Here are some of the Committee's findings:
"In the US, there will be 15-20 Ph. D. programs, each producing 5 Ph. D.s per year."
"There will be 30-50 three year graduate programs, producing MAPhT (MA in Philosophy Teaching) students. The programs will integrate two years of course work of the familiar kind with a year of training in teaching with practice."
We tend to associate the practice of genealogy, especially in its unmasking variety, with Nietzsche (or the emulators of Foucault). But teaching Toland's (1704) Letters to Serena reminded me that genealogy has a genealogy that precedes Nietzsche. One of Toland's genealogies focuses on the idea of the immortality of the soul (the subject of the second letter). In paragraph 1 of Letter 2, the "immortality of the soul" is treated as a "truth" known to classical sources independent from and preceding Biblical revelation (p. 20; in fact, he insists that the doctrine is unknown to the Hebrew Bible (Letter II, p. 56)). In the very next paragraph (2), on the very next page, Toland speaks in his own voice and offers a concise statement of his methodology:
To persons less knowing and unprejudiced than Serena, it would [be] found strange perhaps to hear me speak of the soul's immortality, as of an opinion, which, like some others in philosophy, had a beginning at a certain time, or from a certain author who was the inventor thereof, and which was favoured or opposed as peoples' persuasion, interest or inclination led them. Letters II.2 (p. 21 [I have modernized spelling to some degree--ES].)
Serena is the official addressee of the Letters; she is a high-status, educated interlocutor. The preface to the Letters contains, in fact, a resounding defense of intellectual equality between the genders. Toland suggests that it is either "inveterate custom" or the "design in the men" that causes female exclusion from the "world of learning." In general, Toland thinks nurture is responsible for much of our (very flawed) "second nature," in women and men alike. So, while Toland accepts a universal human nature, it is according to him extremely plastic. Echoing Plato and Malebranche, he suggests that belief and character formation start in the womb and are developed (or degenerated) by our major social institutions (family, church, universities, etc.)--he treats our acculturation as inevitable, but, as practiced, as a form of social disease (cf. "infection").
Posted by Eric Schliesser on 22 February 2013 at 07:32 in Analytic - Continental divide (and its overcoming), Deleuze (and Guattari, sometimes), Early modern philosophy, Eric Schliesser, Foucault, History of philosophy, Politics, Spinoza, Women in philosophy | Permalink | Comments (2) | TrackBack (0)
This review by Jill Vance Buroker concludes with: "This book should not have been published because it adds nothing to the literature. It is difficult to imagine a Kant specialist recommending its publication."
The review looks fair to me because it offers a number of arguments for the conclusion. (I have some minor quibbles, but I am no Kant expert.) While I might not have used those exact words, I believe that critical book-reviewing plays an essential role in the discipline's quality control. What do readers think? I prefer signed comments on this one.
Reviewed by the excellent Scott McLemee at IHE. The original London production in 1936 (thus two years before the publication of The Black Jacobins) starred Paul Robeson as Toussaint! So two giants of the 20th century, James and Robeson, portraying a giant of world history, Toussaint.
James was, Høgsbjerg stressed, "acutely conscious of the need to challenge the mythological British nationalist narrative of abolition, one that glorified the role played by British parliamentarians such as Wilberforce. Indeed, in the original version of the playscript C.L.R. James mentioned Wilberforce himself in passing, but then later in a handwritten revision (one that I have respected) decided to remove the explicit mention of the abolitionist Tory MP." The revision was almost certainly made "to help bring home the essential truth about abolition -- that it was the enslaved who abolished slavery themselves -- to a British audience who would almost certainly be hearing such a truth for the first time."
Res Philosophica has just announced a cfp "on the topic of transformative experiences for a special issue of the journal." The invited speaker line-up is fantastic. Papers are invited that explore "the implications of the possibility that certain major life experiences are phenomenologically transformative: that is, they are relevantly just like Mary’s when she leaves her black and white room." (For a refresher on Mary see here.) One of the invited papers, "What Mary can’t expect when she’s expecting," by the eminent metaphysician, L.A. Paul, was first called to my attention after my post on the lack of available vocabulary for the emotional life of fatherhood. Paul argues that:
[H]aving one’s own child is an epistemically transformative experience. If it is impossible for me to know what it is like to have the transformative experience of seeing and touching my own child, to know what emotions, beliefs, desires, and dispositions will be caused by having a child, and by extension to know what is like to have the emotions, beliefs, desires, and dispositions caused by having my child, it is impossible for me to gauge the expected value, in phenomenal terms, of having a child. If I cannot gauge the expected value of having my child, I cannot compare this value to the value of remaining childless. And if I cannot compare it to the value of remaining childless, I cannot—even approximately—determine which act would result in the highest expected value. And thus, on the standard model, I cannot use our ordinary, phenomenal-based approach to rationally choose to have my child, nor can I rationally choose to remain childless.
Now the point of the paper is not to argue that becoming a parent is fundamentally an irrational act; Paul allows that there may be ways of thinking about rationality far removed from standard rational choice models that can capture the rationality of such a decision. Paul's paper also allows that models that merely capture the extrinsic features of having children (predator-prey models in ecology, Malthusian growth models in economics and ecology, etc.) do a good job explaining or predicting observed regularities. Paul's approach is even compatible with the possibility that we can experimentally induce utility curves for prospective parents to estimate their willingness to pay for a child.
Some recent findings make me think the answer might be "yes." First, farmers in India have had amazing success increasing crop yields using a method called System of Root Intensification (SRI), showing that we don't need GMOs to "get on with feeding the world" (to use Mark Lynas's phrase). Second, it seems as though most GMO crops really don't have higher yields after all, and pesticide and herbicide use continues to increase, not decrease, with GMOs. In other words, GMOs aren't living up to their promises.
On the political front, the U.S. Food and Drug Administration extended the comment period another 60 days for AquaBounty's proposed genetically engineered salmon, apparently in response to overwhelming opposition to their approval. And in addition to a number of state labeling efforts (including, not surprisingly, Alaska), there is now some movement in the U.S. Congress as well.
There is a very useful thread up at Feminist Philosophers on the unwritten rules of the game of being a professional philosopher, as they apply to publishing, collegiality, teaching etc. A lot of knowledge we acquire as academics is tacit, not systematically taught or collected, and we have to discover it piecemeal over time. Some supervisors and programs help to make some of the tacit knowledge explicit (e.g., by organizing workshops on how to put together a good job application), but many do not. In any case, even if we take that into account, I think most knowledge transmission about academia is still informal.
Yet such knowledge is vital to thrive as an academic. Is it OK for a grad student to approach a specialist in her field whom she has never met before to ask for feedback on her unpublished paper? Is it acceptable to use someone else's syllabus as a basis for your own course? When is it appropriate to contact a journal editor to gently remind them about your paper? The thread contains a lot of useful knowing-how information, but alongside that there is also a lot of tacit knowing-that information that more experienced academics have.
For instance, as an undergrad I did not appreciate the difference between professors and various forms of adjunct faculty and postdocs. I simply saw them all as professors, cozily tenured until retirement. And this is a common mistake: an author for the Chronicle of Higher Education recounts how students at a liberal arts college thought her title 'visiting assistant professor' meant she was a distinguished tenured philosopher, visiting from another faculty. They assumed after her contract ended she would safely return to her home institution. The bleakness of the job market often only becomes apparent to people in their final years of graduate school.
Last week I presented a paper* on "Plato, Political Affect, and Lullabies"** at a wonderful conference at CUNY. One key point is Plato's claim that habits of transgression formed from repeated petty misdeeds can ripple up to bad effect in a polity (788b-c). In the plus ça change category, I read this AP story on "zero tolerance" school policies in the morning paper. Some key grafs:
Zero tolerance traces its philosophical roots to the "broken windows" theory of policing, which argues that if petty crime is held in check, more serious crime and disorder are prevented.[***]
Some nice meditations HERE. For the very reasons that Harman gives, I've started to wonder if I'd do a better job on hiring committees if I just didn't read the reference letters at all.
My first problem is that one has to try to triangulate with respect to the reference writer's personality (and nationality) in order to get anything at all from the letter, and I'm not sure that it is really possible to do this well enough to be fair to the job candidates. The second problem is that I just don't think reference writers are very good predictors of what the candidate is going to be like. Having been on hiring committees for over a decade now, I'm able to follow the careers of many of the people whose letters I've read, and I've seen too many "best philosopher I've ever taught" candidates not publish very much and, on the other hand, lots of people with less effusive praise do amazing things.
I still do read them, though I only look for three things: (1) a better sense of what the person's research is about, especially whether she has other irons in the fire besides the dissertation and writing sample (which suggests at least a little that the person will be tenurable, will not stop after tenure, and will also be a good philosophical conversationalist), (2) a sense of whether the person will be selfless about picking up service work, and (3) red flags. Again, though, I don't know if this is fair, given Harman's concerns and my two worries. I try to discount how famous the letter writer is, which seems to me to be the most common source of people putting too much stock in these things. Though again, this isn't always possible, as some famous people write shorter letters as a matter of course, and one has to factor this in to be charitable to the applicant (though, again, this is unfair to the candidate with the non-famous letter writer who also writes short letters as a matter of course).
On February 7, 2013, Mississippi officially ratified the 13th Amendment. That's right. Eleven days ago.
Apparently, they had voted to ratify the amendment in 1995, but someone forgot to file the paperwork.
It took Dr. Ranjan Batra, an associate professor of neurobiology and anatomical sciences at the University of Mississippi Medical Center, to set the wheels in motion for the state’s eventual ratification of the amendment to abolish slavery.
Dr. Batra saw the film, "Lincoln," and wondered about the rest of the story. He did some googling and discovered that Mississippi had “ratified the amendment in 1995, but because the state never officially notified the US Archivist, the ratification is not official.”
Recently I posted about some fine 'law & economics'-style reasoning in Thomas More's Utopia. In the midst of a critical treatment of the practice of executing convicted thieves, a further argument is added (during an exchange between Raphael Hythloday and John Morton, Archbishop and Cardinal of Canterbury, and at that time also Lord Chancellor of England):
To be short, Moses' law, though it were ungentle and sharp, as a law that was given to bondmen; yea, and them very obstinate, stubborn, and stiff-necked; yet it punished theft by the purse, and not with death. And let us not think that God in the new law of clemency and mercy, under the which he ruleth us with fatherly gentleness, as his dear children, hath given us greater scope and license to execute cruelty, one upon another. Utopia
In context, Hythloday ("talker of nonsense") offers a battery of arguments against the severity of the criminal law of England. Now one of these arguments appeals to the authority of revelation to insist that much capital punishment may be immoral. As if Hythloday is a traditional natural law thinker, he claims that even a "law made by the consent of men" does not make capital punishment moral. (More is writing in a period in which the Tudor dynasty has just consolidated its rise to power by the force of arms.) After all, "why may it not likewise by man's constitutions be determined after what sort whoredom, fornication and perjury may be lawful?"
But Hythloday's appeal to revelation is also a bit unusual. For, in the passage quoted at the top of this post Hythloday relies on an explicit and a more daring implicit contrast. The official contrast (i) is between Moses's legal code and the more gentle rule of Christianity. But there is also an implied contrast (ii) between the way one rules a barbarous people (recently liberated from tyranny) and the way one rules a more civilized people. (The implied contrast between the barbarous and civilization, which is extremely popular in later early modern writing, would have been familiar to More from Aristotle's Politics.) The implied contrast (ii) effectively historicizes the Bible, whose commandments are now understood as fitted to a people at a particular time and place in need of strict rule. This strategy is pursued more relentlessly in Spinoza's Theologico-Political Treatise (e.g., chapter 5; III/75).
Just got through working on a grant application to the Social Sciences and Humanities Research Council of Canada (or SSHRC, as it is known). SSHRC is famous for its horrible on-line application process, and even more so for the fervour with which they defend it. I won't go into all of that now, but here's an example of one of the many irritations you face when you apply: