In an earlier post, I took some initial steps toward reading Foucault’s last two lecture courses, The Government of Self and Others (GS) and The Courage of Truth (CT), in which he studies the ancient Greek concept of parrhesia. As I noted last time, one of the things Foucault finds is a concern on the part of the Greeks that philosophy achieve effects in the world, and not remain at the level of “mere logos.”
Here, I want to say more (warning: lots more. Long post coming!) about that framework and discussion, in Foucault’s discussion of Plato in GS. In particular, I want to look at his reading of Plato’s Seventh Letter. I have to confess that I hadn’t read the Letter until this week, despite having read quite a bit of ancient Greek philosophy. I suspect that I’m not alone. This is in part because the authorship has been contested, but also no doubt because the text is completely at odds with most of the rest of Plato’s corpus. On the surface of things, the Letter is a sort of apologia: Plato is explaining his own conduct in relation to Dion and Dionysius of Syracuse, where he consents to offer advice – parrhesia – and becomes embroiled in the feuding between Dion and Dionysius by trying to mediate on Dion’s behalf. Why did he respond to the call? Because:
Foucault’s last lecture courses at the Collège de France – recently published as The Government of Self and Others [GS] and The Courage of Truth [CT] – are interesting for a number of reasons. One, of course, is that they offer one of the best glimpses we have of where his thought was going at the very end of his life; he died only months after delivering the last seminar in CT, and there is every reason to believe that he knew both that he was dying and why. There’s a lot to think about in them, at least some of which I hope to talk about here over a periodic series of posts. Here I want to say something introductory about the material, and look at Foucault’s critique of Derrida in it.
The lectures contain a sustained investigation of parrhesia, the ancient Greek ethical practice of truth-telling. “Truth to power” is the closest modern term we have for such a practice, though you don’t have to get very far into the lectures to realize how richly nuanced the topic is, and how many different ways it manifests itself in (largely pre-Socratic) Greek thought and literature. The lectures also contain a number of references to contemporary events and people (from the beginning: GS starts with Kant, before going back to the Greeks), and it’s hard to put CT down without a sense that, had there been another year of lectures, Foucault would have been more explicit in assessing the implications of the study of Greek parrhesia today.
In this section, I pit genealogy against its close cousin archeology in order to argue that genealogy really is what is needed for the general project of historically informed analyses of philosophical concepts that I am articulating. And naturally, this leads me to Foucault. As always, comments welcome! (This is the first time in something like 20 years that I've done anything remotely serious with Foucault's ideas: why did it take me so long? Lots of good stuff there.)
I hope to have argued more or less convincingly by now that, given the specific historicist conception of philosophical concepts I’ve just sketched, genealogy is a particularly suitable method for historically informed philosophical analysis. In the next section, a few specific examples will be provided. However, and as mentioned above, I take genealogy to be one among other such historical methods, so there are options. Why is genealogy a better option than the alternatives? In order to address this question, in this section I pit genealogy against one of its main ‘competitors’ as a method for historical analysis: archeology. Naturally, this confrontation leads me directly to Foucault.
Daniel Zamora’s interview in Jacobin (following the publication of a book he edited), in which he claims that Foucault ended up de facto endorsing neoliberalism, has generated a lot of renewed discussion about Foucault’s late work. Over at An und für sich, Mark William Westmoreland has organized a series of posts responding to Zamora. I’m one of the contributors; the others are Verena Erlenbusch (Memphis), Thomas Nail (Denver), and Johanna Oksala (Helsinki). My contribution is cross-posted below, but you really should start with the interview and then read Erlenbusch’s post – she lays out the context of the controversy, and discusses the book (which came out fairly recently, and which hasn’t been translated yet) in considerable detail.
I’ll update with links to Nail’s and Oksala’s contributions when they’re up.
As some readers may recall (see this blog post with a tentative abstract -- almost 2 years ago!), I am working on a paper on the methodology of conceptual genealogy, which is the methodology that has thus far informed much of my work on the history and philosophy of logic. Since many people have expressed interest in this project, in the next couple of days I will post the sections of the paper that I've already written. Feedback is most welcome!
Today I post Part I, on the traditionally a-historical conception of philosophy of analytic philosophers. Tomorrow I will post Part II.1, on Nietzschean genealogy; on Thursday and Friday I will post Part II.2, on the historicity of philosophical concepts, in two installments.
Williams (2002) and Craig (2007) fittingly draw a distinction between genealogies that seek to expose the reprehensible origins of something and thereby decrease its value, and genealogies that seek to glorify their objects by exposing their ‘noble’ origins. The former are described as ‘subversive’, ‘shameful’ or ‘debunking’, while the latter may be dubbed ‘vindicatory’. (I will have much more to say on this distinction later on.) Nietzsche’s famous genealogical analysis of morality is the archetypal subversive genealogy, and has given rise to a formidable tradition of deconstruction of concepts, values, views, beliefs etc. by the exposure of their pudenda origo, their shameful origins. As described by Srinivasan (2011, 1),
Nietzsche’s innovation prompted a huge cultural shift towards subversive genealogical thinking – what might be called the ‘Genealogical Turn’ – including Freudian analysis, 20th-century Marxism, Foucault’s historical epistemology, certain strands of postcolonial and feminist theory, and much of what goes by the label ‘postmodernism’. These ideological programmes operate by purporting to unmask the shameful origins – in violence, sexual repression, gender or racial hegemony and economic and social oppression – of our concepts, beliefs and political structures.
We continue awaiting the decision of a grand jury on whether or not to indict Darren Wilson, a white police officer, who shot and killed Michael Brown, an unarmed black teenager, exactly 15 weeks ago today on a suburban street in Ferguson, Missouri. News reporters from across the globe have been camped out in Ferguson for months, their expectation of an announcement teased and disappointed several times in the last week alone. On Monday, Missouri Governor Jay Nixon declared a state of emergency and activated the National Guard in advance of the grand jury's decision. Yesterday, President Barack Obama, in what can only be judged to be an anticipation of Wilson's non-indictment, preemptively urged protesters not to use Ferguson as an "excuse for violence." In the meantime, demonstrators of various ilk remain on standby, rallying their troops, refining their organizational strategies, painting their oppositional signs, standing vigilantly at the ready for whatever may come.
But what are we waiting for, really, as we wait for Ferguson?
As I’ve suggested here before, one of the undertheorized aspects of biopower is its relation to the juridical power it supposedly supplants. I think it’s a mistake to hold that biopower simply replaces juridical power, at least on Foucault’s considered view (for the sorts of reasons given in papers such as this one; nor do I think the relation should be read that way, whatever Foucault thought). But to say that is then to pose a problem concerning their interrelations.
This paper by Jack Balkin (law, Yale) offers some help in disentangling the various threads. Balkin’s concern is to outline the features of what he calls the “national surveillance state,” which he proposes is our current mode of governance, having taken over and transformed the governmental apparatus from the mid-century Welfare and National Security states. The former developed through the implementation of New Deal programs, and the latter through the Cold War. The two of them together, plus developments in computing power, enable the surveillance state, which is a “way of governing” that has developed over the last half of the twentieth century (and thus long predates 9/11 and its aftermath):
Cloud computing – where users keep their data (and often their applications) online – poses significant theoretical and regulatory problems. Many of these concern jurisdiction: it’s very hard even to know at a given moment where data is kept, and it’s often unclear (in the case of privacy, for example) which jurisdiction’s privacy and data protection rules should apply (the one for the data subject? the company that collected the data? the companies processing it? etc.). Not only that, U.S. and EU law are wildly inconsistent on the point, even though any large big data company has to serve multiple jurisdictions.
A recent piece by Paul M. Schwartz does some valuable work disentangling these issues; here, I want to focus on one moment. Schwartz notes that cloud computing will likely induce significant changes in how firms are structured, and how they structure their data handling. Back in 1937, Ronald Coase proposed that companies will decide between doing something in house and outsourcing it based on a comparison of the costs of each. If it’s more efficient to do something in-house, using the hierarchical control structure of the firm and avoiding the complexities of dealing with markets, that’s what we can expect. If, on the other hand, it turns out that it’s more efficient to hire somebody else to do the job, we can expect companies to do that. Companies have to balance the difficulties of managing a project in-house versus the costs of negotiating contracts with independent vendors.
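Coase's decision rule can be caricatured in a few lines. The sketch below is a minimal toy illustration of the make-or-buy comparison; all cost figures are hypothetical, not drawn from Coase or Schwartz:

```python
# Toy illustration of Coase's make-or-buy comparison (all figures hypothetical).

def make_or_buy(in_house_cost: float, vendor_price: float,
                coordination_cost: float, transaction_cost: float) -> str:
    """Compare the total cost of producing in-house (production plus
    managerial coordination) against outsourcing (vendor price plus the
    costs of finding, negotiating with, and monitoring the vendor)."""
    total_make = in_house_cost + coordination_cost
    total_buy = vendor_price + transaction_cost
    return "make" if total_make <= total_buy else "buy"

# Hypothetical data-storage decision: cheap cloud vendors plus low
# contracting costs tip the balance toward outsourcing.
decision = make_or_buy(in_house_cost=100, vendor_price=90,
                       coordination_cost=30, transaction_cost=20)
print(decision)  # -> buy
```

On this picture, cloud computing matters precisely because it lowers the "buy" side of the comparison, which is why Schwartz expects it to reshape firm structure.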
Several months ago, I argued here that big data is going to make a big mess of privacy – primarily because of a distinction between “data,” understood as the effluvia of daily life, generated by such activities as moving around town or making phone calls, and “information,” which implies some sort of meaning. Privacy protects against the disclosure of “information,” since this can be an intentional act; big data allows surveillance of areas traditionally considered private without any act of disclosure, since the analytic computers will take care of turning the data into information. My standard talking point here is a recent study of Facebook likes that determined that all sorts of non-trivial correlations could be deduced from what people “like”:
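The mechanics behind such studies are easy to mimic on toy data. The sketch below uses entirely fabricated numbers (nothing from the actual study) to show the structural point: a bare pattern of binary "likes" – data – yields a statistical inference about a sensitive trait – information – without the user disclosing anything:

```python
# Toy version of inference-from-likes. All data fabricated, illustrative only.
# Each row: did the user "like" pages A, B, C; last element: a sensitive
# trait that was never disclosed directly (1 = yes, 0 = no).
users = [
    ([1, 0, 1], 1),
    ([1, 0, 0], 1),
    ([0, 1, 0], 0),
    ([0, 1, 1], 0),
    ([1, 0, 1], 1),
    ([0, 1, 0], 0),
]

def predict_trait(likes, training):
    """Predict the trait by copying it from the training user whose likes
    best match this user's (a crude nearest-neighbor rule)."""
    best = max(training,
               key=lambda row: sum(a == b for a, b in zip(likes, row[0])))
    return best[1]

# A new user who has disclosed nothing except three page-likes:
print(predict_trait([1, 0, 0], users))  # -> 1: the trait is inferred from likes alone
```

Real studies use regression over millions of rows rather than nearest neighbors, but the privacy problem is the same: no single "like" discloses the trait, yet the aggregate does.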
Yesterday's post about the extent to which mainstream feminist thinking is implicated in trans-exclusionary radical feminism generated some great comments. In particular, my impression that Women and Gender theorists overwhelmingly defined gender differences as belonging to the contingent realm of culture and sex differences as belonging to the realm of nomic necessity was mistaken. However, nobody took up the main point I was trying to make (and it should be clear that no one has an obligation to do so), so I'll try to frame it more generally.
First, with respect to gender, it's not enough to problematize the gender/sex distinction merely by arguing that sexual difference itself is imbued with cultural and epigenetic factors. Has the debate gone beyond that sort of generic culturally relativist move? It was not clear from the comments. The challenge by Serano and Garcia comes in part from the other direction: denying that aspects of gender difference are in the realm of nomic necessity leads to other forms of oppression. Judging from Sullivan's post, the denial of this by many feminist activists involves systematically ignoring or dismissing the testimony of many trans people, and this suppression accounts for much of the acrimony between TERFs and transgender people.
Second, the gender/sex issue was a little bit orthogonal to the problem I tried to pose, which was that much feminist theory (at least the stuff I studied seven years ago) wasn't able to navigate between the Scylla and Charybdis of the politics of identity and the politics of difference. Serano and Garcia argue that even recent feminist theorists (who are aware of the danger) end up denigrating femininity and telling women that they should have traditionally masculine traits. But if the alternative is Carol Gilligan or Clover type theory, no thanks. Clover critiques the "final girl" in horror movies (the last possible victim, who survives and kills the killer) as a "male adolescent in drag," in part because the final girl has "masculine" attributes such as planning and use of reason. As far as infantilizing condescension goes, this is about on par with pesticide companies giving pink teddy bears to women with breast cancer.
You can find here the latest iteration of quotes from a philosopher cleverly juxtaposed with incongruous pictures.
I think philosophers divide into those whose prose works well for this kind of thing and those for whom it doesn't. Anything even slightly portentous works, and if you are skillful in your choice of images, I think that anything with technical vocabulary would probably be ripe as well, though the result would be funny for different reasons.
Some philosophers' work can be illustrated in a non-ironic way. Peter Singer once said that the pictures in his animal cruelty book convinced a lot more people than the actual arguments. Probably any non-trivial work of ethics could benefit from this kind of illustration. And, finally, visual artists have been appropriating philosophical sentences for decades. I forget the name of the guy who put a sentence from Davidson next to all of his paintings (I can't find him, because there is also a watercolorist of flowers named Donald Davidson). It was cool stuff. More recently (due in part to the labors of Reza Negarestani and Ray Brassier, as well as Armen Avanessian and Graham Harman), lots of artists are doing things with Speculative Realism.
I wonder what it is about philosophy such that our sentences work so well in conjunction with pictures, both in ironic contraposition and non-ironically. In any case, we should probably be happy to provide the service.
I’m teaching a course on privacy and surveillance this fall, and one of the things I’ve been doing is reading up on aspects of privacy theory that I didn’t know much about, such as the feminist critique of privacy. The basic feminist argument is that “family privacy” has been historically used as a cover to shield domestic abuse from legal scrutiny (and not only against women – see this disturbing Supreme Court case about a stepfather who beat a four-year-old into serious and permanent cognitive disability; the Rehnquist Court held that state social services had no enforceable obligation to intervene because of family privacy). It is in this context that I ran across Reva Siegel’s (Law, Yale) fantastic article on the way that claims of domestic privacy emerged out of the collapse of a husband’s legal right to “chastise” (beat) his wife. Siegel’s larger purpose is to study the ways that legal reforms can serve to “modernize” status regimes, a process in which old hierarchies are given new justifications and (perhaps) weakened, but not eliminated. It’s not that the legal reforms don’t achieve anything – it’s that it’s very, very difficult to dismantle regimes of social privilege, and that (as Foucault noted), power always entails resistance.
Here, I want to focus briefly on the move from chastisement to privacy, because I think it suggests something important for our understanding of biopolitics. As Siegel outlines it, the basic story is that, over the course of the nineteenth century, a couple of groups made substantial inroads into the old common law right of chastisement: temperance groups used stories of horrific abuse of women by drunk husbands to advocate banning alcohol, and feminist groups used the same stories to advocate for the banning of wife-beating. The feminists eventually won, and a pair of state supreme court cases around 1870 (one in Alabama and one in North Carolina) emphatically – perhaps a little too emphatically – pronounced wife beating to be the unwelcome vestige of a primitive, bygone era.
There's been a good bit of conversation recently about the merits and demerits of "public philosophy," and, as someone who considers herself committed to public philosophy (whatever that is), I'm always happy to stumble across a piece of remarkably insightful philosophical work in the public realm. Case in point: Robin James (Philosophy, UNC-Charlotte) posted a really fascinating and original short essay on the Cyborgology blog a couple of days ago entitled "An attempt at a precise & substantive definition of 'neoliberalism,' plus some thoughts on algorithms." There, she primarily aims to distinguish the sense in which we use the term "neoliberalism" to indicate an ideology from its use as a historical indicator, and she does so by employing some extremely helpful insights about algorithms, data analysis, the mathematics of music, harmony, and how we understand consonance and dissonance. I'm deeply sympathetic with James' underlying motivation for this piece, namely, her concern that our use of the term "neoliberalism" (or its corresponding descriptor "neoliberal") has become so ubiquitous that it is in danger of being evacuated of "precise and substantive" meaning altogether. I'm sympathetic, first, as a philosopher, for whom precise and substantive definitions are as essential as hammers and nails are to a carpenter. But secondly, and perhaps more importantly, I'm sympathetic with James' effort because, as Jacques Derrida once said, "the more confused the concept, the more it lends itself to opportunistic appropriation." Especially in the last decade or so, "neoliberalism" is perhaps the term par excellence that has been opportunistically appropriated, by both the Left and the Right.
James' definition of neoliberalism's ideological position ("everything in the universe works like a deregulated, competitive, financialized, capitalist market") ends up relying heavily on her distinction of neoliberalism as a particular type of ideology, i.e., one "in which epistemology and ontology collapse into one another, an epistemontology." In sum, James conjectures that neoliberal epistemontology purports to know what it knows (objects, beings, states of affairs, persons, the world) vis-à-vis "the general field of reference of economic analysis."
I am increasingly convinced that any Foucauldian effort to understand neoliberalism needs to focus on it as a strategy of subjectification (more specifically, it’s the strategy of subjectification specific to contemporary biopower, and it says that the truth of the human being is as homo economicus). One reason I think this is that one finds repeated examples of where policy or governmental prescriptions specific to neoliberalism conflict with neoliberalism as a strategy of subjectification; in such cases, the strategy of subjectification generally seems to win. Let me explain with an example which will hopefully serve as proof of concept of the admittedly very big thesis I’ve just announced.
An important and somewhat neglected topic is what happens when biopolitics intersects with juridical power in courts of law. Today, we got a good example of one way it can happen. Several years ago, the Supreme Court ruled that states could not execute the “intellectually disabled.” They also let the states decide what that meant. Today, they specified (5-4, with the usual lineup for a “liberal” Kennedy opinion) that, although using an IQ score of 70 or below as evidence of such disability is ok, it’s not ok to draw a bright line cutoff at a score of 70 because one had to take into account the 5 point margin of error in the test itself. In so doing, the SCOTUS spared the life of a Florida inmate with a measured IQ of 71.
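The arithmetic the Court relied on is simple: with a ±5 point margin of error, a measured score of 71 is consistent with a true score anywhere from 66 to 76, so the measurement alone cannot rule out disability. A minimal sketch of that reasoning (the function names are mine, not the Court's):

```python
def iq_interval(measured_score: int, margin: int = 5):
    """Return the range of true scores consistent with a measured IQ,
    given the test's margin of error."""
    return (measured_score - margin, measured_score + margin)

def cutoff_ruled_out(measured_score: int, cutoff: int = 70, margin: int = 5) -> bool:
    """True only if the entire interval lies above the cutoff, i.e. the
    measurement by itself can exclude intellectual disability."""
    low, _ = iq_interval(measured_score, margin)
    return low > cutoff

print(iq_interval(71))        # -> (66, 76)
print(cutoff_ruled_out(71))   # -> False: a measured 71 cannot exclude disability
print(cutoff_ruled_out(76))   # -> True: the interval (71, 81) lies above 70
```

A bright-line rule at 70 treats the measured number as the true score; the Court's holding in effect requires the state to reason over the interval instead.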
There is a lot to say here (and for me, quibbling about where the IQ cutoff should be distracts from the larger point, which is that we shouldn’t be executing people at all; and IQ testing brings its own set of problems), but I do think it’s notable the extent to which the decision is expressly biopolitical, and not juridical. Recall Foucault’s claim that one symptom of the emergence of biopower is a decline in the death penalty (History of Sexuality 1, p. 138). Here, we see how that decline can manifest itself even within the judicial system.
Biopolitics – even when understood in its narrow sense of life itself being a political issue – comes in at least two different strands. The first, which historically precedes the second, was concerned with what Foucault called a “politics of public health,” taking on standard biopolitical issues of population optimization, public health and so forth as mass issues. The resulting policies included mass vaccination campaigns, the installation of proper municipal sewage systems, and so forth. These programs resulted in demonstrable and substantial gains in typical measures of public health, such as life expectancy.
I am aware of exactly two comments Foucault made on Vico. From Discipline and Punish, with regard to the description of the 'spectacular' and famously brutal execution of Damiens: 'As Vico remarked this old jurisprudence was "an entire poetics"' (Discipline and Punish, trans. Alan Sheridan, Penguin, 1977: p. 45). Then from 'What is Enlightenment':
The present may also be analyzed as a point of transition toward the dawning of a new world. That is what Vico describes in the last chapter of La Scienza Nuova: what he sees “today” is “a complete humanity ... spread abroad through all nations, for a few great monarchs rule over this world of peoples”; it is also “Europe ... radiant with such humanity that it abounds in all the good things that make for the happiness of human life.”
Gary Becker, the Nobel laureate economist at the University of Chicago, has died.
Becker is perhaps best known for "human capital" theory, which talks about how one might, for example, come to think of education as an investment in one's future earnings. As the absolute normalcy of a statement like this would suggest, I think it's probably hard to overstate how influential Becker has been on the development of the neoliberal world we all inhabit. Foucault's analysis in Birth of Biopolitics is essential, as are the exchanges (here and here) between Becker, Bernard Harcourt (whose Illusion of Free Markets ought to be required reading), and the Foucauldian Francois Ewald.
As readers of this blog will know, I'm no fan of neoliberalism. But, as I tell my students, if you don't see neoliberalism at least as a temptation, you didn't get it.
A few days ago, I used the lack of historical figures in its top-20-pernicious list to propose that Leiter’s poll about pernicious philosophers said a lot about the politics of academic philosophy, and not so much about anything else. “Pernicious,” in other words, is a political designation. In the comments, Jon Cogburn wonders:
“You had me up until the historical construct bit. Aren't we in danger of presupposing that something can't both be a political act of boundary policing *and* a statement with a truth value? I mean I think that it's objectively false that Heidegger is a pernicious philosopher. I also think that calling one's colleagues charlatans in public forums is objectively pernicious. Maybe I [am] trying to police a boundary here, but aren't some boundaries objectively worth policing?”
This is a fair question; let me try to pursue an answer in three slightly different ways.
There’s a discussion going on over at Leiter about the results of his latest poll: which modern philosopher had the “most pernicious influence” on philosophy? Heidegger was the strong #1, both in terms of the number of people who hated him and the intensity of their hatred. This doesn’t seem that surprising, given that Leiter’s readers, um, lean analytic, and given that Leiter took the Derrida option off the table.
Much more interesting, it seems to me, is the historical skew of the results. Most of the figures in the top 20 are 20th century philosophers, and all but three (Descartes, Berkeley, and Kant) are 19th or 20th century (and it wouldn’t be conceptually wrong to put Kant in with the 19c). Does this reflect poor historical training? Do influential but controversial positions get absorbed into the ‘mainstream’?
With a provocative title such as this, it is easy to imagine how the rest of the story will go. Philosophy, one will read, no longer has an effective role to play in society. One could perhaps draw on the authority of Stephen Hawking and argue, as Hawking does, that philosophy is dead and serves no purpose, for it is now physics that best provides the answers to the questions that were once the focus of philosophers. The title may also lead one to anticipate the economic argument where philosophy is portrayed as being one of the most useless of the humanities degrees, with the subsequent encouragement that one pursue, for the sake of one's professional future, a more economically viable degree.
If either of these arguments is what the “philosophy has no future” title intends, then there are counter-arguments at the ready. With respect to the first, there is plenty of room to argue, as many have (see Laurie Paul’s essay for example), that the physics Hawking encourages presupposes a metaphysics that leaves plenty of opportunity for traditional philosophical questions to gain traction and in turn foster cooperative engagement between philosophy and science (Roberta’s excellent post along with Eric’s post on dark matter are cases in point of just such cooperation). There is also plenty of evidence to challenge the common assumption that philosophy is not a good degree to pursue in order to get a lucrative job upon graduation. Far from being a hindrance to future economic success, philosophy majors on the whole earn more than graduates with other degrees (see this story [h/t Catarina]). Philosophy majors also outperform students from other majors when it comes to standardized tests – e.g., LSAT, GRE (see this).
These counter-arguments are persuasive and as far as I’m concerned definitively undermine the two assumptions that may appear to motivate the title of this post. These assumptions, however, are not what motivated the title. What motivated it instead is not the notion that philosophy has no future because it has been displaced by competing forces that have now taken over the future that philosophy could once claim, but rather that the very attitude that philosophy ought to have such a future is itself derivative of a philosophy that has no future.
I would propose defending, to state the thesis more directly, a contemporary reworking of Camus’ philosophy of the absurd.
In a famous essay, Deleuze suggests that our society has moved beyond Foucauldian disciplinary power to a more fluid “control society,” where the various sites of disciplinary control merge into a modulated network of interlocking sites of power, the primary technique of which is access control. As Deleuze notes, the move is “dispersive,” and “the factory has given way to the corporation.” Hence, “the family, the school, the army, the factory are no longer distinct analogical spaces that converge towards an owner – state or private power – but coded figures – deformable and transformable – of a single corporation that now has only stockholders.” (6) The most vivid image of such a society he attributes to Guattari, who:
“has imagined a city where one would be able to leave one’s apartment, one’s street, one’s neighborhood, thanks to one’s (dividual) electronic card that raises a given barrier; but the card could just as easily be rejected on a given day or between certain hours; what counts is not the barrier but the computer that tracks each person’s position – licit or illicit – and effects a universal modulation” (7)
This thesis has been most widely applied to surveillance and security, and is easily evidenced by things like “no fly” lists and the number of passwords one has to generate online. That said, I would like to suggest here that, at least in one respect, we’re moving past the control society. Or, perhaps, we’re seeing the truth of the control society in an unexpected way. One feature of the move from the dungeon to the panopticon is regulatory efficiency: it costs a lot less to get people to police themselves than to coerce them with brute force. The move to control is similarly efficient, in that multiple, closed panoptic systems are much less efficient than a more modular arrangement where panoptic technologies are (as Foucault said they would be) completely diffused into society and work together, rather than separately.
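The card Guattari imagines reads today almost like a specification: the barrier itself is dumb, and what Deleuze calls "universal modulation" is just a policy check that runs at every gate. A minimal sketch of such a system (the card IDs, zones, and rules are my own invention, not anything from the text):

```python
# Toy model of the "dividual" access card Deleuze attributes to Guattari:
# the barrier is dumb; the decision lives in a policy the cardholder never
# sees. All names and rules here are hypothetical.
from datetime import datetime

def barrier_opens(card_id: str, zone: str, now: datetime,
                  revoked: set, curfew_zones: dict) -> bool:
    """Modulated access: the card works unless it has been revoked, or
    the zone is closed to it during certain hours."""
    if card_id in revoked:
        return False
    hours = curfew_zones.get(zone)
    if hours:
        start, end = hours
        if start > end:  # window wraps past midnight, e.g. 22:00-06:00
            closed = now.hour >= start or now.hour < end
        else:
            closed = start <= now.hour < end
        if closed:
            return False
    return True

revoked = {"card-042"}                       # cards rejected outright
curfew_zones = {"neighborhood-7": (22, 6)}   # zone closed 22:00-06:00

# What counts is not the barrier but the tracking computer's decision:
print(barrier_opens("card-001", "neighborhood-7", datetime(2024, 1, 1, 23),
                    revoked, curfew_zones))  # -> False: zone closed at this hour
print(barrier_opens("card-042", "plaza", datetime(2024, 1, 1, 12),
                    revoked, curfew_zones))  # -> False: card revoked
print(barrier_opens("card-001", "plaza", datetime(2024, 1, 1, 12),
                    revoked, curfew_zones))  # -> True: access granted
```

Note that nothing in the physical environment changes between a licit and an illicit passage; the modulation is entirely in the lookup, which is the point of the "dispersive" move from enclosure to control.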
In his critique of Posner’s economic analysis of law, the late Ed Baker offers some remarks that might help us to understand current developments in educational policy. Posner defends what we will now recognize as a number of the core commitments of neoliberal policy, in particular the fundamental efficiency of markets and the price mechanism for the optimal allocation of social goods. The more people want something, the more they are willing to pay, and so goods get bought and sold (as they move from those who value them less – sellers – to those who value them more – buyers) until everyone is as happy as they can be, given constraints on resources.
Schliesser thought he could escape the Borg, but a senior philosopher elsewhere has tracked him down for us here. In this very interesting reflection, he writes about the head-lice inspection all Dutch kids undergo at school, and connects it to Foucaultian analyses of biopolitics (or, in less fancy terms, of that governmental rationality that licences, among other things, involvement in public health). But, as Schliesser recognizes, it's hard to be simply "against" public health -- what, you *want* your kids and other kids to have lice?
Also, any objections, like his about evidence of the effectiveness of school-level inspection, share much the same rationality -- what's the most effective means of obtaining a health-managed population? Now we could give this some sort of neoliberal twist: a market in private insurance against the costs of head-lice treatment, with a tax penalty for non-compliance, might fit -- an AHLIA (Affordable Head Lice Inspection Act), if you will -- but would this neoliberalization not still fit within a biopolitical horizon?* Or, if you prefer more direct means, do we continue at the level of schools, or centralize ("up") to the level of the city or state, or further de-centralize ("down") to the level of the household with, say, random house visits?**
In a second court ruling on the NSA’s metadata collection program, Judge Pauley rejected virtually all of the arguments raised by the ACLU and other plaintiffs against the program. This opinion thus stands opposed to Judge Leon’s ruling of a few weeks before (my analysis of that is here). Here I want to look at Judge Pauley’s opinion, in the context of my original question about data and information as concepts in thinking about privacy in the era of big data.
In a previous post, I suggested that the concept of privacy is going to prove inadequate as a protection against big data. This is the case for structural reasons: the concept of privacy is designed to protect information (generally, either information that is thought to be inherently intimate, or in the sense of control over the dissemination of information), whereas big data operates at what one might call a sub-information level: it siphons up enormous amounts of data, which becomes meaningful information only after it is analyzed in the context of vast amounts of other data. As a result, big data knows everything about us, even though we have neither consented nor not-consented to the release of the information that condemns us.
Today I want to leave that aside for the moment, and develop some background by way of a Foucauldian reading of Judge Leon’s recent decision issuing a preliminary injunction against the NSA’s collection of vast amounts of telephone metadata on American citizens. In subsequent posts, I will offer a reading of Judge Pauley’s decision upholding the NSA program and an earlier Supreme Court decision that gets at the issue before returning to the question of privacy. Although the analysis here is based on court cases and government programs, the intention is ultimately to make a more general point.
A few days ago, while trying to open the interwebs thingy that allows me to start entering my grades, I was prevented from doing so by a pop-up menu that referenced LSU's Policy Statement 67. The text included unsubstantiated and highly dubious claims, such as that most workplace problems are the result of drug and alcohol abuse by workers. And this was only a few weeks after all of the chairs at LSU had to provide verification that every single faculty member had read a hysterical message from our staff and administrative overlords that justified extending the pool of pee-tested employees at LSU to include faculty. The wretched communiqué justified pee-testing faculty on the basis of new evidence showing that marijuana is harmful to 13-year-olds.*
Anyhow, when I scrolled to the bottom of the pop-up, I had to click a button saying not only that I had read the document but also that I "agreed" with it.
I honestly don't get this. Are my beliefs a condition of employment at LSU? There was no button that said I read it but didn't agree with it.
All nine of the Schock winners thus far were or are eminent philosophers, and most of us can only aspire to emulate the quality of their work as best we can. Even if one allows that "The Schock" only seems to go to male, analytical philosophers, each winner is an important and interesting philosopher, deserving of significant honor. Having said that, the Schock Prize judges had four or five chances to honor David Lewis, and failed to do so. (Lewis died in the Fall of 2001.) Lewis is arguably the most significant and influential (analytical) philosopher of the last quarter of the 20th century. (Perhaps Deleuze is the only contemporary who will match his enduring significance, but he and Foucault died before the Schock got up and running.) So, while one can excuse the members of the Royal Swedish Academy of Sciences (RSAS) for playing it safe and not awarding the prize to, say, Derrida (and, thus, avoiding the predictable outcry), not giving it to Lewis means they failed to grasp the nature of analytical philosophy in their own time. That, in addition, they passed on Gadamer, Ricœur, Goodman, and, thus far, Habermas suggests that the Schock has a long way to go before it can establish itself as the ultimate arbiter of general philosophical excellence.
Last week I received a widely distributed announcement of a conference celebrating "The 'Stanford School' of Philosophy of Science." The 'core' members of this school are taken to be: Nancy Cartwright (Durham), John Dupré (Exeter), Peter Galison (Harvard), Peter Godfrey-Smith (CUNY), Patrick Suppes (Stanford). The parentheses give the current affiliations of the 'core' members; this immediately suggests that if there is a 'school' at all, we are dealing with either a historical phenomenon or a very distributed one. Scanning the list of the 'next generation' confirms that Stanford is not the current base of the purported school.
First, I adore much of the work done by many in the 'core,' but the idea that this group is a 'school' is deeply flawed. For Suppes is far better understood (as he understands himself) as belonging to the first generation (including Kyburg, Pap, and Isaac Levi) of intellectual offspring of Ernest Nagel, who successfully created American analytical philosophy by combining the scientific wing of Pragmatism with the new approaches emanating from Vienna, especially, and Cambridge (recall and here). In his autobiography, Suppes describes how he assimilated from Nagel the significance of the history of science.
Over lunch my dad asked me why the use of chemical weapons is thought morally worse than that of other weapons (some of which are capable of tremendous carnage and death). I couldn't do much better than, "it's against international law." Upon reflection my answer is not entirely silly (I return to that below). It is worth noting that, as so-called 'weapons of mass destruction' go, chemical weapons are by no means the worst in killing potential. First I turn to Owen Schaefer, who wrote a very thoughtful blog post with a purported answer:
It is indeed generally worse to be killed by a chemical weapon than a conventional one. Chemical agents such as nerve gas typically cause significant suffering before death – choking, vomiting, chemical burns, defecation, convulsions and the like. For those lucky enough to survive, chronic neurological damage can be expected. Conventional weapons are not pleasant either, to be sure, and can similarly cause severe burns, painful wounds, infections, loss of limbs and so on. Nevertheless, the suffering is pretty much inevitable in a chemical attack, whereas at least those killed by conventional weapons may be killed quickly, even instantly. What’s more, chemical weapons are more dispersive than most conventional weapons, more likely to cause collateral damage to noncombatants. These factors indicate we have strong pro tanto reasons to prefer, if a conflict is going to occur at all, that conventional rather than chemical weapons be used.--Schaefer.
I doubt this answer will fully satisfy my dad (a Holocaust survivor who has seen his share of horrors). For Schaefer does not really address the hypocrisy charge, given that the legal status quo on the use of nuclear weapons is -- pace this advisory opinion -- far more permissive. (I am no expert, so feel free to correct me.) Because I am skeptical about the 'more likely to cause collateral damage' claim, it appears that the main moral rationale for focused outrage over the use of chemical weapons is that we prefer our mass killing without prior suffering for the killed. It is, thus, in the spirit of the displacement of torture and needless suffering from 'civilized' penal codes (as suggested by Foucault) since the late Enlightenment.
It is, of course, no argument against this anti-suffering ethic to note that it fits a common anesthetized (or aestheticized) picture of war. Yet, if consequences matter (as they clearly do in Schaefer's analysis), we also need to ask ourselves whether we are taking all the relevant consequences into consideration.