Developments this week highlight the problems with the neoliberal decision to privatize medicine in the U.S. Certainly the Affordable Care Act (ACA), which entrusts responsibility for access to healthcare to private insurance companies and then attempts to contrive a market in which patients shop between insurance plans as some sort of proxy for shopping for doctors, is the most famous recent example of this decision. Never mind that in market terms medicine is a classic credence good: you may not know either before or after purchase whether you are getting a good deal, and the barriers to knowing are nearly insuperable, owing to the inherent complexity of medicine and the inherent uncertainty behind most medical judgment, even when perfectly executed by brilliant practitioners. Medical care just isn’t one of those things that works well as a market good.
Meantime, this week, President Obama signed the 21st Century Cures Act. This bipartisan bill makes an enormous investment in research into urgent health problems from Alzheimer’s to opiate addiction to cancer, and continues to fund the promising developments in “precision medicine.” This is an obvious good. What could be wrong? The tradeoff, beyond the now stale point that there is no investment in the social determinants of health – like urban infrastructure – is that it loosens regulatory requirements for drug approval even further. As the Washington Post reports:
What does the Trump election mean for neoliberalism as a doctrine? Adam Kotsko over at An und für sich has some interesting thoughts on the matter; what follows is intended as a constructive engagement. As I posted last week, I think Trump’s victory is inseparable from what Foucault calls state racism, and the appointment of Steve Bannon and nomination of Jeff Sessions certainly add evidence to the theory that his will be a government of White Supremacy (I am not going to engage in the parlor game of distinguishing “white nationalism,” “white supremacy,” and so on. It’s a parlor game that requires white privilege even to play, and all the iterations mean the same basic thing: white people should be in charge). One of my points there is that the system is structurally rigged against cities and other places where non-Trump voters live. At current count, Clinton - a garden-variety neoliberal - is up by nearly 1.7 million votes in the popular vote count, and that number is growing. This means that more people who voted want neoliberalism than want Trumpism, for what that’s worth. At the very least, it means that we need to think about neoliberalism as a dispositif of biopolitics, and how that intersects with the 1930s version that Foucault’s remarks on state racism address and that Trump seems to channel.
Kotsko thinks that we should grant that Trump isn’t a neoliberal, and think about the ramifications for neoliberalism. All of this is thus necessarily a speculative exercise. Still, I think a couple of points are worth noting, beyond the more general one that if neoliberalism can survive the financial crisis intact, then we should always be skeptical about reports of its death. Here are two reasons I’m not convinced that Trump and Trumpism aren’t neoliberal in a fundamental way.
Foucault famously proposed that biopolitics - the power to foster life, or allow it to die - tended to produce its own outside in the form of state racism: not only might life be allowed to die, but there might be those who must die, literally or metaphorically, so an inside “we” could live. That is, it “is primarily a way of introducing a break into the domain of life that is under power’s control: the break between what must live and what must die” (Society Must Be Defended, 254). Note the subtle elision: there is life that is allowed to die, and then there is also life that must die. Thus, “if you want to live, they must die” (255) becomes the message. In other words, biopolitics produces two forms, almost simultaneously. Foucault is thinking of 1930s fascism, where (for example) the German emphasis on the health of the ethnically-German population was coupled with the extermination of European Jews.
But there’s an analogue, however imprecise, in the Presidential election last week. In it, we saw two versions of biopolitics. On the one hand, Clinton ran on a campaign of building a better life together, with a particular emphasis on fostering the lives of children and families. The Affordable Care Act would be improved. Paid leave for working parents. And so on. Even her negative ads against Trump emphasized the positive biopolitics: our children are watching. What kind of President do you want them to see? On the Trump side, we saw nothing but Herrenvolk biopolitics: Mexicans, Muslims, African-Americans and women were taking over, making America not great. This had to stop. Law and Order. Our country is at its nadir, thanks to an ineffective, losing President who was probably born in deepest, darkest Kenya anyway. He also somehow founded ISIS, which by the way is winning. China is winning. Everyone but America is winning. But if we keep the Mexican rapists out, and all the Muslims, maybe something good can happen. We will be strong. We will win again. In Messianic tones that, as Masha Gessen reminds us (this piece is a must-read), we should take very seriously, he proclaimed that “I alone” can save you. That almost none of that narrative was true became irrelevant, in the same haunted house in which Clinton’s email server somehow became a darker mark against her character than his many business failings, tax evasions, failures to pay subcontractors, etc.
Two versions of biopolitics. In Foucauldian terms, Trump was advocating the return of state racism. At one level, this is an obvious point, given his endless racist rhetoric about Mexicans and Muslims in particular. But liberal commentators, including myself, have tried very hard to explain the Trump victory in other ways. I have decided it can’t be done. The Trump election is fundamentally about the maintenance of White Supremacy, something that women and people of color said a week ago.
One of the important parts in understanding neoliberalism as a particular dispositif of power (or perhaps a mode of biopower – that sort of distinction doesn’t matter here) lies in understanding the various techniques it deploys. After all, there is no “neoliberalism” or “neoliberal power” existing in the abstract; as Foucault repeatedly demonstrates, power can only be fully understood by digging down to the micro-level, to all the little practices and techniques that add up to a particular social regime of power. Attention to these details has been one of my interests for a while (for example, in the case of privacy notices, or the emergence of best practices).
At least since Althusser, we’ve been accustomed to recognize schools as part of the ideological state apparatus, and Foucault’s Discipline and Punish underscored the point. The locus classicus of neoliberalism in K-12 education is of course the rise of standardized testing regimes such as those imposed by No Child Left Behind. Another area of focus has been the rise of semi-privatized charter schools. Here, I want to take note of another, more subtle one: the use of online homework assignments. Recall that one of the central aspects of neoliberalism at work is the erasure of the work/home boundary and the devolution of technological minutiae to employees; the result is what Ian Bogost calls “hyper-employment,” and the necessary parallel rise of what David Graeber calls “bullshit jobs,” a phenomenon brought about by the fact that we don’t actually have 24 hours of useful work a day to do. On the job, workers are subject to nearly unlimited surveillance, and things like employee wellness programs extend that surveillance into the home. It is only to be expected that this surveillant, time-wasting product of the neoliberal thought collective will be visited on our children.
In a new paper, Maximilian Fochler conducted a series of structured interviews with scientists to make an STS point: when we think of capitalism as a system that depends on “accumulation,” there are many different kinds of things that one can accumulate, many of them non-financial. I think Fochler makes an important point, but I also think it should be pushed in a somewhat different, more critical direction.
First, though, the results of the interviews. Fochler interviewed both academic and non-academic scientists in Austria. On the academic side, he looked at those in charge of labs, and the post-docs who do most of the actual bench science. Both are engaged in a race to accumulate. The leaders have to produce peer-reviewed publications in order to get grants, which they in turn need in order to produce more peer-reviewed publications (Fochler’s interview subjects were Austrian, but it should be noted that in this country, many of those scientists have to get grants to cover their salary. No grant, no paycheck). The post-docs are in perhaps the most dire situation: there are a lot more post-docs than there are positions for them, and so they have to engage in a competitive race to accumulate publications as well, in order to continue in their careers (or as Becker would say, adding a polite veneer, “invest in their human capital”), either by extending their current position or gaining another one. Adding to the stress, postdoc positions typically last 2-3 years, which is not enough time to accumulate a significant publication record (I will leave it to readers to draw the connections between this situation and that faced by the humanities precariat).
On the corporate side, we find the CEOs of start-ups trying to generate peer-reviewed publications, positive lab results, and other indicia that their particular research program – and its endpoint product – is worthy of continued venture capital funding, with the goal of (eventually) selling the start-up to a larger pharmaceutical company. Since the scientific process apparently takes about 10 years, and the VC funding cycle is two or three years, this is a continuous worry. The scientists, on the other hand, much to their surprise (and mine, as I read the paper) work in a collaborative, non-competitive environment. This is because successes and failures are attributable to the entire company. Of course, the downside of this is that these scientists don’t accumulate anything they can use to parlay into their next job.
The simple point I would like to add is that, despite all of the accumulation, no one is making any real money. Not the post-docs, especially, though a move into a faculty position adds some salary and a little job security, but also adds to the need to publish. The CEOs and employees of the start-ups aren’t likely to get rich either: 90% of start-ups fail generally; pharmaceuticals don’t do that much better; and one study reported that “97% of drugs in preclinical tests never make it to market, and nor do 95% of the molecules in phase 1 clinical trials and 88% of molecules in phase 2. Not until phase 3 do their prospects get much better: Of the ones that make it that far, 56% are approved” (summative quote from here).
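To see just how long the odds are, the quoted attrition figures can be turned into a quick back-of-the-envelope calculation. This is only an illustrative reading of those numbers (the stage labels and the framing are mine, not the study’s):

```python
# Back-of-the-envelope reading of the quoted attrition figures.
# Each value is the implied share of molecules at that stage that
# eventually reach the market (1 minus the quoted failure rate).
survival_to_market = {
    "preclinical": 1 - 0.97,  # 3% of preclinical molecules reach market
    "phase 1": 1 - 0.95,      # 5%
    "phase 2": 1 - 0.88,      # 12%
    "phase 3": 0.56,          # 56% of phase-3 drugs are approved
}

for stage, p in survival_to_market.items():
    print(f"{stage}: {p:.0%} chance of eventually reaching market")

# Implied stage-to-stage odds: a molecule that survives into phase 2 is
# 0.12 / 0.05 = 2.4x more likely to reach market than one entering phase 1,
# yet still faces an 88% chance of failure.
```

The point of the arithmetic is the one in the text: even deep into the pipeline, the expected payoff for any individual start-up remains small, which is why all the accumulation rarely translates into real money.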
Big Data theorists have, for a while, been warily eyeing the growth of the “Internet of Things” (IoT) - the integration of “smart” technology into ordinary household devices like refrigerators and toasters. New fridges all have warning lights that remind you to change the water filter; IoT fridges will order the new filter for you. “Smart” utility meters are another example: they can monitor your utility usage moment by moment, making adjustments, say, to the HVAC to optimize power (or to prevent brownouts by automatically raising the temperature of everybody’s house a degree or two during peak hours). Such smart meters are obviously key if those with rooftop solar are going to sell their surplus capacity to the power company. They also enable very detailed surveillance of people’s home lives: they apparently know when you’re using power for the dishwasher, the shower, the TV, and so on.
Capital knows opportunity when it arrives; if your dishwasher is using more power than the average dishwasher, expect advertising for a new, energy-efficient model. If you routinely have lights on until very late at night, maybe you need some medicine to help you sleep, delivered to your web browser. Your boss sees opportunity as well: if you routinely disarm the alarm, turn on the lights and open the fridge at 3:30am, maybe you’ve been out clubbing too late to be a good worker, and you need to have your desk cleared by 5:00 today. This inference will be assisted by the fact that clubs now keep networked electronic records - ostensibly for security purposes - of who goes in and out (and who is banned: if you get thrown out of a club, all the other clubs on that network can refuse you entrance). What if your boss buys the data from the club networks and the utility company, and crunches it to measure productivity? Or sells it to the insurance company, where you’re told that your new wellness initiative requires you to allow your devices to report that you come home and stay there by midnight every night, under penalty of punitive premiums? Your auto insurance bill will almost certainly go up too, because you’ll have installed the vehicle tracking devices that will, by then, be necessary to avoid punitive insurance rates.
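To make the worry concrete, here is a deliberately crude sketch of the kind of cross-dataset inference being imagined. Every record, field name, and scoring rule here is hypothetical; the point is only how trivially two unrelated datasets can be joined into a single “risk score”:

```python
# Hypothetical illustration of cross-dataset inference: none of these
# records, fields, or scoring rules come from any real system.
from datetime import time

# Smart-meter log of activity timestamps per person (invented data).
utility_data = {"alice": [time(23, 30)], "bob": [time(3, 30), time(2, 45)]}

# Club-network entry log (invented data).
club_data = {"bob": [time(1, 15), time(23, 50)]}

def late_night_flags(person):
    """Count 'suspicious' late-night events across both datasets."""
    events = utility_data.get(person, []) + club_data.get(person, [])
    # Anything between midnight and 5am counts against you.
    return sum(1 for t in events if t < time(5, 0))

for person in ("alice", "bob"):
    print(person, late_night_flags(person))
# bob accumulates flags from two unrelated datasets; an employer or
# insurer "crunching" the join sees a single score, not a life.
```

The design choice worth noticing is that neither dataset is damning on its own; the surveillance effect comes entirely from the join.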
But all of that is about surveilling the human. In a fascinating new paper, Kevin Haggerty and Daniel Trottier extend the study of surveillance to nature, noting that the practice is both pervasive and growing, on the one hand, and nearly completely ignored, on the other, with the partial exceptions of Latour and Haraway. I suspect that this is a paper destined to have a big impact; Haggerty in particular is a very significant surveillance theorist, and in a 2000 paper, he and Richard Ericson made a very influential push to orient surveillance studies around the Deleuzian notion of an “assemblage,” arguing that the Foucauldian “panopticon” had become dated. In the current paper, Haggerty and Trottier look at several ways that we now surveil nature that they expect to grow exponentially with developing technologies. None of them are exactly new, but things like RFID tags will make them a lot cheaper, easier, and more commonplace: the representation of ever-more-remote aspects of nature, often turning it into spectacle; using animals as agents (for example, as the Germans did during WWI, attaching cameras to homing pigeons); the increased use of biosentinels (where we rely on an animal’s response to the environment to infer information about that environment. The canary in the coal mine or the drug-sniffing dog are the textbook examples); and taking surveillance inspiration from nature (looking at insect eyes to develop cameras that can see a full 360 degrees, for example). They then suggest three implications for research into surveillance: (1) there are non-technological aspects of surveillance that need highlighting and study; (2) not all surveillance is of humans (contrary to what most of the literature talks about); and (3) we need to look carefully at inspirations for surveillance. They close by highlighting that the human/nature boundary has never been a particularly bright one, and it’s likely to get less so as we move on.
People in the UK today are voting on whether to leave the EU, in what has universally become known as the “Brexit.” Current polling shows the referendum will be very, very close, and the political situation is extremely volatile. Over the weekend, a liberal, pro-Europe MP was brutally murdered by a member (or at least supporter) of a far right party who gave his name as “Death to Traitors” in his first court appearance. Ironically, the murder may have hurt the exit campaign. On the other hand, the BBC is now running a story that if the Brexit succeeds, it may prompt London – which will almost certainly vote to stay – to demand its own exit from the UK; Northern Ireland and Scotland might follow suit. I haven’t seen anyone say that further devolution is likely, but it would be on the table for discussion. In the meantime, British far right parties like UKIP have supported the exit, claiming that there is too much immigration and too many regulations emanating from Brussels. It’s an occasion for right-wing nationalism to gain political power and prominence. In other words, Brexit is the UK’s Donald Trump, with two primary differences: the Brexit vote looks like it’s going to be close, and the new mayor of London really is Muslim.
I’ve lived in England on two separate occasions – once in London on a semester-abroad as an undergraduate, in Fall 1992, and for a year in graduate school (1997-98), reading in the Bodleian library in Oxford. Fall 1992, of course, was when the Maastricht treaty establishing the EU and setting the groundwork for the common currency was debated and ratified. The UK joined, though it stipulated that it would not join the Euro, and demanded a number of other specific concessions as conditions for membership. One of the main anti-Europe arguments was that there were too many regulations emanating from Brussels, and the no-campaign selected British Beef as a good example of the sort of industry that did not require foreign regulation. Not long after that, Bovine Spongiform Encephalopathy, aka “Mad Cow Disease,” went from a minor to a major news item. BSE (in its human variant, vCJD), which one contracts mainly by eating infected meat, is invariably fatal, has a very long incubation period of several years, is essentially undetectable prior to symptoms (I will never be able to donate blood because I lived in England when I did), and is virtually impossible to destroy – it withstands temperatures of 600 degrees. It also turns out to have started in England, where the British Beef industry had been feeding rendered carcasses to cattle as a protein supplement. The EU banned such feeding practices in 1994, having previously banned the import of British beef into other member states.
In critical work on neoliberalism, there are two or three main schools of thought. One approaches the subject as a matter of political economy. David Harvey, whose analysis is explicitly Marxian, is the most well-known figure in this approach; another prominent author in that camp is Philip Mirowski. The other major school is broadly Foucauldian, taking its cue from Foucault’s Birth of Biopolitics lectures. A third group, represented by autonomist Marxists like Paolo Virno, Franco Berardi, and of course Michael Hardt and Antonio Negri, attempts a synthesis (I won’t have much to say about them here). All sides have methodological critiques of the others; here I just want to note that the Foucauldians generally tend to be concerned with a topic that seems neglected in political economy: granted that neoliberalism expects us all to behave as homo economicus, defined as a risk-calculating, utility-maximizing investor in himself (gendered pronoun deliberate), how does neoliberalism get people to actually do this? After all, it is not a natural human set of behaviors. More specifically, not just how does neoliberalism get people to do this, but how does it get them to do so enthusiastically, treating the definition of the human as homo economicus as the true, correct and only way to be human? In other words, Foucauldians insist that critiques of neoliberalism need an account of subjectification.
Wendy Brown’s new(ish) Undoing the Demos: Neoliberalism’s Stealth Revolution (Zone Books, 2015) makes a substantial contribution to the Foucauldian camp by focusing on “Foucault’s innovation in conceiving neoliberalism as a political rationality” (120). The political rationality is “governance” as “the decentering of the state and other centers of rule and tracks in its place the specifically modern dispersal of socially organizing powers throughout the order and of powers ‘conducting’ and not only constraining or overtly regulating the subject” (125).
In an interesting new piece, Jim Thatcher, David O'Sullivan and Dillon Mahmoudi propose that big data functions in the context of capital as “accumulation by dispossession,” which is David Harvey’s term for what Marx called “primitive accumulation,” the process by which capital adds to its wealth by taking goods from others and adding them to the system. Marx: “so-called primitive accumulation, therefore, is nothing else than the historical process of divorcing the producer from the means of production” (Capital I, 875 [I am using the Penguin edition]). Perhaps the best example of this is the one detailed by Marx: the enclosure movement in England involved the privatization of agricultural common spaces, such that it was no longer possible to graze sheep on lands held by the community in common; the result was that a lot of peasants, who ended up with no or inadequate amounts of private property, lost everything of value they had and became “free labor,” forced to sell themselves to the emerging factories. As Marx sums up the process:
“The spoliation of the Church’s property, the fraudulent alienation of the state domains, the theft of the common lands, the usurpation of feudal and clan property and its transformation into modern private property under circumstances of ruthless terrorism, all these things were just so many idyllic methods of primitive accumulation. They conquered the field for capitalist agriculture, incorporated the soil into capital, and created for the urban industries the necessary supplies of free and rightless proletarians” (895).
I am very sympathetic to the thesis, and there is something profoundly right about it, insofar as Thatcher et al. rely on the separation of the valued information from the person who produces it. But I also think it needs tweaking, for reasons that emerge in the paper itself: the data trail that a person leaves is generally itself without value, and only becomes valuable when aggregated with a lot of other data. In other words, as I tried to argue a while ago, data is itself without value; it is only when it becomes information that it acquires value.
It seems to me that the accumulation process of big data operates at a much earlier stage: the commodification of data into information itself, which involves both the elevation of exchange value over use value, and the conversion of qualitatively different items of data into commensurable units of information. These are, to an extent, equivalent processes, as Marx notes: “as use values, commodities differ above all in quality, while as exchange values, they can only differ in quantity, and therefore do not contain an atom of use-value” (128). Still, I think it’s worth teasing the two threads apart here.
There are a couple of emerging narratives about Donald Trump. One is that he is the unreconstructed id of middle-aged, white American men who were left behind by the economy. They aren’t quite sure who they’re mad at, but the list probably includes everybody who doesn’t look like them, women in general, and all those libruls who insist on the “political correctness” of being civil whilst in civil society. It also includes the Republican establishment, which Trump supporters have finally realized not only has virtually nothing in common with them, but also does not care about their actual interests. So the base devolves to all it has left: a generalized rage. The other narrative says that Trump is a European-style nationalist: you can have your social services, but you can’t have people who don’t belong to your tribe running around and using those services. These narratives have in common the idea that Trump is appealing because he is racist.
One should never underestimate the explanatory power of racism in American politics, but there’s a third narrative about Trump that belongs in the picture, because I think it adds some explanatory value that the other two don’t: Trump is also the perfect embodiment of contemporary capitalism, by which I mean brand capitalism. I want to take a little time to explain this via a detour into Saussure, but if you don’t want to go there, here’s the gist of it: Trump doesn’t have policy positions because he’s not selling any product other than himself, and there isn’t anything to him other than his being the embodiment of Trump™.
I’m currently teaching an ethics and public policy course, and for this week we read Kaplow and Shavell’s Fairness vs. Welfare (actually, we read the first 70 pages of the NBER paper that became the much bigger book). Their central claim is that to pick fairness as the dominant principle in policy-making is by definition to make some people worse-off than they were, and that there are numerous cases where the priority on fairness would make literally everyone worse off. An important subtext is that they don’t think “fairness” means anything, except as a poor, error-inducing proxy for “welfare”; the argument is like reading chapter 5 of J.S. Mill’s Utilitarianism on justice.
The argument is a preference-based one, and it interprets “welfare” broadly – there’s no correcting of preferences here (or apparent awareness of problems with adaptive preferences). They also allow for a “taste for fairness” – i.e., the preference many people feel for a situation they believe is fair. More on that in a minute. It’s a little unclear who their target is, as well: it sounds like Rawls, but of course Rawls is quite clear that his version of rationality is lifted directly from economics. Kant is the only person I can think of who spends a lot of time separating preferences (heteronomous desires) from what reason demands, so he’s as good a target as any. In any case, I want to focus briefly on the claim that fairness-based policies can make everyone worse off.
Since we’re in the interregnum between “sign up for health insurance” time and “eat yourself into a stupor” time, it’s appropriate to notice something about pastoral power and our healthcare system. First, we’ll go back in time. Foucault proposes that pastoral power under medieval Christianity:
“Gave rise to an art of conducting, directing, leading, guiding, taking in hand, and manipulating men, an art of monitoring them and urging them on step by step, an art with the function of taking charge of men collectively and individually throughout their life and every moment of their existence.” (Security, Territory, Population (=STP), 165)
He then urges that this is not the same as political power, the power used to educate children, nor is it persuasion (“in short, the pastorate does not coincide with politics, pedagogy, or rhetoric” (164)). The pastorate does not disappear with the rise of modern power forms, as he emphasizes in a couple of places (STP 148, 150). Indeed, he makes a much stronger claim: “I think this is where we should look for the origin, the point of formation, of crystallization, the embryonic point of the governmentality whose entry into politics … marks the threshold of the modern state” (165).
As Melinda Cooper notes (recall here), one of the reasons Gary Becker – as opposed to other neoliberal theorists – was interesting to Foucault was his emphasis on microeconomics, particularly the quotidian institutions through which micropower functions, such as the family. At the same time, Becker’s human capital theory has become increasingly important in neoliberal constructions of human nature. In a late essay, Becker applies himself to health economics. The result, I think, offers a very clear demonstration of neoliberal thinking and how it works nearly inexorably to distract from social problems, generally by constructing them as individual problems and ignoring the social determinants of an individual’s situation.
In her contribution to the recent Vatter/Lemm-edited collection of essays on biopolitics, Melinda Cooper argues that Foucault’s work on neoliberalism needs to be read in the context of his interest in the Iranian revolution. If she’s right, this stands current complaints about Foucault’s engagement with neoliberalism on their head. The standard complaint about the work on biopolitics is that Foucault ends up supporting (deliberately or otherwise) neoliberalism. The merits of that claim have been debated ad nauseam, particularly in light of the Zamora book last year, and I have no interest in revisiting them here (plus, Vatter’s paper in the same book does a great job on the topic, and I think he ups the bar considerably for future discussions). Cooper’s paper is of interest because she makes what is essentially the opposite claim: Foucault was so disturbed by the general diffusion of the oikos into the polis that defines neoliberalism (and really classical liberalism, too) that he found the Iranian revolution interesting precisely because it focused on restoring some sort of classic oikonomia. There are thus two main steps to the argument in its most condensed form: (a) The Iranian revolution was premised on getting women out of the public sphere after Shah Pahlavi introduced a number of reforms that greatly expanded their integration into the full economy; and (b) Foucault thought that it would be a good thing if there was some sort of restoration of the law of the household as a bulwark against neoliberalism.
In the current issue of Philosophy and Rhetoric, Kelly Happe has an interesting paper interpreting Occupy Wall Street (or at least the Zuccotti Park component) as an example of cynical parrhesia. In a time when all expression is always already co-opted by neoliberal capital as a source of surplus value (this point has been canvassed extensively by the autonomist Marxists as “complete subsumption,” and I’m going to take it for granted here. I summarize it here in my discussion of Hardt and Negri’s Empire), it becomes hard to know what kind of speech would count as protest. Anyone who has seen the branding of Che Guevara T-shirts has some idea what the problem is. It’s also one that has been very difficult to address; in Empire, for example, which lays out the problem quite clearly, we are offered the somewhat discouraging example of Coetzee’s Michael K, a character who drops out and nearly starves to death in caves.
Happe’s move is to suggest that Occupy succeeds in avoiding cooption by way of its rejection of politically expressive speech. As she puts it, “what is striking is the time and space devoted to the material culture and everyday life of public, communal living. Indeed, in the various accounts of the Zuccotti moment of Occupy, the radical imagination is inseparable from the otherwise unremarkable practices of day-to-day living in an encampment” (214). That is, it is in the rejection of symbolic and explicitly “political” speech that Occupy evades neoliberal cooption. Such speech, she proposes, is a good example of the sort of ethical parrhesia that Foucault recounts in the ancient Cynics. For the Cynics, it is precisely the extent to which their speech is unintelligible to politics that makes it radical, suspends its subsumption into the political apparatus, and presents the contingency of a new way of life. Happe writes:
It must be summer: Facebook has released a controversial study of its users. Last year, it was the demonstration that the emotional contagion effect could spread across social networks without direct, face-to-face contact (the controversy wasn’t in the result; it was in the fact that FB did the study by manipulating its users’ Newsfeeds to present more happy content). This time, Facebook’s research wing published a paper in Science purporting to demonstrate that Facebook wasn’t responsible for whatever online echo-chamber effect its users might demonstrate. Or, at least, if the site did contribute to an echo-chamber, it wasn’t the main contributor. From the FB blog discussing the paper:
We’ve all heard that regulations are bad, because they interfere with businesses doing what they want (rules about dumping toxic chemicals get in the way of dumping toxic chemicals. Laws against murder hamper the business model of assassins. And so on.). New North Carolina Senator Thom Tillis made the media rounds this week for some odd remarks he made on the topic. When asked to name a regulation he thought was bad, he came up with… the rule that restaurant employees wash their hands after visiting the toilet. He then proposed that it would be better to have restaurants state whether or not employees have to wash their hands, and then “let the market” take care of it.
There are two obvious problems here, both of which have been pointed out a lot. One is that there’s a public health issue. The other is that he hasn’t actually reduced regulation: he’s just replaced a public health rule with a rule about signage. I actually think the second point is interesting, well beyond the “gotcha!” treatment it got, because it perfectly illustrates something about neoliberalism: it doesn’t think regulations that create markets are regulations (or, if you prefer, regulating to create markets is good, other regulations are bad. This is the same mindset that concludes that the hyper-regulated Chicago futures markets are unregulated). The cleanliness of restaurant operations is not something consumers can know much about on their own, since they don’t do things like follow employees to the restroom. In this sense restaurant sanitation is a credence good (you have to believe the restaurant; you can’t inspect the product before you buy it). Since dirty food preparation can make people very sick, rational consumers should be willing to pay more for the knowledge that their food is safely prepared. But since they won’t be in any position to know about food safety, except (maybe) for places they’ve eaten before, we can expect market failure until some mechanism arrives to help consumers make their decisions.
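The market-failure logic here is essentially Akerlof’s “market for lemons”: when buyers must pay an average price because they cannot observe quality, the sellers who invest in quality are priced out first. A minimal simulation sketches the dynamic; all the numbers (costs, values, exit rate) are hypothetical illustrations, not empirical claims about restaurants.

```python
# Sketch of adverse selection with unobservable quality (Akerlof-style).
# All parameter values are hypothetical illustrations.

def equilibrium_share_of_safe_sellers(
    safe_cost=10.0,      # cost of operating a clean kitchen
    unsafe_cost=6.0,     # cost of cutting sanitation corners
    safe_value=14.0,     # what a safely prepared meal is worth to a diner
    unsafe_value=4.0,    # expected value of a meal that may sicken you
    initial_safe_share=0.9,
    rounds=20,
):
    """Iterate: since diners can't tell safe from unsafe restaurants,
    they pay the *average* expected value; sellers whose costs exceed
    that price exit. Returns (share of safe sellers, prevailing price)."""
    share = initial_safe_share
    price = share * safe_value + (1 - share) * unsafe_value
    for _ in range(rounds):
        price = share * safe_value + (1 - share) * unsafe_value
        safe_stays = price >= safe_cost
        unsafe_stays = price >= unsafe_cost
        if safe_stays and unsafe_stays:
            return share, price          # both types survive (pooling)
        if not unsafe_stays:
            return share, price          # market has collapsed entirely
        share = max(0.0, share - 0.1)    # clean restaurants exit first
    return share, price

# With enough safe sellers, the pooled price sustains everyone;
# start lower and the safe sellers are driven out round by round.
share, price = equilibrium_share_of_safe_sellers(initial_safe_share=0.5)
```

Run with `initial_safe_share=0.5` and the pooled price starts below the safe seller’s cost, so the share of safe sellers ratchets down until the market unravels: exactly the failure the paragraph predicts absent some trust mechanism (like a mandatory hand-washing rule).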
Daniel Zamora’s interview in Jacobin (following the publication of a book he edited), in which he claims that Foucault ended up de facto endorsing neoliberalism, has generated a lot of renewed discussion about Foucault’s late work. Over at An und für sich, Mark William Westmoreland has organized a series of posts responding to Zamora. I’m one of the contributors; the others are Verena Erlenbusch (Memphis), Thomas Nail (Denver), and Johanna Oksala (Helsinki). My contribution is cross-posted below, but you really should start with the interview and then read Erlenbusch’s post – she lays out the context of the controversy, and discusses the book (which came out fairly recently, and which hasn’t been translated yet) in considerable detail.
I’ll update with links to Nail’s and Oksala’s contributions when they’re up.
At the end of my time in high school, I worked part-time bagging groceries. There was some modest union influence on the job, and its scheduling was pretty predictable: the longer you’d been there, the better schedule you’d get. Your first few weeks, you knew you’d be working late into the evening, especially on Friday and Saturday. After a while, the late shifts would taper off as somebody newer than you would get slotted into them. You could depend on a pretty consistent schedule week-in and week-out. It was a service sector job with factory-like scheduling.
I mention this because one of the more nefarious uses of big data got emphasized last week in the context of discussions of Black Friday’s steady march backward into Thanksgiving Day. Last week’s news highlighted one way that data analytics can be used to introduce further precarity into the lives of low-wage workers. The transfer of risk and precarity to employees more generally is of course something neoliberalism does pretty well, but the process is even more intense for low-wage workers due to the introduction of scheduling software that produces unsteady, uneven, just-in-time scheduling, so that employers don’t have to pay for employees who aren’t absolutely necessary. Since a disproportionate number of those affected by these programs have children to care for, and since many of them are minorities, it’s also a case of disparate impact on poor, minority women. As stores open earlier and earlier for Black Friday, more and more workers – again, mostly women – are being put into the position of not knowing whether they’ll have Thanksgiving off until a day or two before. It’s a good example of the general problem.
Judge Richard Posner’s well-known application of law and economics to privacy yields results that appear, well, ideological. First, he considers what individuals do with informational privacy. What is an interest in privacy of information, he asks? Well, it’s an interest in enforcing an information asymmetry in markets. Information asymmetry is presumptively bad because it causes distortion in the price mechanism; the price mechanism is in turn the reason that markets can claim to be both epistemically and normatively justified. They are epistemically justified because market price signals the social value of something much better than any sort of centralized planning process would do, and it does so without introducing all the inefficiencies of an enormous state apparatus. The price mechanism is normatively justified because it presents no special intrusion into the lives of individuals: we are all free to do what we want and signal (with our willingness to pay) what is important to us. In the case of privacy, for example, if I present myself or some good I am selling to you, “privacy” basically means that I’m trying to withhold relevant information about that good from you. If I apply for a job and hide a criminal record, then I’m trying to get you to overvalue me as a potential employee by keeping you ignorant of my past. Accordingly, the law should not protect such refusals to disclose, and in some cases ought to compel disclosure. Thus the first part of Posner’s article.
For they know they are not animals. And at the very moment when they discover their humanity, they begin to sharpen their weapons to secure its victory.
--Frantz Fanon, The Wretched of the Earth

America has been and remains an apartheid state. That sad but increasingly undeniable fact was made apparent last night in Ferguson, Missouri to a group of peaceful protesters amidst tanks, deafening LRADs, a haze of tear gas and a firestorm of rubber (and real) bullets. The other tragic fact made apparent in Ferguson last night is that America is only ever a hair's-breadth away from a police state... if we understand by "police" not a regulated body of law-enforcement peacekeepers empowered to serve and protect the citizenry, but rather a heavily-armed, extra-constitutional, militarized cadre of domestic soldiers who provoke and terrorize with impunity. Much of the time, we are able to forget or ignore these unfortunate truths about contemporary America-- and by "we" I mean our elected officials, our bureaucrats and financiers, and a lot of self-delusionally "post-racial," though really white, people-- but the mean truth of gross inequality, both de facto and de jure, remains ever-present in spite of our disavowals, simmering steadily just below the allegedly free and fair democratic veneer of our polis.
Greg Howard, journalist and parrhesiastes, said it about as plainly as it can be said this past Tuesday in his article for Deadspin: America is not for black people. The truth of "American apartheid" should make us all ashamed, saddened, angry, deeply troubled as moral and political agents. And, what is more, it should frighten us all.
There's been a good bit of conversation recently about the merits and demerits of "public philosophy" and, as someone who considers herself committed to public philosophy (whatever that is), I'm always happy to stumble across a piece of remarkably insightful philosophical work in the public realm. Case in point: Robin James (Philosophy, UNC-Charlotte) posted a really fascinating and original short essay on the Cyborgology blog a couple of days ago entitled "An attempt at a precise & substantive definition of 'neoliberalism,' plus some thoughts on algorithms." There, she primarily aims to distinguish the sense in which we use the term "neoliberalism" to indicate an ideology from its use as a historical indicator, and she does so by employing some extremely helpful insights about algorithms, data analysis, the mathematics of music, harmony, and how we understand consonance and dissonance. I'm deeply sympathetic with James' underlying motivation for this piece, namely, her concern that our use of the term "neoliberalism" (or its corresponding descriptor "neoliberal") has become so ubiquitous that it is in danger of being evacuated of "precise and substantive" meaning altogether. I'm sympathetic, first, as a philosopher, for whom precise and substantive definitions are as essential as hammers and nails are to a carpenter. But secondly, and perhaps more importantly, I'm sympathetic with James' effort because, as Jacques Derrida once said, "the more confused the concept, the more it lends itself to opportunistic appropriation." Especially in the last decade or so, "neoliberalism" is perhaps the sine qua non term that has been, by both the Left and the Right, opportunistically appropriated.
James' definition of neoliberalism's ideological position ("everything in the universe works like a deregulated, competitive, financialized, capitalist market") ends up relying heavily on her distinction of neoliberalism as a particular type of ideology, i.e., one "in which epistemology and ontology collapse into one another, an epistemontology." In sum, James conjectures that neoliberal epistemontology purports to know what it knows (objects, beings, states of affairs, persons, the world) vis-a-vis "the general field of reference of economic analysis."
In the most anticipated Copyright decision this term, the Supreme Court today ruled, 6-3 (opinion by Breyer, dissent by Scalia), that Aereo’s service for watching broadcast TV online violates the Copyright Act. Briefly: Aereo operates a large number of tiny antennas. Subscribers pick a program they want to watch, and get exclusive access to an antenna. That antenna then receives the broadcast in question, sets it up on a private folder for that user in the cloud, and then streams it to him/her over the Internet. The broadcast networks sued, claiming that Aereo’s actions constituted an infringing public performance of their content.
There is and will be endless discussion about this case, because it may very well have enormous implications for cloud computing (the opinion tries very hard to limit itself: it includes an entire section about why it doesn’t apply to cloud computing, and the argument hinges on an analogy to cable TV and specific statutory language adopted in 1976 to deal with cable TV). But there’s something else more interesting, I think, under the radar. I sort of saw it in the opinion, but it came into sharp focus in Scalia’s dissent, so I’ll start there.
I am increasingly convinced that any Foucauldian effort to understand neoliberalism needs to focus on it as a strategy of subjectification (more specifically, it’s the strategy of subjectification specific to contemporary biopower, and it says that the truth of the human being is as homo economicus). One reason I think this is that one finds repeated examples of where policy or governmental prescriptions specific to neoliberalism conflict with neoliberalism as a strategy of subjectification; in such cases, the strategy of subjectification generally seems to win. Let me explain with an example which will hopefully serve as proof of concept of the admittedly very big thesis I’ve just announced.
Thomas Frank has a nice analysis up on Salon.com on college tuition and debts. In it, he points out that the crisis is of long duration, and people have been asking for more than a generation when the “college bubble” will burst. Along the way, he shows that a number of standard explanations (overpaid professors, insatiable student demand for gymnasiums, etc.) don’t make any sense, at least not on their own. His concluding point, though, seems vitally important. Here’s a good-sized chunk of text (with significant ellipses); I’ll follow with a couple of additional thoughts:
I’m currently teaching a summer gen-ed class on the topic of “Ethical Issues: Technology,” and when I teach this class, I always make a point to discuss Facebook early on. Specifically, this time we’re talking about the “is Facebook making us lonely” question, using a piece from the Atlantic and a critique of it that appeared a few days later on Slate. But I always try to include time for talking about Facebook in general. And my students say more or less what the research says: almost all of them are on it, and they use it mainly to keep up with and enhance offline social networks. They gain considerable social capital from their use of it. But they also don’t like it all that much. They resent the constantly changing settings, and they’re getting fairly cynical about FB as a business. They don’t much like having to untag themselves from photos all the time. They tend to think FB either takes them for granted, or even takes advantage of them. More than one said they’d leave if they could figure out how. And they do worry about privacy. All of that is anecdotal, of course, but it’s been a pretty consistent response for a few years now.
A great deal of the value of a company like FB is its network: as with telephones, the more people who use the service, the more valuable yours is. This is part of why students don’t have much of an exit option, as leaving FB would basically give them the SNS equivalent of a one-phone system. FB then rubs it in: there’s no way to export all of your material from it to another system, so a decision to leave is a decision to leave behind however many hours of socializing and networking. This sort of state of affairs led Tiziana Terranova to note – before FB – that websites extract a lot of surplus value from the users who produce them simply because of this network effect.
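The telephone analogy can be made roughly quantitative. Under Metcalfe’s well-known heuristic, a network’s total value scales with the number of possible pairwise connections, which grows quadratically with users; a back-of-the-envelope sketch (illustrative arithmetic only, not any actual valuation model):

```python
# Metcalfe's-law sketch: an n-user network has n*(n-1)/2 possible
# pairwise links, so total connectivity grows ~quadratically with n,
# while a single user who exits abandons all n-1 of their own links.
# Numbers are purely illustrative.

def pairwise_connections(n: int) -> int:
    """Possible user-to-user links in an n-user network."""
    return n * (n - 1) // 2

def exit_cost(n: int) -> int:
    """Links a single user abandons by leaving an n-user network."""
    return n - 1

# Doubling the user base roughly quadruples total connectivity:
small = pairwise_connections(1_000)   # 499,500 links
large = pairwise_connections(2_000)   # 1,999,000 links
```

The asymmetry is the point: the network’s value compounds for the platform as it grows, while the individual user’s exit cost also grows with every connection they’ve made, which is one way to cash out Terranova’s claim about surplus value extracted from network effects.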
Biopolitics – even when understood in its narrow sense of life itself being a political issue – comes in at least two different strands. The first, which historically precedes the second, was concerned with what Foucault called a “politics of public health.” In so doing, it takes on standard biopolitical issues of population optimization, public health and so forth as mass issues. The resulting policies included mass vaccination campaigns, the installation of proper municipal sewage systems, and so forth. These programs resulted in demonstrable and substantial gains in typical measures of public health, such as life expectancy.
In a famous essay, Deleuze suggests that our society has moved beyond Foucauldian disciplinary power to a more fluid “control society,” where the various sites of disciplinary control merge into a modulated network of interlocking sites of power, the primary technique of which is access control. As Deleuze notes, the move is “dispersive,” and “the factory has given way to the corporation.” Hence, “the family, the school, the army, the factory are no longer distinct analogical spaces that converge towards an owner – state or private power – but coded figures – deformable and transformable – of a single corporation that now has only stockholders.” (6) The most vivid image of such a society he attributes to Guattari, who:
“has imagined a city where one would be able to leave one’s apartment, one’s street, one’s neighborhood, thanks to one’s (dividual) electronic card that raises a given barrier; but the card could just as easily be rejected on a given day or between certain hours; what counts is not the barrier but the computer that tracks each person’s position – licit or illicit – and effects a universal modulation” (7)
This thesis has been most widely applied to surveillance and security and is easily evidenced by things like government “no-fly” lists and the number of passwords one has to generate online. That said, I would like to suggest here that, at least in one respect, we’re moving past the control society. Or, perhaps, we’re seeing the truth of the control society in an unexpected way. One feature of the move from the dungeon to the panopticon is regulatory efficiency: it costs a lot less to get people to police themselves than to coerce them with brute force. The move to control is similarly efficient in that multiple, closed panoptic systems are much less efficient than a more modular arrangement where panoptic technologies are (as Foucault said they would be) completely diffused into society and work together, rather than separately.
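Guattari’s dividual-card scenario, quoted above, is recognizably a description of centralized, continuously re-evaluated access control. A toy sketch (every rule, card ID, and zone name here is invented for illustration) makes his point concrete: the barrier itself is dumb; “what counts is not the barrier but the computer,” i.e., the mutable central policy and the log it keeps:

```python
# Toy sketch of Guattari's "dividual card": the physical barrier is
# dumb; the decision lives in a central, mutable policy that can vary
# by person, zone, and hour -- and every attempt is tracked.
# All rules and identifiers are invented for illustration.

from datetime import datetime

# Central policy: (card_id, zone) -> hours during which access is allowed.
POLICY = {
    ("card-42", "apartment"): range(0, 24),    # always admitted
    ("card-42", "street"): range(6, 22),       # daytime only
    ("card-42", "neighborhood"): range(9, 17), # business hours only
}

ACCESS_LOG = []  # the tracking is the point: every attempt is recorded

def raise_barrier(card_id: str, zone: str, when: datetime) -> bool:
    """Decide (and log) whether the barrier opens for this card, here, now."""
    allowed = when.hour in POLICY.get((card_id, zone), range(0))
    ACCESS_LOG.append((card_id, zone, when.isoformat(), allowed))
    return allowed

# The same card that opens the apartment door at 3 a.m. is refused
# on the street at 11 p.m. -- "universal modulation," not a fixed wall.
raise_barrier("card-42", "apartment", datetime(2014, 6, 1, 3))
raise_barrier("card-42", "street", datetime(2014, 6, 1, 23))
```

Note that “modulation” falls out of the design: nothing about the card or the barrier changes when the policy dictionary is edited, which is exactly the efficiency gain over separate, closed panoptic enclosures discussed above.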
In comment #9 at this post, Susan makes a kind of canonical case I've heard from lots of assessment people.
First, I should say that I agree with 95% of the intended answers to Susan's rhetorical questions. We should be much clearer about what we want our students to get out of their degrees, and we should put in the hard work of assessing the extent that we are successful.
But "assessment" in contemporary American bureaucracies almost always accomplishes exactly the opposite of the laudable goals that Susan and I share. And there are deep systematic reasons for this. Below, I will first explain three fallacies and then explain why everyone involved in assessment faces enormous pressure to go along with these fallacies. Along the way I hope to make it clear how this results in "assessment" making things demonstrably worse.**
Why do things like "professional development," "continuing education," "team-building," and (yes, this too) "assessment" always have to tend towards infantilizing the poor people subjected to them?
It's one thing to bureaucratically humiliate people by making them waste huge gobs of time. But this business of making them engage in ritualistic idiotic performances (which always involve to some extent enthusiastically presupposing that everyone is not in fact wasting time) is a much higher echelon of evil. How can the adult human beings in this video (courtesy Washington Post) have any self-respect?*
Mark my words. First they came for the high school teachers. . .**
[*To be fair, everyone involved in making the video and smuggling it to the Washington Post gained back their self-respect fourfold.
**If I were doing my normal thing and putting a rock video in the upper right hand corner, it would probably have been Jane's Addiction's "Idiots Rule." But I realized that it didn't scan, because even if team-builder/professional development/assessment types are self-deluded enough to believe in the rightness of what they make the rest of us do, it takes quite a bit of intelligence to get people so complicit in their own immiseration.]