I’m currently teaching an ethics and public policy course, and for this week we read Kaplow and Shavell’s Fairness vs. Welfare (actually, we read the first 70 pages of the NBER paper that became the much bigger book). Their central claim is that to pick fairness as the dominant principle in policy-making is by definition to make some people worse off than they would otherwise be, and that there are numerous cases where the priority on fairness would make literally everyone worse off. An important subtext is that they don’t think “fairness” means anything, except as a poor, error-inducing proxy for “welfare;” the argument is like reading chapter 5 of J.S. Mill’s Utilitarianism on justice.
The argument is a preference-based one, and it interprets “welfare” broadly – there’s no correcting of preferences here (or apparent awareness of problems with adaptive preferences). They also allow for a “taste for fairness” – i.e., the preference many people feel for a situation they believe is fair. More on that in a minute. It’s a little unclear who their target is, as well: it sounds like Rawls, but of course Rawls is quite clear that his version of rationality is lifted directly from economics. Kant is the only person I can think of who spends a lot of time separating preferences (heteronomous desires) from what reason demands, so he’s as good a target as any. In any case, I want to focus briefly on the claim that fairness-based policies can make everyone worse off.
This thesis is best understood with an example, and later chapters in the book apply their theory to various areas of law and policy. In class, we talked about criminal policy, because that struck me as most accessible. The gist of the argument is that fairness shows up as retributive justice, where if I steal $100, then I somehow need to suffer $100 worth of retribution. The fair penalty for theft, that is, is to cause the thief an amount of disutility equal to the utility he gains. Now, we know a few other things, too (I’m not including their numbers, but they include some to make the point). Running a criminal justice system and conducting trials costs money, and so everyone has to contribute to that through taxation. Incarceration (or other forms of penalty) is expensive too, and we have to pay for that. Finally, incarceration imposes substantial costs on those incarcerated. Assume for the moment that there is a 25% chance of getting caught in your plan to take $100. We then arrive at the following options:
- The fair penalty is $100. To implement this rule, a fairness-based policy costs quite a bit, because a lot of crime will continue, and large numbers of people will be in jail.
- A welfare-based policy would correctly set a penalty of $400, because that’s the risk calculation for the criminal: the expected penalty is 25% of $400 = $100, which cancels the expected utility gain from the crime. In such a case, nobody would commit the crime, and the net social cost is the cost of passing the law. And literally everybody would be better off: the deterrent works, so there’s no crime. There are no trials and no incarceration, and so the cost of the policy is less than the cost of the fairness policy. QED.
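The deterrence arithmetic above can be sketched in a few lines. The numbers follow the example in the text; the functions themselves are my illustration, not Kaplow and Shavell’s, and assume a risk-neutral thief.

```python
# Toy model of the deterrence calculation, with illustrative numbers.

def expected_penalty(p_caught: float, penalty: float) -> float:
    """Expected cost of the crime to a risk-neutral actor."""
    return p_caught * penalty

def crime_pays(gain: float, p_caught: float, penalty: float) -> bool:
    """A risk-neutral thief offends only if the expected gain
    exceeds the expected penalty."""
    return gain > expected_penalty(p_caught, penalty)

GAIN = 100.0      # utility the thief gains from the theft
P_CAUGHT = 0.25   # probability of being caught

# Fairness-based rule: penalty equals the harm ($100).
# Expected penalty = 0.25 * 100 = $25, so the crime still pays.
print(crime_pays(GAIN, P_CAUGHT, 100.0))   # True: theft continues

# Welfare-based rule: penalty scaled up by the odds of capture ($400).
# Expected penalty = 0.25 * 400 = $100, so the crime no longer pays.
print(crime_pays(GAIN, P_CAUGHT, 400.0))   # False: fully deterred
```

The point of the sketch is just that the fair penalty leaves the expected value of theft positive, while the welfare-based penalty zeroes it out.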
There’s a lot of simplifying assumptions built into this, but it was becoming clear to me over the course of the class that most of them apply to both versions, and so the abstract model does seem to say that the single-minded pursuit of fairness can make everybody worse off.
But back to the taste for fairness. People like fairness as a rule. In fact, you can show this with economic game theory. In the Ultimatum Game, player 1 is given $100 and told to allocate some amount to player 2. If player 2 accepts, the deal goes through. If player 2 rejects the offer, then nobody gets anything. Economically speaking, player 1 should offer $.01, and player 2 should accept. But that isn’t what happens. Over and over again, player 2 is very likely to walk if the offer is less than about $30. In other words, we are pretty strongly wired to a sense of fairness. The research isn’t cross-cultural, I don’t think, but as one of my students pointed out, it’s culturally adaptable: in a gift economy, an initial offer that’s too high would be perceived as unfair, because it would put player 2 in too much debt. So let’s accept for now that people have an economically irrational taste for fairness.
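The Ultimatum Game result can be made concrete with a toy payoff model. The ~$30 rejection threshold follows the experimental pattern described above; the threshold-based responder is a hypothetical simplification of how a taste for fairness plays out, not a model from the literature.

```python
# Toy Ultimatum Game: a responder with a taste for fairness
# rejects offers below a threshold, so both players get nothing.

POT = 100.0  # amount given to player 1 to divide

def responder_accepts(offer: float, fairness_threshold: float = 30.0) -> bool:
    """A purely self-interested responder would accept any positive offer;
    one with a taste for fairness rejects offers below the threshold."""
    return offer >= fairness_threshold

def payoffs(offer: float, fairness_threshold: float = 30.0):
    """Returns (player 1, player 2) payoffs; rejection zeroes out both."""
    if responder_accepts(offer, fairness_threshold):
        return (POT - offer, offer)
    return (0.0, 0.0)

print(payoffs(0.01))   # the "rational" penny offer is refused: (0.0, 0.0)
print(payoffs(40.0))   # a fair-enough split goes through: (60.0, 40.0)
```

The penny offer, which standard rationality predicts both players should prefer to nothing, leaves everyone with nothing once the responder’s taste for fairness is in play.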
It seems to me that this is a very, very serious problem for Kaplow and Shavell. Imagine a society where everyone has a strong taste for fairness. Depending on the strength of that taste, the attempt to adopt any welfare-enhancing policy that did not strike people as fair would make (almost?) everyone worse off, because the disutility of perceived unfairness would outweigh whatever utility gains the policy might achieve. Kaplow and Shavell would probably concede the point in the abstract, but if the ultimatum game is any guide, then the baseline for wondering about this problem occurs at a much less unequal distribution than one might think.
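The worry in the last paragraph can be sketched numerically: if each person’s welfare includes a disutility term for perceived unfairness, a strong enough taste for fairness flips a material gain into a net loss. The additive welfare function and all the numbers here are my hypothetical illustration, not anything from Kaplow and Shavell.

```python
# Hypothetical sketch: a taste for fairness can turn a welfare
# "improvement" into a loss for the people who hold that taste.

def net_welfare(material_gain: float, perceived_unfairness: float,
                fairness_taste: float) -> float:
    """Welfare = material gain minus (taste x perceived unfairness).
    The additive form is an assumption for illustration only."""
    return material_gain - fairness_taste * perceived_unfairness

# A policy that gives everyone +5 in material terms,
# but is widely perceived as unfair (unfairness level 10):
print(net_welfare(5.0, 10.0, 0.2))   # weak taste: net +3.0, still a gain
print(net_welfare(5.0, 10.0, 0.8))   # strong taste: net -3.0, a loss
```

On these (made-up) numbers, the same policy improves welfare in a society with a weak taste for fairness and reduces it in one with a strong taste, which is exactly the problem posed for a purely welfarist rule.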
Why, then, is the U.S. on track to have such a terrible income distribution? Because the other big assumption of a lot of welfarism – that people are rational economic actors, possessed of correct information that they can then deploy rationally – turns out to be false as well. By this, I don’t just mean that people in general aren’t homo economicus (I take it that this critique has been established), but that other factors specific to views about distribution – like racism, and people’s inability to connect government policy to income distribution – are significant variables.