Rawls famously argued for the use of maximin behind the veil of ignorance. It's a rule that maximizes one's own minimum payoff. Now, much debate has centered on the degree to which this rule is too conservative. But what's interesting about Rawls is that he is explicit that maximin is a terrible rule in the context of dealing with genuine uncertainty (TJ 133; see also 135 & 72). (As an aside, that's because Rawls was a serious reader of Frank Knight, something I learned from David Levy, who bought online many of Rawls' library holdings with his marginalia on economics. [See here for an undergrad thesis that explores the philosophic issues!] Some other time I will explore how Rawls' books got put on sale.)
One recurring feature of this weekly blog is the displacement of genuine Knightian uncertainty in post-WWII economics and the formal revolution that accompanied it. As recounted, one route was into contingent commodities (in Arrow-Debreu-McKenzie, the main workhorse of economic theory); another route was treating uncertainty as randomness. The two approaches were eventually combined when the contingent commodity was treated in the context of perfect markets that were understood to follow random walks (i.e., the great 60s-70s Cambridge-Chicago synthesis associated with Samuelson, Fama, Black-Scholes, and all that [I hope to blog about this before long, because the real issue is how two concepts with different modal features were combined]). In reflecting on a lovely draft paper by Nicola Giocoli, I realized there was a third -- in some sense more basic -- route by which uncertainty got displaced within economics.
The fundamental notion of rationality within economics is consistency as understood in terms of Von Neumann and Morgenstern. (They did so by drawing on important work by one of my favorite philosophers, Ramsey.) Abraham Wald, who during WWII worked in a statistical group with Milton Friedman and Jacob Wolfowitz [the father of Paul Wolfowitz], built on this to show that Minimax was a Bayes rule with a least favorable prior. (Minimax = minimizing the maximum loss; in payoff terms, maximizing the minimum gain.) In 1954 Savage then gave a canonical formulation of subjective expected utility. Now, because Savage articulated all of this in subjective terms, he may be thought to be addressing conditions of Knightian uncertainty. And Savage is about as core to economics as it gets. Matters now turn on a non-trivial issue.
Minimax relies on the application of Bayesian methods. And it is only pessimistic if the priors that one sets are really terrible. To put my point simply: Von Neumann and Savage created a decision algorithm that lets people pretend that they have done worst-case modeling. (Yes, I know Bayesians will respond by claiming that I am unfairly maligning a perfectly respectable technique. After all, guns don't kill people, people kill people. [For a thorough critique of Bayesianism, see John Norton's work here.]) To echo something taught to me by David Levy: you get precision in the dimensions you can see at the expense of the dimensions you can't see. And the precise value will lead a life of its own until the philosophical-economist technicians who sold it to us are long out of sight.
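Wald's point can be made concrete with a toy example (the two-state, two-action loss matrix below is my own invention, not from Wald). A grid search shows that the worst-case value of the best randomized minimax strategy coincides with the Bayes risk under the least favorable prior -- i.e., the minimax player is behaving exactly like a Bayesian whose prior happens to be the most pessimistic one available:

```python
import numpy as np

# Toy decision problem: rows are actions, columns are states.
# The numbers are an illustrative assumption, chosen only so that
# neither pure action is minimax-optimal on its own.
loss = np.array([[4.0, 0.0],   # action a1: loss in state s1, state s2
                 [0.0, 3.0]])  # action a2

def bayes_risk(p):
    """Expected loss of the best action under prior p = P(s1)."""
    prior = np.array([p, 1 - p])
    return (loss @ prior).min()

# Least favorable prior: the prior that makes the best Bayes
# response as bad as possible.
grid = np.linspace(0, 1, 100001)
risks = np.array([bayes_risk(p) for p in grid])
p_star = grid[risks.argmax()]
max_bayes_risk = risks.max()

# Minimax over randomized actions: pick q = P(play a1) to minimize
# the worst-case expected loss across the two states.
qs = np.linspace(0, 1, 100001)
worst = np.maximum(qs * loss[0, 0], (1 - qs) * loss[1, 1])
minimax_value = worst.min()

print(p_star, max_bayes_risk, minimax_value)
# p* is about 3/7, and both values are about 12/7: the minimax
# strategy is a Bayes rule against the least favorable prior.
```

Under the least favorable prior (about 3/7 here) both actions have the same expected loss, which is what makes the prior "least favorable": the Bayesian can do no better than the minimax value.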