In a famous article, “Agreeing to Disagree,” Robert J. Aumann (1976) proved with a very elegant Bayesian argument that people with the same priors cannot agree to disagree: if their posteriors for an event are common knowledge, those posteriors must be equal. (Aumann went on to win a Nobel Prize in economics.) His “result implies that the process of exchanging information on the posteriors for A will continue until these posteriors are equal.” (1238) And this result fits a widespread intuition that in science (unlike, say, much of the history of philosophy) there is essentially agreement. (This idea has had an unfortunate impact on economists, who have tried to make their field look scientific by suppressing disagreement.) Aumann tacitly assumes an efficient (scientific) 'market' for information exchange. As regular readers of this blog know by now, I am no fan of the uses to which people put the Bayesian machinery, so I return to Aumann's argument below. The economist Ali Khan reminded me of Aumann's piece after he read a draft paper I co-authored with a terrific young Flemish PhD student, Merel Lefevere, “Private Epistemic Virtue, Public Vices: Moral Responsibility in Policy Sciences.” (What follows draws on our joint research.) A lot of philosophers also assume without argument that scientific communities are like Aumann's efficient market for information exchange.
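To make the exchange process vivid, here is a minimal sketch in Python of the back-and-forth that Aumann's result governs, in the style of Geanakoplos and Polemarchakis's “We can't disagree forever” dialogues. The state space, partitions, event, and all names below are illustrative choices of mine, not Aumann's own formalism: two agents with a common uniform prior announce their posteriors for an event A, each refines her information by what the other's announcement reveals, and the announcements converge.

```python
from fractions import Fraction

# A minimal sketch (assumptions mine) of the posterior-exchange process
# behind Aumann's theorem: two agents with a common uniform prior announce
# posteriors for an event; each announcement refines the other's
# information partition until the posteriors coincide.

OMEGA = set(range(1, 10))                  # states 1..9, uniform prior

def cell(partition, state):
    """Return the block of `partition` containing `state`."""
    for block in partition:
        if state in block:
            return block
    raise ValueError(f"{state} not in any block")

def posterior(info_set, event):
    """P(event | info_set) under a uniform prior: just counting."""
    return Fraction(len(info_set & event), len(info_set))

def refine(partition, announcement):
    """Split each block by the level sets of the other's announcement."""
    refined = []
    for block in partition:
        groups = {}
        for s in block:
            groups.setdefault(announcement[s], set()).add(s)
        refined.extend(groups.values())
    return refined

def dialogue(p1, p2, event, true_state, max_rounds=20):
    for rnd in range(max_rounds):
        # Each agent announces the posterior given her current cell.
        ann1 = {s: posterior(cell(p1, s), event) for s in OMEGA}
        ann2 = {s: posterior(cell(p2, s), event) for s in OMEGA}
        q1, q2 = ann1[true_state], ann2[true_state]
        print(f"round {rnd}: agent 1 says {q1}, agent 2 says {q2}")
        if q1 == q2:                       # agreement: Aumann's conclusion
            return q1
        # Each agent refines her partition by what the announcement reveals.
        p1, p2 = refine(p1, ann2), refine(p2, ann1)
    raise RuntimeError("no agreement (cannot happen: finite state space)")

P1 = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]     # agent 1's private information
P2 = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]     # agent 2's private information
A = {3, 4}                                 # the event they disagree about
dialogue(P1, P2, A, true_state=1)
```

Run as written, the dialogue converges in three announcements to a common posterior of 1/3. Note what makes it work: the agents are in constant, honest communication, and each announcement is instantly and correctly absorbed by the other. That is the efficient 'market' assumption doing all the labor.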
For example, in her terrific and instantly influential (2009) book, Science, Policy, and the Value-Free Ideal, Heather Douglas assumes that “scientists work in such communities, in [i] near constant communication and [ii] competition with other scientists.” (83) It should be clear that Douglas presupposes something like an efficient market in scientific ideas. But a moment's reflection suggests that the policy sciences are not always going to exhibit both features (constant communication and competition) simultaneously when we are dealing with, say, classified (e.g., defense-related) or sponsored research, which often has non-disclosure requirements attached to it. This is not an idle thought: financial trading houses try to keep their trading strategies and the consequences of their proprietary financial products secret for competitive advantage. Often these presuppose non-trivial technical and technological innovations that will not be available to, and thus not well understood by, the larger community, especially the policy economists working as regulators. This is, in my view, a near-fatal argument against the very idea that regulators can grasp systemic risk (about which more some other time). The issue generalizes: in the medical sciences and engineering it is quite common to keep new techniques secret until they have been patented, and only then to publish results. Some important, inconvenient results never get published at all when the financial stakes are high enough.
Moreover, policy scientists in particular are not always transparent about the explicit, or more subtly tacit, financial incentives of their consulting work. Some fields have widely diverging practices when it comes to replicating results or sharing data. It is, thus, by no means obvious that every scientific field is essentially a communicative community. In such cases, it may be unreasonable to expect Douglas' approach to apply without considerable finessing. (Lefevere and I use these observations to argue for a very different approach than Douglas' to analyzing the moral responsibility of policy scientists.) To put our position in economics jargon: even if we assume that scientists are individually pure truth-seekers, imperfections in scientific markets can produce non-epistemic externalities.
Either way, the above suggests that there are plenty of reasons for thinking that the conditions that would make Aumann's proof apply to real scientific communities need not always obtain. In those cases it could well be rational to disagree, even in the exact sciences.