In response to my recent post on double-counting in economics and climate science, Marcel Boumans explains when and how the same data may be used twice in model calibration in economics. His comments bring out the crucial issue: economists work in data-rich but relatively theory-poor environments. This is why in my 2005 piece (which contained mild criticism of Boumans), I had homed in on the lack of robust constants in economics. As I said in the earlier post on this, Charlotte Werndl has convinced me that sometimes we need to double-count in order to reveal a difference that makes a difference in the data (one that, absent double-counting, would have remained invisible to us). But given that it is a probabilistic argument, we now face a pressing question: when is double-counting safe? I have two intuitions on this: i) when we can be confident that background conditions are stable. Stable constants are a good proxy for this assumption. In economics we do not have them, but maybe we can be more confident in climate science. (I don't know.) ii) When we have independent ways of testing the nature of the underlying distributions.
Now, in conversation and correspondence with Werndl, it became clear that she sees herself as responding to folk (e.g., Worrall) who worry about double-counting because some other theory/model might be true (or closer to the truth). Werndl has neat confirmation-theory arguments against those concerns. However, I am worried about circumstances in which no (available) theory is even approximately true. That is, I am worried about expert overconfidence in the face of genuine uncertainty. In fact, I worry that confirmation theory reinforces overconfidence among experts, but more about that some other time. (The nub of my concern is this: confirmation theory encourages scientists and philosophers to see data as evidence; confirmation theory is a very bad guide to help us think about how a theory/model can be a good research engine that guides inquiry that generates high-quality evidence.)