In "1% Skepticism", I suggest that it's reasonable to have about a 1% credence that some radically skeptical scenario holds (e.g., that this is a dream or that we're in a short-term sim), and sometimes to make decisions based on those small possibilities that we wouldn't otherwise make (e.g., deciding to try to fly, or choosing to read a book rather than weed when one is otherwise right on the cusp).
But what about extremely remote possibilities with extremely large payouts? Maybe it's reasonable to have a one in 10^50 credence in the existence of a deity who would give me at least 10^50 lifetimes' worth of pleasure if I decided to raise my arms above my head right now. One in 10^50 is a very low credence, after all! But given the huge payout, if I then straightforwardly apply the expected value calculus, such remote possibilities might come to dominate my decision-making. That doesn't seem right!
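For concreteness, here is that arithmetic as a minimal sketch (exact rational arithmetic avoids floating-point underflow; measuring the payout in "lifetimes of pleasure" is my own illustrative unit):

```python
from fractions import Fraction

# Credence and payout from the example above: a one-in-10^50 credence
# in a deity who grants 10^50 lifetimes' worth of pleasure.
credence = Fraction(1, 10**50)
payout = 10**50                 # in lifetimes of pleasure

# Straightforward expected value calculus: multiply them through.
expected_value = credence * payout
print(expected_value)           # 1 -- a full lifetime's worth of pleasure

# So this one remote possibility carries as much expected value as an
# entire lifetime -- easily enough to swamp the cost of raising my arms.
```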
I see three ways to insulate my decisions from such remote possibilities without having to zero out those possibilities.
First, symmetry: My credences about extremely remote possibilities appear to be approximately symmetrical and canceling. In general, I'm not inclined to think that my prospects will be particularly better or worse, due to the influence of extremely unlikely deities considered as a group, if I raise my arms than if I do not. More specifically, I can imagine a variety of unlikely deities who punish and reward actions in complementary ways -- one punishing what the other rewards and vice versa. (Similarly for other remote possibilities of huge benefit or suffering, e.g., happening to rise to an infinite Elysium if I step right rather than left.) This indifference among the specifics is partly guided by my general sense that extremely remote possibilities of this sort don't greatly diminish or enhance the expected value of such actions. I see no reason not to be guided by that general sense -- no argumentative pressure to take such asymmetries seriously in the way that there is some argumentative pressure to take dream doubt seriously.
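A toy illustration of the canceling (the paired credences and payoffs below are hypothetical numbers of my own, chosen only to show the structure):

```python
# Each entry is (credence, payoff in lifetimes of pleasure) for a
# hypothetical remote deity. For every deity that rewards raising my
# arms, a mirror deity punishes it equally at the same tiny credence.
remote_scenarios = [
    (1e-50, +1e50),   # deity who rewards arm-raising
    (1e-50, -1e50),   # complementary deity who punishes it
    (1e-52, +1e48),   # a still more remote reward/punishment pair
    (1e-52, -1e48),
]

# The group's net contribution to the expected value of raising my arms:
net = sum(credence * payoff for credence, payoff in remote_scenarios)
print(net)  # 0.0 -- the symmetric remote possibilities cancel as a group
```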
Second, diminishing returns: Pleasure plausibly has diminishing marginal value for me. The 10^50th lifetime's worth of pleasure would add much less to the value of the outcome than the first lifetime's worth. If my utility function flattens in this way, then the subjective value of these astronomical payouts grows far more slowly than their nominal size, and multiplying a one in 10^50 credence by the diminished value of 10^50 lifetimes of pleasure yields only a negligible sliver of expected value.
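As a toy illustration of how much this can matter, here's a sketch in which the value of n lifetimes of pleasure grows only logarithmically with n (the logarithmic form is my own assumption for illustration; the argument needs only some sufficiently concave utility function):

```python
import math

# Illustrative concave utility: the value of n lifetimes of pleasure
# grows logarithmically rather than linearly with n. (My assumption,
# chosen for illustration; any sufficiently concave function will do.)
def utility(lifetimes):
    return math.log1p(lifetimes)

credence = 1e-50
payout_lifetimes = 1e50

linear_ev = credence * payout_lifetimes            # values every lifetime equally
concave_ev = credence * utility(payout_lifetimes)  # diminishing returns applied

print(linear_ev)   # ~1.0: a full lifetime's worth of expected pleasure
print(concave_ev)  # ~1.15e-48: negligible once returns diminish
```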
Third, loss aversion: I'm loss averse rather than risk neutral. I'll take a bit of a risk to avoid a sure or almost-sure loss. And my life as I think it is, given non-skeptical realism, is the reference point from which I determine what counts as a loss. If I somehow arrived at a one in 10^50 credence in a deity who would give me 10^50 lifetimes of pleasure if I avoided chocolate for the rest of my life (or alternatively, a deity who would give me 10^50 units of pain if I didn't avoid chocolate for the rest of my life), and if there were no countervailing considerations or symmetrical chocolate-rewarding deities, then given a risk-neutral utility function it might be rational for me to forgo chocolate forevermore. But forgoing chocolate would be a loss relative to my reference point; and since I'm loss averse rather than risk neutral, I might be willing to forgo the possible gain (or risk the further loss) so as to avoid the almost-certain loss of lifelong chocolate pleasure. Similarly, I might reasonably decline a gamble with a 99.99999% chance of death and a 0.00001% chance of 10^100 lifetimes' worth of pleasure, even bracketing diminishing returns. I might even reasonably decide that at some level of improbability -- one in 10^50? -- no finite positive or negative outcome could lead me to accept a substantial almost-certain loss. And if the time and cognitive effort of sweating over decisions of this sort itself counts as a sufficient loss, then I can simply disregard any possibility in which my credence is below that threshold.
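A minimal sketch of the chocolate example under these assumptions (the loss-aversion coefficient and the value of lifelong chocolate pleasure relative to my reference point are illustrative numbers of my own):

```python
# A prospect-theory-style sketch of the chocolate gamble. Assumptions
# (mine, for illustration): losses loom LAMBDA times larger than gains,
# and lifelong chocolate pleasure is worth 0.5 units relative to the
# reference point, where 1 unit = one lifetime's worth of pleasure.
LAMBDA = 2.25  # loss-aversion coefficient estimated by Tversky & Kahneman

def value(outcome):
    # Gains count at face value; losses are amplified by LAMBDA.
    return outcome if outcome >= 0 else LAMBDA * outcome

credence = 1e-50          # credence in the chocolate-hating deity
gain_if_deity = 1e50      # payout in lifetimes of pleasure
chocolate_loss = -0.5     # almost-certain loss of lifelong chocolate

# Risk-neutral expected value of forgoing chocolate:
ev_neutral = credence * gain_if_deity + (1 - credence) * chocolate_loss
# Loss-averse expected value of the same choice (diminishing returns
# deliberately bracketed, as in the gamble above):
ev_averse = credence * value(gain_if_deity) + (1 - credence) * value(chocolate_loss)

print(ev_neutral)  # ~ +0.5: risk neutrality favors forgoing chocolate
print(ev_averse)   # ~ -0.125: loss aversion favors keeping it
```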
These considerations synergize: the more symmetry and the more diminishing returns, the easier it is for loss aversion to inspire disregard. Decisions at credence one in 10^50 are one thing, decisions at credence 0.1% quite another.