Intuitive physics works great for picking berries, throwing stones, and walking through light underbrush. It's a complete disaster when applied to the very large, the very small, the very energetic, or the very fast. Similarly for intuitive biology, intuitive cosmology, and intuitive mathematics: They succeed for practical purposes across long-familiar types of cases, but when extended too far they go wildly astray.
How about intuitive ethics?
I incline toward moral realism. I think that there are moral facts that people can get right or wrong. Hitler's moral attitudes were not just different from ours but actually mistaken. The twentieth century "rights revolutions" weren't just change but real progress. I worry that if artificial intelligence research continues to progress, intuitive ethics might encounter a range of cases for which it is as ill prepared as intuitive physics was for quantum entanglement and relativistic time dilation.
Intuitive ethics was shaped in a context where the only species capable of human-grade practical and theoretical reasoning was humanity itself, and where human variation tended to stay within certain boundaries. It would be unsurprising if intuitive ethics were unprepared for utility monsters (capable of superhuman degrees of pleasure or pain), fission-fusion monsters (who can merge and divide at will), AIs of vastly superhuman intelligence, cheerfully suicidal AI slaves, conscious toys with features specifically designed to capture children's affection, giant virtual sim-worlds containing genuinely conscious beings over which we have godlike power, or entities with radically different value systems. We might expect human moral judgment to be baffled by such cases and to deliver wrong or contradictory or unstable verdicts.
Or maybe morality is constructed from our judgments and folkways, so that whatever moral facts there are, they are just the moral facts that we (or idealized versions of ourselves) think there are? Much like an object's being red, on a certain view of the nature of color, consists in its being such that ordinary human perceivers in normal conditions would experience it as red, maybe an action's being morally right just consists in its being such that ordinary human beings who considered the matter carefully would regard it as right? (This is a huge, complicated topic in metaethics, e.g., here and here.) If we take this approach, then morality might change as our sense of the world changes -- and as who counts as "we" changes. Maybe we could decide to give fission-fusion monsters some rights but not other rights, and shape future institutions accordingly. The unsettled nature of our intuitions about such cases, then, might present an opportunity for us to shape morality -- real morality, the real (or real enough) moral facts -- in one direction rather than another, by shaping our future reactions and habits.
Maybe different social groups would make different choices with different consequences for group survival, introducing cultural evolution into the mix. Moral confusion might open into a range of choices for moral architecture.
However, the range of legitimate choices is, I'm inclined to think, constrained by certain immovable moral facts, such as that it would be a moral disaster if the most successful future society constructed human-grade AIs, as self-aware as we are, as anxious about their future, and as capable of joy and suffering, simply to torture, enslave, and kill them for no good reason.
---------------------------------------------- Thanks to Ever Eigengrau for extensive discussion.