In a series of fascinating recent articles, philosopher Susan Schneider argues that
(1) Most of the intelligent beings in the universe might be Artificial Intelligences rather than biological life forms.
(2) These AIs might entirely lack conscious experiences.
Schneider's argument for (1) is simple and plausible: Once a species develops sufficient intelligence to create Artificial General Intelligence (as human beings appear to be on the cusp of doing), biological life forms are likely to be outcompeted, due to AGI's probable advantages in processing speed, durability, repairability, and environmental tolerance (including tolerance of deep space). I'm inclined to agree. For a catastrophic perspective on this issue, see Nick Bostrom. For a Pollyannaish perspective, see Ray Kurzweil.
The argument for (2) is trickier, partly because we don't yet have a consensus theory of consciousness. Here's how Schneider expresses the central argument in her recent Nautilus article:
Further, it may be more efficient for a self-improving superintelligence to eliminate consciousness. Think about how consciousness works in the human case. Only a small percentage of human mental processing is accessible to the conscious mind. Consciousness is correlated with novel learning tasks that require attention and focus. A superintelligence would possess expert-level knowledge in every domain, with rapid-fire computations ranging over vast databases that could include the entire Internet and ultimately encompass an entire galaxy. What would be novel to it? What would require slow, deliberative focus? Wouldn’t it have mastered everything already? Like an experienced driver on a familiar road, it could rely on nonconscious processing.
On this issue, I'm more optimistic than Schneider. Two reasons:
First, Schneider probably underestimates the capacity of the universe to create problems that require novel solutions. Mathematical problems, for example, can be arbitrarily difficult (including problems that are neither finitely solvable nor provably unsolvable). Of course, AGI might not care about such problems, so that alone is a thin thread on which to hang hope for consciousness. More importantly, if we assume Darwinian mechanisms, including the existence of other AGIs that present competitive and cooperative opportunities, then there ought to be advantages for AGIs that can outthink the other AGIs around them. And here, as in the mathematical case, I see no reason to expect an upper bound on difficulty. If your Darwinian opponent is a superintelligent AGI, you'd probably love to be an AGI with superintelligence + 1. (Of course, there are paths to evolutionary success other than intelligent creativity. But it's plausible that once superintelligent AGI emerges, there will be evolutionary niches that reward high levels of creative intelligence.)
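To make the "arbitrarily difficult" point a little more concrete, here is a toy Python sketch -- purely my illustration, not anything from Schneider's article -- of Turing's classic halting-problem diagonalization. No single program can correctly decide, for every program and input, whether that program halts, so mathematics never runs out of instances that resist routine, already-mastered treatment. The halts() oracle below is hypothetical; that's the point.

```python
# Toy illustration (my sketch): Turing's diagonalization argument.
# Suppose, hypothetically, that halts(src, inp) could correctly decide whether
# the program with source code `src` halts when run on input `inp`.

def halts(src: str, inp: str) -> bool:
    """Hypothetical halting oracle. No total, correct implementation can exist."""
    raise NotImplementedError("no general algorithm for this")

DIAGONAL = """
def diagonal(src):
    if halts(src, src):      # the oracle says: this program halts on itself...
        while True:          # ...then loop forever instead,
            pass
    return "done"            # ...otherwise halt.
"""

# Feeding DIAGONAL its own source yields a contradiction whichever answer the
# oracle gives, so halts() cannot exist. Each new program is a fresh problem
# instance with no routine solution -- one way the universe keeps generating
# tasks that call for novel thinking.
```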
Second, unity of organization in a complex system plausibly requires some high-level self-representation or broad systemic information sharing. Schneider is right that many current scientific approaches to consciousness correlate consciousness with novel learning and slow, deliberative focus. But most current scientific approaches to consciousness also associate consciousness with some sort of broad information sharing -- a "global workspace" or "fame in the brain" or "availability to working memory" or "higher-order" self-representation. On such views, we would expect a state of an intelligent system to be conscious if its content is available to the entity's other subsystems and/or reportable in some sort of "introspective" summary. For example, if a large AI knew, about its own processing of lightwave input, that it was representing huge amounts of light in the visible spectrum from direction alpha, and if the AI could report that fact to other AIs, and if the AI could accordingly modulate the processing of some of its non-visual subsystems (its long-term goal processing, its processing of sound wave information, its processing of linguistic input), then on theories of this general sort, its representation "lots of visible light from that direction!" would be conscious. And we ought probably to expect that large general AI systems would have the capacity to monitor their own states and distribute selected information widely. Otherwise, it's unlikely that such a system could act coherently over the long term. Its left hand wouldn't know what its right hand is doing.
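For readers who like to see the idea in code, here is a minimal, hypothetical sketch of that kind of broad information sharing: a "workspace" that broadcasts a selected representation to every registered subsystem and keeps a reportable record of what was shared. The class and function names are mine, invented for illustration only; this is not meant as Schneider's view or as anyone's actual architecture.

```python
# Minimal, hypothetical "global workspace" sketch: one subsystem's
# representation is broadcast to all other subsystems, which can then
# modulate their own processing. Names are illustrative only.

from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Representation:
    source: str   # which subsystem produced it
    content: str  # e.g. "lots of visible light from direction alpha"


@dataclass
class Workspace:
    """Broadcasts selected representations to every registered subsystem."""
    subscribers: list[Callable[[Representation], None]] = field(default_factory=list)
    log: list[Representation] = field(default_factory=list)  # crude "introspective" record

    def register(self, callback: Callable[[Representation], None]) -> None:
        self.subscribers.append(callback)

    def broadcast(self, rep: Representation) -> None:
        self.log.append(rep)        # reportable summary of what was shared
        for notify in self.subscribers:
            notify(rep)             # each subsystem can adjust its processing


def make_subsystem(name: str) -> Callable[[Representation], None]:
    def receive(rep: Representation) -> None:
        if rep.source != name:      # ignore your own broadcasts
            print(f"{name} adjusts its processing given: {rep.content!r}")
    return receive


workspace = Workspace()
for subsystem in ("goal_processing", "audition", "language"):
    workspace.register(make_subsystem(subsystem))

# The visual subsystem shares its representation globally.
workspace.broadcast(Representation("vision", "lots of visible light from direction alpha"))
```

Running the sketch prints each subsystem adjusting its processing in light of the visual report -- a coarse analogue of the left hand knowing what the right hand is doing.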
I share with Schneider a high degree of uncertainty about what the best theory of consciousness is. Perhaps it will turn out that consciousness depends crucially on some biological facts about us that aren't likely to be replicated in systems made of very different materials (see John Searle and Ned Block for concerns). But to the extent there's any general consensus or best guess about the science of consciousness, I believe it suggests hope rather than pessimism about the consciousness of large superintelligent AI systems.
Related:
Possible Psychology of a Matrioshka Brain (Oct 9, 2014)
If Materialism Is True, the United States Is Probably Conscious (Philosophical Studies 2015)
Susan Schneider on How to Prevent a Zombie Dictatorship (Jun 27, 2016)