Enclose the sun inside a layered nest of thin spherical computers. Have the innermost sphere harvest the sun's radiation to drive computational processes, emitting waste heat out its backside. Use this waste heat as the energy input for the computational processes of a second, larger and cooler sphere that encloses the first. Use the waste heat of the second sphere to drive the computational processes of a third. Keep adding spheres until you have an outermost sphere that operates near the background temperature of interstellar space.
Congratulations, you've built a Matrioshka Brain! It consumes the entire power output of its star and produces many orders of magnitude more computation per microsecond than all of the current computers on Earth do per year.
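To put rough numbers on that claim: each shell must re-radiate the sun's entire power output from a larger, cooler surface, and Landauer's principle bounds how many irreversible bit operations that power can drive at a given temperature. Here's a minimal back-of-envelope sketch in Python -- my own illustration, assuming idealized blackbody shells and Landauer-limited computation, not anything specified above:

```python
# Back-of-envelope sketch (illustrative assumptions: idealized blackbody shells
# re-radiating the full solar luminosity, and Landauer-limited computation).
import math

SOLAR_LUMINOSITY = 3.8e26   # watts
STEFAN_BOLTZMANN = 5.67e-8  # W / (m^2 K^4)
BOLTZMANN_K = 1.38e-23      # J / K
AU = 1.496e11               # meters

def shell_temperature(radius_m):
    """Equilibrium temperature of a thin shell radiating the star's full power
    from its outer surface (Stefan-Boltzmann law); shells farther out run cooler."""
    area = 4.0 * math.pi * radius_m ** 2
    return (SOLAR_LUMINOSITY / (STEFAN_BOLTZMANN * area)) ** 0.25

def landauer_bit_ops_per_second(temperature_k):
    """Upper bound on irreversible bit operations per second when dissipating
    the full solar luminosity at this temperature (k*T*ln 2 joules per erased bit)."""
    return SOLAR_LUMINOSITY / (BOLTZMANN_K * temperature_k * math.log(2))

for r_au in (1, 4, 16, 64, 256):
    t = shell_temperature(r_au * AU)
    print(f"shell at {r_au:>3} AU: ~{t:6.1f} K, "
          f"<= {landauer_bit_ops_per_second(t):.1e} bit-ops/s")
```

Even the warm innermost shell comes out around 400 K, with a Landauer bound on the order of 10^47 bit operations per second -- which is what underwrites the "many orders of magnitude" talk above.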
Here's a picture:
(Yes, it's black. Maybe not if you shine a flashlight on it, though.)
So...
Let's begin by considering a Matrioshka Brain planfully constructed by intelligent designers. The designers might have aimed at creating only a temporary entity -- a brief art installation, maybe, like a Buddhist sand mandala. Such temporary Brains are, perhaps, almost entirely beyond psychological prediction. But if the designers wanted to make a durable Matrioshka Brain, then broad design principles begin to suggest themselves.
Perception and action. If the designers want their Brain to last, it probably needs to monitor its environment and adjust its behavior in response. It needs to be able to detect, say, an incoming comet that might threaten its structure, so that it can take precautionary measures (such as deflecting the comet, opening a temporary pore for it to pass harmlessly through, or grabbing and incorporating it). There will probably be engineering trade-offs among at least three design features here: (1) structural resilience, (2) ability to detect things in its immediate environment, and (3) ability to predict the future. If the structure is highly resilient, then it might be able to ignore threats. Maybe it could even lack outer perception entirely. But such structural resilience might come at a cost: either more expensive construction (or at least fewer options for construction) or loss of desirable computational capacity after construction. So it might make sense to design a Brain that is less structurally resilient but more responsive to its environment -- avoiding or defeating threats, as it were, rather than just always taking hits to the chin. Here (2) and (3) might trade off: Better prediction of the future might reduce the need for here-and-now perception; better here-and-now perception (coupled with swift responsiveness) might reduce the need for future prediction.
Prediction and planning. Very near-term, practical "prediction" might be done by simple mechanisms (hairs that flex in a certain way, for example, to open a hole for the incoming comet), but long-term prediction, and prediction that involves something like evaluating hypothetical responses for effectiveness, start to look like planful cognition (if I deflected the comet this way, then what would happen? If I flexed vital parts away from it in this way, then what would happen?). Presumably, the designers could easily dedicate at least a small portion of the Matrioshka Brain to planning of this sort -- that seems likely to be a high-payoff use of computational resources, compared to having the giant Brain just react by simple reflex (and thus possibly not in the most effective or efficient way).
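One cheap way to implement that kind of hypothetical evaluation is to run candidate responses in simulation and score them. Here's a deliberately crude sketch of the idea, entirely my own: sweep candidate launch velocities for an interceptor and score each by how close it comes to an incoming comet, using made-up numbers and gravity-free straight-line motion as a stand-in for real orbital mechanics.

```python
# Toy illustration of evaluating hypothetical responses in simulation.
# (Invented numbers; 2-D, straight-line, gravity-free motion -- a crude
# stand-in for real trajectory prediction.)
import itertools
import math

def closest_approach(int_pos, int_vel, comet_pos, comet_vel):
    """Closed-form minimum separation of two bodies moving on straight lines."""
    rx, ry = comet_pos[0] - int_pos[0], comet_pos[1] - int_pos[1]   # relative position
    vx, vy = comet_vel[0] - int_vel[0], comet_vel[1] - int_vel[1]   # relative velocity
    v2 = vx * vx + vy * vy
    t_star = 0.0 if v2 == 0 else max(0.0, -(rx * vx + ry * vy) / v2)
    return math.hypot(rx + vx * t_star, ry + vy * t_star)

comet_pos, comet_vel = (1.0e9, 5.0e8), (-1.0e3, 0.0)   # meters and m/s, invented
interceptor_pos = (0.0, 0.0)                            # launch site on the structure

# Evaluate a coarse grid of candidate launch velocities; keep the best one.
candidates = list(itertools.product(range(0, 2001, 100), repeat=2))
best = min(candidates,
           key=lambda v: closest_approach(interceptor_pos, v, comet_pos, comet_vel))
miss = closest_approach(interceptor_pos, best, comet_pos, comet_vel)
print(f"best candidate launch velocity: {best} m/s, miss distance: {miss:.0f} m")
```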
Unity or disunity. If we assume the speed of light as a constraint, then the Brain's designers must choose between a very slow, temporally unified system and a system with fast, distributed processes that communicate their results across the sphere at a delay. The latter seems more natural if the aim is to maximize computation, but the former might also work as an architecture, if a lot is packed into every slow cycle. A Brain that dedicates too many resources to fighting itself might not survive well or effectively serve other design purposes (and might not even be well thought of as a single Brain), but some competition among the parts might prove desirable (or not), and I see no compelling reason to think that its actions and cognition need be as unified as a human being's.
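For a sense of scale (my own illustrative numbers, assuming a shell roughly the radius of Earth's orbit and nanosecond-scale local operations): a signal crossing the structure takes about a quarter of an hour, so a globally synchronized "cycle" would span roughly a trillion local operations.

```python
# Rough arithmetic for the unity/disunity trade-off.
# Assumptions (mine, for illustration): a 1 AU shell radius and ~1 ns local operations.
AU = 1.496e11          # meters
C = 2.998e8            # speed of light, m/s
LOCAL_CYCLE_S = 1e-9   # assumed duration of a local, chip-scale operation

crossing_s = (2 * AU) / C   # one-way signal time across the shell's diameter
print(f"crossing time: ~{crossing_s:.0f} s (~{crossing_s / 60:.1f} minutes)")
print(f"local operations per globally synchronized cycle: ~{crossing_s / LOCAL_CYCLE_S:.1e}")
```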
Self-monitoring and memory. It seems reasonable to add, too, some sort of self-monitoring capacities -- both of its general structure (so that it can detect physical damage) and of its ongoing computational processes (so that it can error-check and manage malfunction) -- analogs of proprioception and introspection. And if we assume that the Brain does not start with all the knowledge it could possibly want, it must have some mechanism to record new discoveries and then later have its processing informed by those discoveries. If processing is both distributed and interactive among the parts, then parts might retain traces of their recent processing that influence reactions to input from other parts with which they communicate. Semi-stable feedback loops, for example, might be a natural way to implement error-checking and malfunction monitoring. This in turn suggests the possibility of a distinction between high-detail, quickly dumped, short-term memory, and more selective and/or less detailed long-term memory -- probably in more than just those two temporal grades, and quite possibly with different memories differently accessible to different parts of the system.
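As a toy sketch of the memory-grading idea (entirely my own illustration, with made-up names and thresholds, not a proposal from the post): one part of the system might keep a high-detail buffer that rolls off quickly, while promoting only salient, compressed summaries into a longer-lived store.

```python
# Toy sketch of two memory grades in one part of a distributed system:
# a high-detail short-term buffer that is quickly dumped, and a selective,
# summarized long-term store. (Illustrative only.)
from collections import deque

class BrainPart:
    def __init__(self, short_term_span=100, salience_threshold=0.9):
        self.short_term = deque(maxlen=short_term_span)  # full detail, rolls off fast
        self.long_term = []                              # selective, low-detail, durable
        self.salience_threshold = salience_threshold

    def observe(self, message):
        """Record an incoming message; promote only salient summaries to long-term memory."""
        self.short_term.append(message)
        if message.get("salience", 0.0) >= self.salience_threshold:
            self.long_term.append({"summary": message["summary"],
                                   "salience": message["salience"]})

part = BrainPart()
part.observe({"summary": "routine telemetry from neighboring part", "salience": 0.1})
part.observe({"summary": "structural damage detected in one sector", "salience": 0.95})
print(len(part.short_term), "short-term traces;", len(part.long_term), "long-term trace(s)")
```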
Preferences. Presumably, the Matrioshka Brain, to the extent it is unified, would have a somewhat stable ordering of priorities -- priorities that it didn't arbitrarily jettison and shuffle around (e.g., structural integrity of Part A more important than getting the short-term computational outputs from Part B) -- and it would have some record of whether things were "going well" (progress toward satisfaction of its top priorities) vs. "going badly". Priorities that have little to do with self-preservation and functional maintenance, though, might be difficult to predict and highly path-dependent (seeding the galaxy with descendants? calculating as many digits of pi as possible? designing and playing endless variations of Pac-Man?).
The thing's cognition is starting to look almost human! Maybe that's just my own humanocentric failure of imagination -- maybe! -- but I don't think so. These seem to be plausible architectural features of a large entity designed to endure in an imperfect world while doing lots of computation and calculation.
A Matrioshka Brain that is not intentionally constructed seems likely to have similar features, at least if it is to endure. For example, it might have merged from complex but smaller subsystems, retaining the subsystems' psychological features -- features that allowed them to compete in evolutionary selection against other complex subsystems. Or it might have been seeded from a similar Matrioshka Brain at a nearby star. Alternatively, though, maybe simple, unsophisticated entities in sufficient numbers could create a Matrioshka Brain that endures via dumb rebuilding of destroyed parts, in which case my current psychological conjectures wouldn't apply.
Wilder still: How might a Matrioshka Brain implement long-term memory, remote conjecture, etc.? If it is massively parallel because of light-speed constraints, then it might do so by segregating subprocesses to create simulated events. For example, to predict the effects of catapulting a stored asteroid into a comet cloud, it might dedicate a subpart to simulate the effects of different catapulting trajectories. If it wishes to retain memories of the psychology of its human or post-human creators -- either out of path-dependent intrinsic interest or because it's potentially useful in the long run to have knowledge about a variety of species' cognition -- it might do so by dedicating parts of itself to emulate exactly that psychology. To be realistic, such an emulation might have to engage in real cognition; and to be historically accurate as a memory, such an emulated human or post-human would have to be ignorant of its real nature. To capture social interactions, whole groups of people might be simultaneously emulated, in interaction with each other through what seems to them to be sensory input.
Maybe the Brain wouldn't do this sort of thing very often, and maybe when it did do it, it would only emulate people in large, stable environments. The Brain, or its creators, might have an "ethics" forbidding the frequent instantiation and de-instantiation of deluded, conscious sub-entities -- or maybe not. Maybe it makes trillions of these sub-entities, scrambles them up, runs them for a minute, then ends them or re-launches them. Maybe it has an ethics on which pleasure could be instantiated in such sub-entities but suffering would always only be false memory; or maybe the Brain finds it rewarding to experience pleasure via inducing pleasure in sub-entities and so creates lots of sub-entities with peak experiences (and maybe illusory memories) but no real past or future.
Maybe it's bored. If it evolved up from merging social sub-entities, then maybe it still craves sociality -- but the nearest alien contacts are light-years away. If it was constructed with lots of extra capacity, it might want to "play" with its capacities rather than have them sit idle. This could further motivate the creation of conscious sub-entities that interact with each other or with which it interacts as a whole.
This is one possible picture of God.