Or, why I don’t fear the p-zombie apocalypse.
This post analyzes the concern that evolution, in the absence of a powerful singleton, might in the distant future produce what Nick Bostrom calls a “Disneyland without children”: a future full of agents whose existence we don’t value because they lack some important human-like quality.
The most serious description of this concern is in Bostrom’s The Future of Human Evolution. Bostrom is cautious enough that it’s hard to disagree with anything he says.
Age of Em has prompted a batch of similar concerns. Scott Alexander at SlateStarCodex has one of the better discussions (see section IV of his review of Age of Em).
People sometimes sound as if they want to use this worry as an excuse to oppose the age of em scenario, but it applies to just about any scenario with broadly human actors. If uploading never happens, biological evolution could produce slower paths to the same problem(s) [1]. Even in a singleton AI scenario, the singleton will need to resolve the tension between evolution and our desire to preserve our values, although there it’s more important to focus on how the singleton is designed.
These concerns often assume that something like the age of em will last forever. The scenario that Age of Em analyzes seems unstable, in that it’s likely to be altered by stranger-than-human intelligence. But concerns about evolution only depend on control being decentralized enough that there’s doubt about whether a central government can strongly enforce rules. That situation seems stable enough to be worth analyzing.
I’ll refer to this thing we care about as X (qualia? consciousness? fun?), but I expect people will disagree about what X is for quite some time. Some people will worry that X is lost in uploading; others will worry that some later optimization process will remove X from some future generation of ems.
I’ll first analyze scenarios in which X is a single feature (in the sense that it would be lost in a single step). Later, I’ll try to analyze the other extreme, where X is something that could be lost in millions of tiny steps. Neither extreme seems likely, but I expect that analyzing the extremes will illustrate the important principles.