existential risks


Book review: Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom.

This book is substantially more thoughtful than previous books on AGI risk, and substantially better organized than the previous thoughtful writings on the subject.

Bostrom’s discussion of AGI takeoff speed is disappointingly philosophical. Many sources (most recently CFAR) have told me to rely on the outside view to forecast how long something will take. We’ve got lots of weak evidence about the nature of intelligence, how it evolved, and about how various kinds of software improve, providing data for an outside view. Bostrom assigns a vague but implausibly high probability to AI going from human-equivalent to more powerful than humanity as a whole in days, with little thought of this kind of empirical check.

I’ll discuss this more in a separate post which is more about the general AI foom debate than about this book.

Bostrom’s discussion of how takeoff speed influences the chance of a winner-take-all scenario makes it clear that disagreements over takeoff speed are pretty much the only cause of my disagreement with him over the likelihood of a winner-take-all outcome. Other writers aren’t as clear about this. I suspect those who assign substantial probability to a winner-take-all outcome even if takeoff is slow will wish he’d analyzed this in more detail.

I’m less optimistic than Bostrom about monitoring AGI progress. He says “it would not be too difficult to identify most capable individuals with a long-standing interest in [AGI] research”. AGI might require enough expertise for that to be true, but if AGI surprises me by only needing modest new insights, I’m concerned by the precedent of Tim Berners-Lee creating a global hypertext system while barely being noticed by the “leading” researchers in that field. Also, the large number of people who mistakenly think they’ve been making progress on AGI may obscure the competent ones.

He seems confused about the long-term trends in AI researcher beliefs about the risks: “The pioneers of artificial intelligence … mostly did not contemplate the possibility of greater-than-human AI” seems implausible; it’s much more likely they expected it but were either overconfident about it producing good results or fatalistic about preventing bad results (“If we’re lucky, they might decide to keep us as pets” – Marvin Minsky, LIFE Nov 20, 1970).

The best parts of the book clarify many issues related to ensuring that an AGI does what we want.

He catalogs more approaches to controlling AGI than I had previously considered, including tripwires, oracles, and genies, and clearly explains many limits to what they can accomplish.

He briefly mentions the risk that the operator of an oracle AI would misuse it for her personal advantage. Why should we have less concern about the designers of other types of AGI giving them goals that favor the designers?

If an oracle AI can’t produce a result that humans can analyze well enough to decide (without trusting the AI) that it’s safe, why would we expect other approaches (e.g. humans writing the equivalent seed AI directly) to be more feasible?

He covers a wide range of ways we can imagine handling AI goals, including strange ideas such as telling an AGI to use the motivations of superintelligences created by other civilizations.

He does a very good job of discussing what values we should and shouldn’t install in an AGI: the best decision theory plus a “do what I mean” dynamic, but not a complete morality.

I’m somewhat concerned by his use of “final goal” without careful explanation. People who anthropomorphise goals are likely to misread at least the first few references to “final goal” as if it worked like a human goal, i.e. something that the AI might want to modify if it conflicted with other goals.

It’s not clear how much of these chapters depends on a winner-take-all scenario. I get the impression that Bostrom doubts we can do much about the risks associated with scenarios where multiple AGIs become superhuman. This seems strange to me. I want people who write about AGI risks to devote more attention to whether we can influence whether multiple AGIs become a singleton, and how they treat lesser intelligences. Designing AGI to reflect values we want seems almost as desirable in scenarios with multiple AGIs as in the winner-take-all scenario (I’m unsure what Bostrom thinks about that). In a world with many AGIs with unfriendly values, what can humans do to bargain for a habitable niche?

He has a chapter on worlds dominated by whole brain emulations (WBE), probably inspired by Robin Hanson’s writings but with more focus on evaluating risks than on predicting the most probable outcomes. Since it looks like we should still expect an em-dominated world to be replaced at some point by AGI(s) that are designed more cleanly and able to self-improve faster, this isn’t really an alternative to the scenarios discussed in the rest of the book.

He treats starting with “familiar and human-like motivations” (in an augmentation route) as an advantage. Judging from our experience with humans who take over large countries, a human-derived intelligence that conquered the world wouldn’t be safe or friendly, although it would be closer to my goals than a smiley-face maximizer. The main advantage I see in a human-derived superintelligence would be a lower risk of it self-improving fast enough for the frontrunner advantage to be large. But that also means it’s more likely to be eclipsed by a design more amenable to self-improvement.

I’m suspicious of the implication (figure 13) that the risks of WBE will be comparable to AGI risks.

  • Is that mainly due to “neuromorphic AI” risks? Bostrom’s description of neuromorphic AI is vague, but my intuition is that human intelligence isn’t flexible enough to easily get the intelligence part of WBE without getting something moderately close to human behavior.
  • Is the risk of uploaded chimp(s) important? I have some concerns there, but Bostrom doesn’t mention it.
  • How about the risks of competitive pressures driving out human traits (discussed more fully/verbosely at Slate Star Codex)? If WBE and AGI happen close enough together in time that we can plausibly influence which comes first, I don’t expect the time between the two to be long enough for that competition to have large effects.
  • The risk that many humans won’t have enough resources to survive? That’s scary, but wouldn’t cause the astronomical waste of extinction.

Also, I don’t accept his assertion that AGI before WBE eliminates the risks of WBE. Some scenarios with multiple independently designed AGIs forming a weakly coordinated singleton (which I consider more likely than Bostrom does) appear to leave the last two risks in that list unresolved.

This book represents progress toward clear thinking about AGI risks, but much more work still needs to be done.

Book review: Our Mathematical Universe: My Quest for the Ultimate Nature of Reality, by Max Tegmark.

His most important claim is the radical Platonist view that all well-defined mathematical structures exist, and that therefore most physics is the study of which of those structures we inhabit. His arguments are more tempting than any others I’ve seen for this view, but I’m left with plenty of doubt.

He points to ways that we can imagine this hypothesis being testable, such as via the fine-tuning of fundamental constants. But he doesn’t provide a good reason to think that those tests will distinguish his hypothesis from other popular approaches, as it’s easy to imagine that we’ll never find situations where they make different predictions.

The most valuable parts of the book involve the claim that the multiverse is spatially infinite. He mostly talks as if that’s likely to be true, but his explanations caused me to lower my probability estimate for that claim.

He gets that infinity by claiming that inflation continues in places for infinite time, and then claiming there are reference frames for which that infinite time is located in a spatial rather than a time direction. I have a vague intuition why that second step might be right (but I’m fairly sure he left something important out of the explanation).

For the infinite time part, I’m stuck with relying on argument from authority, without much evidence that the relevant authorities have much confidence in the claim.

Toward the end of the book he mentions reasons to doubt infinities in physics theories – it’s easy to find examples where we model substances such as air as infinitely divisible, when we know that at some levels of detail atomic theory is more accurate. The eternal inflation theory depends on an infinitely expandable space which we can easily imagine is only an approximation. Plus, when physicists explicitly ask whether the universe will last forever, they don’t seem very confident. I’m also tempted to say that the measure problem (i.e. the absence of a way to say some events are more likely than others if they all happen an infinite number of times) is a reason to doubt infinities, but I don’t have much confidence that reality obeys my desire for it to be comprehensible.

I’m disappointed by his claim that we can get good evidence that we’re not Boltzmann brains. He wants us to test our memories, because if I am a Boltzmann brain I’ll probably have a bunch of absurd memories. But suppose I remember having done that test in the past few minutes. The Boltzmann brain hypothesis suggests it’s much more likely for me to have randomly acquired the memory of having passed the test than for me to have actually done the test. Maybe there’s a way to turn Tegmark’s argument into something rigorous, but it isn’t obvious.

He gives a surprising argument that the differences between the Everett and Copenhagen interpretations of quantum mechanics don’t matter much, because unrelated reasons involving multiverses lead us to expect results comparable to the Everett interpretation even if the Copenhagen interpretation is correct.

It’s a bit hard to figure out what the book’s target audience is – he hides the few equations he uses in footnotes to make it look easy for laymen to follow, but he also discusses hard concepts such as universes with more than one time dimension with little attempt to prepare laymen for them.

The first few chapters are intended for readers with little knowledge of physics. One theme is a historical trend which he mostly describes as expanding our estimate of how big reality is. But the evidence he provides only tells us that the lower bounds that people give keep increasing. Looking at the upper bound (typically infinity) makes that trend look less interesting.

The book has many interesting digressions such as a description of how to build Douglas Adams’ infinite improbability drive.

Book review: Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat.

This book describes the risks that artificial general intelligence will cause human extinction, presenting the ideas propounded by Eliezer Yudkowsky in a slightly more organized but less rigorous style than Eliezer does.

Barrat is insufficiently curious about why many people who claim to be AI experts disagree, so he’ll do little to change the minds of people who already have opinions on the subject.

He dismisses critics as unable or unwilling to think clearly about the arguments. My experience suggests that while there’s usually some argument any one critic hasn’t paid much attention to, that’s often because the critic has thoughtfully rejected some other step in Eliezer’s reasoning and concluded that the step they’re ignoring wouldn’t change their conclusions.

The weakest claim in the book is that an AGI might become superintelligent in hours. A large fraction of people who have worked on AGI (e.g. Eric Baum’s What is Thought?) dismiss this as too improbable to be worth much attention, and Barrat doesn’t offer them any reason to reconsider. The rapid takeoff scenarios influence how plausible it is that the first AGI will take over the world. Barrat seems only interested in talking to readers who can be convinced we’re almost certainly doomed if we don’t build the first AGI right. Why not also pay some attention to the more complex situation where an AGI takes years to become superhuman? Should people who think there’s a 1% chance of the first AGI conquering the world worry about that risk?

Some people don’t approve of trying to build an immutable utility function into an AGI, often pointing to changes in human goals without clearly analyzing whether those are subgoals that are being altered to achieve a stable supergoal/utility function. Barrat mentions one such person, but does little to analyze this disagreement.

Would an AGI that has been designed without careful attention to safety blindly follow a narrow interpretation of its programmed goal(s), or would it (after achieving superintelligence) figure out and follow the intentions of its authors? People seem to jump to whatever conclusion supports their attitude toward AGI risk without much analysis of why others disagree, and Barrat follows that pattern.

I can imagine either possibility. The easiest way to encode a goal system in an AGI might be something like “output chess moves which, according to the rules of chess, will result in checkmate”, in which case turning the planet into computronium might help satisfy that goal.

An apparently harder approach would have the AGI consult a human arbiter to figure out whether it wins the chess game – “human arbiter” isn’t easy to encode in typical software. But AGI wouldn’t be typical software. It’s not obviously wrong to believe that software smart enough to take over the world would be smart enough to handle hard concepts like that. I’d like to see someone pin down people who think this is the obvious result and get them to explain how they imagine the AGI handling the goal before it reaches human-level intelligence.
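
To make the contrast concrete, here is a minimal toy sketch (my own illustration, not anything from the book); the function names and the stubbed-out chess details are hypothetical, and the point is only the shape of the two goal specifications:

```python
# Toy illustration only: the shape of a formally specified goal versus one
# that defers to a human arbiter. Nothing here resembles a real AGI design;
# the names and the stubbed-out chess details are hypothetical.

def formal_goal(world_state: dict) -> bool:
    """Satisfied iff a mechanically checkable condition holds.
    An optimizer scored on this predicate has no reason to care about
    anything the predicate doesn't mention (such as what it converts
    into computing hardware while searching for moves)."""
    return bool(world_state.get("opponent_checkmated", False))

def arbiter_goal(world_state: dict, ask_human) -> bool:
    """Satisfied iff a human arbiter agrees the game was won.
    Trivial to write as a signature, but the arbiter hides all the
    difficulty: the system must already model what a human means
    before it can pursue this goal at all."""
    return bool(ask_human("Did I win this chess game fairly?", world_state))

if __name__ == "__main__":
    print(formal_goal({"opponent_checkmated": True}))            # True
    print(arbiter_goal({"moves": []}, lambda q, state: True))    # stubbed arbiter
```

The second signature is easy to write down, but everything hard is hidden inside the arbiter, which is roughly what I’d want the optimists to explain for the pre-human-level stage.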

He mentions some past events that might provide analogies for how AGI will interact with us, but I’m disappointed by how little thought he puts into this.

His examples of contact between technologically advanced beings and less advanced ones all refer to Europeans contacting Native Americans. I’d like to have seen a wider variety of analogies, e.g.:

  • Japan’s contact with the west after centuries of isolation
  • the interaction between neanderthals and humans
  • the contact that resulted in mitochondria becoming part of our cells

He quotes Vinge saying an AGI ‘would not be humankind’s “tool” – any more than humans are the tools of rabbits or robins or chimpanzees.’ I’d say that humans are sometimes the tools of human DNA, which raises more complex questions of how well the DNA’s interests are served.

The book contains many questionable digressions which seem to be designed to entertain.

He claims Google must have an AGI project in spite of denials by Google’s Peter Norvig (this was before it bought DeepMind). But the evidence he uses to back up this claim is that Google thinks something like AGI would be desirable. The obvious conclusion would be that Google did not then think it had the skill to usefully work on AGI, which would be a sensible position given the history of AGI.

He thinks there’s something paradoxical about Eliezer Yudkowsky wanting to keep some information about himself private while putting lots of personal information on the web. The specific examples Barrat gives strongly suggest that Eliezer doesn’t value the standard notion of privacy, but wants to limit people’s ability to distract him. Barrat also says Eliezer “gave up reading for fun several years ago”, which will surprise those who see him frequently mention works of fiction in his Author’s Notes.

All this makes me wonder who the book’s target audience is. It seems to be someone less sophisticated than a person who could write an AGI.

Discussions asking whether “Snowball Earth” triggered animal evolution (see the bottom half of that page) point to increasing evidence that the Snowball Earth hypothesis may explain an important part of why spacefaring civilizations seem rare.

photosynthetic organisms are limited by nutrients, most often nitrogen or phosphorous

the glaciations led to high phosphorous concentrations, which led to high productivity, which led to high oxygen in the oceans and atmosphere, which allowed for animal evolution to be triggered and thus the rise of the metazoans.

This seems quite speculative, but if true it might mean that our planet needed a snowball earth effect for complex life to evolve, but also needed that snowball earth period to be followed by hundreds of millions of years without another snowball earth period that would wipe out complex life. It’s easy to imagine that the conditions needed to produce one snowball earth effect make it very unusual for the planet to escape repeated snowball earth events for as long as it did, thus explaining more of the Fermi paradox than seemed previously possible.

The most interesting talk at the Singularity Summit 2010 was Shane Legg’s description of an Algorithmic Intelligence Quotient (AIQ) test that measures something intelligence-like automatically in a way that can test AI programs (or at least the Monte-Carlo AIXI that he uses) on 1000+ environments.

He had a mathematical formula which he thinks rigorously defines intelligence. But he didn’t specify what he meant by the set of possible environments, saying that would be a 50-page paper (he said a good deal of the work on the test had been done in the previous week, so presumably he’s still working on the project). He also included a term that applies Occam’s razor which I didn’t completely understand, but it seems likely that it should have a fairly non-controversial effect.
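
For reference, the published Legg–Hutter universal intelligence measure, which I believe the AIQ test is designed to approximate by Monte Carlo sampling over environments, looks like this (my transcription; the 2^{-K(mu)} factor is the Occam’s razor term mentioned above):

```latex
% Legg & Hutter's universal intelligence measure; the AIQ test approximates
% the sum by sampling environments rather than enumerating them.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
% E         : a set of computable, reward-bounded environments
% K(\mu)    : the Kolmogorov complexity of environment \mu, so simpler
%             environments get exponentially more weight (Occam's razor)
% V_\mu^\pi : the expected cumulative reward agent \pi earns in \mu
```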

The environments sound like they imitate individual questions on an IQ test, but with a much wider range of difficulties. We need a more complete description of the set of environments he uses in order to evaluate whether they’re heavily biased toward what Monte-Carlo AIXI does well or whether they closely resemble the environments an AI will find in the real world. He described two reasons for having some confidence in his set of environments: different subsets provided roughly similar results, and a human taking a small subset of the test found some environments easy, some very challenging, and some too hard to understand.

It sounds like with a few more months worth of effort, he could generate a series of results that show a trend in the AIQ of the best AI program in any given year, and also the AIQ of some smart humans (although he implied it would take a long time for a human to complete a test). That would give us some idea of whether AI workers have been making steady progress, and if so when the trend is likely to cross human AIQ levels. An educated guess about when AI will have a major impact on the world should help a bit in preparing for it.

A more disturbing possibility is that this test will be used as a fitness function for genetic programming. Given sufficient computing power, that looks likely to generate superhuman intelligence that is almost certainly unfriendly to humans. I’m confident that sufficient computing power is not available yet, but my confidence will decline over time.
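
To spell out that worry, “using the test as a fitness function” would presumably look something like the generic evolutionary loop below. This is my own toy sketch rather than anything Legg proposed; measure_aiq and mutate are hypothetical stand-ins for scoring a candidate program on sampled environments and perturbing its code:

```python
# Toy sketch of a genetic-programming loop driven by an AIQ-like score.
# measure_aiq and mutate are hypothetical stand-ins supplied by the caller.
import random

def evolve(population, measure_aiq, mutate, generations=100):
    for _ in range(generations):
        # Score every candidate program and keep the top half.
        ranked = sorted(population, key=measure_aiq, reverse=True)
        survivors = ranked[: max(1, len(ranked) // 2)]
        # Refill the population with mutated copies of random survivors.
        children = [mutate(random.choice(survivors)) for _ in survivors]
        population = survivors + children
    return max(population, key=measure_aiq)
```

The loop itself is trivial; nearly all the computing power goes into evaluating measure_aiq, which is why I think available hardware is what currently limits this.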

Brian Wang has a few more notes on this talk.

The Global Catastrophic Risks conference last Friday was a mix of good and bad talks.
By far the most provocative was Josh’s talk about “the Weather Machine”. This would consist of small (under 1 cm) balloons made of material a few atoms thick (i.e. requiring nanotechnology that won’t be available for a couple of decades), filled with hydrogen, and each having a mirror in its equatorial plane. They would have enough communications and orientation control to be individually pointed wherever the entity in charge of them wants. They would float 20 miles above the earth’s surface and form a nearly continuous layer surrounding the planet.
This machine would have a few orders of magnitude more power over atmospheric temperatures than would be needed to compensate for the warming caused by greenhouse gases this century, although it would only be a partial solution to the waste heat problem farther in the future that Freitas worries about in his discussion of the global hypsithermal limit.
The military implications make me hope it won’t be possible to make it as powerful as Josh claims. If 10 percent of the mirrors target one location, it would be difficult for anyone in the target area to survive. I suspect defensive mirrors would be of some use, but there would still be serious heating of the atmosphere near the mirrors. Josh claims that it could be designed with a deadman switch that would cause a snowball earth effect if the entity in charge were destroyed, but it’s not obvious why the balloons couldn’t be destroyed in that scenario. Later in the weekend Chris Hibbert raised concerns about how secure it would be against unauthorized people hacking into it, and I wasn’t reassured by Josh’s answer.
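
A rough back-of-envelope calculation (my own round numbers, not Josh’s, and ignoring geometric limits on which mirrors can see a given target) suggests why both claims are at least plausible in scale: the shell modulates the full solar flux reaching Earth, roughly a hundred times the few W/m² of greenhouse forcing, and redirecting a tenth of that flux onto a metropolitan-sized area would deliver vastly more than full sunlight:

```python
# Back-of-envelope numbers for the Weather Machine claims.
# All figures are rough assumptions of mine, not Josh's.
import math

SOLAR_CONSTANT = 1361.0        # W/m^2 at the top of the atmosphere
EARTH_RADIUS = 6.371e6         # m
GREENHOUSE_FORCING = 3.0       # W/m^2, rough present-day anthropogenic forcing

# Total sunlight intercepted by Earth, and the same power averaged over
# the whole surface (the flux the shell could in principle modulate).
intercepted = SOLAR_CONSTANT * math.pi * EARTH_RADIUS ** 2        # ~1.7e17 W
average_flux = intercepted / (4 * math.pi * EARTH_RADIUS ** 2)    # ~340 W/m^2

print(f"Average modulable flux: {average_flux:.0f} W/m^2")
print(f"Ratio to greenhouse forcing: {average_flux / GREENHOUSE_FORCING:.0f}x")

# If 10% of the mirrors redirected their sunlight onto a 1000 km^2 area:
target_area = 1000e6                                              # m^2
focused_flux = 0.10 * intercepted / target_area
print(f"Flux on target: {focused_flux:.1e} W/m^2 "
      f"(~{focused_flux / 1000:.0f}x full midday sunlight)")
```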

James Hughes gave a talk advocating world government. I was disappointed with his inability to imagine that that would result in power becoming too centralized. Nick Bostrom’s discussions of this subject are much more thoughtful.

Alan Goldstein gave a talk about the A-Prize and defining a concept called the carbon barrier to distinguish biological from non-biological life. Josh pointed out that, as stated, all life fits Goldstein’s definition of biological (since any information can be encoded in DNA). Goldstein modified his definition to avoid that, and then other people mentioned reports such as this which imply that humans don’t fall within Goldstein’s definition of biological due to inheritance of information through means other than DNA. Goldstein seemed unable to understand that objection.

Book review: Global Catastrophic Risks by Nick Bostrom and Milan Cirkovic.
This is a relatively comprehensive collection of thoughtful essays about the risks of a major catastrophe (mainly those that would kill a billion or more people).
Probably the most important chapter is the one on risks associated with AI, since few people attempting to create an AI seem to understand the possibilities it describes. It makes some implausible claims about the speed with which an AI could take over the world, but the argument they are used to support only requires that a first-mover advantage be important, and that is only weakly dependent on assumptions about the speed with which AI will improve.
The risk of a large fraction of humanity being killed by a super-volcano is apparently higher than the risk from asteroids, but volcanoes have more of a limit on their maximum size, so they appear to pose less risk of human extinction.
The risks of asteroids and comets can’t be handled as well as I thought by early detection, because some dark comets can’t be detected with current technology until it’s way too late. It seems we ought to start thinking about better detection systems, which would probably require large improvements in the cost-effectiveness of space-based telescopes or other sensors.
Many of the volcano and asteroid deaths would be due to crop failures from cold weather. Since mid-ocean temperatures are more stable than land temperatures, ocean-based aquaculture would help mitigate this risk.
The climate change chapter seems much more objective and credible than what I’ve previously read on the subject, but is technical enough that it won’t be widely read, and it won’t satisfy anyone who is looking for arguments to justify their favorite policy. The best part is a list of possible instabilities which appear unlikely but which aren’t understood well enough to evaluate with any confidence.
The chapter on plagues mentions one surprising risk – better sanitation made polio more dangerous by altering the age at which it infected people. If I’d written the chapter, I’d have mentioned Ewald’s analysis of how human behavior influences the evolution of strains which are more or less virulent.
There’s good news about nuclear proliferation which has been under-reported – a fair number of countries have abandoned nuclear weapons programs, and a few have given up nuclear weapons. So if there’s any trend, it’s toward fewer countries trying to build them, and a stable number of countries possessing them. The bad news is we don’t know whether nanotechnology will change that by drastically reducing the effort needed to build them.
The chapter on totalitarianism discusses some uncomfortable tradeoffs between the benefits of some sort of world government and the harm that such government might cause. One interesting claim:

totalitarian regimes are less likely to foresee disasters, but are in some ways better-equipped to deal with disasters that they take seriously.

This post is a response to a challenge on Overcoming Bias to spend $10 trillion sensibly.
Here’s my proposed allocation (spending to be spread out over 10-20 years):

  • $5 trillion on drug patent buyouts and prizes for new drugs put in the public domain, with the prizes mostly allocated in proportion to the quality adjusted life years attributable to the drug.
  • $1 trillion on establishing a few dozen separate clusters of seasteads and on facilitating migration of people from poor/oppressive countries by rewarding jurisdictions in proportion to the number of immigrants they accept from poorer / less free regions. (I’m guessing that most of those rewards will go to seasteads, many of which will be created by other people partly in hopes of getting some of these rewards).

    This would also have the side effect of significantly reducing the harm that humans might experience due to global warming or an ice age, since ocean climates have less extreme temperatures, seasteads will probably not depend on rainfall to grow food, and seasteads can move somewhat to locations with better temperatures.
  • $1 trillion on improving political systems, mostly through prizes that bear some resemblance to The Mo Ibrahim Prize for Achievement in African Leadership (but not limited to democratically elected leaders and not limited to Africa). If the top 100 or so politicians in about 100 countries are eligible, I could set the average reward at about $100 million per person. Of course, nowhere near all of them will qualify, so a fair amount will be left over for those not yet in office.
  • $0.5 trillion on subsidizing trading on prediction markets that are designed to enable futarchy. This level of subsidy is far enough from anything that has been tried that there’s no way to guess whether this is a wasteful level.
  • $1 trillion on existential risks
    Some unknown fraction of this would go to persuading people not to work on AGI unless they can provide arguments that they will produce a safe goal system for any AI they create. Once I’m satisfied that the risks associated with AI are under control, much of the remaining money will go toward establishing societies in the asteroid belt and then outside the solar system.
  • $0.5 trillion on communications / computing hardware for everyone who can’t currently afford that.
  • $1 trillion I’d save for ideas I think of later.

I’m not counting a bunch of other projects that would use up less than $100 billion since they’re small enough to fit in the rounding errors of the ones I’ve counted (the Methuselah Mouse prize, desalinization and other water purification technologies, developing nanotech, preparing for the risks of nanotech, uploading, cryonics, nature preserves, etc).

Steve Omohundro has recently written a paper and given a talk (a video should become available soon) on AI ethics with arguments whose most important concerns resemble Eliezer Yudkowsky’s. I find Steve’s style more organized and more likely to convince mainstream researchers than Eliezer’s best attempt so far.
Steve avoids Eliezer’s suspicious claims about how fast AI will take off, and phrases his arguments in ways that are largely independent of the takeoff speed. But a sentence or two in the conclusion of his paper suggests that he is leaning toward solutions which assume multiple AIs will be able to safeguard against a single AI imposing its goals on the world. He doesn’t appear to have a good reason to consider this assumption reliable, but at least he doesn’t show the kind of disturbing certainty that Eliezer has about the first self-improving AI becoming powerful enough to take over the world.
Possibly the most important news in Steve’s talk was his statement that he had largely stopped working to create intelligent software due to his concerns about safely specifying goals for an AI. He indicated that one important insight that contributed to this change of mind came when Carl Shulman pointed out a flaw in Steve’s proposal for a utility function which included a goal of the AI shutting itself off after a specified time (the flaw involves a small chance of physics being different from apparent physics and how the AI will evaluate expected utilities resulting from that improbable physics).