Book review: The Precipice, by Toby Ord.
No, this isn’t about elections. This is about risks of much bigger disasters. It includes the risks of pandemics, but not the kind that are as survivable as COVID-19.
The ideas in this book have mostly been covered before, e.g. in Global Catastrophic Risks (Bostrom and Cirkovic, editors). Ord packages the ideas in a more organized and readable form than prior discussions.
See the Slate Star Codex review of The Precipice for an eloquent summary of the book’s main ideas.
Most of The Precipice is written for a fairly broad audience, but I expect many readers will have difficulty with Ord’s analysis of the probabilities of events that have not yet happened. Those parts are a good deal easier to read if you understand the basics of the Bayesian approach to probability, but it wouldn’t be very practical to point such readers to an expert treatment like E. T. Jaynes’s.
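To give a flavor of what assigning a probability to something that has never happened can look like, here is a minimal sketch using Laplace’s rule of succession. This is my own choice of illustration, not a method Ord relies on, and the 75-year trial count is a made-up example.

```python
# Laplace's rule of succession: a simple Bayesian estimate for the probability
# of an event that has never been observed. With a uniform prior over the
# event's unknown frequency, after n trials with k occurrences, the posterior
# mean probability of the event on the next trial is (k + 1) / (n + 2).

def rule_of_succession(occurrences: int, trials: int) -> float:
    """Posterior mean probability of the event occurring on the next trial."""
    return (occurrences + 1) / (trials + 2)

# Illustrative only: zero civilization-ending wars in 75 years of the nuclear
# era still leaves a non-negligible estimated annual probability.
print(rule_of_succession(0, 75))   # ~0.013 per year
```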
How much can we trust experts in any particular field (particularly AI researchers) to take appropriate precautions?
There’s often no good alternative to trusting them, but Ord documents evidence that experts have a history of carelessness when it comes to tiny risks of catastrophe. Some examples:
- They were careless about evaluating whether the first nuclear explosion would ignite the atmosphere.
- Precautions in the Apollo Program were inadequate to keep lunar microbes from contaminating Earth.
The current pandemic has demonstrated that nations often won’t prepare for risks unless many people remember a similar event that caused serious problems.
Can we hope for government to do any better? Ord writes:
> Another political reason concerns the sheer gravity of the issue. When I have raised the topic of existential risk with senior politicians and civil servants, I have encountered a common reaction: genuine deep concern paired with a feeling that addressing the greatest risks facing humanity was “above my pay grade.”
My biggest disagreement with the book involves the framework of standard total utilitarianism, specifically the part relating to population ethics where we’re supposed to value people living billions of years in the future the same as we value people living today.
See Alex Mennen’s Against the Linear Utility Hypothesis and the Leverage Penalty for hints as to why total utilitarianism is likely to conflict with observed human preferences in situations such as Pascal’s Mugging.
Complete equality implies that nearly all of our available attention ought to be on how our actions affect the far future, unless we’re implausibly clueless at guessing the long-term effects of our actions.
I expect some of you are saying that our probability of usefully affecting the distant future is really tiny. Is it as remote as, say, getting hit by a meteorite, while being sucked up by a tornado, on the day that you win the Powerball? If so, then that’s still not extreme enough to be much of a defense against a utilitarian obligation to devote most of your life to helping distant future people.
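To see why, here is a back-of-the-envelope version of the utilitarian calculation. All of the numbers are illustrative assumptions of mine, not estimates from The Precipice.

```python
# Rough expected-value arithmetic for influencing the far future.
# Every number below is an illustrative assumption, not a figure from the book.

p_influence = 1e-19      # roughly "meteorite + tornado + Powerball" territory
future_lives = 1e30      # a modest guess at the number of potential future people
present_lives = 8e9      # people alive today

expected_future_lives_affected = p_influence * future_lives
print(expected_future_lives_affected)                  # 1e11
print(expected_future_lives_affected / present_lives)  # ~12.5
```

Under equal weighting of present and future lives, even that remote a chance of helping the far future carries more expected value than helping everyone alive today.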
I had previously thought that a discount rate was a good enough way to reconcile human preferences with utilitarianism. Ord convinced me that discount rates aren’t quite the right way to resolve this tension.
One answer that I toyed with is that additional lives are valuable in proportion to how much uniqueness they add to the world. In a world with 10^100 people, an additional person adds substantially fewer unique qualities than is the case when the population is as tiny as it is today. I also imagine that if I live a billion years without my personality frequently changing beyond recognition, then I’ll end up mostly repeating experiences. My intuition says that repeating experiences is better than non-existence, but less valuable than a life with some novel experiences.
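Here is a toy version of that idea, just to make the shape of the adjustment concrete. The functional form (marginal value falling off logarithmically with population) is my own arbitrary choice, not something from the book.

```python
import math

# Toy comparison of two ways to value adding one person to a population of size n.
# The uniqueness-adjusted form is purely illustrative: it assumes each new person
# adds fewer experiences that no one else already has as the population grows.

def marginal_value_total_util(n: float) -> float:
    """Standard total utilitarianism: every additional life counts the same."""
    return 1.0

def marginal_value_uniqueness(n: float) -> float:
    """Illustrative uniqueness adjustment: marginal value shrinks with population."""
    return 1.0 / math.log10(n + 10)

for population in (1e9, 1e20, 1e100):
    print(population,
          marginal_value_total_util(population),
          round(marginal_value_uniqueness(population), 3))
```

The logarithmic falloff is just one arbitrary choice; any slowly diminishing function gives a similar picture.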
But I haven’t been able to convince myself that adjusting for uniqueness will be enough to resolve the tension.
My main answer to questions like this is based on dealism.
I’m not willing to be a pure altruist. I value my own life more than I value the life of a person in a distant galaxy. There are plenty of ways to improve the world by creating agreements / cultures which move us in the general direction of valuing people more equally than we have been doing. In fact, that’s a nontrivial part of how societies become more civilized. But I don’t see that kind of argument being enough to generate perfect equality. Demanding full equality today for people of the distant future risks encouraging false pretenses of equality, without providing much hope of achieving genuine egalitarian values.
I still consider the so-called Repugnant Conclusion to be close to what we should aim for in the long run; it needs only modest adjustments.
When this book first came out, I intended to give it high priority. Yet I ended up delaying this review by nearly eight months from my original plan, because other tasks felt more urgent, mostly involving the pandemic and politics. But part of that urgency came from other people talking a lot about those topics, and from there being lots of new information about them. Those are not quite good enough reasons to prioritize them over existential risks.
The most disturbing news in The Precipice is that we haven’t yet observed any slowdown in the rate at which we’re discovering new x-risks. That suggests there’s a significant chance that there are important risks to which we haven’t started paying attention.
> They were careless about evaluating whether the first nuclear explosion would ignite the atmosphere.
Many years ago, by pure chance, I picked up a 1943 study about exactly this question, which had been recently declassified. They made a serious evaluation of the question and concluded that there was a safety margin of about 10^7 (the energy liberated would be roughly 10^-7 of the energy needed to support a self-sustaining fusion reaction in the atmosphere), even assuming an igniting blast much larger than anything they knew how to create. So, at least judging from that single study, someone took it seriously enough, evaluated the risk, and found it low. Of course I have no knowledge of whether they left out something else important, did their physics or math wrong, etc.
Bruce,
Ord’s main reason for concern is the risk of model error. The group that made that study also made another estimate about nuclear ignition, specifically that lithium-7 would not ignite. The Castle Bravo test showed that was wrong, and it looks like that mistake caused at least one death.
I see. How wrong was their other estimate? They’d have to be pretty wrong for a ratio they estimated as 10^-7 to be more than 1.
(I’m not arguing any position here, since I have no knowledge about it — just curious.)
The bomb produced 2.5 times the expected energy. That implies lithium-7 released about as much energy as lithium-6, whereas they expected lithium-7 to produce approximately(?) zero energy.
That does sound concerning… is everything about it understood now? That is, is it understood in detail, what erroneous old model produced the old prediction, and is there a correct new model (not just experimental data) that agrees with what happened?
BTW, that is all just a detail about one anecdote — I completely acknowledge your and Ord’s central point, which I understand to be (in part) “prediction errors occur, and people in charge of projects are heavily biased towards making them happen as opposed to being cautious in ways that might delay or stop them”. Short of altering human nature, or a political/social revolution of an unprecedented kind (which we have no evidence is possible in a good direction), is there any realistic solution? I do consider space colonization realistic (at least technically) — is that the easiest/quickest solution?
The world is in the midst of what may be the most deadly pandemic of the past 100 years. Threats to humanity, and how we address them, define our time. We live during the most important era of human history. In the twentieth century, we developed the means to destroy ourselves – without developing the moral framework to ensure we won’t. This is the Precipice, and how we respond to it will be the most crucial decision of our time. Toby Ord explores the risks to humanity’s future, from the familiar man-made threats of climate change and nuclear war, to the potentially greater, more unfamiliar threats from engineered pandemics and advanced artificial intelligence. Reading this book was fun and scary at the same time.