{"id":1636,"date":"2020-11-03T12:13:28","date_gmt":"2020-11-03T20:13:28","guid":{"rendered":"https:\/\/bayesianinvestor.com\/blog\/?p=1636"},"modified":"2023-02-12T09:42:52","modified_gmt":"2023-02-12T17:42:52","slug":"the-precipice","status":"publish","type":"post","link":"https:\/\/bayesianinvestor.com\/blog\/index.php\/2020\/11\/03\/the-precipice\/","title":{"rendered":"The Precipice"},"content":{"rendered":"\n<p>Book review: The Precipice, by Toby Ord.<\/p>\n\n\n\n<p>No, this isn&#8217;t about elections. This is about risks of much bigger disasters. It includes the risks of pandemics, but not the kind that are as survivable as COVID-19.<\/p>\n\n\n\n<p>The ideas in this book have mostly been covered before, e.g. in <a href=\"https:\/\/bayesianinvestor.com\/blog\/index.php\/2008\/09\/25\/global-catastrophic-risks\/\">Global Catastrophic Risks<\/a> (Bostrom and Cirkovic, editors). Ord packages the ideas in a more organized and readable form than prior discussions.<\/p>\n\n\n\n<p>See the <a href=\"https:\/\/slatestarcodex.com\/2020\/04\/01\/book-review-the-precipice\/\">Slate Star Codex review of The Precipice<\/a> for an eloquent summary of the book&#8217;s main ideas.<\/p>\n\n\n\n<!--more-->\n\n\n\n<p>Most of The Precipice is written for a fairly broad audience, but I expect that many readers will have difficulty with Ord&#8217;s analysis of the probabilities of events that have not yet happened. Those parts are a good deal easier to read if you understand the basics of the Bayesian approach to probability. It wouldn&#8217;t be very practical to point those readers to an expert such as <a href=\"https:\/\/bayesianinvestor.com\/blog\/index.php\/2010\/07\/31\/probability-theory\/\">E.T. 
Jaynes<\/a>.<\/p>\n\n\n\n<p>How much can we trust experts in any particular field (particularly AI researchers) to take appropriate precautions?<\/p>\n\n\n\n<p>There&#8217;s often no good alternative to trusting them, but Ord documents evidence that experts have a history of carelessness when it comes to tiny risks of catastrophe. Some examples:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>They were careless about evaluating whether the first nuclear explosion would ignite the atmosphere.<\/li><li>Precautions in the Apollo Program were inadequate to keep lunar microbes from contaminating Earth.<\/li><\/ul>\n\n\n\n<p>The current pandemic has demonstrated that nations often won&#8217;t prepare for risks unless many people remember a similar event that caused serious harm.<\/p>\n\n\n\n<p>Can we hope for government to do any better? Ord writes:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p>Another political reason concerns the sheer gravity of the issue. 
When I have raised the topic of existential risk with senior politicians and civil servants, I have encountered a common reaction: genuine deep concern paired with a feeling that addressing the greatest risks facing humanity was &#8220;above my pay grade.&#8221;<\/p><\/blockquote>\n\n\n\n<p>My biggest disagreement with the book involves the framework of standard total utilitarianism, specifically the part relating to population ethics where we&#8217;re supposed to value people living billions of years in the future the same as we value people living today.<\/p>\n\n\n\n<p>See Alex Mennen&#8217;s <a href=\"https:\/\/www.lesswrong.com\/posts\/8FRzErffqEW9gDCCW\/against-the-linear-utility-hypothesis-and-the-leverage\">Against the Linear Utility Hypothesis and the Leverage Penalty<\/a> for hints as to why total utilitarianism is likely to conflict with observed human preferences in situations such as Pascal&#8217;s Mugging.<\/p>\n\n\n\n<p>Complete equality implies that nearly all of our available attention ought to be on how our actions affect the far future, unless we&#8217;re <a href=\"https:\/\/www.overcomingbias.com\/2010\/03\/further-than-africa.html\">implausibly clueless<\/a> at guessing the long-term effects of our actions.<\/p>\n\n\n\n<p>I expect some of you are saying that our probability of usefully affecting the distant future is really tiny. Is it as remote as, say, getting hit by a meteorite, while being sucked up by a tornado, on the day that you win the Powerball? If so, then that&#8217;s <a href=\"https:\/\/slatestarcodex.com\/2015\/08\/12\/stop-adding-zeroes\/\">still not extreme enough<\/a> to be much of a defense against a utilitarian obligation to devote most of your life to helping distant future people.<\/p>\n\n\n\n<p>I had previously thought that a <a href=\"https:\/\/plato.stanford.edu\/entries\/ramsey-economics\/#ZeroDiscFutuWellBein\">discount rate<\/a> was a good enough way to reconcile human preferences with utilitarianism. 
Ord convinced me that discount rates aren&#8217;t quite the right way to resolve this tension.<\/p>\n\n\n\n<p>One answer that I toyed with is that additional lives are valuable in proportion to how much uniqueness they add to the world. In a world with 10^100 people, an additional person adds substantially fewer unique qualities than is the case when the population is as tiny as it is today. I also imagine that if I live a billion years without my personality frequently changing beyond recognition, then I&#8217;ll end up mostly repeating experiences. My intuition says that repeating experiences is better than non-existence, but less valuable than a life with some novel experiences.<\/p>\n\n\n\n<p>But I haven&#8217;t been able to convince myself that adjusting for uniqueness will be enough to resolve the tension.<\/p>\n\n\n\n<p>My main answer to questions like this is based on <a href=\"https:\/\/bayesianinvestor.com\/blog\/index.php\/2017\/09\/13\/dealism\/\">dealism<\/a>.<\/p>\n\n\n\n<p>I&#8217;m not willing to be a pure altruist. I value my own life more than I value the life of a person in a distant galaxy. There are plenty of ways to improve the world by creating agreements and cultures that move us toward valuing people more equally than we currently do. In fact, that&#8217;s a nontrivial part of how societies become more civilized. But I don&#8217;t see that kind of argument being enough to generate perfect equality. 
Demanding full equality today for people of the distant future risks encouraging false pretenses of equality, without providing much hope of achieving genuine egalitarian values.<\/p>\n\n\n\n<p>I still consider the so-called <a href=\"https:\/\/bayesianinvestor.com\/blog\/index.php\/2009\/09\/25\/turning-the-repugnant-conclusion-into-utopia\/\">Repugnant Conclusion<\/a> to be close to what we should aim for in the long run, and believe that it needs only modest adjustments.<\/p>\n\n\n\n<p>When this book first came out, I intended to give it high priority. Yet I ended up delaying this review by nearly 8 months from my original plan, due to other tasks that felt more urgent, involving the pandemic and politics. But some of that delay was due to other people talking a lot about those topics, and to the steady stream of new information about them. Those are not quite good enough reasons to prioritize them over existential risks.<\/p>\n\n\n\n<p>The most disturbing news in The Precipice is that we haven&#8217;t yet observed any slowdown in the rate at which we&#8217;re discovering new x-risks. That suggests there&#8217;s a significant chance that there are important risks to which we haven&#8217;t started paying attention.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Book review: The Precipice, by Toby Ord. No, this isn&#8217;t about elections. This is about risks of much bigger disasters. It includes the risks of pandemics, but not the kind that are as survivable as COVID-19. The ideas in this book have mostly been covered before, e.g. in Global Catastrophic Risks (Bostrom and Cirkovic, editors). 
[&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"footnotes":"","jetpack_publicize_message":"","jetpack_is_tweetstorm":false,"jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","enabled":false}}},"categories":[26,22,23],"tags":[67,128],"class_list":["post-1636","post","type-post","status-publish","format-standard","hentry","category-ai","category-books","category-life_univ_etc","tag-equality","tag-existential-risks"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p80O1l-qo","_links":{"self":[{"href":"https:\/\/bayesianinvestor.com\/blog\/index.php\/wp-json\/wp\/v2\/posts\/1636","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/bayesianinvestor.com\/blog\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/bayesianinvestor.com\/blog\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/bayesianinvestor.com\/blog\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/bayesianinvestor.com\/blog\/index.php\/wp-json\/wp\/v2\/comments?post=1636"}],"version-history":[{"count":1,"href":"https:\/\/bayesianinvestor.com\/blog\/index.php\/wp-json\/wp\/v2\/posts\/1636\/revisions"}],"predecessor-version":[{"id":1637,"href":"https:\/\/bayesianinvestor.com\/blog\/index.php\/wp-json\/wp\/v2\/posts\/1636\/revisions\/1637"}],"wp:attachment":[{"href":"https:\/\/bayesianinvestor.com\/blog\/index.php\/wp-json\/wp\/v2\/media?parent=1636"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/bayesianinvestor.com\/blog\/index.php\/wp-json\/wp\/v2\/categories?post=1636"},{"taxonomy":"post_tag","embeddable":true,"href":"http
s:\/\/bayesianinvestor.com\/blog\/index.php\/wp-json\/wp\/v2\/tags?post=1636"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}