Book review: What We Owe the Future, by William MacAskill.
WWOTF is a mostly good book that can’t quite decide whether it’s part of an activist movement, or aimed at a small niche of philosophy.
MacAskill wants to move us closer to utilitarianism, particularly in the sense of evaluating the effects of our actions on people who live in the distant future. Future people are real, and we have some sort of obligation to them.
WWOTF describes humanity’s current behavior as reckless, like an imprudent teenager. MacAskill almost killed himself as a teen by taking a poorly thought-out risk. Humanity is taking similar thoughtless risks.
MacAskill carefully avoids endorsing the aspect of utilitarianism that says everyone must be valued equally. That saves him from a number of conclusions that make utilitarianism unpopular. E.g. it allows him to be uncertain about how much to care about animal welfare. It allows him to ignore the difficult arguments about the morally correct discount rate.
When is Longtermism Important?
WWOTF ends with two examples of how longtermism supposedly helps us: AI safety, and solar power.
I’ve seen successes from those motivated explicitly by longtermist reasoning, too. I’ve seen the idea of “AI safety” … go from the fringiest of fringe concerns to a respectable area of research …
I see correlations between longtermist reasoning and an interest in AI safety. What’s causing that?
Nick Bostrom has played key roles in getting influential people to respect both AI safety and longtermism. Yet I don’t quite see signs that longtermism motivates his interest in AI safety.
Eliezer Yudkowsky has played a similar role, and has been fairly explicit that longtermism is not at all necessary to persuade us to focus on AI. Here’s how he sees the connection:
“What’s actually going on afaict:
- People who value life and sentience, and think sanely, know that the future galaxies are the real value at risk.
- Nobody else can act about AGI killing everyone very soon, because they’ve given up on life, or get distracted too easily.”
Or as Scott Alexander puts it: “In The Very Short Run, We’re All Dead”. Longtermism could easily distract people from AI risk.
What about solar? WWOTF tells us that farsighted Greens in Germany enabled the solar industry to scale up to where it became competitive.
I see conflicting evidence here, and feel confused. The affordability of solar panels depended on an experience curve: we needed increased production volumes in order to get cheap panels. German demand must have caused some increase in production, so it must have sped up that process.
However, Swanson’s law shows fairly steady progress before the Greens acted. When I look closely, I see signs that production volumes increased a bit faster after Germany passed the relevant law. But it sure looks like there was enough demand from other sources that progress would have continued without Germany. Some of that progress was driven by Moore’s law, since the semiconductor industry develops technologies that are also important to the solar industry.
So we’re likely adopting solar a year or two earlier than we should have expected, due to Germany’s mix of longtermism and virtue signaling. (It would have been more effective to install those solar panels near the equator, where they’d offset more fossil fuels than in Germany. But that’s maybe a small enough issue that I should overlook it.)
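Here’s the back-of-the-envelope arithmetic behind that “year or two” guess, as a rough sketch. The 20% learning rate is roughly Swanson’s figure; the growth rate and the size of the German demand boost are numbers I made up for illustration, not anything from WWOTF:

```python
import math

# Rough sketch with made-up numbers. Swanson's law: module prices fall
# roughly 20% for each doubling of cumulative production.
LEARNING_RATE = 0.20   # price drop per doubling of cumulative volume (roughly Swanson's figure)
ANNUAL_GROWTH = 0.30   # assumed steady growth rate of cumulative production
GERMAN_BOOST = 0.60    # assumed extra cumulative volume attributable to German demand

def module_price(cumulative: float, p0: float = 1.0, c0: float = 1.0) -> float:
    """Relative module price under a constant learning rate."""
    return p0 * (cumulative / c0) ** math.log2(1 - LEARNING_RATE)

# With cumulative volume growing steadily, a one-time boost to that volume
# reaches any given price point earlier by a fixed number of years:
years_earlier = math.log(1 + GERMAN_BOOST) / math.log(1 + ANNUAL_GROWTH)

print(f"price after one doubling: {module_price(2.0):.2f}x")     # ~0.80x
print(f"adoption pulled forward by ~{years_earlier:.1f} years")  # ~1.8 years with these numbers
```

The point isn’t the exact number; it’s that, on a steady experience curve, a one-time boost to cumulative volume mainly pulls the cost curve forward in time rather than changing where it ends up.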
I’ll count that as a modest benefit from longtermism.
WWOTF has plenty of tension between pointing to near-term reasons for doing good deeds, and then crediting longtermism, e.g.:
Once one accounts for air pollution, rapidly decarbonizing the world economy is justified by the health benefits alone. … Decarbonisation is a proof of concept for longtermism.
Nearly half of WWOTF’s chapter on extinction risk is about engineered pandemics. MacAskill cites expert opinion that there’s a 1% chance we’ll go extinct by 2100 due to an engineered pandemic. That seems high to me, if only because alternative approaches to malicious harm are easier and more reliable.
Longtermism undoubtedly urges us to devote more resources to pandemic prevention than does the belief that we should only care about the next 20 years. But the difference between the 20-year and longtermist views seems small compared to how much both of them differ from our current plans.
Effective Altruism
MacAskill often gives the impression that he’s pushing us to combine longtermism and Effective Altruism into one movement.
E.g. his interest in preventing pandemics makes sense as a relatively cheap way to save lives in the next few decades.
It’s not clear that becoming longtermist makes us more effective at helping future generations. WWOTF gives us the example of the Qin dynasty, whose plans for a 10,000 generation empire took a mere 15 years to fail. See also The Base Rate of Longtermism Is Bad.
Trying to care about a more distant future tends to push us into far mode thought, making us more hypocritical.
Have I digressed here? If I ignore WWOTF’s last chapter, then I’m tempted to say that the book is focused on abstract philosophy. In which case psychology could be considered outside the book’s scope. But sections such as “Career Choice” and “Building a Movement” convince me that these practical objections are relevant.
Determinism
WWOTF devotes a chapter to convincing us that history can be changed by a handful of people. MacAskill has some clear examples, such as an obviously arbitrary border between North and South Korea.
The obvious drawback with examples like that is that the people who caused the change were pretty clueless about whether they helped future generations.
So MacAskill turns to asking why slavery was abolished. He identifies a few key Quakers who initiated the abolition movement. It’s easy to imagine they had an important influence on when slaves were freed, although the evidence is fairly weak as to what would have happened without those Quakers.
Henrich has convinced me that most of the pressure to abolish slavery came from cultural changes centuries earlier that attracted Protestants to universal rules (MacAskill agrees that something like this might be true). Under that view, it’s pretty hard for the people who caused the key changes to predict the longterm effects. However, the breadth of the changes does weakly suggest that people can make better-than-random guesses about whether the effects will be good.
MacAskill is uncertain whether future societies will be more deterministic. WWOTF devotes a relatively long chapter to the risk that undesirable values will become permanently locked in, suggesting that it poses as much danger as extinction. I see plenty of examples where some sort of lock-in appears to cause arbitrary harm for decades or more. But concern about a lock-in becoming permanent seems to depend on AI having some novel properties. WWOTF isn’t clear enough about those properties to be compelling.
The Long Reflection
MacAskill wants to postpone irreversible changes as long as possible, until we’ve developed mature enough morality to be confident we’re heading in the best direction.
On one level, this is just plain common sense.
Yet it has provoked a good deal of negative reaction.
Some of that is due to the likelihood that the more obvious versions of the Long Reflection would require a powerful world government. For example, it’s hard to get all of civilization to agree to follow the One True Morality if multiple von Neumann probes have seeded colonies outside our solar system. And since it’s hard to predict how small a von Neumann probe could be, anything that looks like it might involve space travel would need to be regulated as strictly as the US regulates nuclear power.
That’s one small hint about how it’s really hard to put everything on hold without crippling innovation, possibly permanently.
My impressions from private conversations are that the smartest advocates of the Long Reflection expect that a single AI will inevitably take over the world, so that adding a Long Reflection isn’t likely to impose much in the way of avoidable costs. But MacAskill doesn’t consider such an AI take-over to be inevitable, nor is it something I consider the default outcome.
At a very different extreme, the Amish manage something vaguely resembling the Long Reflection, while maintaining an approximately anarchist approach to government. That minimizes many risks, at a cost of dramatically slowing progress.
If I squint hard enough, I can imagine a scenario where AIs persuade us to imitate key parts of the Amish approach. Maybe that’s enough to generate a highly desirable Long Reflection. But if that’s what MacAskill is thinking, he’s hiding it fairly well.
MacAskill is probably not endorsing anything dangerous here, but that’s mainly because he’s too abstract and vague for his advice to have much value.
Population Ethics
MacAskill convinced me that it’s pretty hard to be neutral about creating more people whose lives are worth living. He’s talking about intrinsic value here, not evaluating the effects of the additional people on others. Some fairly respectable philosophers have tried to justify the Intuition of Neutrality: “We are in favor of making people happy, but neutral about making happy people.”
I had previously felt a preference for treating additional lives as good, but I couldn’t pinpoint anything clearly bad about neutrality.
MacAskill provides a thought experiment:
A mother suffers from a mild vitamin deficiency, and is considering having a child.
If she has the child now, it will have a mostly good life, with occasional migraines.
If she waits a few months, she’ll have a different child, whose only important difference is no migraines.
The intuition of neutrality says we’re indifferent between her having no child, and having the child with migraines.
It says we’re indifferent between her having no child, and having the migraine-free child.
By transitivity, we’re indifferent between a child with migraines, and a migraine-free child. That can’t be right.
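Stated a bit more formally (my notation, not the book’s), write $A \sim B$ for “neither of $A$ and $B$ is better than the other.” Neutrality plus transitivity gives:

\[
\text{no child} \sim \text{child with migraines}, \qquad \text{no child} \sim \text{migraine-free child}
\]
\[
\Rightarrow\ \text{child with migraines} \sim \text{migraine-free child},
\]

which contradicts the clear judgment that the migraine-free child’s life is better.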
This seems like a strong argument that my endorsement of additional lives should be a bit more than just a personal preference. How much more? I’m unsure.
Someday it could play a role in deciding whether we colonize the galaxy. But my intuition says that decision will be determined almost entirely by other considerations.
Recall that this thought experiment assumes we can ignore how adding a person creates costs and benefits for existing people. That likely confuses people’s intuitions, since we care about the effects on others in most real-world choices that affect population size. There are lots of ideologically charged arguments pressuring us to favor big or small populations, due to effects on scientific progress and resource constraints. MacAskill is fighting a difficult battle to shift our attention to issues which only excite a modest number of philosophers.
Conclusion
The book’s conclusions are more solid than I expected.
I frequently had modest disagreements with his reasoning. His explicit claims are cautious enough that they’re convincing even when supported by questionable arguments. My biggest doubts about the book involve implicit messages that aren’t quite clear enough to argue with.
I can’t help feeling a bit disappointed at how little WWOTF’s advice affects what I ought to be doing.
P.S. – Here are some odd tidbits from his insights into how the future looks bright:
Similarly, in the United States the Black-White happiness gap has closed by two-thirds since the 1970s.
Most of us can’t even imagine the happiest life that has been lived so far:
You all, healthy people, can’t imagine the happiness which we epileptics feel during the second before our fit… I don’t know if this felicity lasts for seconds, hours or months, but believe me, I would not exchange it for all the joys that life may bring.