existential risks

All posts tagged existential risks

Book review: If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, by Eliezer Yudkowsky and Nate Soares.

[This review is written, more than my usual posts, with a Goodreads audience in mind. I will write a more LessWrong-oriented post with a more detailed description of the ways in which the book looks overconfident.]

If you’re not at least mildly worried about AI, Part 1 of this book is essential reading.

Please read If Anyone Builds It, Everyone Dies (IABIED) with Clarke’s First Law in mind (“When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”). The authors are overconfident in dismissing certain safety strategies. But their warnings about what is possible ought to worry us.

I encourage you to (partly) judge the book by its cover: dark, implausibly certain of doom, and endorsed by a surprising set of national security professionals who had previously been very quiet about this topic. But only one Nobel Prize winner.

Will AI Be Powerful Soon?

The first part of IABIED focuses on what seems to be the most widespread source of disagreement: will AI soon become powerful enough to conquer us?

There are no clear obstacles to AIs becoming broadly capable of outsmarting us.

AI developers only know how to instill values that roughly approximate the values that they intend to instill.

Maybe the AIs will keep us as pets for a while, but they’ll have significant abilities to design entities that better satisfy what the AIs want from their pets. So unless we train the AIs such that we’re their perfect match for a pet, they may discard us for better models.

For much of Part 1, IABIED is taking dangers that experts mostly agree are real, and concluding that the dangers are much worse than most experts believe. IABIED’s arguments seem relatively weak when they’re most strongly disagreeing with more mainstream experts. But the book’s value doesn’t depend very much on the correctness of those weaker arguments, since merely reporting the beliefs of experts at AI companies would be enough for the book to qualify as alarmist.

I’m pretty sure that over half the reason people are skeptical of claims such as IABIED’s is that they expect technology to be consistently overhyped.

It’s pretty understandable that a person who has not focused much attention on AI assumes it will work out like a typical technology.

An important lesson for becoming a superforecaster is to start from the assumption that nothing ever happens: the future will mostly be like the past, and a large fraction of the claims that excite the news media turn out not to matter for forecasting, even though the media try to get your attention by persuading you that they do matter.

The heuristic that nothing ever happens has improved my ability to make money off the stock market, but the exceptions to that heuristic are still painful.

The most obvious example is COVID. I was led into complacency by a century of pandemics that caused less harm to the US than alarmists had led us to expect.

Another example involves hurricane warnings. The news media exaggerate the dangers of typical storms enough that when a storm such as Katrina comes along, viewers and newscasters alike find it hard to take accurate predictions seriously.

So while you should start with a pretty strong presumption that apocalyptic warnings are hype, it’s important to be able to change your mind about them.

What evidence is there that AI is exceptional enough that you should evaluate it carefully?

The easiest piece of news to understand is that Geoffrey Hinton, who won a Nobel Prize for helping AI get where it is today, worries that his life’s work was a mistake.

There’s lots of other evidence. IABIED points to many ways in which AI has exceeded human abilities as fairly good evidence of what might be possible for AI. Alas, there’s no simple analysis that tells us what’s likely.

If I were just starting to learn about AI, I’d feel pretty confused as to how urgent the topic is. But I’ve been following it for a long time. E.g. I wrote my master’s thesis in 1993 on neural nets, correctly predicting that they would form the foundation for AI. So you should consider my advice on this topic to be better than random. I’m telling you that something very important is happening.

How Soon?

I’m concerned that IABIED isn’t forceful enough about the “soon” part.

I’ve been convinced that AI will soon be powerful by a wide variety of measures of AI progress (e.g. these graphs, but also my informal estimates of how wide a variety of tasks it can handle). There are many trend lines that suggest AI will surpass humans in the early 2030s.
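To give a concrete feel for the arithmetic behind such trend lines, here is a toy extrapolation sketch. Every number in it is a made-up placeholder for illustration, not a measurement from any real benchmark:

```python
import math

# Toy trend extrapolation with made-up placeholder numbers (not real benchmark data).
current_task_hours = 1.0     # assumed: length of tasks AI handles reliably today
doubling_time_years = 0.6    # assumed: how quickly that task length doubles
target_task_hours = 160.0    # assumed target: roughly a human work-month

doublings = math.log2(target_task_hours / current_task_hours)
years = doublings * doubling_time_years
print(f"About {years:.1f} years to reach the target, under these assumptions")
```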

Others have tried the general approach of using such graphs to convince people, with unclear results. But this is one area where IABIED carefully avoids overconfidence.

Part 2 describes a detailed, somewhat plausible scenario of how an AI might defeat humanity. This part of the book shouldn’t be important, but probably some readers will get there and be surprised to realize that the authors really meant it when they said that AI will be powerful.

A few details of the scenario sound implausible. I agree with the basic idea that it would be unusually hard to defend against an AI attack. Yet it seems hard to describe a really convincing scenario.

A more realistic scenario would likely sound a good deal more mundane. I’d expect persuasion, blackmail, getting control of drone swarms, and a few other things like that. The ASI would combine them in ways that rely on evidence too complex to fit in a human mind. Including such a scenario in the book would have been futile, because skeptics wouldn’t come close to understanding why the strategy would work.

AI Company Beliefs

What parts of this book do leaders of AI companies disagree with? I’m fairly sure that they mostly agree that Part 1 of IABIED points to real risks. Yet they mostly reject the conclusion of the book’s title.

Eight years ago I wrote some speculations on roughly this topic. The main point that has changed since then is that believing “the risks are too distant” has become evidence that the researcher is working on a failed approach to AI.

This time I’ll focus mainly on the leaders of the four or so labs that have produced important AIs. They all seem to have admitted at some point that their strategies are a lot like playing Russian Roulette, for a decent shot at creating utopia.

What kind of person is able to become such a leader? It clearly requires both unusual competence and some recklessness.

I feel fairly confused as to whether they’ll become more cautious as their AIs become more powerful. I see a modest chance that they are accurately predicting which of their AIs will be too weak to cause a catastrophe, and that they will pivot before it’s too late. The stated plans of AI companies are not at all reassuring. Yet they likely understand the risks better than does anyone who might end up regulating AI.

Policies

I want to prepare for a possible shutdown of AI development circa 2027. That’s when my estimate of its political feasibility gets up to about 30%.

I don’t want a definite decision on a shutdown right now. I expect that AIs of 2027 will give us better advice than we have today as to whether a shutdown is wise, and how draconian it needs to be. (IABIED would likely claim that we can’t trust those AIs. That seems to reflect an important disagreement about how AI will work as it approaches human levels.)

Advantages of waiting a bit:

  • better AIs to help enforce the shutdown; in particular, better ability to reliably evaluate whether something violates the shutdown
  • better AIs to help decide how long the shutdown needs to last

I think I’m a bit more optimistic than IABIED about AI companies’ ability to judge whether their next version will be dangerously powerful.

I’m nervous about labeling IABIED’s proposal as a shutdown, when current enforcement abilities are rather questionable. It seems easier for AI research to evade restrictions than is the case with nuclear weapons. Developers who evade the law are likely to take less thoughtful risks than what we’re currently on track for.

I’m hoping that with AI support in 2027 it will be possible to regulate the most dangerous aspects of AI progress while leaving some capability progress intact, e.g. restricting research that increases AI agentiness, but not research that advances prediction ability. I see current trends as on track to produce superhuman predictions before AI reaches superhuman steering abilities. AI companies could do more than they currently do to widen the gap between those two categories (see Drexler’s CAIS for hints). And most of what we need for safety is superhuman predictions of which strategies have which risks (IABIED clearly disagrees with that claim).

IABIED thinks that the regulations they propose would delay ASI by decades. I’m unclear how confident they are about that prediction. It seems important to have doubts about how much of a delay is feasible.

A key component of their plan involves outlawing some AI research publications. That is a tricky topic, and their strategy is less clearly explained than I had hoped.

I’m reminded of a time in the late 20th century, when cryptography was regulated in a way that led to t-shirts describing the RSA algorithm being classified as a munition that could not be exported. Needless to say, that regulation was not very effective. This helps illustrate why restricting software innovation is harder than a casual reader would expect.
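To make that concrete, here is a toy, textbook-style version of RSA. The tiny primes make it useless as real cryptography; the point is only how little code the core algorithm requires:

```python
# Toy textbook RSA with tiny primes -- insecure, shown only to illustrate how short the algorithm is.
p, q = 61, 53
n = p * q                # public modulus
phi = (p - 1) * (q - 1)
e = 17                   # public exponent, chosen coprime to phi
d = pow(e, -1, phi)      # private exponent: modular inverse of e (Python 3.8+)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

assert decrypt(encrypt(42)) == 42
```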

IABIED wants to outlaw the publication of papers such as the famous Attention Is All You Need paper that introduced the transformer algorithm. But that leaves me confused as to how broad a ban they hope for.
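Part of my confusion comes from how little it takes to state that paper’s core idea. Here is a rough numpy sketch of scaled dot-product attention, the central computation behind the transformer (my own simplified rendering, not the paper’s code):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: a couple of matrix multiplications plus a softmax.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    return softmax(scores) @ V
```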

Possibly none of the ideas that need to be banned are quite simple enough to be readily described on a t-shirt, but I’m hesitant to bet on that. I will bet that some of them would be hard for a regulator to recognize as relevant to AI. Matrix multiplication improvements are an example of a borderline case.

Low-level optimizations such as that could significantly influence how much compute is needed to create a dangerous AI.

In addition, smaller innovations, especially those that just became important recently, are somewhat likely to be reinvented by multiple people. So I expect that there is a nontrivial set of advances for which a ban on publication would delay progress for less than a year.

In sum, a decades-long shutdown might require more drastic measures than IABIED indicates.

The restriction on GPU access also needs some clarification. It’s currently fairly easy to figure out which chips matter. But with draconian long-term restrictions on anything that’s classified as a GPU, someone is likely to get creative about building powerful chips that don’t fit the GPU classification. It doesn’t seem too urgent to solve this problem, but it’s important not to forget it.

IABIED often sounds like it’s saying that a long shutdown is our only hope. I doubt they’d explicitly endorse that claim. But I can imagine that the book will nudge readers toward that conclusion.

I’m more optimistic than IABIED about other strategies. I don’t expect we’ll need a genius to propose good solutions. I’m fairly convinced that the hardest part is distinguishing good, but still risky, solutions from bad ones when we see them.

There are more ideas than I have time to evaluate for making AI development safer. Don’t let IABIED talk you into giving up on all of them.

Conclusion

Will IABIED be good enough to save us? It doesn’t seem persuasive enough to directly change the minds of a large fraction of voters. But it’s apparently good enough that important national security people have treated it as a reason to go public with their concerns. IABIED may prove to be highly valuable by persuading a large set of people that they can express their existing concerns without being branded as weird.

We are not living in normal times. Ask your favorite AI what AI company leaders think of the book’s arguments. Look at the relevant prediction markets.

Continue Reading

A group of people from MIRI have published a mostly good introduction to the dangers of AI: The Problem. It is a step forward in improving the discussion of catastrophic risks from AI.

I agree with much of what MIRI writes there. I strongly agree with their near-term policy advice of prioritizing the creation of an off switch.

I somewhat disagree with their advice to halt (for a long time) progress toward ASI. We ought to make preparations in case a halt turns out to be important. But most of my hopes route through strategies that don’t need a halt.

A halt is both expensive and risky.

My biggest difference with MIRI is about how hard it is to adequately align an AI. Some related differences involve the idea of a pivotal act, and the expectation of a slippery slope between human-level AI and ASI.

Continue Reading

My recent post Are Intelligent Agents More Ethical? criticized some brief remarks by Scott Sumner.

Sumner made a more sophisticated version of those claims in the second half of this Doom Debate.

His position sounds a lot like the moral realism that has caused many people to be complacent about AI taking over the world. But he’s actually using an analysis that follows Richard Rorty’s rejection of standard moral realism. Which seems to mean there’s a weak sense in which morality can be true, but in a socially and historically contingent fashion. If I understand that correctly, I approve.

Continue Reading

This post is a response to a claim by Scott Sumner in his conversation at LessOnline with Nate Soares, about how ethical we should expect AIs to be.

Sumner sees a pattern of increasing intelligence causing agents to be increasingly ethical, and sounds cautiously optimistic that such a trend will continue when AIs become smarter than humans. I’m guessing that he’s mainly extrapolating from human trends, but extrapolating from trends in the animal kingdom should produce similar results (e.g. the cooperation between single-celled organisms that gave the world multicellular organisms).

I doubt that my response is very novel, but I haven’t seen a clear enough articulation of the ideas in this post.

Continue Reading

AI 2027 portrays two well-thought-out scenarios for how AI is likely to impact the world toward the end of this decade.

I expect those scenarios will prove to be moderately wrong, but close enough to be scary. I also expect that few people will manage to make forecasts that are significantly more accurate.

Here are some scattered thoughts that came to mind while I read AI 2027.

The authors are fairly pessimistic. I see four key areas where their assumptions seem to lead them to see more danger than do more mainstream experts. They see:

  • a relatively small capabilities lead being enough for a group to conquer the world
  • more difficulty of alignment
  • more difficulty of detecting deception
  • AI companies being less careful than is necessary

I expect that the authors are appropriately concerned about roughly two of these assumptions, and a bit too pessimistic about the others. I’m hesitant to bet on which assumptions belong in which category.

They don’t focus much on justifying those assumptions. That’s likely wise, since prior debates on those topics have not been very productive. Instead, they’ve focused more on when various changes will happen.

This post will focus on aspects of the first two assumptions for which I expect further analysis to be relatively valuable.

Continue Reading

I’ve been creating prediction markets on Manifold in order to better predict AI strategies. Please trade in them.

If I get a bit more trading in these markets, I will create more AI-related markets. Stay tuned here, or follow me on Manifold.

Book review: Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World, by Darren McKee.

This is by far the best introduction to AI risk for people who know little about AI. It’s appropriate for a broader class of readers than most layman-oriented books.

It was published 14 months ago. In this rapidly changing field, most AI books say something that gets discredited by the time they’re that old. I found no clear example of such obsolescence in Uncontrollable (but read on for a set of controversial examples).

Nearly everything in the book was familiar to me, yet the book prompted me to reflect better, thereby changing my mind modestly – mostly re-examining issues that I’ve been neglecting for the past few years, in light of new evidence.

The rest of this review will focus on complaints, mostly about McKee’s overconfidence. The features that I complain about reduce the value of the book by maybe 10% compared to the value of an ideal book. But that ideal book doesn’t exist, and I’m not wise enough to write it.

Continue Reading

Book review: Genesis: Artificial Intelligence, Hope, and the Human Spirit, by Henry A. Kissinger, Eric Schmidt, and Craig Mundie.

Genesis lends a bit of authority to concerns about AI.

It is a frustrating book. It took more effort for me to read than it should have. The difficulty stems not from complex subject matter (although the topics are complex), but from a peculiarly alien writing style that transcends mere linguistic differences – though Kissinger’s German intellectual heritage may play a role.

The book’s opening meanders through historical vignettes whose relevance remains opaque, testing my patience before finally addressing AI.

Continue Reading

TL;DR:

  • Corrigibility is a simple and natural enough concept that a prosaic AGI can likely be trained to obey it.
  • AI labs are on track to give superhuman(?) AIs goals which conflict with corrigibility.
  • Corrigibility fails if AIs have goals which conflict with it.
  • AI labs are not on track to find a safe alternative to corrigibility.

This post is mostly an attempt to distill and rewrite Max Harm’s Corrigibility As Singular Target Sequence so that a wider audience understands the key points. I’ll start by mostly explaining Max’s claims, then drift toward adding some opinions of my own.

Continue Reading