In key centers of power, the Overton window for AI dangers is shifting.
The first sign is a surprising reaction to the book If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All by Eliezer Yudkowsky and Nate Soares.
Their opinions have been hovering on the edge of what’s considered acceptable belief for the past few years. Influential people in Washington DC have been reluctant to talk about the possibility that AI might take over the world. Now former Fed chair Ben Bernanke has endorsed the book!
More importantly, influential national security professionals seem to have mostly decided that the topic needs to be publicized:
But among national security professionals, I think we only approached seven of them. Five of them gave strong praise, one of them (Shanahan) gave a qualified statement, and the seventh said they didn’t have time.
I don’t like the book’s title, which reflects significant overconfidence in a simple scenario that doesn’t seem right to me. I’ve disagreed with the authors for quite a while about some of the details, and I expect an outcome that is messier and harder to predict. But the book likely comes closer than most other sources to proposing appropriate policies for handling AI. I’ve pre-ordered it, and expect to review it shortly after it is published.
The next sign is that on June 25 some elected officials started voicing concern that job losses and China winning an AI race might not be the biggest dangers, and that AI itself might take over the world. See the reporting from Shakeel Hashim and Peter Wildeford.
Advice
What should you be doing about this?
My top suggestion is to donate to The Center for AI Policy (CAIP). It seems to be the only group in a position to competently advise overworked congressional aides on how to evaluate new AI policies. As of last month, they were on the verge of shutting down due to lack of funding. I donated $30k, and am considering further donations. But unless a few other donors give similar amounts each month, my donations won’t be enough to keep their team from leaving for other careers.
I’ve also heard good things about Americans for Responsible Innovation (ARI), but haven’t found time to evaluate them. I get the impression that they have less expertise on the biggest AI risks than CAIP.
I expect important overlap between Washington DC waking up to AI and the average investor waking up to AI. So far, investor opinion has been changing more gradually than political opinion, though it began shifting earlier. I don’t know how much the shift in political discourse will influence markets, but it seems likely that some investors, particularly risk-averse institutional investors, will move further into AI-related stocks once they can point to clear evidence that belief in transformative AI is no longer fringe.
Nate and Eliezer have been urging us all to pre-order If Anyone Builds It, Everyone Dies in order to get it onto best-seller lists. My impression is that the book has already attracted enough interest that this matters less than it did. They seem to have timed the publication adeptly to coincide with a surge in public interest. Instead, I’ll suggest buying it to help you understand the policy choices we might soon face. You might want some inkling of how those choices will affect your investments.
We live in interesting times.