Book review: Four Battlegrounds: Power in the Age of Artificial Intelligence, by Paul Scharre.
Four Battlegrounds is often a thoughtful, competently written book on an important topic. It is likely the least pleasant, and most frustrating, book fitting that description that I have ever read.
The title’s battlegrounds refer to data, compute, talent, and institutions. Those seem like important resources that will influence military outcomes. But it seems odd to label them as battlegrounds. Wouldn’t resources be a better description?
Scharre knows enough about the US military that I didn’t detect flaws in his expertise there. He has learned enough about AI to avoid embarrassing mistakes. That is, he managed to avoid making claims that were falsified by AI progress during the time it took to publish the book.
Scharre has clear political biases. E.g.:
Conservative politicians have claimed for years – without evidence – that US tech firms have an anti-conservative bias.
(Reminder: The Phrase “No Evidence” Is A Red Flag For Bad Science Communication.) But he keeps those biases separate enough from his military analysis that I don’t find those biases to be a reason for not reading the book.
This week we saw two interesting bank collapses: Silvergate Capital Corporation, and SVB Financial Group.
This is a reminder that diversification is important.
The most basic problem in both cases is that they got money from a rather undiversified set of depositors, who experienced unusually large fluctuations in their deposits and withdrawals. The banks also made overly large bets on long-term government bonds, which are safe from default but lose market value when interest rates rise.
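To see why "safe" government bonds can still sink a bank, consider how a bond's market price responds to rising yields. The sketch below uses standard discounted-cash-flow pricing with illustrative numbers (a 10-year bond with a 1.5% coupon, yields rising to 4.5%); these are not SVB's actual holdings.

```python
def bond_price(face, coupon_rate, years, yield_rate):
    """Price a bond as the discounted value of its coupons plus principal."""
    coupon = face * coupon_rate
    price = sum(coupon / (1 + yield_rate) ** t for t in range(1, years + 1))
    price += face / (1 + yield_rate) ** years
    return price

# Bought at par when yields were 1.5%:
p0 = bond_price(1000, 0.015, 10, 0.015)
# Repriced after market yields rise to 4.5%:
p1 = bond_price(1000, 0.015, 10, 0.045)
print(round(p0, 2), round(p1, 2))
```

The holder still gets paid in full at maturity, but if depositors withdraw and force a sale today, roughly a quarter of the market value is gone.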
Scott Alexander graded his predictions from 2018 and made new predictions for 2028.
I’m trying to compete with him. I’m grading myself as having done a bit worse than Scott.
Here’s a list of how I did (skipping a few where I agreed with Scott), followed by some predictions for 2028.
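For readers unfamiliar with how prediction grading works: one common scoring rule (not necessarily the exact method Scott or I used) is the Brier score, the mean squared error between stated probabilities and actual outcomes. A minimal sketch with made-up predictions:

```python
def brier(predictions):
    """Brier score: lower is better; always guessing 50% scores 0.25.

    predictions: list of (probability_assigned, outcome) pairs,
    where outcome is 1 if the event happened, 0 otherwise.
    """
    return sum((p - o) ** 2 for p, o in predictions) / len(predictions)

# Hypothetical grading of three predictions, two of which came true:
example = [(0.9, 1), (0.7, 1), (0.6, 0)]
print(brier(example))
```

Comparing two forecasters' Brier scores over the same set of questions is a reasonable way to say one "did a bit worse" than the other.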
I’m having trouble keeping track of everything I’ve learned about AI and AI alignment in the past year or so. I’m writing this post in part to organize my thoughts, and to a lesser extent I’m hoping for feedback about what important new developments I’ve been neglecting. I’m sure that I haven’t noticed every development that I would consider important.
I’ve become a bit more optimistic about AI alignment in the past year or so.
I currently estimate a 7% chance AI will kill us all this century. That’s down from estimates that fluctuated from something like 10% to 40% over the past decade. (The extent to which those numbers fluctuate implies enough confusion that it only takes a little bit of evidence to move my estimate a lot.)
I’m also becoming more nervous about how close we are to human-level and transformative AGI. Not to mention feeling uncomfortable that I still don’t have a clear understanding of what I mean when I say human-level or transformative AGI.
A conflict is brewing between China and the West.
Beijing is determined to reassert control over Taiwan. The US, and probably most of NATO, seem likely to respond by, among other things, boycotting China.
We should, of course, worry that this will lead to war between China and the US. I don’t have much insight into that risk. I’ll focus in this post on risks about which I have some insight, without meaning to imply that they’re the most important risks.
Such a boycott would be more costly than the current boycott of Russia, and the benefits would likely be smaller.
How can I predict whether the reaction to China’s action against Taiwan will be a rerun of the response to the recent Russian attack on Ukraine?
I’ll start by trying to guess the main forces that led to the boycott of Russia.
In 1986, Drexler predicted (in Engines of Creation) that we’d have molecular assemblers in 30 years. They would act roughly as fast, atomically precise 3D printers. That was the standard meaning of nanotech for the next decade, until more mainstream authorities co-opted the term.
What went wrong with that forecast?
In my review of Where Is My Flying Car? I wrote:
Josh describes the mainstream reaction to nanotech fairly well, but that’s not the whole story. Why didn’t the military fund nanotech? Nanotech would likely exist today if we had credible fears of Al Qaeda researching it in 2001.
I recently changed my mind about that last sentence, partly because of what I recently read about the Manhattan Project, and partly due to the world’s response to COVID.
Book review: Now It Can Be Told: The Story Of The Manhattan Project, by Leslie R. Groves.
This is the story of a desperate arms race, against what turned out to be a mostly imaginary opponent. I read it for a perspective on how future arms races and large projects might work.
What Surprised Me
It seemed strange that a large fraction of the book described how to produce purified U-235 and plutonium, and that the process of turning those fuels into bombs seemed anticlimactic.
The ESG investing movement (environmental, social, and corporate governance) is becoming potentially important, potentially good, and potentially corrupt.
I’ll walk through some of the sources of influence on it.
Book review: The Dawn of Everything: A New History of Humanity by David Graeber and David Wengrow.
This book is about narratives of human progress: i.e., the supposedly natural progression from egalitarian bands of maybe 20 people, to tribes, to chiefdoms, to states, with increasing inequality and domination by centralized bureaucracy. That progression is usually presumed to be driven by changes in occupations from foragers, to gardeners, to farmers, to industry.
Western intellectuals focus on debates between two narratives: Hobbesians, who see this mostly as advances from a nasty state of nature, and those following in Rousseau’s footsteps, who imagine early human societies as somewhat closer to a Garden of Eden. Both narratives suggest that farming societies were miserable places that were either small advances or unavoidable tragedies, depending on what you think they replaced.
Graeber and Wengrow dispute multiple aspects of these narratives. The book isn’t quite organized enough for me to boil their message down to a single sentence. But I’ll focus on what I consider to be the most valuable thread: we should be uncertain about whether humanity made (is making?) a big mistake by accepting oppression as an inevitable price of material wealth.
The Dawn of Everything asks us to imagine that humans could build (and may have been building) sophisticated civilizations without domination by powerful states, and maybe without depending on farming.
Book review: The Resilient Society, by Markus Brunnermeier.
This is a collection of loosely related chapters on current political topics such as pandemic response and macroeconomics. I haven’t read the whole book. But since each chapter is designed to stand alone, I feel comfortable reviewing a subset of the book.
The chapters are more readable than the comparable Wikipedia pages, but less rigorous.