Book review: Four Battlegrounds: Power in the Age of Artificial Intelligence, by Paul Scharre.
Four Battlegrounds is often a thoughtful, competently written book on an important topic. It is likely the least pleasant, and most frustrating, book fitting that description that I have ever read.
The title’s battlegrounds refer to data, compute, talent, and institutions. Those seem like important resources that will influence military outcomes. But it seems odd to label them as battlegrounds. Wouldn’t resources be a better description?
Scharre knows enough about the US military that I didn’t detect flaws in his expertise there. He has learned enough about AI to avoid embarrassing mistakes. That is, he managed to avoid claims that were falsified by AI progress during the time it took to publish the book.
Scharre has clear political biases. E.g.:
Conservative politicians have claimed for years – without evidence – that US tech firms have an anti-conservative bias.
(Reminder: The Phrase “No Evidence” Is A Red Flag For Bad Science Communication.) But he keeps those biases separate enough from his military analysis that they aren’t a reason to skip the book.
What Dangers?
Scharre is mostly concerned with issues such as which military gets the fastest reaction times, uses data the best (e.g. matching soldiers with the best jobs), or saves money.
I never get the sense that he’s willing to extrapolate recent progress in AI to imagine AIs replacing humans at jobs that AIs can’t currently handle.
The dangers from AI aren’t the dangers science fiction warned us about. We needn’t fear robots rising up to throw off their human overlords, or at least not anytime soon.
Presumably science fiction made some misleading simplifications about the dangers. But if now isn’t the time to worry about the effects of smarter-than-human minds, when will it be time?
I guess Scharre’s answer is: a few decades from now. He suggests that in the long term, AI might have dramatic effects such as reliably predicting which side will win a war, which presumably would cause the losing side to concede. He analogizes AI to “a Cambrian explosion of intelligence”. But for the foreseeable future, he focuses exclusively on AI as a tool for waging war.
Strategy
The most valuable insight into how warfare will change involves swarming.
Humans are limited in how well they can coordinate within a group. Evidence from sports teams suggests that about 18 is the maximum number of people who can usefully move independently while still coordinating with each other.
Military units almost always have a leader directly commanding fewer than 18 subordinates. AIs will likely have the capacity to coordinate a much larger set of units, which will presumably enable new tactics. How significant is that? My intuition says it will change war in some important way, but the book left me without any vision of that impact.
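To make that coordination limit concrete, here’s a back-of-the-envelope sketch of my own (not from the book). It assumes the roughly-18-person span of control above and a hypothetical fixed delay for relaying an order down each layer of a human command hierarchy, and shows how the number of layers grows with force size; an AI that coordinated every unit directly would flatten the hierarchy to a single layer.

```python
import math

# Illustrative sketch, not from the book: if each human leader can directly
# command at most ~18 subordinates, coordinating N units needs a hierarchy
# whose depth grows logarithmically, and every extra layer adds relay delay.
SPAN_OF_CONTROL = 18   # rough human limit suggested by the book
RELAY_DELAY_S = 30.0   # assumed seconds to pass an order down one layer (hypothetical)

def hierarchy_depth(n_units: int, span: int = SPAN_OF_CONTROL) -> int:
    """Number of command layers needed to reach n_units with a given span of control."""
    return max(1, math.ceil(math.log(n_units, span)))

for n in (18, 500, 10_000, 1_000_000):
    depth = hierarchy_depth(n)
    print(f"{n:>9} units: {depth} layers, ~{depth * RELAY_DELAY_S:.0f}s to relay an order")
```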
A Race with China?
Scharre wants us to believe that China and the US are in a close competition to militarize AI. I feel almost as uncertain about this as I was before reading the book.
China leads the US in some important aspects of deploying AI (e.g. facial recognition, robot bellhops, or the number of cities with robocars).
I see few signs that China can match the US in the research that’s producing the smartest and most general-purpose systems. My intuition says that research mostly matters more than deploying the AI technology that existed in 2022.
There’s a possibly large disconnect between AI progress in leading US tech companies, and US military use of AI.
Leading companies such as Microsoft and Google are pursuing a very cosmopolitan strategy that involves US and Asian researchers cooperating toward shared goals. A generic anti-military stance among tech workers has ensured that those companies cooperate less with the US military. (It almost sounds as if Scharre is complaining that capitalists are subverting US imperialism.)
Scharre worries that that combination will leave China’s military ahead of the US military at adopting AI. I see nothing clearly mistaken about that concern. But I see restrictions on semiconductor sales to China as likely to matter more 3 to 5 years from now. Beyond 5 years, I expect advances in the basic technology of AI to matter more.
Most of those concerns assume a moderate degree of competence in the US military’s efforts to adopt AI. Scharre describes signs that the US military isn’t functional enough to adopt much new technology. The leading example is the JEDI cloud computing contract: proposed in 2017, it was canceled in 2021 because factions that wanted it awarded to Amazon could veto any award to a competitor, while opposing factions could veto any award to Amazon.
Another example is the F-35 stealth fighter, which took 25 years to achieve partial deployment. It’s hard to see AI development slowing enough for that kind of approach to succeed.
I’ve seen hints elsewhere that OpenAI won’t allow military uses of GPT, and that the US military won’t figure out anytime soon whether that kind of AI can be made safe for military use.
Scharre suggests that the military will want to recreate AIs from scratch, due to the impracticality of analyzing the security risks of OpenAI’s training data. Conceivably, that means large countries could be outcompeted by a small country that bypassed such precautions (Israel?). Scharre ignores this scenario, probably because he expects much slower change in AI capabilities than I do.
To be clear, Scharre says the current tensions between the US and China do not at all qualify as an arms race, at least as the relevant experts define “arms race”.
Fanning the Flames
About a quarter of the book is devoted to enumerating the ways in which China oppresses people. Scharre also throws in a few platitudes about how the US is better due to democracy and checks and balances. Those comments seem mostly true, but carefully selected to overstate the differences between the countries.
What’s the connection between Orwellian developments in China and the future of war? The pattern of Scharre’s emphasis here suggests that he’s mostly focused on convincing the US to go to war with China.
The fact that China oppresses people is not at all sufficient reason to go to war with China. I’m angry at Scharre for trying to steer us toward war without articulating a sensible justification for war.
Concluding Thoughts
This book will likely help the military keep from falling too far behind in its understanding of AI.
It’s unlikely to convince any large military to replace its current focus on manpower, ships, tanks, and planes as the main measures of military power. I’m about 95% sure that it will fail to instill an appropriate sense of urgency about understanding AI.