Book review: Prediction Machines: The Simple Economics of Artificial Intelligence, by Ajay Agrawal, Joshua Gans, and Avi Goldfarb.
Three economists decided to write about AI. They got excited about AI, and that distracted them enough that they said only a modest amount about the standard economics principles that laymen need to understand better. As a result, the book ended up mostly being simple descriptions of topics on which the authors had limited expertise. I noticed fewer amateurish mistakes than I expected from this strategy, and they mostly do a good job of describing AI in ways that are mildly helpful to laymen who want only a very high-level view.
The book’s main goal is to advise business on how to adopt current types of AI (“reading this book is almost surely an excellent predictor of being a manager who will use prediction machines”), with a secondary focus on how jobs will be affected by AI.
The authors correctly conclude that a modest extrapolation of current trends implies at most some short-term increases in unemployment.
One example they use to help us visualize this is the effect of spreadsheets on bookkeepers. Spreadsheets automated the most time-consuming parts of bookkeeping, so fewer bookkeepers were needed to do what the profession did before spreadsheets. But by making bookkeepers more powerful, spreadsheets led them to calculate many more results than before.
The book mentions horses, but then mostly dismisses, without any clear reason, the risk that humans will become as unemployable as horses became. I guess the authors are mainly imagining the kind of incremental increases in AI abilities that are likely in the next 5 or 10 years. But if we extrapolate to when AI exceeds human-level cognitive abilities, then it seems appropriate to worry about human wages becoming inadequate to feed humans, much as happened to horses a century ago.
1.
The book’s advice to business is fairly appropriate, but often borders on common sense.
The authors emphasize the distinction between prediction and judgment. Since the two are complements, better prediction makes judgment more valuable.
Prediction machines don’t provide judgment. Only humans do.
For the near-term changes on which the book mostly focuses, I expect this model to work rather well.
But many decades from now, what part of judgment will be hard to automate? Judgment generally requires more general-purpose models of human desires than most current AI hopes to achieve. But large increases in data and computing power seem likely to eventually enable automating most judgment, via something like having an AI predict what a human would say if the human analyzed the situation carefully.
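To make that last idea concrete, here is a minimal sketch (my own illustration, not anything from the book) of judgment-as-predicted-human-judgment: train a model on records of situations paired with the decisions humans made after careful deliberation, then query it on new situations. All names and data below are hypothetical.

```python
# Toy sketch: "automated judgment" as supervised prediction of what a
# careful human would decide. All names and data here are hypothetical.

from collections import Counter

# Hypothetical training records: (situation features, the choice a human
# made after analyzing the situation carefully).
training_data = [
    ({"urgency": 0.9, "reversible": 0.1}, "escalate to human"),
    ({"urgency": 0.2, "reversible": 0.9}, "proceed automatically"),
    ({"urgency": 0.8, "reversible": 0.2}, "escalate to human"),
    ({"urgency": 0.1, "reversible": 0.8}, "proceed automatically"),
]

def distance(a, b):
    """Euclidean distance between two feature dicts with the same keys."""
    return sum((a[k] - b[k]) ** 2 for k in a) ** 0.5

def predicted_judgment(situation, k=3):
    """Estimate the judgment a careful human would make, via a
    k-nearest-neighbor vote over past deliberate human decisions."""
    nearest = sorted(training_data, key=lambda ex: distance(situation, ex[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(predicted_judgment({"urgency": 0.7, "reversible": 0.3}))
# -> escalate to human
```

A real system would need far richer features and far more data than this toy; the point is only the shape of the pipeline, with judgment reduced to prediction of deliberate human choices.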
2.
I’ll now mention some complaints I have about topics where the authors seem to be outside their areas of competence. These aren’t particularly important aspects of the book, but I found them about as interesting as the book’s main points.
The discussion of trucking jobs seems a bit strange. They imagine that humans will need to ride with robotrucks to prevent theft. Would an unmanned truck convoy be at greater risk than current trucks are? I can’t quite see how. Robotrucks will need good cameras for driving, and it seems like those would also serve as effective means of identifying thieves. The robotruck industry may well employ plenty of humans for tasks such as loading and unloading, but I predict humans will rarely accompany robotrucks on long trips.
The authors ask what should happen when a robocar hands control to the human driver in an emergency, after pointing out related problems associated with Air France Flight 447. My answer is that a robocar shouldn’t require that a human take control. I don’t see much reason to expect humans to do better than software in sudden emergencies. Yet the authors indicate that humans in robocars need to know how to drive, without appearing to consider the advantages of relying on software in emergencies. Trains have been run without drivers present, and the book mentions a mine where trucks operate without drivers. Doing that with cars on typical roads is much harder, but the benefits are also much larger, and I predict I’ll feel safer as a pedestrian 15 years from now if cars handle emergencies via software.
They can’t avoid mentioning the trolley problem:
Someone will have to resolve the dilemma and program the appropriate response into the car. The problem cannot be avoided.
There are likely to be political and legal reasons why it can’t be avoided, but the authors hint that there are ethical reasons. I don’t see those reasons. I’d be happy to use a robocar that handled trolley problems roughly the way that humans do, i.e. treat any situation that kills a person as if it’s as close to infinitely bad as our minds can understand [1].
They note that having lots of data is somewhat important to making good predictions, which leads them to wonder: will China rule the world due to having more data? I see little merit in the idea that having more people means outperforming via more data. The parts of the world with European languages likely have more variety of data than China does, and variety of data is at least as important as quantity.
The more interesting hypothesis is that EU-style privacy controls might retard the regions that adopt them. But they mention (elsewhere) evidence from comparing Google to Yahoo/Microsoft which shows that such restrictions have had little effect.
The authors dismiss Bostrom’s Superintelligence in a way that indicates they don’t quite understand what Bostrom is claiming.
3.
I predict that a modest number of readers will benefit from this book, and a larger number will react favorably to it, while quickly forgetting what it says.
Footnote
[1] – I also endorse Brad Templeton’s explanations of why robocar people should pay less attention to this issue, and implement whatever answer puts robocars on roads soon:
It turns out the problem has a simple answer which is highly likely to be the one taken. In almost every situation of this sort, the law already specifies who has the right of way, and who doesn’t. The vehicles will be programmed to follow the law, which means that when presented with a choice of hitting something in their right-of-way and hitting something else outside the right-of-way, the car will obey the law and stay in its right-of-way.
Given a few reasonable assumptions, from a strict standpoint of counting deaths and injuries, almost any delay to the deployment of high-safety robocars costs lots of lives. And not a minor number. Delay it by a year, and anywhere from 10,000-20,000 extra people will die in the USA, and 300,000 to a million around the world. Delay it by a day and you condemn 30-80 unknown future Americans and 1,000 others to death, and many thousands more to horrible injury.
Trolley problems provide potentially valuable insights into how different ethical systems handle situations that are very different from what humans have experienced, but robocar designers should focus on situations that resemble what humans have experienced.
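Templeton’s right-of-way rule is simple enough to sketch in a few lines of code. The following is my own toy illustration of that decision logic, with hypothetical names and structures; it is not drawn from any actual robocar software.

```python
# Toy sketch of Templeton's rule: when every available path involves
# hitting something, follow the law and stay in the legal right-of-way
# rather than weighing lives against lives. Hypothetical names only.

from dataclasses import dataclass

@dataclass
class PathOption:
    description: str
    in_right_of_way: bool  # does this path stay within the car's legal right-of-way?
    collision_free: bool   # can the car avoid hitting anything on this path?

def choose_path(options):
    """Prefer any collision-free path; if none exists, prefer a path
    that stays within the legal right-of-way."""
    safe = [o for o in options if o.collision_free]
    if safe:
        return safe[0]
    legal = [o for o in options if o.in_right_of_way]
    return legal[0] if legal else options[0]

options = [
    PathOption("brake hard in lane", in_right_of_way=True, collision_free=False),
    PathOption("swerve onto sidewalk", in_right_of_way=False, collision_free=False),
]
print(choose_path(options).description)  # -> brake hard in lane
```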