Book review: The Elephant in the Brain, by Kevin Simler and Robin Hanson.
This book is a well-written analysis of human self-deception.
Only small parts of this book will seem new to long-time readers of Overcoming Bias. It’s written more to bring those ideas to a wider audience.
Large parts of the book will seem obvious to cynics, but few cynics have attempted to explain the breadth of patterns that this book does. Most cynics focus on complaints about some group of people having worse motives than the rest of us. This book sends a message that’s much closer to “We have met the enemy, and he is us.”
The authors claim to be neutrally describing how the world works (“We aren’t trying to put our species down or rub people’s noses in their own shortcomings.”; “… we need this book to be a judgment-free zone”). It’s less judgmental than the average book that I read, but it’s hardly neutral. The authors are criticizing, in the sense that they’re rubbing our noses in evidence that humans are less virtuous than many people claim humans are. Darwin unavoidably put our species down in the sense of discrediting beliefs that we were made in God’s image. This book continues in a similar vein.
This suggests the authors haven’t quite resolved the conflict between their dreams of upholding the highest ideals of science (pursuit of pure knowledge for its own sake) and their desire to solve real-world problems.
The book needs to be (and mostly is) non-judgmental about our actual motives, in order to maximize our comfort with acknowledging those motives. The book is appropriately judgmental about people who pretend to have more noble motives than they actually have.
The authors do a moderately good job of admitting to their own elephants, but I get the sense that they’re still pretty hesitant about doing so.
Most people will underestimate the effects which the book describes.
A recent SlateStarCodex post on cost disease makes some of the book’s points better than the book does. Scott Alexander is aware that signaling might play some role in cost disease. But he seems too cautious to imagine the magnitude of signaling effects. If Scott had read The Elephant in the Brain before writing that post, he’d have suggested that maybe cost disease is mostly explained[1] by consumers wanting more done for them, even if that means paying more for the “same” goods and services. Scott asks:
Do you think the average poor or middle-class person would rather:
a) Get modern health care
b) Get the same amount of health care as their parents’ generation, but with modern technology like ACE inhibitors, and also earn $8000 extra a year
The Elephant in the Brain presents a pretty strong case that most people would choose (a), because they value the social implications of the effort required for that care[2]. Those implications are mostly independent of the health effects of the care.
Scott has for years been aware of, and has mostly agreed with, the book’s main points. Yet he didn’t imagine the breadth of the effects.
The book is more eloquent than prior versions of these ideas. That will make it a bit harder to downplay their significance. But I expect people will demonstrate plenty of conspicuous ingenuity at doing so.
Medicine is the area where I’m most discouraged by the book’s implications. It’s not just that we want wasteful efforts to be devoted to medical care. What disturbs me is that our interest in health is weak enough to get lost in the noise. The authors do a good job of explaining how signaling interferes with the health-related components of medicine, but they seem to demonstrate that most of us have an even lower interest in health than that signaling can explain.
Implications for Society
I keep wondering whether MAPS is an effective charity. They have thoughtful hopes of getting FDA approval for therapies which would quickly cure many previously incurable instances of PTSD. But are those PTSD cases going uncured now due to the difficulty of finding effective therapies? Or is it because most of us act as if we prefer cures that require more effort? The book gives me reason to suspect that the cures proposed by MAPS will be ignored because they’re quick (so they don’t provide an excuse for generating a long-term alliance between doctor and patient?), and maybe even fun, so they don’t show that anyone is sacrificing in order to help the patients.
I want to promote the observation that medical care is not a synonym for health care. Treating medicine and health as synonyms gives medicine more status than it deserves.
There are plenty of non-medical services that affect our health enough to be compared to medical care:
- auto mechanics
- personal trainers
- TSA airport security
- janitors, soap makers
- chefs
For all these professions, most of us only pay attention to a few specific health-related criteria, to which we expect others in our social circles are also paying attention. E.g. depending on the dietary opinions of our friends, we may pay attention to whether a chef labels her food as organic, or whether it’s conspicuously low-carb, but we probably don’t think about the potassium content of the food, or whether the chef washes her hands.
For TSA employees, the average voter seems to focus on a handful of highly visible incidents, and pay little attention to the many inconspicuous ways in which the TSA is probably hurting our health (e.g. interfering with our sleep by creating a need to get to the airport earlier).
For some of those professions, we don’t know of conspicuous health-related criteria, so we mostly don’t think about the health effects of those professions.
What distinguishes these professions from doctors? One important difference is that we turn to doctors mainly when we’re in trouble, whereas we mostly go to the other professions under relatively routine conditions. In that sense, doctors are more like lawyers. That’s a key part of why we often want to devote almost arbitrarily large amounts of resources to those professions.
Here is a good summary of a key issue:
we have a limited budget for self-improvement. Some of us might be tempted to swear off hypocrisy all at once, and vow always to act on the ideals we most admire. But this would usually go badly. In all likelihood, our mind’s Press Secretary issued this “zero hypocrisy” edict without sufficient buy-in and support from the rest of our mental organization. Better to start with just one area, like charity
That seems 95% right – a gradualist approach is often more feasible than a zero-tolerance policy, and I understand my motives better due to thinking of my mind as a complex association of agents.
But I suggest that instead of focusing mainly on an area such as charity, we focus more on first being increasingly honest with those people we trust the most, and gradually expanding the number of people with whom we’re honest about our motives. The social consequences of honesty are what matter, and much of the variation I see in how people react is a function of a person’s general attitudes toward self-deception. So if I can comfortably talk with a specific person or group of people about my selfish reasons for donating to charity, it doesn’t seem hard to also talk with them about my status-related reasons for blogging. Whereas I feel a big difference between talking about those things at a rationalist meetup versus doing so in a hiking group.
Of course, as the authors point out, being more honest with one group of people makes it harder to hide things from others.
How did I benefit from these insights?
I used to think I was engaging in political debates mainly in order to improve the world. Then one of Robin’s papers (this one?) persuaded me that I was mostly motivated by signaling of some sort. It was straightforward to find better ways to help the world. That helped me notice that those political arguments were barely helping my status. So I switched much of my status-seeking desires toward more constructive book reviews/blog posts (trying to pull sideways more often). My increased awareness of my motives enabled me to use a larger fraction of my brain when evaluating whether my strategies were effective at achieving my goals.
I suppose that weakens my ability to signal loyalty to a group. And that I only became comfortable weakening that ability after having mostly joined social groups that didn’t seem to care much about standard notions of loyalty.
I used to imagine that I engaged in conversation mainly to get the explicitly articulated knowledge. And people sometimes said that I talked too little. And I kept wondering what influenced how much I liked a conversation. I’m less confused about those now that I’m aware that some of my interest in conversations is due to status motives.
I keep noticing a discrepancy between how much thought I think an ideal person should devote to researching charities, and the actual time and research I devote to that. I’m less bothered by that discrepancy now that I understand the causes. But I still feel the kind of internal conflict that suggests I’m not yet being fully honest with myself about my motives in this context.
The book’s testable predictions are ok, but not very different from what others predict, so not very conclusive. I’ll add two predictions which I think may help a bit:
- Adoption of medical AIs[3] will be influenced at least as much by how helpful patients perceive the AIs to be in non-medical contexts as by their medical health benefits. (I’m only about 70% confident in that prediction.)
- The authors say there’s a lot of “arbitrary” regional variation in medical treatments. I predict that at least 10% of that variation will be explained by higher demand for medicine in regions where people are least able to count on help in emergencies from non-medical social connections.
The authors suggest that exercise can increase life expectancy by 15 years (although careful reading indicates they’re probably just pointing to a correlation), when most estimates show much smaller effects. It’s a bit strange to say that exercise and smoking get little health-related attention[4], but it’s certainly true that the health benefits of living in rural areas are overlooked. Or at least overlooked by the urban and suburban people I associate with.
This book will make a small dent in the extent to which people falsely claim to be motivated by noble goals, and will help a modest number of people improve themselves.
But most humans dislike the book’s conclusions, and will continue to believe in more complex models that have less explanatory power.
Footnotes:

[1] Signaling doesn’t seem to explain the increasing cost of subway construction. But that is easily explained by lack of competition.

[2] Most medical costs count as something that people value, but there are exceptions, e.g. large price hikes without the pretense of more effort devoted to the product/service. If Shkreli had put 56 times the work into making new pills that were virtually the same as the old pills, would people have been more comfortable with the price increase? The book hints that the answer is yes.

[3] I have in mind AIs that will partly replace doctors and nurses.

[4] But it seems likely that 90+% of that attention is motivated by signaling.