
Book review: Expert Political Judgment: How Good Is It? How Can We Know? by Philip E. Tetlock
This book is a rather dry description of good research into the forecasting abilities of people who are regarded as political experts. It is unusually fair and unbiased.
His most important finding about what distinguishes the worst from the not-so-bad is that those on the hedgehog end of Isaiah Berlin’s spectrum (who derive predictions from a single grand vision) are wrong more often than those near the fox end (who use many different ideas). He convinced me that this finding is approximately right, but it leaves me with questions.
Does the correlation persist at the fox end of the spectrum, or do the most fox-like subjects show some diminished accuracy?
How do we reconcile his evidence that humans who think in complex ways outperform humans who think simplistically with his evidence that simple autoregressive models beat all humans? That tension suggests there’s something imperfect about the hedgehog-fox spectrum. Maybe a better spectrum would measure how much data influences a subject’s worldview?
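For readers unfamiliar with the baseline being discussed: an autoregressive model just predicts the next value of a series as a function of its recent values. Tetlock’s exact model specification isn’t given here, so this is only a minimal illustrative sketch of an AR(1) one-step forecast, with the function name and fitting details my own:

```python
# Minimal AR(1) baseline: predict the next value as a linear function
# of the current one, with coefficients fit by ordinary least squares.

def ar1_forecast(series):
    """Fit x[t+1] ~ a + b * x[t] on the history and return the one-step forecast."""
    xs = series[:-1]          # predictors: each value except the last
    ys = series[1:]           # targets: each value's successor
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    b = cov / var             # slope: how strongly the next value tracks the current one
    a = mean_y - b * mean_x   # intercept
    return a + b * series[-1]

# e.g. a steadily trending inflation-like series
print(ar1_forecast([2.0, 2.5, 3.0, 3.5, 4.0]))  # extrapolates the trend -> 4.5
```

The point of the comparison survives even in this toy form: such a model has no grand vision and no vivid scenarios to overweight; it just extrapolates, which is apparently enough to beat the experts.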
Another interesting finding is that optimists tend to be more accurate than pessimists. I’d like to know how broad a set of domains this applies to. It certainly doesn’t apply to predicting software shipment dates. Does it apply mainly to domains where experts depend on media attention?
To what extent can different ways of selecting experts change the results? Tetlock probably chose subjects that resemble those who most people regard as experts, but there must be ways of selecting experts which produce better forecasts. It seems unlikely they can match prediction markets, but there are situations where we probably can’t avoid relying on experts.
He doesn’t document his results as thoroughly as I would like (even though he’s thorough enough to be tedious in places):
I can’t find his definition of extremists. Is it those who predict the most change from the status quo? Or the farthest from the average forecast?
His description of how he measured the hedgehog-fox spectrum has a good deal of quantitative evidence, but not quite enough for me to check where I would be on that spectrum.
How does he produce a numerical timeseries for his autoregressive models? It’s not hard to guess for inflation, but for the end of apartheid I’m rather uncertain.
Here’s one quote that says a lot about his results:

Beyond a stark minimum, subject matter expertise in world politics translates less into forecasting accuracy than it does into overconfidence

Book review: Stumbling on Happiness by Daniel Gilbert
This book is a colorful explanation of why we are less successful at finding happiness than we expect. It shows many similarities between the mistakes we make in foreseeing how happy we will be and the mistakes we make in perceiving the present or remembering the past. That makes it easy to see that those errors are natural results of shortcuts our minds take to minimize the amount of data that our imagination needs to process (e.g. filling in our imagination with guesses, as our mind does with the blind spot in our eye).
One of the most important types of biases is what he calls presentism (a term he borrows from historians and extends to deal with forecasting). When we imagine the past or future, our minds often employ mental mechanisms that were originally adapted to perceive the present, and we retain biases to give more weight to immediate perceptions than to what we imagine. That leads to mistakes such as letting our opinions of how much food we should buy be overly influenced by how hungry we are now, or Wilbur Wright’s claim in 1901 that “Man will not fly for 50 years.”
This is more than just a book about happiness. It gives me a broad understanding of human biases that I hope to apply to other areas (e.g. it has given me some clues about how I might improve my approach to stock market speculation).
But it’s more likely that the book’s style will make you happy than that the knowledge in it will cause you to use the best evidence available (i.e. observations of what makes others happy) when choosing actions to make yourself happy. Instead, you will probably continue to overestimate your ability to predict what will make you happy, and to overestimate the uniqueness that you think makes others’ experience irrelevant to your own pursuit of happiness.
I highly recommend the book.
Some drawbacks:
His analysis of memetic pressures that cause false beliefs about happiness to propagate is unconvincing. He seems to want a very simple theory, but I doubt the result is powerful enough to explain the extent of the myths. A full explanation would probably require the same kind of detailed analysis of biases that the rest of the book contains.
He leaves the impression that he thinks he’s explained most of the problems with achieving happiness, when he probably hasn’t done that (it’s unlikely any single book could).
He presents lots of experimental results, but he doesn’t present the kind of evidence needed to prove that presentism is a consistent problem across a wide range of domains.
He fails to indicate how well he follows his own advice. For instance, does he have any evidence that writing a book like this makes the author happy?

Robin Hanson writes in a post on Intuition Error and Heritage:

Unless you can see a reason to have expected to be born into a culture or species with more accurate than average intuitions, you must expect your cultural or species specific intuitions to be random, and so not worth endorsing.

Deciding whether an intuition is species specific and no more likely than random to be right seems a bit hard, due to the current shortage of species whose cultures address many of the disputes humans have.
The ideas in this quote follow logically from other essays of Robin’s that I’ve read, but phrasing them this way makes them seem superficially hard to reconcile with arguments by Hayek that we should respect the knowledge contained in culture.
Part of this apparent conflict seems to be due to Hayek’s emphasis on intuitions for which there is some unobvious and inconclusive supporting evidence. Hayek wasn’t directing his argument to a random culture, but rather to a culture for which there was some evidence of better than random results, and it would make less sense to apply his arguments to, say, North Korean society. For many other intuitions that Hayek cared about, the number of cultures which agree with the intuition may be large enough to constitute evidence in support of the intuition.
Some intuitions may be appropriate for a culture even though they were no better than random when first adopted. Driving on the right side of the road is a simple example. The arguments given in favor of a judicial bias toward stare decisis suggest this is just the tip of an iceberg.
Some of this apparent conflict may be due to the importance of treating interrelated practices together. For instance, laws against extramarital sex might be valuable in societies where people depend heavily on marital fidelity but not in societies where a divorced person can support herself comfortably. A naive application of Robin’s rule might lead the former society to decide such a law is arbitrary, when a Hayekian might wonder if it is better to first analyze whether to treat the two practices as a unit which should only be altered together.
I’m uncertain whether these considerations fully reconcile the two views, or whether Hayek’s arguments need more caveats.