bias

Book review: Thinking, Fast and Slow, by Daniel Kahneman.

This book is an excellent introduction to the heuristics and biases literature, but only small parts of it will seem new to those who are familiar with the subject.

While the book mostly focuses on conditions where slow, logical thinking can do better than fast, intuitive thinking, I find it impressive that he was careful to consider the views of those who advocate intuitive thinking, and that he collaborated with a leading advocate of intuition to resolve many of their apparent disagreements (mainly by clarifying when each kind of thinking is likely to work well).

His style shows that he has applied some of the lessons of the research in his field to his own writing, such as by giving clear examples. (“Subjects’ unwillingness to deduce the particular from the general was matched only by their willingness to infer the general from the particular”).

He sounds mildly overconfident (and believes mild overconfidence can be ok), but occasionally provides examples of his own irrationality.

He has good advice for investors (e.g. reduce loss aversion via “broad framing” – think of a single loss as part of a large class of results that are on average profitable), and appropriate disdain for investment advisers. But he goes overboard when he treats the stock market as unpredictable. The stock market has some real regularities that could be exploited. Most investors fail to find them because they see many more regularities than are real, because they are overconfident about their ability to distinguish the real ones, and because it’s hard to distinguish valuable feedback (which often takes many years to arrive) from misleading feedback.
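Kahneman’s “broad framing” advice is easy to see in a quick simulation (my sketch; the payoff numbers are made up, not from the book): a single positive-expected-value coin flip loses money half the time, but a bundle of a hundred such flips rarely produces a net loss.

```python
import random

def loss_probability(num_bets, trials=20_000):
    """Estimate the chance of ending with a net loss when making num_bets
    independent coin-flip bets that pay +$150 on heads and -$100 on tails
    (a gamble with positive expected value)."""
    losses = 0
    for _ in range(trials):
        total = sum(150 if random.random() < 0.5 else -100
                    for _ in range(num_bets))
        if total < 0:
            losses += 1
    return losses / trials

# A single bet loses about half the time; framed broadly as 100 bets,
# the chance of a net loss drops to a few percent.
print(loss_probability(1))
print(loss_probability(100))
```

The gamble is identical either way; only the frame changes, which is why narrow framing makes people reject bets they would accept in aggregate.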

I wish I could find an equally good book addressing the overuse of logical analysis when I want the speed of intuition (e.g. “analysis paralysis”).

Book Review: Simple Heuristics That Make Us Smart by Gerd Gigerenzer and Peter M. Todd.

This book presents serious arguments in favor of using simple rules to make most decisions. The authors present many examples where getting a quick answer by evaluating a minimal amount of data produces almost as accurate a result as highly sophisticated models do. They point out that ignoring information can minimize some biases:

people seldom consider more than one or two factors at any one time, although they feel that they can take a host of factors into account

(Tetlock makes similar suggestions).
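The one-or-two-factor decision style described in the quote can be sketched as a “take the best” style rule: compare two options cue by cue, in order of cue validity, and decide on the first cue that discriminates, ignoring everything else. This is my illustration of the general idea, not code from the book; the city example and cue names are hypothetical.

```python
def take_the_best(option_a, option_b, cues):
    """Pick an option using one cue at a time. cues is a list of functions,
    best-validity first, each mapping an option to a comparable value
    (higher is better) or None if unknown."""
    for cue in cues:
        a, b = cue(option_a), cue(option_b)
        if a is not None and b is not None and a != b:
            # First discriminating cue decides; all later cues are ignored.
            return option_a if a > b else option_b
    return None  # no cue discriminates; guess or fall back

# Hypothetical "which city is larger?" example with two binary cues.
cities = {
    "A": {"has_team": 1, "is_capital": 0},
    "B": {"has_team": 0, "is_capital": 1},
}
cues = [lambda c: cities[c]["has_team"], lambda c: cities[c]["is_capital"]]
print(take_the_best("A", "B", cues))  # decided by the first cue alone
```

Note that the second cue never gets consulted for this pair, which is the point: a lexicographic stopping rule trades a little accuracy for a large saving in information gathered.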

They appear to overstate the extent to which their evidence generalizes. They test their stock market heuristic on a mere six months’ worth of data. If they knew much about stock markets, they’d realize that there are many more bad heuristics that work for a few years at a time than there are good heuristics. I’ll bet that theirs will do worse than random in most decades.

The book’s conclusions can be understood by skimming small parts of the book. Most of the book is devoted to detailed discussions of the evidence. I suggest following the book’s advice when reading it – don’t try to evaluate all the evidence, just pick out a few pieces.

Book review: What Intelligence Tests Miss – The Psychology of Rational Thought by Keith E. Stanovich.

Stanovich presents extensive evidence that rationality is very different from what IQ tests measure, and the two are only weakly related. He describes good reasons why society would be better if people became more rational.

He is too optimistic that becoming more rational will help most people who accomplish it. Overconfidence provides widespread benefits to people who use it in job interviews, political discussions, etc.

He gives some advice on how to be more rational, such as thinking the opposite of each new hypothesis you are about to start believing. But will training yourself to do that on test problems cause you to do it when it matters? I don’t see signs that Stanovich practiced it much while writing the book. The most important implication he wants us to draw from the book is that we should develop and use Rationality Quotient (RQ) tests for at least as many purposes as IQ tests are used. But he doesn’t mention any doubts that I’d expect him to have if he thought about how rewarding high RQ scores might affect the validity of those scores.

He reports that high IQ people can avoid some framing effects and overconfidence, but only when told to do so. Also, the sunk cost bias test looks easy to learn to score well on, even when it’s hard to practice the right behavior. The Bruine de Bruin, Parker and Fischhoff paper that Stanovich implies is the best attempt so far to produce an RQ test lists a sample sunk cost question that involves abandoning food when you’re too full at a restaurant. It’s obvious which answer produces a higher RQ score, but that doesn’t say much about how I’d behave when the food is in front of me.

He sometimes writes as if rationality were as close to being a single mental ability as IQ is, but at other times he implies it isn’t. I needed to read the Bruine de Bruin, Parker and Fischhoff paper to get real evidence. Their path independence component looks unrelated to the others. The remaining components have enough correlation with each other that there may be connections between them, but those correlations are lower than the correlations between the overall rationality score and IQ tests. So it’s far from clear whether a single RQ score is better than using the components as independent tests.

Given the importance he attaches to testing for and rewarding rationality, it’s disappointing that he devotes so little attention to how to do that.

He has some good explanations of why evolution would have produced minds with the irrational features we observe. He’s much less impressive when he describes how we should classify various biases.

I was occasionally annoyed that he treats disrespect for scientific authority as if it were equivalent to irrationality. The evidence for Big Foot or extraterrestrial visitors may be too flimsy to belong in scientific papers, but when he says there’s “not a shred of evidence” for them, he’s either using a meaning of “evidence” that’s inappropriate when discussing the rationality of people who may be sensibly lazy about gathering relevant data, or he’s simply wrong.

Influence

Book review: Influence: The Psychology of Persuasion by Robert B. Cialdini.

This book gives clear descriptions of six strategies that salesmen use to influence customers, and provides advice on how we can somewhat reduce our vulnerability to being exploited by them. It is one of the best books for laymen about heuristics and biases.

It shows why the simplest quick fixes would produce more problems than they solve, by showing that there are good reasons why we use heuristics that create opportunities for people to exploit us.

The author’s willingness to admit that he has been exploited by these strategies makes it harder for readers to dismiss the risks as something only fools fall for.

Book review: Human Accomplishment: The Pursuit of Excellence in the Arts and Sciences, 800 B.C. to 1950 by Charles Murray.

I was reluctant to read this book but read it because a reading group I belong to selected it. I agree with most of what it says, but was underwhelmed by what it accomplished.

He has compiled an impressive catalog of people who have accomplished excellent feats in arts, science, and technology.

He has a long section arguing that the disproportionate number of dead white males in his list is not a result of bias. Most of this just repeats what has been said many times before. He appears to have done more than most to check authorities of other cultures to verify that their perspective doesn’t conflict with his. But that’s hard to do well (how many different languages does he read well enough to avoid whatever selection biases influence what’s available in English?) and hard for me to verify. He doesn’t ask how his choice of categories (astronomy, medicine, etc.) biases his results (I suspect not much).

His most surprising claim is that the rate of accomplishment is declining. He convinced me that he is measuring something that is in fact declining, but didn’t convince me that what he measured is important. I can think of many other ways of trying to measure accomplishment: number of lives saved, number of people whose work was bought by a million people, number of people whose work created $100 million in revenues, the Flynn Effect, number of patents, number of peer-reviewed papers, or number of metainnovations. All of these measures have nontrivial drawbacks, but they illustrate why I find his measure (acclaim by scholars) very incomplete. An incomplete measure may be adequate for conclusions that aren’t very sensitive to the choice of measure (such as the male/female ratio of important people), but when most measures fail to support his conclusion that the rate of accomplishment is declining, his failure to try for a more inclusive measure is disappointing.

His research appears careful to a casual reader, but I found one claim that was definitely not well researched. He thinks that “the practice of medicine became an unambiguous net plus for the patient” around the 1920s or 1930s. He cites no sources for this claim, and if he had found the best studies on the subject he’d see lots of uncertainty about whether it has yet become a net plus.

Politimetrics (associated with the Westminster Business School) has sponsored some additional Intrade contracts which will provide information about the impact of the presidential election on the country if they ever get enough liquidity. So far, there’s been no sign that much liquidity will exist.

One reason I (and presumably other traders) haven’t placed many orders is that the contracts deal with individual candidates. Since the value of the new contracts should fluctuate with the probability of the relevant candidate’s winning, and those fluctuations are currently much larger than any other factor affecting the prices, trading them would require any trader who doesn’t accept the market price to frequently monitor the prices of the underlying contracts. Nobody wants to do that unless the contracts already have significant volume.

Even if they had some liquidity, there’s a good deal of risk that the long-shot bias that appears to be common on Intrade would limit my confidence in the value of the information provided by those prices for all but the two or three candidates who are most likely to win in November (i.e. I’d probably believe what they said about Clinton relative to Obama, but I’d doubt they would be useful for voters in Republican primaries).

When it becomes clear who will win each party’s nomination, these problems will be reduced, and I’ll probably place a moderate number of orders on some of these contracts.

It should be possible to design a better user interface for decision markets of this nature so that users could place orders purely on the probable impact of a candidate’s election. Shock response futures come closer to doing that than contracts of the form “X wins and Y happens”, but can probably only indicate the direction of the impact.

I’ve created web pages at https://bayesianinvestor.com/amm/implied.html and https://bayesianinvestor.com/amm/implied4.html (which are currently being updated 4 times a day) which show implied prices (i.e. the price of the conditional contract as a percent of the price of the underlying candidate’s contract) that ought to represent what the markets think the probable effects would be if that candidate wins. Ideally traders could place orders expressed in terms of those implied prices, but that’s nontrivial to implement, and unlikely to happen unless someone pays Intrade a fair amount to create it.

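The implied price described here is simple arithmetic; a minimal sketch with made-up numbers (the real contract prices aren’t reproduced in this post):

```python
def implied_price(conditional_price, candidate_price):
    """Price of a conditional contract ("X wins and Y happens") expressed
    as a percent of the underlying "X wins" contract's price. This is
    interpretable as the market's probability of Y given that X wins."""
    if candidate_price <= 0:
        raise ValueError("candidate contract has no positive price")
    return 100.0 * conditional_price / candidate_price

# Made-up example: the candidate trades at 40 (a 40% chance of winning)
# and the conditional contract at 10, so the implied price of Y is 25.
print(implied_price(10, 40))  # 25.0
```

The division is what makes the number fluctuate less than the raw conditional price: changes in the candidate’s chances move both numerator and denominator together.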
I’ve commented on Jed Christiansen’s blog about why I doubt the conditional contracts I’m subsidizing have had enough trading yet to produce valuable information. But the trends suggest there will be enough trading within a few weeks.

Book review: The Robot’s Rebellion: Finding Meaning in the Age of Darwin by Keith E. Stanovich.

This book asks us to notice the conflicts between the goals our genes created us to serve and the goals that we as individuals benefit from achieving. Its viewpoint is somewhat novel. Little of the substance of the book seemed new, but there are a number of places where the book communicates ideas better than anything I had previously seen.

The title led me to hope that the book would present a very ambitious vision of how we might completely free ourselves from genes and Darwinian evolution, but his advice focuses on modest nearer-term benefits we can get from knowledge produced by studying heuristics and biases. The advice consists mainly of elaborations on the ideas of being rational and using scientific methods instead of gut reactions when those approaches give conflicting results.

He does a good job of describing the conflicts between first order desires (e.g. eating sugar) and higher order desires (e.g. the desire not to desire unhealthy amounts of sugar), and why there’s no easy rule to decide which of those desires deserves priority.

He isn’t entirely fair to groups of people that he disagrees with. I was particularly annoyed by his claim that “economics vehemently resists the notion that first-order desires are subject to critique”. What economics resists is the idea that person X is a better authority than person Y about what Y’s desires are or ought to be. Economics mostly avoids saying anything about whether a person should want to alter his desires, and I expect those issues to be dealt with better by other disciplines.

One of the better ideas in the book was to compare the effort put into testing peoples’ intelligence to the effort devoted to testing their rationality. He mentions many tests that would provide information about how well a person has overcome biases, and points out that such information might be valuable to schools deciding which students to admit and employers deciding whom to hire. I wish he had provided a good analysis of how well those tests would work if people trained to do well on them. I’d expect some wide variations – tests for overconfidence can be made to work fairly well, but I’m concerned that people would learn to pass tests such as the Wason test without changing their behavior under conditions when they’re not alert to these problems.

I am often annoyed when people who produce bad results via poorly thought out policies are said to have good intentions.

Too many people divide intentions into two binary categories – good and bad. I prefer to see intentions as ranging along a continuum, with one extreme for plans that involve meticulous research to ensure that the results that the wisest people would expect are consistent with altruism, and the other extreme for plans where anyone can see that the expected results will be unnecessary harm. Most intentions fall in the middle of this spectrum, with people not intending any harm but allowing their expectations to be biased by their self-interest (often their self-interest in appearing altruistic).

It’s unrealistic to expect people to change the way they describe intentions so that it fully reflects such a continuum, so I’ll encourage people to take a smaller step and replace the current Manichean dualism with three categories of intentions – good (resulting from unusual effort to ensure desirable results), normal (i.e. most intentions), and bad (where we expect that the person was aware that the results involve unnecessary harm).

Whose Freedom?

Book review: Whose Freedom?: The Battle Over America’s Most Important Idea by George Lakoff.

This book makes a few good points about what cognitive science tells us about differing concepts associated with the word freedom. But most of the book consists of attempts to explain his opponents’ world view that amount to defending his own world view by stereotyping his opponents as simplistic.

Even when I agree that the people he’s criticizing are making mistakes due to framing errors, I find his analysis very implausible. E.g. he explains Bush’s rationalization of Iraqi deaths as “Those killed and maimed don’t count, since they are outside the war frame. Moreover, Bush has done nothing via direct causation to harm any Iraqis and so has not imposed on their freedom”. Anyone who bothers to listen to Bush can see a much less stupid rationalization – Bush imagines we’re in a rerun of World War II, where the Forces of Evil have made it inevitable that some innocent people will die, and keeping U.S. hands clean will allow Evil to spread.

Lakoff’s insistence that his opponents are unable to understand indirect, systematic causation is ironic, since he shows no familiarity with most of the relevant science of complex effects of human action (e.g. economics, especially public choice economics).

He devotes only one sentence to what I regard as the biggest single difference between his worldview and his opponents’: his opponents believe in “Behavior as naturally governed by rewards and punishments.”

His use of the phrase “idea theft” to describe uses of the word freedom that differ from his own is objectionable, both because of the problems with treating ideas as property and because of his false implication that his concept of freedom resembles the traditional U.S. concept of freedom. (Here’s an example of how he rejects important parts of the founders’ worldview: “One of the biggest mistakes of the Enlightenment was to counter this claim with the assumption that morality comes from reason. In fact, morality is grounded in empathy”.)

If his claims of empathy are more than simply calling his opponents uncaring, then they may help explain his bias toward helping people who are most effective at communicating their emotions. For example, a minimum wage is part of his concept of freedom. People who have their wages increased by a minimum wage law tend to know who they are and often have labor unions to help spread their opinions. Whereas a person whom the minimum wage prevents from getting a job is less likely to see the cause or have a way to tell Lakoff about the resulting harm. (If you doubt the minimum wage causes unemployment, see http://www.nber.org/papers/w12663 for a recent survey of the evidence.)

This is symptomatic of the biggest problem with the book – he assumes political disagreements are the result of framing errors, not differences in understanding of how the world works, and wants to persuade people to frame issues his way rather than to use scientific methods when possible to better measure effects that people disagree about.

The book also contains a number of strange claims where it’s hard to tell whether Lakoff means what he says or is writing carelessly. E.g. “Whenever a case reaches a high court, it is because it does not clearly fit within the established categories of the law.” – I doubt he would deny that Hamdi v. Rumsfeld fit clearly within established habeas corpus law.

This is a book which will tempt people to believe that anyone who agrees with Lakoff’s policy advice is ignorant. But people who want to combat Lakoff’s ideology should resist that temptation to stereotype opponents. There are well-educated people (e.g. some behavioral economists) who have more serious arguments for many of the policies Lakoff recommends.

Charity and ideology

A story (not online) in a recent issue of Liberty Magazine reports that research by Arthur C. Brooks shows that people who favor government spending on poverty programs give less to charity than those who don’t support such spending.

If that were only true for monetary donations, there would be a number of plausible questions that could be raised. But Brooks gets around those by showing that it is also true of blood donations.

Brooks’ findings should not be used to justify any particular ideology, since a willingness to act charitably doesn’t necessarily correlate with an understanding of the effects of political policies. It might be interesting to know if a willingness to act charitably correlates with more careful education concerning the effects of political policies, but that would be hard to objectively measure. Brooks’ findings should be used only to discredit people who claim that there’s a simple way to determine that advocates of a welfare state are more compassionate than advocates of smaller government.

Here is an excerpt from Brooks’ book, and other reports on his research are here and here.