All posts tagged rationality

Book review: The Rationality Quotient: Toward a Test of Rational Thinking, by Keith E. Stanovich, Richard F. West and Maggie E. Toplak.

This book describes an important approach to measuring individual rationality: an RQ test that loosely resembles an IQ test. But it pays inadequate attention to the most important problems with tests of rationality.


My biggest concern about rationality testing is what happens when people anticipate the test and are motivated to maximize their scores (as is the case with IQ tests). Do they:

  • learn to score high by “cheating” (i.e. learn what answers the test wants, without learning to apply that knowledge outside of the test)?
  • learn to score high by becoming more rational?
  • not change their score much, because they’re already motivated to do as well as their aptitudes allow (as is mostly the case with IQ tests)?

Alas, the authors treat these issues as an afterthought. Their test knowingly uses questions for which cheating would be straightforward, such as asking whether the test subject believes in science, and whether they prefer to get $85 now rather than $100 in three months. (If they could use real money, that would drastically reduce my concerns about cheating. I’m almost tempted to advocate doing that, but it would hinder widespread adoption of the test, even if using real money added enough value to pay for itself.)
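
As a side note, the $85-now-versus-$100-in-three-months question probes time discounting, and the discount rate implied by taking the $85 is easy to work out. A quick sketch (my own illustration, not from the book):

```python
def implied_annual_rate(now_amount, later_amount, months):
    """Annualized rate implied by being willing to take `now_amount`
    today instead of `later_amount` after `months` months."""
    periods_per_year = 12 / months
    return (later_amount / now_amount) ** periods_per_year - 1

rate = implied_annual_rate(85, 100, 3)
print(f"{rate:.1%}")  # roughly 92% per year, far above any market rate
```

Anyone who consistently answers that way on the test, but wouldn't with real money on the table, illustrates exactly the cheating concern above.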

Why do people knowingly follow bad investment strategies?

I won’t ask (in this post) about why people hold foolish beliefs about investment strategies. I’ll focus on people who intend to follow a decent strategy, and fail. I’ll illustrate this with a stereotype from a behavioral economist (Procrastination in Preparing for Retirement):[1]

For instance, one of the authors has kept an average of over $20,000 in his checking account over the last 10 years, despite earning an average of less than 1% interest on this account and having easy access to very liquid alternative investments earning much more.
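
The cost of that habit is straightforward to estimate. A rough sketch, where the alternative rate (4%) is my own hypothetical rather than a figure from the paper:

```python
def opportunity_cost(balance, low_rate, alt_rate, years):
    """Compounded difference between leaving `balance` at `low_rate`
    and moving it to an alternative earning `alt_rate`."""
    return balance * ((1 + alt_rate) ** years - (1 + low_rate) ** years)

# Hypothetical: $20,000 for 10 years at 1% vs. a 4% alternative.
cost = opportunity_cost(20_000, 0.01, 0.04, 10)
print(f"${cost:,.0f} foregone")  # roughly $7,500
```

Several thousand dollars lost to a habit that would take minutes to fix is hard to explain with time discounting alone.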

A more mundane example is a person who holds most of their wealth in stock of a single company, for reasons of historical accident (they acquired it via employee stock options or inheritance), but admits to preferring a more diversified portfolio.

An example from my life is that, until this year, I often borrowed money from Schwab to buy stock, when I could have borrowed at lower rates in my Interactive Brokers account to do the same thing. (Partly due to habits that I developed while carelessly unaware of the difference in rates; partly due to a number of trivial inconveniences).

Behavioral economists are somewhat correct to attribute such mistakes to questionable time discounting. But I see more patterns than such a model can explain (e.g. people procrastinate more over some decisions (whether to make a “boring” trade) than others (whether to read news about investments)).[2]

Instead, I use CFAR-style models that focus on conflicting motives of different agents within our minds.

Book review: Reinventing Philanthropy: A Framework for More Effective Giving, by Eric Friedman.

This book will spread the ideas behind effective altruism to a modestly wider set of donors than other efforts I’m aware of. It understates how much the effective altruism movement differs from traditional charity and how hard it is to implement, but given the shortage of books on this subject, any addition is valuable. It focuses on how to ask good questions about philanthropy rather than attempting to find good answers.

The author provides a list of objections he’s heard to maximizing the effectiveness of charity, a majority of which seem to boil down to the concern that the diversity of nonprofit goals would be drastically reduced, canceling many existing benefits. He tries to argue that people have extremely diverse goals, which would result in an extremely diverse set of charities. He later argues that the subjectivity of determining the effectiveness of charities will maintain that diversity. Neither of these arguments seems remotely plausible. When individuals explicitly compare how they value their own pleasure, life expectancy, dignity, freedom, etc., I don’t see more than a handful of different goals. How could it be much different for recipients of charity? There exist charities whose value can’t easily be compared to GiveWell’s recommended ones (stopping nuclear war?), but they seem to get a small fraction of the money that goes to charities that GiveWell has decent reasons for rejecting.

So I conclude that widespread adoption of effective giving would drastically reduce the diversity of charitable goals (limited mostly by the fact that spending large amounts on a single goal is subject to diminishing returns). The only plausible explanation I see for people’s discomfort with that is that people are attached to beliefs which are inconsistent with treating all potential recipients as equally deserving.

He’s reluctant to criticize “well-intentioned” donors who use traditional emotional reasoning. I prefer to think of them as normally-intentioned (i.e. acting on a mix of selfish and altruistic motives).

I still have some concerns that asking average donors to objectively maximize the impact of their donations would backfire by reducing the emotional benefit they get from giving more than it increases the effectiveness of their giving. But since I don’t expect more than a few percent of the population to be analytical enough to accept the arguments in this book, this doesn’t seem like an important concern.

He tries to argue that effective giving can increase the emotional benefit we get from giving. This mostly seems to depend on getting more warm fuzzy feelings from helping more people. But as far as I can tell, those feelings are very insensitive to the number of people helped. I haven’t noticed any improved feelings as I alter my giving to increase its impact, and the literature on scope insensitivity suggests that’s typical.

He wants donors to treat potentially deserving recipients as equally deserving regardless of how far away they are, but he fails to include people who are distant in time. He might have good reasons for not wanting to donate to people of the distant future, but not analyzing those reasons risks making the same kind of mistake he criticizes donors for making about distant continents.

Book review: Thinking, Fast and Slow, by Daniel Kahneman.

This book is an excellent introduction to the heuristics and biases literature, but only small parts of it will seem new to those who are familiar with the subject.

While the book mostly focuses on conditions where slow, logical thinking can do better than fast, intuitive thinking, I find it impressive that he was careful to consider the views of those who advocate intuitive thinking, and that he collaborated with a leading advocate of intuition to resolve many of their apparent disagreements (mainly by clarifying when each kind of thinking is likely to work well).

His style shows that he has applied some of the lessons of the research in his field to his own writing, such as by giving clear examples. (“Subjects’ unwillingness to deduce the particular from the general was matched only by their willingness to infer the general from the particular”).

He sounds mildly overconfident (and believes mild overconfidence can be ok), but occasionally provides examples of his own irrationality.

He has good advice for investors (e.g. reduce loss aversion via “broad framing” – think of a single loss as part of a large class of results that are on average profitable), and appropriate disdain for investment advisers. But he goes overboard when he treats the stock market as unpredictable. The stock market has some real regularities that could be exploited. Most investors fail to find them because they see many more regularities than are real, are overconfident about their ability to distinguish the real ones, and because it’s hard to distinguish valuable feedback (which often takes many years to get) from misleading feedback.
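
Broad framing is easy to demonstrate with a toy simulation (my illustration, with made-up bet sizes): a single bet with positive expected value still loses half the time, but viewed as part of a hundred similar bets, losses become rare.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def play(n_bets):
    """Total payoff of n_bets independent 50/50 bets: win $150 or lose $100."""
    return sum(150 if random.random() < 0.5 else -100 for _ in range(n_bets))

trials = 10_000
single_losses = sum(play(1) < 0 for _ in range(trials)) / trials
bundle_losses = sum(play(100) < 0 for _ in range(trials)) / trials
print(f"P(loss), single bet:    {single_losses:.2f}")  # around 0.50
print(f"P(loss), 100-bet frame: {bundle_losses:.3f}")  # around 0.02
```

The narrow frame sees a coin flip that loses half the time; the broad frame sees an aggregate that is profitable with high probability.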

I wish I could find an equally good book on overuse of logical analysis when I want the speed of intuition (e.g. “analysis paralysis”).

Book review: Simple Heuristics That Make Us Smart, by Gerd Gigerenzer and Peter M. Todd.

This book presents serious arguments in favor of using simple rules to make most decisions. The authors present many examples where getting a quick answer by evaluating a minimal amount of data produces almost as accurate a result as highly sophisticated models do. They point out that ignoring information can minimize some biases:

people seldom consider more than one or two factors at any one time, although they feel that they can take a host of factors into account

(Tetlock makes similar suggestions).

They appear to overstate the extent to which their evidence generalizes. They test their stock market heuristic on a mere six months’ worth of data. If they knew much about stock markets, they’d realize that there are a lot more bad heuristics that work for a few years at a time than there are good heuristics. I’ll bet that theirs will do worse than random in most decades.

The book’s conclusions can be understood by skimming small parts of the book. Most of the book is devoted to detailed discussions of the evidence. I suggest following the book’s advice when reading it – don’t try to evaluate all the evidence, just pick out a few pieces.

Book review: What Intelligence Tests Miss: The Psychology of Rational Thought, by Keith E. Stanovich.

Stanovich presents extensive evidence that rationality is very different from what IQ tests measure, and the two are only weakly related. He describes good reasons why society would be better if people became more rational.

He is too optimistic that becoming more rational will help most people who accomplish it. Overconfidence provides widespread benefits to people who use it in job interviews, political discussions, etc.

He gives some advice on how to be more rational, such as thinking the opposite of each new hypothesis you are about to start believing. But will training yourself to do that on test problems cause you to do it when it matters? I don’t see signs that Stanovich practiced it much while writing the book. The most important implication he wants us to draw from the book is that we should develop and use Rationality Quotient (RQ) tests for at least as many purposes as IQ tests are used. But he doesn’t mention any doubts that I’d expect him to have if he thought about how rewarding high RQ scores might affect the validity of those scores.

He reports that high-IQ people can avoid some framing effects and overconfidence, but only when told to do so. Also, the sunk cost bias test looks easy to learn to score well on, even when it’s hard to practice the right behavior. The Bruine de Bruin, Parker and Fischhoff paper that Stanovich implies is the best attempt so far to produce an RQ test lists a sample question for the sunk cost bias that involves abandoning food when you’re too full at a restaurant. It’s obvious what answer produces a higher RQ score, but that doesn’t say much about how I’d behave when the food is in front of me.

He sometimes writes as if rationality were as close to being a single mental ability as IQ is, but at other times he implies it isn’t. I needed to read the Bruine de Bruin, Parker and Fischhoff paper to get real evidence. Their path independence component looks unrelated to the others. The remaining components have enough correlation with each other that there may be connections between them, but those correlations are lower than the correlations between the overall rationality score and IQ tests. So it’s far from clear whether a single RQ score is better than using the components as independent tests.

Given the importance he attaches to testing for and rewarding rationality, it’s disappointing that he devotes so little attention to how to do that.

He has some good explanations of why evolution would have produced minds with the irrational features we observe. He’s much less impressive when he describes how we should classify various biases.

I was occasionally annoyed that he treats disrespect for scientific authority as if it were equivalent to irrationality. The evidence for Bigfoot or extraterrestrial visitors may be too flimsy to belong in scientific papers, but when he says there’s “not a shred of evidence” for them, he’s either using a meaning of “evidence” that’s inappropriate when discussing the rationality of people who may be sensibly lazy about gathering relevant data, or he’s simply wrong.

At last Sunday’s Overcoming Bias meetup, we tried paranoid debating. We formed groups of mostly 4 people (5 for the first round or two) and competed to produce the most accurate guess to trivia questions with numeric answers, with one person secretly designated to be rewarded for convincing the team to produce the least accurate answer.

It was fun and may have taught us a little about becoming more rational. But in order to be valuable, it should be developed further to become a means of testing rationality. As practiced, it tested some combination of trivia knowledge and rationality. The last round reduced the importance of trivia knowledge by rewarding good confidence intervals instead of a single good answer. I expect there are ways of using confidence intervals that remove the effects of trivia knowledge from the scores.

I’m puzzled about why people preferred the spokesman version to the initial version where the median number was the team’s answer. Designating a spokesman publicly as a non-deceiver provides information about who the deceiver is. In one case, we determined who the deceiver was by two of us telling the spokesman that we were sufficiently ignorant about the subject relative to him that he should decide based only on his knowledge. That gave our team a big advantage that had little relation to our rationality. I expect the median approach can be extended to confidence intervals by taking the median of the lows and the median of the highs, but I’m not fully confident that there are no problems with that.
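
The aggregation rules sketched above are simple to state precisely. A minimal sketch of my proposal (an illustration, not a tested scoring system):

```python
from statistics import median

def team_point_estimate(answers):
    """Team answer as the median: one extreme deceiver barely moves it."""
    return median(answers)

def team_interval(intervals):
    """Aggregate (low, high) confidence intervals by taking the median
    of the lows and the median of the highs."""
    lows, highs = zip(*intervals)
    return median(lows), median(highs)

# A deceiver's wild guess (1,000,000) barely shifts the team answer.
print(team_point_estimate([40, 55, 60, 1_000_000]))  # 57.5
print(team_interval([(30, 70), (45, 90), (50, 65), (0, 10**6)]))  # (37.5, 80.0)
```

Unlike the spokesman version, this gives a deceiver no single target to persuade, and outing one honest player leaks less information.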

The use of semi-randomly selected groups meant that scores were weak signals. If we want to evaluate individual rationality, we’d need rather time-consuming trials of many permutations of the groups. Paranoid debating is more suited to comparing groups (e.g. a group of people credentialed as the best students from a rationality dojo, or the people most responsible for decisions in a hedge fund).

See more comments at Less Wrong.

This paper reports that people with autistic spectrum symptoms are less biased by framing effects. Unfortunately, the researchers suggest that the increased rationality is connected to an inability to incorporate emotional cues into some decision making processes, so the rationality comes at a cost in social skills.

Some analysis of how these results fit in with the theory that autism is the opposite end of a spectrum from schizophrenia can be found here:

It seems that the schizophrenic is working on the basis of an internal model and is ignoring external feedback: thus his reliance on previous response. I propose that an opposite pattern would be observed in Autistics, with Autistics showing no or less mutual information, as they have poor self-models; but greater cross-mutual information, as they would base their decisions more on external stimuli or feedback.

Book review: Mindless Eating: Why We Eat More Than We Think, by Brian Wansink.

This well-written book might help a few people lose a significant amount of weight, and help many lose a tiny bit.

Some of his advice demands as much willpower from me as a typical diet (e.g. eat slowly), but he gives many small suggestions and advises us to pick and choose the most appropriate ones. There’s enough variety and novelty among his suggestions that most people are likely to find at least one feasible method to lose a few pounds.

A large fraction of his suggestions require none of the willpower that a typical diet requires, but will be rejected by most people because their egos will cause them to insist that only people less rational than they are make the kind of mistakes that the book’s suggestions would fix.
Most of the book’s claims seem to be backed up by careful research. But I couldn’t find any research to back up the claim that approaches which cause people to eat 100 calories per day less will cause them to lose 10 pounds in ten months. He presents evidence that such a diet needn’t make people feel deprived over the short time periods that have been studied. But there’s been speculation among critics of diet books that our bodies have a natural “set point” weight, and that diets which work for a while have no long-term effect because lower body weights increase the desire to return to the set point. This book offers only weak anecdotal evidence against that possibility.

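
The arithmetic behind that claim is easy to check against the folk rule that a pound of body fat corresponds to roughly 3,500 calories (a rough sketch; the 3,500-calorie rule is itself a contested simplification):

```python
def pounds_lost(daily_deficit, months, days_per_month=30.4, cal_per_pound=3500):
    """Naive weight-loss estimate: total calorie deficit / calories per pound."""
    return daily_deficit * days_per_month * months / cal_per_pound

print(f"{pounds_lost(100, 10):.1f} lb")  # about 8.7 lb, near the book's 10
```

So the static arithmetic roughly supports the figure; the real question, as noted above, is whether the deficit has any long-term effect.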
But even if it fails as a diet book, it may help you understand how the taste of your food is affected by factors other than the food itself.

Bryan Caplan has a good post arguing that democracy produces worse results than rational ignorance among voters would explain.

However, one aspect of his style annoys me: his use of the word “irrationality” to describe what’s wrong with voter thinking focuses on what is missing from voter thought processes rather than on what socially undesirable features are present (many economists use the word this way). I hope his soon-to-be-published book version of this post devotes more attention to what voters are doing that differs from boundedly rational attempts at choosing the best candidates (some of which I suspect fall into what many of us would call selfishly rational motives, even though economists usually classify them as irrational). Some of the motives I suspect are important: the desire to signal one’s group membership; endowment effects, which are among the many reasons people treat existing jobs as if they were more valuable than new, more productive jobs that could be created; and reputation effects, where people stick with whatever position they held in the past, because updating their beliefs in response to new evidence would imply that their original positions weren’t as wise as they want to imagine.

Alas, his policy recommendations are not likely to be very effective, and they are generally not much easier to implement than futarchy (which I consider the most promising approach to dealing with the problems of democracy). For example:

Imagine, for example, if the Council of Economic Advisers, in the spirit of the Supreme Court, had the power to invalidate legislation as “uneconomical.”

If I try hard enough, I can imagine this approach working well. But it would take a lot more than Caplan’s skills at persuasion to get voters to go along with this, and it’s not hard to imagine that such an institution would develop an understanding of the concept of “uneconomical” that is much less desirable than Caplan’s or mine.