Idea Futures

A number of people have compared the final forecasts for the election (e.g. this), but I’m more interested in longer term forecasting, so I’m comparing the state-by-state predictions of Intrade and FiveThirtyEight on the dates for which I saved FiveThirtyEight data a month or more before the election.

Here is a table of states where Intrade disagreed with FiveThirtyEight on one of the first four dates for which I saved FiveThirtyEight data or where they were both wrong on July 24. The numbers are probability of a Democrat winning the state’s electoral votes, with the Intrade forecast first and the FiveThirtyEight forecast second.

State 2008-07-24 2008-08-22 2008-09-14 2008-10-01
CO 71/68 60/53 54.5/46 67.5/84
FL 42/29 34.5/28 30/14 55.2/70
IN 38/26 34.1/15 20/11 38/51
MO 50/26 32.9/13 22.1/11 42.5/48
NC 30/22 25/21 14/7 51/50
NV 51.2/49 49/45 44.9/32 55/66
OH 65/53 50/38 40/29 53.5/68
VA 60.5/50 52.3/36 42/22 59/79
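As a rough summary of the table, here is a Brier-score comparison for the July 24 column (a sketch I am adding: the probabilities come from the table above, and the outcomes are the ones implied in the discussion below, namely that the Democrat won every state listed except Missouri). Note that these are a selection-biased subset of states, chosen because the forecasters disagreed or were both wrong, so the scores summarize this table rather than overall forecasting skill.

```python
# Brier scores for the 2008-07-24 column of the table above.
# Probabilities are P(Democrat wins the state); outcome is 1 if the
# Democrat won (every state listed except Missouri, per the discussion).
intrade = {"CO": .71, "FL": .42, "IN": .38, "MO": .50,
           "NC": .30, "NV": .512, "OH": .65, "VA": .605}
five38 = {"CO": .68, "FL": .29, "IN": .26, "MO": .26,
          "NC": .22, "NV": .49, "OH": .53, "VA": .50}
outcome = {"CO": 1, "FL": 1, "IN": 1, "MO": 0,
           "NC": 1, "NV": 1, "OH": 1, "VA": 1}

def brier(probs):
    # Mean squared distance between forecast and 0/1 outcome; lower is better.
    return sum((probs[s] - outcome[s]) ** 2 for s in probs) / len(probs)

print(f"Intrade:         {brier(intrade):.3f}")
print(f"FiveThirtyEight: {brier(five38):.3f}")
```

On this subset, Intrade scores about 0.26 versus roughly 0.32 for FiveThirtyEight, consistent with Intrade doing slightly better early on.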

On July 24, both sites predicted Florida, Indiana, and North Carolina wrong. FiveThirtyEight got Indiana right on Oct 1 when Intrade was still wrong, but Intrade got North Carolina right on that date (just barely) while FiveThirtyEight rated it a toss-up.
Intrade got Nevada right on July 24 (just barely) while FiveThirtyEight got it wrong (just barely).
For Virginia, Intrade was right in July and August while FiveThirtyEight was undecided and then wrong.
FiveThirtyEight got Colorado wrong on September 14, but Intrade didn’t.
FiveThirtyEight got Ohio wrong on August 22, while Intrade got it right.
Intrade rated Missouri a toss-up on July 24, while FiveThirtyEight got it right.

On September 14, FiveThirtyEight was fooled by McCain’s post-convention bounce by a larger margin than Intrade was, but by Oct 1 FiveThirtyEight was more confident about correcting those errors.
For states that were not closely contested, there were numerous examples where Intrade prices were closer to 50 than FiveThirtyEight’s. It’s likely that this represents long-shot bias on Intrade.

In sum, Intrade made slightly better forecasts for the closely contested states through at least mid-September, but after that FiveThirtyEight was at least as good and more decisive. Except for Intrade’s Missouri forecast on July 24, the errors seem largely due to underestimating the effects of economic problems – errors that were also widespread in most forecasts for other things affected by the recession.

In the senate races, I didn’t save FiveThirtyEight forecasts from before November 1. It looks like both Intrade and FiveThirtyEight made similar errors on the Alaska and Minnesota races.
[Update on 2009-01-13: contrary to initial reports, they apparently got the Alaska and Minnesota races right, although there’s still some doubt about Minnesota.]

On the other hand, Intrade had been fairly consistently (but not confidently) saying since early July that California’s Proposition 8 (banning same-sex marriage) would be defeated. Pollsters as a group did a somewhat better job there by issuing conflicting reports.

I’ve made a change to the software which should fix the bug uncovered last weekend.
I’ve restored about half of the liquidity I was providing before last weekend. I believe I can continue to provide the current level of liquidity for at least a few more months unless prices change more than I currently anticipate. I may readjust the amount of liquidity provided in a month or two to increase the chances that I can continue to provide a moderate amount of liquidity until all contracts expire without adding more money to the account.
I’m not making new software public now. I anticipate doing so before the end of November.

Last night an Intrade trader found and exploited a bug in my Automated Market Maker, manipulating DEM.PRES-TROOPS.IRAQ until Intrade rejected one of the market maker’s orders for lack of credit and the software shut down.
The bug involves handling of partial executions of orders, and doesn’t appear to be easily fixable (what happened looks nearly identical to the scenarios I had analyzed and thought I had guarded against).
For the moment, I’ve reduced the market maker’s order size to one contract, which will prevent further exploitation but provide much less liquidity.
I will try to fix the bug sometime in November and increase the order size (on the contracts that don’t expire at election time) by as much as I can without adding more money to the market maker’s account. I will also analyze the information provided by the markets shortly after the election.
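The posts above don’t spell out the bug’s mechanics, so the following is only a hypothetical sketch of one common partial-execution failure mode (all names and numbers are invented for illustration), not the actual market maker code:

```python
class NaiveMarketMaker:
    """Hypothetical sketch: credit is reserved per order, released on cancel,
    but never adjusted for partial executions."""

    def __init__(self, cash):
        self.cash = cash        # money actually available
        self.reserved = 0.0     # credit reserved for open orders

    def place_order(self, qty, price):
        cost = qty * price
        if cost > self.cash - self.reserved:
            raise RuntimeError("rejected for lack of credit")
        self.reserved += cost   # reserve credit for the whole order

    def on_fill(self, filled_qty, price):
        self.cash -= filled_qty * price
        # BUG: the reservation for the unfilled remainder is never released,
        # so every partially executed order strands credit permanently.

mm = NaiveMarketMaker(cash=100.0)
for _ in range(3):
    mm.place_order(qty=10, price=3.0)    # reserves $30 each round
    mm.on_fill(filled_qty=1, price=3.0)  # but only 1 of 10 contracts trades
# cash is now 91.0 with 90.0 stranded in reservations, so the next
# $30 order is rejected even though almost all the money is still there.
```

A manipulator who can reliably trigger partial fills can walk a book-keeping scheme like this into rejecting its own orders, which matches the observed shutdown-on-credit-rejection, though the real bug may differ.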

The stock market reacted to today’s defeat of the bank bailout bill with an unusually big decline. Yet the news wasn’t much of a surprise to people watching Intrade, whose contract BAILOUT.APPROVE.SEP08 was trading around 20% all morning. Why did the stock market act as if it was a big surprise?
Did Intrade traders make a lucky guess not based on adequate evidence? Did they have evidence that the stock market ignored? Could the stock market have priced in an 80% chance of the bill being defeated (if so, that would seem to imply that passage would have caused the biggest one-day rise in history)? Could the stock market have been reacting to other news which just happened to coincide with the House vote? (It looks like the market had a short-lived jump coinciding with news that House leaders hoped to twist enough arms to reverse the vote, but I wasn’t able to watch the timing carefully because I was at the dentist).

It seems like one of these must be true, but each one seems improbable.
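The third possibility can be checked with simple arithmetic. Assuming the market fully priced in the Intrade probability, the pre-vote level is a probability-weighted blend of the two conditional levels, so the defeat only reveals the surprise component (the 9% decline below is an illustrative round number, not the exact figure):

```python
# If the market fully priced in probability p of the bill's defeat, the
# pre-vote level was p * level_on_defeat + (1 - p) * level_on_passage,
# and the vote itself only moves the market by the surprise component.
p = 0.80                # Intrade's implied probability of defeat
drop_on_defeat = -0.09  # illustrative size for the day's decline

# drop_on_defeat = (1 - p) * (level_on_defeat - level_on_passage), so:
pass_fail_gap = -drop_on_defeat / (1 - p)  # full defeat-vs-passage swing
rise_on_passage = p * pass_fail_gap        # move passage would have implied

print(f"implied defeat-vs-passage gap: {pass_fail_gap:.0%}")
print(f"implied one-day rise on passage: {rise_on_passage:.0%}")
```

With a roughly 9% decline, an 80% prior on defeat would imply a 45% gap between the two scenarios, and hence about a 36% one-day rise had the bill passed, which is why “fully priced in” seems improbable.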

Arnold Kling, whose comments on the bailout have been better than most, was surprised that the bill failed.

I covered a few of my S&P 500 futures short positions near the end of trading, but I’m still positioned quite cautiously (I made a small profit today).

To deter any suspicion that the comparisons I plan to make between Intrade’s predictions and polls are comparisons I selected to make Intrade look good, I’m announcing now that I intend to use FiveThirtyEight as the primary poll aggregator. I intend to pay attention to predictions that are more long-term than those I focused on in 2004, so the comparison I’ll attach the most importance to will be based on the first snapshot I took of FiveThirtyEight’s state-by-state projections, which was on July 24.

Also, as of last week, one of the Presidential Decision Markets that I’m subsidizing, DEM.PRES-OIL.FUTURES, has attracted enough trading (I suspect from one large trader) to make me reasonably confident that it’s showing the effects of trader opinion rather than the effects of my automated market maker (saying that oil futures will drop if the Democratic candidate wins, and rise if he loses).

Book review: Infotopia: How Many Minds Produce Knowledge by Cass R. Sunstein.
There’s a lot of overlap between James Surowiecki’s The Wisdom of Crowds and Infotopia, but Infotopia is a good deal more balanced and careful to avoid exaggeration. This makes Infotopia less exciting but more likely to convince a thoughtful reader. It devotes a good deal of attention to conditions which make groups less wise than individuals as well as conditions where groups outperform the best individuals.
Infotopia is directed at people who know little about this subject. I found hardly any new insights in it, and few ideas that I disagreed with. Some of its comments will seem too obvious to be worth mentioning to anyone who uses the web much. It’s slightly better than Wisdom of Crowds, but if you’ve already read Wisdom of Crowds you’ll get little out of Infotopia.

Predictocracy (part 2)
Book review: Predictocracy: Market Mechanisms for Public and Private Decision Making by Michael Abramowicz (continued from prior post).
I’m puzzled by his claim that it’s easier to determine a good subsidy for a PM that predicts what subsidy we should use for a basic PM than it is to determine a good subsidy for the basic PM. My intuition tells me that at least until traders become experienced with predicting effects of subsidies, the markets that are farther removed from familiar questions will be less predictable. Even with experience, for many of the book’s PMs it’s hard to see what measurable criteria could tell us whether one subsidy level is better than another. There will be some criteria that indicate severely mistaken subsidy levels (zero trading, or enough trading to produce bubbles). But if we try something more sophisticated, such as measuring how accurately PMs with various subsidy levels predict the results of court cases, I predict that we will find some range of subsidies above which increased subsidy produces tiny increases in correlations between PMs and actual trials. Even if we knew that the increased subsidy was producing a more just result, how would we evaluate the tradeoff between justice and the cost of the subsidy? And how would we tell whether the increased subsidy is producing a more just result, or whether the PMs were predicting the actual court cases more accurately by observing effects of factors irrelevant to justice (e.g. the weather on the day the verdict is decided)?
His proposal for self-resolving prediction markets (i.e. markets that predict markets recursively with no grounding in observed results) is bizarre. His arguments about why some of the obvious problems aren’t serious would be fascinating if they didn’t seem pointless due to his failure to address the probably fatal flaw of susceptibility to manipulation.
His description of why short-term PMs may be more resistant to bubbles than stock markets was discredited just as it was being printed. His example of deluded Green Party voters pushing their candidate’s price too high is a near-perfect match for what happened with Ron Paul contracts on Intrade. What Abramowicz missed is that traders betting against Paul needed to tie up a lot more money than traders betting for Paul. High volume futures markets have sophisticated margin rules which mostly eliminate this problem. I expect that low-volume PMs can do the same, but it isn’t easy and companies such as Intrade have only weak motivation to do this.
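The capital asymmetry described above is easy to quantify with a sketch (illustrative numbers, with contracts treated on a 0-to-1 scale):

```python
# Binary contracts on a 0-1 scale: buying YES at the price risks the price
# paid; selling YES (betting NO) risks the rest of the payout.
price = 0.10   # a longshot candidate's contract price
payout = 1.00  # payout if the longshot wins

capital_yes = price           # YES buyer's worst-case loss per contract
capital_no = payout - price   # NO bettor's worst-case loss per contract

print(f"NO bettors tie up {capital_no / capital_yes:.0f}x as much capital")
```

At a 10-cent price the traders correcting the longshot’s price must tie up nine times as much capital per contract as the traders inflating it, which is the asymmetry that margin rules in high-volume futures markets are designed to reduce.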
He suggests that PMs be used to minimize the harm resulting from legislative budget deadlocks by providing tentative funding to projects that PMs predict will receive funding. But if the existence of funding biases legislatures to continue that funding (which appears to be a strong bias, judging by how rare it is for a legislature to stop funding projects), then this proposal would fund many projects that wouldn’t otherwise be funded.
His proposals to use PMs to respond to disasters such as Katrina are poorly thought out. He claims “not much advanced planning of the particular subjects that the markets should cover would be needed”. This appears to underestimate the difficulty of writing unambiguous claims, the time required for traders to understand them, the risks that the agencies creating the PMs will bias the claim wording to the agencies’ advantage, etc. I’d have a lot more confidence in a few preplanned PM claims such as the expected travel times on key sections of roads used in evacuations.
I expect to have additional comments on Predictocracy later this month; they may be technical enough that I will post them only on the futarchy_discuss mailing list.

Book review: Predictocracy: Market Mechanisms for Public and Private Decision Making by Michael Abramowicz.
This had the potential to be an unusually great book, which makes its shortcomings rather frustrating. It is loaded with good ideas, but it’s often hard to distinguish the good ideas from the bad ideas, and the arguments for the good ideas aren’t as convincing as I hoped.
The book’s first paragraph provides a frustratingly half-right model of why markets produce better predictions than alternative institutions, involving a correlation between confidence (or sincerity) and correctness. If trader confidence were the main mechanism by which markets produce accurate predictions, I’d be pretty reluctant to believe the evidence that Abramowicz presents of their success. Sincerity is hard to measure, so I don’t know what to think of its effects. A layman reading this book would have trouble figuring out that the main force for accurate predictions is that the incentives alter traders’ reasoning so that it becomes more accurate.
The book brings a fresh perspective to an area where there are few enough perspectives that any new perspective is valuable when it’s not clearly wrong. He is occasionally clearer than others. For instance, his figure 4.1 enabled me to compare three scoring rules in a few seconds (I’d previously been unwilling to do the equivalent by reading equations).
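For readers without the book: the three scoring rules most commonly compared this way are the quadratic (Brier), logarithmic, and spherical rules. I am assuming, not asserting, that these are the ones in figure 4.1; a quick numeric comparison:

```python
import math

# Three classic proper scoring rules for a binary forecast, scored on the
# probability p that was assigned to the outcome that actually occurred.
# All three reward honest reporting; they differ in how sharply they
# penalize confident misses and reward extreme probabilities.
def quadratic(p):   # Brier-style: 2p minus the sum of squared reports
    return 2 * p - (p ** 2 + (1 - p) ** 2)

def logarithmic(p):
    return math.log(p)

def spherical(p):
    return p / math.sqrt(p ** 2 + (1 - p) ** 2)

for p in (0.5, 0.7, 0.9, 0.99):
    print(f"p={p:.2f}  quad={quadratic(p):+.3f}  "
          f"log={logarithmic(p):+.3f}  sph={spherical(p):.3f}")
```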
He advocates some very fine-grained uses of prediction markets (PMs), which is a sharp contrast to my expectation that they are mainly valuable for important issues. Abramowicz has a very different intuition than I do about how much it costs to run a prediction market for an issue that people normally don’t find interesting. For instance, he wants to partly replace small claims court cases with prediction markets for individual cases. I’m fairly sure that obvious ways to do that would require market subsidies much larger than current court costs. The only way I can imagine PMs becoming an affordable substitute for small claims courts would be if most of the decisions involved were done by software. Even then it’s not obvious why one or more PM per court case would be better than a few more careful evaluations of whether to turn those decisions over to software.
He goes even further when proposing PMs to assess niceness, claiming that “just a few dollars’ worth of subsidy per person” would be adequate to assess peoples’ niceness. Assuming the PM requires human traders, that cost estimate seems several orders of magnitude too low (not to mention the problems with judging such PMs).
His idea of “the market web” seems like a potentially valuable idea for a new way of coordinating diverse decisions.
He convinced me that Predictocracy will solve a larger fraction of democracy’s problems than I initially expected, but I see little reason to believe that it will work as well as Futarchy will. I see important classes of systematic biases (e.g. the desire of politicians and bureaucrats to acquire more power than the rest of us should want) that Futarchy would reduce but which Predictocracy doesn’t appear to alter.
Abramowicz provides reasons to hope that predictions of government decisions 10+ years in the future will help remove partisan components of decisions and quirks of particular decision makers because uncertainty over who will make decisions at that time will cause PMs to average forecasts over several possible decision makers.
He claims evaluations used to judge a PM are likely to be less politicized than evaluations that directly affect policy because the evaluations are made after the PM has determined the policy. Interest groups will sometimes get around this by making credible commitments (at the time PMs are influencing the policy) to influence whoever judges the PM, but the costs of keeping those commitments after the policy has been decided will reduce that influence. I’m not as optimistic about this as Abramowicz is. I expect the effect to be real in some cases, but in many cases the evaluator will effectively be part of the interest group in question.

I just got around to checking out a mailing list devoted to Futarchy. It looks interesting enough that I expect to post a number of messages to it over the next few weeks. But I have some concerns that it is focused too much on problems associated with the final stages of the path to a pure Futarchy rather than on what I see as the more valuable goal of implementing an impure system that involves voters relying heavily on market predictions (which I see as a necessary step to take before people will seriously consider pure Futarchy).
I’m in the process of writing comments on the book Predictocracy, probably too many for one post, and I expect I’ll post some of them only on the futarchy_discuss list.