All posts tagged prediction markets

MIRI has produced a potentially important result (called Garrabrant induction) for dealing with uncertainty about logical facts.

The paper is somewhat hard for non-mathematicians to read. This video provides an easier overview, and more context.

It uses prediction markets! “It’s a financial solution to the computer science problem of metamathematics”.

It shows that we can avoid disturbing conclusions such as Gödel incompleteness and the liar paradox by being merely very confident about logically deducible facts, rather than mathematically certain of them. That’s analogous to the difference between treating beliefs about empirical facts as probabilities rather than as boolean values.

I’m somewhat skeptical that it will have an important effect on AI safety, but my intuition says it will produce enough benefits somewhere that it will become at least as famous as Pearl’s work on causality.

Automated market-making software agents have been used in many prediction markets to deal with problems of low liquidity.

The simplest versions provide a fixed amount of liquidity. This either causes excessive liquidity when trading starts, or too little later.

For instance, in the first year that I participated in the Good Judgment Project, the market maker provided enough liquidity that there was lots of money to be made pushing the market maker price from its initial setting in a somewhat obvious direction toward the market consensus. That meant much of the reward provided by the market maker went to low-value information.

The next year, the market maker provided less liquidity, so the prices moved more readily to a crude estimate of the traders’ beliefs. But then there wasn’t enough liquidity for traders to have an incentive to refine that estimate.

One suggested improvement is to have liquidity increase with increasing trading volume.
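For context, the most common fixed-liquidity design is Hanson’s logarithmic market scoring rule (LMSR), where a single parameter b sets the liquidity once and for all. Here is a minimal sketch (the function names and parameter values are my own illustration, not from any particular implementation):

```python
import math

def lmsr_cost(quantities, b):
    # Hanson's LMSR cost function; the parameter b fixes liquidity permanently.
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_price(quantities, b, outcome):
    # Instantaneous price (probability estimate) of one outcome.
    denom = sum(math.exp(q / b) for q in quantities)
    return math.exp(quantities[outcome] / b) / denom

def trade_cost(quantities, b, outcome, shares):
    # What a trader pays the market maker to buy `shares` of `outcome`.
    after = list(quantities)
    after[outcome] += shares
    return lmsr_cost(after, b) - lmsr_cost(quantities, b)
```

With a large b the same trade barely moves the price (deep liquidity); with a small b it moves the price a lot. Since b cannot change, the designer faces exactly the tradeoff described above, and volume-sensitive variants work by adjusting the effective b as trading accumulates.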

I present some sample Python code below (inspired by equation 18.44 in E.T. Jaynes’ Probability Theory) which uses the prices at which traders have traded against the market maker to generate probability-like estimates of how likely a price is to reflect the current consensus of traders.

This works more like human market makers, in that it provides the most liquidity near prices where there’s been the most trading. If the market settles near one price, liquidity rises. When the market is not trading near prices of prior trades (due to lack of trading or news that causes a significant price change), liquidity is low and prices can change more easily.

I assume that the possible prices a market maker can trade at are integers from 1 through 99 (percent).

When traders are pushing the price in one direction, this is taken as evidence that increases the weight assigned to the most recent price and all others farther in that direction. When traders reverse the direction, that is taken as evidence that increases the weight of the two most recent trade prices.

The resulting weights (p_px in the code) are fractions which should be multiplied by the maximum number of contracts the market maker is willing to offer when liquidity ought to be highest. There is one weight for each price at which the market maker might position itself (in practice the market maker will quote two prices, so the two corresponding weights might be averaged).

There is still room for improvement in this approach, such as giving less weight to old trades after the market acts like it has responded to news. But implementers should test simple improvements before worrying about finding the optimal rules.

# Test on a set of made-up trades: (direction, price) pairs,
# where direction is +1 for a buy and -1 for a sell.
trades = [(1, 51), (1, 52), (1, 53), (-1, 52), (1, 53),
          (-1, 52), (1, 53), (-1, 52), (1, 53), (-1, 52)]

p_px = {}       # probability-like weight for each price
num_agree = {}  # how many trades are consistent with each price

# Possible market maker prices: integers from 1 through 99 (percent).
probability_list = range(1, 100)
num_probabilities = len(probability_list)

for i in probability_list:
    p_px[i] = 1.0 / num_probabilities
    num_agree[i] = 0

num_trades = 0
last_trade = 0
for (buy, price) in trades:
    num_trades += 1
    for i in probability_list:
        if last_trade * buy < 0:
            # Change of direction: credit only the two most recent trade prices.
            if buy < 0 and (i == price or i == price + 1):
                num_agree[i] += 1
            if buy > 0 and (i == price or i == price - 1):
                num_agree[i] += 1
        else:
            # Same direction: credit the latest price and all prices beyond it.
            if buy < 0 and i <= price:
                num_agree[i] += 1
            if buy > 0 and i >= price:
                num_agree[i] += 1
        # Laplace-style smoothing (cf. Jaynes, equation 18.44).
        p_px[i] = (num_agree[i] + 1.0) / (num_trades + num_probabilities)
    last_trade = buy

for i in probability_list:
    print(i, num_agree[i], '%.3f' % p_px[i])
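To turn the weights into actual quotes, something like the following could work. This is only a sketch: MAX_CONTRACTS is an assumed cap, and the averaging of the two quoted prices follows the parenthetical suggestion above.

```python
MAX_CONTRACTS = 100  # assumed cap on the offer when liquidity should be highest

def offer_size(weights, bid, ask, max_contracts=MAX_CONTRACTS):
    # Average the weights at the market maker's two quoted prices,
    # then scale by the maximum number of contracts to offer.
    avg = (weights.get(bid, 0.0) + weights.get(ask, 0.0)) / 2.0
    return int(max_contracts * avg)
```

This keeps offers large near well-traded prices and small elsewhere, matching the behavior described above.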

I’ve made a change to the software which should fix the bug uncovered last weekend.
I’ve restored about half of the liquidity I was providing before last weekend. I believe I can continue to provide the current level of liquidity for at least a few more months unless prices change more than I currently anticipate. I may readjust the amount of liquidity provided in a month or two to increase the chances that I can continue to provide a moderate amount of liquidity until all contracts expire without adding more money to the account.
I’m not making new software public now. I anticipate doing so before the end of November.

To deter any suspicion that the comparisons I plan to make between Intrade’s predictions and polls are comparisons I selected to make Intrade look good, I’m announcing now that I intend to use FiveThirtyEight.com as the primary poll aggregator. I intend to pay attention to predictions that are more long-term than those I focused on in 2004, so the comparison I’ll attach the most importance to will be based on the first snapshot I took of FiveThirtyEight.com’s state by state projections, which was on July 24.

Also, as of last week, one of the Presidential Decision Markets that I’m subsidizing, DEM.PRES-OIL.FUTURES, has attracted enough trading (I suspect from one large trader) to make me reasonably confident that it’s showing the effects of trader opinion rather than the effects of my automated market maker (saying that oil futures will drop if the Democratic candidate wins, and rise if he loses).

This post is a response to a challenge on Overcoming Bias to spend $10 trillion sensibly.
Here’s my proposed allocation (spending to be spread out over 10-20 years):

  • $5 trillion on drug patent buyouts and prizes for new drugs put in the public domain, with the prizes mostly allocated in proportion to the quality adjusted life years attributable to the drug.
  • $1 trillion on establishing a few dozen separate clusters of seasteads and on facilitating migration of people from poor/oppressive countries by rewarding jurisdictions in proportion to the number of immigrants they accept from poorer / less free regions. (I’m guessing that most of those rewards will go to seasteads, many of which will be created by other people partly in hopes of getting some of these rewards).

    This would also have a side effect of significantly reducing the harm that humans might experience due to global warming or an ice age, since ocean climates have less extreme temperatures, and seasteads will probably not depend on rainfall to grow food and can relocate somewhat toward better temperatures.
  • $1 trillion on improving political systems, mostly through prizes that bear some resemblance to The Mo Ibrahim Prize for Achievement in African Leadership (but not limited to democratically elected leaders and not limited to Africa). If the top 100 or so politicians in about 100 countries are eligible, I could set the average reward at about $100 million per person. Of course, nowhere near all of them will qualify, so a fair amount will be left over for those not yet in office.
  • $0.5 trillion on subsidizing trading on prediction markets that are designed to enable futarchy. This level of subsidy is far enough from anything that has been tried that there’s no way to guess whether this is a wasteful level.
  • $1 trillion on existential risks.
    Some unknown fraction of this would go to persuading people not to work on AGI without providing arguments that they will produce a safe goal system for any AI they create. Once I’m satisfied that the risks associated with AI are under control, much of the remaining money will go toward establishing societies in the asteroid belt and then outside the solar system.
  • $0.5 trillion on communications / computing hardware for everyone who can’t currently afford that.
  • $1 trillion I’d save for ideas I think of later.

I’m not counting a bunch of other projects that would use up less than $100 billion since they’re small enough to fit in the rounding errors of the ones I’ve counted (the Methuselah Mouse prize, desalinization and other water purification technologies, developing nanotech, preparing for the risks of nanotech, uploading, cryonics, nature preserves, etc).

Book review: Infotopia: How Many Minds Produce Knowledge by Cass R. Sunstein.
There’s a lot of overlap between James Surowiecki’s The Wisdom of Crowds and Infotopia, but Infotopia is a good deal more balanced and careful to avoid exaggeration. This makes Infotopia less exciting but more likely to convince a thoughtful reader. It devotes a good deal of attention to conditions which make groups less wise than individuals as well as conditions where groups outperform the best individuals.
Infotopia is directed at people who know little about this subject. I found hardly any new insights in it, and few ideas that I disagreed with. Some of its comments will seem too obvious to be worth mentioning to anyone who uses the web much. It’s slightly better than Wisdom of Crowds, but if you’ve already read Wisdom of Crowds you’ll get little out of Infotopia.

Predictocracy (part 2)
Book review: Predictocracy: Market Mechanisms for Public and Private Decision Making by Michael Abramowicz (continued from prior post).
I’m puzzled by his claim that it’s easier to determine a good subsidy for a PM that predicts what subsidy we should use for a basic PM than it is to determine a good subsidy for the basic PM directly. My intuition tells me that at least until traders become experienced with predicting effects of subsidies, the markets that are farther removed from familiar questions will be less predictable. Even with experience, for many of the book’s PMs it’s hard to see what measurable criteria could tell us whether one subsidy level is better than another. There will be some criteria that indicate severely mistaken subsidy levels (zero trading, or enough trading to produce bubbles). But if we try something more sophisticated, such as measuring how accurately PMs with various subsidy levels predict the results of court cases, I predict that we will find some range of subsidies above which increased subsidy produces tiny increases in correlations between PMs and actual trials. Even if we knew that the increased subsidy was producing a more just result, how would we evaluate the tradeoff between justice and the cost of the subsidy? And how would we tell whether the increased subsidy is producing a more just result, or whether the PMs were predicting the actual court cases more accurately by observing effects of factors irrelevant to justice (e.g. the weather on the day the verdict is decided)?
His proposal for self-resolving prediction markets (i.e. markets that predict markets recursively with no grounding in observed results) is bizarre. His arguments about why some of the obvious problems aren’t serious would be fascinating if they didn’t seem pointless due to his failure to address the probably fatal flaw of susceptibility to manipulation.
His description of why short-term PMs may be more resistant to bubbles than stock markets was discredited just as it was being printed. His example of deluded Green Party voters pushing their candidate’s price too high is a near-perfect match for what happened with Ron Paul contracts on Intrade. What Abramowicz missed is that traders betting against Paul needed to tie up a lot more money than traders betting for Paul. High volume futures markets have sophisticated margin rules which mostly eliminate this problem. I expect that low-volume PMs can do the same, but it isn’t easy and companies such as Intrade have only weak motivation to do this.
He suggests that PMs be used to minimize the harm resulting from legislative budget deadlocks by providing tentative funding to projects that PMs predict will receive funding. But if the existence of funding biases legislatures to continue that funding (which appears to be a strong bias, judging by how rare it is for a legislature to stop funding projects), then this proposal would fund many projects that wouldn’t otherwise be funded.
His proposals to use PMs to respond to disasters such as Katrina are poorly thought out. He claims “not much advanced planning of the particular subjects that the markets should cover would be needed”. This appears to underestimate the difficulty of writing unambiguous claims, the time required for traders to understand them, the risks that the agencies creating the PMs will bias the claim wording to the agencies’ advantage, etc. I’d have a lot more confidence in a few preplanned PM claims such as the expected travel times on key sections of roads used in evacuations.
I expect to have additional comments on Predictocracy later this month; they may be technical enough that I will post them only on the futarchy_discuss mailing list.

Book review: Predictocracy: Market Mechanisms for Public and Private Decision Making by Michael Abramowicz.
This had the potential to be an unusually great book, which makes its shortcomings rather frustrating. It is loaded with good ideas, but it’s often hard to distinguish the good ideas from the bad ideas, and the arguments for the good ideas aren’t as convincing as I hoped.
The book’s first paragraph provides a frustratingly half-right model of why markets produce better predictions than alternative institutions, involving a correlation between confidence (or sincerity) and correctness. If trader confidence was the main mechanism by which markets produce accurate predictions, I’d be pretty reluctant to believe the evidence that Abramowicz presents of their success. Sincerity is hard to measure, so I don’t know what to think of its effects. A layman reading this book would have trouble figuring out that the main force for accurate predictions is that the incentives alter traders’ reasoning so that it becomes more accurate.
The book brings a fresh perspective to an area where there are few enough perspectives that any new perspective is valuable when it’s not clearly wrong. He is occasionally clearer than others. For instance, his figure 4.1 enabled me to compare three scoring rules in a few seconds (I’d previously been unwilling to do the equivalent by reading equations).
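For readers who, like me, find code easier than equations: the quadratic, logarithmic, and spherical rules are the three scoring rules most commonly compared in this literature (I’m assuming, without checking, that these are the three in figure 4.1). A minimal sketch:

```python
import math

def quadratic_score(probs, outcome):
    # Brier/quadratic rule: 2*p(outcome) minus the sum of squared probabilities.
    return 2 * probs[outcome] - sum(p * p for p in probs)

def log_score(probs, outcome):
    # Logarithmic rule: log of the probability assigned to what happened.
    return math.log(probs[outcome])

def spherical_score(probs, outcome):
    # Spherical rule: probability of the outcome divided by the report's L2 norm.
    return probs[outcome] / math.sqrt(sum(p * p for p in probs))
```

All three are proper scoring rules: a trader maximizes expected score by reporting true beliefs, which is what makes them usable as market maker mechanisms.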
He advocates some very fine-grained uses of prediction markets (PMs), which is a sharp contrast to my expectation that they are mainly valuable for important issues. Abramowicz has a very different intuition than I do about how much it costs to run a prediction market for an issue that people normally don’t find interesting. For instance, he wants to partly replace small claims court cases with prediction markets for individual cases. I’m fairly sure that obvious ways to do that would require market subsidies much larger than current court costs. The only way I can imagine PMs becoming an affordable substitute for small claims courts would be if most of the decisions involved were done by software. Even then it’s not obvious why one or more PM per court case would be better than a few more careful evaluations of whether to turn those decisions over to software.
He goes even further when proposing PMs to assess niceness, claiming that “just a few dollars’ worth of subsidy per person” would be adequate to assess people’s niceness. Assuming the PM requires human traders, that cost estimate seems several orders of magnitude too low (not to mention the problems with judging such PMs).
His idea of “the market web” seems like a potentially valuable idea for a new way of coordinating diverse decisions.
He convinced me that Predictocracy will solve a larger fraction of democracy’s problems than I initially expected, but I see little reason to believe that it will work as well as Futarchy will. I see important classes of systematic biases (e.g. the desire of politicians and bureaucrats to acquire more power than the rest of us should want) that Futarchy would reduce but which Predictocracy doesn’t appear to alter.
Abramowicz provides reasons to hope that predictions of government decisions 10+ years in the future will help remove partisan components of decisions and quirks of particular decision makers because uncertainty over who will make decisions at that time will cause PMs to average forecasts over several possible decision makers.
He claims evaluations used to judge a PM are likely to be less politicized than evaluations that directly affect policy because the evaluations are made after the PM has determined the policy. Interest groups will sometimes get around this by making credible commitments (at the time PMs are influencing the policy) to influence whoever judges the PM, but the costs of keeping those commitments after the policy has been decided will reduce that influence. I’m not as optimistic about this as Abramowicz is. I expect the effect to be real in some cases, but in many cases the evaluator will effectively be part of the interest group in question.

I just got around to checking out a mailing list devoted to Futarchy. It looks interesting enough that I expect to post a number of messages to it over the next few weeks. But I have some concerns that it is focused too much on problems associated with the final stages of the path to a pure Futarchy rather than on what I see as the more valuable goal of implementing an impure system that involves voters relying heavily on market predictions (which I see as a necessary step to take before people will seriously consider pure Futarchy).
I’m in the process of writing comments on the book Predictocracy, probably too many for one post, and I expect I’ll post some of them only on the futarchy_discuss list.