All posts tagged amm

Manifold Markets is a prediction market platform where I’ve been trading since September. This post will compare it to other prediction markets that I’ve used.

Play Money

The most important fact about Manifold is that traders bet mana, which is for most purposes not real money. You can buy mana, and use mana to donate real money to charity. That’s not attractive enough for most of us to treat it as anything other than play money.

Play money has the important advantage of not being subject to CFTC regulation or gambling laws. That enables a good deal of innovation that is stifled in real-money platforms that are open to US residents.


Book review: Superforecasting: The Art and Science of Prediction, by Philip E. Tetlock and Dan Gardner.

This book reports on the Good Judgment Project (GJP).

Much of the book recycles old ideas: 40% of the book is a rerun of Thinking, Fast and Slow, 15% repeats The Wisdom of Crowds, and 15% rehashes How to Measure Anything. Those three books were good enough that it’s very hard to improve on them. Superforecasting nearly matches their quality, but most people ought to read those three books instead. (Anyone who still wants more after reading them will get decent value out of the last 4 or 5 chapters of Superforecasting.)

The book is very readable, written in an almost Gladwell-like style (a large contrast to Tetlock’s previous, more scholarly book), at a moderate cost in substance. It contains memorable phrases, such as “a fox with the bulging eyes of a dragonfly” (to describe looking at the world through many perspectives).


Automated market-making software agents have been used in many prediction markets to deal with problems of low liquidity.

The simplest versions provide a fixed amount of liquidity, which tends to be either excessive when trading starts or too little later.

For instance, in the first year that I participated in the Good Judgment Project, the market maker provided enough liquidity that there was lots of money to be made pushing the market maker price from its initial setting in a somewhat obvious direction toward the market consensus. That meant much of the reward provided by the market maker went to low-value information.

The next year, the market maker provided less liquidity, so the prices moved more readily to a crude estimate of the traders’ beliefs. But then there wasn’t enough liquidity for traders to have an incentive to refine that estimate.

One suggested improvement is to have liquidity increase with increasing trading volume.

I present some sample Python code below (inspired by equation 18.44 in E.T. Jaynes’ Probability Theory) which uses the prices at which traders have traded against the market maker to generate probability-like estimates of how likely a price is to reflect the current consensus of traders.

This works more like human market makers, in that it provides the most liquidity near prices where there’s been the most trading. If the market settles near one price, liquidity rises. When the market is not trading near prices of prior trades (due to lack of trading or news that causes a significant price change), liquidity is low and prices can change more easily.

I assume that the possible prices a market maker can trade at are integers from 1 through 99 (percent).

When traders are pushing the price in one direction, this is taken as evidence that increases the weight assigned to the most recent price and all others farther in that direction. When traders reverse the direction, that is taken as evidence that increases the weight of the two most recent trade prices.

The resulting weights (p_px in the code) are fractions that should be multiplied by the maximum number of contracts the market maker is willing to offer when liquidity ought to be highest. There is one weight for each price at which the market maker might position itself; since the market maker will actually be positioned at two prices, perhaps the two weights ought to be averaged. A sketch of this conversion follows the code below.

There is still room for improvement in this approach, such as giving less weight to old trades after the market acts like it has responded to news. But implementers should test simple improvements before worrying about finding the optimal rules.

# Made-up trades for testing: each entry is (direction, price), where direction
# is 1 for a trader buying from the market maker and -1 for a trader selling to it.
trades = [(1, 51), (1, 52), (1, 53), (-1, 52), (1, 53),
          (-1, 52), (1, 53), (-1, 52), (1, 53), (-1, 52)]
p_px = {}       # weight assigned to each possible price
num_agree = {}  # count of trades consistent with each price being the consensus

probability_list = list(range(1, 100))  # possible prices, in percent
num_probabilities = len(probability_list)

for i in probability_list:
    p_px[i] = 1.0 / num_probabilities
    num_agree[i] = 0

num_trades = 0
last_trade = 0
for (buy, price) in trades:
    num_trades += 1
    for i in probability_list:
        if last_trade * buy < 0:  # change of direction: credit the two most recent prices
            if buy < 0 and (i == price or i == price + 1):
                num_agree[i] += 1
            if buy > 0 and (i == price or i == price - 1):
                num_agree[i] += 1
        else:  # same direction: credit this price and all prices farther that way
            if buy < 0 and i <= price:
                num_agree[i] += 1
            if buy > 0 and i >= price:
                num_agree[i] += 1
        # add-one (Laplace-style) smoothing keeps a small weight on unvisited prices
        p_px[i] = (num_agree[i] + 1.0) / (num_trades + num_probabilities)
    last_trade = buy

for i in probability_list:
    print(i, num_agree[i], '%.3f' % p_px[i])
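
One way to turn these weights into actual order sizes, following the multiplication described above (the maximum of 1000 contracts and the bid/ask averaging are illustrative choices of mine, not part of the original scheme):

MAX_CONTRACTS = 1000  # illustrative maximum the market maker would ever offer

def order_size(bid, ask, weights=p_px, max_contracts=MAX_CONTRACTS):
    # Average the weights at the two prices the market maker is quoting,
    # then scale by the maximum order size; truncate to whole contracts.
    avg_weight = (weights[bid] + weights[ask]) / 2.0
    return int(max_contracts * avg_weight)

# e.g. size the orders when quoting 52 bid / 53 ask after the trades above
print(order_size(52, 53))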

[See here and here for some context.]

John Salvatier has drawn my attention to a paper describing A Practical Liquidity-Sensitive Automated Market Maker [pdf], which fixes some of the drawbacks of the automated market maker that Robin Hanson proposed.

Most importantly, it provides a good chance that the market maker makes money in roughly the manner that a profit-oriented human market maker would.

It starts out by providing a small amount of liquidity, and increases the amount of liquidity it provides as it profits from providing liquidity. This allows markets to initially make large moves in response to a small amount of trading volume, and then as a trading range develops that reflects agreement among traders, it takes increasingly large amounts of money to move the price.
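
The core idea, as I understand the paper, is to replace the fixed liquidity parameter in Hanson’s logarithmic market scoring rule with one that grows in proportion to the quantities the market maker has sold. A rough sketch of that cost function in Python (the value of ALPHA and the seed quantities are my own illustrative choices):

import math

ALPHA = 0.05  # controls how fast liquidity grows with holdings (illustrative)

def b(q):
    # the liquidity parameter grows with the outstanding quantities sold
    return ALPHA * sum(q)

def cost(q):
    # liquidity-sensitive cost function: C(q) = b(q) * log(sum_i exp(q_i / b(q)))
    bq = b(q)
    return bq * math.log(sum(math.exp(qi / bq) for qi in q))

def charge(q, outcome, shares):
    # amount a trader pays to buy `shares` of `outcome` from the market maker
    q_new = list(q)
    q_new[outcome] += shares
    return cost(q_new) - cost(q)

# The state must be seeded with nonzero quantities so that b(q) > 0.
print(charge([1.0, 1.0], 0, 10))      # an early purchase moves the price a lot
print(charge([101.0, 101.0], 0, 10))  # the same purchase after more volume costs less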

A disadvantage of following this approach is that it provides little reward to being one of the first traders. If traders need to do a fair amount of research to evaluate the contract being traded, it may be that nobody is willing to inform himself without an expectation that trading volume will become significant. Robin Hanson’s version of the market maker is designed to subsidize this research. If we can predict that several traders will actively trade the contract without a clear-cut subsidy, then the liquidity-sensitive version of the market maker is likely to be appropriate. If we can predict that a subsidy is needed to generate trading activity, then the best approach is likely to be some combination of the two versions. The difficulty of predicting how much subsidy is needed to generate trading volume leaves much uncertainty.
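
One simple way such a combination might look (my own guess, not something from the paper): keep a fixed initial subsidy, as in Hanson’s market maker, and let liquidity grow with holdings on top of it.

B0 = 20.0      # fixed initial subsidy, as in Hanson's market maker (illustrative)
GROWTH = 0.05  # how fast liquidity grows with holdings (illustrative)

def b_combined(q):
    # a liquidity parameter with a subsidized floor plus a volume-sensitive term;
    # the cost function sketched above could use this in place of b(q)
    return B0 + GROWTH * sum(q)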

[Updated 2010-07-01:
I’ve reread the paper more carefully in response to John’s question, and I see I was confused by the reference to “a variable b(q) that increases with market volume”. It seems that it is almost unrelated to what I think of as market volume, and is probably better described as related to the market maker’s holdings.

That means that the subsidy is less concentrated on later trading than I originally thought. If the first trader moves the price most of the way to the final price, he gets most of the subsidy. If the first trader is hesitant and wants to see that other traders don’t quickly find information that causes them to bet much against the first trader, then the first trader probably gets a good deal less subsidy under the new algorithm. The latter comes closer to describing how I approach trading on an Intrade contract where I’m the first to place orders.

I also wonder about the paper’s goal of preserving path independence. It seems to provide some mathematical elegance, but I suspect the market maker can do better if it is allowed to make a profit when the market cycles back to a prior state.
]

I’ve made a change to the software which should fix the bug uncovered last weekend.
I’ve restored about half of the liquidity I was providing before last weekend. I believe I can continue to provide the current level of liquidity for at least a few more months unless prices change more than I currently anticipate. I may readjust the amount of liquidity provided in a month or two to increase the chances that I can continue to provide a moderate amount of liquidity until all contracts expire without adding more money to the account.
I’m not making new software public now. I anticipate doing so before the end of November.

Last night an Intrade trader found and exploited a bug in my Automated Market Maker, manipulating DEM.PRES-TROOPS.IRAQ until Intrade rejected one of the market maker’s orders for lack of credit and the software shut down.
The bug involves handling of partial executions of orders, and doesn’t appear to be easily fixable (what happened looks nearly identical to the scenarios I had analyzed and thought I had guarded against).
For the moment, I’ve reduced the market maker’s order size to one contract, which will prevent further exploitation but provide much less liquidity.
I will try to fix the bug sometime in November and increase the order size (on the contracts that don’t get expired at election time) by as much as I can without adding more money to the market maker’s account. I will also analyze the information provided by the markets shortly after the election.

I have implemented subsidies to encourage trading of some conditional prediction market contracts that may provide useful information about the consequences of the 2008 presidential election, via a simple automated market maker (using an algorithm described near the end of http://hanson.gmu.edu/ifextropy.html). The subsidized market maker ought to provide incentives for traders to devote more thought to these contracts than they would if the liquidity were less predictable.
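
The linked article has the exact rules; roughly, the behavior is a market maker that keeps standing bid and ask orders of fixed size around its current price and walks that price toward whichever side traders take. A minimal sketch of that behavior (the order size, one-point tick, and starting price here are illustrative, not the actual contract parameters):

class SimpleMarketMaker:
    # Sketch of a price-walking market maker: it keeps a standing bid one point
    # below and an ask one point above its current price, each for a fixed
    # number of contracts, and steps the price toward whichever side gets filled.

    def __init__(self, start_price=50, order_size=10):
        self.price = start_price      # current price, in percent (1 to 99)
        self.order_size = order_size  # contracts offered on each side

    def quotes(self):
        # the standing orders kept on the book
        return {'bid': self.price - 1, 'ask': self.price + 1, 'size': self.order_size}

    def on_fill(self, side):
        # a trader bought at the ask or sold at the bid; step the price that way
        if side == 'ask':
            self.price = min(99, self.price + 1)
        elif side == 'bid':
            self.price = max(1, self.price - 1)
        return self.quotes()

mm = SimpleMarketMaker()
print(mm.quotes())        # {'bid': 49, 'ask': 51, 'size': 10}
print(mm.on_fill('ask'))  # a buy steps the price up to 51
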
Intrade has agreed not to charge any trading or expiry fees on these contracts.
Some places to look for extensive description of the motivations behind these subsidies are here and here.

The contracts are:

Please read the detailed specifications at Intrade before trading them, as one-line descriptions are not sufficient for you to fully understand them.
For the first two of those contracts, the market maker will enter bids and asks of 38 contracts, and can lose a maximum of $5187.76 on each contract. For the other four contracts, the market maker will enter bids and asks of 115 contracts, and can lose a maximum of $7906.25 on each contract.
I will maintain a web page here devoted to these contracts.
See also this more eloquent description on Overcoming Bias.