Automated market-making software agents have been used in many prediction markets to deal with problems of low liquidity.
The simplest versions provide a fixed amount of liquidity. A fixed amount is either excessive when trading starts or too little later on.
For instance, in the first year that I participated in the Good Judgment Project, the market maker provided enough liquidity that there was lots of money to be made pushing the market maker price from its initial setting in a somewhat obvious direction toward the market consensus. That meant much of the reward provided by the market maker went to low-value information.
The next year, the market maker provided less liquidity, so the prices moved more readily to a crude estimate of the traders’ beliefs. But then there wasn’t enough liquidity for traders to have an incentive to refine that estimate.
One suggested improvement is to have liquidity increase as trading volume increases.
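As an illustration of that idea (not the approach developed below), here is a minimal sketch assuming an LMSR-style market maker; the b0 and alpha parameters and the linear growth rule are my own illustrative choices:

import math

def lmsr_price(q_yes, q_no, b):
    """Instantaneous price of the 'yes' contract under a logarithmic
    market scoring rule (LMSR) with liquidity parameter b."""
    e_yes = math.exp(q_yes / b)
    e_no = math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

def liquidity(volume, b0=10.0, alpha=0.05):
    # Larger b means prices move less per contract traded: starting small
    # and growing with cumulative volume keeps early prices easy to move.
    return b0 + alpha * volume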
I present some sample Python code below (inspired by equation 18.44 in E.T. Jaynes’ Probability Theory) which uses the prices at which traders have traded against the market maker to generate probability-like estimates of how likely a price is to reflect the current consensus of traders.
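Concretely, the weight the code below assigns to price i after N trades is (num_agree[i] + 1) / (N + 99), a multi-category form of Laplace's rule of succession: num_agree[i] counts the trades treated as evidence for price i, and 99 is the number of possible prices.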
This works more like human market makers, in that it provides the most liquidity near prices where there’s been the most trading. If the market settles near one price, liquidity rises. When the market is not trading near prices of prior trades (due to lack of trading or news that causes a significant price change), liquidity is low and prices can change more easily.
I assume that the possible prices a market maker can trade at are integers from 1 through 99 (percent).
When traders are pushing the price in one direction, this is taken as evidence that increases the weight assigned to the most recent price and all others farther in that direction. When traders reverse the direction, that is taken as evidence that increases the weight of the two most recent trade prices.
The resulting weights (p_px in the code) are fractions that should be multiplied by the maximum number of contracts the market maker is willing to offer when liquidity ought to be highest. There is one weight for each price at which the market maker might position itself; since the market maker will actually quote two prices (a bid and an ask), perhaps the two weights ought to be averaged.
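A minimal sketch of that scaling, assuming a hypothetical max_contracts cap and the averaging just suggested:

def contracts_to_offer(bid, ask, p_px, max_contracts=1000):
    # Scale the market maker's maximum size by the average of the weights
    # at its two quoted prices; max_contracts = 1000 is an illustrative cap.
    return int(max_contracts * (p_px[bid] + p_px[ask]) / 2.0)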
There is still room for improvement in this approach, such as giving less weight to old trades after the market acts like it has responded to news. But implementers should test simple improvements before worrying about finding the optimal rules.
trades = [(1, 51), (1, 52), (1, 53), (-1, 52), (1, 53),
          (-1, 52), (1, 53), (-1, 52), (1, 53), (-1, 52)]

p_px = {}       # probability-like weight for each price
num_agree = {}  # count of trades treated as evidence for each price
probability_list = range(1, 100)  # possible prices: integer percents 1-99
num_probabilities = len(probability_list)
for i in probability_list:
    p_px[i] = 1.0 / num_probabilities  # start with uniform weights
    num_agree[i] = 0
num_trades = 0
last_trade = 0
for (buy, price) in trades:  # test on a set of made-up trades
    num_trades += 1
    for i in probability_list:
        if last_trade * buy < 0:  # change of direction: credit only
            # the two most recent trade prices
            if buy < 0 and (i == price or i == price + 1):
                num_agree[i] += 1
            if buy > 0 and (i == price or i == price - 1):
                num_agree[i] += 1
        else:  # same direction: credit this price and all beyond it
            if buy < 0 and i <= price:
                num_agree[i] += 1
            if buy > 0 and i >= price:
                num_agree[i] += 1
        # Laplace-style estimate: (evidence + 1) / (trades + price count)
        p_px[i] = (num_agree[i] + 1.0) / (num_trades + num_probabilities)
    last_trade = buy
for i in probability_list:
    print(i, num_agree[i], '%.3f' % p_px[i])
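On the made-up trades above, the printed weights peak where trading clustered: num_agree reaches 9 at a price of 52 and 10 at 53, giving weights of roughly 0.092 and 0.101, while prices below 51, where nothing traded, stay at 1/109 ≈ 0.009.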