Idea Futures

Here are some scattered comments about the 2024 elections.

I was glad to have Manifold Markets and Election Betting Odds to watch the results. I want numbers, unencumbered by the storytelling and emotions of the news media. I also watched the odds that Nate Silver tried to update, but that effort was a total flop.

Peak Polarization

I see many weak hints that the polarization of the US has subsided compared to 2020.

Book review: On the Edge: The Art of Risking Everything, by Nate Silver.

Nate Silver’s latest work straddles the line between journalistic inquiry and subject matter expertise.

“On the Edge” offers a valuable lens through which to understand analytical risk-takers.

The River versus The Village

Silver divides the interesting parts of the world into two tribes.

On his side, we have “The River” – a collection of eccentrics typified by Silicon Valley entrepreneurs and professional gamblers, who tend to be analytical, abstract, decoupling, competitive, critical, independent-minded (contrarian), and risk-tolerant.

Manifold Markets is a prediction market platform where I’ve been trading since September. This post will compare it to other prediction markets that I’ve used.

Play Money

The most important fact about Manifold is that traders bet mana, which is for most purposes not real money. You can buy mana, and use mana to donate real money to charity. That’s not attractive enough for most of us to treat it as anything other than play money.

Play money has the important advantage of not being subject to CFTC regulation or gambling laws. That enables a good deal of innovation that is stifled in real-money platforms that are open to US residents.

Book review: Superforecasting: The Art and Science of Prediction, by Philip E. Tetlock and Dan Gardner.

This book reports on the Good Judgment Project (GJP).

Much of the book recycles old ideas: 40% of the book is a rerun of Thinking Fast and Slow, 15% of the book repeats Wisdom of Crowds, and 15% of the book rehashes How to Measure Anything. Those three books were good enough that it’s very hard to improve on them. Superforecasting nearly matches their quality, but most people ought to read those three books instead. (Anyone who still wants more after reading them will get decent value out of reading the last 4 or 5 chapters of Superforecasting).

The book is very readable, written in an almost Gladwell-like style (a large contrast to Tetlock’s previous, more scholarly book), at a moderate cost in substance. It contains memorable phrases, such as “a fox with the bulging eyes of a dragonfly” (to describe looking at the world through many perspectives).

Automated market-making software agents have been used in many prediction markets to deal with problems of low liquidity.

The simplest versions provide a fixed amount of liquidity. This either causes excessive liquidity when trading starts, or too little later.

For instance, in the first year that I participated in the Good Judgment Project, the market maker provided enough liquidity that there was lots of money to be made pushing the market maker price from its initial setting in a somewhat obvious direction toward the market consensus. That meant much of the reward provided by the market maker went to low-value information.

The next year, the market maker provided less liquidity, so the prices moved more readily to a crude estimate of the traders’ beliefs. But then there wasn’t enough liquidity for traders to have an incentive to refine that estimate.
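To make “a fixed amount of liquidity” concrete, here is a minimal sketch of a market maker in the style of Robin Hanson’s logarithmic market scoring rule (LMSR) for a yes/no contract, with a constant liquidity parameter b. I don’t know the details of GJP’s implementation, so treat the numbers and function names as illustrative assumptions; the point is that with a fixed b, a given trade moves the price by the same amount whether the market is brand new or mature.

import math

def lmsr_cost(q_yes, q_no, b):
    # Hanson's LMSR cost function; b is the fixed liquidity parameter.
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def lmsr_price(q_yes, q_no, b):
    # Instantaneous price of the "yes" contract, between 0 and 1.
    e_yes = math.exp(q_yes / b)
    return e_yes / (e_yes + math.exp(q_no / b))

def cost_to_buy(q_yes, q_no, b, shares):
    # What a trader pays the market maker for `shares` of "yes".
    return lmsr_cost(q_yes + shares, q_no, b) - lmsr_cost(q_yes, q_no, b)

b = 100.0  # the same b whether the market is new or mature (assumed value)
print(lmsr_price(0, 0, b))       # 0.5 at the start
print(cost_to_buy(0, 0, b, 50))  # cost of the first 50 "yes" shares
print(lmsr_price(50, 0, b))      # price after that trade

With a two-outcome contract, the market maker’s maximum loss is b times ln(2), so choosing b amounts to choosing the subsidy, and there is no way for a fixed b to make that subsidy small at the start and larger once a trading range develops.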

One suggested improvement is to have liquidity increase with increasing trading volume.

I present some sample Python code below (inspired by equation 18.44 in E.T. Jaynes’ Probability Theory) which uses the prices at which traders have traded against the market maker to generate probability-like estimates of how likely a price is to reflect the current consensus of traders.

This works more like human market makers, in that it provides the most liquidity near prices where there’s been the most trading. If the market settles near one price, liquidity rises. When the market is not trading near prices of prior trades (due to lack of trading or news that causes a significant price change), liquidity is low and prices can change more easily.

I assume that the possible prices a market maker can trade at are integers from 1 through 99 (percent).

When traders are pushing the price in one direction, this is taken as evidence that increases the weight assigned to the most recent price and all others farther in that direction. When traders reverse the direction, that is taken as evidence that increases the weight of the two most recent trade prices.

The resulting weights (p_px in the code) are fractions that should be multiplied by the maximum number of contracts the market maker is willing to offer when liquidity ought to be at its highest. There is one weight for each price at which the market maker might position itself; since the market maker will actually straddle two prices, the two weights probably ought to be averaged.

There is still room for improvement in this approach, such as giving less weight to old trades after the market acts like it has responded to news. But implementers should test simple improvements before worrying about finding the optimal rules.

trades = [(1, 51), (1, 52), (1, 53), (-1, 52), (1, 53), (-1, 52), (1, 53), (-1, 52), (1, 53), (-1, 52)]  # (direction, price): 1 = buy from the market maker, -1 = sell
p_px = {}       # weight assigned to each possible price
num_agree = {}  # number of trades consistent with each price being the consensus

probability_list = range(1, 100)  # possible prices: 1 through 99 (percent)
num_probabilities = len(probability_list)

for i in probability_list:
    p_px[i] = 1.0 / num_probabilities
    num_agree[i] = 0

num_trades = 0
last_trade = 0
for (buy, price) in trades: # test on a set of made-up trades
    num_trades += 1
    for i in probability_list:
        if last_trade * buy < 0: # change of direction: credit the two most recent trade prices
            if buy < 0 and (i == price or i == price + 1):
                num_agree[i] += 1
            if buy > 0 and (i == price or i == price - 1):
                num_agree[i] += 1
        else: # same direction: credit this price and all prices farther in that direction
            if buy < 0 and i <= price:
                num_agree[i] += 1
            if buy > 0 and i >= price:
                num_agree[i] += 1
        # Laplace-style smoothing, in the spirit of Jaynes' equation 18.44
        p_px[i] = (num_agree[i] + 1.0) / (num_trades + num_probabilities)
    last_trade = buy

for i in probability_list:
    print(i, num_agree[i], '%.3f' % p_px[i])
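
As a rough usage sketch, the weights computed above could be turned into offer sizes along the following lines. The max_contracts value and the choice to average the weights of the two prices the market maker straddles are my own illustrative assumptions, following the caveat above about averaging.

max_contracts = 100  # offer size where liquidity ought to be highest (assumed value)

def offer_size(bid, ask):
    # Average the weights of the two prices the market maker straddles,
    # then scale by the maximum offer size.
    weight = (p_px[bid] + p_px[ask]) / 2.0
    return int(round(max_contracts * weight))

# e.g. if the market maker is currently bidding 52 and asking 53:
print(offer_size(52, 53))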

The CFTC is suing Intrade for apparently allowing U.S. residents to trade contracts on gold, unemployment rates and a few others that it had agreed to prevent U.S. residents from trading. The CFTC is apparently not commenting on whether Intrade’s political contracts violate any laws.

U.S. traders like me will need to close our accounts.

The email I got says

In the near future we’ll announce plans for a new exchange model that will allow legal participation from all jurisdictions – including the US.

(no statement about whether it will involve real money, which suggests that it won’t).

I had already been considering closing my account because of the hassle of figuring out my Intrade income for tax purposes.

Book review: The Signal and the Noise: Why So Many Predictions Fail-but Some Don’t by Nate Silver.

This is a well-written book about the challenges associated with making predictions. But nearly all the ideas in it were ones I was already familiar with.

I agree with nearly everything the book says. But I’ll mention two small disagreements.

He claims that 0 and 100 percent are probabilities. Many Bayesians dispute that. He has a logically consistent interpretation and doesn’t claim it’s ever sane to believe something with probability 0 or 100 percent, so I’m not sure the difference matters, but rejecting the idea that those can represent probabilities seems at least like a simpler way of avoiding mistakes.

When pointing out the weak correlation between calorie consumption and obesity, he says he doesn’t know of an “obesity skeptics” community that would be comparable to the global warming skeptics. In fact there are people (e.g. Dave Asprey) who deny that excess calories cause obesity (with better tests than the global warming skeptics).

It would make sense to read this book instead of alternatives such as Moneyball and Tetlock’s Expert Political Judgment, but if you’ve been reading books in this area already this one won’t seem important.

[See here and here for some context.]

John Salvatier has drawn my attention to a paper describing A Practical Liquidity-Sensitive Automated Market Maker [pdf] which fixes some of the drawbacks of the Automated Market Maker that Robin Hanson proposed.

Most importantly, it provides a good chance that the market maker makes money in roughly the manner that a profit-oriented human market maker would.

It starts out by providing a small amount of liquidity, and increases the amount of liquidity it provides as it profits from providing liquidity. This allows markets to initially make large moves in response to a small amount of trading volume, and then as a trading range develops that reflects agreement among traders, it takes increasingly large amounts of money to move the price.

A disadvantage of following this approach is that it provides little reward to being one of the first traders. If traders need to do a fair amount of research to evaluate the contract being traded, it may be that nobody is willing to inform himself without an expectation that trading volume will become significant. Robin Hanson’s version of the market maker is designed to subsidize this research. If we can predict that several traders will actively trade the contract without a clear-cut subsidy, then the liquidity-sensitive version of the market maker is likely to be appropriate. If we can predict that a subsidy is needed to generate trading activity, then the best approach is likely to be some combination of the two versions. The difficulty of predicting how much subsidy is needed to generate trading volume leaves much uncertainty.
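For concreteness, here is a minimal sketch of the liquidity-sensitive market maker as I understand the paper: instead of a constant b, the liquidity parameter grows with the shares the market maker has sold, roughly b(q) = alpha * (q_yes + q_no). The alpha value, function names, and seed quantities are my own illustrative assumptions, not code from the paper.

import math

alpha = 0.05  # controls how quickly liquidity grows (assumed value)

def b_of_q(q_yes, q_no):
    # The liquidity parameter grows with the shares the market maker has sold.
    return alpha * (q_yes + q_no)

def cost(q_yes, q_no):
    b = b_of_q(q_yes, q_no)
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def cost_to_buy(q_yes, q_no, shares):
    # What a trader pays the market maker for `shares` of "yes".
    return cost(q_yes + shares, q_no) - cost(q_yes, q_no)

def approx_price(q_yes, q_no):
    # Softmax price; the paper's exact quoted prices include an extra spread
    # term (they sum to slightly more than 1), which is where the market
    # maker's profit comes from.
    b = b_of_q(q_yes, q_no)
    e_yes = math.exp(q_yes / b)
    return e_yes / (e_yes + math.exp(q_no / b))

# Seed the market maker with a small initial stake so that b(q) > 0, then
# compare the price impact of the same-sized trade early versus after volume.
print(approx_price(1 + 10, 1))      # early: 10 shares push the price near 1.0
print(approx_price(101 + 10, 101))  # later: the same trade moves it to ~0.72

The same 10-share purchase that nearly pins the price at the start barely moves it once a couple hundred shares are outstanding, which is the behavior described above.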

[Updated 2010-07-01:
I’ve reread the paper more carefully in response to John’s question, and I see I was confused by the reference to “a variable b(q) that increases with market volume”. It seems that it is almost unrelated to what I think of as market volume, and is probably better described as related to the market maker’s holdings.

That means that the subsidy is less concentrated on later trading than I originally thought. If the first trader moves the price most of the way to the final price, he gets most of the subsidy. If the first trader is hesitant and wants to see that other traders don’t quickly find information that causes them to bet much against the first trader, then the first trader probably gets a good deal less subsidy under the new algorithm. The latter comes closer to describing how I approach trading on an Intrade contract where I’m the first to place orders.

I also wonder about the paper’s goal of preserving path independence. It seems to provide some mathematical elegance, but I suspect the market maker can do better if it is allowed to make a profit if the market cycles back to a prior state.
]