Back in 2008, I criticized the book Predictocracy for proposing prediction markets whose contracts would be resolved without reference to ground truth.
Recently, Srinivasan, Karger, and Chen (SKC) published a more scholarly paper titled Self-Resolving Prediction Markets for Unverifiable Outcomes.
Manipulation
In the naive version of self-resolving markets that I think Predictocracy intended, the market price at some closing time is used to pay off participants. That means a manipulator can enter the market as a trader, and trade so as to drive the market price in whatever direction they want. Unlike in markets that are resolved by ground truth, there's no reliable reward for other traders to offset this distortion, so it seems likely that manipulators will sometimes be able to set the price wherever they want.
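To make that incentive gap concrete, here's a toy expected-value calculation. The numbers (a manipulated price of 0.90 against an honest consensus of 0.30) are hypothetical illustrations, not anything from Predictocracy:

```python
price_after_manipulation = 0.90  # where the manipulator parked the price
honest_belief = 0.30             # honest traders' probability estimate

# Ground-truth market: sell a share at 0.90, settle at the resolution
# value, which pays 1 with probability 0.30 and 0 otherwise.
expected_resolution = honest_belief
print(f"correcting pays (ground truth):   {price_after_manipulation - expected_resolution:+.2f}")

# Naive self-resolving market: the share settles at the closing price.
# If the manipulator holds the price at 0.90 through the close, the
# corrective seller just buys back at 0.90.
expected_settlement = price_after_manipulation
print(f"correcting pays (self-resolving): {price_after_manipulation - expected_settlement:+.2f}")

# +0.60 per share versus +0.00: with no reward for correction, nothing
# offsets the manipulator, who can set the price wherever they want.
```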
SKC replace the standard prediction market approach with a sequential peer prediction mechanism, where the system elicits predictions rather than prices, and a separate step aggregates the individual predictions (as in Metaculus).
SKC propose that instead of ground truth or market prices, the market can be closed at a random time, and the prediction of the last trader to act is used to determine the rewards of most of the other traders. (Much of the paper involves fancy math to quantify the rewards. I don't want to dive into that.)
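Here's a minimal sketch of that flow. The geometric stopping rule and the quadratic score are simplifying stand-ins I've chosen for illustration; the paper's actual payment rule is the fancy math I'm skipping:

```python
import random

def settle(reports, stop_prob=0.1, rng=random):
    """reports: probability predictions, in arrival order."""
    n = 1
    while n < len(reports) and rng.random() > stop_prob:
        n += 1                     # market survives to accept another report
    accepted = reports[:n]
    reference = accepted[-1]       # final trader's prediction stands in for
                                   # the unverifiable ground truth
    # Quadratic score against the reference: each earlier trader maximizes
    # their expected score by reporting their best estimate of what the
    # final prediction will be. The final trader is rewarded separately.
    return [1 - (p - reference) ** 2 for p in accepted[:-1]]

random.seed(1)
print(settle([0.40, 0.35, 0.30, 0.32, 0.31]))
```

The key property is that an earlier trader's best strategy is to forecast the final report, which also makes that final report the natural pressure point for manipulation.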
That suggests that in a market with N traders, M of whom are manipulating the price in a particular direction, the chance of the final rewards being distorted by manipulation is roughly M/N, since the randomly timed close makes each trader about equally likely to be the final one. That's grounds for some concern, but it's an important improvement over the naive self-resolving market. The cost of manipulation can be made fairly high if the market can attract many truthful traders.
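A quick simulation matches that estimate, under the simplifying assumption that the close is equally likely to land on any trader (which needn't hold if manipulators trade more often than honest traders):

```python
import random

def distortion_rate(n_traders=100, n_manip=5, trials=100_000, rng=random):
    hits = 0
    for _ in range(trials):
        final = rng.randrange(n_traders)  # who happens to trade last
        if final < n_manip:               # manipulators occupy slots 0..M-1
            hits += 1
    return hits / trials

random.seed(2)
print(distortion_rate())  # ~0.05, matching M/N = 5/100
```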
The paper assumes the availability of truthful traders. This seems appropriate for markets where there's some (possibly very small) chance of the market being resolved by ground truth. It's a shakier assumption if it's certain that the market will be resolved based on the final prediction.
When is this useful?
Self-resolving markets are intended to be of some value for eliciting prices on contracts that are unlikely to ever produce the kind of evidence that would enable them to be conclusively resolved.
At one extreme, traders will have no expectation of future traders being better informed (e.g. how many angels can fit on the head of a pin). I expect prediction markets to be pointless here.
At the more familiar extreme, we have contracts where we expect new evidence to generate widespread agreement on the resolution by some predictable time (e.g. will Biden be president on a certain date). Here prediction markets work well enough that adding a self-resolving mechanism would be, at best, pointless complexity.
I imagine SKC's approach being more appropriate for a hypothetical contract in the spring of 2020 asking whether a social media site should suppress, as misinformation, claims that COVID originated in a lab leak. We have higher quality evidence and analysis today than we did in 2020, but not enough to fully resolve the question. A random trader today will likely report a wiser probability than one in 2020, so I would have wanted the traders in 2020 to have incentives to predict today's probability estimates.
I can imagine social media sites using standardized prediction markets (mostly automated, with mostly AI traders?) to decide what to classify as misinformation.
I don’t consider that approach to be as good as getting social media sites out of the business of suppressing alleged misinformation, but I expect it to be an improvement over the current mess, and I don’t expect those sites to give up on imposing their views on users. Prediction markets will likely make them a bit more cautious about doing so based on fads.
The SKC approach seems more appropriate for sites such as Wikipedia where it’s hard to avoid expressing some sort of opinion about what qualifies as misinformation.
AI Risks
Would SKC’s approach be useful for bigger topics whose outcome may or may not be observable? A key market I’d like to see is on something like “Will AI destroy humanity?”.
Some people are concerned about scenarios in which life continues somewhat normally until AI amasses enough power to take over the world. By the time traders can see more evidence of the risk than they see now, it will be too late to do anything about it.
Prediction markets continue to seem poorly incentivized for predicting such scenarios.
I consider it more likely that traders will continue to accumulate lots of weak evidence about relevant factors, such as takeoff speeds and the extent to which the leading AI outperforms its closest competitors. In this scenario, I expect traders to have more accurate forecasts than they do now sometime before AIs become powerful enough to destroy us.
Such a market would be biased against the scenarios in which AI destroys us before we learn anything new. But getting the incentives right for some scenarios seems better than giving up on incentives.
Conclusion
The SKC proposal seems likely to modestly improve prediction markets. I’m moderately concerned about the increased potential for manipulation.