At last Sunday’s Overcoming Bias meetup, we tried paranoid debating. We formed groups of mostly 4 people (5 for the first round or two) and competed to produce the most accurate guesses at trivia questions with numeric answers, with one person secretly designated to be rewarded for convincing the team to produce the least accurate answer.
It was fun and may have taught us a little about becoming more rational. But in order to be valuable, it should be developed further to become a means of testing rationality. As practiced, it tested some combination of trivia knowledge and rationality. The last round reduced the importance of trivia knowledge by rewarding good confidence intervals instead of a single good answer. I expect there are ways of using confidence intervals that remove the effects of trivia knowledge from the scores.
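The post doesn’t record exactly how the intervals were scored, but one standard option from the forecasting literature is the interval score, which rewards narrow intervals that still contain the true value. A minimal sketch in Python (the alpha parameter and the 2/alpha penalty weight are the textbook choices, not necessarily what we used):

```python
def interval_score(low, high, truth, alpha=0.1):
    """Interval score (lower is better): the interval's width, plus a
    penalty of (2 / alpha) per unit by which the true value falls
    outside it. A proper scoring rule for central (1 - alpha) intervals."""
    score = high - low
    if truth < low:
        score += (2 / alpha) * (low - truth)
    elif truth > high:
        score += (2 / alpha) * (truth - high)
    return score

# A wide honest interval beats a narrow interval that misses the answer.
print(interval_score(1000, 3000, truth=1969))  # 2000
print(interval_score(1200, 1300, truth=1969))  # 100 + 20 * 669 = 13480
```

Under a rule like this, a well-calibrated player who knows little trivia can still score respectably by giving wide, honest intervals, which is roughly the property needed to separate rationality from trivia knowledge.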
I’m puzzled about why people preferred the spokesman version to the initial version where the median number was the team’s answer. Publicly designating a spokesman as a non-deceiver provides information about who the deceiver is. In one case, we identified the deceiver after two of us told the spokesman that we were so ignorant of the subject relative to him that he should decide based on his knowledge alone. That gave our team a big advantage that had little relation to our rationality. I expect the median approach can be extended to confidence intervals by taking the median of the lows and the median of the highs, but I’m not fully confident that there are no problems with that.
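Concretely, that extension might look like the following sketch (the numbers are made up). Because the k-th order statistic is monotone, the median of the lows can never exceed the median of the highs, so the team always gets a valid interval, and a single deceiver’s extreme interval has limited pull:

```python
from statistics import median

def aggregate_intervals(intervals):
    """Combine each member's (low, high) guess into a team interval
    by taking the median of the lows and the median of the highs."""
    lows, highs = zip(*intervals)
    return median(lows), median(highs)

# Example: four players, one of whom is skewing their interval upward.
guesses = [(1200, 1800), (1000, 2500), (1300, 1700), (9000, 9500)]
print(aggregate_intervals(guesses))  # (1250.0, 2150.0)
```

Whether this inherits subtler problems, such as the deceiver shading one endpoint rather than both, is exactly the kind of thing I’m not confident about.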
The use of semi-randomly selected groups meant that scores were weak signals. If we want to evaluate individual rationality, we’d need rather time-consuming trials of many permutations of the groups. Paranoid debating is better suited to comparing groups (e.g. a group of people credentialed as the best students from a rationality dojo, or the people most responsible for decisions in a hedge fund).
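To see why those trials would be time-consuming, a back-of-the-envelope count, assuming a hypothetical 12 participants split into groups of 4:

```python
from math import factorial

def group_partitions(n, k):
    """Number of ways to split n people into unordered groups of size k."""
    g = n // k
    return factorial(n) // (factorial(k) ** g * factorial(g))

print(group_partitions(12, 4))  # 5775 distinct groupings
```

Even a small sample of those groupings, with several rounds each, far exceeds what a weekly meetup can run, which is why group-level comparisons are the more practical use.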
See more comments at Less Wrong.