Hanson on doubt and justifying beliefs using markets

Robin Hanson channels and extends Thomas Reid:

What can you do about serious skepticism, i.e., the possibility that you might be quite mistaken on a great many of your beliefs? For this, you might want to consider which of your beliefs are the most reliable, in order to try to lean more on those beliefs when fixing the rest of your beliefs. But note that this suggests there is no general answer to what to do about doubt – the answer must depend on what you think are actually your most reliable beliefs.

Here’s Reid:

The sceptic asks me, Why do you believe the existence of the external object which you perceive? This belief, sir, is none of my manufacture; it came from the mint of Nature; it bears her image and superscription; and, if it is not right, the fault is not mine: I even took it upon trust, and without suspicion. Reason, says the sceptic, is the only judge of truth, and you ought to throw off every opinion and every belief that is not grounded on reason. Why, sir, should I believe the faculty of reason more than that of perception?—they came both out of the same shop, and were made by the same artist; and if he puts one piece of false ware into my hands, what should hinder him from putting another?

But Hanson goes on:

our most potent beliefs for dealing with doubt are often our beliefs about the correlations between errors in other beliefs. This is because having low error correlations can imply that related averages and aggregates are very reliable. For example, if there is little correlation in the errors your eyes make under different conditions in judging brightness, then you need only see the same light source under many conditions to get a reliable estimate of its brightness.

Since beliefs about low error correlations can support such strong beliefs on aggregates, in practice doubt about one’s beliefs often focuses on doubts about the correlations in one’s belief errors. If we guess that a certain set of errors have low correlation, but worry that they might really have a high correlation, it is doubts about such hidden correlations that threaten to infect many other beliefs.
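
To make the aggregation point concrete, here is the standard statistical sketch behind Hanson's brightness example (the arithmetic is mine, not his). Suppose each of $n$ observations is $x_i = \mu + \varepsilon_i$, where every error has variance $\sigma^2$ and each pair of errors has correlation $\rho$. Then the variance of the average is

$$\operatorname{Var}(\bar{x}) = \frac{\sigma^2}{n} + \frac{n-1}{n}\,\rho\,\sigma^2 \;\longrightarrow\; \rho\,\sigma^2 \quad \text{as } n \to \infty.$$

With uncorrelated errors ($\rho = 0$) the aggregate can be made as reliable as we like by adding observations; with even modestly correlated errors, $\rho\sigma^2$ is a floor that no amount of averaging removes. Hence doubts about hidden correlations, rather than about any single error, are the ones that threaten everything downstream.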

Indeed: philosophers don’t just worry that the world might not exist, we also worry that our access to the world may be mediated by biased methods: not just perception itself, but the conceptual apparatus that interprets perceptions and makes them meaningful. If there is an error of some sort in that apparatus, it’s unclear exactly how we could go about correcting it, when our only access to that error is through the biased apparatus. I follow Nelson Goodman and John Rawls in advocating reflective equilibrium for such problems, but this method has its limits. Specifically, it doesn’t tell us how to adjudicate case/rule or percept/concept disagreements when they arise, especially in light of the way that any “Error Theory” will entail particular prior commitments which may themselves be mistaken.

Hanson argues that the correlation problem seems particularly pressing when we turn to cognitive and social biases. That is, it’s not clear what we can do if our minds, communities, and institutions tend to mislead us in characteristic ways that we cannot anticipate. Of course, it is clear that doubters should seek better institutions and social processes. But what’s better?

Well, Hanson is an economist and he thinks that markets are better:

In fact, you may end up agreeing with me that our best approach is for would-be doubters to coordinate to support new institutions that better reward correction of error, especially correlated error. I refer of course to prediction markets.

Yet most non-economists and some economists don’t find markets to be particularly credible. (Remember: “Markets can remain irrational a lot longer than you and I can remain solvent.”) Since most concerns involve market manipulation once futarchy is instantiated, is there a way to test hypotheses about real prediction markets that doesn’t fall into a pessimist’s version of the “real communism has never been tried” trap?

Prediction markets suffer from the same skeptical concerns that other governance forms suffer: a kind of path-dependence that suggests “you can’t get there from here.” There’s no reason for democratic citizens skeptical of markets to drop their skepticism in the face of facts they cannot adequately evaluate without depending on their own reasoning powers. Cognitive and social biases guarantee that Hanson’s expertise and disciplinary commitments to economics only undermine his capacity to enact his preferred policies. And even he must worry that he has given too much credence to the wrong methods, if he is to be consistent.

In sum: Skeptics can tell a story about the manipulation of prediction markets once they become a tool for governance, a story that insulates their concerns from research results on all sub-governance prediction markets. The best evidence for this kind of manipulation isn’t laboratory results but actually existing futures markets. Much popular speculation, for instance, surrounds the capacity of hedge funds or investment banks to manipulate futures to their own gain by bringing outsized capital to bear in extremely complex trading schemes. Given this, why ought we to accept the evidence from small-group experiments like those described by Hanson? The real question is how such prediction markets would perform once they actually served a governance function and were subject to the actions of heavily-leveraged firms looking to enact ingenious schemes.

Is there a way to take a bet against prediction markets that isn’t a performative contradiction?


21 responses to “Hanson on doubt and justifying beliefs using markets”

  1. Robin Hanson

    Are you saying that there is substantial evidence of manipulation to substantially and sustainably bias market prices? Or are you just saying there is no way to convince people who suspect such things are common that they don't actually happen much?

    1. Joshua Miller

      Well, I'm definitely saying the second thing: there's substantial "motivated skepticism" of prediction markets among democratic citizens who'd rather not give up power to markets they don't understand.

      I'm also trying to say a third thing: we can't know whether there would be targeted manipulation of prediction markets. After all, they show evidence of such manipulation “during short transition phases” (Wolfers and Zitzewitz 2004). Wouldn't a bad actor with substantial outside-of-market incentives be tempted to time that transition phase to the moment of an important decision? For instance, what keeps an oil company from manipulating temperature expectation futures during a global warming summit?

      This is a version of the overarching concern about which priors to trust: how can we be sure that small-group experiments among undergraduates give us a reliable guide to the emergent behaviors of a futarchy?

  2. Robin Hanson

    Do you really think this mechanism is *more* easily manipulated than other mechanisms of governance that we might use instead? Or do you think we should hold this mechanism to much higher standards? We could of course do larger experiments on manipulation with larger budgets.

    1. Joshua Miller

      By the way, a version of this debate can be seen in my field in discussions of epistemic reliabilism, perhaps best described by Ernest Sosa in his "The Raft and the Pyramid."

  3. Joshua Miller

    I don't know if it's *more* manipulable, or less. I do know that it *will* be held to much higher standards, because people think they understand "one man, one vote," but they're mostly quite certain that they don't understand markets. I'm not saying it's fair or rational, I'm just noting that there's a status quo bias.

    I favor more experiments, though I don't have a budget for you. I'd particularly like to see someone address the "short-term manipulation" question with an outsized actor, to model the effect of a heavily-leveraged hedge fund actively engaged in deceptive trades in a low-capitalization market. Perhaps you know of such an experiment already?

  4. Robin Hanson

    Do you know of any other governance mechanism where robustness to manipulation has been investigated even as much as it has with prediction markets? The obvious way to hope to gain support for futarchy is to try it out on small scales, then gradually increase the scale of the trials. I'm eager to assist in any such trials.

  5. Joshua Miller

    There's been orders of magnitude more study of small group deliberation and juries than there has been of prediction markets. It's just that those mechanisms have much more mixed results, especially when it comes to group polarization and ignorance. That's the reason I'm sympathetic to prediction markets.

    That said, prediction markets aren't a governance mechanism on their own, since they don't produce coercive rules. As far as I can tell, you haven't completed the conceptual work needed to interface ideas markets with rule-making or coercive agency. How do we go from the price of a futures contract to the content of a law? Governance is precisely the realm where fact/value distinctions start to break down, after all.

  6. Robin Hanson

    Yes there are lots of studies of juries, but I know of none regarding the sort of manipulation concerns you focus on for prediction markets. I'd be interested to hear what you think is the remaining conceptual work needed. I think I've considered a lot of issues in the abstract; what mainly remains is to try it out in real organizations and see what the real issues are there.

    1. Joshua Miller

      I'll take another look at what you've written so far. Is there a current definitive statement of the structure, or will "Shall We Vote on Values, But Bet on Beliefs?" suffice?

      As for juries and small group deliberations, I think you're missing the point. With markets, the fear is manipulation. There are no "hedge fund" risks in juries, though there's plenty of discussion of bias and manipulation there. Just look for discussions of jury nullification and group polarization, e.g., Punitive Damages: How Juries Decide.

  7. Robin Hanson

    Yes there is research showing voting juries can make *errors*. I ask more specifically for research on errors *caused by manipulation*. That is, create some participants in the process with a private incentive to push the result in a certain direction. Then compare outcomes with outcomes when you don't create such participants. That is what we've done with prediction markets, and what you could do for voting or other mechanisms.

  8. Joshua Miller

    How is jury nullification not a form of manipulation? If an all-white jury declines to convict a white defendant for murdering a black victim, that's not an error. It's manipulation! The same thing goes when retributive juries over-compensate victims using punitive damages. They're not making a mistake, they're taking revenge. Mock jury studies conducted under laboratory conditions show this pretty clearly, as do case studies in the wild. We've even developed institutional mechanisms to combat this, which have also been studied!

    Of course, you'd be right to point out that making juries less credible doesn't harm the credibility of prediction markets. But the prediction market studies are comparatively few and concentrated among proponents and leaners; show me the research from avowed prediction market skeptics.

  9. Robin Hanson

    It doesn't sound like you are talking about experiments with *controlled* manipulation, i.e., where you add and take away the manipulation element and see what difference it makes. You instead seem to be interpreting what some jurors do as "manipulation." E.g., you presume a white juror couldn't have a legitimate reason for not convicting a white defendant. Are any of these jury studies really by "avowed jury skeptics"? Does that really influence your interpretation of the jury studies?

    1. Joshua Miller

      It doesn't sound like you are talking about experiments with *controlled* manipulation, i.e., where you add and take away the manipulation element and see what difference it makes.

      Well then I'm describing it badly, because that's precisely what it is.

      Are any of these jury studies really by “avowed jury skeptics”?

      Yes. In fact, the US Supreme Court has explicitly refused to consider some of the recent research because it was funded by Exxon after the Valdez oil spill to discredit the punitive damages award there.

      Does that really influence your interpretation of the jury studies?

      I don't like it when there's no research at all that questions the effect size or fails to prove the hypothesis. A steady stream of confirmations is not generally how science works. Until an in-discipline skeptic does his best to imagine counter-arguments and design experiments to discredit prediction markets, to which supporters then respond, I personally won't feel comfortable that the model has been well-tested.

  10. Robin Hanson

    I'd like to see cites to specific studies of controlled manipulation, and by "avowed skeptics" of juries.

    1. Joshua Miller

      C. Sunstein, R. Hastie, J. Payne, D. Schkade, W. Viscusi, Punitive Damages: How Juries Decide (2002)
      Schkade, Sunstein, & Kahneman, Deliberating About Dollars: The Severity Shift, 100 Colum. L. Rev. 1139 (2000)
      Hastie, Schkade, & Payne, Juror Judgments in Civil Cases: Effects of Plaintiff’s Requests and Plaintiff’s Identity on Punitive Damage Awards, 23 Law & Hum. Behav. 445 (1999)
      Sunstein, Kahneman, & Schkade, Assessing Punitive Damages (with Notes on Cognition and Valuation in Law), 107 Yale L. J. 2071 (1998)

  11. Robin Hanson

    The abstract of your second link is:

    How does jury deliberation affect the pre-deliberation judgments of individual jurors? In this paper we make progress on that question by reporting the results of a study of over 500 mock juries composed of over 3000 jury eligible citizens. Our principal finding is that with respect to dollars, deliberation produces a "severity shift," in which the jury's dollar verdict is systematically higher than that of the median of its jurors' predeliberation judgments. A "deliberation shift analysis" is introduced to measure the effect of deliberation. The severity shift is attributed to a "rhetorical asymmetry," in which arguments for higher awards are more persuasive than arguments for lower awards. When judgments are measured not in terms of dollars but on a rating scale of punishment severity, deliberation increased high ratings and decreased low ratings. We also find that deliberation does not alleviate the problem of erratic and unpredictable individual dollar awards, but in fact exacerbates it. Implications for punitive damage awards and deliberation generally are discussed.

    This does *not* study controlled manipulation, *nor* is it done by "avowed skeptics" of juries.

    1. Joshua Miller

      The study involves controls: the mock juries deliberated on the same case but came to different results on the basis of participant manipulation. Take a look at the Hastie 1999 article, which mostly evaluates anchoring effects: there, the "manipulation" is the plaintiff's dollar-figure demand.

      It is done by avowed skeptics, yes. They are skeptics and they've declared that skepticism repeatedly, to the point of being ridiculed for that skepticism by the Supreme Court.

      I don't really understand why you're disagreeing with me here. None of this undermines prediction markets directly; it undermines a different decision procedure! I'm just wishing for a larger ecosystem of researchers on the topic and showing you what that looks like elsewhere.

  12. Robin Hanson

    You misunderstand how the word "manipulation" is being used in the prediction market literature. It does not just mean "any change." And are those "skeptics" really skeptical about democracy in general? What do they favor instead?

    1. Joshua Miller

      You misunderstand the implications of small group deliberative weaknesses for democracy. Juries aren’t democracies.

  13. […] his comments on my post last week, Robin Hanson asked about the conceptual work still needed to advance the cause of prediction […]

  14. […] is exploring territory similar to the prediction markets I discussed with Robin Hanson last year here and here.) […]
