(What follows are some reflections on two related problems. One is Robin Hanson’s discussion of prediction markets to counteract status quo bias, and the other is my friend Leigh Johnson’s meditation on strong moral relativism. Because of length, I have cut my extended reflections on Dr. J’s “strong relativism” for a post tomorrow.)
If there’s one thing I know, it’s that I might be wrong. In fact, for a great number of my beliefs, I am more likely wrong than right! In general, the problem is identifying false beliefs. Most of the time, my fallibility is not a big problem. If I’m not as handsome as I’d like to think, it won’t matter much because my wife will help to maintain my illusions. If I’m not as smart as I pretend, well, I’m more likely to be ignored than corrected, so I can go to my grave believing myself misunderstood rather than dim. But these are the easy questions. In this blog, in my scholarship, and in my teaching, I frequently make claims that have nothing to do with myself. My arguments are about the world, and if I’m wrong, and someone relies on my information, I suspect that they are made worse off for having listened to me. But am I more likely to be wrong than my colleagues? Less likely? Just how wrong am I likely to be? Should any of this matter to my employer, George Washington University?
There are many ways to evaluate these questions, but I’d like to start by discussing three, all from economists. The first is what Brad DeLong calls “Marking One’s Beliefs to Market,” and it seems like a useful personal exercise for an academic, whether economist or not:
Back on March 3, 2000 I marked my beliefs to market: took a look back at the ten most important things I had believed in the 1990s, and tried to assess how accurate my beliefs had been.
Shouldn’t we all do this, not just every decade but every year, or even more frequently, whenever facts come in? In the academy, especially after you’ve been granted tenure, it’s easy to drift along without ever testing your beliefs against anything other than your students, who are cowed by grade anxieties and a general respect for authority, and your research cohort, who will more often be motivated by social norms of mutual aid and assistance than by the desire to hold your feet to the fire. That said, I hold all kinds of beliefs, including the belief that my beliefs ought to track the truth. The problem is figuring out how to test them.
For example, I believe that coercion can be justified by deliberative institutions. Our commitment to democracy can sometimes be rooted in our commitments to equality, but sometimes we also cite the superior information-gathering of democratic institutions, or the stability such regimes bring. We might argue that disgust with government policies is more cheaply expressed through voting and protesting than it is through revolution, so the state is more likely to know what people want if they have a chance to vote. Most contemporary democratic theorists hold some version of this view, and the disputes among us tend to focus on whether democracy as a whole is more of a reason or a religion. Diana Mutz recently proposed a “middle-range” alternative:
I advocate abandoning tests of deliberative theory per se and instead developing “middle-range” theories that are each important, specifiable, and falsifiable parts of deliberative democratic theory. By replacing vaguely defined entities with more concrete, circumscribed concepts, and by requiring empirically and theoretically grounded hypotheses about specific relationships between those concepts, researchers may come to understand which elements of the deliberative experience are crucial to particular valued outcomes.
Because deliberative democracy as a whole is a kind of moving target, we need to reduce the slogans to mechanisms and concepts that are testable. In a forthcoming article on epistemic justifications for democracy, my coauthor Steve Maloney and I consider the proposition that in light of public ignorance and public choice problems, an independent Federal Reserve system is best for managing financial crises. This is a potentially testable hypothesis, for instance using comparative studies, and we tentatively side with it, though there are some interesting counter-examples, like the Reserve Bank of New Zealand (pdf), whose governor can be removed, but only for performance failures.
Enter Bryan Caplan, who argues that we academics ought to be more Bayesian, i.e. that we ought to frame our beliefs both in terms of their testable claims and their weighted probability. This is an economist’s version of the “middle-range” solution proposed by Mutz:
It is striking, then, to realize that academic economists are not Bayesians. And they’re proud of it!
This is clearest for theorists. Their epistemology is simple: Either something has been (a) proven with certainty, or (b) no one knows – and no intellectually respectable person will say more. If no one has proven that Comparative Advantage still holds with imperfect competition, transportation costs, and indivisibilities, only an ignoramus would jump the gun and recommend free trade in a world with these characteristics.
Empirical economists’ deviation from Bayesianism is more subtle. Their epistemology is rooted in classical statistics. The respectable researcher comes to the data an agnostic, and leaves believing “whatever the data say.” When there’s no data that meets their standards, they mimic the theorists’ snobby agnosticism. If you mention “common sense,” they’ll scoff. If you remind them that even classical statistics assumes that you can trust the data – and the scholars who study it – they harumph.
Rather than divide our beliefs into certitudes and unknowns, we might follow Caplan in trying to evaluate the likelihood of some unknowns, or to weight inadequate evidence rather than discounting it entirely. For instance, in our discussion of epistemic reliability vis-a-vis the Federal Reserve, we also briefly address possible counterarguments, focusing on the ways that skeptics tend to attribute conspiratorial intent to the reserve system. Fed skepticism is a widely held view because it has recently been popularized by the libertarian Ron Paul, while our own position is considered elitist by many democratic theorists and has even been challenged by some contrarian economists. Given our tentativeness, perhaps we ought not to have published in the face of such a political controversy, or perhaps we ought to have reported that our findings were only 61% likely.
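Caplan’s suggestion can be made concrete with a toy Bayes’-rule calculation. The prior and likelihoods below are invented purely for illustration, not drawn from our article:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: posterior probability of a hypothesis after seeing
    evidence, given the prior and the likelihood of that evidence under
    the hypothesis and under its negation."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Start agnostic (50%) about "an independent Fed best manages crises,"
# then observe a comparative study twice as likely if the claim is true
# as if it is false (likelihoods 0.8 vs. 0.4):
posterior = bayes_update(0.50, 0.80, 0.40)  # posterior = 2/3, roughly 67%
```

The point is not the particular numbers but the habit: instead of announcing a certitude or pleading agnosticism, one reports a degree of belief and lets each new piece of evidence revise it.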
After all, if a vote were held today, I suspect that the majority of Americans could be persuaded that the Federal Reserve system is corrupt and needs to be replaced or audited extensively by Congress, and that’s not just me spitballing. You can take that prediction to the bank, because polling will back me up: in July only 39% of the public thought that the Fed was doing an Excellent or Good job. 13% decided not to register any opinion at all, indicating a large unacknowledged public ignorance problem hiding behind the 48% disapproval ratings. How can we expect people who don’t know the three branches of government to have a realistic opinion on appropriate money supply? This undermines one of the middle-range justifications for democracy: its capacity to supply the best possible epistemic grounds for public policy.
Caplan’s colleague Robin Hanson takes Bayesianism a step further by arguing that we ought to weigh our own beliefs in terms of intensity in addition to probability, by using prediction markets. In other words, scholars and ordinary people ought to put their money where their mouths are. This would likely look something like Intrade, where contracts on future events are bought and sold. The price promises to tell us not just what people are thinking, but how strongly they think it. Despite some concerns about market manipulation, Hanson’s research appears correct: the presence of price manipulators (as in the markets for presidential primaries) appears to increase market capitalization and to improve how well the predictions track the available information.
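The logic of “price as belief” can be sketched in a few lines; the contract and numbers here are hypothetical, chosen only to show why a market price tends to settle at the crowd’s probability estimate:

```python
def expected_profit(belief, price, payout=1.0):
    """Expected profit from buying one binary contract at `price`, where
    the contract pays `payout` if the event occurs and nothing otherwise,
    for a trader whose subjective probability of the event is `belief`."""
    return belief * payout - price

# A trader who puts the odds at 70% expects to gain at any price below
# 0.70 and to lose at any price above it, so in a liquid market the
# going price can itself be read as a consensus probability.
assert expected_profit(0.70, 0.60) > 0  # buying looks attractive
assert expected_profit(0.70, 0.80) < 0  # selling looks attractive instead
```

Intensity enters through the stakes: a trader who is barely past indifference buys a few contracts, while one who is confident the price is wrong buys many, moving the price further.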
It’s not clear how to mark a belief like ours to market. Obviously, Steve and I would never sign on to a futures contract that merely tried to predict public opinion: of course citizens mistrust state financial institutions in times of financial crisis. So we’d want some brand of specialized judgment from economists, but then our counter-party would worry about the sociology of economics departments, especially the prevalence of Milton Friedman-inspired monetarism. We’d be betting both on the ‘fact of the matter’ and on the biases of the discipline. The same thing could also be said of predictions about global warming. After the recent release of e-mails, it’s been suggested that the mainstream or consensus view has been suppressing not only bad arguments by climate skeptics (that’s not really news; that’s good peer review) but also potentially paradigm-shifting arguments by climate alarmists! So predictions about what climate scientists will say or sign on to are clearly predictions about what they think is palatable, and not what they think is most accurate, and so we’ve lost yet another potential condition (peer-reviewed articles published in top journals) for our futures contracts.
So here’s the question: how efficacious could such ideas and predictions markets be for philosophy?
A surprising number of my beliefs are not about the world as such: they are about texts and the history of ideas, specifically the history of philosophy. Testing most of my beliefs is as simple as checking an explanatory sentence of mine against a sentence written by Plato or Hegel. If they correspond, then I have been shown to be correct; if not, I’ve got some explaining to do. When I correct, or am corrected by, other philosophers in an academic setting, sometimes this is done simply by directing attention to an article or book that appears to contradict the erroneous textual interpretation. Then they or I attempt to explain the discrepancy. This kind of erudition-comparison is common among those who understand the field of philosophy as inextricably linked to its own history. But I suspect that we’d like to be right about the world in addition to being right about Heidegger.
Another problem has to do with the testability of most of the matters under discussion. For instance, I find that I am unsure whether or not I am a brain in a vat. I’d like to think that I’m not, but I also grant that if I were, it is probable that the computer that controls my sensations would give me no indications of my situation. So the one belief that seems to be the most essential seems untestable: every day I wake up and look to the market for the value of that belief, and find that there is no market at all for it. How much should I charge for a futures contract on conclusive proof that I am not a brain in a vat, with the condition that there be definitive evidence by January 1, 2011? How should I rate this contract against one that predicts we will discover that we are all being deceived by an evil demon, already trapped in the afterlife, or caught in an eternal recurrence? Mostly we would say to those who are willing to make bets about our envattedness that they are wasting their time and money: propositions about such matters are not testable. But we might also say that an obsession with falsification and obsessive speculation about solipsism will often lead to precisely this kind of time-wasting exercise, and is best avoided.