Appreciative Thinking

I’ve been having a debate on a friend’s Facebook page about the value of Martha Nussbaum’s work (I’m a fan) and serendipitously I found this post on “appreciative thinking” via Tyler Cowen. It’s a kind of inverted critical thinking, from Seth Roberts:

When it comes to scientific papers, to teach appreciative thinking means to help students see such aspects of a paper as:

  1. What can we learn from it? What new ideas does it suggest? What already-existing plausible ideas does it make more plausible or less plausible?
  2. How is it an improvement over previous work? Does it use new methods? Does it use old methods in a new way? Does it show a better way to do something?
  3. Did the authors show good taste in their choice of problem? Is this a problem both important and possibly solvable?
  4. Are details done well? Is it well-written? Is the context of the work made clear? Are the data well-analyzed? Does it make good use of graphs? Is the discussion imaginative rather than formulaic?
  5. What’s interesting or enjoyable about it?

That sort of thing. In my experience few papers are worthless. But I’ve heard lots of papers called worthless.

The framing for these rules is worth looking at as well. Obviously, a lot of these skills are part and parcel of any true critical thinking or good close reading, but it’s nice to see folks emphasizing the positive element of reading. “Appreciative thinking” also seems like a good way to introduce a version of the principle of charity that Augustine describes in his On Christian Doctrine. What I like best is the way it’s framed as a “checklist skill,” the kind you can put on your syllabus and design assignments around.

Anyway, it doesn’t exactly resolve the issue of Martha Nussbaum, but it does suggest some perspectives from which her work might be valuable even if some of her conclusions are also wrong.

Beyond ‘Real’ and ‘Relative’: What are moral propositions about?

Dr. J responds on moral realism. It appears that our dispute focuses on the role that ‘the world’ plays in verifying our moral propositions. Dr. J is right to note that I’ve made an important and potentially dispositive claim in asserting that agent-neutrality requires that one’s account be “either verified by the world or not.” However, I didn’t mean to agree with Watchmen‘s Dr. Manhattan that we can simply deduce moral facts from physical facts:

“A live human body and a deceased human body have the same number of particles. Structurally there’s no difference.”

Morally, there’s a big difference. Only if we ignore the absence of living processes, especially neural activity, are live and dead bodies the same. That sameness is a natural fact, but Dr. J holds that no moral fact follows from it, at least not in an obvious way. Unfortunately, Dr. J credits me with a kind of reverse Dr. Manhattan-ism, and so I’m going to try to defend myself from this reading:

But surely we must allow that moral values aren’t verifiable-or-not realities OF the world in the same way that objects or events are. That is, surely we must admit that VALUES ARE NOT THE SAME AS FACTS. AnPan’s definition wants to conflate descriptive and prescriptive claims, positive and normative claims. Or, at the very least, he wants to make prescriptive and normative claims derivations of descriptive or positive claims. That’s just wrong, in my view, and I don’t think that my resistance to that conflation necessarily means that I don’t think that moral values are “real.”

First, I think Dr. J goes too far in embracing ‘the real.’ Part of this is just the weird prejudices that have been built into our metaphysical language and the attempts to shorthand philosophical positions with labels. Dr. J can believe that values are ‘real’ (since she ‘really’ holds them) without holding that moral propositions are agent-neutral, and thus without becoming a ‘moral realist.’ And yet this seems to be the basis for Dr. J’s suggestion that I’ve confused her account of “weak relativism” with the “strong relativism” she actually adopts:

Let me say, in conclusion, that I think AnPan’s essay effectively took my “strong relativism” to be the same as what I described as “lazy relativism,” namely, a variant of subjectivism. I don’t think that moral values are justified solely by the subjective assertion of them. And I don’t think that Aristotle and John Brown were both right about slavery, but I just do not know how one locates the rightness or wrongness of their positions out there in the “real” world.

It is possible that I have misread her, but I’m trying to locate a tension in what she’s written, as an act of friendly close reading. (Of course, this friendly reading is inevitably a two-way street, so it may be that there’s something I need to review through her eyes.) As a definitional matter, it seems that one cannot be a moral realist in the sense of agent-neutrality without forgoing relativism, since even in Dr. J’s ‘Strong Relativism,’ moral propositions are perspectival or agent-variant. I think that Dr. J is still committed to the claim that the source of verification for a moral proposition lies somewhere within the individual or group who makes the claim, which she relates to human freedom:

If I deny that there are “absolute” moral values, or that we have some revealed or reasonable access to them, then I am now the ONLY one responsible for giving an account of why I believe x instead of y. It means, among other things, that I understand the activity of moral evaluation to be the activity of free beings, that is, beings who (unlike objects) are not primarily governed by necessity… therefore are not obligated by necessity to hold whatever values they hold… therefore must take responsibility for their free choice to take up certain values and not others.

This isn’t a view that allows for an overlapping consensus through public reason-giving. Values are contingent rather than necessary choices, on this view, and a complete (rather than reasonable) pluralism will emerge, indexed to the amount of free choice that specific human beings actually enact. (Not everyone will freely choose their values, or else they’ll do it in bad faith, denying the freedom that they have exercised and disavowing the choice.) I’ve been trying to suggest that relativism, no matter how sophisticated, runs into the same set of problems when trying to account for errors or willful unreasonableness. The key is the status of the ‘justificatory account,’ which Dr. J claims only the actor can supply. In that sense, it seems to be something absolutely personal, own-most, or appropriated by the speaker, even while it is subjected to the response of our interlocutors. Yet insofar as the agent claims full editorial control over her justificatory account, she need not revise it unless she herself is dissatisfied. Dr. J mentions this issue in her original post on Strong Relativism:

If you claim that your moral values are authorized by the proper exercise of Reason or utilitarian calculation, and I can reasonably account for my arrival at opposite values, then you either have to account for your understanding of what Reason dictates or you have to demonstrate to me (in terms that I can agree to) how I am not being reasonable.

It’s this ‘in terms that I can agree to’ that troubles me. Looking back to John Brown and Aristotle, Dr. J holds that they can’t both be right about slavery, while also holding that they owe each other mutually recognizable reasons to resolve their dispute. I’m not sure how that works: John Brown’s hatred of slavery was rooted in a religious tradition that would have made no sense to Aristotle, while Aristotle’s defense of natural slavery is rooted in a theory of individuation and citizenship that Brown would not have recognized. Neither of them could offer ‘terms [the other] can agree to.’ One possibility is that Dr. J holds that Aristotle doesn’t owe John Brown palatable reasons or vice versa, but rather that the enslaver owes them to the slave whom he is coercing. This makes a lot of sense to me: the limit of free choice of values is when my values impinge on your freedom. But at that moment when I threaten to coerce you, it seems that my values are necessitated by something outside of myself, specifically: your values! If we generalize this, we’re left with an agent-neutral (but not mind-independent!) source of values and prescriptions. I would call that moral realism. Perhaps Dr. J’s strong relativism is actually compatible with agent-neutral moral realism?

One can say that moral propositions have truth conditions without saying that those truth conditions entail something specific about the world. I’ve obviously moved too quickly from the claim that our moral propositions must have some source of verification to the claim that I know what that source is, and that it’s ‘the world.’ (Whatever that means: I agree with Dr. J that the ‘world’ in question is the human world, or what I’d call the phenomenological world.) I tried to defend myself from this move by hiding behind fallibilism, in effect asserting that I know something about morality (that it is agent-neutral) without knowing much more about it (like which acts, specifically, are permissible and which prohibited). Perhaps this position is untenable: perhaps my fallibilism is only an absolutism-to-be. The fact that I keep coming back to fairly ‘absolutist’ examples, like murder, torture, and slavery, justifies that concern.

Here’s why I’m reluctant to spell out a specific theory of moral verification. Though I do not intend to conflate normative and positive claims about the world, it is still an open question whether they might be ‘derivations’ in the way Dr. J described: I think values might be dependent upon, and derived from, facts in some way that I cannot yet adequately specify. Of course, some values are independent of some facts, and so it’s important to find the right facts from which to start deriving values. If Dr. J and I are to make any progress beyond the metaphysics of morality, then I think we’ll need to move to the question of what, specifically, justifies an account of permissibility and obligation.

One possibility is that the ultimate non-agential limit of values and moral propositions is intersubjective. I originally thought I might be able to persuade Dr. J of this, though now I’m not sure: intersubjective verification requires only that our ‘moral games’ never conflict, such that, for instance, you’re never playing cops and robbers while I’m playing cowboys and Indians, or better, that you’re never playing ‘imperial dominator and colonized native’ while I’m playing ‘aboriginal host and violently pushy guest.’

As an example, consider the practical syllogism: in choosing between overarching principles like “Murder is wrong” and “Killing in self-defense is permissible,” we depend on our evaluation of the facts. “Am I in danger? Am I using minimal force to defend myself?” There’s both a fact of the matter about whether a particular act was needed in self-defense or was an overreaction and a fact of the matter about the claim ‘killing is wrong except in certain circumstances, such as self-defense.’ I don’t think we can analyze this in a relativist way: compare that claim to the absolute pacifist’s proposition ‘killing is wrong, even in cases of self-defense.’ The pacifist and the non-pacifist cannot both be correct without contradiction. Only the subjectivist claims that it can be true, for the absolute pacifist, that the non-pacifist’s self-defense is prohibited, while being true, for the non-pacifist, that self-defense is permitted.

For clarity: Intersubjective Moral Non-Contradiction holds that A’s claim about what is right for B cannot be co-veridical with C’s contradictory claim about what is right for B.
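For the notationally inclined, here’s a rough way to put the principle formally. The notation is my own improvised shorthand, not anyone’s standard:

```latex
% A rough formalization of IMNC; the notation is improvised for
% illustration. Read R_B(x) as "doing x is right for B," and T(s : p)
% as "p, as asserted by speaker s, is veridical."
\neg \exists x \, \Big[ \, T\big(A : R_B(x)\big) \;\wedge\; T\big(C : \neg R_B(x)\big) \, \Big]
% The subjectivist escapes the contradiction by indexing truth to the
% asserter: T_A(R_B(x)) and T_C(\neg R_B(x)) can then both hold.
```

The whole dispute, in a sense, is over whether that speaker-indexed escape hatch is legitimate.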

If values reference an individual’s choice or preferences, then the principle of intersubjective moral non-contradiction would be false. I take it that IMNC is entailed by the prescriptive nature of moral propositions, their aspiration to ‘ought’-ness. Most of the arguments among legal positivists and natural law theorists would apply here, because the moral system would be deeply linked to the political and legal system, as in Kant’s ‘moral law,’ which explicitly conflates the two.

One reason I think that Dr. J will ultimately reject intersubjective moral non-contradiction is her claim that,

at the end of the day, all value-assignments exist in a context, which means they can be decontextualized and recontextualized and are thus essentially relative to the contexts in which they belong. The context is what “justifies” or “verifies” the values, not the real world.

On that view, part of what an agent brings to the moral table is her own context. As sympathetic as I am to this way of speaking with regard to particular exegetical or interpretive strategies, I don’t think the same contextual problem can hold for moral and ethical questions. Put a different way, I suspect that all of the contexts in which they are embedded are ultimately nested within a ‘full context,’ a global order of contexts. That context is the thing I keep calling the world. As a fallibilist, I’m okay with saying I can’t see how all those contexts fit together, but that they all do fit together is a judgment I’ve derived from a physical fact: that we must share the planet together. Moral reason-giving, then, would be tied to the ‘horizon-fusing’ project Gadamer popularized, because it’s possible to be more or less short-sighted in evaluating these interconnections.

So a second possibility is that there are worldly constraints in the finite resources available on the planet that enforce game-theoretic strategies related to compromise and mutual coercion. In a subsistence society, for instance, limited food availability forces us to devote our resources to cooperation rather than competition. Limited fossil fuels might make certain kinds of consumption patterns immoral precisely because one person is choosing between an exotic vacation and leaving oil for our grandchildren. Contra Robert Nozick, there’s a Lockean proviso problem here. (Enclosers who appropriate the land through labor must leave “enough, and as good” for others.) The fact of finite resources forces us to unify our values into a sustainable patchwork rather than try to satisfy all preferences or values.
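To see the game-theoretic point in miniature, here’s a toy model with payoffs I’ve simply invented: under abundance, the sharing game is a prisoner’s dilemma, while under subsistence conditions failed cooperation means starvation, and cooperation becomes the only stable outcome.

```python
# A toy of the scarcity argument. Payoffs are (mine, theirs) and entirely
# invented: under abundance the sharing game is a prisoner's dilemma; under
# subsistence, whoever fails to cooperate doesn't gather enough to eat.

abundance = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"): (0, 5),
    ("defect", "cooperate"): (5, 0),
    ("defect", "defect"): (1, 1),
}
subsistence = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"): (1, 1),
    ("defect", "cooperate"): (1, 1),
    ("defect", "defect"): (0, 0),
}

def is_equilibrium(game, profile):
    """True if neither player gains by unilaterally switching moves."""
    moves = ("cooperate", "defect")
    mine, theirs = profile
    return all(game[(m, theirs)][0] <= game[profile][0] for m in moves) and all(
        game[(mine, m)][1] <= game[profile][1] for m in moves
    )

for name, game in (("abundance", abundance), ("subsistence", subsistence)):
    print(name, [p for p in game if is_equilibrium(game, p)])
# abundance: only (defect, defect) is stable, the familiar dilemma.
# subsistence: only (cooperate, cooperate) survives; scarcity does the forcing.
```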

This last point perhaps suggests one of my biggest concerns with relativism: the suggestion that pluralism does not demand, ultimately, a fairly tight overlapping consensus on matters of public concern. Dr. J uses the example of a t-shirt purchase, which reminded me of Peter Singer’s claims about our obligation to the global poor: if the $30 I could spend on a vintage t-shirt could equally well be spent on saving one of the 25,000 children who will die today of easily treated diseases, simply because they are desperately poor, then my preferences may not be so contingent. Nor can it be up to me to choose the context in which to view the question. When Dr. J moved from a discussion of ‘moral propositions’ to ‘values,’ she may have been offering an important insight, because value theory incorporates aesthetic and exchange values in addition to ethical values. I’m generally fairly relativist about aesthetic values, but they may not be so irrelevant to questions like murder, slavery, and torture if my aesthetic values are only satisfiable in a global economic order that feeds off those practices.

This suggests, to me, that the ultimate verification for my values cannot simply be my own free choice, because my choices implicate others. That co-implication is precisely why I started this little disagreement with my friend Dr. J.

The Will-Be/Ought Gap: Marking Ideas to Market and Moral Realism

(What follows are some reflections on two related problems. One is Robin Hanson’s discussion of prediction markets to counteract status quo bias, and the other is my friend Leigh Johnson’s meditation on strong moral relativism. Because of length, I have cut my extended reflections on Dr. J’s “strong relativism” for a post tomorrow.)

If there’s one thing I know, it’s that I might be wrong. In fact, for a great number of my beliefs, I am more likely wrong than right! In general, the problem is identifying false beliefs. Most of the time, my fallibility is not a big problem. If I’m not as handsome as I’d like to think, it won’t matter much because my wife will help to maintain my illusions. If I’m not as smart as I pretend, well, I’m more likely to be ignored than corrected, so I can go to my grave believing myself misunderstood rather than dim. But these are the easy questions. In this blog, in my scholarship, and in my teaching, I frequently make claims that have nothing to do with myself. My arguments are about the world, and if I’m wrong, and someone relies on my information, I suspect that they are made worse off by having listened to me. But am I more likely to be wrong than my colleagues? Less likely? Just how wrong am I likely to be? Should any of this matter to my employer, George Washington University?

There are many ways to evaluate these questions, but I’d like to start by discussing three, all from economists. The first is what Brad DeLong calls “Marking One’s Beliefs to Market,” and it seems like a useful personal exercise for an academic, whether economist or not:

Back on March 3, 2000 I marked my beliefs to market: took a look back at the ten most important things I had believed in the 1990s, and tried to assess how accurate my beliefs had been.

Shouldn’t we all do this, not just every decade but every year, or even more frequently, whenever facts come in? In the academy, especially after you’ve been granted tenure, it’s easy to drift along without ever testing your beliefs against anything other than your students, who are cowed by grade anxieties and a general respect for authority, and your research cohort, who will more often be motivated by social norms of mutual aid and assistance than by the desire to hold your feet to the fire. That said, I hold all kinds of beliefs, including the belief that my beliefs ought to track the truth. The problem is figuring out how to test them.
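One way to start, sketched below with made-up beliefs and probabilities (none of them mine, and none from DeLong), is to score each year’s predictions the way weather forecasters are scored:

```python
# A minimal sketch of "marking beliefs to market." Every belief and number
# below is a made-up placeholder, not a real prediction from the post.

beliefs = [
    # (claim, probability I assigned, outcome: 1 if it happened, 0 if not)
    ("The Fed will remain independent through next year", 0.80, 1),
    ("My manuscript will be accepted this year", 0.60, 0),
    ("Deliberative polling will spread to ten more cities", 0.30, 0),
]

# Brier score: mean squared gap between stated confidence and outcome.
# 0.0 is perfect; always answering 0.5 ("who knows?") scores 0.25.
brier = sum((p - outcome) ** 2 for _, p, outcome in beliefs) / len(beliefs)
print(f"Brier score: {brier:.3f}")  # 0.163 here; lower is better
```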

For example, I believe that coercion can be justified by deliberative institutions. Our commitment to democracy can sometimes be rooted in our commitments to equality, but sometimes we also cite the superior information-gathering of democratic institutions, or the stability such regimes bring. We might argue that disgust with government policies is more cheaply expressed through voting and protesting than it is through revolution, so the state is more likely to know what people want if they have a chance to vote. Most contemporary democratic theorists hold some version of this view, and the disputes among us tend to focus on whether democracy as a whole is more of a reason or a religion. Diana Mutz recently proposed a “middle-range” alternative:

I advocate abandoning tests of deliberative theory per se and instead developing “middle-range” theories that are each important, specifiable, and falsifiable parts of deliberative democratic theory. By replacing vaguely defined entities with more concrete, circumscribed concepts, and by requiring empirically and theoretically grounded hypotheses about specific relationships between those concepts, researchers may come to understand which elements of the deliberative experience are crucial to particular valued outcomes.

Because deliberative democracy as a whole is a kind of moving target, we need to reduce the slogans to mechanisms and concepts that are testable. In a forthcoming article on epistemic justifications for democracy, my coauthor Steve Maloney and I consider the proposition that, in light of public ignorance and public choice problems, an independent Federal Reserve system is best for managing financial crises. This is a potentially testable hypothesis, for instance using comparative studies, and we tentatively side with it, though there are some interesting counter-examples, like the Reserve Bank of New Zealand (pdf), whose governor can be removed, but only for performance failures.

Enter Bryan Caplan, who argues that we academics ought to be more Bayesian, i.e. that we ought to frame our beliefs both in terms of their testable claims and their weighted probability. This is an economist’s version of the “middle-range” solution proposed by Mutz:

It is striking, then, to realize that academic economists are not Bayesians.  And they’re proud of it!

This is clearest for theorists.  Their epistemology is simple: Either something has been (a) proven with certainty, or (b) no one knows – and no intellectually respectable person will say more.  If no one has proven that Comparative Advantage still holds with imperfect competition, transportation costs, and indivisibilities, only an ignoramus would jump the gun and recommend free trade in a world with these characteristics.

Empirical economists’ deviation from Bayesianism is more subtle.  Their epistemology is rooted in classical statistics.  The respectable researcher comes to the data an agnostic, and leaves believing “whatever the data say.”  When there’s no data that meets their standards, they mimic the theorists’ snobby agnosticism.  If you mention “common sense,” they’ll scoff.  If you remind them that even classical statistics assumes that you can trust the data – and the scholars who study it – they harumph.

Rather than divide our beliefs into certitudes and unknowns, we might follow Caplan in trying to evaluate the likelihood of some unknowns, or to weight inadequate evidence rather than discounting it entirely. For instance, in our discussion of epistemic reliability vis-a-vis the Federal Reserve, we also briefly address possible counterarguments, focusing on the ways that skeptics tend to attribute conspiratorial intent to the reserve system.  Fed skepticism is a widely held view because it has recently been popularized by the libertarian Ron Paul, while our own position is considered elitist by many democratic theorists and has even been challenged by some contrarian economists. Given our tentativeness, perhaps we ought not to have published in the face of such a political controversy, or perhaps we ought to have reported that our findings were only 61% likely.
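Here’s a toy version of the sort of update Caplan recommends, with every number invented for illustration (though the arithmetic happens to land near that 61%):

```python
# A toy Bayesian update, in the spirit of Caplan's advice. All numbers are
# invented for illustration; the hypothesis is the one from our article.

prior = 0.50          # initial credence: an independent Fed handles crises best
p_e_if_true = 0.70    # chance of seeing this comparative evidence if true
p_e_if_false = 0.45   # chance of seeing it anyway if false

# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
posterior = (p_e_if_true * prior) / (
    p_e_if_true * prior + p_e_if_false * (1 - prior)
)
print(f"posterior credence: {posterior:.2f}")  # ~0.61
```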

After all, if a vote were held today, I suspect that the majority of Americans could be persuaded that the Federal Reserve system is corrupt and needs to be replaced or audited extensively by Congress, and that’s not just me spitballing. You can take that prediction to the bank, because polling will back me up: in July only 39% of the public thought that the Fed was doing an “Excellent” or “Good” job. 13% decided not to register any opinion at all, indicating a large unacknowledged public ignorance problem hiding behind the 48% disapproval ratings. How can we expect people who don’t know the three branches of government to have a realistic opinion on the appropriate money supply? This undermines one of the middle-range justifications for democracy: its capacity to supply the best possible epistemic grounds for public policy.

Caplan’s colleague Robin Hanson takes Bayesianism a step further by arguing that we ought to weight our own beliefs in terms of intensity in addition to probability, by using prediction markets. In other words, scholars and ordinary people ought to put their money where their mouths are. This would likely look something like Intrade, where contracts on future events are bought and sold. The price promises to tell us not just what people are thinking, but how strongly they think it. Despite some concerns about market manipulation, Hanson’s research appears correct: the presence of price manipulators (as in the markets for presidential primaries) appears to increase market capitalization and enhance the available-information-tracking of the predictions.
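For the mechanically curious, Hanson’s own design for subsidizing such markets is the logarithmic market scoring rule; here’s a minimal sketch, with illustrative quantities, of how share purchases move the market’s implied probability:

```python
import math

# A sketch of Hanson's logarithmic market scoring rule (LMSR). q holds the
# outstanding shares for each outcome; b sets liquidity. Quantities invented.

def cost(q, b=100.0):
    """LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def price(q, i, b=100.0):
    """Instantaneous price of outcome i, i.e. the market's implied probability."""
    total = sum(math.exp(qi / b) for qi in q)
    return math.exp(q[i] / b) / total

q = [30.0, 0.0]  # traders hold 30 shares of "yes," none of "no"
print(f"implied P(yes) = {price(q, 0):.2f}")  # ~0.57
# Buying 10 more "yes" shares costs the difference in the cost function:
print(f"cost of 10 more yes shares: {cost([40.0, 0.0]) - cost(q):.2f}")  # ~5.86
```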

It’s not clear how to mark a belief like ours to market. Obviously, Steve and I would never sign on to a futures contract that merely tried to predict public opinion: of course citizens mistrust state financial institutions in times of financial crisis. So we’d want some brand of specialized judgment from economists, but then our counter-party would worry about the sociology of economics departments, especially the prevalence of Milton Friedman-inspired monetarism. We’d be betting both on the ‘fact of the matter’ and on the biases of the discipline. The same thing could also be said of predictions about global warming. After the recent release of e-mails, it’s been suggested that the mainstream or consensus view has been suppressing not only bad arguments by climate skeptics (that’s not really news; that’s good peer review) but also potentially paradigm-shifting arguments by climate alarmists! So predictions about what climate scientists will say or sign on to are clearly predictions about what they think is palatable, and not what they think is most accurate, and so we’ve lost yet another potential condition (peer-reviewed articles published in top journals) for our futures contracts.

So here’s the question: how efficacious could such idea and prediction markets be for philosophy?

A surprising number of my beliefs are not about the world as such: they are about texts and the history of ideas, specifically the history of philosophy. Testing most of my beliefs is as simple as checking an explanatory sentence of mine against a sentence written by Plato or Hegel. If they correspond, then I have been shown to be correct; if not, I’ve got some explaining to do. When I correct, or am corrected by, other philosophers in an academic setting, sometimes this is simply done by directing attention to an article or book that appears to contradict the erroneous textual interpretation. Then they or I attempt to explain the discrepancy. This kind of erudition-comparison is common among those who understand the field of philosophy as inextricably linked to its own history. But I suspect that we’d like to be right about the world in addition to being right about Heidegger.

Another problem has to do with the testability of most of the matters under discussion. For instance, I find that I am unsure whether or not I am a brain in a vat. I’d like to think that I’m not, but I also grant that if I were, it is probable that the computer that controls my sensations would give me no indications of my situation. So the one belief that seems the most essential seems untestable: every day I wake up and look to the market for the value of that belief, and find that there is no market at all for it. How much should I charge for a futures contract on conclusive proof that I am not a brain in a vat, with the condition that there be definitive evidence by January 1, 2011? How should I rate this contract against one that predicts we will discover that we are all being deceived by an evil demon, or already trapped in the afterlife, or caught in an eternal recurrence? Mostly we would say to those who are willing to make bets about our envattedness that they are wasting their time and money: propositions about such matters are not testable. But we might also say that an obsession with falsification, like obsessive speculation about solipsism, will often lead to precisely this kind of time-wasting exercise, and is best avoided.