Verifying Moral Realism (The Will-be/Ought Gap, continued)

Yesterday, I called myself a moral realist, which is to say that I believe that some claims about values are agent-neutral.

Going back to testability, I suspect that one place that markets will not yield much benefit is in evaluations of normativity. Despite the fact that I am a moral realist and believe that there are truth-conditions for ought statements, I worry about the conditions under which a moral proposition could be said to satisfy some futures contract. Yet for many, this is a problem with moral realism, not with idea markets. My moral realism is a belief about the world, and the obvious moral anti-realist claim is simply that there is no evidence for that belief, or that these beliefs are dependent on matters that are not fully resolved. (For instance, when looking for evidence for the belief that “Murder is wrong,” I’d be looking for evidence that can be sustained whether or not I’m a brain in a vat. After all, we might all be brains in vats, and death may be no more than a reset switch.)

As I noted yesterday, one way to make untestable principles testable is to seek out the middle-range theories that justify them. Indeed, this is what our profession has mostly done, especially in considering the truth-conditions of moral statements. What is it, exactly, that the claim “Murder is wrong” could possibly track? If it is my own moral intuition, then it is nothing more than a self-report, and is clearly not agent-neutral. A sociopath could rightly claim that murder didn’t seem wrong to her, and we’d be at a loss to continue.

One troublesome way to verify such claims is associated with the logical and legal positivists. If, for instance, my claim that “Murder is wrong” depends on JHVH having chiseled “Thou shalt not kill” onto a particular stone tablet, then it stands or falls on that historical fact. But this is not my claim: I believe that murder is wrong regardless of what was chiseled on the tablets stored in the Ark of the Covenant, let alone the identity of the chiseler. The same thing could be said for the claim that “Murder is wrong” because it violates § 18.2-32 of the criminal code of Virginia, where I currently reside. It doesn’t matter what God or the state commands, some acts are, I maintain, morally impermissible. But perhaps I am wrong. Perhaps this is simply a matter of faith on my part, a curious superstition or vestigial judgment from our ancestors, and going around asking my students and friends whether they really think murder is wrong or not is a kind of ostentatious gesture designed to preserve that superstition. In this sense, discussions of murder are like judgments of taste, and the moralist’s job is to shame her audience into pretending to believe that we like Chopin better than The Clash. “Seriously? Joe Strummer? Puh-lease.”

But I don’t think so. I am more than willing to admit I could be wrong about which acts are permissible and which obligatory or forbidden, or put differently, about which specific facts might count as a justification for killing, so I continue to be a fallibilist, and to seek out opportunities to test my beliefs. Yet if I can be wrong about a moral proposition, then it stands to reason that I can be right about it, too. But perhaps not. For instance, many atheists are tempted to say that though our beliefs and claims about God are desperate to track some epistemic reality, it’s simply an unfortunate fact about our lives that they do not. Because God does not exist, all the wars fought over what communion or baptism means are simply wasted or misdirected efforts. Perhaps discussions of the moral impermissibility of murder are like discussions of God or fairies: they are focused on a fiction. But again, I don’t think so.

As an example, I propose fairness, following Rawls. Though I may not know exactly what fairness entails in many situations, I believe that I can come to tentatively correct conclusions about the demands of fairness through a process we call reflective equilibrium. In short, because conditions like non-contradiction and valid inference apply to my moral reasoning, I suspect that I am reasoning about a real object and that valid moral arguments can also be sound. (The moral anti-realist can respond that my reasoning resembles the obsessive fandom of a nerd who points out plot holes in Star Trek, but more on this later.) Moving back and forth between cases and principles, I can generate research questions and frame disputes, and at any point we can cast about and show where we started, what justified our progress, and maintain that we have truly achieved a closer approximation of the demands of fairness.

Here, too, I meet with disagreement, but I find that others are not just skeptics of the “you never know if you’re a brain in a vat” sort, but that they are actively committed to the opposing viewpoint. Since it is very, very rare that I can find an occasion for disagreement with my estimable friend Dr. J, it gave me great pleasure to see her articulating her account of that contrary view in her recent posts on moral relativism (Lazy Relativism and Strong Relativism).

First, a review: Dr. J shared her distaste for the lazy relativism of some of our students, who articulate a vision of the world that does not require moral deliberation. As she points out, this is rarely because they have a robust conception of agent-relative values, but because they believe that they know the “Truth” and don’t feel like trying to figure out the principles and justifications that ground their opinions. But she ends with a stinger: she too is a relativist!

By way of explanation, Dr. J offers her own position as a “strong” relativist, the relativism of one who nonetheless values robust deliberation about values. How does she justify all this deliberation about something if there is no ‘fact of the matter’ to deliberate about?

As a relativist about moral truths, I deny the authority and the necessity of my antagonist’s moral truths, and I ought to be able to give an account of how I arrived at my value judgments independent of such authority or necessity. If I can give such an account, then the advantage has shifted. Whereas the lazy relativist leaves him- or herself vulnerable to the charge of being simply irrational (i.e., holding that mutually exclusive propositions are equally true), the strong relativist who can give an account of his or her beliefs and take ultimate responsibility for the judgments that constitute his or her values is now able to make different demands of his or her antagonist.

So far, so good: there’s little difference between the position I’ve articulated and the one Dr. J holds. I too believe we must take responsibility for our judgments: though I believe they are agent-neutral, they nonetheless survive only in our institutions and practices, which must be fired by human reason if they are to have any significance. Dr. J argues that she distinguishes herself from her sleepy freshmen because she is willing to pursue moral deliberations even if they’re not ultimately “about” anything that exists as an independent entity out there in the world, like mathematics. Her position trumps theirs, she suggests, because she “can give an account of… her beliefs, and take ultimate responsibility for the judgments that constitute… her values.” I take this to be a coherentist moral epistemology. What distinguishes the lazy relativists and the strong relativist, Dr. J claims, is not that one of their beliefs is true, and one false, but that one is justified by a coherent account, while the other is not. Neither corresponds to any state of affairs in the world, which is why they are both relativists.

I’m interpolating here, so I may be misinterpreting Dr. J, but I suspect that she might accept this description: strong relativism trumps lazy relativism because her account moves logically from premises through inferences to conclusions, while her students offer only conclusions. However, Dr. J seems to maintain that her premises themselves cannot be tested. That is why the lazy relativist is at risk of being deemed ‘irrational’: where Dr. J applies reason, offers reasons, and pays close attention to reasons, her students do not.

Of course, Dr. J is a philosophy professor. She makes her living being reasonable. She’s prejudiced in favor of reasons! If her students are anything like mine, they have likely pointed out this apparent bias-in-favor-of-reason-giving. When my students tell me this, I reply that moral reasons track, or ought to track, moral realities, and so by exchanging reasons we’re engaged in a process of discovering those values. But Dr. J refuses to avail herself of this traditional response. Instead, she maintains:

…strong relativists take human freedom seriously… especially the human freedom exercised in the determination of values, those things that are not governed by necessity or given over to us whole and complete by some transcendent or transcendental authority. Those determinations are the only ones for which we can be “responsible” or “accountable” or any other ethically-loaded adjective that we commonly use, after all.

Dr. J believes that humans are free, and that they freely choose their values. (I assume that Dr. J is also a determinist, and that by “freedom” she means some brand of compatibilism such that physical causation and voluntariness and/or responsibility are compatible.) The lazy relativist has a ready-made response to this: one can freely choose a non-discursive, non-reason-tracking, non-inquisitive kind of human freedom. After all, if moral values are agent-relative, then why can’t an agent decide not to value reasons? At that point, the demand for reasons and ethical inquiry is only an application of force: the strong relativist can only demand reasons from the lazy relativist through coercion, whether it be a bad grade or a lost election. I’m happy to coerce my students to be reasonable because I believe they agree to that kind of coercion when they sign up for my courses. I’m not sure how Dr. J justifies this.

I think the problem here is that Dr. J is contrasting relativism with absolutism, rather than with moral realism:

The absolutist can only ever understand his or her antagonists as in error, and has the unfortunate superadded challenge of not being able to correct that error because the basic rules governing the distinction between truth and error are not shared.

The absolutist, as Dr. J depicts her, is someone who claims to know the truth. The absolutist then has just as much difficulty with moral deliberation as the lazy relativist: if your interlocutor is supposed to know the truth, and she doesn’t, then your interlocutor is operating according to a different set of “basic rules.”

What kinds of reasons would ever motivate her to change rules? That’d be like persuading a chess player that we ought to start playing checkers in the middle of a game. Only if we’re playing by the same rules can we understand each other. But this is why relativists like Rorty and Dr. J understand their position in opposition to absolutists, for whom a moral claim like “It is always and everywhere wrong to torture another human being,” can be known (perhaps a priori!) to be true. To the relativist, this is as absurd as claiming that it is always and everywhere wrong to move your ‘king’ two spaces: in checkers, that’s a fine move!

But morality is not a board game (unless we are all brains in vats), which is why I’m a moral realist. One way to make these views palatable to each other is simply to claim that we are all playing the same game, part of which involves figuring out the rules. A moral realist is not committed to the existence of a human-independent set of values, only an agent-neutral set of values. In fact, I don’t experience the incommensurability problem that Dr. J describes, except when discussing these matters with people like her, who share most of my values but disagree with me on the metaphysics. When I disagree with a non-relativist, I assume that we are using the same basic rules to distinguish truth and error. If I find that we are not, then our dispute is not really about values, but about these ‘basic rules,’ i.e. the middle-range theories by which we move from undisputed facts to disputed facts to disputed values.

For instance, I say that torture is wrong, but primarily because I believe torture doesn’t work, and is more likely to supply bad intelligence than a good, empathic interrogation using no physical coercion. My repugnance for intentional acts of cruelty is peripheral to my skepticism. If there were some inerrant way to identify terrorists, and to get them to tell the truth, then I might be willing to sign on to using that method to interrogate a known terrorist hiding a known ticking time bomb. The whole problem with torture is that there isn’t such certainty in identification, and pain is not the touchstone of truth. If my middle-range theory of torture’s efficacy were disproven, I’d need to re-evaluate my repugnance. However, I believe it is the pro-torture crowd that has made a mistake in the middle-range theorizing, and the evidence seems to support my view. So you see, I do not claim to know the truth about the world in any sort of privileged sense. I only claim that I make an effort to track my values to the best possible reasons. And why can we not be realists and fallibilists both?

If we take moral inquiry to be adequately addressed through an appeal to justified true beliefs accompanied by an account, then we can seek an account that would make sense of my claim or show it to be nonsense, while maintaining that our beliefs are about the world and either verified by it or not. If, for instance, my claim can be shown to be self-contradictory, or to vary in some impermissibly arbitrary way (“it is wrong to murder except when the victim is redheaded”), we might say that it fails to meet conditions of identity and non-contradiction, conditions we ascribe to all real objects. Thus being unreasonable about morality would clearly be a case of being wrong about morality, rather than of the free choice of unreason. I think this is a problem for moral relativists of the strong variety as well: if my values about murder are at odds with those of another, then one of us is wrong, and we are obliged to discuss the matter until we can make our views converge. The only alternative to such convergence is that one of us, or the world itself, is unreasonable.

If Aristotle and John Brown were both right about slavery, then we live in an incomprehensibly strange world, and moral propositions are the least of our troubles. I suspect that no relativist would be willing to buy a futures contract on the converse, where the conditions are: proof that there are both natural slaves, and, simultaneously, that slavery is an affront to God, because we are made in his image. Of course, it could be possible, as the error theorists would have it, that they are both wrong, but I’d like to think that one of them is less wrong than the other. Nor, I suspect, would a relativist be willing to grant the free choice of values in Carl Schmitt’s political philosophy, even though it contains a very persuasive account of the inadequacies of parliamentary democracy. But perhaps I have descended into a strawman argument against a good friend. I believe that would be wrong, but for now, I can’t figure out what it would mean for her to agree with me! For instance, it is also possible that I have confused subjectivism and relativism here, or that Dr. J will respond with an equally cunning distinction around which our entire dispute will dissolve. (She’s smart like that.) I wouldn’t bet on this being over, that’s for sure.
