I made the mistake of teaching a set of essays on gay marriage at the end of the semester. I call it a “mistake” because I find it very difficult to give my traditional charitable interpretation to the work of folks like John Finnis and Robert George, who make arguments from a definition of marriage as “one-flesh two-body union” that they claim must exclude homosexuals but include infertile heterosexual couples. Yet they resist the objections that this is a) a narrow doctrinal definition or b) a definition that draws norms from crude anatomy or c) a definition that falls for some other version of the naturalistic fallacy. After reading widely on the subject, I still can’t accept that a rational person would deny that this “one flesh union” definition is all three: only bad faith or completely incommensurable languages seem to justify our disagreements. You might as well just say, “Marriage is Magic.”
This is why I believe that the legal situation in most of the US that tries to restrict marriage to heterosexual couples, including the so-called “Defense of Marriage Act,” which abrogates the part of the US Constitution requiring that states grant “full faith and credit” to “acts, records, and proceedings,” is unjust. But when we make this case, we are confronted with the force of cultural and linguistic traditions that restrict certain performative utterances to certain speakers. Most speakers cannot meaningfully utter statements like “I dub this ship ‘The Sylvan Nymph,’” or “I now pronounce you a citizen of Aztlan.” Similarly, if I offered you a knighthood, you’d be right to scoff. When we advance arguments in favor of gay marriage, some people deride these arguments as simple violations of convention.
The thing is, they’re not entirely wrong. Conventions bind us. My wife and I tried to get engaged for months, but none of our conversations or decisions seemed to stick. She asked. I asked. We said yes. She gave me a plastic decoder ring out of a Cracker Jack box. We discussed what a great idea it was. We planned details. We speculated about dates. We digressed. We sent each other links to dresses and suits and honeymoon spots. But for some reason we still weren’t engaged.
Then one day I went ring shopping. This process took months because I refused to buy a natural diamond… but I had become convinced that it had to be a diamond or else the ritual wouldn’t work. It was an ordeal, let me tell you, and I think it had to be! When the ring finally arrived, we went for a walk in front of Nashville’s Parthenon. I knelt to tie my shoes, told her I loved her, and pulled the ring from my pocket. Suddenly we were engaged! We called everyone we knew, and declared it. It was settled: we were going to spend the rest of our lives together. That’s magic.
I call it “magic” to show how hard it is to resist cultural norms, especially in ways that have cross-cultural force, like the engagement process. When an Irish atheist (me) and an Italian lapsed-Catholic (her) try to get married, they’ve got to communicate that to themselves and to each other’s families using some pretty broad semaphore. And why shouldn’t we use “sorcery” to describe this kind of signaling, if the traditional model of autonomous contract captures barely a sliver of the phenomenon? A sufficiently communal socio-cultural ritual is indistinguishable from magic: like a magic spell, it makes things happen in ways and for reasons that none of its participants can really understand.
One of the major debates in the philosophy of emotions is whether they ought to be treated as propositional attitudes and judgments capable of truth-tracking or simply as moods that can be appropriate or inappropriate to a context, but not falsifiable or verifiable. The question is whether emotions are a kind of intentional cognition or not. In this way it is tied to many other debates about intentional states and cognition in ethics, theology, and language in general: the idea that some or all of our attitudes, beliefs, or behaviors are not expressions of meaningful propositions, and that to evaluate them as such is a mistake.
The appeal of non-cognitivism about emotions is that it recognizes the complex details of emotional phenomena, especially the way that passions are embodied and pre-linguistic, and frequently non-deliberative. There are a couple of other reasons that some non-cognitivists adopt their position, the most important of which is that this position is connected to non-cognitivism about ethics in general. Non-cognitivism about ethics is the claim that ethical propositions are neither true nor false. I’ve discussed my objections to non-cognitivism in ethics under the heading of anti-realism and relativism before: basically, I reject the claim that ethical propositions must track some fact about the world (like a painting being level or crooked) in order to be truth-tracking. (Our minds are in the world, and our ethical sentences can track facts about our minds without becoming subjective, i.e. simply tracking an individual’s preferences or desires.)
Non-cognitivists also sometimes enunciate reasons tied to first-person epistemic privilege: if an emotion can be true or false, cognitivism seems to suggest that I can be wrong when I am angry or sad or ashamed. I tend to think this is true, in the sense that we can misrecognize our own emotions. Experiments show that a person given adrenaline can be tricked into experiencing the heightened state as either angry or euphoric, depending on how an actor in the room with them behaves. In this sense, we can literally mislabel our emotions, or else draw distinctions that do not actually exist in our emotional states. (Of course, it is also possible that subjects in the experiment actually did experience different emotions, as the behavior of the actor created reactions that changed the valence or admixture of neurochemical reactions to produce euphoria or anger.)
Another reason to adopt non-cognitivism is to undermine the hierarchy of reason and emotion: if all emotions are only imperfectly expressed propositions, then they can be “trumped” by coldly rational articulations of the reasons these emotions express. This is partly tied to first-person epistemic privilege, but non-cognitivists often want to claim a kind of exemption for the passions, since they express a set of moods and attitudes that might be damaged by overexposure to ratiocination. As with religious beliefs, a non-cognitivist about emotions might argue that there is something improper about trying to constantly translate and interpret the moods and passions a person experiences into propositional logic or a sentential calculus.
One way this debate sometimes plays out is that defenders of non-cognitivism charge cognitivists with “intellectualizing” the emotions, and in so doing, of participating in the denigration of the emotions in favor of reason. Yet I think this charge is exactly reversed: I think we have an obligation to acknowledge the ways that emotions figure in our reasoning and rationality, not simply as inputs translatable into preferences, but through a complicated interplay of attention and processing that is often impassioned or mostly at the “gut level.”
But this dynamic approach to embodied cognition does replicate the hierarchy between reason and passion in one way: it means that we must submit emotions to rational reflection. The role of emotions in cognition means that we cannot simply “leave the passions alone” or refrain from judging or inspecting them. In fact, it suggests that we ought to be especially wary of the emotional component of cognition, precisely because it’s constantly interacting with the purely propositional kind of reasoning, and yet it is far too easy to ignore this role. We can recognize this when the emotions in question are racist or sexist, but then only because of two centuries of patient work by feminists and anti-racists. Other kinds of systematic emotional biases are similarly fraught with ethical implications, but they are more difficult to remark upon because there is no built-in constituency for the in-group bias, or for my favorite example: the status emotions tied to the moral intuitions related to hierarchy and authority.
In Ted Chiang’s short story, “Liking What You See: A Documentary,” he offers us a typical science-fictional hypothetical, in the form of a staged debate regarding the value of seeing beauty in others. What if you could remove your own capacity to see the beauty in a human face? While at first this seems like an absurd question, Chiang slowly supplies a pseudo-scientific neurological explanation and a set of political and ethical arguments that many of his readers will find familiar. By the end, he’s produced both an engaging short story and a kind of policy briefing on a thorny problem: what should we do about the ordinary discrimination of beauty and ugliness?
The deeper societal problem is lookism. For decades people’ve been willing to talk about racism and sexism, but they’re still reluctant to talk about lookism. Yet this prejudice against unattractive people is incredibly pervasive. People do it without even being taught by anyone, which is bad enough, but instead of combating this tendency, modern society actively reinforces it.
Educating people, raising their awareness about this issue, all of that is essential, but it’s not enough. That’s where technology comes in. Think of calliagnosia as a kind of assisted maturity. It lets you do what you should: ignore the surface, so you can look deeper.
I have a tendency to speak in a way that conflates “evidence” and “reasons.” I’m pretty sure they are interchangeable. When we discover evidence, we discover a reason to believe some proposition. At the same time, reason-giving is the exchange of evidence, even when it is nothing more than the exchange of priors and ungrounded convictions.
Neither evidence nor reasons constitute proof, alone: instead we speak of “proving” something according to some evidentiary standard, as when we distinguish, in law, between the standards of “reasonable suspicion,” “probable cause,” “preponderance of the evidence,” and “beyond a reasonable doubt.” In that sense, proof, too, is a matter of satisfying some evidentiary standard by sharing evidence and thus reasons to believe a proposition.
Of course, this stands in opposition to the use of proof in a priori situations, like logical and geometric proofs. In such instances, we seek irrefutable evidence as the only acceptable reason to believe a conclusion. The task in such instances is to achieve epistemic closure: to explicitly consider the implicit conclusions entailed by our current set of beliefs. Rather than achieving an evidentiary standard, we look to go beyond evidence to “the things themselves,” which raises all of the concerns of Humean skepticism and Kant’s Critiques.
In light of those concerns, we can consider an alternative. We can reject this formulation of proving as categorically distinct from reason-giving and evidence-appraisal. In that case, we’d take a logical “proof” as evidence of some claim, but nothing more. This requires us to re-evaluate our relationship to the necessity called upon in logic and mathematics.
That re-evaluation is at the heart of the project in naturalized epistemology: we begin with the assumption that our very reasoning is the product of natural evolutionary forces, and thus perhaps not “fit to purpose” for metaphysical reflections, or set theory, or ontological interrogation. Yet when we set out with a purely or preferentially descriptive approach to the habits of belief and reason-giving in our thinking and knowing, other trouble arises, not least that, descriptively, we seem to be burdened with a number of epistemic prescriptions and norms. We cannot help but notice that believing is shot through with justification and warrant, and that we find ourselves constantly aspiring to more than a description of what we find ourselves believing.
In particular, I think it is notable that a description of epistemic behaviors includes the aspiration to demonstrate epistemic virtues, and that those epistemic virtues are said to be warranted by their capacity to achieve the truth. When a particular epistemic habit is shown to be worse than we first thought at achieving the truth, it seems like we tend to reject it, or at least try to reject it by rooting it out in some of the glaring places where we find it. In that sense, my tendency to use “evidence,” “reasons,” and even “proof” interchangeably reflects what I take to be the current state of the art in epistemology: the remarkable proximity between naturalism, reliabilism, and virtue epistemology.
One question that continues to trouble me is a variation of the one first described by Plato’s Meno: are epistemic virtues teachable? That is, is it possible to transmit the habits of responsible, reliable, and truth-sensitive believing? Alternatively, do we as teachers only select and sort reliable knowers from unreliable knowers through a process of assessment and recognition? Or worse, do we select and sort like-minded believers from those whom we do not recognize because of the flawed, prejudicial, or biased methods built into our metrics for selection and sorting? Note that teaching as “selection and sorting” is not incompatible with a subsequent process by which we cultivate virtuous epistemic habits in those who show native talent, but nor is it incompatible with a process of sorting that merely groups like-minded believers and then privileges some over others through estimable associations.
Often when I am trying to explain problems in the modern political landscape or my own approach to political philosophy, I will return to Max Weber’s account of bureaucracy as more efficient than private office. Yes, I’ve heard all the jokes about “efficiency” in bureaucracy, but Weber’s argument rested on the contrast between private and capricious office-holders and the public and publicly accountable form of governance that characterizes both state and business organization. Weber’s concern was that bureaucracies were too efficient, that their tremendous instrumental rationality obscured a real stupidity about the best ends to pursue. Weber’s theory of bureaucracy was vindicated in the way that Nazi Germany efficiently murdered its Jews, Roma, homosexuals, and communists, and ever since there’s been a tendency to get distracted by the Nazi example whenever the amoral efficiency of bureaucratic regimes is mentioned.
My own interest is in the tension between proceduralism and participation, but this strikes many people as odd and potentially pernicious, since some of my concerns about the administrative state are echoed by populists and the Tea Party activists. (I like to point out to my fellow philosophers that Glenn Beck’s writers have clearly been reading Giorgio Agamben.)
The thing is, I think the most pressing kind of political philosophy, the research that really needs to be done right now, is a philosophical investigation of the contemporary formulation of bureaucratic governance. In short, it’s time to resurrect the tension between the predictive power of social scientists and human freedom. This is the stuff of mid-20th Century existentialism: Heidegger’s criticisms of technology, Sartre’s anxieties about freedom, the to-and-fro of structuralism and post-structuralism, and the crisis and critique of the human sciences that runs through Adorno, Foucault, and Derrida. All these Continental philosophical debates largely occurred at just about the same time that Anglo-American philosophers had become obsessed with freedom and determinism, modal logic and counter-factuals, artificial intelligence and qualia, and the question of scientific realism. These questions, it seems to me, are all very much of a piece. Even as they presented themselves as valuable avenues of research and live debates of scholastic importance, they also captured an epochal anxiety about singularity and freedom, the global battle between communist “technocrats” and capitalist “risk-takers.”
I’m simplifying mightily, of course. But when you read with the question of bureaucracy in mind, it’s amazing how often it shows up in surprising places. Take my favorite thinker of the period, Hannah Arendt. In a 1964 lecture on “Cybernetics,” Arendt said:
When I grew up, it was still very common and very fashionable to believe that people who knew how to play chess very well were very intelligent indeed. If today we know that some kinds of these machines — I’m not going to say any names — can play a reasonably good game of chess, then I think it is a question of human dignity to say that this kind of intelligence apparently has not the same status as other kinds of intelligence, as other kinds of thinking. In other words, it is still something technical and it resides still in such a thing which we may accurately call brain power…. but it does not say anything about the level, or about the special particularities of this human being as such.
Is this really only about computers, or “human dignity”? The best way to think about her anxiety here is through the lens of two kinds of rationality: calculation and practical wisdom.
Many decisions are easily calculable in terms of costs and benefits, risks and probabilities. Because many very difficult decisions depend on evaluating one’s own costs and benefits alongside the cost-benefit calculations of others, it can be tempting to think of these decisions as incalculable. After all, the mind of the Other is unknowable, just as the future is full of surprising and incalculable risks. Unfortunately, this temptation is dangerous. When the stakes are high, human beings tend to act in highly instrumental ways, and to adopt strategies that are as easily calculable as chess moves. Because most moves are easily dominated, the players’ real options are remarkably limited, and a well-programmed chess game can out-predict even the best grandmasters.
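The logic of “dominated” moves can be made concrete with a toy game. This is a minimal sketch with hypothetical payoffs of my own invention, not anything from the literature: when one strategy strictly dominates another, a player’s choice becomes calculable in advance, chess-style, no matter what the other side does.

```python
def strictly_dominates(payoffs, a, b):
    """True if strategy a yields strictly more than b against every
    possible move by the opponent (one payoff per opponent move)."""
    return all(pa > pb for pa, pb in zip(payoffs[a], payoffs[b]))

# Hypothetical payoffs for one player: each list gives that strategy's
# payoff against the opponent's three possible moves.
payoffs = {
    "hold out":  [3, 1, 2],
    "preempt":   [4, 2, 3],  # strictly better in every column
}

# Eliminate any strategy that some other strategy strictly dominates.
dominated = [a for a in payoffs
             for b in payoffs
             if a != b and strictly_dominates(payoffs, b, a)]
print(dominated)  # -> ['hold out']: the "real options" shrink to one
```

Once the dominated rows are struck out, predicting the player is trivial, which is the sense in which high-stakes instrumental behavior becomes as calculable as a chess position.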
His first foray into forecasting controversy took place in 1984, when he published an article in PS, the flagship journal of the American Political Science Association, predicting who would succeed Iran’s ruling Ayatollah Khomeini upon his death. He had developed a rudimentary forecasting model that was different from anything anyone had seen before in that it was not designed around one particular foreign-policy problem, but could be applied to any international conflict. “It was the first attempt at a general mathematical model of international conflict,” he says. His model predicted that upon Khomeini’s death, an ayatollah named Hojatolislam Khamenei and an obscure junior cleric named Akbar Hashemi Rafsanjani would emerge to lead the country together. At the time, Rafsanjani was so little known that his name had yet to appear in the New York Times.
Even more improbably, Khomeini had already designated his successor, and it was neither Ayatollah Khamenei nor Rafsanjani. Khomeini’s stature among Iran’s ruling clerics made it inconceivable that they would defy their leader’s choice. At the APSA meeting subsequent to the article’s publication, Bueno de Mesquita was roundly denounced as a quack by the Iran experts—a charlatan peddling voodoo mathematics. “They said I was an idiot, basically. They said my work was evil, offensive, that it should be suppressed,” he recalls. “It was a very difficult time in my career.” Five years later, when Khomeini died, lo and behold, Iran’s fractious ruling clerics chose Ayatollah Khamenei and Hashemi Rafsanjani to jointly lead the country. At the next APSA meeting, the man who had been Bueno de Mesquita’s most vocal detractor raised his hand and publicly apologized to him.
Using game theory, mathematical modeling, and a panel of regional experts, Bueno de Mesquita can beat the best estimates of individual experts and the entire US intelligence community. His work has only one premise: “In the future, we’re still all raging dirtbags.” The CIA has claimed that his estimations are 90% accurate. Wow!
Rational choice theory promises us a world in which decisions are easily calculable because their results are precisely calibrated. The right choice would then be the one given by a calculation, not a decision taken by a free agent. We might wrangle over values and normative claims, but even these disputes can often be solved by making utility maximizing decisions that remove either-or decisions and make them both-and decisions: if we need not choose between our values, our pluralism can go unchallenged. That means that Bueno de Mesquita can resolve problems that would otherwise be unresolvable:
Recently, he’s applied his science to come up with some novel ideas on how to resolve the Israeli-Palestinian conflict. “In my view, it is a mistake to look for strategies that build mutual trust because it ain’t going to happen. Neither side has any reason to trust the other, for good reason,” he says. “Land for peace is an inherently flawed concept because it has a fundamental commitment problem. If I give you land on your promise of peace in the future, after you have the land, as the Israelis well know, it is very costly to take it back if you renege. You have an incentive to say, ‘You made a good step, it’s a gesture in the right direction, but I thought you were giving me more than this. I can’t give you peace just for this, it’s not enough.’ Conversely, if we have peace for land—you disarm, put down your weapons, and get rid of the threats to me and I will then give you the land—the reverse is true: I have no commitment to follow through. Once you’ve laid down your weapons, you have no threat.”
Bueno de Mesquita’s answer to this dilemma, which he discussed with the former Israeli prime minister and recently elected Labor leader Ehud Barak, is a formula that guarantees mutual incentives to cooperate. “In a peaceful world, what do the Palestinians anticipate will be their main source of economic viability? Tourism. This is what their own documents say. And, of course, the Israelis make a lot of money from tourism, and that revenue is very easy to track. As a starting point requiring no trust, no mutual cooperation, I would suggest that all tourist revenue be [divided by] a fixed formula based on the current population of the region, which is roughly 40 percent Palestinian, 60 percent Israeli. The money would go automatically to each side. Now, when there is violence, tourists don’t come. So the tourist revenue is automatically responsive to the level of violence on either side for both sides. You have an accounting firm that both sides agree to, you let the U.N. do it, whatever. It’s completely self-enforcing, it requires no cooperation except the initial agreement by the Israelis that they are going to turn this part of the revenue over, on a fixed formula based on population, to some international agency, and that’s that.”
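The self-enforcing character of this proposal is easy to see in a small model. The sketch below is my own illustration, not Bueno de Mesquita’s model: the 40/60 population shares come from the quoted passage, while the baseline revenue figure and the linear violence-to-tourism relationship are assumptions for demonstration only.

```python
# Population-based shares, per the quoted proposal.
PALESTINIAN_SHARE = 0.40
ISRAELI_SHARE = 0.60

def tourism_revenue(baseline, violence_level):
    """Tourists stay away when there is violence. The linear falloff is
    an illustrative assumption; violence_level runs 0.0 (calm) to 1.0."""
    return baseline * (1.0 - violence_level)

def payouts(baseline, violence_level):
    """Each side's automatic payout under the fixed formula."""
    revenue = tourism_revenue(baseline, violence_level)
    return {
        "palestinian": revenue * PALESTINIAN_SHARE,
        "israeli": revenue * ISRAELI_SHARE,
    }

# Violence by either side shrinks both payouts at once, so each side
# has a standing incentive to keep the peace, with no trust required.
calm = payouts(100_000_000, 0.0)
tense = payouts(100_000_000, 0.5)
print(calm["palestinian"], tense["palestinian"])  # -> 40000000.0 20000000.0
```

The point of the mechanism is visible in the last two lines: neither side can punish the other without automatically punishing itself.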
A real crisis is one where sides cannot easily be chosen. If a formula always comes up with better decisions than I and my fellow citizens do, I think we’d all rather that the formula and its statistician-caretakers do the governing. That kind of proceduralism takes the average citizen out of the picture, or rather reduces her to a datapoint alongside others.
In a sense, the increased effectiveness of bureaucracies translates into reduced freedom for me. If political engagement is an important public good (and I believe it is) then bureaucratic predictiveness will lead to a maldistribution in that good: statisticians will get more of it, and regular folks less. For that reason, I’m really, really uncomfortable with predictive rational choice theory. It’s not the resolutely self-interested view of the world Bueno de Mesquita advances that troubles me, the whole “raging dirtbag” shtick. It is, as Arendt put it, a “question of human dignity.” This is the very reason that Bueno de Mesquita refuses to handicap elections. The intersection of polling data and predictive technologies already contributes to a drastic narrowing of political outcomes in US elections. The capacity to know becomes the requirement to know.
In contrast, practical wisdom doesn’t depend on a person’s predictive powers. It allows one to surf that wave between the good and the possible that is characterized by intense risk and unforeseeability. For this reason, many philosophers see it as a corrective to our overly managed and predictable world. In exercising practical wisdom, I don’t want to act ‘rationally,’ according to a ratio or pre-ordained measure: I want to act wisely, i.e. with a view towards a Good that we can all only see in part. Those endowed with great practical wisdom ask: how can I act in a way that every model would call contrary to ‘self-interest,’ in a way that can rocket us out of the realm of calculation and into another? This will involve a great deal of reasoning, measuring, and calculating, but it should also entail a risk, a chance, fortuna out of which virtu can appear.
All the objections about incomplete information, lack of normative scope, and observer/participant problems pale before my anxiety that something essential about our freedom is lost when it can be reduced to a statistical abstraction that is nonetheless accurate. That’s why I love black swans, emergent events, and secret revolutions so very much. But that love would be pathological if it came at the expense of the well-being of the least-advantaged. I’m increasingly persuaded that attachment to civic engagement requires a non-epistemic justification, and that we must temper our love of administrative governance with the cautions of mid-20th Century existential phenomenologists like Hannah Arendt.