Diversify or Die

There’s an interesting piece in The Stone today on the consequences of philosophy’s Anglo-European blinders: “If Philosophy Won’t Diversify, Let’s Call It What It Really Is.” Garfield and Van Norden argue that the systematic failure to address non-Western sources impoverishes the discipline and belies any claim to universality. And what a wonderfully provocative list of addenda they suggest!

We hope that American philosophy departments will someday teach Confucius as routinely as they now teach Kant, that philosophy students will eventually have as many opportunities to study the “Bhagavad Gita” as they do the “Republic,” that the Flying Man thought experiment of the Persian philosopher Avicenna (980-1037) will be as well-known as the Brain-in-a-Vat thought experiment of the American philosopher Hilary Putnam (1926-2016), that the ancient Indian scholar Candrakirti’s critical examination of the concept of the self will be as well-studied as David Hume’s, that Frantz Fanon (1925-1961), Kwasi Wiredu (1931- ), Lame Deer (1903-1976) and Maria Lugones will be as familiar to our students as their equally profound colleagues in the contemporary philosophical canon. But, until then, let’s be honest, face reality and call departments of European-American Philosophy what they really are.

Thus, the more appropriate title for our departments would be “European and American Philosophy.” On balance, I applaud this argument: we ought to aim to live up to the universality of our disciplinary self-conception, or give up that self-conception entirely.

When I think about diversifying my own syllabi, I almost never reach for Asian philosophers. I aim for gender parity first, racial diversity second, and usually end up with only a few thinkers from outside the Euro-American tradition. Sometimes none. I am scared to get Confucius or Mencius or Wiredu wrong, and I’m worried about orientalizing or exoticizing their traditions. These are obviously resolvable anxieties, given a sincere commitment, but they exist. I have a comfort zone, I push against it in some ways but not in others, and there are biases in the patterns of which ways I leave the comfort zone that I must address.

My biases, though, largely reproduce the biases in the discipline as a whole. And it would be much easier for me to correct my individual failings if the profession would work with me, if my training had worked on me. Why didn’t my graduate school train these biases out of me? Kristie Dotson’s “How Is This Paper Philosophy?” is my go-to answer for this question. I think it’s crucial, but it’s hard to excerpt well, so read it!

… … …

Okay, you’re back? Basically, philosophers are constantly engaged in a dual game of legitimating their work as philosophy and working to reconstitute the borders of what counts as philosophy. These practices, which simultaneously force people to justify their projects and choices in terms of a shifting standard of legitimate philosophical research, are how we end up treating Chinese or Native American philosophy as merely “inert ideas,” or worse, as “religion, mythology, storytelling, poetry, or ‘dancing’ (as Levinas once so generously declared).”

This has the effect of making philosophy a mostly white man’s game, because what Dotson calls “diverse practitioners” usually find that philosophical borders are being continually redrawn to exclude them. Of course, she writes the essay in defense of Black American, feminist, and queer philosophy, but the point stands: Asians are excluded by the kind of discipline that philosophy has become.

So I think it’s not enough to say philosophy has a budget problem. It does! But maybe it wouldn’t have quite as bad a budget problem if there weren’t so many faculty working on the semantics of the left parenthesis. The discipline became scholastic to avoid the big ideological fights of the last half century, and now is paying the price.

Attending to other traditions might produce more majors; philosophy departments would then be richer, both financially and ideologically, and could do better work. But it’s still an open question which traditions to prioritize, since these decisions get made one hire at a time. The Garfield/Van Norden piece gestures towards African and Latin American philosophy, but it’s part of a project to specifically increase attention to Chinese philosophy. That seems good, but I do also want to see a continued? renewed? nascent? long-delayed emphasis on Black American philosophy, as well as a re-commitment to feminism.

Evidence-Based Parenting, Spanking, and Authoritative Parenting Styles: or, How to Get My Daughter to Brush Her Teeth

A crying baby, but not *my* crying baby from Flickr user donnieray (CC By 2.0)

My daughter doesn’t like to have her teeth brushed. She’s not even two years old yet, so while that worries me, I guess it’s something we’ve still got time to correct. But one question I often wonder about is whether there’s something we could do differently to change her behavior. She’s maybe twenty-five pounds right now, so one possibility is to hold her down and force her mouth open. I’ve had to do that to administer medicines, so I know it can work, and that she’ll forgive me afterwards. But frankly it’s terrible, and if it hadn’t been necessary to preserve her physical health, I wouldn’t have done it. I tried everything else on the bribe/bargain and disguise/distract continua first, I assure you.

But my personal style, which is also my parenting style, is one that avoids force and authority. Spanking makes me uncomfortable, for instance, though I’m amenable to evidence there too. And it turns out that there’s a lot of data on that.

The American Psychological Association opposes it, and so that’s become something like the default position, sometimes even ensconced in law. Much of the concern there is that open-handed, conditional spanking (i.e. “If you steal from the grocery store, you will get five carefully administered slaps on the buttocks later in the day, after an explanation of the reasons for the spanking.”) can lead to more immediate and customary physical abuse, like facial slapping or the use of instruments like belts or canes, when the behavior returns. The evidence seems to suggest that spanking is closely associated with many, many bad outcomes, including noncompliance, aggression, adult spousal abuse, and more.

But that work, primarily linked to one researcher’s meta-analysis, has been seriously challenged in the last decade. It seems reasonable to protest that lumping caring and careful parents in with child abusers may muddy the data a bit, especially when the anti-spanking research was quickly transformed into advocacy that led to outlawing spanking of any sort in more than thirty countries. There was always the risk that the correlation between, say, noncompliance or aggression and spanking ran the other way: noncompliant, aggressive children got spanked because parents had exhausted other options.

So the work of Larzelere and Gunnoe is relevant here. They have also done meta-analyses, but tried to account for more variables, including differences in spanking style and positive developments like school performance. And what they’ve found is that conditional, open-handed spanking for children between two and six years old is associated with positive outcomes later in life.

I think this is a great case for research that challenges the orthodoxy (which is no spanking) but doesn’t actually resolve the question. We thought we knew spanking was unequivocally bad. Now we don’t. That doesn’t mean we know that spanking is good, though.

In fact, even Gunnoe’s research is not clear that spanking is the cause of the positive developmental factors associated with it. It may well be that willingness to spank is merely a marker for authoritative parenting styles more generally. Being authoritative is associated with both positive outcomes for children and spanking, and Gunnoe tries to argue that it’s really that style that is the cause of the positive developmental outcomes.

Which is a problem. Because even if I were willing to spank my daughter, I don’t think I could do it in a way that evinced authoritative parenting more generally. Like the medicine I had to force her to take, I’d be deferring to the authority of the experts in my use of force. Call it the Obedient Parenting Style. No thanks.

I’m kind of okay with this being a place where the facts are too murky and our values take over. But that makes parsing the data, when it does become available, a difficult task that raises all sorts of concerns about the role of science in law-making, the effects of subtle political biases on research, and the ways that motivated reasoning and motivated skepticism can impact results.

What is the belief you hold that is most likely to be wrong?

Another way of putting this question is: how do your ideology and social setting blind you? One way to answer is to look at those beliefs that you have the most incentive to deceive yourself about. What are your biases? For instance, I’m probably not as smart or as caring as I think I am, because I want to be smart and caring and I’m going to be on the lookout for evidence in favor of those two beliefs and be tempted to ignore or discount evidence against them. But then, too, there is the Dunning-Kruger effect, so who the hell knows? Whatever we think about these traits, we’re probably wrong, but in the banal way that everyone else is likely to be wrong, too.

I have something like my old prompt about books that changed your mind. Even if we’re conscious of the dangers of motivated reasoning and motivated rationality, we can still point to those beliefs that we hold that we see as the weakest, perhaps not because we hold them in an effort to signal ability or loyalty, but because we find holding those beliefs useful for orienting further inquiry. So here goes:

  1. Moral Realism: The belief I hold that is most likely to be wrong is a belief in moral judgments that track something objective or at least non-agent-relative. After all, it’s difficult to engage in normative inquiry without believing that our researches track something. Just as philosophers of religion tend to believe in God and astrologers tend to believe in the predictive power of the stars, ethical and political philosophers tend to believe in their thing, too. If we’re wrong on this (as thousands of relativist undergraduates have confided in me), then we’re unlikely to find lasting success. And there is certainly some reason to believe that we haven’t seen much in the way of progress in normative inquiry, despite recent trends like the line that runs through John Rawls, Derek Parfit, Philip Pettit, and Elizabeth Anderson.
  2. Character Skepticism: the second-most-likely-to-be-wrong belief I hold is skepticism about the existence of persistent character traits. I went to school with a generation of scholars who were significantly motivated by what they thought of as the deconstruction of the subject or the death of the author, so there’s certainly a sociological or network effect bias to my skepticism. With all the evidence accumulating that character traits like conscientiousness have a genetic component, it’s almost absurd to pretend that there aren’t some traits that persist over time and context. Still, I find that skepticism to be very important for my discussions of moral equality, status emotions, and the fallibility of person-oriented judgments, and so I persist(!) in holding it. (Even while I hold many people in great esteem for what I take to be their persistent habit of being right, wise, or good.)
  3. The Basic Income and the Value-Added Tax: I’m not sure I’m “most likely to be wrong” on BIG+VAT, but I do think it’s the policy advocacy position where my confidence in advancing it is the least-well-matched by the sensus communis. Call it the “largest gap between my estimation of the evidence and the general estimate.” This blog got its title from my mixed feelings about utopian theorizing, but with BIG+VAT I do feel a bit like a utopian. Beyond all the naysayers, there’s even some recent evidence that consumption (which a VAT would disincentivize) is an important component in reducing poverty. This suggests one reason to prefer income redistribution over a VAT, and so the whole edifice is certainly shaky if the right empirical evidence comes along.
  4. The Unimportance of the Middle-Class: I tend to worry less about the middle-class than the least-advantaged, which leads me to worry more about the unemployed than the employed, more about global workers than domestic workers, and more about those without a college degree than those who have credentials. But there are lots of good arguments in political theory for a vibrant middle-class, not the least of which is Elizabeth Warren’s claim that “A middle class where people are falling out and into poverty is a middle class that has less room to bring people up and out of poverty.” So I may very well be wrong.
  5. The Inefficacy of Charter Schools: I tend to think that charter schools are an anti-union boondoggle, that they are less effective than the public schools they replace, and that the cherry-picked evidence in their favor usually depends upon hidden selection effects or a resurgence of racial segregation. Even in the best cases, they seem to offer a model that would not scale beyond the single school which has lucked into success, and I’m heartened by the Stanford study that showed that charters were twice as likely to be worse than regular public schools than to be better than them. But of course, a charter school advocate would say that we ought simply to close failing charters, leaving us with some schools that are equal to public schools and some that are superior to them: at the margin, that’s a good deal. And, too, not all charter schools are for-profit market-oriented corporate monstrosities; there are some innovative experiments in common-pool resource management going on within the charter school movement. Perhaps it is better to let parents dissatisfied with their public school options take the risk. If we believe in pluralism, these experiments might be a better way to match different children’s needs with settings where those needs will be met. I dunno: I’m glad I’m not in charge of K-12 education policy in this country.
  6. Incarceration and Drugs: Like many progressives, I suspect that there is something deeply wrong with mass incarceration and the drug war. Most of the people I know seem to agree with all the constituent arguments against the way criminal justice is practiced in this country. We’re deeply embarrassed by the number and racial composition of prisoners here. And yet the system remains, and both engaged citizens and smart, caring politicians seem powerless to change it. Clearly, there’s some piece of this puzzle we just don’t understand.
  7. Meat Eating: I’m pretty sure I shouldn’t be eating meat, and certainly not meat produced under the inhumane conditions in US factory farms. Yet I seem to be completely akratic on this front; I believe I shouldn’t, but I do it anyway. I’m certainly wrong, one way or another, because my actions and beliefs are in contradiction. This is more of an anxiety over that inconsistency than a likely-wrong belief, though, so maybe it’s not completely fitting with the principle of the question.

What are you most likely to be wrong about?

This is What Epistocracy Looks Like

Most academics know some version of the critique of elite rule, administrative power, and centralized regulation by experts. Hannah Arendt called bureaucracy the “rule of No Man;” Michel Foucault described the overlap of legislative power, knowledge-production, and the apparatus of discipline and control; Iris Marion Young defended simple street activism against the demand that political participation meet elaborate standards of reasonableness in the name of pluralism, and in so doing laid the groundwork for current theories of agonistic democracy like Chantal Mouffe’s; Roberto Unger suggested that we ought to embrace democratic destabilization, experimentalism, and a radical institutional creativity belied by the supposed necessity of expert judgments; Anthony Giddens and Ulrich Beck have diagnosed the relationship between risk-aversion and governmental responsibility for emergency management as a modern form of legitimacy that both generates hazards and takes responsibility for managing them. Other criticisms came from conservative circles: Friedrich Hayek, Michael Oakeshott, and even Antonin Scalia.

Philip Tetlock’s work on expertise is very illuminating here: in some fields, the avowed experts’ predictions actually are no better (and sometimes worse!) than a coin flip. That’s why David Estlund criticized the epistocratic tendency to ignore the systematic biases that underwrite invidious comparisons between evaluations of competence and incompetence in his book Democratic Authority.

And yet, some matters of expertise are unavoidable. Estlund called these “primary bads”: war, famine, economic collapse, political collapse, epidemic, and genocide. In some cases, increased participation decreases the risk of such catastrophes: literacy and universal suffrage decrease the risk of famine, for instance. “No famine has ever taken place in the history of the world in a functioning democracy,” Amartya Sen wrote in Development as Freedom, because democratic governments “have to win elections and face public criticism, and have strong incentive to undertake measures to avert famines and other catastrophes.” Yet democracies still go to war and face economic crises (if not yet collapse), and the temptation is always there to imagine a system that will decrease the likelihood of such events.

The standard line is that democracies must keep experts “on tap, but not on top.” But consider a common example that Steven Maloney and I articulated in our paper “Foresight, Epistemic Reliability and the Systematic Underestimation of Risk:”

all citizens are affected by the Federal Reserve funds target rate (the rate that banks charge each other for overnight loans to cover capital reserve requirements) as it ultimately determines the availability of credit and thus the balance between economic growth, inflation, and unemployment. Most experts agree that the range of viable options for this rate is limited. Further, they agree that direct or representative democratic control of the rate would encourage non-optimal outcomes, including price bubbles that could lead to economic collapse. As a result, decisions on the target rate, which affect every citizen, are nonetheless denied to the public. Some citizens thus argue that the Federal Reserve ought then to be abolished as illegitimate. [These] citizens charge that members of the Federal Reserve Board, who are drawn from the management of a few investment banks, allow systematic biases for their home institutions to color their decisions… [I]t makes (1) findings of fact (2) in an exclusive and closed manner that (3) have coercive effects on citizens because (4) democratic decision-making would lead to cataclysmic primary bads….

Now, it is amusing to point to the financial crisis of 2008 and argue that the Federal Reserve failed to prevent economic collapse. But though the crisis was and remains severe, the Federal Reserve actually played a major and undemocratic role in preventing a true collapse. David Runciman’s recent piece in the London Review of Books makes a similar point:

When democracies are in serious trouble, elections always come at the wrong time. Maynard Keynes, the posthumous guru of the current crisis, made this point in the aftermath of the First World War, and again in the early 1930s. When something really momentous is at stake, the last thing you need is democratic politicians trawling for votes. Keynes readily accepted that democracies were far better at renewing themselves than the supposedly more efficient dictatorships. He just wished they wouldn’t try to do it when they were struggling to stop the world descending into chaos.

Matthew Yglesias discussed the implications of the Federal Reserve for Progressives early last year:

No public institution can or should be truly independent of the political process. The Supreme Court is an independent branch of government, and rightly so. But its decisions are subject to hot political debate, and the nomination of judges to sit on the high court is considered an important presidential power. This, too, is as it should be. The assumption that monetary policy is too important to hold central bankers accountable through the political process should have come to an end along with the illusory great moderation.

Perhaps he is right; but perhaps politicizing the Fed will have the same de-legitimizing impact that politicizing the Court has had, which could be dangerous for an institution whose only power is its capacity to make credible counter-cyclical commitments.

Too often, we have the tendency to reduce these questions into a battle between “democrats” and “elitists.” But there are few serious radical democrats who advocate the dissolution of the administrative state, let alone the liberal rights that restrict majoritarian rule.

Objections to elite status and epistemic privilege more often reflect a kind of partisanship about which experts to respect, as a proxy for in-group solidarity. It is difficult not to reduce matters of scientific expertise and superstition to in-group/out-group tribalism: after all, as much as I respect the opposition to intelligent design in public schooling, there is little reason to believe the dispute has important implications for biology curricula, and intelligent design has massive public support in many school districts. A pure democracy would allow the people to set their own standards.

We all fear some out-group, whether it be the white supremacists’ fear of non-white incursions, or the secularists’ fear of theological domination. Many people without a college degree resent the wage premium and social status associated with it; many people with a college degree resent the democratic power of the uneducated and the pandering they receive by politicians and media. Regardless of education, there is the sense of irreconcilable differences. Many people believe that we do not inhabit the same world, even as our disputes over how to constitute our shared world erupt over a very narrow band of possible policies.

Who among us is not an elitist or a vanguardist in some sense? We all think we’re right and that we could run things better than the status quo. Even my fellow fallibilists think we’ve got a recipe for institutional humility that would enhance outcomes!

Deciding Whether or Not to Tell a Story

When I was an undergraduate, I took a class called “Truth and Beauty” with the poet Ann Lauterbach. It was basically a class on reading and writing essays, but I took it because I was a philosophy major and I thought it would be about aesthetics, i.e. about whether judgments about beauty can be true or false. Every week we’d read a collection of essays and we would turn in a response essay of our own. We also met with Ann regularly to discuss our work, which was great because she had the kind of presence that made one-on-one encounters particularly powerful and instructive, like academic therapy.

During one of our sessions, I remember bemoaning the fact that my essays were all so analytical. I had read some of her poetry and I yearned for the kind of imaginative approach to language that I thought she had. (I really had no idea about poetry.) I can’t remember her exact response, but it was something like this:

Everybody has their own way of thinking, their own voice. You shouldn’t try to change the way you think, but rather work on improving it.

At the time, I found that inspiring. Here was a brilliant poet giving me permission (nay, charging me with the duty!) to dig deeper into the habits of thought and writing that were most comfortable for me. It was liberating. I’ve since come to realize that my style of thinking is much less strictly analytical and much more about exploring questions and the various possible ways of answering them. (Those links point to a couple of posts addressing different approaches to power and freedom.) But I’m glad I took Ann’s advice, because look where it got me: I got a PhD in philosophy, and I get to teach my favorite texts and questions for a living!

Now, here’s the question: why did I tell you that story?

Notice how my story works: it puts some pretty banal clichés into the mouth of a famous poet, but all she said was “be yourself.” I start by establishing her authority and gravitas, I introduce a problem via a distinction with an implicit hierarchy (analytic versus imaginative), and then the authority figure in my story teaches me a lesson that reverses the hierarchy: it’s okay to be analytic and nerdy! Then I pretend like this simple lesson is what got me to where I am today. Yay poets! Yay philosophy nerds!

But wait! Maybe my story is deceptive. Maybe, as Tyler Cowen said in his recent TEDx talk, stories have a tendency to paper over the messiness of real life:

Narratives tend to be too simple. The point of a narrative is to strip [detail] away, not just into 18 minutes, but most narratives you could present in a sentence or two. So when you strip away detail, you tend to tell stories in terms of good vs. evil, whether it’s a story about your own life or a story about politics. Now, some things actually are good vs. evil. We all know this, right? But I think, as a general rule, we’re too inclined to tell the good vs. evil story. As a simple rule of thumb, just imagine every time you’re telling a good vs. evil story, you’re basically lowering your IQ by ten points or more. If you just adopt that as a kind of inner mental habit, it’s, in my view, one way to get a lot smarter pretty quickly. You don’t have to read any books. Just imagine yourself pressing a button every time you tell the good vs. evil story, and by pressing that button you’re lowering your IQ by ten points or more.

Oh shit! Did I just make myself and my readers dumber? Did my little “A Man Learns a Lesson”-style story just get us all stoned on narrative inanities?

Cowen goes on to qualify this:

we use stories to make sense of what we’ve done, to give meaning to our lives, to establish connections with other people. None of this will go away, should go away, or can go away.

But, he explains, we should worry about stories more, and embrace the messiness of life more. But is he right? After all, Lauterbach told me I shouldn’t try to change the way I think, but rather get really good at the modes of thinking that I already prefer. Surely the same thing is true for people who love stories and think primarily in terms of stories?

So, here’s how I think about this question: Should we listen to Cowen or to Lauterbach? Why?

It seems to me that we should be suspicious of stories if we think that letting reality be messy is good for thinking clearly. The problem there is that we’re only likely to think that if we’ve had good experiences with other forms of analysis: plotting data or formalizing syllogisms. In that case, we’ll hear Cowen’s comments like I heard Lauterbach’s: “Be yourself! Those story-tellers are phonies, anyway.”

On the other hand, we might also want to dig deeper into stories and develop our critical thinking skills from within the narrative form: when is a story too neat? When is a narrator’s omniscience really pandering to the reader? What are the other stories we can tell about authors, about cultures, and about narrative manipulation that might help us to avoid the traps that narratives set for us? If we’ve already got a pretty good sense of the structure of stories, the kinds of things that narratives do and can do, we might prefer to dig deeper and hone this method. But still, the message is Lauterbach’s: “Don’t kick the poets out of the city! Poets can be wise, too!”

In this post, Lauterbach is going to stay the hero. But Cowen is a smart guy, and he tries to inoculate himself against this kind of criticism in the section on cognitive biases. Basically, he reminds us that people tend to misuse their knowledge of psychology through a kind of motivated reasoning that reproduces their earlier, ignorant biases but now with supposed expert certification. In this, as in most things “a little learning is a dangerous thing.” (But isn’t that what TED is for?) Then he reminds us of the epistemic portfolio theory, which holds that we’ll tend to balance our subjects of agnosticism, unpopular beliefs, and dogmatism in a rough equilibrium, so we ought to beware of the ways we abjure narratives in only some parts of our lives. (This is pretty much like ending his whole talk with the prankster’s “NOT!” Silly rationalists: truth-tracking and reason-responsiveness are myths we tell to children to hide the messy emotional facts of the matter.)

The passage in his talk where he typologizes the various narratives we’ll tell about the talk is also pretty funny: “I used to think too much in terms of stories, but then I heard Tyler Cowen, and now I think less in terms of stories!” Yay economists! They’re smart and have all the bases covered. Hey wait: do you think that’s why he told us that story?