Cultural Cognition is Not a Bias

Some recent posts by Dan Kahan on the subject of “cultural cognition” deserve attention:

(Cultural cognition refers to the tendency of individuals to conform their beliefs about disputed matters of fact (e.g., whether global warming is a serious threat; whether the death penalty deters murder; whether gun control makes society more safe or less) to values that define their cultural identities.)

There’s no remotely plausible account of human rationality—of our ability to accumulate genuine knowledge about how the world works—that doesn’t treat as central individuals’ amazing capacity to reliably identify and put themselves in intimate contact with others who can transmit to them what is known collectively as a result of science.

Indeed, as I said at the outset, it is not correct even to describe cultural cognition as a heuristic. A heuristic is a mental “shortcut”—an alternative to the use of a more effortful, and more intricate mental operation that might well exceed the time and capacity of most people to exercise in most circumstances.

But there is no substitute for relying on the authority of those who know what they are talking about as a means of building and transmitting collective knowledge. Cultural cognition is no shortcut; it is an integral component in the machinery of human rationality.

Unsurprisingly, the faculties that we use in exercising this feature of our rationality can be compromised by influences that undermine its reliability. One of those influences is the binding of antagonistic cultural meanings to risk and other policy-relevant facts. But it makes about as much sense to treat the disorienting impact of antagonistic meanings as evidence that cultural cognition is a bias as it does to describe the toxicity of lead paint as evidence that human intelligence is a “bias.”

Look: people aren’t stupid. They know they can’t resolve difficult empirical issues (on climate change, on HPV-vaccine risks, on nuclear power, on gun control, etc.) on their own, so they do the smart thing: they seek out the views of experts whom they trust to help them figure out what the evidence is. But the experts they are most likely to trust, not surprisingly, are the ones who share their values.

What makes me feel bleak about the prospects of reason isn’t anything we find in our studies; it is how often risk communicators fail to recruit culturally diverse messengers when they are trying to communicate sound science.

The number of scientific insights that make our lives better and that don’t culturally polarize us is orders of magnitude greater than the ones that do. There’s not a “culture war” over going to doctors when we are sick and following their advice to take antibiotics when they figure out we have infections. Individualists aren’t throttling egalitarians over whether it makes sense to pasteurize milk or whether high-voltage power lines are causing children to die of leukemia.

People (the vast majority of them) form the right beliefs on these and countless issues, moreover, not because they “understand the science” involved but because they are enmeshed in networks of trust and authority that certify whom to believe about what.

For sure, people with different cultural identities don’t rely on the same certification networks. But in the vast run of cases, those distinct cultural certifiers do converge on the best available information. Cultural communities that didn’t possess mechanisms for enabling their members to recognize the best information—ones that consistently made them distrust those who do know something about how the world works and trust those who don’t—just wouldn’t last very long: their adherents would end up dead.

Rational democratic deliberation about policy-relevant science, then, doesn’t require that people become experts on risk. It requires only that our society take the steps necessary to protect its science communication environment from a distinctive pathology that prevents ordinary citizens from using their (ordinarily) reliable ability to discern what it is that experts know.

Empathy, Cognition, and In-Group Preferences

The speculative post on empathy generated a great set of comments over on Facebook, but I think the discussion was weighed down by the framing from the original article regarding “Extreme Female Brain.” Those (like Cordelia Fine) who have rejected the account of autism-spectrum disorders as “Extreme Male Brain” have largely done so because of the absence of evidence for gendered brains when subjects are properly primed with statements designed to downplay gender differences. Studies that do show gendered responses are better read as evidence of the larger gender biases and stereotype threat in our society at large. So let us drop the gendered speculation except where it’s unavoidable. (Male “mind-reading” from cropped photographs of eyes is error-prone in experimental settings, but this, too, might be a case of stereotype threat.)

The science of empathy is quite advanced, and it gives us a basic picture of what an empathic response involves and where its various components occur. (I’ve tried to work on the problem of cross-race implicit bias before, but I’ve learned a lot more of the science since then.) There is research suggesting that something like “emotional contagion” is a precursor of empathy. Contagion is importantly different from full-blown empathy in that it is not reflective or subject to ethical or contextual regulation. One of the main questions in the neuroscience of empathy is whether full-blown empathy takes a top-down or a bottom-up approach. Either we start with emotional contagion and move to a rational consideration of what we’re feeling and how we should deal with it, or we start with a contextual appraisal and executively directed attention that leads to an empathic response. Either we “feel for the other” first and then decide (with limited success) whether we ought to do so and how we ought to deal with it, or we make a contextual decision that someone else’s feelings are morally relevant and then allow ourselves to share in their experiences.

For normal folks, this is likely a little of both: a little bottom-up contagion, a little top-down regulation and contextual judgment. But we are not all neurotypical, and in the world of neuropluralism there may be multiple modalities of empathic response. Consider the hypothetical neuropluralism of the overcaring brain, the one hypothesized to lead to eating disorders. Someone who was hyper-empathetic (in the sense of having uninhibited emotional contagion) might find themselves unable to avoid the contempt of their peers: they can’t (easily) engage in the kind of meta-cognitive reappraisal that allows them to deny the relevance of the other person’s contempt. Someone with a “healthy” brain might quickly tamp down the emotional contagion that shares in the contemptuous other’s disgust for them, or even transform that disgust into pity or understanding that the contemptuous other is really projecting his own body anxieties. The hyper-empath does not manage that, to his detriment.

What I was interested in was the idea that someone who has this particular defect might end up extending their empathic response to non-conspecifics, like non-human animals. But since I also worry quite a bit about other kinds of in-group preference, it occurs to me that the hyper-empath might be unable to deny the relevance of distant others or other races. Could hyper-empaths avoid implicit bias problems on cross-race facial identifications? Would they have the same attenuated empathic response to the suffering of non-proximate others as neurotypicals do?

In both cases, there’s clearly a troubling role for contextual regulation and meta-cognitive appraisal: how else could we explain that even our empathy is racist? Rationality excludes the slave from the master’s moral community. Executive judgment reminds us that animals are not morally relevant and prevents us from feeling the importance of their suffering. The cognitive limits of “full-blown empathy” prevent us from caring for the suffering of strangers.

In this sense, the hyper-empaths’ failure at meta-cognitive regulation of emotional contagion might lead them to the same cosmopolitan empathy that those with ordinary empathic response achieve only through travel, working with animals, or after careful thinking about the tenets of utilitarianism.

Bam! Superpowers.

The Fallacy Fallacy [sic] of Mood Affiliation (Workplace Domination Part Two)

In his initial response to the Crooked Timber bloggers, Cowen also suggests that he doesn’t like the “mood affiliation” of the CT bloggers:

I am not comfortable with the mood affiliation of the piece.  How about a simple mention of the massive magnitude of employee theft in the United States, perhaps in the context of a boss wishing to search an employee?

Cowen’s “fallacy of mood affiliation” is an interesting and useful attempt to describe a kind of sophisticated motivated skepticism that occurs when we evaluate evidence that counters our basically optimistic or pessimistic views of the world. When he first introduced it, Cowen described mood affiliations that caused people to misrecognize particular evidence regarding innovations or environmental effects because it failed to confirm their preferences for optimistic accounts of future growth and environmental improvement.

But to those clear examples of the optimism bias, he added two other examples that are only indirectly related:

3. People who see a political war against the interests of the poor and thus who are reluctant to present or digest analyses which blame some of the problems of the poor on…the poor themselves. (Try bringing up “predatory borrowing” in any discussion of “predatory lending” and see what happens.) There’s simply an urgent feeling that any negative or pessimistic or undeserving view of the poor needs to be countered.

4. People who see raising or lowering the relative status of Republicans (or some other group) as the main purpose of analysis, and thus who judge the dispassionate analysis of others, or for that matter the partisan analysis of others, by this standard. There’s simply an urgent feeling that any positive or optimistic or deserving view of the Republicans needs to be countered.

#4 is also clearly a bias where in-group solidarity blinds us to evidence, and Cowen has written well about this in the past. It is not, however, an obvious “mood affiliation” except by analogy, and it serves a pragmatic purpose: you can only call your friends out for being biased so often before they stop being your friends.

#3, though, is neither a mood affiliation nor an optimism bias. We might call it an “unjust-world fallacy,” if we really need a name for it. However, I’d suggest we might want to avoid prejudicing discussions of what makes people poor with attributions of fallacies and cognitive biases until we’ve evaluated the evidence.

Since “what makes people poor” is a hotly debated academic question, there’s a lot of evidence, and it pushes in multiple directions. (My own money is on some version of Buddy Karelis’s book, The Persistence of Poverty (pdf), though there’s plenty of room for poverty traps and marginal tax rate arguments.) People affiliate around these positions in many of the same ways that they affiliate around political parties. But there’s a serious dispute in the literature and the question really, really matters, so let’s not glibly reduce our opponents to fallacy-mongers here.

This is relevant to blogging about the workplace only because, by analogy, we’re supposed to believe that employees might be partly to blame for their domination in the same way that poor people are partly to blame for their poverty. But note, there are particular actions the poor engage in that make them poor: failing to finish high school, committing crimes, and getting pregnant out of wedlock are individual actions that primarily harm the individual who enacts them by reducing lifetime wages. In the workplace example, there just aren’t particular actions that workers engage in that justify their being searched or filmed while going to the bathroom (except maybe being unwilling to quit, fight, or unionize). Invading my privacy because somebody else has been stealing doesn’t really fit the kind of personal responsibility motif that Cowen was pushing in the original discussion of poverty. Plus, employee theft costs our economy about $15 billion, which is 0.1% of GDP, and that’s including serious embezzlement in addition to retail “shrink,” so it’s not really so big a deal as Cowen makes it out to be.
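As a rough back-of-the-envelope check on that figure (assuming, as was roughly true when this was written, a US GDP of about $15 trillion):

$$\frac{\$15\ \text{billion}}{\$15\ \text{trillion}} = \frac{15 \times 10^{9}}{15 \times 10^{12}} = 10^{-3} = 0.1\%$$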

Mood affiliation concerns don’t appear to be relevant to workplace domination issues, and they threaten to resolve into ad hominem and fallacy fallacy [sic] issues, so let’s drop them and look at the data and the arguments.

Crazyism about Ethics

Deciding Whether or Not to Tell a Story

When I was an undergraduate, I took a class called “Truth and Beauty” with the poet Ann Lauterbach. It was basically a class on reading and writing essays, but I took it because I was a philosophy major and I thought it would be about aesthetics, i.e. about whether judgments about beauty can be true or false. Every week we’d read a collection of essays and we would turn in a response essay of our own. We also met with Ann regularly to discuss our work, which was great because she had the kind of presence that made one-on-one encounters particularly powerful and instructive, like academic therapy.

During one of our sessions, I remember bemoaning the fact that my essays were all so analytical. I had read some of her poetry and I yearned for the kind of imaginative approach to language that I thought she had. (I really had no idea about poetry.) I can’t remember her exact response, but it was something like this:

Everybody has their own way of thinking, their own voice. You shouldn’t try to change the way you think, but rather work on improving it.

At the time, I found that inspiring. Here was a brilliant poet giving me permission (nay, charging me with the duty!) to dig deeper into the habits of thought and writing that were most comfortable for me. It was liberating. I’ve since come to realize that my style of thinking is much less strictly analytical and much more about exploring questions and the various possible ways of answering them. (Those links point to a couple of posts addressing different approaches to power and freedom.) But I’m glad I took Ann’s advice, because look where it got me: I got a PhD in philosophy, and I get to teach my favorite texts and questions for a living!

Now, here’s the question: why did I tell you that story?

Notice how my story works: it puts some pretty banal clichés into the mouth of a famous poet, but all she said was “be yourself.” I start by establishing her authority and gravitas, I introduce a problem via a distinction with an implicit hierarchy (analytic versus imaginative), and then the authority figure in my story teaches me a lesson that reverses the hierarchy: it’s okay to be analytic and nerdy! Then I pretend like this simple lesson is what got me to where I am today. Yay poets! Yay philosophy nerds!

But wait! Maybe my story is deceptive. Maybe, as Tyler Cowen said in his recent TEDx talk, stories have a tendency to paper over the messiness of real life:

Narratives tend to be too simple. The point of a narrative is to strip [detail] away, not just into 18 minutes, but most narratives you could present in a sentence or two. So when you strip away detail, you tend to tell stories in terms of good vs. evil, whether it’s a story about your own life or a story about politics. Now, some things actually are good vs. evil. We all know this, right? But I think, as a general rule, we’re too inclined to tell the good vs. evil story. As a simple rule of thumb, just imagine every time you’re telling a good vs. evil story, you’re basically lowering your IQ by ten points or more. If you just adopt that as a kind of inner mental habit, it’s, in my view, one way to get a lot smarter pretty quickly. You don’t have to read any books. Just imagine yourself pressing a button every time you tell the good vs. evil story, and by pressing that button you’re lowering your IQ by ten points or more.

Oh shit! Did I just make myself and my readers dumber? Did my little “A Man Learns a Lesson”-style story just get us all stoned on narrative inanities?

Cowen goes on to qualify this:

we use stories to make sense of what we’ve done, to give meaning to our lives, to establish connections with other people. None of this will go away, should go away, or can go away.

But, he explains, we should worry about stories more, and embrace the messiness of life more. But I wonder whether he’s right. After all, Lauterbach told me I shouldn’t try to change the way I think, but rather get really good at the modes of thinking that I already prefer. Surely the same thing is true for people who love stories and think primarily in terms of stories?

So, here’s how I think about this question: Should we listen to Cowen or to Lauterbach? Why?

It seems to me that we should be suspicious of stories if we think that letting reality be messy is good for thinking clearly. The problem there is that we’re only likely to think that if we’ve had good experiences with other forms of analysis: plotting data or formalizing syllogisms. In that case, we’ll hear Cowen’s comments like I heard Lauterbach’s: “Be yourself! Those story-tellers are phonies, anyway.”

On the other hand, we might also want to dig deeper into stories and develop our critical thinking skills from within the narrative form: when is a story too neat? When is a narrator’s omniscience really pandering to the reader? What are the other stories we can tell about authors, about cultures, and about narrative manipulation that might help us to avoid the traps that narratives set for us? If we’ve already got a pretty good sense of the structure of stories, the kinds of things that narratives do and can do, we might prefer to dig deeper and hone this method. But still, the message is Lauterbach’s: “Don’t kick the poets out of the city! Poets can be wise, too!”

In this post, Lauterbach is going to stay the hero. But Cowen is a smart guy, and he tries to inoculate himself against this kind of criticism in the section on cognitive biases. Basically, he reminds us that people tend to misuse their knowledge of psychology through a kind of motivated reasoning that reproduces their earlier, ignorant biases but now with supposed expert certification. In this, as in most things “a little learning is a dangerous thing.” (But isn’t that what TED is for?) Then he reminds us of the epistemic portfolio theory, which holds that we’ll tend to balance our subjects of agnosticism, unpopular beliefs, and dogmatism in a rough equilibrium, so we ought to beware of the ways we abjure narratives in only some parts of our lives. (This is pretty much like ending his whole talk with the prankster’s “NOT!” Silly rationalists: truth-tracking and reason-responsiveness are myths we tell to children to hide the messy emotional facts of the matter.)

The passage in his talk where he typologizes the various narratives we’ll tell about the talk is also pretty funny: “I used to think too much in terms of stories, but then I heard Tyler Cowen, and now I think less in terms of stories!” Yay economists! They’re smart and have all the bases covered. Hey wait: do you think that’s why he told us that story?