Status Emotions and Punishment

I haven’t written much about status emotions recently, but I came across one of my favorite Facebook memes and remembered again how central they seem. I don’t endorse the misogyny here, but it perfectly describes the way that the fundamental attribution bias transforms resentment into contempt, and thus leads, in my view, to both epistemic and moral error:

Funny Confession Ecard: Once you hate someone, everything they do is offensive. 'Look at this bitch eating those crackers like she owns the place.'

I’ve also been thinking a bit about the role of status emotions in our treatment of criminals in the US. It’s important to recognize when your differing judgments are leading you away from the common sense moral community, and punishment is one place that this seems to be occurring for me. Put simply, I just don’t see any good reason to disdain or show contempt for convicted criminals. This follows quite self-evidently from my claim that status emotions are immoral and unreliable. But this puts me outside of the mainstream society’s judgments about criminals, and I wonder if I’ve missed something, am wired differently, or am simply altering my intuitions in order to bite the bullet on my idiosyncratic account of the moral emotions.

Recall that Michelle Mason just assumes that some people are better than others in her account of contempt as a reactive attitude. But the genius of Strawson’s account of the reactive attitudes was that it allowed us to sidestep tricky metaphysical questions about agency and determinism. Mason does the same thing, sidestepping tricky metaphysical questions about personal identity and the persistence of character traits over time and context. Yet she doesn’t thematize the question of persistence or identity in the same way that Strawson thematized determinism and blame.

Blame and punishment seem appropriate, but what I notice is that the prisoners I teach are thoughtful human beings who are interested in the texts we’re reading. They are polite, respectful, and in my judgment genuine. Almost every day that I come to class, someone thanks me for the lesson. At the same time, they have criminal histories. Some were simply caught up in the war on drugs, but some of them allude to having done truly bad things; this is not just a matter of a self-selected group of victimless criminals. And yet, that doesn’t seem to matter to me. It doesn’t seem like it should matter: to my mind, they are due the same esteem as anyone else.

Criminals could be the perfect test for status emotions, if you set aside all your concerns about the US’s problems with mass incarceration, innocence and plea bargaining, the racialization of justice, and the war on drugs. Of course, we shouldn’t set those things aside when we’re talking about policy, but at a certain point you have to admit that some people really are guilty. If the claim is simply that they wouldn’t be guilty in a radically different society, we’re back to begging the question in Strawson’s original use of the reactive attitudes: in that case, determinism actually does matter, and these crimes were [over]determined and thus deserving of neither blame nor contempt.

I think we can preserve blame while jettisoning contempt: we resent the criminal for the harm they do, and don’t worry about determinism. We can’t disdain the criminal without assuming something like: “You are the sort of person who would have done that in a different context. I am the sort of person who would not have done that in any of the proximate possible worlds.” I doubt such assumptions are warranted. Perhaps I am wrong. But the policy debate that takes all those political-economic-racial questions seriously would otherwise shift to seeking better means of distinguishing the truly innocent, those whose moral and social status has been wrongly undermined, from the truly guilty, those whose moral and social status is rightly low. My claim is that there is no fact of the matter about trans-modal character, and that this is morally relevant to status.

Contempt depends on the fiction of the doer behind the deed; it disdains the sinner in addition to hating the sin. If someone admits to having committed a bank robbery or a murder, they are still: (a) a human being, (b) an autonomous agent, (c) a member of my moral community, (d) a capable knower, and (e) subject to the same moral luck as all contingent creatures. Thus, they are my moral equal and ought to be my social equal as well: an intuition that reports otherwise is simply in error, no matter how many people share it.

Here’s where it’s helpful to be a contrite fallibilist, though: does anyone who has the status hierarchy intuition also have a reflective defense of it? Macalester Bell doesn’t. Mason doesn’t. But maybe somebody does.

Snark, Polemics, and Contrite Fallibilism

Most people who know me in person would at least consider using the term “snarky” in their description of me, which is why John Barnes’ polemic against “snark” troubled me so:

It’s a currently fashionable powerful rhetorical weapon that allows the uninvolved and the never-to-be-involved to discredit people who do, or attempt – anything at all. Not just those who compete or create or dream or make or struggle in the larger world, but even those who merely try to understand or happen to feel some appreciation.

Ouch! But wait… is this what we mean when we say that we are snarky? I always thought of “snark” as a predilection for the “snide remark” that “bites and scratches” like Lewis Carroll’s imaginary beasts. Yet for Barnes this could just as easily be simple “sarcasm,” which he reserves for frequent good use in his polemic against “snark” itself! In fact, he uses “snark” to name the one brand of negativity that is definitionally incapable of good use, among all the other forms of negativity that are not (and what a wonderful list it is!):

By snark I don’t mean just any old negative attitude. Negativity comes in many flavors, some of them wonderful at the right time in the right place, others at least occasionally worthy as a dash of flavoring in a complex attitude: anger, bitterness, bitchiness, bloody-mindedness, brutal honesty, calumny, contumely, cynicism, despair, depression, ennui, envy, fucking bloody-mindedness, ferocity, gibes, gracelessness, hatred, hatefulness, harassment, insult, intemperance, ingratitude, incredulity, irony, and that’s all the farther I want to go until we get down far enough into the alphabet to find snark (it’s somewhere between skepticism and snobbery). Snark is the one that is truly good for absolutely nothing and should be considered grounds for putting people on the list, in preparation for crossing them off.

After a short detour into The Art of Rhetoric, Barnes finally concludes that what he so detests can be defined as ignorant knowingness (“somewhere between skepticism and snobbery”):

Snark is a dishonest reduction expressed with knowingness.

This thing he describes is indeed terrible: I’ve often written of the epistemic and social problems with contempt and the refusal to admit one’s own fallibility, of the effort to reduce the irreducible complexity of the world to a single variable, and of the dangers of tricking oneself into believing one’s own hype. But this is not snark!

The problem with Barnes’ definition of snark is that it defines the failing in terms of the honesty and accuracy of the interlocutor. Thus, it usually only applies to the Other: we are cynical or bloody-minded or incredulous. It is only they who are snarky. (Barnes admits that he has erred in the past, but he repents. I recall a similar scene from Augustine’s Confessions involving the theft of some pears.)

As a definition, Barnes’ offers us all we need to know that the thing defined is wholly without value. It simplifies, it does so inauthentically, and then it pretends to knowledge but is in fact ignorant! How detestable! Yet “snark” in the traditional sense does not mean a refusal to listen or learn from those who may or do know more. Barnes has redefined the word to mean that. I think ignorance is bad, too, but why not decry ignorant knowingness and leave snark, which has another meaning that was working perfectly well, out of it?

I’m not trying to be prescriptive about the meaning of the word, but when I find someone claiming a meaning for a word I was using with seemingly good understanding among several different communities, I feel like they’re being prescriptive with me.

A Metafilter comment called forth the very best possible response, a few lines from Foucault’s interview with Paul Rabinow on the problem with polemics:

Questions and answers depend on a game—a game that is at once pleasant and difficult—in which each of the two partners takes pains to use only the rights given him by the other and by the accepted form of dialogue.

The polemicist, on the other hand, proceeds encased in privileges that he possesses in advance and will never agree to question. On principle, he possesses rights authorizing him to wage war and making that struggle a just undertaking; the person he confronts is not a partner in search for the truth but an adversary, an enemy who is wrong, who is harmful, and whose very existence constitutes a threat. For him, then, the game consists not of recognizing this person as a subject having the right to speak but of abolishing him as interlocutor, from any possible dialogue; and his final objective will be not to come as close as possible to a difficult truth but to bring about the triumph of the just cause he has been manifestly upholding from the beginning. The polemicist relies on a legitimacy that his adversary is by definition denied.

Here Barnes exercises the privilege of an author with many readers: to define the portmanteau “snark” as he would like. But we are given no possible response; if we prize snark but define it differently, then he has already said, “But that is not what I mean!”

Certainly we cannot deny Barnes his argument, insofar as it describes a real thing in the world. He’s done a wonderful job of describing a certain feeling that others evoke in us, the feeling that they’d rather be secure in their ignorance than take the time to consider us as equals. But though the thing he describes is bad, why call it snark? I can’t help feeling that it’s because it allows him to tar knowledgeable “snide remarks” with the brush of ignorant knowingness. Perhaps that’s not fair, but that’s how it feels.

In rhetoric there is a technique of using overly precise or nonstandard definitions as part of an overall equivocation, or of exploiting such a definition to troll others in a supposedly blameless way, for example:

“by ‘bloggers’ I mean stupid people, no relation to members of the Blogspot community.”

I cannot say for certain that Barnes is using this technique, but it does appear so. The polemic form almost always leads to this effort to ontologize one’s own view of the world, to exclude before inquiry, to define others as unworthy of inclusion. The real question, here, is whether we can ever finally complete the project of defining as worthless that part of the world that we would like to exclude, whether it is the part that includes our critics, our partisan enemies, those who practice our profession differently, or those whose tastes diverge from our own. These in-groups and out-groups depend upon the Other being the perpetrator of negativity they do not have a right to deploy, and so if we could finally show that their crimes justify their exclusion, our work would be complete. We would be safe. Justice will come when it is “just us.”

I hope you can see the irony.

I grew up snarky because I attended a fundamentalist Christian school, where appeals to authority and to expertise were used to justify falsehoods and injustice. My female classmates were treated as second-class students, their dress and comportment closely controlled, their futures circumscribed by their duties to the family. Evolution was denied because of its conflict with the inerrant Word of God. Political disagreements were reduced to the question of abortion and religiosity. In such an environment, “snark” is a tool for denying authority’s legitimacy. Without access to the truth, a child can only respond to the absurdities being preached by those supposedly in-the-know with something “between skepticism and snobbery.” I didn’t know better, I “knew” less: I knew what they said wasn’t true, but had barely an inkling what was.

Barnes would probably agree, but perhaps he would also say that adults should put away childish things. Now that we know, we too can preach, but from the perspective of truth. I’m not so sure: adults who take on the role of polemicist, of expert, are far too likely to fall into the temptations of inerrancy and arrogance. Our proper role is the skeptic’s, not the priest’s. Snobbery is the priest’s emotion; skepticism is all we have left. We should aim our snide remarks at our own authority as often as at that of others. Though there is room for warm “appreciative thinking” to temper the cold skepticism of “critical thinking,” we must always avoid the “worshipful thinking” that appreciation threatens to become.

We ought, with C.S. Peirce, to adopt a contrite fallibilism: “that we can never be absolutely sure of anything, nor can we with probability ascertain the exact value of any measure or general ratio.” And we ought to snark at those who forget it.

But hey! Maybe I’m wrong to prize “snark” in this way. If so, and I am lucky, perhaps Barnes (or another reader) will help me see my error.

The Middle Class is Losing the Race for Second Place

I think about inequality a lot. But I also think about the middle class a lot, which isn’t quite the same thing. Generally, my sympathies lie with the “least advantaged” or “subaltern,” but I also feel the pull of the American cultural commitment to the middle class.

There can be little doubt that we are seeing a dissolution of the middle class, and this often seems a tragedy. Indeed, my favorite financial guru, Elizabeth Warren, put it like this:

“A middle class where people are falling out and into poverty is a middle class that has less room to bring people up and out of poverty.”

And yet, data going back to 1970 indicates that more people are failing to remain in the middle class due to wealth than due to poverty:

the entire reason the middle class has “shrunk” is that more households today have incomes that put them above middle class. That’s right, the share of households with income that puts them in the middle class or higher was 76 percent in 1970 and 75 percent in 2010—two figures that are statistically indistinguishable. For that matter, I am not discovering fire here; Third Way made the same point in early 2007 (page 7).

As Third Way put it in 2007:

The bottom line is that the middle class is shrinking but not because the bottom is dropping out; it is because more people are better off.

Now, let’s be clear: the two middle quintiles of income will always be populated by 40% of the population, so in some sense there will always be a “middle.” But increasingly this group will not be a class.

Alan Krueger defines the middle class “as having a household income at least half of median income but no more than 1.5 times the median.” And the income statistics suggest that it is increasingly difficult to tread water this close to the median income: either you sink below it, or you rocket above it. But compared to 1970, more people are rocketing above it than sinking below it. (As Warren points out, this is largely a matter of women in the workforce: a couple with two incomes is too rich for the middle class, and couples and single folks with only one income are too poor for it.)
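To make Krueger’s cutoffs concrete, here is a minimal sketch in Python; the incomes are invented for illustration, not drawn from any dataset:

```python
# A minimal sketch of Krueger's middle-class band: at least 0.5x the
# median household income, at most 1.5x. The incomes below are invented.
from statistics import median

def classify(income: float, med: float) -> str:
    """Label a household relative to Krueger's middle-class band."""
    if income < 0.5 * med:
        return "below the middle class"
    elif income <= 1.5 * med:
        return "middle class"
    return "above the middle class"

incomes = [18_000, 32_000, 47_000, 51_000, 64_000, 89_000, 140_000]
med = median(incomes)  # 51,000 for this invented sample

for inc in incomes:
    print(f"{inc:>8,}: {classify(inc, med)}")
```

Note that the band is defined relative to the median, so households can leave it in either direction: by sinking below half the median or by rising past 1.5 times it.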

Many different kinds of inequality compete for our attention when we discuss the politics of fairness. For instance, as Tyler Cowen has pointed out, the difference between the top 1% and the rest of the top quintile is largely what has driven the growing inequality over the last thirty years:

the share of pre-tax income earned by the richest 1 percent of earners has increased from about 8 percent in 1974 to more than 18 percent in 2007. Furthermore, the richest 0.01 percent (the 15,000 or so richest families) had a share of less than 1 percent in 1974 but more than 6 percent of national income in 2007. As noted, those figures are from pre-tax income, so don’t look to the George W. Bush tax cuts to explain the pattern. Furthermore, these gains have been sustained and have evolved over many years, rather than coming in one or two small bursts between 1974 and today.

But this inequality is distinct from the inequality that has afflicted the bottom 50% of the income spectrum:

At the same time, wage growth for the median earner has slowed since 1973. But that slower wage growth has afflicted large numbers of Americans, and it is conceptually distinct from the higher relative share of top income earners. For instance, if you take the 1979–2005 period, the average incomes of the bottom fifth of households increased only 6 percent while the incomes of the middle quintile rose by 21 percent. That’s a widening of the spread of incomes, but it’s not so drastic compared to the explosive gains at the very top.

And even this may conceal accounting effects and the inequality that emerges as the US population ages and some among us become better educated. It is at least plausible that there has been no meaningful growth in the inequality of the 99% at all:

Attacking the problem from a different angle, other economists are challenging whether there is much growth in inequality at all below the super-rich. For instance, real incomes are measured using a common price index, yet poorer people are more likely to shop at discount outlets like Wal-Mart, which have seen big price drops over the past twenty years. Once we take this behavior into account, it is unclear whether the real income gaps between the poor and middle class have been widening much at all. Robert J. Gordon, an economist from Northwestern University who is hardly known as a right-wing apologist, wrote in a recent paper that “there was no increase of inequality after 1993 in the bottom 99 percent of the population”, and that whatever overall change there was “can be entirely explained by the behavior of income in the top 1 percent.”

What we see, then, is a world where the rich have gotten much richer and the poor and median incomes have been relatively stagnant.

I agree with Cowen that the first trend is largely driven by financial engineering (“going short on volatility” and expecting a bailout when those bets don’t pay off) that appears to be negative-sum: the very richest get richer not through work but through arbitrage and winner-take-all approaches to the markets, and they do so by putting the brakes on the rest of the economy. In other words, the problem is financial capitalism, and it requires a response rooted specifically in managing the finance, insurance, and real estate sectors of the economy. (This is the so-called FIRE economy.)

But what about the second non-trend? The largely stable infra-99% inequalities somehow disguise the “dissolution of the middle class.” Or do they?

Game theorists like to joke about the “race for second place”: if the winner realizes she’s winning, she has to slow down, which creates a weird disequilibrating competition. In decision theory, this is called “satisficing,” and it is opposed to “maximizing.”
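Here is a toy sketch of the contrast, with invented option names and utilities: a maximizer surveys every option for the best payoff, while a satisficer (in Herbert Simon’s sense) stops at the first option that clears an aspiration threshold:

```python
# A toy contrast between maximizing and satisficing. The option names
# and utilities are invented for illustration.
options = [("option A", 55), ("option B", 72), ("option C", 90), ("option D", 74)]

def maximize(opts):
    """Survey every option and take the one with the highest utility."""
    return max(opts, key=lambda o: o[1])

def satisfice(opts, aspiration):
    """Stop at the first option that is 'good enough'."""
    for name, utility in opts:
        if utility >= aspiration:
            return (name, utility)
    return maximize(opts)  # nothing cleared the bar; fall back to the best

print(maximize(options))       # ('option C', 90)
print(satisfice(options, 70))  # ('option B', 72): good enough, search ends
```

The satisficer never even looks at option C; once the aspiration level is met, the search is over. That is the logic of the threshold earner discussed below.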

Tyler Cowen refers to people who satisfice on income as “threshold earners,” a group that I certainly belong to:

It is also the case that any society with a lot of “threshold earners” is likely to experience growing income inequality. A threshold earner is someone who seeks to earn a certain amount of money and no more. If wages go up, that person will respond by seeking less work or by working less hard or less often. That person simply wants to “get by” in terms of absolute earning power in order to experience other gains in the form of leisure—whether spending time with friends and family, walking in the woods and so on. Luck aside, that person’s income will never rise much above the threshold.

There is plenty of evidence that the richest quintile is full of people who have enough and are unwilling to work any harder to get more. Consider the exhortations to “chill” that are quite popular among the upper-middle class:

abandoning the quest for the ideal in favor of the good-enough. It means stepping off the aspirational treadmill, foregoing some material opportunities and accepting some material constraints in exchange for more time to spend on relationships and experiences.

These are folks who were competing for second place, and, having rocketed out of the middle class, have chosen to take more time off. This behavior certainly expands the income inequality between the richest 1% and the rest of the top quintile. But should it bother us?

Now, we all satisfice, i.e. chill, all the time: even the serial entrepreneur satisfices on non-monetary goods; she has a “good enough” marriage, a “good enough” exercise routine, etc. But we’re not proud of this in the same way that so many Americans are proud of being middle class. We don’t all brag about how we get away with giving our spouse “just enough” attention or how we’re “phoning it in until retirement.” Why not? Because we belong to a culture that doesn’t value income-as-such. But when you’re poor, money does buy a measure of happiness, so why do we take such joy in making less simply because we don’t need it? Resources in excess of need can always be given to those who need them more, either through voluntary charity or state-run cash transfers (i.e., taxing and spending). To my mind, the reality of satisficing is largely selfish.

I suspect these little exhortations to “chill” are not in fact designed to change anyone’s behavior. Rather they’re a kind of self-congratulation. “Look at me! I’m rich and I don’t work very hard!” Last time I checked, the word for self-congratulatory idle rich folks? “Parasites.” In that sense, “medium chill” is just another way of saying “I got mine.”

Congrats! You won the genetic, educational, and financial market lotteries! You bought low and sold high! To say that the middle class is “losing the race for second place” is to point out that, despite their efforts to “chill,” they just can’t help getting ahead. The problem is privilege, and structural inequality, and a changing global economy.

That’s why I tend to think that we ought not to worry so much about losing the race for second place through the enrichment of the middle class. We should focus on the poor, many of whom don’t even figure in national inequality numbers because they don’t live in this country: they belong to the “Bottom Billion” who live outside the US on less than $1 a day PPP.

Now, the strongest argument in favor of a domestic middle class (and a massively reduced upper class) is Elizabeth Anderson’s argument for “relational equality,” sometimes also called “democratic equality.” If we prioritize political participation over a more general account of capabilities, then we might worry less about the material well-being of the poorest and more about their capacity to participate as equals in the self-governance of our democracy. But I’ll save that for another day.

This is What Epistocracy Looks Like

Most academics know some version of the critique of elite rule, administrative power, and centralized regulation by experts. Hannah Arendt called bureaucracy the “rule of No Man”; Michel Foucault described the overlap of legislative power, knowledge-production, and the apparatus of discipline and control; Iris Marion Young defended simple street activism against the demand that political participation meet elaborate standards of reasonableness in the name of pluralism, and in so doing laid the groundwork for current theories of agonistic democracy like Chantal Mouffe’s; Roberto Unger suggested that we ought to embrace democratic destabilization, experimentalism, and a radical institutional creativity belied by the supposed necessity of expert judgments; Anthony Giddens and Ulrich Beck have diagnosed the relationship between risk-aversion and governmental responsibility for emergency management as a modern form of legitimacy that both generates hazards and takes responsibility for managing them. Other criticisms came from conservative circles: Friedrich Hayek, Michael Oakeshott, and even Antonin Scalia.

Philip Tetlock’s work on expertise is very illuminating here: in some fields, the avowed experts’ predictions are actually no better (and sometimes worse!) than a coin flip. That’s why, in his book Democratic Authority, David Estlund criticized the epistocratic tendency to ignore the systematic biases that underwrite invidious comparisons between evaluations of competence and incompetence.

And yet, some matters of expertise are unavoidable. Estlund called these “primary bads”: war, famine, economic collapse, political collapse, epidemic, and genocide. In some cases, increased participation decreases the risk of such catastrophes: literacy and universal suffrage decrease the risk of famine, for instance. “No famine has ever taken place in the history of the world in a functioning democracy,” Amartya Sen wrote in Development as Freedom, because democratic governments “have to win elections and face public criticism, and have strong incentive to undertake measures to avert famines and other catastrophes.” Yet democracies still go to war and face economic crises (if not yet collapse), and the temptation is always there to imagine a system that will decrease the likelihood of such events.

The standard line is that democracies must keep experts “on tap, but not on top.” But consider a common example that Steven Maloney and I articulated in our paper “Foresight, Epistemic Reliability and the Systematic Underestimation of Risk:”

all citizens are affected by the Federal Reserve funds target rate (the rate that banks charge each other for overnight loans to cover capital reserve requirements) as it ultimately determines the availability of credit and thus the balance between economic growth, inflation, and unemployment. Most experts agree that the range of viable options for this rate is limited. Further, they agree that direct or representative democratic control of the rate would encourage non-optimal outcomes, including price bubbles that could lead to economic collapse. As a result, decisions on the target rate, which affect every citizen, are nonetheless denied to the public. Some citizens thus argue that the Federal Reserve ought then to be abolished as illegitimate. [These] citizens charge that members of the Federal Reserve Board, who are drawn from the management of a few investment banks, allow systematic biases for their home institutions to color their decisions… [I]t makes (1) findings of fact (2) in an exclusive and closed manner that (3) have coercive effects on citizens because (4) democratic decision-making would lead to cataclysmic primary bads….

Now, it is amusing to point to the financial crisis of 2008 and argue that the Federal Reserve failed to prevent economic collapse. But though the crisis was and remains severe, the Federal Reserve actually played a major and undemocratic role in preventing a true collapse. David Runciman’s recent piece in the London Review of Books makes a similar point:

When democracies are in serious trouble, elections always come at the wrong time. Maynard Keynes, the posthumous guru of the current crisis, made this point in the aftermath of the First World War, and again in the early 1930s. When something really momentous is at stake, the last thing you need is democratic politicians trawling for votes. Keynes readily accepted that democracies were far better at renewing themselves than the supposedly more efficient dictatorships. He just wished they wouldn’t try to do it when they were struggling to stop the world descending into chaos.

Matthew Yglesias discussed the implications of the Federal Reserve for Progressives early last year:

No public institution can or should be truly independent of the political process. The Supreme Court is an independent branch of government, and rightly so. But its decisions are subject to hot political debate, and the nomination of judges to sit on the high court is considered an important presidential power. This, too, is as it should be. The assumption that monetary policy is too important to hold central bankers accountable through the political process should have come to an end along with the illusory great moderation.

Perhaps he is right; but perhaps politicizing the Fed will have the same de-legitimizing impact that politicizing the Court has had, which could be dangerous for an institution whose only power is its capacity to make credible counter-cyclical commitments.

Too often, we have the tendency to reduce these questions to a battle between “democrats” and “elitists.” But there are few serious radical democrats who advocate the dissolution of the administrative state, let alone of the liberal rights that restrict majoritarian rule.

Objections to elite status and epistemic privilege more often reflect a kind of partisanship about which experts to respect, as a proxy for in-group solidarity. It is difficult not to reduce matters of scientific expertise and superstition to in-group/out-group tribalism: after all, as much as I respect the opposition to intelligent design in public schooling, there is little reason to believe it has important implications for biology curricula, and it also has massive public support in many school districts. A pure democracy would allow the people to set their own standards.

We all fear some out-group, whether it be the white supremacists’ fear of non-white incursions, or the secularists’ fear of theological domination. Many people without a college degree resent the wage premium and social status associated with it; many people with a college degree resent the democratic power of the uneducated and the pandering they receive from politicians and the media. Regardless of education, there is a sense of irreconcilable differences. Many people believe that we do not inhabit the same world, even as our disputes over how to constitute our shared world erupt over a very narrow band of possible policies.

Who among us is not an elitist or a vanguardist in some sense? We all think we’re right and that we could run things better than the status quo. Even my fellow fallibilists think we’ve got a recipe for institutional humility that would enhance outcomes!

Democracy Means Asking the Right Questions

Whenever I talk to students about democracy, I like to emphasize that the original term for democratic rule was isonomy. Consider the account Otanes gives in Herodotus’ History:

“[T]he rule of the multitude [plêthos de archon] has… the loveliest name of all, equality [isonomiên]…. It determines offices by lot, and holds power accountable, and conducts all deliberating publicly. Therefore I give my opinion that we make an end of monarchy and exalt the multitude, for all things are possible for the majority.” (Herodotus 1982, 3.80)

Here Otanes identifies democracy with the strict equality accomplished through lots, rather than election by popular balloting. Though this might seem too random when compared to the collective choice of representatives, the appeal of this vision of isonomy is that the lottery supplies an equal opportunity for rulership to each citizen, guaranteeing equality well in excess of the American ideal of equality ‘before the law.’ But note that this equality is only possible when combined with two forms of accountability: the accounting by which an officer must give an accurate tally of expenditures during the administration or be held liable, and the figurative accountability by which the officer owes his fellow citizens his reasons for the decisions made in the public deliberations before, during, and after the decision is taken. Obviously, the use of lots only functioned insofar as citizenship was radically restricted, and Otanes’ justifications for the ‘rule of the multitude’ fell flat against Darius’ account of the tendency of all regimes to fall into monarchy, insofar as both oligarchies and democracies produce agonistic tensions from which one man eventually emerges the victor and is designated the most excellent and the wisest of the contenders. (Herodotus 1982, 3.82)
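To make the lottery mechanism concrete, here is a minimal sketch in Python; the names and offices are invented, and actual Athenian sortition was of course far more elaborate:

```python
# A minimal sketch of isonomy's lottery: offices are filled by drawing
# at random from the citizen roll, so every citizen has an equal chance
# of ruling. Names and offices are invented for illustration.
import random

citizens = ["Otanes", "Darius", "Megabyzus", "Gobryas", "Intaphrenes"]
offices = ["magistrate", "treasurer", "juror"]

def allot(citizens, offices, rng=random):
    """Assign each office to a distinct citizen drawn by lot."""
    chosen = rng.sample(citizens, k=len(offices))
    return dict(zip(offices, chosen))

term = allot(citizens, offices)
for office, holder in term.items():
    # Each officeholder must render accounts at the end of the term.
    print(f"{office}: {holder}")
```

The equality lies in the draw itself: no campaigning, no balloting, just an equal chance of ruling, backed by the accountability that follows when the term ends.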

The three norms of isonomy are mutually reinforcing: equal participation requires that the office-holder act with the understanding that she might be replaced by any other member of the community. She cannot abuse her office without being held to account at the end of her term. For the same reason she must regularly give reciprocally recognizable justifications for her actions, without which her decisions might be reversed by the next office-holder, or even punished when her office no longer protects her from prosecution. The ideal result of such a regime is a strong preference for deliberation, consensus, and mutual respect, alongside a cautious honesty and transparency with regard to potentially controversial decisions.

The reverse of isonomy is bureaucracy. Bureaucracies are more efficient, and are supposed to be more procedurally rational, but insofar as they are predicated on expert knowledge, they’re not intended to involve every citizen or to answer to them directly. According to Joseph Schumpeter’s popular formulation of the relationship, too much democratic control makes it difficult for the administrative state to efficiently pursue the public goods citizens ultimately want. But there are still ways to hold bureaucracies accountable.

During the Tufts Civic Studies Institute, we met with Luz Santana and Dan Rothstein of The Right Question Institute. Santana and Rothstein have a simple model for teaching people to generate, improve, and strategically deploy interrogatives. They mobilize a few easy heuristics, like the difference between open-ended and “closed” questions (which can be answered with a single word or short phrase), but they also emphasize the role of questioning in holding others accountable. Underwriting the whole project is the empowering assumption that those with power can nonetheless be required to answer questions about the reasons that went into a decision, the process by which it was reached, and the role for the individuals affected. These are subversive demands, as they undermine unreasonable, unfair, and exclusive decisions.

One of the ways that people experience power and weakness is through a tacit recognition of who has the right to ask questions, and who does not. By giving those who normally feel disempowered a little practice and confidence with questioning, Santana and Rothstein suggest that they can reverse some of those tacit assumptions in a democratic manner. It takes about twenty minutes to teach their method, but look at the results:

Dominique’s landlady wanted to sell the property Dominique was renting to a buyer that didn’t want to have a tenant. Without much introduction, the landlady knocked on Dominique’s door one evening and asked her to sign a paper. Unbeknownst to Dominique, the paper was an agreement that she would have to move out of her apartment within 30 days. [Dominique] had just participated in a short educational workshop at the adult literacy program she attends. At the workshop, Dominique had learned that she had the right to ask questions and more importantly, she had learned how to ask good questions about the decisions that affect her life. Dominique politely asked her landlady to leave the paper with her so that she could look it over before she decided if she was going to sign it. Dominique plowed through the language and realized that she would need help in deciphering the paper. Thinking about the RQI process, Dominique started coming up with her own questions. Then, she began calling the few people she knew in Philadelphia to try to get some answers. One of her friends gave her the number of a lawyer that worked for a renter’s assistance program. Dominique followed up and found out that her landlady didn’t actually have a renter’s license and therefore couldn’t take a legal route to evict her. The new owner would have to honor Dominique’s leasing agreement until the following year.

Santana and Rothstein describe this as an exercise in microdemocracy. Most people’s understanding and civic capacities are at their weakest in the formal voting and lobbying of representatives that political theorists tend to emphasize as the heart of political life. This is doubly true for youth, immigrants, and the unemployed. Yet these are the people who have the most interactions with the state’s coercive power, and a good strategic question can help to democratize the millions of interactions individuals have with the employees of state agencies.

Though Santana and Rothstein emphasize questioning as a practical skill and the source of all other rights-claims, I think there’s something deeper at work here: not just questioning, but interrogation. We generally reserve “interrogation” for custodial questioning by the police, where state officials set out to elicit a non-voluntary confession from an unwilling speaker, forcing them to divulge something that they did not want to reveal. Indeed, the Latin quaestio is also the word for torture, and a quaesitor or inquisitor would ‘put one to the question’ with implements whose primary purpose was to cause the excruciating pain that was once the only surety in the world of jurisprudence.

In the policing model, the interrogative relationship is a curious reversal of the norms of elite domination: the questioner’s ignorance is her strength, while the respondent’s knowledge is the basis for subjecting her to the question. I think this is what the RQI taps into: because the interrogative relationship prises apart expertise and power, it is especially useful for reworking the sources of bureaucratic governance that most people experience as their primary mode of interaction with the state. By demanding reasons, ordinary citizens help to police the reasonableness of the administrative state; by demanding a fair process, they remind officials that a fair process is expected; by demanding to know what their role in any decision affecting their lives will be, they build the assumption that there will be a role for them into every discussion of the decision, and this assumption can be self-fulfilling.