Parfit Group Week 2: Open Thread

I haven’t heard from Leigh, so I’m posting this mostly as a placeholder. Feel free to jump in with comments, questions, or discussion on chapters 2 and 3 of Reasons and Persons.

My own comments: chapter 2 raises a number of fairly conventional rational choice problems. I first became acquainted with these in thinking about voting systems and Condorcet optimality, but they will be familiar to anyone who thinks about preferences. Because a moral theory that dictates exactly the same action or choice to all adherents will often lead to less-than-optimal outcomes, all consequentialist moral theories and any remaining publicly defended (i.e. non-self-effacing) theories of self-interest will need to be agent-relative, asking different actions of different actors. We can’t all play Falstaff, I guess.

What follows in the remainder of the chapter and throughout chapter 3 is a nice little introduction to collective action problems. What Parfit calls the ‘Repeated Prisoner’s Dilemma’ in this section on ‘practical problems’ is just what most people think of as the traditional ethical situation: our choices will have an impact on the choices of others, and our capacity to choose in the future will be expanded or constrained in non-linear ways based on the choices we make today. Parfit then proposes to evaluate several attempts to deal with this: trustworthiness, reluctance to free-ride, Kantian universalism, and altruism. Motivationally, these can be solutions both for S and for C: self-interested agents might create a reputational network to evaluate the trustworthiness or tendency to free-ride of potential collaborators, for instance, just as they might call for an equitable rule of law in a Hobbesian state or adopt strategic (but occasionally minimalist) altruism so as to enjoy social goods like friendship and rescue. Still, this is a nice typology for thinking through collective action problems.

In chapter 3, Parfit focuses his attention on some alleged trumps to these solutions. These objections are sometimes offered as trumps to consequentialist calculation. All of them assume that the agent can contribute to some great good or evil: saving trapped miners, torturing prisoners, salving the suffering of the wounded, or overfishing a lake. I take this to be a straightforward analysis, until the final sections on small and imperceptible benefits and harms. When I drive my car or use electricity, compete in the marketplace for scarce goods, or volunteer for service work, my actions set off a complicated chain of reactions. The consequences may not be available for my precalculation, but as Parfit argues, we cannot fairly extend this fallibilism to retrospective and second-order calculations by refusing to engage in historical and counterfactual reasoning. This is where Parfit takes moral calculation to a kind of moral calculus of infinitesimals. Even if an act makes an imperceptible contribution to some harm or benefit, this act will still be T-given or T-deprecated if it adds to or subtracts from the share.
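
To make the arithmetic behind that “share” explicit (this is my gloss, in my own notation, not Parfit’s): if $n$ agents together produce a total benefit or harm $H$, the share-of-the-total view credits or blames each contributor with $H/n$, however imperceptible any single contribution may be:

$$ h_i = \frac{H}{n}, \qquad \sum_{i=1}^{n} h_i = H. $$

On this reckoning, each of Parfit’s Harmless Torturers is answerable for a share of the thousand victims’ suffering rather than for nothing at all.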

Yet consider Walter Sinnott-Armstrong’s argument about global warming and driving: if I go for a Sunday drive, I add a small but measurable amount of carbon dioxide to the atmosphere. Carbon emissions in general contribute to global warming, which will have terrible consequences for future generations, probably including natural disasters like Hurricane Katrina. So, together with the fossil fuel usage of all human beings, my Sunday drive has contributed minusculely to massive suffering. Parfit argues that I am thus infinitesimally to blame, and I suspect that I might be. Sinnott-Armstrong finds this unpersuasive: after all, he cannot prevent climate change by forgoing the drive. He can’t even delay it. All the suffering will result regardless: global warming is the result of a massive collective act, and Sinnott-Armstrong argues that we are obligated only to act collectively to resolve it, by advocating for policy changes at the national and global level to stave off global warming.

Sinnott-Armstrong might respond to Parfit’s Harmless Torturers (or to Adolf Eichmann) by suggesting that, rather than give up their jobs in uncertain economic times, they ought to continue to push the button that tortures the thousand victims… while vociferously protesting the policies of torture. The prisoners’ suffering will only end if all the torturers organize a work stoppage and prevent scabs. So if, in the process of trying to organize the general strike, the Harmless Torturer continues to go to work and push his button, indirectly and imperceptibly leading to the severe pain of his prisoners, I find, on reflection, that I would have trouble deriding the torturer’s choice. Perhaps this is simply evidence of my preference for political rather than psychological solutions to collective action problems: I’m a political philosopher, after all. But how can anyone have reason to act a particular way if they submit their decisions about Sunday drives and steak dinners to political deliberation and coercive policy? Mustn’t there be independent ethical/psychological reasons for acting in these settings? Any thoughts?


3 responses to “Parfit Group Week 2: Open Thread”

  1. Toby

    Assume that my abstaining from unnecessary driving in no way diminishes the suffering that is caused by global warming. Some versions of consequentialism hold that I therefore am not obligated to abstain from unnecessary driving. Yet I still have the intuition that I am obligated to abstain from unnecessary driving. So I should reject either this intuition or the versions of consequentialism that conflict with this intuition. But I have no idea how to decide which should be rejected. How much weight should our intuitions be given versus some ethical theory that conflicts with those intuitions? (There is the further problem of determining what counts as an intuition, but I leave that aside.) At times, Parfit seems to rely on intuitions as offering overwhelming reason to reject some theory (e.g., the “Repugnant Conclusion” is felt to be repugnant, and this calls for a rejection of any theory that entails the “Repugnant Conclusion”). At other times, Parfit seems to reject intuitions on the basis that they conflict with some theory (e.g., we should reject our intuition that personal identity matters in light of Parfit’s theoretical arguments that personal identity does not matter). But how does one know when to reject the theory in favor of the intuition or vice versa? It seems that Parfit should provide non-arbitrary criteria for determining this, and I am not aware that he has done so.

  2. Joshua

    I'm with you, Toby. For a guy seeking reasons, Parfit seems to depend on intuition pumps and thought experiments an awful lot. If stories about wounded men in the desert count as reasons for rejecting an argument, then we're working with a fairly broad definition of 'reasons.' Moreover, his tendency isn't quite reflectively rational: he doesn't move back and forth between theory and examples, but rather moves between examples, always adapting the theory. At best, theoretical hurdles help him generate new examples.

    That said, I take Parfit to be fairly responsible in his reason-giving vis-a-vis imperceptibles. Since he begins by showing that we can praise or blame an agent for their share of the total increase or decrease in benefits or harms, his further claims about small changes (the pint of water each of us adds to the water-cart) seem to follow both theoretically and intuitively. Since each of a thousand altruists must add their pint in order to save the wounded men, each altruist has to be able to come to the same conclusion in order for the water-cart to be filled. The same is true for any collective activity: if a consequence requires coordinated activity, then agents seeking that end need to be able to realize through some theory of action what is required of them to achieve the collective effect, or else that effect is impossible or merely happenstance.

  3. Steven Maloney

    Sorry I'm late to the party. Part of the difficulty I see with Parfit is that his examples require no discovery costs. How do I know what benefit I might get out of a Sunday drive if I never take one? How do I know the range of benefit unless I go frequently? If the only way to come to know such effects is to engage in the activity in the first place, where does that leave our revised moral position?
