I haven’t heard from Leigh, so I’m posting this mostly as a placeholder. Feel free to jump in with comments, questions, or discussion on chapters 2 and 3 of Reasons and Persons.
My own comments: chapter 2 raises a number of fairly conventional rational choice problems. I first became acquainted with these in thinking about voting systems and Condorcet optimality, but they will be familiar to anyone who thinks about preferences. Because a moral theory that dictates exactly the same action or choice to all adherents will often lead to less-than-optimal outcomes, all consequentialist moral theories and any remaining publicly defended (i.e. non-self-effacing) theories of self-interest will need to be agent-relative, asking different actions of different actors. We can’t all play Falstaff, I guess.
What follows in the remainder of the chapter and throughout chapter 3 is a nice little introduction to collective action problems. What Parfit calls the ‘Repeated Prisoner’s Dilemma’ in this section on ‘practical problems’ is just what most people think of as the traditional ethical situation: our choices will have an impact on the choices of others, and our capacity to choose in the future will be expanded or constrained in non-linear ways based on the choices we make today. Parfit then proposes to evaluate several attempts to deal with this: trustworthiness, reluctance to free-ride, Kantian universalism, and altruism. Motivationally, these can be solutions both for S and for C: self-interested agents might create a reputational network to evaluate the trustworthiness or tendency to free-ride of potential collaborators, for instance, just as they might call for an equitable rule of law in a Hobbesian state or adopt strategic (but occasionally minimalist) altruism so as to enjoy social goods like friendship and rescue. Still, this is a nice typology for thinking through collective action problems.
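For readers who like their dilemmas concrete, the repeated game can be sketched in a few lines of code. This is only an illustrative toy, not anything from Parfit: the payoff numbers and the two strategies (conditional cooperation versus unconditional defection) are standard game-theory conventions, chosen to show why reputation and trustworthiness pay off once the game is repeated.

```python
# Toy iterated Prisoner's Dilemma. Payoff values are the conventional
# ones (temptation 5, mutual cooperation 3, mutual defection 1, sucker 0),
# not anything specified by Parfit.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """Defect unconditionally, whatever the opponent has done."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Return cumulative scores for both players over repeated rounds."""
    score_a = score_b = 0
    hist_a, hist_b = [], []  # each player's record of the opponent's moves
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

# Conditional cooperators do far better together than mutual defectors:
print(play(tit_for_tat, tit_for_tat))      # (30, 30)
print(play(always_defect, always_defect))  # (10, 10)
```

The point of the sketch is just that once the game is repeated, strategies that reward trustworthiness and punish free-riding outperform pure defection, which is why reputational networks can serve even purely self-interested agents.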
In chapter 3, Parfit focuses his attention on some alleged trumps to these solutions. These objections are sometimes offered as trumps to consequentialist calculation. All of them assume that the agent can contribute to some great good or evil: saving trapped miners, torturing prisoners, salving the suffering of the wounded, or overfishing a lake. I take this to be a straightforward analysis until the final sections on small and imperceptible benefits and harms. When I drive my car or use electricity, compete in the marketplace for scarce goods, or volunteer for service work, my actions set off a complicated chain of reactions. The consequences may not be available for my precalculation, but as Parfit argues, we cannot fairly extend this fallibilism to retrospective and second-order calculations by refusing to engage in historical and counterfactual calculation. This is where Parfit takes moral calculation to a kind of moral calculus of infinitesimals. Even if an act makes an imperceptible contribution to some harm or benefit, this act will still be T-given or T-deprecated if it adds to or subtracts from the share.
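The arithmetic behind this calculus of infinitesimals is worth making explicit. The numbers below are my own illustrative assumptions (Parfit states the Harmless Torturers case abstractly): a thousand torturers each turn a dial that adds one-thousandth of full pain to each victim, and no single increment crosses anyone's threshold of perception.

```python
# Parfit's Harmless Torturers as back-of-the-envelope arithmetic.
# All numeric values here are illustrative assumptions, not Parfit's.
TORTURERS = 1000
INCREMENT = 1.0 / TORTURERS          # one torturer's tiny contribution
PERCEPTION_THRESHOLD = 0.01          # assumed: below this, no victim notices

def total_pain(participating_torturers):
    """Pain each victim suffers, given how many torturers press the button."""
    return participating_torturers * INCREMENT

# No single torturer's contribution is perceptible on its own...
assert INCREMENT < PERCEPTION_THRESHOLD
# ...yet all of them acting together produce maximal pain:
print(total_pain(TORTURERS))         # 1.0 — full torture for each victim

# And one torturer's abstention changes each victim's pain by an amount
# that is itself below the threshold of perception:
delta = total_pain(TORTURERS) - total_pain(TORTURERS - 1)
print(round(delta, 6))               # 0.001
```

This is exactly the structure that makes "my act made no perceptible difference" a tempting but, on Parfit's view, fallacious excuse: the imperceptible increments sum to the whole harm.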
Yet consider Walter Sinnott-Armstrong’s argument about global warming and driving: if I go for a Sunday drive, I add a small but measurable amount of carbon dioxide to the atmosphere. Carbon emissions in general contribute to global warming, which will have terrible consequences for future generations, probably including natural disasters like Hurricane Katrina. So, together with the fossil fuel usage of all human beings, my Sunday drive has contributed minusculely to massive suffering. Parfit argues that I am thus infinitesimally to blame, and I suspect that I might be. Sinnott-Armstrong finds this unpersuasive: after all, he cannot prevent climate change by forgoing the drive. He can’t even delay it. All the suffering will result regardless: global warming is the result of a massive collective act, and Sinnott-Armstrong argues that we are only obligated to act collectively to resolve it, by advocating for policy changes at the national and global level.
Sinnott-Armstrong might respond to Parfit’s Harmless Torturers (or to Adolf Eichmann) by suggesting that, rather than give up their jobs in uncertain economic times, they ought to continue to push the button that tortures the thousand victims… while vociferously protesting the policies of torture. The prisoners’ suffering will only end if all the torturers organize a work stoppage and prevent scabs. So if, in the process of trying to organize the general strike, the Harmless Torturer continues to go to work and push his button, indirectly and imperceptibly contributing to the severe pain of his prisoners, I find, on reflection, that I would have trouble deriding the torturer’s choice. Perhaps this is simply evidence of my preference for political rather than psychological solutions to collective action problems: I’m a political philosopher, after all. But how can anyone have reason to act in a particular way if they submit their decisions about Sunday drives and steak dinners to political deliberation and coercive policy? Mustn’t there be independent ethical/psychological reasons for acting in these settings? Any thoughts?