Peter Levine has been blogging on various aspects of truth recently: democracy in a “post-truth era,” issues in prediction, and now a piece on scientism:
if all truths were scientific truths, we would be in deep trouble. We would then reject any claims that science cannot support. For example, do all human beings have equal value or worth? Either that makes no scientific sense (because objective or intrinsic value is not a scientific idea), or it is manifestly false, because science translates “value” into something like capacity or functioning, and then it is obvious that not all humans are equal.
I would argue that the agential view that treats us as reason-responsive free subjects is a subset of the naturalistic one, and that when naturalism and the agential view are in conflict, naturalism trumps. But I believe that values are compatible with naturalism.
Here’s how I’d put it. “Human equality” is not falsified by science’s insistence on objectivity; it is falsified by our practices and common-sense observations. Simply put: we don’t treat all humans equally at present, so the claim of “human equality” is either nonsensical or aspirational. I take it that Levine’s worry is about a scientism that says that such values are nonsense, but I prefer to think of them as aspirations to extend our limited and fragile practices of equality beyond their current scope.
The real danger is not dissolving “human equality” into observable inequality (of status and capacity) but assuming “human equality” is settled while there is still work to be done in achieving the kingdom of ends. We don’t treat women equally to men, we don’t treat non-whites equally to whites, and we don’t treat foreigners equally to neighbors. But we should, and we do aspire to do better.
If equality is aspirational, we don’t need to adopt a non-naturalist metaphysics in order to justify it: we can explain the origin and practice of equality norms in our current practice naturalistically, and then explain our desire to extend those norms naturalistically as well. This is where P.F. Strawson, Elinor Ostrom, Cristina Bicchieri, Karen Stohr, and Jerry Gaus can help.
9 responses to “Naturalism and the Truth of Human Values”
Can we explain not just our *desire* to expand those norms/patterns of behavior, but the idea that we *should* expand such norms, purely in scientistic terms?
(Noting that “the only truths are the truths of science” is quite a bit stronger than “the only metaphysical stuff is natural metaphysical stuff.”)
First, I take it that “scientism” is the name of the bad replacement naturalism of the Churchlands, not what I’m advocating. I use it to avoid equivocation with quasi-Quinean naturalism. The only truths are the truths of naturalism, but usually the most comfortable way to express those truths is in terms of their respective domains and disciplines: naturalism is the regulative ideal, is all.
Between “ought” and “desire” I think we have an account of norms, rules, and strategies. So, I’d put it this way: in order to give a naturalistic account of some duty, we’d have to do some translation into a formal semantics. However, we don’t normally need to do this; it’s usually pointlessly difficult, though in principle that translation is always possible. In ordinary language there’s always the possibility that we’ll equivocate or make other mistakes, and in fact that equivocal slippage is one of the sources of current norms of “human equality.”
What does “translation into formal semantics” have to do with figuring out which desires we ought to pursue? It would be kind of you to begin your answer to my question with a brief indication of what you take that phrase to mean. I seem to be sucking too much CO2 right now.
Joe, I’m referring to things like Ostrom’s Institutional Analysis and Development Framework here. But Daniel’s question was about explaining duties in a naturalistic framework, not about deciding. Deciding is really much easier.
I think the odds of us ever successfully communicating are close to nil.
Umm. I think you have successfully communicated your frustration, but not your reasons for it.
Here’s a paper on Ostrom’s framework and the “ADICO” syntax: http://www.indiana.edu/~workshop/publications/materials/W08-33%20Draft.pdf
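For readers unfamiliar with the ADICO grammar, here is a minimal sketch of what such a “translation into a formal semantics” might look like. The field names follow Crawford and Ostrom’s grammar of institutions (Attributes, Deontic, aIm, Conditions, Or else); the class design and the example equality norm are purely illustrative, not anything from the linked paper:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InstitutionalStatement:
    """One statement in Crawford and Ostrom's ADICO grammar.

    Dropping components recovers the weaker statement types:
    ADIC (no sanction) is a norm; AIC (no deontic) is a shared strategy.
    """
    attributes: str          # A: to whom the statement applies
    deontic: Optional[str]   # D: "may", "must", or "must not"
    aim: str                 # I: the action or outcome regulated
    conditions: str          # C: when and where the statement holds
    or_else: Optional[str]   # O: sanction for noncompliance

    def statement_type(self) -> str:
        if self.deontic and self.or_else:
            return "rule"             # full ADICO
        if self.deontic:
            return "norm"             # ADIC
        return "shared strategy"      # AIC

# An illustrative equality norm -- note that without a formal
# sanction ("or else") it is a norm, not an enforced rule:
equal_consideration = InstitutionalStatement(
    attributes="all members of the community",
    deontic="must",
    aim="give others' interests equal weight",
    conditions="in collective decisions",
    or_else=None,
)
print(equal_consideration.statement_type())  # norm
```

The point of the exercise is only that a duty-claim can, in principle, be rendered in a machine-checkable form; which statements we ought to adopt is, as noted above, a separate question.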
I was going to pose the same question as Daniel, and I’m not sure I follow the discussion after his question (although I look forward to reading the Strawson piece). But to put Daniel’s question a different way: We call expanding our respect for equality “aspirational,” whereas we regret our growing capacity to kill other human beings. How can we know which developments of our natural orientation are aspirational, naturalistically?
Strawson allows us to say that we know which developments are regrettable and which aspirational just because they’re the ones that we regret or towards which we aspire. But I think he makes it *too* easy, and we need additional language to talk about the development of meta-norms like those advocated by the traditional ethical systems (virtue, duty, pleasure, capability).
Here, again, is where reflective equilibrium is risky: where it’s possible to perform a Parfitian act of moral convergence we should do so, showing that norms that start out in different languages are actually not practically or institutionally distinct. But some norms are in irremediable conflict, and there I do think that we have to accept that our informed intuitions are the only possible truth-makers. It’s my belief that our informed intuitions will always be basically consequentialist, in either a domination-reducing or a capability-maximizing framework, but I haven’t fully resolved whether there might be tensions between those two perspectives. (Daniel Levine suggests that there’s still a third valence, care, which might trouble this account further. I am a deflationist about care, hoping to make it fit under capabilities, but ignorantly so.)
I don’t mean any of this in a relativistic sense, but I do think it opens up two difficult possibilities: in the future, things that we now take to be moral (even as reflective elites) may be taken to be immoral (even among reflective elites), and vice versa: things we now believe to be immoral may be taken to be moral. As reflective elites I think it’s easy to point to folk morality that’s “obviously” immoral, but it’s harder to believe this of ourselves and our own beliefs without feeling the risk of relativism. The intertemporal comparisons are only supposed to work one way, progressively, but that’s not how fallibility actually works.