In the first place, the human capacity for moral reasoning can be strengthened because our general reasoning capacities are amenable to improvement. This is most readily observed by considering how people develop as they mature from children to adults. We improve at logic, whether or not we learn the fancy Latinate names for rules of inference. We also get better at understanding and explaining events, and thereby become more adept at the common form of reasoning called inference to the best explanation. These are inferences to hypotheses about what best explains some event or phenomenon, and we make them all the time. If the lawns and streets are wet this morning, the best explanation is that it rained last night while we were asleep. Such inferences are especially useful for making one's view of the world more accurate. They help us see through many popular conspiracy theories, for instance: those that require implausibly exquisite competence and secrecy from a large group of people. Thus abstract principles of reasoning, in conjunction with observation and general knowledge, can generate substantial conclusions about the world. Some basic training in logic and statistical reasoning doesn't hurt, either.
Two initial objections may occur to the reader. First, haven't behavioral economists and psychologists shown us that people are very bad at reasoning? We humans are predictably irrational in various respects, often succumbing to errors about value and probability that are relatively easy to exploit. Second, what do these reflections on general-purpose reasoning have to do with specifically moral thought? Both worries are reasonable but not decisive. What economists and psychologists have shown is that people unreflectively adopt heuristics—rough-and-ready simplifying principles—that work pretty well in a wide variety of common contexts. When interested parties, including both marketers and scientists, figure out the heuristics people use, they can exploit the circumstances in which those heuristics fail. We commonly chase sunk costs, overvalue things that belong to us, and respond differently to equivalent scenarios depending on how they are framed. Although these failures of rationality are fascinating and important, concentration on them can obscure the fact that our reasoning works well in many other circumstances.
What does all this have to do with specifically moral reasoning? Plenty of general reasoning is morally relevant, in that it can be applied to morally significant cases and yield practical conclusions. Hence in an important sense we can improve our moral reasoning simply by improving our general-purpose reasoning and applying it to moral questions. A narrower conception of moral reasoning would focus on our ability to reason with moral concepts such as fairness. Again, it is helpful to look at bad moral reasoners: children just learning how to apply moral concepts. Kids learn early that "It's not fair" is more powerful than "I don't want to" or just "No." Once introduced to the power of appeals to fairness, children will start to make them where they previously would have said no or just cried, as if "unfair" simply meant "contrary to what I want." But quickly enough one learns that claims about fairness have to be disciplined in certain ways. You can only make a claim of fairness when you can offer reasons others should accept as binding regardless of who benefits in any specific case. This is just one example of how specifically moral reasoning can be improved.
Admittedly, general principles about fairness and other moral concepts only go so far, and they cannot determine how to balance fairness against other goods, such as welfare. This is not a question that can be given a persuasive answer in the abstract. While it is easy to doubt whether judgments about problem cases are justified, in part because intuitions diverge, it is hard to deny the superiority of certain answers in many ordinary cases. Hence although there are grounds for a modest pessimism about the limits of moral reasoning, the radical proposal under consideration here, that moral reasoning cannot be strengthened at all, has deeply implausible ramifications.
Recently several arguments have been offered as scientific grounds for such strong pessimism about moral reasoning. I will focus here on two of the most significant. First, it is claimed that the function of the brain is advocacy rather than discovery; the idea is that evolution built the brain to win arguments rather than to find truth. Second, some hold that moral reasoning is fraudulent because we typically engage in it when a moral judgment has already been made on other grounds, primarily emotional ones. Such reasoning amounts to no more than a search for arguments for a pre-established conclusion. In this view, what passes for moral reasoning is really post hoc rationalization. [For those who want more detail, here is a link to one of the most prominent papers in the pessimistic genre, which discusses several of these arguments.]
The idea that the human brain is a machine built to win arguments rather than to discover the truth seizes on the fact that people are biased in myriad ways, and these biases influence their evaluation of evidence. (This tendency is not quite as dismal as it seems, since it makes sense to hold on to core beliefs and values firmly rather than continually reevaluating them, which would have significant psychic costs.) But the claim that the brain’s primary function is advocacy rather than discovery is not credible. Most intellectual tasks involve problem solving rather than persuasion; you don’t argue with a bear but hunt, fight, or flee from it. Even in social contexts, where persuasion is most important, there are obvious costs to being proven wrong. Convince the tribe that bears are harmless and your reputation is likely to suffer. This picture of human thought looks badly distorted. As with much of the case for pessimism, this is an overstatement that derives illicit support from its cynical appeal.
The second claim is directed specifically at moral reasoning, which it holds to be mere rationalization. In this view, even though general reasoning skills can be improved, these improvements do not carry over to moral thinking because the moral domain is shot through with strong interests and emotions. Worse yet, sharp reasoning skills can improve people's ability to justify whatever they want to do. But the evidence for such an extraordinary conclusion turns out to be surprisingly weak. The strongest case comes from the well-documented human capacity to confabulate about our reasons, telling neat stories about what we do and why, which do not hold up under scrutiny. Although confabulation happens in various contexts, many of which have nothing to do with value judgment, no one thinks that this phenomenon supports a global pessimism about our knowledge of our own reasons. Even if we are sometimes wrong about what we're doing, everyone grants that in most ordinary contexts we know both what we are doing and why. Moreover, sometimes when people confabulate a false causal story, they are still sensitive to reasons that they cannot articulate. In one famous study, subjects who were unconsciously tipped off by an experimenter about how to solve a puzzle often told demonstrably false stories about how they hit upon the solution—but they were nonetheless responsive to a clue in their environment that helped them solve the problem.
Consider one of the most frequently cited experiments concerning moral judgment specifically. Social psychologists claim to have found that subjects were morally dumbfounded by various "offensive yet harmless" scenarios, such as eating one's dead pet dog or cleaning the toilet with the national flag. That is to say, although the subjects were quite sure there was something wrong with these actions, they could not give reasons in support of their judgments. Or so it is claimed. This is put forward as evidence that we typically or always make moral judgments on the basis of irrational emotions and then search for bogus reasons to support them afterwards.
While the dumbfounding experiment has been deeply influential, it has serious problems, the worst of which is that it presupposes an extremely narrow conception of what can count as a good practical reason: a reason to act or to forbear from acting. This should be clear, since there are obviously good reasons not to perform the offensive actions previously described or—to take another example from the original experiment—to cannibalize a corpse in a medical laboratory so as to avoid wasting meat. In the dumbfounding scenarios, the experimenters stipulate that there are no harmful consequences of actions that stir up strong aversive reactions. But although a fortunate outcome can be stipulated about a fictional scenario, one cannot simply stipulate that a type of action isn't dangerous—that is, likely to be harmful in realistic contexts—or that it does not violate well-founded rules, such as the rule laboratories have against the desecration of corpses. The general tendencies of actions are matters of fact rather than stipulation. This point is crucial, because good rules and sound intuitions are based on such generalizations about the likely consequences of an action, not on its specific results, which are often unpredictable.
Moreover, the psychological literature on moral dumbfounding presupposes that the only thing that can count as a practical reason is harm. It then adopts an untenably narrow conception of what counts as harmful that ignores danger, treats well-founded rules as mere suggestions, and ignores painful emotions even when they are predictable. This is not science but scientism, which purports to rest on purely empirical grounds but actually relies on hidden and implausible moral premises. [Read a more detailed critique of the moral dumbfounding experiment].
It is mere scientism to insist that deeply held human aversions and attractions, such as our sensitivity to the expressive aspects of our actions, are irrational taboos to be dismissed as magical thinking. Yet this literature does just that. There is nothing inherently magical about being averse to sticking pins in a doll constructed to resemble one's child, for instance. Magical thinking requires some false causal belief, such as belief in the power of voodoo; but one need not labor under any such illusion in order to prefer not to deface an image of one's beloved. Similarly, people are reluctant to do things with a symbol of something they care about (such as a flag) that suggest indifference or hostility toward what it symbolizes. Most of us would not want to drink water that has had a sterilized roach dipped into it, even though we know the roach did not add any germs to the water, simply because such "roached" water is disgusting. While there is a science of disgust, there is no science of the disgusting—that is, of what merits disgust—and the tacit assumption that only germs can be disgusting leads to some obviously absurd conclusions. But these are just the cases that the psychological literature takes to show that we are in the grip of taboos and magical thinking—not just in certain instances of moral judgment but typically.
Most ordinary people are aware of these points intuitively, even if they cannot say more about why they are averse to drinking roached water, desecrating corpses, or eating their dead pet. The dumbfounding literature simply assumes that the offensiveness and disgustingness of certain actions do not provide reason to avoid them—so long as those actions are stipulated to be, in some narrow and artificial sense, harmless. Indeed, harm itself is not a scientific concept but a moral one; yet that does not undermine its significance. Though one could attempt to formulate an empirical notion of what counts as an injury, say, all that would show is that there are other sorts of harm besides injury. It is simply not in the purview of science to discover what humans ought to care about.
What we are left with, after the hyperbole, is that there are real worries specific to moral reasoning. When people's interests are involved in an argument, we can expect them to be biased. And when people are in the grip of a strong emotion, they are often unreasonable. These observations are true but banal. What is more, conceptions of moral reasoning that require it to be untainted by anything contingently human—such as our attachment to specific people and projects, our sensitivity to symbolism and emotional expression, and our special concern with the consequences of our own actions—will inevitably be disappointed by the inability of humans to live up to them. But none of this is to say that it is impossible for people knowingly to act against their self-interest, or to constrain their behavior on grounds of fairness or other moral concepts. We can and do, often—albeit not as often as we flatter ourselves in thinking.
But the question was not whether moral reasoning is difficult: difficult to engage in honestly and, at least sometimes, even harder to follow where it leads. It was whether it is possible to strengthen our capacity for moral reasoning—or whether, as some pessimists claim, moral reasoning is fraudulent or pointless at its core. The modestly pessimistic claim is true but rather obvious to those who are not naïve about human nature. The strongly pessimistic claim is exaggerated and as simplistic as its naïvely optimistic counterpart. We should reject it.
Questions to consider in the comments:
- What are the strongest grounds for pessimism about moral reasoning?
- Can you formulate a claim weaker than that moral reasoning cannot be strengthened but stronger than that moral reasoning is difficult, which can be stated clearly and evaluated with evidence?
- What challenges are specific to moral reasoning, and what sorts of strategies might be employed to meet them?
- Is there anything inherently irrational in caring about the expressive and symbolic aspects of our actions? Would creatures more rational than humans not care about such things? Do you suppose that we humans could rid ourselves of such cares and, if we could, why think we should do so?
- What would you say about someone who engaged in cannibalism or ate a dead pet, not because he was starving but simply in order to avoid wasting edible meat or to try something new? What about someone who did something that risked serious emotional harm but which, in the event, proved harmless? Are these actions OK or is there something wrong with them?
Discussion Summary
Most of the comments on the essay focused less on moral reasoning than on moral judgment generally. Perhaps this should not be surprising. Although the essay was primarily concerned with recent scientifically based arguments for pessimism about moral reasoning, those arguments tend to take for granted the answers to the most basic issues of moral metaphysics. That is to say, they assume that moral language is meaningful, that some moral claims are true, and that moral knowledge exists, even if they disagree about the nature of such truth. The psychologists tend to accept moral intuitions as they stand, despite thinking them driven by emotion rather than sensitive to reason, by adopting a form of moral relativism. The philosophers tend to reject commonplace moral intuitions, precisely because they depend on emotion, but they want to replace these tainted judgments with self-evident, rational intuitions. Thus both embrace pessimism about ordinary moral reasoning despite being optimists (that is, realists) about moral judgment.

These new arguments arising from the empirical ethics movement are novel and interesting, but they presuppose certain things that several of the commenters wanted to call into question. It might help wrap up the discussion to examine these presuppositions. Consider first the challenge that moral judgments are just expressions of the speaker's emotions and therefore neither true nor false. This is the view associated with logical positivism, but it need not adopt the positivists' radical account of meaningfulness, which has largely been abandoned. The biggest problem with this simple story of moral judgment as expression of emotion is that nobody treats moral judgments—their own or other people's—in this way. In making moral judgments we attempt to persuade others to feel as we do. At the very least, then, what we express is not just approval or disapproval; we also urge others to share it.
This is the point at which moral reasoning enters the picture. Another fact about how moral discourse actually takes place is that we do not simply make judgments but offer reasons for them, which purport to justify those judgments. We do not simply say that abortion is always wrong or permissible (or some more qualified claim). We also say why: because it stops a beating heart, or because it is my body and my choice—to take two bumper-sticker-quality reasons. Although many people hold skeptical theories about moral judgment, few can consistently treat moral judgments the way those theories seem to require. And it does seem that some reasons are better than others: most people who have thought about the issue can offer at least a first line of response against both of those bumper-sticker reasons. This shows how we treat moral judgments: as claims that stand in need of justification, and that can be justified with reasons. To be sure, those reasons give out pretty quickly. It's hard to see what more could be said to defend the claim that pain is bad, for instance, but it's also hard to see what further justification of that claim is necessary.
What needs to be the case in order for moral reasoning to be anything like what it purports to be, namely evidence in favor of some moral judgment? It need not be the case that there are objective answers to all moral questions, especially not answers that are independent of anything distinctively human and can speak to all rational beings. There are many domains of evaluative judgment—concerning aesthetics, for instance—where it seems clear that something human must be implicated in truths about beauty. A rational being with fundamentally different sensory equipment would not see any point to our concept of the beautiful. Nor does it seem that we need to believe in a final aesthetic truth, in which, say, all painters are ranked in order of quality, in order to be confident in the relative merits of Rembrandt and Rockwell as painters.
It seems instead that there are more modest ways in which we can improve our ability to reason about moral matters. We can point out relevant similarities between one case and another—for instance, between abortion and capital punishment—and note disanalogies between the cases as well. We can then examine these similarities and dissimilarities and ask whether some are more pertinent than others. There is no guarantee that we will agree about this, or that we will not find ourselves, at the end of the day, unsure about our judgments. But this is true of disagreement and reasoning in other areas as well.
Two New Big Questions:
1. When is the opinion of experts more likely to get the right answer than mass opinion, and when does the “wisdom of crowds” exceed that of individual experts?
2. How do universities promote or inhibit diversity of opinion?