Man is a rational animal - so at least I have been told. Throughout a long life, I have looked diligently for evidence in favor of this statement, but so far I have not had the good fortune to come across it, though I have searched in many countries spread over three continents.
This sentiment is often expressed (usually a little more forcefully) in pub conversations among the like-minded, referring to any number of examples, such as support for Chelsea FC, the charms of Katie Price, or the attraction of Strictly Come Dancing.
Pascal Boyer's Religion Explained, an excellent exploration of the anthropological evidence for the origins of religious thinking, includes a discussion of morality in chapter 5, Why Do Gods and Spirits Matter?. He talks about moral reasoning and feeling:
We all have moral intuitions (“My friend left her purse here, I must give it back to her”), moral judgements (“He should have returned his friend’s purse”), moral feelings (“He stole his friend’s purse, how revolting!”), moral principles (“Stealing is wrong”) and moral concepts (“wrong”, “right”). How is all this organized in the mind? There are two possible ways of describing the mental processes engaged. On the one hand, moral judgements seem to be organized by a system of rules and inferences. People seem to have some notion of very general principles (e.g., “Do not harm other people unless they harmed you”; “Do unto others as you would have them do unto you”; etc.). These provide very general templates. If you fill the placeholders in the templates with particular values—the names of the people involved, the nature of the action considered—you reach a certain description of the situation with a moral tag. This is called the moral reasoning model. On the other hand, in many cases people just seem to feel particular emotions when faced with particular situations and particular courses of action. Without having a clear notion of why we have a particular reaction, we know that doing this rather than that triggers emotional effects that goad us in one direction rather than the other, or that make us proud to have done the right thing, guilty if we did not, revolted if others did not, etc. This is a moral feeling model.
All well and good, and certainly I've often thought that morality arose from a confluence of these two forces. There are some intriguing problems, however, that show our moral feelings can be a little inexplicable when we try to reason them through.
I was sad to hear of the recent death of Philippa Foot. She was the philosopher who first introduced the Trolley Problem, to throw the spotlight on our moral intuitions. The basic formulation is:
A trolley is running out of control down a track. In its path are five people who have been tied to the track by a mad philosopher. Fortunately, you could flip a switch, which will lead the trolley down a different track to safety. Unfortunately, there is a single person tied to that track. Should you flip the switch or do nothing?
I think most people would flip the switch, but one can nurdle out the sort of moralist one is by the answer one gives. Some would consider themselves infected by the ongoing moral wrong if they participated, apparently. The thought experiment becomes more interesting when comparing different scenarios with the same participants and potential outcomes.
Judith Jarvis Thomson suggested this:
As before, a trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by dropping a heavy weight in front of it. As it happens, there is a very fat man next to you - your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five. Should you proceed?
Brilliant! I think most would feel more resistant to this second act than the first; perhaps it's the direct action on the person, or his proximity? Or maybe even his fatness. Anyway, the point is, we do have feelings about these scenarios that aren't necessarily easy to reason through. The naturalist considers such feelings to be evolved, combined with cultural reinforcement in childhood. Or perhaps the cultural reinforcement simply hooks into an evolved mechanism for distaste - Boyer discusses childhood development, so it's worth a read. But I was more intrigued by the drivers proposed for altruism:
Kin selection
Reciprocal altruism
Commitment gadgets
Kin selection and reciprocal altruism are fairly well known, but I was less familiar with commitment gadgets. These are fascinating because they show how an irrational element in our behaviour could, perhaps, be beneficial. As Boyer points out:
People behave in altruistic ways in many circumstances where no common genes are involved and no reciprocation is expected. They refrain from extracting all the possible benefits from many situations. It would be trivially easy to steal from friends, mug old ladies or leave restaurants without tipping. Also, this restraint does not stem from rational calculation—for instance, from the fear of possible sanctions—for it persists when there is clearly no chance of getting caught; people just say that they would feel awful if they did such things. Powerful emotions and moral feelings seem to be driving behaviour in a way that does not maximize individuals’ benefits.
Boyer gives the example of a shopkeeper and his assistant: why does the assistant refrain from stealing from the till? If it were known that the shopkeeper would react intemperately and irrationally to such a thing, that would be a deterrent to the assistant. If the assistant thought the shopkeeper would reasonably see that he had simply lost some takings, and that a murderous attack was not an appropriate response, he might well risk the theft. But he is far less likely to take the risk if the downside might be his murder. So we can see how irrational behaviour could be beneficial in a society that demands cooperation. Boyer says:
So to be known as someone who is actually in the grip of such passionate feelings is a very good thing as long as they are, precisely, feelings that override rational calculations.
He goes on to describe the problem of honesty: it has a cost. We have many opportunities to be dishonest, but most of us don't take them. At the individual level this is irrational, but in a society where cooperation is paramount, having humans with strong feelings driving them to be honest becomes a net benefit.
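To make the shopkeeper example concrete, here is a minimal sketch of the deterrence calculus, with purely invented numbers (it is not from Boyer): an assistant weighing the expected payoff of theft against a proportionate response versus a wildly disproportionate one.

```python
# Toy sketch of the shopkeeper example: how the assistant's expected payoff
# from stealing depends on how the shopkeeper is known to react.
# All figures are hypothetical, chosen only for illustration.

def expected_payoff_of_theft(gain, p_caught, penalty_if_caught):
    """Expected value of stealing: the gain minus the expected penalty."""
    return gain - p_caught * penalty_if_caught

GAIN = 50        # value taken from the till
P_CAUGHT = 0.3   # chance the theft is discovered

# A "rational" shopkeeper responds proportionately (recovers the money, sacks the assistant).
proportionate = expected_payoff_of_theft(GAIN, P_CAUGHT, penalty_if_caught=100)

# A shopkeeper known to fly into a ruinous, disproportionate rage.
disproportionate = expected_payoff_of_theft(GAIN, P_CAUGHT, penalty_if_caught=10_000)

print(f"Against a proportionate response:    {proportionate:+.1f}")    # +20.0 -> theft pays
print(f"Against a disproportionate response: {disproportionate:+.1f}") # -2950.0 -> theft doesn't
```

The numbers are arbitrary, but they show the shape of the argument: it is the shopkeeper's known irrationality, not any calculation he performs at the time, that changes the assistant's sums.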
One of the studies cited by Boyer in his discussion is Boyd and Richerson's Punishment allows the evolution of cooperation (or anything else) in sizable groups (1992), which includes this in the abstract:
We show that cooperation enforced by retribution can lead to the evolution of cooperation in two qualitatively different ways. (1) If benefits of cooperation to an individual are greater than the costs to a single individual of coercing the other n − 1 individuals to cooperate, then strategies which cooperate and punish noncooperators, strategies which cooperate only if punished, and, sometimes, strategies which cooperate but do not punish will coexist in the long run. (2) If the costs of being punished are large enough, moralistic strategies which cooperate, punish noncooperators, and punish those who do not punish noncooperators can be evolutionarily stable.
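The second condition can be illustrated with a toy public-goods calculation. This is my own sketch with invented parameters, not Boyd and Richerson's actual model: a group of "moralistic" cooperator-punishers, and a lone would-be defector who is punished by everyone else.

```python
# Toy illustration (not Boyd & Richerson's model) of their second result:
# if the cost of being punished is large enough, a strategy that cooperates
# and punishes defectors resists invasion by defection. Parameters are invented.

N = 20      # group size
B = 10.0    # benefit each cooperator's contribution adds to the shared pot
C = 3.0     # cost of cooperating
K = 1.0     # cost a punisher pays to punish one defector
P = 4.0     # cost inflicted on a defector by each punisher

def payoffs(n_punishers, n_defectors):
    """Per-individual payoffs in one group of moralistic punishers and defectors."""
    n = n_punishers + n_defectors
    share = n_punishers * B / n                # everyone's share of the common pot
    punisher = share - C - K * n_defectors     # contributes, then punishes every defector
    defector = share - P * n_punishers         # free-rides, but is punished by every punisher
    return punisher, defector

all_punisher_payoff, _ = payoffs(N, 0)        # everyone cooperates and punishes
_, lone_defector_payoff = payoffs(N - 1, 1)   # one individual defects

print(f"Payoff in an all-punisher group: {all_punisher_payoff:.2f}")   # 7.00
print(f"Lone defector's payoff:          {lone_defector_payoff:.2f}")  # -66.50
# With P = 0 (no punishment) the lone defector would get 9.50 and defection would pay.
```

With the punishment term removed, free-riding beats cooperating; with it in place, defection is ruinous. That is roughly the shape of the abstract's second case.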
Coincidentally, I posted a comment on Russell Blackford's blog about Tom Clark on Sam Harris's The Moral Landscape (if you can follow that!). Tom Clark was kind enough to point out a problem with my use of retribution:
Mark Jones: “So we must (perhaps only for the sake of a workable society, which seems to be Dennett's thought) include some element of retribution, but craft the entire justice system in the full knowledge that a person is not a self-made thing.”
Keep in mind retribution is defined as punishing without *any* regard to good consequences, such as having a workable society. The retributivist has to justify punishing the offender as *intrinsically* good. If you can supply a convincing justification for that, I’ll sign up as a retributivist.
I replied:
Yes, retribution seems pointless from my world view, in isolation, but surely when one is deciding policy one cannot discount the fact that millions believe in its intrinsic good? Unfortunately, people behave based on what they believe rather than the truth of the matter. So my point was pragmatic rather than principled - it seems to me that in these circumstances retribution delivers an extrinsic good *because* many people believe in its intrinsic good - perhaps this then falls out of the definition of retribution? I'm not sure what it is then.
And isn't Dennett's quote pointing out that we tend to *feel* that retribution is good, rather than it *is* good? I would argue against retribution in *principle*, despite plotting revenge against my enemies.
It seems to me that retribution might be an example of how our intuitions are out of step with our reasoning. Our demand for retributive justice appears irrational to me, but I can't help feeling it's a good thing! Maybe it is another irrational gadget that is beneficial in a cooperative society.
Before this realisation I'd assumed that irrationality was just a symptom of our imperfect processing, something evolution had yet to fine-tune. But now it's clear that some irrational feelings may be beneficial in themselves, and these will occasionally conflict with our more carefully reasoned positions. In the first instance, we could not be blamed for following our feelings. In the longer term, though, I think we all have a responsibility to justify many of our intuitions through reason, especially in the knowledge that our feelings may be justifiably irrational, in the survival-strategy sense.