Sunday 26 December 2010

Danger Man


[Image: St Joseph and the Child, oil on canvas, via Wikipedia]

In arguing against religious influence in the public sphere I think it's important to distinguish between the right of the religious to express their opinion in a democratic society, which should be defended, and any undue influence their opinion carries, or privilege allowed to their opinion, which should be resisted.

The unfortunate case of St Joseph's Hospital in Arizona highlights the undue influence religion can have. The hospital chose to save a mother's life by aborting her child, and for this it has been condemned by the Bishop of Phoenix, who is withdrawing its Catholic status. This barmy and misogynist decision is, nonetheless, in accordance with Catholic doctrine. A hospital losing Catholic status should be of no consequence - there is no direct funding involved, although indirect funding may be affected - and yet letters have been exchanged between health authority and Diocese as if these things mattered. It's laughable that officials should be wasting their time responding to this deluded cleric. In his November letter to Catholic Healthcare West (CHW), which runs St Joseph's, the Bishop said this in response to CHW's decision to disagree with his judgement on the abortion case:
But this resolution is unacceptable because it disregards my authority and responsibility to interpret the moral law and to teach the Catholic faith as a Successor of the Apostles.
Note the capitals on 'Successor'; he's referring to the supposed Apostolic Succession claimed by the Catholic Church. We see the privilege demanded by this priest for his moral authority, over and above the rest of us. This is quite simply unacceptable in a modern liberal democracy; no one person and no organisation can claim moral authority just because.

Now, in one sense I would grant he has authority; he is responsible for his Catholic Diocese, so he has every right to give and withdraw Catholic status as he sees fit. However, writing to the responsible authority and badgering them to change their medical procedures because of some bogus authority *he* claims is fundamentally anti-democratic, and he should be roundly condemned for it. Unfortunately, too many people are still in thrall to his bogus authority, so by dint of popular support he still has undue influence. But one day such interventions in the running of our everyday institutions will be regarded as ludicrous, and no attention will be paid to them.

This is the goal of new atheists and gnu atheists alike, and it's an aim accommodationists should also be looking to achieve.


Sunday 5 December 2010

The Irrational Animal

[Image: Bertrand Russell, 1893, via Wikipedia]

Man is a rational animal - so at least I have been told. Throughout a long life, I have looked diligently for evidence in favor of this statement, but so far I have not had the good fortune to come across it, though I have searched in many countries spread over three continents.

This sentiment is often expressed (usually a little more forcefully) in pub conversations among the like-minded, referring to any number of examples, such as support for Chelsea FC, the charms of Katie Price, or the attraction of Strictly Come Dancing.

Pascal Boyer's Religion Explained, an excellent exploration of the anthropological evidence for the origins of religious thinking, includes a discussion of morality in chapter 5, Why do Gods and Spirits Matter?. He talks about moral reasoning and feeling:
We all have moral intuitions (“My friend left her purse here, I must give it back to her”), moral judgements (“He should have returned his friend’s purse”), moral feelings (“He stole his friend’s purse, how revolting!”), moral principles (“Stealing is wrong”) and moral concepts (“wrong”, “right”). How is all this organized in the mind? There are two possible ways of describing the mental processes engaged. On the one hand, moral judgements seem to be organized by a system of rules and inferences. People seem to have some notion of very general principles (e.g., “Do not harm other people unless they harmed you”; “Do unto others as you would have them do unto you”; etc.). These provide very general templates. If you fill the placeholders in the templates with particular values—the names of the people involved, the nature of the action considered—you reach a certain description of the situation with a moral tag. This is called the moral reasoning model. On the other hand, in many cases people just seem to feel particular emotions when faced with particular situations and particular courses of action. Without having a clear notion of why we have a particular reaction, we know that doing this rather than that triggers emotional effects that goad us in one direction rather than the other, or that make us proud to have done the right thing, guilty if we did not, revolted if others did not, etc. This is a moral feeling model.
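Boyer's "templates with placeholders" description is essentially computational, so it can be made concrete. Here's a toy sketch of the moral reasoning model (my illustration, with invented principles - nothing like this appears in the book):

```python
# Toy sketch of the 'moral reasoning model' (my illustration, not Boyer's):
# general principles act as templates; filling the placeholders with the
# particulars of a case yields a description with a moral tag attached.

def no_unprovoked_harm(case):
    # "Do not harm other people unless they harmed you"
    if case["action"] == "harm" and not case["provoked"]:
        return "wrong"

def return_property(case):
    # "My friend left her purse here, I must give it back to her"
    if case["action"] == "keep_property":
        return "wrong"
    if case["action"] == "return_property":
        return "right"

PRINCIPLES = [no_unprovoked_harm, return_property]

def moral_tag(case):
    """Run the case through each principle; return the first tag that fires."""
    for principle in PRINCIPLES:
        tag = principle(case)
        if tag:
            return tag
    return "no verdict"   # where the rules run out, feelings may take over

# The purse example from the quote, as a filled-in template:
print(moral_tag({"action": "return_property", "provoked": False}))  # right
print(moral_tag({"action": "harm", "provoked": False}))             # wrong
```

The interesting cases, as the trolley problems below show, are the ones where no template fires cleanly but we have a strong reaction anyway.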

All well and good, and certainly I've often thought that morality arose from a confluence of these two forces. There are some intriguing problems, however, that show us that our moral feelings can be a little inexplicable when we try to reason them through.

I was sad to hear of the recent death of Philippa Foot, the philosopher who first introduced the Trolley Problem to throw a spotlight on our moral intuitions. The basic formulation is:
A trolley is running out of control down a track. In its path are five people who have been tied to the track by a mad philosopher. Fortunately, you could flip a switch, which will lead the trolley down a different track to safety. Unfortunately, there is a single person tied to that track. Should you flip the switch or do nothing?
I think most people would flip the switch, but one can nurdle out the sort of moralist one is by the answer one gives. Some would consider themselves infected by the ongoing moral wrong if they participated, apparently. The thought experiment becomes more interesting when comparing different scenarios with the same participants and potential outcomes. Judith Jarvis Thomson suggested this:
As before, a trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by dropping a heavy weight in front of it. As it happens, there is a very fat man next to you - your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five. Should you proceed?
Brilliant! I think most would feel more resistant to this second act than to the first; perhaps it's the direct action on the person, or his proximity? Or maybe even his fatness. Anyway, the point is that we do have feelings about these scenarios that aren't necessarily easy to reason through. The naturalist considers such feelings to be evolved, combined with cultural reinforcement in childhood. Or perhaps the cultural reinforcement simply hooks into an evolved mechanism for distaste - Boyer discusses childhood development, so it's worth a read. But I was more intrigued by the proposed drivers of altruism:

Kin selection
Reciprocal altruism
Commitment gadgets

Kin selection and reciprocal altruism are fairly well known, but I was less familiar with commitment gadgets. These are fascinating, because they show how an irrational element of our behaviour could, perhaps, be beneficial. As Boyer points out:
People behave in altruistic ways in many circumstances where no common genes are involved and no reciprocation is expected. They refrain from extracting all the possible benefits from many situations. It would be trivially easy to steal from friends, mug old ladies or leave restaurants without tipping. Also, this restraint does not stem from rational calculation—for instance, from the fear of possible sanctions—for it persists when there is clearly no chance of getting caught; people just say that they would feel awful if they did such things. Powerful emotions and moral feelings seem to be driving behaviour in a way that does not maximize individuals’ benefits.
Boyer gives the example of a shopkeeper and his assistant: why does the assistant refrain from stealing from the till? If it were known that the shopkeeper would react intemperately and irrationally to such a thing, that would be a deterrent to the assistant. If the assistant thought the shopkeeper would reasonably see that he had simply lost some takings, and that a murderous attack would be a disproportionate response, the assistant might well risk the theft. But he is far less likely to take that risk if the downside might be his own murder. So we can see how irrational behaviour could be beneficial in a society that demands cooperation. Boyer says:
So to be known as someone who is actually in the grip of such passionate feelings is a very good thing as long as they are, precisely, feelings that override rational calculations.
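To put rough numbers on this (all of them invented for illustration, not Boyer's), here's a back-of-the-envelope expected-value sketch of the assistant's decision. A reputation for disproportionate rage flips the sign of the calculation:

```python
# Back-of-the-envelope sketch (all numbers invented): the assistant's
# expected value of stealing from the till, under two reputations.
GAIN = 100.0                  # value of the theft to the assistant
P_CAUGHT = 0.3                # chance the shopkeeper finds out

mild_sanction = 50.0          # a 'rational' shopkeeper writes off a small loss
violent_sanction = 10_000.0   # a shopkeeper known to fly into a murderous rage

ev_rational = GAIN - P_CAUGHT * mild_sanction       # 100 - 15 = +85: theft pays
ev_irrational = GAIN - P_CAUGHT * violent_sanction  # 100 - 3000 = -2900: it doesn't

print(f"vs rational shopkeeper:   {ev_rational:+.0f}")
print(f"vs irrational shopkeeper: {ev_irrational:+.0f}")
```

The catch, of course, is that the threat only deters if it's believed, which is exactly why a disposition that genuinely overrides rational calculation works better than a bluff.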
He goes on to describe the problem of honesty: it has a cost. We have many opportunities to be dishonest, but most of us don't take them. At the individual level this is irrational, but in a society where cooperation is paramount, having humans with strong feelings driving them to be honest becomes a net benefit.

One of the studies cited by Boyer in his discussion is Boyd and Richerson's Punishment allows the evolution of cooperation (or anything else) in sizable groups (1992), which includes this in the abstract:

We show that cooperation enforced by retribution can lead to the evolution of cooperation in two qualitatively different ways. (1) If benefits of cooperation to an individual are greater than the costs to a single individual of coercing the other n − 1 individuals to cooperate, then strategies which cooperate and punish noncooperators, strategies which cooperate only if punished, and, sometimes, strategies which cooperate but do not punish will coexist in the long run. (2) If the costs of being punished are large enough, moralistic strategies which cooperate, punish noncooperators, and punish those who do not punish noncooperators can be evolutionarily stable.
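As a crude illustration of their second result - not Boyd and Richerson's actual model, and with payoffs invented for the purpose - here's a toy simulation in which "moralists" both cooperate and punish defectors. Once moralists are common, defection stops paying:

```python
import random

# Toy simulation (mine, not Boyd & Richerson's model; payoffs invented).
# Two strategies: 'M' (moralist: cooperate and punish defectors) and
# 'D' (defect). Punishment makes defection costly once moralists are common.

B, C = 3.0, 1.0   # group benefit per cooperator, cost of cooperating
P, K = 4.0, 0.5   # cost of being punished (per punisher), cost of punishing (per defector)
N, GROUP_SIZE = 300, 10

def group_payoffs(group):
    """Payoff per member: shared benefit, minus cooperation,
    punishing and punishment costs."""
    moralists = group.count('M')
    defectors = group.count('D')
    shared = B * moralists / len(group)   # everyone gets the shared benefit
    pays = []
    for s in group:
        if s == 'M':
            pays.append(shared - C - K * defectors)  # cooperate, punish each defector
        else:
            pays.append(shared - P * moralists)      # punished by every moralist
    return pays

pop = ['M'] * 150 + ['D'] * 150           # start with punishers already common
for generation in range(100):
    random.shuffle(pop)
    fitness = []
    for i in range(0, N, GROUP_SIZE):
        fitness.extend(group_payoffs(pop[i:i + GROUP_SIZE]))
    floor = min(fitness)
    weights = [f - floor + 0.01 for f in fitness]    # shift so weights are positive
    pop = random.choices(pop, weights=weights, k=N)  # reproduce in proportion to payoff

print(pop.count('M'), "moralists out of", N)  # typically close to N
```

Start the same run with moralists rare and defection usually persists instead; the stability is frequency-dependent, which is why the abstract says moralistic strategies "can be" evolutionarily stable rather than that they always win.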

Coincidentally, I posted a comment on Russell Blackford's blog about Tom Clark on Sam Harris's The Moral Landscape (if you can follow that!). Tom Clark was kind enough to point out a problem with my use of retribution:
Mark Jones: “So we must (perhaps only for the sake of a workable society, which seems to be Dennett's thought) include some element of retribution, but craft the entire justice system in the full knowledge that a person is not a self-made thing.”
Keep in mind retribution is defined as punishing without *any* regard to good consequences, such as having a workable society. The retributivist has to justify punishing the offender as *intrinsically* good. If you can supply a convincing justification for that, I’ll sign up as a retributivist.
I replied:
Yes, retribution seems pointless from my world view, in isolation, but surely when one is deciding policy one cannot discount the fact that millions believe in its intrinsic good? Unfortunately, people behave based on what they believe rather than the truth of the matter. So my point was pragmatic rather than principled - it seems to me that in these circumstances retribution delivers an extrinsic good *because* many people believe in its intrinsic good - perhaps this then falls out of the definition of retribution? I'm not sure what it is then.
And isn't Dennett's quote pointing out that we tend to *feel* that retribution is good, rather than it *is* good? I would argue against retribution in *principle*, despite plotting revenge against my enemies.
It seems to me that retribution might be another example of our intuitions being out of step with our reasoning while still doing useful work in a cooperative society. Our demand for retributive justice appears irrational to me, but I can't help feeling it's a good thing! Maybe it's another of these beneficial irrational gadgets.


Before this realisation I'd assumed that irrationality was just a symptom of our imperfect processing, and that evolution had yet to fine-tune it. But now it's clear that some irrational feelings may be beneficial in themselves, and these will occasionally conflict with our better-reasoned positions. In the first instance, we can hardly be blamed for following our feelings. In the longer term, though, I think we all have a responsibility to justify our intuitions through reason, especially in the knowledge that our feelings may be justifiably irrational, in the survival-strategy sense.
