Thursday, 15 January 2015

A Compendium of Charlie Hebdo Posts


An index to some good pieces on Charlie Hebdo...

Pieces previously discussed, by Stephen Law and Kenan Malik.

Daniel Fincke's excellent analysis of the worst responses to CH:
Charlie Hebdo assumed disproportionate risk because they kept their head up where it was a target when the rest of the media ducked. That made it so that the extremists could say, “We can finish the job and make it so no one satirically depicts Muhammad if we can just pluck off those few heads remaining!”
Jason Rosenhouse, 1, 2 and 3:
Claiming that publishing satirical cartoons constitutes openly begging for violence is awfully close to claiming that violence is an appropriate response to blasphemy.
Jerry Coyne, Charlie Hebdo's cartoons weren't racist:
But of course even if CH was racist, sexist, and homophobic, that doesn’t excuse what happened.
Taslima Nasreen, Cartoonists of Charlie Hebdo and me:
The murder of so many talented people by a few insane and barbaric men to please their God and their prophet, in order to get into paradise, is an offense to human decency.
Accusations of racism are generally beside the point when discussing Charlie Hebdo and their attackers; the cartoonists were murdered for blaspheming the prophet, not for racism (or sexism or anything else).

Nabila Ramdani made this error on PM last night (@15mins), when she complained that the cover depicted above was stigmatising, but when challenged simply said this was because it depicted the prophet, and that offended her. How could a mere depiction of a supposed Mohammed (no-one knows what he looked like), which does not denigrate the prophet, stigmatise? Remember, it's the mere depiction that is blasphemy. She describes images like the one above as 'a vicious stereotype', and I fail to see how that is so. If the cartoonist had depicted the prophet as a Frenchman, would that avoid the charge of stereotyping, and therefore be acceptable to Ramdani? I doubt it. She constantly slides between her subjective offence at the image and the claim that the image is objectively racist.

This is unacceptable behaviour. The religious frequently claim offence at some arbitrary slight whilst denouncing non-believers as less than human and destined for some unpleasant eternal punishment. That should be considered more hateful than any blasphemy, but in the skewed worldview of the religiously sympathetic it is considered de rigueur.

Ramdani said that because people would respond negatively to blasphemy, this is a good reason not to publish; but if the charge of blasphemy is used as a tool to prevent examination of an ideology's core beliefs and to buttress its authority, that is the very reason why such images must be published.

Therefore, a discussion needs to be had about the role of blasphemy in western societies. My view is that the charge of blasphemy itself needs to become as taboo to make as blasphemous content currently is to the religious; just as we now wince at the n-word and frown on anyone who drinks and drives, so we should come to find it bizarre that anyone takes blasphemy seriously. Ramdani should be embarrassed to suggest that blasphemy is a good reason to curtail publication in a free press. Sadly we are far from that at the moment, when many countries still have blasphemy laws on the statute books.



Thursday, 8 January 2015

Blasphemy must be Normalised


Below is a good discussion between Douglas Murray and Maajid Nawaz. Murray says:
...the gunmen went in to assert Islamic blasphemy laws in a European city on twelve people in the office of Charlie Hebdo... to enforce a particular idea of blasphemy law.
If nothing else is clear, that surely is. This is not to say that global politics has not played its part in fomenting disaffection amongst Muslims in the world, but it is to make clear the proximate cause of the atrocity; if Charlie Hebdo had never committed blasphemy against the prophet, they would not have been targeted.

I have often puzzled at theists, and not a few atheists, who complain about those who ridicule religion. Theists consider their gods sacred and, unable to distinguish them empirically from imaginary friends, resort to high dudgeon in defence of their beliefs. An excellent tactic for the ambitious authoritarian belief system is to invoke horrendous punishment for any questioning of what it cannot prove empirically.

But ridicule is a perfectly normal part of social interaction. It can go too far, that much is true, but its power lies in its debunking of authority. Bogus authority needs something to raise its balloon, so it can climb in the basket and look down on us, and hot air and pomposity do the job nicely. Ridicule punctures that balloon of pomposity and releases the hot air, and since religions are amongst the most pompous institutions in the world, it is often an appropriate response to their teachings. Rather than grounds for ring-fencing religion, I see grounds for exposing it to more ridicule than other institutions.

I'm glad to see this sentiment in some responses to Charlie Hebdo. Here is Kenan Malik:
To ridicule religion and to defend free expression is not to attack minority communities. On the contrary: without doing both it is impossible to defend the freedoms of Muslims or of any one else. So, yes, let us challenge the Islamists and the reactionaries within Muslim communities. Let us also challenge the anti-Muslim reactionaries. But equally let us call the fake liberals to account.
And another excellent article by Stephen Law, who articulates my own feelings well:
Laughter may not be the only way of getting people to recognise the truth, but it’s sometimes the quickest and most effective way. Satire and mockery are tools that can be employed entirely appropriately, particularly if we’re criticising figures and institutions that maintain a faithful following in part by fostering attitudes of immense reverence and deference. What the pompous and self-aggrandizing fear most is that small boy who points and laughs - and whose name, in this case, is Charlie Hebdo.
Sadly the little boys and one girl who pointed with their pencils at the Muslim emperor were murdered for their ridicule.

It's good to hear Maajid Nawaz, a Muslim from the moderate Quilliam Foundation, saying this:
...the editors need to get together ... to share the risk, enough is enough, satire plays an important role in democratic societies, and freedom of speech an even more important role...
Quite right; the appropriate response to the Charlie Hebdo attack is for everyone to blaspheme, to normalise blasphemy, to debunk it and to own it. Blasphemy is no reason to harm anyone, and ridicule does not need to tread any more carefully around religion than it does around any other subject.



Sunday, 21 December 2014

Angry at God


Philosopher Stephen Maitzen has an excellent piece, called Perfection, Evil, and Morality, which will appear in a forthcoming volume edited by James P. Sterba. It details some of the reasons atheists find the existence of a perfect god incompatible with the suffering all too evident in the world around us. In particular it highlights how obligated such a perfect being would be to prevent the suffering we see. I recommend it.

One response he has seen suggests that morality has a significance in and of itself. But as he points out it's not something that needs to exist or is worth saving, because it only exists because suffering exists. No suffering, no morality. There is no morality required in a universe populated by senseless rocks.

At the end Maitzen mentions something else worth highlighting: the notion that atheists complaining about the problem of evil are somehow "angry at God". Maitzen writes:
Living in a society still dominated by an inherited theistic outlook, atheists like me are not infrequently accused of being “angry at God” and venting our anger in the form of arguments such as those I’ve offered here. The accusation is patronizing, question-begging, and false. Any atheist who can think straight knows that anger at God makes no sense. I’m no more “angry at God” than I’m angry at Santa Claus for failing to relieve me of the burden of Christmas shopping. If I’m angry at anyone, it’s at those of my fellow human beings who (to extend the metaphor) would say morally outrageous things in order to defend the Santa Claus story as true and to excuse Santa Claus for repeatedly failing to do what the story makes it clear he ought to do. 
That summarises well how absurd that particular accusation seems to me, and I think he is right to blame some of it on an 'inherited theistic outlook'. It still puzzles me that theists don't see how abhorrent their attitude to suffering is. For just one example, consider the discussion thread here. A theist called CodyGirl824 (I presume not a Poe) says, in response to an atheist discussing the problem of evil:
The fact that only humans are capable of evil, because evil requires formulation of intent and acting on that intent. So every evil act is an act of free will. Without free will, there can be no evil. If God were to choose to "prevent" every evil act of any and every human being, He would take away all free will, since God can't just intervene when an evil act is about to occur without obliterating that evil-doer's free will, and the consequences thereof. Without free will, there can be no love. Love is God's purpose in creation. (The Bible tells us so). So we people of faith understand perfectly why there is evil in the world as it exists. As we learn from the metaphorical, allegorical, mytho-poetic narrative of Adam and Eve, all acts of evil are acts of disobedience of God. We really do have all of the atheists' and naturalists objections covered. They simply refuse to recognize this fact.
...and continues in much the same vein despite her many errors being pointed out repeatedly. A sterling job done by her responders on that thread.

Now, to be fair, we cannot judge all theists on one rather obtuse example, but the callousness apparent in this response does seem to recur in many a theist's treatment of the problem. What I find callous is the acceptance of suffering in their accounts, when we are taught, and perhaps know, that we should ameliorate it. They are explaining why suffering is necessary, when we (surely) know that it's not.

At least, a suffering-free world appears to be logically possible, and it is surely what is anticipated in heaven, or what existed before this vale of tears was supposedly created. One would expect our world to reflect its maker, if its maker were perfect, and it simply does not. This doesn't strike me as a particularly difficult notion to grasp, and repeated attempts at theodicy suggest that many theists do grasp it.

In the end something must give; their god's perfection, or the wrongness of suffering. Too many refuse to give up their god's perfection.


Monday, 29 September 2014

The Armstrong Paradox

Stephen Law has written an open letter to Karen Armstrong, in response to this article in the Guardian. I think he addresses well Armstrong's mistaken idea of secularism, which is, as Gandhi knew, a friend to religion, not an enemy. This secularism, or Secularism, as Law has it, is a pluralist vision enabling a society in which many flowers may bloom.

But I wanted to draw attention to the secularism that Armstrong instead paints. This "aggressive secularism", she suggests, may be more to blame for the violence we see than religion is. She ends her piece:
Many secular thinkers now regard “religion” as inherently belligerent and intolerant, and an irrational, backward and violent “other” to the peaceable and humane liberal state – an attitude with an unfortunate echo of the colonialist view of indigenous peoples as hopelessly “primitive”, mired in their benighted religious beliefs. There are consequences to our failure to understand that our secularism, and its understanding of the role of religion, is exceptional. When secularisation has been applied by force, it has provoked a fundamentalist reaction – and history shows that fundamentalist movements which come under attack invariably grow even more extreme. The fruits of this error are on display across the Middle East: when we look with horror upon the travesty of Isis, we would be wise to acknowledge that its barbaric violence may be, at least in part, the offspring of policies guided by our disdain.
While Armstrong concedes in a number of places that there are religious elements in the causes of violence, she is keen to highlight the phrase "the myth of religious violence", as if there were no such thing as religious violence. Here is a quote from the article which I think sums up much of Armstrong's thinking:
In almost every region of the world where secular governments have been established with a goal of separating religion and politics, a counter-cultural movement has developed in response, determined to bring religion back into public life. What we call “fundamentalism” has always existed in a symbiotic relationship with a secularisation that is experienced as cruel, violent and invasive. All too often an aggressive secularism has pushed religion into a violent riposte. Every fundamentalist movement that I have studied in Judaism, Christianity and Islam is rooted in a profound fear of annihilation, convinced that the liberal or secular establishment is determined to destroy their way of life. This has been tragically apparent in the Middle East.
This highlights a paradox in Armstrong's views, which goes something like this:

  1. Religious violence is a myth.
  2. Aggressive secularism is responsible for the violence.
  3. Secularism targets religion, by separating it from politics.

If Armstrong wants to maintain that religious violence is a myth, attacking secularism is hardly the way to show it: if (as she seems to think, contra Law) secularism is cruel, violent and invasive to religious sensibilities, and religion answers with a violent riposte, then the violence is religious after all; if religion were not an engine of violence, secularism would have nothing to provoke. To be fair, Armstrong points out religion's close association with politics in many of its forms, but this just re-iterates the issue for secularists: some religions want to monopolise the body politic, and in a pluralist society that is undemocratic. This would be so for any ideology that looks to dominate (such as communism); but religion is the most prevalent form of this sort of authoritarianism, and also a privileged one. Religions have co-opted sacredness to inoculate themselves against criticism; some more successfully than others.

Now, I suspect that Armstrong does think there is some religious element in much of the violence attributed to religion, but that she thinks it is overstated. If so, her mission would be better served by acknowledging more clearly that religion is to blame for some of it, and by avoiding phrases such as "the myth of religious violence".

Her article seems historically well informed but is fatally flawed by this constant need to deflect the proper apportionment of blame away from religion and onto all the other, admittedly diverse, causal factors of violence. A reasonable modern atheist doesn't look to blame religion for all society's ills; she looks to assign the level of blame that properly attaches to religion but which for centuries has been diverted by religious privilege. Sadly some people, like Armstrong, still work to maintain that religious exceptionalism.



Saturday, 27 September 2014

Rawls and Nozick and Distributive Justice


Distributive justice attempts to answer the question of who gets what in society. To illustrate, let's consider a typical question that a theory of distributive justice should hopefully answer: are we entitled to the full rewards of exercising those talents we just happen to have been born with?

To answer this, I need, for any rewards I receive, an account that justifies that change in holdings, from others to me. The distributive justice debate is the search for such an account.

John Rawls and Robert Nozick provide the background to the debate. Rawls first proposes a hypothetical ‘original position’ (OP) in which we should putatively establish ‘the principles of justice for the basic structure of society’. This position is one in which ‘no one knows his place in society, his class position or social status, nor does anyone know his fortune in the distribution of natural assets and abilities, his intelligence, strength and the like’. Such a position, Rawls thinks, will universalise a notional rational person and remove self-knowledge so that neutral judgements are made. The concept of rationality he employs is that of ‘taking the most effective means to given ends...’, highlighting that the outcome, the end pattern of distribution, is important to Rawls.

By hypothetically removing arbitrary advantages, Rawls establishes a fair OP from which he thinks a rational person will be forced, in a sense, to establish basic structures. The two principles he proposes reflect this: the first is ‘equality in the assignment of basic rights and duties’ and the second, dubbed the difference principle (DP), holds that ‘social and economic inequalities...are just only if they result in compensating benefits for everyone, and in particular for the least advantaged members of society’ (ibid). He does not think that a person in the OP could rationally sacrifice themselves for the greater good, and these principles reflect that rejection of utilitarianism. He suggests an almost Kantian imperative: that everyone would sign up to structures that benefited all if they were behind this ‘veil of ignorance’. It is Kantian in that the principles, Rawls claims, command rational assent. However, reason has been deployed in self-interest (though self-ignorant) and not to determine duty, per Kant.
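As an aside, the DP is often glossed as a maximin rule: among feasible basic structures, prefer the one under which the least advantaged do best. The sketch below is my own illustrative simplification of that gloss, not Rawls's own formulation, with made-up numbers standing in for bundles of primary goods.

```python
# A maximin gloss on Rawls's difference principle (my simplification, not
# Rawls's own formulation): among candidate distributions of primary goods,
# prefer the one under which the least advantaged position is best.

def difference_principle_choice(distributions):
    """Return the distribution whose worst-off share is largest."""
    return max(distributions, key=min)

candidates = [
    [10, 10, 10],   # strict equality
    [12, 20, 40],   # unequal, but the worst-off do better than under equality
    [8, 50, 90],    # greater total wealth, but the worst-off do worse
]

print(difference_principle_choice(candidates))  # [12, 20, 40]
```

On this reading an inequality is acceptable precisely because it leaves the worst-off better placed than strict equality would.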

So we have a clear statement by Rawls that society’s basic principles should be (hypothetically) established without knowledge of our natural assets and abilities. Equal pay legislation, for example, is consistent with Rawls’s two principles. Outcomes are important, and Rawls places the first egalitarian principle ‘lexically prior’ to the second (we consider equality before we apply the DP), so it seems to follow that a man should not get paid more than a woman, all other things being equal, gender being a natural accident. But then, equal pay legislation for different natural talents would seem to follow too from Rawlsian principles. Should it be illegal, for example, to reward Wayne Rooney more than me (if we both happen to play up front for Manchester United!) on the grounds that he is more talented?

Rawls writes about nullifying ‘the accidents of natural endowment’ ‘as counters in quest for political and economic advantage’, and Rooney is surely no more responsible for his natural talent than I am for my lack of it. Certainly he may have spent more time honing his skills growing up, but he is not ultimately responsible for his ability to work at his skills, or for being in the position to develop them. But do Rawls’s principles explicitly support these conclusions and give us a particular answer to our question?

It’s not clear that they do. Moving from a fairly equal distribution to a more unequal distribution could be consistent with Rawls’s principles, so long as everyone benefits. Pay dictated by demand for certain natural talents could well deliver such outcomes. Further, the principles say little about who is entitled to what. They are indifferent to whether it is I or Wayne Rooney who earns a footballer’s salary, so long as the distributions are structurally equivalent, so it’s difficult to establish any particular individual entitlement from the two principles.

Nozick identifies this problem too; ‘one traditional socialist view’ argues for workers’ entitlement to ‘the full fruits of their labour’, he writes. But what he calls ‘time-slice’ distributions are indifferent to who has what: they are judged on their structure as they stand, regardless of the history behind the distribution. This appears to be the Rawlsian approach; remember, Rawls was interested in outcomes, the end pattern of distribution. Nozick's point is that even left-wingers think workers are entitled to the full rewards of exercising their talents, so they too recognise that history matters to determining a just distribution. He argues for a ‘historical’ principle to justify entitlements, which contrasts with Rawls’s ‘current time-slice principles’ of distribution outcomes. If original acquisitions and subsequent transfers are in ‘accordance with the principle of justice’ then the distribution is just. If instead we concern ourselves with outcomes, initially just patterns of distribution followed by just transfers can result in unjust patterns, and constant state interference will then be required to make corrections, violating individual liberty and autonomy.

So Nozick rejects various patterned principles of distributive justice, such as moral merit, need, effort and indeed ‘natural dimension’; he simplifies his ‘entitlement conception’ to ‘From each as they choose, to each as they are chosen’. So long as acquisitions and transfers meet this principle, the resulting (unpatterned) distributions are just. In reality no-one’s existing holdings will be just under the principle (because they will not have been acquired in accordance with it), but, to be fair, Nozick is only arguing in principle.

Consider Nozick’s example of talented basketball player Wilt Chamberlain: if we imagine an ideal distribution of wealth at the start of a basketball season, and during the season people freely choose to upset that ideal distribution by each paying a small amount to see Chamberlain play, how is that new distribution unjust, even if the worst off are now worse off and Chamberlain is now wealthy? If a just starting position is followed by just transfers, a just distribution surely results, Nozick concludes, and Chamberlain is fully entitled to what others have chosen to pay him.
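To make the arithmetic of the example concrete, here is a minimal sketch of my own, with hypothetical figures loosely based on Nozick's (a small ticket premium paid by a great many fans), showing an equal starting distribution transformed purely by voluntary transfers:

```python
# Illustrative sketch of the Wilt Chamberlain argument (hypothetical figures,
# loosely based on Nozick's own: a 25-cent premium paid by a million fans).

def season(fans=1_000_000, start_holding=100.0, premium=0.25):
    # D1: an (assumed) just, equal starting distribution.
    chamberlain = start_holding
    typical_fan = start_holding

    # Each transfer is freely made by the payer from justly held money, so on
    # Nozick's principle of justice in transfer it preserves justice.
    typical_fan -= premium           # what each fan voluntarily gives up
    chamberlain += premium * fans    # what Chamberlain receives in total

    return {"chamberlain": chamberlain, "typical_fan": typical_fan}

# D2 is highly unequal, yet every step from D1 to D2 was voluntary; Nozick's
# question is at which step the distribution could have become unjust.
print(season())  # {'chamberlain': 250100.0, 'typical_fan': 99.75}
```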

Entitlement, then, is left to market forces; just desert is whatever anyone chooses to pay you from their justly acquired holdings. One is not so much being rewarded for one’s talent as being rewarded for exploiting one’s talent. People with no useful skill but a talent for persuading people they want to part with their money would be just as entitled to their rewards as someone with a talent that adds value. But this manipulation casts doubt on just how free people’s choices are in the market place.

Nozick’s maxim has at least three problems, I think.

First, whilst an opening distribution of holdings may be just, that does not necessarily mean people have an unfettered right to do with their holdings as they will. We grant that people can own land, but if they start to pursue a scorched earth policy which renders that land unusable to anyone else, we would consider that unacceptable. In other words, in many holdings there is still a common interest.

Secondly, the opportunity afforded to individuals by societal infrastructures is not recognised in unfettered transfers of holdings. If Wilt Chamberlain had been born at a time when basketball was not organised, he would have been unable to exploit his talent. The advantages afforded by this commonwealth deserve to be recognised when we analyse distributions.

Thirdly, recognising self-ownership does not mean we have precisely the same rights over the products of our talents as we have over our talents.

To illustrate, consider the ‘eye lottery’ thought experiment, suggested by Jo Wolff, which exploits the libertarian notion that the same property rights attach to our holdings as to our natural endowments. It imagines a state where a minority of people are born without eyes, and people with two eyes are forced to give up one eye to benefit the blind. This evokes a visceral objection to property redistribution, but it is disanalogous to the redistributive project in two key ways: it does not recognise any common interest in holdings, and it does not differentiate between natural endowments and the rewards of those natural endowments.

More analogous would be to imagine a state where anyone born short-sighted is issued with spectacles. Through accidents of historical distribution some people have ended up with many spectacles – altruistic people with better eyesight have passed their spectacles on to the more short-sighted for ‘spares’, people have inherited specs, and so on. When people are born with poor eyesight, those with spares are forced to give them up. The spares provide no additional benefit to the holder (they have enough for their own use and cannot wear them all at the same time) and were state supplied in the first place, and no violation of bodily integrity occurs. This spectacle lottery seems much less objectionable, and more justified, than the eye lottery, so objections to redistribution should likewise be tempered.

Nozick’s principle justifies transfers that could impoverish many – the untalented, or, rather, those who are not good at selling their services – which is unpalatable to those for whom Rawls’s protection of the worst off is important. Nevertheless, Nozick’s linking of people’s choices to entitlement is reminiscent of some conceptions of moral responsibility, which recognise that while rational free agents act deterministically, and so are not ultimately responsible for their behaviour, they are nevertheless individually responsible if they make intentional choices uncoerced by external factors. Rawls recognises this point when he says that, while people may not deserve their place in the ‘distribution of natural endowments’, they can still be credited for using them:
A basic structure satisfying the difference principle rewards people, not for their place in that distribution [of native endowments], but for training and educating their endowments, and for putting them to work so as to contribute to others’ good as well as their own. (p.75)
That people should not be rewarded for their place in the distribution of natural endowments suggests Rawls would answer our opening question in the negative; he does, however, link intentional acts to entitlement, and if ‘putting talent to work’ is ‘exercising’ it, then Rawls might instead answer in the affirmative. But while rewarding people for exploiting their talent satisfies the DP (echoes of Nozick there), it is hardly mandated by it, and since equality of rights and duties also takes priority over the DP, such entitlement does not follow uncontroversially from his principles.

Rawls recognises that his two principles may not be the final word, but expects a ‘reflective equilibrium’ eventually to settle them. Despite eschewing utilitarianism, his principles have a forward-looking consequentialist flavour, yet massaging outcomes disrupts even the thin notion of individual desert that he allows. Under Nozick, anyone freely paid from holdings justly acquired would be fully entitled to their rewards (though even he allows that a minimal state will require some small percentage of people’s earnings). Nozick’s criticisms of patterned distributions are well made, and his backward-looking historical account provides a simple way to establish entitlement, but it acknowledges little debt to community, and, practically, it may be impossible to establish any holdings that are justly acquired under his principle.

So I find neither account very satisfactory, but lean to the egalitarian because outcomes ultimately must matter more than processes. I can take enough from both accounts to conclude that I am not entitled to the full rewards of exercising those talents I just happen to have been born with, but am entitled to some of them.

Bibliography:

Cottingham, J. (ed.) (2008) Western Philosophy: An Anthology, Oxford, Blackwell Publishing.

Pike, J. (2011) Political Philosophy (A222 Book 6), Milton Keynes, The Open University.

Rawls, J. (2001) Justice as Fairness: A Restatement, Cambridge, MA, Harvard University Press.


Sunday, 10 August 2014

Some Good Old New Atheists


I've written previously that New Atheists are not new, despite their current negative reputation in the media, which appears to be driven by an overweening respect for religion. I've listed examples of older atheistic writing by luminaries such as David Hume and Bertrand Russell which show that a healthy disrespect for many religious ideas is nothing new.

In a recent issue of Philosophy Now magazine, Barbara Smoker (who pre-dates Dawkins et al. by a number of years!) writes on the mystery of existence (behind a paywall), but along the way talks about her atheism. For example:
My years of mental turmoil before managing to rid myself of childhood theistic indoctrination entailed sufficient search – through thinking, reading, listening and debating – to last me for life. We are not expected to keep a lifelong open mind on such hypotheses as the existence of Santa Claus and the Tooth Fairy, so why should an exception be made in the case of God? That hypothesis, like those other stories for children, is merely asserted, without due evidence, by dissembling, or deluded, authorities.
Strident.

She mentions two priests who wrote about their atheism. Jean Meslier (1664-1729) wrote a Testament that was only discovered after his death. In it he writes:
Perhaps you will think, my dear friends, that in such a number of fake religions in this world my intention was at least to exclude from that number the Catholic religion, which all of us profess, and which we say to be the only one which teaches pure truth, the only one which acknowledges and worships the true God as it should, and the only one who leads men on the true way to salvation and eternal happiness. But open your eyes, my dear friends, open your eyes and get rid of everything that your pious and ignorant priests, or your mocker, self-seeking doctors, show zeal in telling you and in having you believe, under the fake pretext of the infallible certainty of their would-be sacred and divine religion. You are not more beguiled nor more abused than those who have been abused and beguiled the most. You are not less in error than those who have been the deepest in it. Your religion is not less vain or superstitious than any other; it is not less fake in its principles, nor less ridiculous and absurd in its dogmas and maxims. You are not less idolatrous than those whom you are not afraid to blame and condemn for their idolatry. The ideas of pagans and yours only differ by their name and appearance. In one word, everything your doctors and priests preach with so much zeal and eloquence about the splendour, the excellence and the holiness of the mysteries that they make you worship, everything they tell you so solemnly about the certitude of their alleged miracles, and everything they recite with so much self-confidence concerning the magnificence of the rewards of heaven, and touching the dreadful castigations of hell, are nothing but delusions, errors, lies, fictions and impostures. 
'Nothing but delusions, errors, lies, fictions and impostures'. That could have been written by Dawkins.

Then Smoker mentions Joseph McCabe (1867-1955), a priest turned atheist who wrote much on the dangers of religion. In his pamphlet From Rome to Rationalism, published in 1897, he explains why he left the church. Here he talks about faith in God:
The majority of men, little addicted to introspection, can give no reason, or only mutter a few superficial and crudely assimilated phrases, when asked for the motive of this, their fundamental belief. A theologian would say that God has provided a mysterious power, called faith, that links securely the minds of the unthinking majority to their belief. A more matter-of-fact observer would see either that they never reflect on the fact that they take this traditional doctrine with little or no proof, or that, from an instinctive feeling of the difficulty of the problem, they readily acquiesce in the most superficial arguments, or, from a confusion of the provinces of faith and reason, they consider it unlawful to indulge in speculation on the problem at all. But the more reflective, and their number is legion now, know that faith - the acceptance of a doctrine on divine authority - necessarily presupposes a knowledge of God, acquired and verifiable by rational methods.
It's plain that he sees a number of different forms of faith (as I recently discussed). McCabe writes well on the moral argument:
On the one hand, we have the inherited experience of innumerable ancestors and the deeply impressed associations of our early training pointing out certain lines of conduct as moral; on the other hand, we have the consciousness of our connection with a society from which our life derives half its happiness, the knowledge that each immoral act and habit tends to undermine a state of society which it is our supreme interest to support and develop. A mind withdrawn from the influence of religion feels no more than this; but this covers the whole ground of the moral code, and it is all we have to explain in conscience. We need no higher legislator to classify our actions, and to impose upon us a sense of obligation to abstain from immorality.
On Catholicism he writes:
But Roman theology is a masterpiece of ingenuity in exegetics. From Christ’s simple words, “Whose sins you shall retain they are retained,” the whole hideous system of the Confessional is evolved; from a medicinal remark of James comes the curious dogma of Extreme Unction; from some strong language of the sorely-tempted Paul is pressed Original Sin and Baptismal Regeneration; from the farewell supper of Christ the extraordinary doctrines of the Eucharist and the Mass, with all their complicated ceremonies; and the Immaculate Conception is proved from a stray remark in the Genesis version of an old Babylonian legend. Scripture must not be taken alone, they tell us; tradition embodies revelation with equal authority. But what is tradition? From the heterogeneous contents of the writings of the Fathers what are we to choose as revealed? Well, the Pope is infallible; but it turns out that even he has no inner revelation or positive assistance in the matter; he must be convinced from Scripture and tradition like ourselves, and it is extremely difficult sometimes to see the connection between his dogmatic conclusions and the scriptural data he alleges for them.
More stridency! Sadly, he paid a price for his apostasy:
With the sword of Damocles overhead, I have pursued my inquiry to the end, and avowed my convictions. And for that I stand before the world branded as a criminal by the Church of Rome. My dearest friends have abandoned me as though I were stricken with leprosy, if they did not indeed turn upon me with bitter and insulting language, for I was an apostate, and my word availed nothing against my calumniators. And this is an age of light and freedom and Christian charity. May the days soon come in which men will agree to differ on intellectual questions, and unite in social activity; when social ostracism will not be the inevitable consequence of honesty.
Every time people stand up against religion, a prejudice, perhaps born of an ingrained and unwarranted respect for religion, rears its head, and those who dare to state their objections to religious institutions and thought are demonised. This has been the fate of the new atheists, despite there being nothing about their statements and tone that is peculiar to them, as these extracts from venerable older non-believers show.



Sunday, 3 August 2014

Does Falsifiability Identify Science?



There have been some interesting discussions of falsifiability and science recently: Sean Carroll's essay Falsifiability is a good starting point:
In complicated situations, fortune-cookie-sized mottos like "theories should be falsifiable" are no substitute for careful thinking about how science works. 
I tend to agree. Alan Sokal wrote a series of articles at Scientia on the definition of science:
The bottom line is that science is not merely a bag of clever tricks that turn out to be useful in investigating some arcane questions about the inanimate and biological worlds. Rather, the natural sciences are nothing more or less than one particular application — albeit an unusually successful one — of a more general rationalist worldview, centered on the modest insistence that empirical claims must be substantiated by empirical evidence.
Professor of Astrophysics Coel Hellier wrote:
It is the model that needs to be falsifiable, not every statement deriving from a model. Thus falsification remains important in science, but it is wrong to reject an idea such as the multiverse owing to an over-simplistic application of falsifiability.
And Massimo Pigliucci wrote in answer to the hypothetical proposition 'There is a specifiable “scientific method” that possesses some definable core or essential steps, used by all genuine sciences':
No. Philosophers of science have looked for just such an algorithm (e.g., the famous “hypothetico-deductive” method [1], or Popper’s falsifiability [2]) and have come up short. The post-Kuhn [3] consensus is that there is no such method, and that science helps itself to a loosely defined toolbox of methods, heuristics and intuitions. 
So how valid is Popper's criterion?

In the mid-twentieth century, when Karl Popper was writing, science was identified with the empirical method – observations, and inductive inference from them. He noticed how this process could be abused by theories he came to think were not science; he wanted ‘to distinguish between science and pseudo-science’ (while ‘pseudo-science’ is different from ‘non-science’, since Popper employs the former I shall use it here synonymously with ‘non-science’). Popper thought that the followers of Marx, Freud and Adler could fit any conceivable evidence into their worldviews. Furthermore he agreed with David Hume that inductive inference – the assumption that the past is a guide to the future – could not be logically justified, and so induction does not guarantee knowledge.

To differentiate knowledge-delivering science, then, from non-science, Popper suggests the criterion of falsifiability. By positing a principle that might hold universally (a generalisation) we can establish a deductive argument that allows us to refute it, should observation disconfirm it. The argument takes the form of modus tollens, denying the consequent:
If T (theory) then P (prediction)
Not P
Therefore, not T
Predictions met would corroborate the theory, but in deductive terms to conclude that T is true from repeated Ps would be the fallacy of affirming the consequent. So we can never prove a theory true by this construction; we can only prove it false. Popper says:
...the criterion of the scientific status of a theory is its falsifiability, or refutability, or testability...
So I think it's fair to say that he thinks falsifiability is both necessary and sufficient for scientific status. It does not matter how conjectures are formed, whether by induction or simple invention; testing them will eliminate the false conjectures. Indeed Popper also thinks that conjectures arise from our expectations prior to observation, a reversal of the traditional view.
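As a toy illustration of that asymmetry (my own sketch, not Popper's), a universal conjecture can be refuted by a single counter-instance, whereas any number of confirming instances only leaves it 'not yet falsified':

```python
# Toy illustration of the asymmetry between falsification and corroboration:
# one counter-instance refutes a universal generalisation (modus tollens),
# but no finite run of confirmations can prove it true.

def test_conjecture(prediction, observations):
    for obs in observations:
        if not prediction(obs):        # Not P ...
            return "refuted"           # ... therefore not T
    return "not yet falsified"         # corroborated so far, never proved

all_swans_are_white = lambda swan: swan == "white"

print(test_conjecture(all_swans_are_white, ["white"] * 1000))
# not yet falsified
print(test_conjecture(all_swans_are_white, ["white"] * 999 + ["black"]))
# refuted
```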

I’m tempted to look for a ‘science’ that is not falsifiable to refute Popper, but, although his thoughts were prompted by Einstein overturning Newton’s theories, there is something normative about his proposal; he is saying that this is what science should be, and, if it’s not, it should not be called science. So if I offered a science that was not falsifiable, Popper would say it should not be called 'science'.

Thomas Kuhn, in his analysis in The Structure of Scientific Revolutions, defines ‘normal science’ as ‘research firmly based upon one or more past scientific achievements that some particular scientific community acknowledges for a time as supplying the foundation for its further practice’. Kuhn is describing the mundane day-to-day science he thinks more accurately represents the bulk of scientific work; in his view Popper’s ‘critical attitude’ is restricted to those rare periods in science when revolutionary changes occur which alter the way the scientific community looks at the world; he even thinks this is more philosophy than science. In his response to Kuhn, Popper agrees that ‘normal’ scientists exist, but insists they are ‘badly taught’, lacking the critical attitude. He identifies the critical attitude with the ‘scientific attitude’ and its converse, the dogmatic attitude, with the pseudo-scientific, so he thinks normal scientists are not doing science at all during the ‘normal science’ phase, and that it should not be called ‘science’.

Conversely, though, there are disciplines which we don’t include in the sciences but which are falsifiable; astrology is a discipline that both sides have used to illustrate their views. Popper cites it as a pseudo-science built on a ‘mass of empirical evidence’, and Kuhn agrees it is not a science but notes that it has made many predictions, which have simply proved to be false. On this account Popper should say that astrology is a science, because it has been refuted, proving its falsifiability; astrology’s (generally agreed) non-science status suggests there is something more to being a pseudo-science than mere non-falsifiability. However, astrologers offer reasons for predictive failure; see Astrology on the Attack in this article, which shows that astrologers can use the same language to explain anomalies as scientists do, but which also echoes the language Popper found so objectionable in the followers of Marx, Freud and Adler, so perhaps Popper would continue to deny astrology's falsifiability.

To consider what this ‘something more’ could be, I will discuss a number of problems I see with Popper’s criterion: a) it does not escape the assumption of uniformity; b) it does not recognise different levels of predictive power; and, c) it undervalues supporting evidence.

a) Assumption of uniformity

Hume notes that we have a habit, or custom, of assuming that things will tend to stay the same. Peter Lipton draws a distinction between this sort of inductivist and the opposite sort to illustrate Hume's issue with induction:
To illustrate the problem, suppose our fundamental principle of inductive inference is ‘More of the Same’. We believe that strong inductive arguments are those whose conclusions predict the continuation of a pattern described in the premises. Applying this principle of conservative induction, we would infer that the sun will rise tomorrow, since it has always risen in the past; and we would judge worthless the argument that the sun will not rise tomorrow since it has always risen in the past. One can, however, come up with a factitious principle to underwrite the latter argument. According to the principle of revolutionary induction, ‘It’s Time for a Change’, and this sanctions the dark inference. Hume’s argument is that we have no way to show that conservative induction, the principle he claims we actually use for our inferences, will do any better than intuitively wild principles like the principles of revolutionary induction. Of course conservative induction has had the more impressive track record. Most of the inferences from true premises that it has sanctioned have also had true conclusions. Revolutionary induction, by contrast, has been conspicuous in failure, or would have been, had anyone relied on it. The question of justification, however, does not ask which method of inference has been successful; it asks which one will be successful. 
The point is that, per Hume, we have no more justification for expecting success if we are conservative inductivists than if we are so-called revolutionary inductivists.

But imagine that the world actually has conformed (and will conform) to the revolutionary inductivist principle ('It's Time for a Change'). For the deductive argument we must first propose a generalisation. What would a generalisation look like if we lived in Lipton’s ‘revolutionary inductivist’ world, in which nature is not uniform? All blackbirds are any colour? What goes up will sometimes come down and sometimes not? A non-conservative-inductivist world renders the conjectures we make all-inclusive, so it would be impossible to eliminate potential outcomes by falsification – there would be no events that fall outside the conjectures. Maybe Popper would say the conjectures should exclude the uniform – All blackbirds are any colour but their current one? What goes up will do whatever it did not do last time? I’m not sure why we should exclude the possibility of things staying the same in a revolutionary inductivist world, so this counter doesn’t convince. Falsification in such a world would mean uncovering an instance where the status quo is maintained. Positing a generalisation describing constant change is paradoxical, and casts doubt on just how far Popper has escaped the problem of induction with his deductive argument.

b) Variations in predictive power

Candidate theories could be divided into the following categories:

1) The theory cannot make predictions.
2) The theory can make predictions, but they are not met.
3) The theory can make predictions, but some are met and some aren’t.
4) The theory can make predictions and all are met.

As mentioned previously, Popper’s criterion is a yes-or-no demarcation, so for him any candidate in (1) is pseudo-science and any in (2), (3) and (4) is science. This means that a theory with no corroborating evidence but with predictive power is as scientific as one whose every prediction (so far) provides corroborating evidence. To be fair, Popper does accept there is a difference; he says ‘some theories are more testable...than others; they take, as it were, greater risks’, but his criterion makes no allowance for this granularity within theories and, indeed, disciplines.

Candidates for scientific status range from the so-called ‘hard’, or established sciences, like physics, chemistry and biology, to ‘soft’ sciences like psychology and sociology, and then to those outside the scientific fold currently: astrology, homeopathy and creationism, for example. The hard sciences are accepted as scientific, but even within these disciplines theories can arise that cannot make predictions. Famously Popper said that ‘Darwinism is not a testable scientific theory’. He later retracted this, but it highlights the difficulty in drawing a sharp distinction; it was not at all clear what predictions Natural Selection could make. The immense number of variables which come to bear in the real world makes prediction difficult. To predict that speciation will occur is plainly not specific enough, but to predict particular speciations in mammals, for example, would require an almost omniscient knowledge of environmental changes and thousands of years within which to experiment.

As a further example, climate science might predict stormier summers, all other things being equal, but that prediction could be disrupted by a freak volcanic eruption. Avoiding the consequences of a failed prediction by appealing to such variables has much the same appearance as the behaviour Popper found so unsatisfactory amongst the followers of Marx, Adler and Freud. But for disciplines that operate at a higher level of complexity than the ‘harder’ sciences (where it is easier to isolate variables), this is a reasonable explanation for failed predictions, and may not point to unscientific behaviour. Carroll says this about modern cosmology:
We can't (as far as we know) observe other parts of the multiverse directly. But their existence has a dramatic effect on how we account for the data in the part of the multiverse we do observe. It's in that sense that the success or failure of the idea is ultimately empirical: its virtue is not that it's a neat idea or fulfills some nebulous principle of reasoning, it's that it helps us account for the data. Even if we will never visit those other universes.
Cutting-edge science often flirts with untestability. In short, sometimes it’s not clear whether a theory is unfalsifiable in principle or just in practice, while Popper’s discussion of Freud and Adler indicates he is targeting theories he thinks are not falsifiable in principle. Perhaps it cannot even be established that a theory is unfalsifiable in principle.

c) Supporting evidence

By ‘solving’ the induction problem by simply removing induction, Popper loses a valuable way of distinguishing between theories. Consider this thought experiment: you are going on holiday and when you arrive at the airport you have a choice of two planes to get to your destination: Plane 1 is supplied by Hume Airlines, and Plane 2 by Popper Airways. Planes of Plane 1’s design have successfully completed 1000 flights, while no plane of Plane 2’s design has yet flown. Don’t worry though, says the chief executive of Popper Airways; the theory behind their plane’s design is as falsifiable as Plane 1’s, and for all anyone knows Plane 1 is going to crash on the next flight anyway! Which would we prefer?

For Popper, both planes’ design theories are equally falsifiable, and equally scientific, but I think it’s obvious which plane we would prefer; that we can see a difference between these scenarios suggests there is some value in the accumulation of inductive evidence, and to call this recourse to inductive inference non-scientific is hard to defend. Granted, Popper does discuss corroborating evidence, but his bald formulation gives it no scientific weight.
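One rough way to put a number on that preference, purely as an illustration of my own (neither Popper nor Hume invokes it), is Laplace's rule of succession, which, under its simplifying assumptions, scores Plane 1's 1000 successes far above Plane 2's blank record:

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    # Laplace's rule of succession: the estimated probability that the next
    # trial succeeds, given `successes` out of `trials` independent trials
    # and a uniform prior over the unknown success rate.
    return Fraction(successes + 1, trials + 2)

print(float(rule_of_succession(1000, 1000)))  # Hume Airlines: ~0.999
print(float(rule_of_succession(0, 0)))        # Popper Airways: 0.5
```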

Darwin set out to show God’s hand in nature but accumulated anomalies eventually resulted in a ‘paradigm’ change for him, revealing unguided natural processes at work rather than God’s hand (Kuhn’s ‘paradigm’ is ‘an example of scientific practice that scientists in a certain tradition tacitly accept and follow’). Kuhn’s more inductive, perhaps less rational and occasionally unscientific, reading of how science operates is perhaps a more accurate description than Popper’s. Darwin’s careful collection and documentation of data over decades looks like Kuhn’s ‘normal science’, and, again, to call this non-scientific (as Popper calls normal science) is too restrictive.

In speciation there is no hard line between ducks and their non-ducky ancestors, but when something walks like a duck, swims like a duck and quacks like a duck, we call it a duck. Ironically, we use the same kind of inductive inference when we recognise a family resemblance between the theories we call scientific, drawing on many factors, such as predictive power, parsimony, consilience with other disciplines, internal consistency and, yes, supporting data. That there is no hard line between science and non-science reflects how science changes over time and how varied it is. Paul Feyerabend calls science a ‘narrow-minded institution’ (pp. 174–5), but, contra that characterisation, it is a wide-ranging enterprise encompassing many different methods and techniques; Pigliucci's 'loosely defined toolbox'.

If knowing when a theory is wrong increases our knowledge, then Popper’s falsifiability is an important element in the progress of science. But science has elements and episodes that could be called pseudo-scientific, and pseudo-sciences have elements and episodes that could be called scientific. The assumption of uniformity, variations in predictive power and different levels of supporting data combine to show that the line between science and non-science is fuzzy and cannot be drawn by falsifiability.

Bibliography:

Chimisso, C. (2011) Knowledge, Milton Keynes, The Open University.

Cottingham, J. (ed.) (2008) Western Philosophy: An Anthology, Oxford, Blackwell Publishing.

Okasha, S. (2002) Philosophy of Science: A Very Short Introduction, Oxford, Oxford University Press.

Popper, K. (1976) Unended Quest: An Intellectual Autobiography, LaSalle, IL, Open Court.

Feyerabend, P. (1978 [1975]) Against Method, London, Verso.
