Sunday, 21 December 2014

Angry at God

Philosopher Stephen Maitzen has an excellent piece, called Perfection, Evil, and Morality, which will appear in a forthcoming volume edited by James P. Sterba. It details some of the reasons atheists find the existence of a perfect god incompatible with the suffering all too evident in the world around us. In particular it highlights how obligated such a perfect being would be to prevent the suffering we see. I recommend it.

One response he has seen suggests that morality has a significance in and of itself. But as he points out, morality is not something that needs to exist or is worth saving, because it exists only because suffering exists. No suffering, no morality. There is no morality required in a universe populated by insensate rocks.

At the end Maitzen mentions something else worth highlighting: the notion that atheists complaining about the problem of evil are somehow "angry at God". Maitzen writes:
Living in a society still dominated by an inherited theistic outlook, atheists like me are not infrequently accused of being “angry at God” and venting our anger in the form of arguments such as those I’ve offered here. The accusation is patronizing, question-begging, and false. Any atheist who can think straight knows that anger at God makes no sense. I’m no more “angry at God” than I’m angry at Santa Claus for failing to relieve me of the burden of Christmas shopping. If I’m angry at anyone, it’s at those of my fellow human beings who (to extend the metaphor) would say morally outrageous things in order to defend the Santa Claus story as true and to excuse Santa Claus for repeatedly failing to do what the story makes it clear he ought to do. 
That captures well how absurd that particular accusation sounds to me, and I think he is right to blame some of this on an 'inherited theistic outlook'. It still puzzles me that theists don't see how abhorrent their attitude to suffering is. For just one example, consider the discussion thread here. A theist called CodyGirl824 (not a Poe, I presume) says, in response to an atheist discussing the problem of evil:
The fact that only humans are capable of evil, because evil requires formulation of intent and acting on that intent. So every evil act is an act of free will. Without free will, there can be no evil. If God were to choose to "prevent" every evil act of any and every human being, He would take away all free will, since God can't just intervene when an evil act is about to occur without obliterating that evil-doer's free will, and the consequences thereof. Without free will, there can be no love. Love is God's purpose in creation. (The Bible tells us so). So we people of faith understand perfectly why there is evil in the world as it exists. As we learn from the metaphorical, allegorical, mytho-poetic narrative of Adam and Eve, all acts of evil are acts of disobedience of God. We really do have all of the atheists' and naturalists objections covered. They simply refuse to recognize this fact.
...and continues in much the same vein despite her many errors being pointed out repeatedly. A sterling job done by her responders on that thread.

Now, to be fair, we cannot judge all theists on one rather obtuse example, but the appearance of callousness in this response does seem to recur in many a theist's response to the problem. What I find callous is the acceptance of suffering in their accounts, when we are taught, and perhaps know, that we should ameliorate it. They are explaining why suffering is necessary, when we (surely) know that it's not.

At least, a suffering-free world appears to be logically possible, and it is surely what is anticipated in heaven, or what existed before this vale of tears was supposedly created. One would expect our world to reflect its maker, if its maker were perfect, and it's simply not. This doesn't strike me as a particularly difficult notion to grasp, and repeated attempts at theodicy suggest that many theists do grasp it.

In the end something must give; their god's perfection, or the wrongness of suffering. Too many refuse to give up their god's perfection.


Monday, 29 September 2014

The Armstrong Paradox

Stephen Law has written an open letter to Karen Armstrong, in response to this article in the Guardian. I think he addresses well Armstrong's mistaken idea of secularism, which is, as Gandhi knew, a friend to religion, not an enemy. This secularism, or Secularism, as Law has it, is a pluralist vision enabling a society in which many flowers may bloom.

But I wanted to draw attention to the secularism that Armstrong paints instead. This "aggressive secularism", she suggests, may be more to blame for the violence we see than religion is. She ends her piece:
Many secular thinkers now regard “religion” as inherently belligerent and intolerant, and an irrational, backward and violent “other” to the peaceable and humane liberal state – an attitude with an unfortunate echo of the colonialist view of indigenous peoples as hopelessly “primitive”, mired in their benighted religious beliefs. There are consequences to our failure to understand that our secularism, and its understanding of the role of religion, is exceptional. When secularisation has been applied by force, it has provoked a fundamentalist reaction – and history shows that fundamentalist movements which come under attack invariably grow even more extreme. The fruits of this error are on display across the Middle East: when we look with horror upon the travesty of Isis, we would be wise to acknowledge that its barbaric violence may be, at least in part, the offspring of policies guided by our disdain.
While Armstrong concedes in a number of places that there are religious elements in the causes of violence, she is keen to highlight the phrase "the myth of religious violence", as if there is no such thing as religious violence. Here is a quote from the article which I think sums up much of Armstrong's thinking:
In almost every region of the world where secular governments have been established with a goal of separating religion and politics, a counter-cultural movement has developed in response, determined to bring religion back into public life. What we call “fundamentalism” has always existed in a symbiotic relationship with a secularisation that is experienced as cruel, violent and invasive. All too often an aggressive secularism has pushed religion into a violent riposte. Every fundamentalist movement that I have studied in Judaism, Christianity and Islam is rooted in a profound fear of annihilation, convinced that the liberal or secular establishment is determined to destroy their way of life. This has been tragically apparent in the Middle East.
This highlights a paradox in Armstrong's views, which goes something like this:

  1. Religious violence is a myth.
  2. Aggressive secularism is responsible for the violence.
  3. Secularism targets religion, by separating it from politics.

If Armstrong wants to maintain that religious violence is a myth, then attacking secularism would hardly be the way to show it, if (as she seems to think, contra Law) secularism is cruel, violent and invasive to the religious sensibility. If religion weren't an engine of violence then secularism would have nothing to provoke. To be fair, Armstrong points out religion's close association with politics in many of its forms, but this just reiterates the issue for secularists: some religions want to monopolise the body politic, and in a pluralist society that is undemocratic. This would be so for any ideology that looks to dominate (such as communism), but religion is the most prevalent form of this sort of authoritarianism, and a privileged form too. Religions have co-opted sacredness to inoculate themselves against criticism, some more successfully than others.

Now, I suspect that Armstrong does think that there is some religious element in much of the violence attributed to religion, but that she considers it overstated. If so, then I think her mission would be better served by acknowledging more clearly that religion is to blame for some of it, and by avoiding phrases such as "the myth of religious violence".

Her article seems historically well informed but is fatally flawed by this constant need to deflect the proper apportionment of blame away from religion to all the other, admittedly diverse, causal factors of violence. A reasonable modern atheist doesn't look to blame religion for all society's ills; she looks to assign the level of blame that properly attaches to religion but which for centuries has been diverted by religious privilege. Sadly some people, like Armstrong, still work to maintain that religious exceptionalism.


Saturday, 27 September 2014

Rawls and Nozick and Distributive Justice

Distributive justice attempts to answer the question of who gets what in society. To illustrate, let's consider a typical question that a theory of distributive justice should hopefully answer: are we entitled to the full rewards of exercising those talents we just happen to have been born with?

To answer this, I need, for any rewards I receive, an account that justifies that change in holdings, from others to me. The distributive justice debate is the search for such an account.

John Rawls and Robert Nozick provide the background to the debate. Rawls first proposes a hypothetical ‘original position’ (OP) in which we should putatively establish ‘the principles of justice for the basic structure of society’. This position is one in which ‘no one knows his place in society, his class position or social status, nor does anyone know his fortune in the distribution of natural assets and abilities, his intelligence, strength and the like’. Such a position, Rawls thinks, will universalise a notional rational person and remove self-knowledge so that neutral judgements are made. The concept of rationality he employs is that of ‘taking the most effective means to given ends...’, highlighting that the outcome, the end pattern of distribution, is important to Rawls.

By hypothetically removing arbitrary advantages, Rawls establishes a fair OP from which he thinks a rational person will be forced, in a sense, to establish basic structures. The two principles he proposes reflect this: the first is ‘equality in the assignment of basic rights and duties’ and the second, dubbed the difference principle (DP), holds that ‘social and economic inequalities...are just only if they result in compensating benefits for everyone, and in particular for the least advantaged members of society’ (ibid). He does not think that a person in the OP could rationally sacrifice themselves for the greater good, and these principles reflect that rejection of utilitarianism. He suggests an almost Kantian imperative: that everyone would sign up to structures that benefited all if they were behind this ‘veil of ignorance’. It is Kantian in that the principles, Rawls claims, command rational assent. However, reason has been deployed in self-interest (though self-ignorant) and not to determine duty, per Kant.

So we have a clear statement by Rawls that society’s basic principles should be (hypothetically) established without knowledge of our natural assets and abilities. Equal pay legislation, for example, is consistent with Rawls’s two principles. Outcomes are important, and Rawls places the first egalitarian principle ‘lexically prior’ to the second (we consider equality before we apply the DP), so it seems to follow that a man should not get paid more than a woman, all other things being equal, gender being a natural accident. But then, equal pay legislation for different natural talents would seem to follow too from Rawlsian principles. Should it be illegal, for example, to reward Wayne Rooney more than me (if we both happen to play up front for Manchester United!) on the grounds that he is more talented?

Rawls writes about nullifying ‘the accidents of natural endowment’ ‘as counters in quest for political and economic advantage’, and Rooney is surely no more responsible for his natural talents than I am for my lack of them. Certainly he may have spent more time honing his skills growing up, but he is not ultimately responsible for his ability to work at his skills, or for being in the position to develop his skills. But do Rawls’s principles explicitly support these conclusions and give us a particular answer to our question?

It’s not clear that they do. Moving from a fairly equal distribution to a more unequal distribution could be consistent with Rawls’s principles, so long as everyone benefits. Pay dictated by demand for certain natural talents could well deliver such outcomes. Further, the principles say little about who is entitled to what. The principles are indifferent to whether I earn a footballer’s salary or Wayne Rooney does, so long as the distributions are structurally equivalent, so it’s difficult to establish any particular individual entitlement from the two principles.

Nozick identifies this problem too: ‘one traditional socialist view’ argues for workers’ entitlement to ‘the full fruits of their labour’, he writes. But what he calls ‘time-slice’ distributions are indifferent to who has what; ‘time-slice’ distributions are judged on their structure as they stand, regardless of the history behind that distribution, which appears to be the Rawlsian approach; remember, Rawls was interested in outcomes, the end pattern of distribution. Nozick is noting that left-wingers also think people are entitled to the full rewards of exercising their talents, so they implicitly recognise that history matters to determining a just distribution. He argues for a ‘historical’ principle to justify entitlements, which contrasts with Rawls’s ‘current time-slice principles’ of distribution outcomes: if original acquisitions and subsequent transfers are in ‘accordance with the principle of justice’, then the distribution is just. If instead we concern ourselves with outcomes, initially just patterns of distribution followed by just transfers can result in unjust patterns, and constant state interference will then be required to make corrections, violating individual liberty and autonomy.

So Nozick rejects various patterns of distributive justice, such as moral merit, need, effort and indeed ‘natural dimension’; he simplifies his ‘entitlement conception’ to ‘From each as they choose, to each as they are chosen’. So long as acquisitions and transfers meet this principle the resulting (unpatterned) distributions are just. In reality no-one’s existing holdings will be just under the principle (because they won't have been arrived at through Nozick's principles), but, to be fair, Nozick is only talking in principle.

Consider Nozick’s example of talented basketball player Wilt Chamberlain: if we imagine an ideal distribution of wealth at the start of a basketball season, and during the season people freely choose to upset that ideal distribution by each paying a small amount to see Chamberlain play, how is that new distribution unjust, even if the worst off are now worse off and Chamberlain is now wealthy? If a just starting position is followed by just transfers, a just distribution surely results, Nozick concludes, and Chamberlain is fully entitled to what others have chosen to pay him.

Entitlement, then, is left to market forces; just desert is whatever anyone chooses to pay you from their justly acquired holdings. One is not so much being rewarded for one’s talent as being rewarded for exploiting one’s talent. People with no useful skill but a talent for persuading people they want to part with their money would be just as entitled to their rewards as someone with a talent that adds value. But this manipulation casts doubt on just how free people’s choices are in the market place.

Nozick’s maxim has at least three problems, I think.

First, whilst an opening distribution of holdings may be just, that does not necessarily mean people have an unfettered right to do with it as they will. We grant that people can own land, but if they start to pursue a scorched earth policy which renders that land unusable to anyone else, we would consider that unacceptable. In other words, in many holdings there is still a common interest.

Secondly, unfettered transfers of holdings fail to recognise the opportunity afforded to individuals by societal infrastructure. If Wilt Chamberlain had been born at a time when basketball was not an organised sport, he would have been unable to exploit his talent. The advantages afforded by this commonwealth deserve to be recognised when we analyse distributions.

Thirdly, recognising self-ownership does not mean we have precisely the same rights over the products of our talents as we have over our talents.

To illustrate, consider the ‘eye lottery’ thought experiment, suggested by Jo Wolff, which exploits the libertarian notion that the same property rights attach to our holdings as to our natural endowments. This imagines a state where a minority of people are born without eyes, and people with two eyes are forced to give up one eye to benefit the blind. This evokes a visceral objection to property redistribution, but is disanalogous to the re-distributive project in two key ways: it does not recognise any common interest in holdings and it doesn’t differentiate between natural endowments and the rewards of those natural endowments.

More analogous would be to imagine a state where anyone born short-sighted is issued with spectacles. Through accidents of historical distribution some people have ended up with many spectacles – altruistic people with better eyesight have passed their spectacles on to the more short-sighted for ‘spares’, people have inherited specs, and so on. When people are born with poor eyesight, those with spares are forced to give them up. The spares provide no additional benefit to the holder (they have enough for their own use and cannot wear them all at the same time) and were state supplied in the first place, and no violation of bodily integrity occurs. This spectacle lottery seems much less objectionable, and more justified, than the eye lottery, so objections to redistribution should likewise be tempered.

Nozick’s principle justifies transfers that could impoverish many – the untalented, or rather those who are not good at selling their services – which is unpalatable to those for whom Rawls’s protection of the worst off is important. Nevertheless, Nozick’s linking of people’s choices to entitlement is reminiscent of some conceptions of moral responsibility, which recognise that while rational free agents act deterministically, and so are not ultimately responsible for their behaviour, they are nevertheless individually responsible when they make intentional choices uncoerced by external factors. Rawls recognises this point: while people may not deserve their place in the ‘distribution of natural endowments’, he says, they can still be credited for using them:
A basic structure satisfying the difference principle rewards people, not for their place in that distribution [of native endowments], but for training and educating their endowments, and for putting them to work so as to contribute to others’ good as well as their own. (p.75)
That people should not be rewarded for their place in the distribution of natural endowments suggests Rawls would answer our opening question in the negative; he does, however, link intentional acts with entitlement, and if ‘putting talent to work’ counts as ‘exercising’ it, then Rawls might instead answer in the affirmative. But while rewarding people for exploiting their talent satisfies the DP (echoes of Nozick there), it is hardly required by it, and since equality of rights and duties also takes priority over the DP, such entitlement doesn’t follow uncontroversially from his principles.

Rawls recognises that his two principles may not be the final word, but expects a ‘reflective equilibrium’ to eventually settle them. Despite eschewing utilitarianism, his principles have a forward-looking consequentialist flavour, but massaging outcomes disrupts even the thin notion of individual desert that he allows. Under Nozick, anyone freely paid from holdings justly acquired would be fully entitled to their rewards (though even he allows that a minimal state will require some small percentage of people’s earnings). Nozick’s criticisms of fair distributions are well made, and his backward-looking historical account provides a simple way to establish entitlement, but it reflects little debt to community, and, practically, it may be impossible to establish any holdings that are justly acquired under his principle.

So I find neither account very satisfactory, but lean to the egalitarian because outcomes ultimately must matter more than processes. I can take enough from both accounts to conclude that I am not entitled to the full rewards of exercising those talents I just happen to have been born with, but am entitled to some of them.


Cottingham, J. (ed.) (2008) Western Philosophy: An Anthology, Oxford, Blackwell Publishing.

Pike, J. (2011) Political Philosophy (A222 Book 6), Milton Keynes, The Open University.

Rawls, J. (2001) Justice as Fairness: A Restatement, Cambridge, MA, Harvard University Press.


Sunday, 10 August 2014

Some Good Old New Atheists

I've written previously that New Atheists are not new, despite their current negative reputation in the media, which appears to be driven by an overweening respect for religion. I've listed examples of older atheistic writing by luminaries such as David Hume and Bertrand Russell which show that a healthy disrespect for many religious ideas is nothing new.

In a recent issue of the quarterly magazine Philosophy Now, Barbara Smoker (who pre-dates Dawkins et al. by a number of years!) writes on the mystery of existence (behind a paywall), but along the way talks about her atheism. For example:
My years of mental turmoil before managing to rid myself of childhood theistic indoctrination entailed sufficient search – through thinking, reading, listening and debating – to last me for life. We are not expected to keep a lifelong open mind on such hypotheses as the existence of Santa Claus and the Tooth Fairy, so why should an exception be made in the case of God? That hypothesis, like those other stories for children, is merely asserted, without due evidence, by dissembling, or deluded, authorities.

She mentions two priests who wrote about their atheism. Jean Meslier (1664-1729) wrote a Testament that was only discovered after his death. In it he writes:
Perhaps you will think, my dear friends, that in such a number of fake religions in this world my intention was at least to exclude from that number the Catholic religion, which all of us profess, and which we say to be the only one which teaches pure truth, the only one which acknowledges and worships the true God as it should, and the only one who leads men on the true way to salvation and eternal happiness. But open your eyes, my dear friends, open your eyes and get rid of everything that your pious and ignorant priests, or your mocker, self-seeking doctors, show zeal in telling you and in having you believe, under the fake pretext of the infallible certainty of their would-be sacred and divine religion. You are not more beguiled nor more abused than those who have been abused and beguiled the most. You are not less in error than those who have been the deepest in it. Your religion is not less vain or superstitious than any other; it is not less fake in its principles, nor less ridiculous and absurd in its dogmas and maxims. You are not less idolatrous than those whom you are not afraid to blame and condemn for their idolatry. The ideas of pagans and yours only differ by their name and appearance. In one word, everything your doctors and priests preach with so much zeal and eloquence about the splendour, the excellence and the holiness of the mysteries that they make you worship, everything they tell you so solemnly about the certitude of their alleged miracles, and everything they recite with so much self-confidence concerning the magnificence of the rewards of heaven, and touching the dreadful castigations of hell, are nothing but delusions, errors, lies, fictions and impostures. 
'Nothing but delusions, errors, lies, fictions and impostures'. That could have been written by Dawkins.

Then Smoker mentions Joseph McCabe (1867-1955), a priest turned atheist who wrote much on the dangers of religion. In his pamphlet From Rome to Rationalism, published in 1897, he explains why he left the church. Here he talks about faith in God:
The majority of men, little addicted to introspection, can give no reason, or only mutter a few superficial and crudely assimilated phrases, when asked for the motive of this, their fundamental belief. A theologian would say that God has provided a mysterious power, called faith, that links securely the minds of the unthinking majority to their belief. A more matter-of-fact observer would see either that they never reflect on the fact that they take this traditional doctrine with little or no proof, or that, from an instinctive feeling of the difficulty of the problem, they readily acquiesce in the most superficial arguments, or, from a confusion of the provinces of faith and reason, they consider it unlawful to indulge in speculation on the problem at all. But the more reflective, and their number is legion now, know that faith - the acceptance of a doctrine on divine authority - necessarily presupposes a knowledge of God, acquired and verifiable by rational methods.
It's plain that he sees a number of different forms of faith (as I recently discussed). McCabe writes well on the moral argument:
On the one hand, we have the inherited experience of innumerable ancestors and the deeply impressed associations of our early training pointing out certain lines of conduct as moral; on the other hand, we have the consciousness of our connection with a society from which our life derives half its happiness, the knowledge that each immoral act and habit tends to undermine a state of society which it is our supreme interest to support and develop. A mind withdrawn from the influence of religion feels no more than this; but this covers the whole ground of the moral code, and it is all we have to explain in conscience. We need no higher legislator to classify our actions, and to impose upon us a sense of obligation to abstain from immorality.
On Catholicism he writes:
But Roman theology is a masterpiece of ingenuity in exegetics. From Christ’s simple words, “Whose sins you shall retain they are retained,” the whole hideous system of the Confessional is evolved; from a medicinal remark of James comes the curious dogma of Extreme Unction; from some strong language of the sorely-tempted Paul is pressed Original Sin and Baptismal Regeneration; from the farewell supper of Christ the extraordinary doctrines of the Eucharist and the Mass, with all their complicated ceremonies; and the Immaculate Conception is proved from a stray remark in the Genesis version of an old Babylonian legend. Scripture must not be taken alone, they tell us; tradition embodies revelation with equal authority. But what is tradition? From the heterogeneous contents of the writings of the Fathers what are we to choose as revealed? Well, the Pope is infallible; but it turns out that even he has no inner revelation or positive assistance in the matter; he must be convinced from Scripture and tradition like ourselves, and it is extremely difficult sometimes to see the connection between his dogmatic conclusions and the scriptural data he alleges for them.
More stridency! Sadly, he paid somewhat for his apostasy:
With the sword of Damocles overhead, I have pursued my inquiry to the end, and avowed my convictions. And for that I stand before the world branded as a criminal by the Church of Rome. My dearest friends have abandoned me as though I were stricken with leprosy, if they did not indeed turn upon me with bitter and insulting language, for I was an apostate, and my word availed nothing against my calumniators. And this is an age of light and freedom and Christian charity. May the days soon come in which men will agree to differ on intellectual questions, and unite in social activity; when social ostracism will not be the inevitable consequence of honesty.
Every time people stand up against religion, a prejudice, perhaps born of an ingrained and unwarranted respect for religion, rears its head, and those who dare to state their objections to religious institutions and thought come to be demonised. Such is the fate of the new atheists, despite the fact that there is nothing about their statements and tone that is peculiar to them, as these extracts from venerable, older non-believers show.


Sunday, 3 August 2014

Does Falsifiability Identify Science?

There have been some interesting discussions of falsifiability and science recently: Sean Carroll's essay Falsifiability is a good starting point:
In complicated situations, fortune-cookie-sized mottos like "theories should be falsifiable" are no substitute for careful thinking about how science works. 
I tend to agree. Alan Sokal wrote a series of articles at Scientia on the definition of science:
The bottom line is that science is not merely a bag of clever tricks that turn out to be useful in investigating some arcane questions about the inanimate and biological worlds. Rather, the natural sciences are nothing more or less than one particular application — albeit an unusually successful one — of a more general rationalist worldview, centered on the modest insistence that empirical claims must be substantiated by empirical evidence.
Professor of Astrophysics Coel Hellier wrote:
It is the model that needs to be falsifiable, not every statement deriving from a model. Thus falsification remains important in science, but it is wrong to reject an idea such as the multiverse owing to an over-simplistic application of falsifiability.
And Massimo Pigliucci wrote in answer to the hypothetical proposition 'There is a specifiable “scientific method” that possesses some definable core or essential steps, used by all genuine sciences':
No. Philosophers of science have looked for just such an algorithm (e.g., the famous “hypothetico-deductive” method [1], or Popper’s falsifiability [2]) and have come up short. The post-Kuhn [3] consensus is that there is no such method, and that science helps itself to a loosely defined toolbox of methods, heuristics and intuitions. 
So how valid is Popper's criterion?

In the mid twentieth century, when Karl Popper was writing, science was identified with the empirical method – observation, and inductive inference from it. He noticed how this process could be abused by theories he came to think were not science; he wanted ‘to distinguish between science and pseudo-science’ (although ‘pseudo-science’ is strictly different from ‘non-science’, since Popper employs the former I shall use it here synonymously with ‘non-science’). Popper thought that the followers of Marx, Freud and Adler could fit any conceivable evidence into their worldviews. Furthermore, he agreed with David Hume that inductive inference – the assumption that the past is a guide to the future – could not be logically justified, and so induction does not guarantee knowledge.

To differentiate knowledge-delivering science, then, from non-science, Popper suggests the criterion of falsifiability. By positing a principle that might hold universally (a generalisation) we can establish a deductive argument that allows us to refute it, should observation disconfirm it. The argument takes the form of modus tollens, denying the consequent:
If T (theory) then P (prediction)
Not P
Therefore, not T
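The validity of this schema can be checked mechanically; here is a minimal sketch in Lean (the theorem name and the propositional variables T and P are my own labels, not Popper's):

```lean
-- Modus tollens: from "if T then P" and "not P", conclude "not T".
-- Given a proof ht of T, h ht would yield P, contradicting hnp.
theorem modus_tollens (T P : Prop) (h : T → P) (hnp : ¬P) : ¬T :=
  fun ht => hnp (h ht)
```

The converse inference, concluding T from a confirmed P, is not provable in this system; that is precisely the fallacy of affirming the consequent.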
Predictions met would corroborate the theory, but in deductive terms to draw the conclusion that T is true from repeated Ps would be the fallacy of affirming the consequent. So we can never prove a theory true by this construction; just prove it false. Popper says:
...the criterion of the scientific status of a theory is its falsifiability, or refutability, or testability...
So I think it's fair to say that he thinks falsifiability is both necessary and sufficient for scientific status. It does not matter how conjectures are formed, whether by induction or simple invention; testing them will eliminate the false conjectures. Indeed Popper also thinks that conjectures arise from our expectations prior to observation, a reversal of the traditional view.
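The modus tollens schema above can be verified formally; a minimal sketch in Lean (notation mine, purely for illustration):

```lean
-- Modus tollens: from 'if T then P' and 'not P', conclude 'not T'.
theorem modus_tollens {T P : Prop} (h : T → P) (np : ¬P) : ¬T :=
  fun t => np (h t)
```

No corresponding theorem takes `h : T → P` and a proof of `P` to a proof of `T`; that would be affirming the consequent, which is why repeated successful predictions can never prove the theory true.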

I’m tempted to look for a ‘science’ that is not falsifiable to refute Popper, but, although his thoughts were prompted by Einstein overturning Newton’s theories, there is something normative about his proposal; he is saying that this is what science should be, and, if it’s not, it should not be called science. So if I offered a science that was not falsifiable, Popper would say it should not be called 'science'.

Thomas Kuhn, in his analysis in The Structure of Scientific Revolutions, defines ‘normal science’ as ‘research firmly based upon one or more past scientific achievements that some particular scientific community acknowledges for a time as supplying the foundation for its further practice’. Kuhn is describing the mundane day-to-day science he thinks more accurately represents the bulk of scientific work; in his view Popper’s ‘critical attitude’ is restricted to those rare periods in science when revolutionary changes occur which alter the way the scientific community looks at the world; he even thinks this is more philosophy than science. In his response to Kuhn, Popper agrees that ‘normal’ scientists exist, but insists they are ‘badly taught’, lacking the critical attitude. He identifies the critical attitude with the ‘scientific attitude’ and its converse, the dogmatic attitude, with the pseudo-scientific. So he thinks they are not doing science at all during the ‘normal science’ phase, and it should not be called ‘science’.

Conversely, though, there are disciplines we don’t include in the sciences which are falsifiable; astrology is a discipline both sides have used to illustrate their views. Popper cites it as a pseudo-science built on a ‘mass of empirical evidence’, and Kuhn agrees it is not a science, but notes that it has made many predictions, which have simply proved to be false. On his own criterion, Popper should say that astrology is a science, because it has been refuted, proving its falsifiability; astrology’s (generally agreed) non-science status suggests there is something more to being a pseudo-science than mere non-falsifiability. However, astrologers also offer reasons for predictive failure; see Astrology on the Attack in this article. Astrologers can use the same language to explain anomalies as scientists do, but it echoes the language Popper found so objectionable in the followers of Marx, Freud and Adler, so perhaps Popper would continue to deny astrology's falsifiability.

To consider what this ‘something more’ could be, I will discuss a number of problems I see with Popper’s criterion: a) it does not escape the assumption of uniformity; b) it does not recognise different levels of predictive power; and, c) it undervalues supporting evidence.

a) Assumption of uniformity

Hume notes that we have a habit or custom to assume that things will tend to stay the same. Peter Lipton draws a distinction between this sort of inductivist and the opposite sort, to illustrate Hume's issue with induction:
To illustrate the problem, suppose our fundamental principle of inductive inference is ‘More of the Same’. We believe that strong inductive arguments are those whose conclusions predict the continuation of a pattern described in the premises. Applying this principle of conservative induction, we would infer that the sun will rise tomorrow, since it has always risen in the past; and we would judge worthless the argument that the sun will not rise tomorrow since it has always risen in the past. One can, however, come up with a factitious principle to underwrite the latter argument. According to the principle of revolutionary induction, ‘It’s Time for a Change’, and this sanctions the dark inference. Hume’s argument is that we have no way to show that conservative induction, the principle he claims we actually use for our inferences, will do any better than intuitively wild principles like the principles of revolutionary induction. Of course conservative induction has had the more impressive track record. Most of the inferences from true premises that it has sanctioned have also had true conclusions. Revolutionary induction, by contrast, has been conspicuous in failure, or would have been, had anyone relied on it. The question of justification, however, does not ask which method of inference has been successful; it asks which one will be successful. 
The point is that, per Hume, we have no more justification for believing we will be successful if we are conservative inductivists than if we are so-called revolutionary inductivists.
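Lipton's two principles can be caricatured in code. A toy sketch (my illustration, not Lipton's), treating observations as a binary sequence:

```python
# Toy versions of Lipton's two inductive principles, in a world of
# binary observations (1 = the sun rises, 0 = it doesn't).

def conservative(history):
    """'More of the Same': predict that the last observation repeats."""
    return history[-1]

def revolutionary(history):
    """'It's Time for a Change': predict that the pattern breaks."""
    return 1 - history[-1]

past = [1, 1, 1, 1]                # the sun has always risen
print(conservative(past))          # 1: the sun rises tomorrow
print(revolutionary(past))         # 0: the 'dark inference'
```

Hume's point is that nothing non-circular privileges `conservative` over `revolutionary`: both are consistent with any finite history, and appealing to `conservative`'s past track record is itself a conservative induction.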

But imagine that the world actually has conformed (and will conform) to the revolutionary inductivist principle ('It's Time for a Change'). For the deductive argument we must first propose a generalisation. What would a generalisation look like if we lived in Lipton’s ‘revolutionary inductivist’ world, in which nature is not uniform? All blackbirds are any colour? What goes up will sometimes come down and sometimes not? A non-conservative-inductivist world renders the conjectures we make all-inclusive, so it would be impossible to eliminate potential outcomes by falsification – there would be no events that fall outside the conjectures. Maybe Popper would say the conjectures should exclude the uniform – All blackbirds are any colour but their current one? What goes up will do whatever it did not do last time? I’m not sure why we should exclude the possibility of things staying the same in a revolutionary inductivist world, so this counter doesn’t convince. Falsification in such a world would mean uncovering an instance where the status quo is maintained. Positing a generalisation describing constant change is paradoxical, and casts doubt on just how far Popper has escaped the problem of induction with his deductive argument.

b) Variations in predictive power

Candidate theories could be divided into the following categories:

1) The theory cannot make predictions.
2) The theory can make predictions, but they are not met.
3) The theory can make predictions, but some are met and some aren’t.
4) The theory can make predictions and all are met.

As mentioned previously, Popper’s criterion is a yes-or-no demarcation, so for him any candidate in (1) is pseudo-science and any in (2), (3) or (4) is science. This means that a theory with no corroborating evidence but with predictive power is as scientific as one whose every prediction (so far) provides corroborating evidence. To be fair, Popper does accept there is a difference; he says ‘some theories are more testable...than others; they take, as it were, greater risks’, but his criterion makes no allowance for this granularity within theories and, indeed, disciplines.

Candidates for scientific status range from the so-called ‘hard’, or established sciences, like physics, chemistry and biology, to ‘soft’ sciences like psychology and sociology, and then to those outside the scientific fold currently: astrology, homeopathy and creationism, for example. The hard sciences are accepted as scientific, but even within these disciplines theories can arise that cannot make predictions. Famously Popper said that ‘Darwinism is not a testable scientific theory’. He later retracted this, but it highlights the difficulty in drawing a sharp distinction; it was not at all clear what predictions Natural Selection could make. The immense number of variables which come to bear in the real world makes prediction difficult. To predict that speciation will occur is plainly not specific enough, but to predict particular speciations in mammals, for example, would require an almost omniscient knowledge of environmental changes and thousands of years within which to experiment.

As a further example, climate science might predict stormier summers, all other things being equal, but that prediction could be disrupted by a freak volcanic eruption. Avoiding the consequences of a failed prediction by appealing to such variables has much the same appearance as the behaviour Popper found so dissatisfactory amongst the followers of Marx, Adler and Freud. But for disciplines that operate at a higher level of complexity than the ‘harder’ sciences, in which it is easier to isolate variables, this is a reasonable explanation for failed predictions, and may not point to unscientific behaviour. Sean Carroll says this about modern cosmology:
We can't (as far as we know) observe other parts of the multiverse directly. But their existence has a dramatic effect on how we account for the data in the part of the multiverse we do observe. It's in that sense that the success or failure of the idea is ultimately empirical: its virtue is not that it's a neat idea or fulfills some nebulous principle of reasoning, it's that it helps us account for the data. Even if we will never visit those other universes.
Cutting-edge science often flirts with untestability. In short, sometimes it’s not clear whether a theory is unfalsifiable in principle or just in practice, while Popper’s discussion of Freud and Adler indicates he is targeting those theories he thinks are not falsifiable in principle. Perhaps a theory’s unfalsifiability in principle cannot even be established.

c) Supporting evidence

By solving the induction problem by removing it, Popper loses a valuable way of distinguishing between theories. Consider this thought experiment: you are going on holiday and when you arrive at the airport you have a choice of two planes to get to your destination: Plane 1 is supplied by Hume Airlines, and Plane 2 by Popper Airways. Planes of Plane 1’s design have successfully completed 1000 flights while none of Plane 2’s design has yet flown. Don’t worry though, says the chief executive of Popper Airways; the theory behind their plane’s design is as falsifiable as Plane 1’s, and for all anyone knows Plane 1 is going to crash on the next flight anyway! Which would we prefer?

For Popper, both planes’ design theories are equally falsifiable, and equally scientific, but I think it’s obvious which plane we would prefer; that we can see a difference between these scenarios suggests there is some value in the accumulation of inductive evidence, and to call this recourse to inductive inference non-scientific is hard to defend. Granted, Popper does discuss corroborating evidence, but his bald formulation does not recognise it as scientific.
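One way to put a number on the intuition (an illustration only, and not anything Popper endorses) is Laplace's rule of succession, which estimates the probability of success on the next trial from the track record:

```python
# Laplace's rule of succession: after s successes in n trials, the
# probability the next trial succeeds is estimated as (s + 1) / (n + 2).
# Numbers for the two hypothetical airlines in the thought experiment.

def rule_of_succession(successes: int, trials: int) -> float:
    return (successes + 1) / (trials + 2)

print(rule_of_succession(1000, 1000))  # Hume Airlines: ~0.999
print(rule_of_succession(0, 0))        # Popper Airways: 0.5, no track record
```

The estimates differ dramatically, yet both design theories are equally falsifiable; falsifiability alone cannot register the difference.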

Darwin set out to show God’s hand in nature but accumulated anomalies eventually resulted in a ‘paradigm’ change for him, revealing unguided natural processes at work rather than God’s hand (Kuhn’s ‘paradigm’ is ‘an example of scientific practice that scientists in a certain tradition tacitly accept and follow’). Kuhn’s more inductive, perhaps less rational and occasionally unscientific, reading of how science operates is perhaps a more accurate description than Popper’s. Darwin’s careful collection and documentation of data over decades looks like Kuhn’s ‘normal science’, and, again, to call this non-scientific (as Popper calls normal science) is too restrictive.


In speciation there is no hard line between ducks and their non-ducky ancestors, but when something walks like a duck, swims like a duck and quacks like a duck, we call it a duck. Ironically, we use this same inductive inference when we draw a family resemblance between the theories we call scientific, drawing on many factors, such as predictive power, parsimony, consilience with other disciplines, internal consistency and, yes, supporting data. That there is no hard line between science and non-science reflects how science changes over time and how varied it is. Paul Feyerabend calls science a ‘narrow-minded institution’ (pp. 174–5), but, contra that characterisation, it is instead a wide-ranging enterprise encompassing many different methods and techniques; Pigliucci's 'loosely defined toolbox'.

If knowing when a theory is wrong increases our knowledge, then Popper’s falsifiability is an important element in the progress of science. But science has elements and episodes that could be called pseudo-scientific, and pseudo-sciences have elements and episodes that could be called scientific. The assumption of uniformity, variations in predictive power and different levels of supporting data combine to show that the line between science and non-science is fuzzy and cannot be drawn by falsifiability.


Chimisso, C. (2011) Knowledge, Milton Keynes, The Open University.

Cottingham, J. (ed.) (2008) Western Philosophy: An Anthology, Oxford, Blackwell Publishing.

Feyerabend, P. (1978 [1975]) Against Method, London, Verso.

Okasha, S. (2002) Philosophy of Science: A Very Short Introduction, Oxford, Oxford University Press.

Popper, K. (1976) Unended Quest: An Intellectual Autobiography, LaSalle, IL, Open Court.


Monday, 16 June 2014

The 'Faith' Debate

Believe me, it was this big
There has been much consternation among theists caused by Peter Boghossian's book, A Manual for Creating Atheists. Consider, for example, this list of rather splenetic reviews. Former believer Eric Macdonald discusses why he thinks Boghossian's definition is 'silly' here and here. Although the book is clearly aimed at non-believers, offering advice on how to 'talk people out of their faith' or, as Michael Shermer puts it in the foreword, to 'reprogram minds [something Orwellian about that!] into employing reason instead of faith, science instead of superstition', theists are, probably understandably, upset by it. After all, if you are told that you are only pretending to hold what you consider to be your most deeply held beliefs, I think you have every right to feel aggrieved! And this does appear to be what Boghossian tells the faithful when he defines faith as:
pretending to know things you don’t know (Kindle Locations 262-263)
Frankly, this seems a needlessly antagonistic conception of faith, but in context I think it's clear that it is part of a strategy to move people of faith to see things from without their worldview; instead of saying you have faith that God exists, for example, imagine that you instead say I pretend to know that God exists. Not very subtle, perhaps, but it's hard to deny that a significant number of believers do actually pretend to know things they don't. When I believed, that was exactly what I did, on the advice of theist friends (if the 'leap of faith' could be described as 'pretending to know', which I think it can). Biologos contributor Ted Davis agrees when he says that many Christians do match the stereotype of the unjustified leap of faith. And debates that long predate the new atheists presuppose an opposition between justified belief and something that is, at least, less justified; consider Clifford's exhortation that 'it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence' and William James's rejoinder, 'The Will to Believe', in the late nineteenth century. James describes his piece, written in 1896, as:
an essay in justification of faith, a defence of our right to adopt a believing attitude in religious matters, in spite of the fact that our merely logical intellect may not have been coerced.
This gap between merely logical intellect and faith is what Boghossian is attacking, so this is not some fresh accusation from the new atheists.

One response to Boghossian's definition has been to ask for empirical evidence that this is what faith is. Empirical evidence is not necessary to explore concepts, and philosophers specialise in concepts. The SEP article on faith explores faith without feeling the need to support the discussion with empirical evidence that these concepts conform with what theists claim faith is. It's a discussion of the history of ideas as it relates to the concept of faith. Christians are quite entitled to explain how they conceive faith if they disagree with the concepts as discussed (as we shall see), and they don't need to produce statistical population analyses to support their ideas; likewise, neither do non-believers like Boghossian. Empirical evidence may be used to discount particular concepts of faith, for example, but no-one could seriously suggest that Boghossian's definition does not derive from some part of the concept of faith. In that event, he is quite at liberty to exploit it for his 'evangelical' aims. Its success, however, will surely be limited by its application to individual believers.

I confess, it does seem to me to be an accurate description of one type of faith, even if it's pejorative. What I think it communicates is the volitional element of faith, hinted at by James above, without which religious faith becomes rather supererogatory; it is important that a person wills their assent to a religious belief, rather than simply responds with belief to the normal empirical inputs of the natural world. There is no virtue in my believing that I am sitting at my desk typing this blog, but there is virtue, supposedly, in a theist's believing in Jesus, or Allah, or Jehovah, and so on. Now, to be fair, universalism exists, which suggests that ultimately this does not matter, but the most prevalent forms of religion the new atheists like Boghossian address surely do differentiate between believers and non-believers, with bad consequences for non-believers (or, at least, consequences that are not as good).

For me, any account of faith needs to make sense of this distinction; it will not be good enough to say tu quoque. For example, some may draw an analogy between competing religious faiths and competing scientific theories. Scientific theories are often (some would say, always) underdetermined by the data, so we cross an epistemic gap when we commit to one theory or another just as the religious believer does when she commits to a religious worldview. But this seems to render faith rather empty; if atheists are 'faithful' too, just what is it that is so virtuous about being a theist?

We do not think there is any virtue in believing one scientific theory over another; we accept that some people evaluate the evidence differently. Of course, some people don't value evidence, but I take it theists are not arguing that evidence is not important in establishing beliefs. Theism suggests there are some real consequences to the very act of believing something. And again, this is not because of the consequential acts of their belief, although they sometimes point to these things in justification. The belief distinguishes the faithful from the faithless in a way that makes the former praiseworthy, for some reason.

When people disbelieve scientific theories, we might criticise them for not applying rigorous scientific principles to their reasoning; for example, climate change denialists and anti-vaxxers. Of course, people can still believe scientific principles without being rigorous scientifically, and we may criticise them for that. But with faith the method does not define its ultimate worth. After all, the religious account accepts there are many who practice a faith but are wrong in the content of their belief. The method itself is not what wins god's rewards; it's important what the faithful actually believe. This seems unfair, since some of our beliefs, including many religious ones, seem to arise more from the environment we grow up in than from any impartial, supposition-free method of knowledge acquisition.

Religious faith is complex, that much is true. The SEP lists seven broad characterisations of faith:
the ‘purely affective’ model: faith as a feeling of existential confidence
the ‘special knowledge’ model: faith as knowledge of specific truths, revealed by God
the ‘belief’ model: faith as belief that God exists
the ‘trust’ model: faith as belief in (trust in) God
the ‘doxastic venture’ model: faith as practical commitment beyond the evidence to one's belief that God exists
the ‘sub-doxastic venture’ model: faith as practical commitment without belief
the ‘hope’ model: faith as hoping—or acting in the hope that—the God who saves exists.
The last three certainly appear to concord somewhat with Boghossian's characterisation, if we allow that pretending to know is an accurate way to describe believing beyond the evidence, which is what is communicated by doxastic venturing and hoping.

The SEP notes that Christians consider faith a gift from God and, as I indicate above, something:
...requiring a human response of assent and trust, so that people's faith is something with respect to which they are both receptive and active. 
The article also notes a similar tension to the one between concluded belief and willed belief, between the supposed 'gift' of faith and the willed venture.

Blogger aRemonstrant has helpfully compiled some Christian definitions of faith. Keith Ward says it's 'the practical commitment to a relationship with God that will progressively transform your life, liberating it from hatred, greed and ignorance, and enabling it to become a more effective mediator of transcendent beauty, joy, compassion and benevolence'. Well, maybe, but either this can occur regardless of what the faith is in, or it presupposes that the faith is not misplaced. How does one know it's not misplaced? In the same book Ward says 'the test of genuine belief in God is whether or not your life is directed towards sharing in and learning to increase in the world around you beauty, bliss and goodness'.

This is odd, since it seems to link faith (or at least 'genuine' faith) with its consequences. But we know many people who say they have 'faith' whose lives do not appear to be directed towards sharing in etc, and many non-believers who do appear to meet that test. One assumes, though, that Ward restricts genuine believers to just the small subset who believe in the right god, and whose lives are directed towards sharing in etc. Indeed, he accepts that many religious believers are full of selfishness, spite, ambition and ignorance; he wants religion to transform them, but until they are transformed presumably they will not be rewarded with the fruits of 'genuine' faith.

John Polkinghorne says:
Religious faith does not demand irrational submission to some unquestionable authority, but it does involve rational commitment to well-motivated belief.
As far as I can see, Polkinghorne's motivations for belief might go beyond scientific evidence, but he contends it is still  justified; just before the above quote he says:
...the beliefs of religious belief are sufficiently well-motivated for them to be able to commit themselves, despite knowing that in principle they may be mistaken.
This could be interpreted in two ways, I think. Committing to a belief in the knowledge that it could be mistaken looks very much like 'pretending to know things you don't know'. I think Polkinghorne, though, is talking more about how scientists mediate between theories; thanks to the underdetermination problem mentioned earlier, we often have insufficient data to choose between theories, so theories are adopted tentatively and considered provisional in principle. The motivation for that adoption is justified, but beyond the evidence. But it's not clear just how provisional a person's faith can be to qualify as genuine, and we seem to be flirting with the tu quoque again; if well-motivated religious beliefs are no different in principle to other well-motivated beliefs, like scientific ones, then everyone is faithful and there is no need to be a theist. Polkinghorne is careful to distinguish between the two, however; again in the same book he says:
Religious knowledge is much more 'dangerous' than scientific knowledge, for it can imply consequences for the way we live our lives, requiring not only the assent of the intellect but also the assent of the will.
There is that volitional element again, and consequences à la Ward. And here Polkinghorne seems dangerously close to drawing an ought from an is. More importantly, he distinguishes religious knowledge as being more 'dangerous' than scientific knowledge.

This highlights the dilemma for the faithful: as soon as someone like Polkinghorne makes such a distinction then a non-believer can attack it as an epistemic gap that justifies the characterisation of 'pretending to know'; but if they don't draw the distinction, then we are all full of faith, and a non-believer is as virtuous as a believer.

A couple more 'faith' definitions taken at random to illustrate the problem:
By faith, then, as a first approximation, we mean trusting, holding to, and acting on what one has good reason to believe is true, in the face of difficulties.
Either the 'difficulties' are peculiar to faith, in which case there is a gap non-believers can attack, or they are difficulties we all face when we conduct our daily lives, in which case we are all really believers.
The Bible knows nothing of a bold leap-in-the-dark faith, a hope-against-hope faith, a faith with no evidence. Rather, if the evidence doesn’t correspond to the hope, then the faith is in vain, as even Paul has said.
Confirming evidence against hope is open to the charge of confirmation bias, to which we are all subject. But again, if the faithful are simply humans suffering from the cognitive problems to which we are all subject, why the reward for being subject to one set of biases, and not another? Why should one be rewarded for landing on the supposed one true belief amongst many competing ones because one has been given a hope based on one's upbringing, which is arbitrary?

None of these accounts of faith adequately tackles this problem, so non-believers are left wondering what exactly faith is that makes it so special while leaving it immune to the sort of attack Boghossian launches. Even Eric's analysis throws no light on the problem, perhaps because the problem is insoluble; his notion of faith as a worldview is something that is:
...not only epistemic. It has other dimensions of meaning that [Boghossian] has a vested interest in ignoring. It’s called confirmation bias, and he falls into this particular trap throughout the book. Of course, he has faith in his wife. He just calls it trust, even though trust is an aspect of faith, and perhaps the more important aspect.
Well, perhaps; but then Boghossian and I are faithful too, like the Pope! Well not exactly like the Pope, since he has no spouse. But there's the rub; the target of the faith is important.

I hasten to add, this is no endorsement of Boghossian's book, which I have not read in its entirety because its writing doesn't appeal. I'm not persuaded that faith is a cognitive sickness; I suppose it's possible we could describe humanity as suffering from a pandemic of cognitive sickness, but the processes involved seem so fundamental to what makes us human, it would cast everyone as sick, and that hardly seems helpful. My difficulty, after all, is providing an adequate account of the distinction between the faithful and the faithless. To me the faithful are simply mistaken, not sick. I'm no psychiatrist, though!

But this means that believers too will need to establish much more rigorous (not more verbose, they could hardly be more verbose) accounts of faith than any I've seen if they want to persuade non-believers that there is nothing to the charge that they are 'pretending to know'. Some of us have personal experience that that is exactly what it is!


Saturday, 7 June 2014

Hume on Miracles

Many people take David Hume's argument against miracles as discounting the possibility of miracles, or somehow loading the dice against theism. Consider Craig Keener in his 1248 page magnum opus Miracles:
[Hume] argues, based on “experience,” that miracles do not happen, yet dismisses credible eyewitness testimony for miracles (i.e., others’ experience) on his assumption that miracles do not happen. (Kindle Locations 4325-4326)
I will argue that to succeed logically [Hume's] approach must presuppose atheism or deism. (Kindle Location 4320)
Thus, on the usual reading of Hume, he manages to define away any possibility of a miracle occurring, by defining “miracle” as a violation of natural law, yet defining “natural law” as principles that cannot be violated. (Kindle Locations 4680-4682)
I cannot see that Hume assumes miracles do not happen, nor that he requires a presupposition of atheism or deism, nor that he defines natural law as principles that cannot be violated; the conclusion of his argument is simply that a rational person would not believe a miracle claim. This, though, is based on his view of knowledge acquisition; the argument itself looks valid to me (see below), if I'm reading him right, so to attack Hume one would have to attack the premises of his argument and show it to be unsound. The argument does not, in fact, rule out miracles a priori, but if someone agrees with his account of knowledge acquisition (which account does not exclude theists, though they might not consider it exhaustive) he shows that they would not believe miracle claims, if they were being rational.

Has a miracle occurred? Just to consider this question is to allow that miracles are not ruled out a priori. If, for example, a miracle is defined as a supernatural event, as it commonly is, and a person thinks the supernatural is impossible, they would conclude that miracles are impossible. This ‘hard naturalist’ argument goes something like:
Premise 1: Miracles are violations of natural law
Premise 2: Natural law is eternally inviolable
Conclusion: Miracles never occur
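The syllogism can be formalised; a sketch in Lean (the names are mine, for illustration):

```lean
-- Premise 1: every miracle violates natural law.
-- Premise 2: natural law is never violated.
-- Conclusion: no miracle occurs.
theorem hard_naturalist {Event : Type} (Miracle Violates : Event → Prop)
    (p1 : ∀ e, Miracle e → Violates e)
    (p2 : ∀ e, ¬Violates e) :
    ¬∃ e, Miracle e :=
  fun ⟨e, me⟩ => p2 e (p1 e me)
```

The formal validity is trivial; the contentious part is whether Premise 2 is anything more than an assertion.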
Natural law has developed over the centuries and seems likely to continue to develop, so the problem is: we don’t know what the natural laws are precisely, so when an event violates the natural laws as we understand them, we do not know if this is because we have the natural laws wrong or because the event is genuinely supernatural. The hard naturalist above would counter that this is just an epistemological problem; in principle, there are inviolable natural laws, so in principle supernatural events are impossible. But a posteriori we have not confirmed their inviolability, so this is just an assertion. A believer in miracles could simply assert in response that natural law is just very rarely violated.

Hume takes a more subtle approach. Only our experience guides us in ‘matters of fact’, he writes in ‘Of Miracles’. Bear in mind his famous 'fork':
All the objects of human reason or enquiry may naturally be divided into two kinds, to wit, Relations of Ideas, and Matters of fact. Of the first kind are the sciences of Geometry, Algebra, and Arithmetic ... [which are] discoverable by the mere operation of thought ... Matters of fact, which are the second object of human reason, are not ascertained in the same manner; nor is our evidence of their truth, however great, of a like nature with the foregoing.
The distinction between 'relations of ideas' and 'matters of fact' is similar to the a priori/a posteriori distinction, and is commonly made by philosophers; Descartes drew a similar distinction in his method of doubt, for example. So we take into account all our (sometimes competing) experiences and draw conclusions about the world from them in proportion to the evidence they supply. If our conclusions are based on ‘an infallible experience’ we consider this a full proof for the future; if not, we proceed as cautiously as our past experience dictates. Hume is describing here how people generally behave, and perhaps should behave: by weighing the available evidence. That weighing is surely the source of the word ‘rational’, with its ‘ratio’ root.

Hume discusses human testimony, and agrees that there is a ‘useful conformity of facts to the reports of witnesses’. Experience tells us that memory is ‘tenacious’, people are usually truthful and being caught lying is shameful. Testimony can be false, but very well attested reports are a ‘proof or a probability’ that an event has occurred. If that most unusual event, a miracle, occurs, and the testimony is so good that it would normally be considered a proof, we have a ‘proof against proof’.

Hume's argument, then, does not rely on a premise that testimony is unreliable per se, although this is often claimed when his argument is discussed; consider theologian Randal Rauser here, responding to a miracle sceptic:
Mike is apparently invoking an epistemic principle like this:
Testimony skepticism principle (TSP): Carefully documented testimonial evidence has negligible evidential value because testimony has been shown to be unreliable.
While testimony can be unreliable (and that is an important part of Hume's argument - see P4 below) this does not mean it has 'negligible evidential value'; in fact, Hume specifically says well attested reports can be a 'proof' that an event has occurred:
And as the evidence, derived from witnesses and human testimony, is founded on past experience, so it varies with the experience, and is regarded either as a proof or a probability, according as the conjunction between any particular kind of report and any kind of object has been found to be constant or variable.
But since Hume defines a miracle as ‘a violation of the laws of nature’, which have been established by our ‘firm and unalterable experience’, there can be no better argument from experience than the one that supports the laws of nature. Hume invites us to weigh two arguments from experience; this is not an a priori matter, but an empirical one. A rational person must agree, then, that the chances of the natural laws being wrong are less than the chances that the best imaginable testimony could be wrong. So Hume’s argument is more like this:
Premise 1 The evidence for matters of fact is established by empirical enquiry
Premise 2 Both natural law and testimony are matters of fact
Conclusion 1 The evidence for natural law and testimony is established by empirical enquiry
Premise 3 Empirical enquiry records those things that occur reliably, without violation, as natural law
Premise 4 Empirical enquiry shows that testimony is often reliable, but not without violation
Premise 5 A rational person believes matters of fact in proportion to the evidence gathered by empirical enquiry.
Conclusion 2 A rational person believes a violation in testimony is more probable than a violation of natural law
Premise 6 Miracles are a violation of natural law
Conclusion A rational person always believes a violation in testimony before she believes a miracle has occurred
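Hume's 'proof against proof' weighing can be given a toy numerical form. This is only an illustrative sketch: the function name and every figure below are my assumptions, not anything Hume supplies; the point is simply the asymmetry the argument trades on, between the failure rate of even excellent testimony and the failure rate of a natural law supported by uniform experience.

```python
# Toy illustration of Hume's 'proof against proof' weighing.
# All numbers and names here are illustrative assumptions, not Hume's.

def favours_testimony(p_testimony_false, p_law_violated):
    """A rational person accepts the miracle report only if a false
    report would be *less* probable than a violation of natural law."""
    return p_law_violated > p_testimony_false

# Even very reliable testimony (wrong, say, once in 10,000 reports)...
p_testimony_false = 1 / 10_000

# ...is weighed against a law confirmed without exception over, say, a
# billion observations (Laplace's rule of succession gives roughly
# 1/(n + 2) as the chance the next observation violates it).
p_law_violated = 1 / (1_000_000_000 + 2)

print(favours_testimony(p_testimony_false, p_law_violated))  # False:
# the testimony's failure is the lesser 'miracle', so it is rejected.
```

Whatever figures one plugs in, the structure is the same: the report is accepted only when its falsehood would be the greater marvel, which is just Hume's maxim.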
A note on P6: other definitions are available. Some say that divine intervention, while supernatural, does not necessarily violate natural law. But without the unusual stamp of a violation, any other divine event would appear to fall more properly under the notion of divine providence, covering the orderly conduct of the cosmos. However, in a footnote Hume says:
Sometimes an event may not, in itself, seem to be contrary to the laws of nature, and yet, if it were real, it might, by reason of some circumstances, be denominated a miracle; because, in fact, it is contrary to these laws. Thus if a person, claiming a divine authority, should command a sick person to be well, a healthful man to fall down dead, the clouds to pour rain, the winds to blow, in short, should order many natural events, which immediately follow upon his command; these might justly be esteemed miracles, because they are really, in this case, contrary to the laws of nature.
...which suggests that any divine intervention is miraculous. Even if an event does not seem contrary to natural law, it could still be. P3 derives from this famous sentence:
A miracle is a violation of the laws of nature; and as a firm and unalterable experience has established these laws, the proof against a miracle, from the very nature of the fact, is as entire as any argument from experience can possibly be imagined.
Keener surely agrees with P3 when he says:
Natural law is, after all, merely our construct of how nature functions. (Kindle Locations 4675-4676 - my emphasis)
...but then says:
If one chooses to define natural law in such a way as to make variation from it impossible, one has simply redefined words about reality rather than made an argument, and someone else could counter by redefining “miracle” as part of that reality. (Kindle Locations 4676-4677) 
But Hume clearly doesn't define natural law to make variation impossible; he defines it much as Keener does. It should by now be clear that Hume is not defining natural law as uniform and inviolable; he is saying that what we experience as uniform and inviolable we call natural law (it is 'our construct', to use Keener's words). This is an important difference, because it means natural laws can be violated: we can be wrong about what is uniform and inviolable.

Keener cites many critics objecting to Hume's definition of miracle in P6. That is fine, but theists seem to be caught between a rock and a hard place: either miracles are part of the natural world, in which case their effects can be monitored like every other natural event and evidence accumulated for or against them, or they are not, in which case Hume's definition appears as close as anything they might offer. A miracle needs to be an exception, not prosaic, and this definition communicates that well, so I will use it for the purposes of this discussion.

The argument looks perfectly valid to me, with no hidden assumption of atheism; it simply states what Hume takes to be how humans acquire knowledge, be they theist or atheist. There is no premise that states that miracles do not happen. They may in fact happen; but, the argument says, a rational person could not accept any report of their happening.

Hume draws P1 from his ‘fork’, which, as we've seen, distinguishes between abstract reasoning, like mathematics, and matters of fact – the a priori and a posteriori. Hume is firmly setting miracles in the domain of a posteriori arguments, and would reject the hard naturalist argument presented further up, since it is not an a posteriori argument. The premise can be attacked on the grounds that there are ‘other ways of knowing’; in particular, revelation, intuition and religious experience. If there is justification other than empirical enquiry for belief in matters of fact, the argument would be unsound, and to some people certain matters of fact have a certainty that does not submit to empirical proof. Consider these quotes from theists in William James’s The Varieties of Religious Experience:
I don't think I ever doubted the existence of God, or had him drop out of my consciousness.
What I felt on these occasions was a temporary loss of my own identity, accompanied by an illumination which revealed to me a deeper significance than I had been wont to attach to life. It is in this that I find my justification for saying that I have enjoyed communication with God.  
The suggestion is that there are supernatural things we know a priori and not a posteriori, so the fork should be a trident. It’s conceivable too that we know natural things a priori, but that is not important in this scenario; what is important is the claim that supernatural facts are not discovered empirically. This objection can be addressed by tightening the argument; we replace ‘matters of fact’ in P1, P2 and P5 with ‘matters of natural fact’. This is stretching Hume’s words a little, but it is still a pretty close reflection of Hume’s argument. This still rules out believing miracle testimony since, if the premises are correct, the communication of any miracle claim is natural (testimony is communicated naturally), so its reliability is tested empirically, and empirically it must be less reliable than natural law. The only defeater here would be to establish non-natural testimony, which leads us to the second objection.

The original argument might be attacked on the grounds that empirical enquiry reveals supernatural facts among the natural facts. Hume asks us to reject the greater miracle, and the ironic title of J. L. Mackie’s The Miracle of Theism hints at something. Mackie writes:
...I hope to show that [religion’s] continuing hold on the minds of many reasonable people is surprising enough to count as a miracle in at least the original sense. (p.12)
The ‘original sense’ here is ‘something surprising or marvellous’ (p.11), but the very survival of churches to the present day, the faith professed by the ‘many reasonable people’ down the centuries, and the wisdom and miracles recorded in holy books could amount to a testimony which is a miracle in itself. So, the argument goes, a reasonable person should consider it a greater miracle that this ‘testimony’ be wrong than that the laws of nature are violated. Is it not a more parsimonious explanation of this long history that some supernatural agency does intervene in unusual ways occasionally, and testifies through the Bible or the Quran, for example, and delivers revelations and experiences to people whose testimony survives, supernaturally?

This is possible, and the history does call for an explanation. The objection attacks P3, suggesting that empirical enquiry does not just record regularities as natural law, but also records irregularities from the natural law as, at least, marvels, and at best, miracles. Keener raises a similar objection, saying Hume's argument is:
...a circular argument that excludes the evidence of the claim supposedly under consideration and other claims like it. (Kindle Locations 4387-4388)
The point is, I think, that P3 excludes the violations which are under debate, automatically disallowing miracles. But, as discussed, Hume's argument simply states what humans do: we call regularities 'natural laws'. There is no circularity, just a description of the processes involved, and an observation of the respective evidences. In any case, it's easy to adjust Hume's premise to account for some irregularities and completely sidestep the objection:
Empirical enquiry records those things that occur reliably, almost without violation, as natural law
After all, even well-established natural laws such as Newton's continue to be used as natural laws even though we know that the precession of Mercury's orbit, for example, does not conform to them (and with Einstein we have a new, more accurate understanding of the natural laws). Science accepts that our understanding of the natural laws is incomplete and tentative, and this is very much in keeping with Hume's epistemology, a sceptical approach to enquiry that stops short of Descartes' extreme method of doubt. Nevertheless there is still a large asymmetry between the evidence for the regularities (the physical sciences) and the evidence for the irregularities (historical enquiry and theology), and that asymmetry is increasing, so the objection is unconvincing. There is less and less space left for miracles.

The ‘miracle of theism’ claim also falls foul of Hume’s further objections to miracles. To employ these in Hume’s order: firstly, despite the claims of supernatural intervention there does seem to be an available natural explanation for even the most successful religion. Great intelligence is no barrier to deception nor indeed to self-deception, and ardency can be a sign of an ulterior motive or that reason is no longer being applied to a person’s beliefs. To prefer any natural explanation does imply that natural explanations must be more likely than miracles, but given miracles’ status as exceptional events evoking wonder and awe, compared to providence, this is not an unreasonable assumption.

Secondly, humans derive a great sense of ‘surprise and wonder’ from miraculous reports and this feeling pre-disposes us towards believing them and repeating them.

Thirdly, many miracle reports date from pre-industrial times, reported and written down by people who were ignorant of many natural facts about the world. Their credulity is excusable, and perfectly natural.

Fourthly, there is such a diversity of religious belief in time and space that any miracles used in the service of any religion are outweighed by the miracles used in the service of all the others. The maths seems inescapable; the resurrection of Jesus testifies that Christ is the son of God, but many believe that the prophet Mohammed mounting a flying horse testifies against Jesus’ divinity. Miracles are also used in the service of Judaism, Hinduism, Buddhism and more. So even if the supernatural is allowed, the evidence for any subset of miracles is outweighed by the evidence for the remainder. And the vaguer the claim a miracle is supposed to support, the less power it has to persuade: the less a miracle is attributable to a particular religion, the less pertinent it is to that religion’s claims.
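The arithmetic here can be made concrete with a toy sketch. The report counts below are invented purely for illustration (not real data); the point is only that, whichever tradition one selects, the miracle reports serving all the rival traditions collectively outweigh it.

```python
# Illustrative sketch of Hume's fourth objection: miracle reports for
# each tradition count as evidence *against* rival traditions'
# specific claims. All counts are invented for illustration.
reports = {
    "Christianity": 500,
    "Islam": 400,
    "Hinduism": 450,
    "Buddhism": 300,
    "Judaism": 250,
}

total = sum(reports.values())
for religion, supporting in reports.items():
    opposing = total - supporting  # reports serving rival traditions
    print(f"{religion}: {supporting} for, {opposing} against")

# Every tradition's supporting reports are outweighed by the rest:
assert all(count < total - count for count in reports.values())
```

Only if one tradition claimed a majority of all miracle reports would its evidence survive this mutual cancellation, and no tradition plausibly does.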


There are, no doubt, other objections to Hume’s logic, but simply detailing hundreds of miracle claims does not cut the mustard unless the weight of all that evidence outweighs our 'firm and unalterable experience' of the natural law. Needless to say, even a book of 1248 pages does not come close; the theist would have to surmount all the evidence of all the physical sciences. That is extremely unlikely, so theists should save their breath and attack the premises of Hume's actual argument instead. Stories of 'miraculous' recoveries from illness, for example, while heart-warming, will simply never outweigh the evidence for the natural laws.

One further twist: what if one experienced an event that looked miraculous? If our senses are natural, our own ‘testimony’ is subject to the same objection Hume presents: a rational person would believe it more likely that our senses are wrong than that the natural laws, as currently understood, have been violated. I confess I wonder whether I could dismiss such an experience if it were particularly convincing! After all, many have been hoodwinked by charlatans, and I'm just as susceptible to a vivid experience as the next man. Ultimately, though, unless there is some unambiguous way to distinguish natural from supernatural events, one should dismiss miracle claims and even miracle experiences.

This has obvious ramifications for many theistic claims. Consider the resurrection; many theists have made great efforts to establish its historicity. At this remove, Hume's argument would suggest that this is impossible on empirical grounds, and it's hard to disagree. Note that this does not oblige theists to disbelieve in the resurrection; if they think they have non-empirical grounds, or a method to distinguish natural from supernatural experience, this is open to them. But unless theists can remove testimony from the realm of the a posteriori, miracle testimony should never be used as the justification for miracle belief, even by them.


Chappell, T. (2011) The Philosophy of Religion, Milton Keynes, The Open University.

Cottingham, J. (ed.) (2008) Western Philosophy: An Anthology, Oxford, Blackwell Publishing.

Keener, C.S. (2011) Miracles: The Credibility of the New Testament Accounts, Baker Publishing Group, Kindle edn.

Mackie, J.L. (1982) The Miracle of Theism, Oxford, Oxford University Press.
