Wednesday, 17 February 2016

Do People Choose What is Good for Them?

From Blick
Jerry Coyne has a post up titled How Iranian women would dress if the theocracy disappeared. It shows a number of Iranian women from the 1970s in fashionable western dress. It seems obvious that (some) women would dress differently if there were no theocracy in Iran, although there is perhaps an implicit assumption behind the title that the women depicted before the theocracy were free from societal coercion, which seems unlikely; we are all subject to conformity pressures, after all. An apologist for Islam might point out that the extra flesh displayed by the 70s women indicates sexism in the 'western' East just as the covering of flesh indicates sexism in the modern-day Islamic East. As one of the commenters, Heather Hastie, says, "Things are better in the West, but there are still big problems."

I think a *diversity* of dress is indicative of a free society, but we observe that humans are clubbable animals, so they are not making these choices in a vacuum (if they are making choices at all, pace free will debates). Some people might choose to show more flesh, some might choose to show less; the reasons for their choices can range from the fashionable to the political.

Serious scholars have observed that some women *chose* to don the veil in the 1970s ("In the 1970s, often to the consternation of parents and siblings, certain progressive young Arab women voluntarily donned the veil." - http://www.amazon.co.uk/Veil-Modesty-Privacy-Resistance-Culture/dp/1859739296). Should we defer to the autonomy of these women in the choices they make? Or should we complain that their choices are somehow malformed? What is to stop the Muslim apologist from suggesting that it is women in the West who are slaves to culture?

These questions give me a chance to discuss some philosophy of well-being. Some think that final values (those things we *should* pursue) reside in objective things, like life, knowledge, community, health, wealth and so on; Martha Nussbaum argues for a flavour of this. Others argue that we should simply satisfy our preferences; Harriet Baber, for example. Simplistically, the problem with the first idea is determining what the objective list actually contains. One problem with the second idea is what to do if people 'prefer' to embrace normally 'illiberal' things, like the veil: must we withhold objection, simply because we should let people follow their preferences? An objection to this sort of preferentism is that preferences can be malformed in some way; these so-called 'adaptive' preferences might, therefore, pose a problem for preferentists. They may, for example, cause difficulties if we want to improve the conditions of people in apparently straitened circumstances. I shall examine those difficulties and illuminate them through Nussbaum’s arguments and Baber’s replies.

Terms

Preferentism, or ‘preference-satisfaction welfarism’ (Barber, 2014, p.47), is the theory that final value resides in a person’s well-being, and well-being is delivered by satisfying a person’s preferences. By final value I mean what people should rationally direct their efforts to attaining; as Baber says, ‘what’s basic or fundamental, good in and of itself’ (The Open University, 2014a, 0:30). It contrasts with hedonism, which also proposes well-being as the final value but holds that well-being is delivered by happiness; and with objectivism, which instead suggests there is an objective list of things in which final value resides, such as life, knowledge, community, health and so on. Preferentism has an advantage over objectivism in avoiding the dangers of an authoritarian imposition of values that are not universally agreed upon, but it opens itself up to charges of relativism – perhaps anything could have final value under this theory.

The meaning of adaptive preferences is not altogether clear. Alex Barber says a ‘person’s set of preferences is adaptive if he or she has taken it on in response to limited options’ (Barber, 2014, p.63). But some of these preferences appear unproblematic, such as someone wanting to be a football referee rather than playing, say, because they lack footballing ability. Nussbaum’s conception of the term reflects her theory: she suggests an objective list of capabilities that all humans should have and want. These ‘Central Human Capabilities’ (Nussbaum, 2001, p.154) include things like being able to have good health, being able to laugh and play, and being able to enjoy a life of normal length.

Nussbaum writes:
People’s liberty can indeed be measured, not by the sheer number of unrealizable wants they have, but by the extent to which they want what human beings have a right to have. (p.160)
Presumably the perfectly free, then, will want Nussbaum’s capabilities. So if someone does not want the capabilities that, she suggests, constitute basic human rights, that would reflect their lack of liberty. For example, any preference that threatens the ability to have good health reflects a restriction on that person’s freedom to choose, so would be adaptive. And while considering Amartya Sen’s work, Nussbaum mentions ‘life-long habituation’ and says ‘most of the interesting cases do involve life-long socialization and absence of information’ (p.161).

The Problem

Nussbaum thinks that adaptive preferences pose a problem for preferentism because they prevent us from challenging institutions that threaten the capabilities she considers central: ‘some existing preferences are actually bad bases for social policy’ (p.154). In particular, she is addressing the limited options women have in many societies. Her approach recognises that certain power structures are so deeply ingrained that even their victims consider them desirable.

This hints at a wider concern about preferentism: that without some objective standard it’s difficult, if not impossible, to push for change where people express satisfaction with the status quo.

Baber

Harriet Baber, following J. C. Harsanyi, draws a distinction between a person’s manifest preferences (those they express and reveal by their behaviour) and their true preferences. For a person’s preferences to be true, she says:

1) they must be fully informed, so that those that result from misinformation are deformed;
2) they ‘must be free in the broadest sense’ (Baber, 2007, p.107), so that those that manifest as a result of high passion, for example, are deformed;
3) they cannot derive from moral obligations.

Baber thinks Nussbaum ignores the distinction between manifest and true preferences above, and also the ‘dispositional nature of preference’ (p.108): our behaviour sometimes does not reveal our true preferences - for example, many will shop at Claire’s Accessories when they would rather shop at Tiffany’s. Baber says that:
"Adaptation" is irrelevant: if I want something, getting it is good for me regardless of how I came by that desire; if getting what I choose does not benefit me, it is because what I chose is not something that I want. (p.110)
Baber denies that the genesis of a preference matters, whilst also allowing (in the three points above) that a preference’s genesis can be flawed – an apparent contradiction.

The quote does reflect preferentism’s ‘apparent ass-backwardness’ (The Open University, 2014a, 10:20), which suggests that what is good is what we prefer, not that we prefer what is good. It follows that the satisfaction of an individual’s true preference cannot itself represent an unjust state of affairs, which further suggests that people’s true preferences define the justice or injustice of institutions.

Adaptive True Preferences?

Baber argues plausibly that we all have a preference ranking, and that to choose, and to express, a sub-optimal preference does not show that the preference is deformed or adaptive; someone may just consider it the choice that achieves the best outcome in the circumstances. People ‘have a certain fundamental character represented by their preference rankings’ (9:55), revealed when someone jumps at an opportunity once it is offered. She cites evidence from Nussbaum’s own work (Nussbaum studies women in non-western cultures) to show that Nussbaum’s subjects do betray a series of preference rankings, since they jump at the chance to exercise political power when it is offered. The extent to which people would jump at such opportunities, Baber says, exposes how unjust a state of affairs is.

So here she presumably agrees with Nussbaum that some preferences are bad bases for social policy. Ultimately, however, Baber is still committed to a final set of preferences that is fundamental, and Nussbaum can target those preferences as adaptive whilst granting that expressed preferences can change according to circumstance. Consider the following diagram:

[Diagram: a person's preferences ordered on a single line from Pa through Pm to Pz, with a further point P? beyond Pz]
Pa to Pz is a person’s ordered set of preferences, Pz being the final preference;
Pm to Pz measures the injustice a person suffers, per Baber;
Pz to P? is the putative adaptive preference suggested by Nussbaum, indicating an additional injustice.

Preference Inception

Baber speculates:
...if [Jayamma] were offered a promotion or a raise she would jump at it, since there is no reason to think that she is any different from most people who prefer more money to less money and would rather not spend their days hauling bricks if other options were available. (Baber, 2007, p.111)
Well, perhaps, but the notion that some people are not driven by money and prefer simpler, more basic work is not so outlandish that Baber can assume it is not the case for Jayamma. Baber thinks it’s a fundamental preference of her own that she would never like to go shopping for clothes (The Open University, 2014a, 9:40). Presumably she goes shopping for clothes occasionally, but that is just a manifest preference; her true preference is never to go shopping for clothes. It seems likely that the subjects of Nussbaum’s research, like Jayamma, also only occasionally go shopping for clothes. Jayamma may have a manifest preference to go shopping occasionally, because of her limited options; but her true preference may be to go shopping, or never to go shopping (like Baber); we don’t know.

But should we believe Baber when she says she doesn’t want to go shopping, and not Jayamma, if that’s what she says? Nussbaum has an account that can answer this, whilst Baber’s seems inadequate.

Either way, manifest or true, Nussbaum can suggest the preference has been habituated, and that there is another preference (P?) which the subject is not free to prefer. And she could suggest that Baber’s true preference is habituated too. Maybe Baber’s upbringing has prevented her from appreciating the joys of shopping; perhaps an ascetic, academic background has closed her off from some of the finer things of life, like beautiful clothes?

Baber says that Nussbaum ‘doesn’t seem to realise how little room to manoeuvre most of us have’ (The Open University, 2014b, 3:30). According to Baber we all have revealed preferences that change according to the room we have to manoeuvre, but we also have true preferences that are, if not cemented in, still our final preferences.

But those final preferences are dispositional and ultimately down to the individual, as Baber concedes when she recounts the story of the Harvard academic who has chosen to spend her time counting blades of grass rather than doing something more apparently worthwhile, like teaching her students or writing papers. In the end, being a preferentist, she says ‘...De gustibus. Keep counting.’ (The Open University, 2014a, 12:00).

Unjust Institutions

But some true preferences seem to result from institutions that are obviously unjust. Baber struggles with the example of human trafficking and the story of Srey Mom, for instance, ‘rescued’ by Nicholas Kristof from a brothel, but who then returned to it. ‘It is not so clear’, she writes, ‘that it would have been better for Srey Mom to go back to her village or get an honest job sewing sneakers’ (Baber, 2007, p.120). This is the same problem I suggested might arise with Jayamma: we just don’t know what someone’s true preference is, and Srey Mom’s state of affairs may be her truly preferred state of affairs, or merely a manifest preference.

This presents two problems for the preferentist:

1. While Baber maintains that the gap between manifest and true preference (Pm to Pz) measures how much a person would change their circumstances, given the option, we don’t know what that gap is ahead of time, so it’s difficult to know what actions to take to relieve individual situations.
2. Even if we grant the preferentist account, it is resistant to any objective measurements, such as health indicators and life expectancy. Since preferentists are committed to satisfied preferences being the good, there will be no impetus to improve those measures (except insofar as people prefer them).

As a straight matter of fact, then, if one’s goal is to improve those indicators, preferentism has a problem. Baber would reply that our goal should not be to improve those indicators, per se. Even if the preferentist allows revealed preferences to be adjusted for problems in their provenance, ultimately the preferentist is committed to prioritising autonomy, from wherever it springs, over any objective measures of well-being. In the end, a preferentist like Baber must bite the bullet and accept that a human-trafficked prostitute’s situation can be just.

Conclusion

Preferentism would deliver a world where people’s autonomy is observed and a diverse range of lifestyles and cultures is accommodated and respected. However, it presents an epistemic problem when we are confronted with an apparent case of injustice – is it really unjust? And, further, while it is plausible that individual autonomy is good, the notion that what we choose just is good relies on an idealised self rather than the messier self of real life; we are none of us causa sui, and we have all been habituated to a degree.

As a liberal Millian type of character, I want to encourage self-expression, even if it doesn't reflect my values - in fact, *because* it doesn't reflect my values. But simply deferring to agents' autonomous wills does not seem workable without some anchoring in our state of being; some recognition of our human condition must be included in any account of rational action to avoid a rational relativism which can be destructive to human lives. So I'm a little sceptical when folk claim to know how things would be, given the removal of some obstacle to freedom, even if we agree that the obstacle (like theocracy) should be removed; unadulterated choice is still not available, and something like Nussbaum's 'Central Human Capabilities' is needed if we are to navigate our way toward a healthy society.

Bibliography:
Baber, H. E. (2007) ‘Adaptive preference’, Social Theory and Practice, vol. 33, no. 1, pp. 105–26.
Barber, A. (2014) Reason in Action (A333 Book 2), Milton Keynes, The Open University.
Nussbaum, M. C. (2001) ‘Nussbaum on adaptive preferences and women’s options’ in Barber, A. (ed.) (2014) Reason in Action (A333 Book 2), Milton Keynes, The Open University.
The Open University (2014a) ‘Baber on welfarism and adaptive preferences (Part 1)’ [Audio clip], A333: Key questions in philosophy. Available at https://learn2.open.ac.uk/mod/repeatactivity/view.php?id=458507 (Accessed 11 Jan 2015).
The Open University (2014b) ‘Baber on welfarism and adaptive preferences (Part 2)’ [Audio clip], A333: Key questions in philosophy. Available at https://learn2.open.ac.uk/mod/repeatactivity/view.php?id=458508 (Accessed 11 Jan 2015).


Wednesday, 13 January 2016

Liberalism

Liberalism poster, by Floris van den Berg


Monday, 21 December 2015

Richard Dawkins FOR Children Believing in Santa Claus, Christian Apologist AGAINST


The common perception of Richard Dawkins is of a baby-eating, kitten-crushing ultra-realist, denying fun to everyone, especially children. See here, here, here and here, for example, in which various papers accuse him of wanting to deny all the fun of Santa Claus to the kiddy-winks, and MP Tom Watson calls him a 'soulless bore' for wanting to 'ban fairy tales'.

In fact, Dawkins sees some value in fairy tales; as he explained:
Fairy stories might equip the child to reject supernaturalism when the time comes … Santa Claus again could be a very valuable lesson because the child will learn that there are some things you are told that are not true. Now isn't that a valuable lesson? Unfortunately it doesn't seem to have had the desired effect in some cases, because after children learn that there is no Santa Claus, mysteriously they go on believing that there is a God.
So the media caricature of Dawkins is wide of the mark again, even if he is still somewhat Professor Yaffle-like.

William Lane Craig has weighed into this momentous and seasonal debate with a new Q&A, in which he recommends that children not be led to believe in Father Christmas, but to 'make-believe' him:
Saint Nicholas was a historical figure, an early church bishop. We can teach our children about who he was and explain how people like to make-believe that he comes and brings children presents today at Christmas time. Children love to make-believe, and so you can invite them to join in this game of make-believe with you. When you see a Santa at the shopping mall, say, “Look, there’s a man dressed up like Saint Nicholas! People pretend that he is Saint Nicholas. Would you like to tell him what you want for Christmas?”
Hoorah for Xmas! Ho-ho-no! There's a pretend Santa over there! Would you like to tell pretend Santa what you want for Christmas? 'What the hell for?' might be the reasonable reply.

How nice it would be if the media advertised the religious who want to banish magic and fun from the festive season as much as they do puppy-murdering atheists.

Craig's parenting has led to some unhappy friends, I suspect:
My daughter said that our policy of telling the children Santa is make-believe led to “some interesting conversations” at school with children who said that Père Noël exists. “No, he doesn’t!” Oops! I find it rather ironic that it was our children who were the free-thinkers and sceptics when it came to Santa Claus. Best to tell your children that while we know Santa is a just a fun, make-believe figure, they shouldn’t upset other parents who haven’t been so honest with their children as we have.
So Dawkins is less the killjoy than Craig on this occasion; one can imagine the other parents advising their kids not to listen to the Craigs minor. Stop spoiling the magic, WLC! At one point he seems to be channelling Dawkins when he says 'Maybe the whole Christmas story is a myth which thinking adults should outgrow'. Hallelujah! But, sadly, I think he means the story of Saint Nicholas, not the Nativity.

Nevertheless, good to see Craig instilling some scepticism in his children; maybe they can carry that through to their religious beliefs too.


Friday, 27 November 2015

Atran on Coyne


Jerry Coyne recently posted a piece called 'Once again, Scott Atran exculpates religion as a cause of terrorism', complaining about Scott Atran's apparent apology for religion in a recent article in the Guardian. A typical taste of Coyne's complaints:
When I read Atran’s brand of Islamic apologetics, and when I think of the terrorists’ cries of “Allahu Akbar” that accompanied their Kalashnikov fire, and when I ponder why young men out for just “a good time, a cause, and brotherhood” would do these deeds knowing they were surely going to die (and probably believing that, as martyrs, they’d attain Paradise), and when I think of the other deeds they do—the slaughter of Christians, Yazidis, apostates, atheists, and gays, and of the way they treat women like chattel, raping their sex slaves and stoning adulterers—when I think of all this, and the explicitly Islamic motivations the terrorists avow, I have to ask people like Atran: “WHAT WOULD IT TAKE TO MAKE YOU ASCRIBE ANY OF THEIR ACTIONS TO ISLAM?”
Perhaps fair enough, although I'm not sure I fully understand Atran's position; this article suggests he does acknowledge that motivations aren't entirely political:
Some officials speaking for Western governments at the East Asia summit in Singapore last April argued that the Caliphate is traditional power politics masquerading as mythology. Research on those drawn to the cause show that this is a dangerous misconception. The Caliphate has re-emerged as a seductive mobilizing cause in the minds of many Muslims, from the Levant to Western Europe.
Atran appears to have commented on Jerry's piece, saying:
I recommend some of the commentators, as well as the principal author, read some of our scientific papers inScience, PNAS, BBS, and reports of others in Nature, You might also glance at articles and editorials in the NY Times, Foreign Policy, Wall Street Journal etc. I never made an argument that “religion” is not a cause of terrorism. “Religion,” in fact, is as empty a notion (scientifically speaking) as “culture.” What I said is that the propositional content of some religious canon is not a principal predictor for may joining Al Qaeda (and now ISIS), and that, the principal predictors have to do with social network factors. Intel and military have used these finding to help break up those networks. Counter canon narratives have done absolutely nothing at all to stop violence or dissuade ISIS volunteers. In other findings, most recently reported in PNAS and NATURE, we detail how commitment to strict sharia of a form practiced by the Islamic State Caliphate, and Identity fusion (a particular type of social formation), although independent (largely uncorrelated) interact to predict costly commitment to costly sacrifices, including fighting and dying.
Mr. Coyne, like Mr. Harris, are not interested in the science, at least on this issue, but in continuing their declamations against “liberal apologetics.” Neither has ever had any dealings with volunteers or fighters from ISIS and Nusra (accepted perhaps reformed ones in safe settings), they have never been to the frontlines of combat zones to see for themselves what motivates fighters. They have never systematically interviewed or psychologically tested volunteers for such movements. And they have never tried, or been asked by those actually fighting ISIS or Al Qaeda to help in the fight because their proposals are, quite frankly, ridiculous. They are like angry children who believe that yelling at the top of their lungs will change the world. Like many politicians and pundits, willful ignorance of the science that bears on this issue is understandable (good argument is, by and large, used for persuasion and victory in social discourse, not discovery of the reason). The sad thing is that their followers believe they have scientific credentials that must give them knowledge ot support their arguments. But even Nobel prize winners have no special insight into social and political affairs, and their views should be scrutinized without passion by their peers (wishful thinking, I know).
It's again difficult to tell what his position is, because he denies that he argues that ‘“religion” is not a cause of terrorism’ (apologies for the double negative) but goes on to describe it as an empty notion (scientifically speaking), and to cite 'social network factors' as the principal predictor of joining 'Al Qaeda (and now ISIS)', which together appear to suggest that he is arguing that religion is not a cause of terrorism. I've posted this comment:
Thanks for making this comment. I assume it’s genuinely Scott Atran! It would be helpful if you could recommend one or two links that you think particularly address the issues raised here. You say that you ‘detail how commitment to strict sharia of a form practiced by the Islamic State Caliphate, and Identity fusion (a particular type of social formation), although independent (largely uncorrelated) interact to predict costly commitment to costly sacrifices, including fighting and dying’. Apologies, but I don’t understand what that means! So bear with an interested bystander for a mo, if you can.
I think as a layman I can appreciate that a frankly perverse organisation like ISIS has multifarious causes; obviously billions of religious people don’t behave that way, so ‘religion’ is not explanatory in that sense, and might be, as you say, an ’empty’ notion. But a similar observation could be made about the term ‘politics’ and yet no-one would deny (or would they?) the political motivations of communism as an important factor in Stalin’s actions, for example. Perhaps the vast majority of communists would not have indulged in purges, so it would be correct to say that there is some other predictor of those particular actions. Nonetheless, the communism played a part, is it reasonable to say?
Furthermore, just about every theist I’ve met would not recognise their religion as an empty notion.
This suggests that saying that ‘religion’ is an empty notion in *some* sense is a weak rejoinder to anyone who argues for or against the effects of religious beliefs, and unlikely to persuade either the irreligious or the religious that religious beliefs should not be criticised (or praised).
So someone who thinks that way can accept your (no doubt firmly supported empirically) view that ‘the principal predictors [for joining Al Qaeda (and now ISIS)] have to do with social network factors’, whilst still decrying the deleterious effects of religious beliefs within the complex matrix of factors that have caused these phenomena.
For example, it seems silly to claim that religious beliefs could be used to predict who would commit acts of terrorism in the Northern Ireland troubles (both Protestants and Catholics did, of course). But it would surely be fair to point out the role that religion played in the underlying complex mix of history and culture that brought those two communities to that point.
For another example, it seems to me that one can differentiate between the attack on Charlie Hebdo and the one on Bataclan by reference to a particular religious doctrine – blasphemy. The CH attack is more obviously religiously motivated than Bataclan, prima facie. You seem to be saying that your research suggests that both attacks are predicted more by social network factors than religious ones, and I bow to your superior knowledge on that. But how could Charlie Hebdo be *singled out* for attack (amongst the enormous Western infidel media pack) if it weren’t for their particularly blasphemous (according to Islam) actions? This is surely an attack where the religious belief is ‘critical’ to the motivations of the terrorists. The Bataclan attack, less so, imo, but still an underlying, important, factor.
It is this sort of specificity of action that, again, to a layman like me, would not occur without the religious doctrine. And you perhaps acknowledge this when you say that the content of religious beliefs aren’t a ‘principal predictor’; are they a secondary one?
So the question from a complete ignoramus like me who wants to understand the differences between you and Coyne is this: Coyne suspects you would not even ‘ascribe any of [the terrorists’] actions to Islam’. It’s still not clear from your comment how you respond to his question. Even if the religious doctrines aren’t a principal predictor of *who* acts, do you acknowledge that they do affect the behaviour of jihadists in Syria and in attacks on the West? If you do, and I get the impression you might, then I’m not sure what Coyne is saying that you disagree with. Is it just the emphasis he puts on ‘religion’ when these atrocities occur? He clearly cites other factors – ‘disaffection, the need to feel part of something greater than oneself, innate aggression of young males, and, yes, the mishandling of many Middle Eastern situations by the West’, so he’s not denying those other causes. Just because people bemoan one factor does not mean they discount all others.
If, on the other hand, you don’t think such doctrines have an effect on terrorist behaviour, I should like to see the papers that support that conclusion, in the (perhaps forlorn!) hope that I could understand them.
If you’ve got this far, thanks for reading, and apologies if I misconstrued your position!
It's interesting that Atran says that 'Counter canon narratives have done absolutely nothing at all to stop violence or dissuade ISIS volunteers'. This doesn't counter Coyne's complaints about religious causes, but it does perhaps point to why Atran is frustrated at 'New Atheists'; their complaints are pointless, because attempts to change religious views have not worked, according to whatever metrics Atran has used in his studies.

That may be true in Atran's studies, but the idea that societal progress cannot be made by addressing deleterious religious beliefs seems to deny the last 200 years, from the Enlightenment onwards, which have seen a secular, rational, scientific push-back against such beliefs that has had a civilising effect. Now, to be fair, many religionists would deny that this civilising effect is particularly secular, rational or scientific, but for me the evidence is pretty overwhelming.

I do wonder what 'counter canon narratives' have been attempted because, as far as I can see, not much has been done to counter the blasphemy narrative since the Charlie Hebdo attack. Indeed, some western countries still outlaw blasphemy! So a meaningful counter canon narrative would have to be substantial and accord religion a lot less respect than just about every country, including in the west, does currently. Until we see this happening I suspect many of us will still see plenty in religion to complain about.


Friday, 6 November 2015

Knowledge, Testimony and Reductionism

Does reductionism - the notion that testimonial evidence simply reduces to facts derived from memory and experience - succeed in explaining how we can know things on the basis of testimony?

I shall consider two problems in reductionism that Jennifer Lackey highlights to show that reductionism does not succeed in explaining how we can know things on the basis of testimony.

Parameters

By ‘knowing things’ I mean that we have a true belief that is justified in some way. That justification is what reductionism and its alternatives look to provide. For brevity’s sake I will concentrate on one-to-one testimony, although there are many types of testimony (what Lackey calls ‘epistemic heterogeneity’, 2006, p.441). I shall focus on a global reductionist account:
... a hearer must have non-testimonially based positive reasons for believing that testimony is generally reliable. (p.440)
Testimony

We are told things as children and adults that we rarely investigate. Our parents tell us our name is x and we were born in y. The rule we follow is something like:
If the speaker S asserts that p to the hearer H, then, under normal conditions, it is correct for H to accept (believe) S's assertion, unless H has special reason to object. (Adler, 2015)
Jonathan Adler calls this the default rule (DR) and it plausibly describes our everyday behaviour. But the question is: does simply telling someone something, rather than, for example, showing them, confer knowledge?

Reductionism

Reductionism is commonly identified with David Hume (1711-1776). In Of Miracles he writes:
[O]ur assurance in any argument of this kind is derived from no other principle than our observation of the veracity of human testimony, and of the usual conformity of facts to the reports of witnesses.
..and:
Were not the memory tenacious to a certain degree; had not men commonly an inclination to truth and a principle of probity; were they not sensible to shame, when detected in a falsehood: Were not these, I say, discovered by experience to be qualities, inherent in human nature, we should never repose the least confidence in human testimony. (Hume, 1777, p.174)
Hume is claiming that a reasonable person ‘reduces’ testimonial evidence to facts derived from their memory and experience – otherwise it is not justified as knowledge.

Contrast this with the non-reductionist view from Hume’s contemporary Thomas Reid (1710-1796). He claims there is a principle of veracity (PV), that people tend to tell the truth, and a corresponding principle of credulity, that people tend to believe what they are told. The principle of veracity stems from the connection between thoughts and language; the very purpose of language is to communicate the truth of one’s thoughts, and while some may lie occasionally, even liars tell the truth more than they lie. From this principle we can reasonably assume the truth of testimony in and of itself, unless we are given reason to doubt it.

Reid’s PV doesn’t seem very different from Hume’s appeal to people’s truthfulness (‘had not men commonly an inclination to truth and a principle of probity’, p.174). But to establish knowledge from testimony a priori Reid needs a justification that is a priori; that is, justified without appealing to evidence, such as the notion that 1+1=2. Hume justifies the truth of testimony a posteriori, by appealing to experience.

Problems with Reductionism

Reid makes some trenchant criticisms of reductionism. In a reductionist world he notes that ‘[s]uch distrust and incredulity would deprive us of the greatest benefits of society’ (Reid, 1764, p.177). But if believing testimony is beneficial to us as social animals it does not follow that it is necessarily knowledge-imparting. A parent could tell their children that there are dragons in the wood across the busy main road nearby, to deter them from crossing the road. This would benefit the children by preventing them from risking their lives on the road, but it does not impart any knowledge to them. The alternative, truthful, method of telling the children of the perils of the road might be less effective. This breaks the a priori connection between thoughts and language to which Reid appeals.

But if we must find reasons in our background knowledge, from perception, memory and inference, for justifiably believing testimony, we need to establish the general rule of testimony’s reliability, and, as Lackey points out (p.440), two issues arise:

1. It’s not clear we can be confident that we have received enough reports to establish the general rule that testimony is reliable (TR).
2. It’s not clear we have sufficient access to the facts of the world to judge a testimony’s truthfulness (TT).

Consider the first instance of testimony a child receives from her parent – say, her mother introduces a man and says, this is your uncle; the child believes it (what else can she do?).

On TR, she has only one record in her testimonial database, so she has nothing to judge her mother’s testimonial reliability by.

On TT, charitably she will know a few facts about the world; teddy bears are warm and fluffy, spoons are cold and hard, for example. But no child can investigate her uncle’s provenance, so, per reductionism, on TR and TT grounds, that the man is her uncle is not knowledge. Nonetheless this belief will be stored as a fact.

Later, the mother introduces another child as her uncle’s son, calling him the child’s cousin. By now, perhaps, the child has a larger testimonial database showing 90% reliability for her mother’s testimony, so the TR issue is somewhat mitigated. But, still, the child only has data for her mother and perhaps a few close family and friends, which can hardly be projected to establish testimony’s general reliability. One might at this stage appeal to a local reductionist approach, that ‘the justification of each particular report or instance of testimony reduces to the justification of instances of sense perception, memory, and inductive inference’ (Lackey, p.440). But this introduces problems of chains of testimony that are insoluble, I think, without an appeal to testimony’s general reliability.

On TT, the child might observe facts about her cousin – that he lives with her uncle, for example – that could give her good empirical evidence that he is her cousin. If we ignore the TR problem above, reductionism then suggests that she knows who her cousin is. But this fact itself is based on a background ‘fact’ that is not knowledge, per reductionism – that her cousin’s father is her uncle. Basing facts on non-facts looks fatal to reductionism as a coherent account of knowledge acquisition.

A reductionist might counter that by extrapolation from our limited datasets we can be justified in our beliefs on both TR and TT grounds; maybe we could confirm background beliefs retrospectively as experience increases. But we are then left with the problem of keeping track of our beliefs and their status. I’m not aware that anyone does this; I’m really only aware of background beliefs, not background confirmed facts and unconfirmed ‘facts’.
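To make vivid the sort of bookkeeping a scrupulous reductionist would need, here is a toy sketch in Python; it is purely illustrative, and the class names and figures are my own invention rather than anything in the literature:

```python
from dataclasses import dataclass, field

# A toy 'belief ledger' of the kind a scrupulous reductionist would need:
# every testimonial belief is recorded with its source and whether it has
# been independently confirmed by perception, memory or inference.

@dataclass
class Belief:
    content: str
    source: str
    confirmed: bool = False

@dataclass
class Ledger:
    beliefs: list = field(default_factory=list)

    def record(self, content: str, source: str) -> None:
        self.beliefs.append(Belief(content, source))

    def confirm(self, content: str) -> None:
        # Retrospectively upgrade an unconfirmed 'fact' to a confirmed one.
        for belief in self.beliefs:
            if belief.content == content:
                belief.confirmed = True

    def reliability(self, source: str):
        # Proportion of a source's reports that have been independently confirmed.
        reports = [b for b in self.beliefs if b.source == source]
        if not reports:
            return None
        return sum(b.confirmed for b in reports) / len(reports)

ledger = Ledger()
ledger.record("this man is my uncle", "mother")
ledger.record("this boy is my cousin", "mother")
ledger.confirm("this boy is my cousin")   # say, she observes him living with the uncle
print(ledger.reliability("mother"))       # 0.5 - the 'cousin' fact still rests on the unconfirmed 'uncle' fact
```

Even in this toy form, the 'cousin' entry is only confirmed relative to the unconfirmed 'uncle' entry, which is the facts-resting-on-non-facts problem again.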

A couple of anti-reductionist suggestions point to some more issues that an enlightened reductionist account should address.

Testimony to be trusted?

Paul Faulkner draws a distinction between practical testimony, such as the ‘dragons’ example, and epistemological testimony. Echoing Reid’s ‘distrust’ objection, he says that reductionism ignores ‘...the practical dimension of testimony. It misses out on the reasons that trust provides’ (The Open University, 2014, 2:23). Faulkner’s ‘assurance view’ (ibid, 4:22) suggests that by trusting testifiers we can take what they say as knowledge. There does seem to be a trust component to testimony; when that trust is broken, we take it very seriously. One of the ten commandments is not to bear false witness; journalists caught telling falsehoods can see their careers ended; and likewise for scientists who falsify evidence in their papers.

But this doesn’t seem to help at all in the ‘dragons’ testimony case; children nearly always trust their mother’s testimony and they are rarely let down. But the mother’s testimony is split between the practical and the epistemological, so there has to be an account that distinguishes the knowledge-imparting from the pragmatic, and trust doesn’t seem to provide it. The children are right to trust their mother, but not because she is imparting knowledge. From a reductionist viewpoint, there can be plentiful evidence available to trust someone, but how could the trust be established in the first place given the problems of TR and TT above?

Testimony as a practice?

Alan Millar says:
...telling is a move in a practice. The practice may be conceived as that of informing through telling, but it should be understood that the practice embraces both informing through telling, understanding acts of telling, and adopting a stance towards what one is being told. (Millar, 2010, p.177-178)
So the testimony must be what Millar calls ‘felicitous’ (p.178). Testimony can be deliberately deceptive, in which case it is not felicitous. This allows us to distinguish the ‘dragons’ testimony from knowledge-imparting testimony; the mother is perhaps engaging in the practice of ‘safeguarding through telling’ rather than ‘informing through telling’.

Millar’s anti-reductionist account appeals to a perceptual-recognition account of knowledge acquisition that stands apart from a perceptual evidence account. He writes:
The crucial point though is that we account for the acquisition of knowledge in these cases in terms of the exercise of an ability to recognize a phenomenon as having a certain significance. It is the ability that is in the driving seat and its possession does not turn on independent support for any generalization that informs it. (p.187)
Generalisations form a large part of our knowledge acquisition skills, but Millar suggests that we can acquire knowledge by recognition; so we can recognise from tracks on a path that deer have been there. While a certain amount of the background knowledge to this judgement is plainly observational, there is, he claims, a recognitional skill that has been learnt that cannot be reduced to perception, memory and inference – ‘...we should also take seriously the idea that our knowledge that p from someone's telling us that p is recognitional as well’ (ibid).

This seems plausible, but would mean that those who haven’t acquired certain recognitional skills are simply incapable of acquiring knowledge. And it’s clear that many children, and I daresay adults, might suffer in this regard. Children who believe that dragons live in the woods nearby and that Santa delivers their Christmas presents are clearly underdeveloped in the perceptual-recognition stakes. But then how are any of their testimonial beliefs knowledge? Millar recognises this problem when he writes:
If early learning is to be conceived as the acquisition of knowledge through being told, as in the straightforward cases, then the knowledge will not meet the conditions I have laid down. (p.192)
And he offers an approach to the acquisition of such knowledge which is reductionist. He says that knowing that Hobart is the capital of Tasmania ‘consists in an ability to recall a publicly available, known fact, which has been gained from repeated encounters with reliable sources of information’ (p.192).

Millar’s non-reductionist suggestion addresses something like Adler’s DR: how we are correct in normal circumstances to accept someone’s testimony, absent reasons not to believe it. But he still posits reductionism for early learning, which leaves his account vulnerable to the same background-belief problem as reductionism simpliciter: facts relying on non-facts.

Conclusion

While the two alternatives to a reductionist explanation discussed here don’t work, they highlight some issues that reductionism misses. The assurance view recognises the importance of trust in testimony; perhaps a reductionist account of trust could be worked up to integrate this fully into reductionism. Millar’s approach observes that we learn skills that give us knowledge that resists reduction, and applies this principle to testimony. So, on the issues discussed here, since testimonial knowledge has not been successfully reduced to facts derived from memory and experience, I don’t think reductionism succeeds in explaining how we can know things on the basis of testimony.

Bibliography:
Adler, J. (2015) ‘Epistemological Problems of Testimony’, The Stanford Encyclopedia of Philosophy (Summer 2015 Edition), Edward N. Zalta (ed.)
Hume, D. (1777) ‘Hume on testimony and experience’ in Price, C. and Chimisso, C. (eds) (2014) Knowledge and Reason (A333 Book 5), Milton Keynes, The Open University.
Lackey, J. (2006) ‘Knowing from testimony’, Philosophy Compass, vol. 1, no. 5, pp. 432–48.
Millar, A. (2010) ‘Knowing from being told’, in Haddock, A., Millar, A. and Pritchard, D. (eds) Social Epistemology, Oxford, Oxford University Press.
Price, C. and Chimisso, C. (2014) Knowledge and Reason (A333 Book 5), Milton Keynes, The Open University.
Reid, T. (1764) ‘Reid on veracity and credulity’ in Price, C. and Chimisso, C. (eds) (2014) Knowledge and Reason (A333 Book 5), Milton Keynes, The Open University.
The Open University (2014) ‘Faulkner on testimony (Part 2)’ [Audio clip], A333: Key questions in philosophy. 



Saturday, 19 September 2015

CFI UK Debate - Does God Exist?

God and the Bible was the title of an all-day event held by CFI UK at Conway Hall in London.

A morning talk was given by Professor Francesca Stavrakopoulou on what she called The Real Religions of the Bible, or The Uncensored Bible. The theme of the talk was those bits of the Bible that are rarely mentioned – most likely because they are rather priapic, or at least that is how it seemed, as the 'penis' references moved into double figures. The God of the Old Testament used to put it about a bit, it seems, and it's not at all clear how immaterial he was. Stavrakopoulou was very articulate, and thoughtful and nuanced on every issue raised.

In the afternoon Professor Stephen Law and Professor Keith Ward debated whether God exists. This being the CFI, it was clear where the sympathy of the audience lay, and it wasn't with Professor Ward. His presentation was familiar to many of us who have read liberal theologians; his God is a mutable, vague beast that seems specially designed to reify the art of goalpost-moving, so that wherever one shoots, one is bound to miss. A couple of things, at least, defined this concept: first, that his God could not do anything he felt was immoral; and second, that his God concept could not run counter to scientific facts. In this way he inoculates his belief from attack, I suppose, because his God is bound to conform to his (Ward's) moral beliefs, so no awkward silences when the Canaanite genocide is mentioned; and if science shows some aspect of his belief to be factually wrong, he will simply change his belief, so no awkward silences when talking snakes are mentioned.

The panel, with some minor celebrity spotting thrown in.

Obviously this means that as Ward's moral sensibilities change during his life, so does his God, and as science changes our view of the universe, so too does his God. I got the impression Ward is distinctly relaxed about this, seeing it as an enlightened approach. I think it is to be encouraged amongst believers, because it seems to be almost the opposite of dogmatic, and comes close to the Humean ideal of proportioning one's belief to the evidence. But I suspect that few Christians would find this will-o'-the-wisp deity satisfying, and, as Stavrakopoulou pointed out, it lacks magic.

Later, in response to the problem of evil, Ward renounced omnipotence as a feature of his God, which puzzled many in the crowd who assumed Ward was a Christian. Someone asked him outright if he were, and his reply was that he was a priest in the Anglican Church. 'Not enough information!', I shouted.

In the end it seems there is something about Ward's experience of the world that renders him a believer; a sense of the noumenal, morals and beauty were mentioned.

Stephen Law did not specifically address Ward's talk, but presented his excellent Evil God challenge, which, briefly, points out the empirical fact that no-one (Satanists, perhaps?) believes in an evil god, even though practically all (if not all) of the arguments for God give us no clue to His goodness. If a theist can see that belief in an evil god is ludicrous (and they seem to), they should likewise agree with atheists that belief in God is too, since the arguments for both are effectively the same (although often mirrored).

I think here Ward falls back on his subjective experience; in a discussion of HADD (the hypersensitive agency detection device), for example, which Law was citing as a defeater for Ward's belief in a hidden agent, Ward said he thought he had HADD, but that it was given to him by God so he could sense Him! How could one argue anyone out of these sorts of beliefs?

An interesting debate, but I would like to have heard more from Ward about why he doesn't give up his God's benevolence, as well as His omnipotence, given Law's Evil God Challenge (or, indeed, His existence!). Presumably he just has an undeniable intuition that this thing he senses the existence of is good. That may be, but it seems a wholly unsatisfactory response to anyone who does not have that feeling.


Monday, 7 September 2015

Craig describes SSM as 'a sort of legal fiction'


William Lane Craig has revealed some homophobia in the latest Q&A on his Reasonable Faith site, as well as recommending some seditious behaviour. In answer to someone questioning the Supreme Court's decision on SSM, and what Christians should do about it, he says:
[Same sex] marriages are a sort of legal fiction which we must respect.
He thinks there is an essence to marriage that resists legal re-definition, just as horses and chairs could not be re-defined. This shows a curious blindness to the complexity of social institutions, which, by their very nature (one may say 'essence'), are defined by their social milieu. Simply perusing the Wikipedia article on marriage reveals a bewildering variety of types, many of which do not feature just one man and one woman (WLC's preferred flavour). Of course, to describe the marriage of two people in love as any sort of fiction is deeply offensive to those involved.

Craig says:
The Supreme Court did not legalize, nor is anyone advocating for, gay marriage. What it legalized was same-sex marriage, regardless of sexual orientation.
...and he is right, strictly speaking. But let's not pretend that Craig is railing against two heterosexual men getting married; it is the homosexuality he is prejudiced against, and his language shows his prejudice:
Suppose you are a baker who is approached to make a wedding cake for a same-sex wedding or that you are a wedding photographer who is hired to photograph a same-sex ceremony. It’s hard to see how you can justifiably resist legal authority here and refuse to comply, regardless of how distasteful it may be to you, since your activity is not sin on your part.
Craig supposes it may be 'distasteful' to provide a service for a same-sex marriage; why 'distasteful'? Sin is not a matter of taste, after all. Plainly it's the homosexuality that is not to Craig's taste, rather than some contravention of his definition of marriage. If somebody redefined a horse as a chair, no-one would find that 'distasteful'; simply incorrect.

As for government authority, he says:
...we should and must resist authority if it requires us to act contrary to God’s will. 
So, he thinks it's necessary for Christians to impose God's will over and above the law of the land. As at least one study suggests, what God wants tends to be the same as what the individual believer wants. This is a recipe for insurrection, or at least civil disobedience à la Kim Davis, even if one has some sympathy for conscientious objectors in general.

It's nice to see that Craig can see the way the wind is blowing, though:
As our secular culture becomes more and more accommodating to same-sex marriage, the pressure upon Christians to compromise and conform will be heavy and unrelenting.
Yes, the repeal of anti-miscegenation laws delivered legal justice and, as a consequence, generated social pressure on racists to drop their prejudices; so too will the pressure now build on the homophobic to drop theirs, now that equality before the law has been granted to people of every sexual orientation.

Craig's response reveals the problem with a morality embedded in the aspic of ancient mores: it gives the prejudiced ammunition in their fight against overwhelming reason and evidence. Just as many sensible practices and beliefs have arisen through our genetic and cultural history, so too has prejudice. We need the freedom to challenge our practices and beliefs to sort the sensible from the insensible, and religion, amongst other things, hinders that process.
