Searle’s Chinese Room argument addresses the notion of strong artificial intelligence (AI) – the claim that simply running a computer program constitutes thinking. This he contrasts with weak AI – the uncontroversial idea that computers can model aspects of the mind. Models are used to test theories against reality; there is no illusion that the model is the reality itself. A climate model doesn’t produce weather the way the climate does, for example.
Computers process algorithms. An algorithm is a series of instructions, unambiguous at each step, that delivers an expected result. Since this is all computers do, any biological process that cannot be represented by an algorithm cannot be computed. Algorithms are executed by manipulating symbols, represented at the lowest level by binary on-off states. Almost all machines in common use are based on the von Neumann architecture. Its relevant elements are: step-by-step processing; storage of symbols in memory locations, each accessed by a unique address; all controlled by a central processing unit (CPU). With this common architecture in mind, it also follows that the same instructions (programs) run on different computers will produce the same results. Such machines, with sufficient capacity, are dubbed universal Turing machines (UTMs) – after Alan Turing, computer pioneer. Strong AI advocates hold that all mental processes can be represented algorithmically and therefore that any UTM can think – that there is nothing about the brain necessary for thought apart from what qualifies it as a UTM.
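To make the idea concrete, here is a minimal sketch (in Python, purely illustrative and not drawn from Searle or the course text) of binary addition carried out by symbol manipulation alone: each step follows an unambiguous rule from a fixed table over the symbols ‘0’ and ‘1’, and the expected result appears without the procedure ‘knowing’ anything about numbers.

```python
# A minimal sketch (illustrative only): binary addition as pure symbol
# manipulation. Each step consults an unambiguous rule table over the
# symbols '0' and '1'; nothing here 'understands' numbers.

RULES = {  # (digit, digit, carry-in) -> (result digit, carry-out)
    ('0', '0', '0'): ('0', '0'), ('0', '0', '1'): ('1', '0'),
    ('0', '1', '0'): ('1', '0'), ('0', '1', '1'): ('0', '1'),
    ('1', '0', '0'): ('1', '0'), ('1', '0', '1'): ('0', '1'),
    ('1', '1', '0'): ('0', '1'), ('1', '1', '1'): ('1', '1'),
}

def add_binary(a: str, b: str) -> str:
    """Add two binary strings, right to left, one unambiguous step at a time."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, digits = '0', []
    for x, y in zip(reversed(a), reversed(b)):
        digit, carry = RULES[(x, y, carry)]
        digits.append(digit)
    if carry == '1':
        digits.append('1')
    return ''.join(reversed(digits))

print(add_binary('1011', '110'))  # '10001' (11 + 6 = 17)
```

The same rule table run on any von Neumann machine yields the same sums, which is the sense in which the program is independent of its hardware.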
Searle allows that machines can think, since he considers we are machines and we can. But he doesn’t think that a program by itself constitutes thinking, and, more specifically, he thinks that algorithmic rule processing (syntax) is insufficient for mental content (semantics). He offers a ‘simple and decisive refutation’ with his Chinese Room thought experiment. Imagine we’re in a room with baskets of symbols we don’t understand (Chinese, in his example) and a rule-book we do understand. People outside pass in symbols and we pass out symbols according to the book’s instructions. If the rule-book is adequate, the ‘room’ appears to understand Chinese. But inside, we have no understanding of Chinese before or after.
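A toy sketch of the set-up may help (the entries below are stock phrases chosen purely for illustration, not anything from Searle’s paper): the ‘rule-book’ is just a table pairing incoming strings of symbols with outgoing ones, and the procedure that applies it matches shapes without ever consulting meaning.

```python
# A toy Chinese Room (illustrative only): the 'rule-book' is a lookup table
# pairing incoming symbol strings with outgoing ones. The procedure matches
# shapes; it never consults meaning.

RULE_BOOK = {
    '你好吗？': '我很好，谢谢。',          # stock phrases, used here only as symbols
    '今天天气好吗？': '今天天气很好。',
}

def room(symbols_passed_in: str) -> str:
    """Pass out whatever symbols the rule-book dictates."""
    return RULE_BOOK.get(symbols_passed_in, '对不起，我不明白。')

# From outside, the room appears to answer; inside, only shapes were matched.
print(room('你好吗？'))
```

An adequate rule-book would of course have to go far beyond a short table, but however large the table grows, the matching procedure stays the same.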
Searle argues that this counts against computers based on the von Neumann architecture, but not against brains, because in brains the physical causes the mental in a way that is absent from computers. The biochemical processes are responsible for the mental contents, not the rule processing, so minds are not computer programs.
Searle’s argument is plausible. We see how processing can produce an output without apparent understanding. In fact, we experience this when we learn another language. Initially, we apply rules to translate the foreign language, delivering output without understanding. With practice and exposure to the language, we come to understand it in a way we didn’t before. What is this difference between translating and understanding? Furthermore, if the brain is simply processing algorithms, how do these step-by-step, unambiguous processes come to be creative, or even wrong? Error-prone computers fail catastrophically rather than creatively. None of this lends support to the view of the brain as an algorithmic processor.
Searle asks us to consider what the person in the room understands (nothing). But he places the person in the position of the CPU, whereas his argument is against thinking being the running of a program. For the room to respond adequately, it does, as a matter of fact, require a CPU (the person), a program (the rule-book) and data storage (the baskets), as well as input/output. So we must ask: does the system – the person in conjunction with the program, the data and the input/output – understand Chinese, not just the person alone? This is the core of the systems reply objection, and the answer to its challenge is less clear-cut than Searle suggests. Agreed, it’s still hard to see how adding a hard disk holding data and a program to a silicon chip, then attaching a keyboard and monitor, could allow the ensemble to think.
The annual Loebner Prize is awarded to the computer that performs best against other contestants in a Turing Test – Turing suggested that if a machine responds so that interrogators cannot distinguish it from a human, we should conclude that it’s thinking. It’s a measure of the difficulty of the test that no machine has come close to passing it in twenty years of competition. For example, in 2010 the winning computer told a joke and its interrogator said “Well, I believe that’s worth a LOL!”. The program responded with “Should I know that?”, misunderstanding a phrase that almost anyone would understand. As Daniel Dennett observes in Consciousness Explained, countering Searle’s argument, any program that responds sensibly to such comments would need vast ‘world knowledge’ and complexity. Just understanding ‘LOL’ requires knowledge of textspeak and its background. But the rest of the sentence requires knowledge of how some people wouldn’t just say ‘LOL’, but would distance themselves from the exclamation by commenting on it in a more detached way. It gives a completely different impression of the interlocutor than ‘LOL’ alone, yet still delivers the news that she found the joke amusing. All this ‘meaning’ would feed back into and inform the subsequent conversation. This is how minds embedded in the real world operate, and this is the challenge for AI. So the test poses a valid question: if anything passes it, shouldn’t we conclude that it’s thinking – displaying an embedded knowledge of the world?
A program could, by storing any number of associated ‘meanings’ and ‘relations’ in database tables called ‘meanings’ and ‘relations’, deliver answers that appear intelligent. But is it understanding? After all, the computer doesn’t know that a ‘meaning’ entry in the ‘meanings’ table has any meaning – it’s just a record in a table. As with the Chinese Room, it’s hard to see that the computer has conquered semantics internally. But to return to the process of learning a language, maybe the mechanistic translation algorithms become subsumed into the subconscious, along with the associated categories. By definition we then only ‘see’ the meaning of foreign phrases and the appropriate responses, not the underlying algorithms and categories. We move from knowledge that ‘this word’ means ‘that word’ to knowledge of how to speak the language. If this is so, then we are simply passing our own Turing Test, just as an observer outside the Chinese Room considers that the room passes the test, ignorant of the mechanism inside, and Searle’s argument counts against our own minds.
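A hedged sketch of the database point above (the table names, columns and rows are all invented for illustration): the program stores ‘meanings’ and ‘relations’ as SQLite records and assembles answers by retrieving them, and at no point does it treat a ‘meaning’ row as anything more than a row.

```python
import sqlite3

# Hypothetical 'meanings' and 'relations' tables (names and rows invented for
# illustration). To the program they are just records to be retrieved.
db = sqlite3.connect(':memory:')
db.executescript("""
    CREATE TABLE meanings  (term TEXT PRIMARY KEY, meaning TEXT);
    CREATE TABLE relations (term TEXT, relation TEXT, related_term TEXT);
""")
db.executemany("INSERT INTO meanings VALUES (?, ?)",
               [('dog', 'a domesticated canine'), ('puppy', 'a young dog')])
db.executemany("INSERT INTO relations VALUES (?, ?, ?)",
               [('puppy', 'grows into a', 'dog')])

def answer(term: str) -> str:
    """Assemble an intelligent-looking answer purely by retrieving records."""
    meaning = db.execute("SELECT meaning FROM meanings WHERE term = ?",
                         (term,)).fetchone()
    related = db.execute("SELECT relation, related_term FROM relations "
                         "WHERE term = ?", (term,)).fetchall()
    parts = [f"A {term} is {meaning[0]}."] if meaning else []
    parts += [f"A {term} {rel} {other}." for rel, other in related]
    return ' '.join(parts) or "I have no record of that."

print(answer('puppy'))  # "A puppy is a young dog. A puppy grows into a dog."
```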
Maybe the most interesting premise in Searle’s argument is that brains cause minds. At one level, this is trivially accepted by any physicalist, but perhaps Searle is addressing issues such as the particularity of a self that forms the mind, and its self-starting nature. If software creates the mind, and that software can run on any hardware, how is the mind individual? And programs seem inert. How could they initiate action?
Searle doesn’t really engage with the case for how complexity might address these problems. His ‘Chinese Gym’ counter-argument, intended to address more complex connectionist architectures, asks us to imagine many people in the room manipulating symbols. He suggests that many processes together will exhibit the same properties as a single process, and only those. This isn’t true of physical things, and I’m not sure it’s true of processes. At some point sufficient complexity appears to produce something new and individual. We can see that complexity introduces something new: patterns in windblown sand dunes, for example, or flock behaviour in starlings. We have a pattern-seeking mind, and pattern-seeking can be programmed. And just as we get false positives (ducks seen in cloud shapes, faces in toast), so would pattern-seeking programs. If this is the source of creativity and synchronicity, it could be replicated by rule-processing.
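As a rough illustration of that last point (the template, grid and threshold are all invented): a naive template-matching ‘face detector’ scanning a grid of pixels will, if its matching threshold is loose enough, report faces in pure random noise – a programmed analogue of seeing faces in toast.

```python
import random

# A naive pattern-seeker (illustrative only): it scans a grid of 0/1 'pixels'
# for a crude 3x3 'face' template (two eyes, a mouth). With the loose
# threshold below it also 'finds' faces in pure noise: false positives.
FACE = [(0, 0), (0, 2), (2, 0), (2, 1), (2, 2)]  # eye, eye, and mouth pixels

def face_like(grid, top, left, threshold=4):
    """Accept a 3x3 window if enough template pixels are lit."""
    hits = sum(grid[top + r][left + c] for r, c in FACE)
    return hits >= threshold

def find_faces(grid, threshold=4):
    size = len(grid)
    return [(r, c) for r in range(size - 2) for c in range(size - 2)
            if face_like(grid, r, c, threshold)]

random.seed(1)
noise = [[random.randint(0, 1) for _ in range(10)] for _ in range(10)]
print(f"'Faces' found in random noise: {len(find_faces(noise))}")
```

Tighten the threshold and the false positives fall away; loosen it and the program ‘sees’ patterns everywhere, much as we do.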
The apparently self-starting nature of the mind is perhaps a deeper problem. Searle considers that the particular physical architecture of the brain is required for semantics, but we see that different people (and maybe different species) acquire meaning with some differences in their brains, so at least some independence from the physical is allowed. So whether mind is emergent from complex algorithms or from ‘fizzing chemicals’ is moot. And if an analogue process is required to initiate thinking, it would have to be something that doesn’t simply provide input to a digital process, or else that input could be simulated. So it needs to initiate, and to provide something extra to, the digital processing, and it’s not clear what this is. Searle’s contention that the mind is tightly coupled with the hardware, and that something about the brain, biochemicals or similar, causes the mental content, is as inscrutable as the Chinese Room, so it is insufficient to discount algorithms as a basis for mind.
Searle notes that simulation is not duplication: simulating digestion, for example, does not digest. The analogy appeals to function to dismiss simulated thought as genuine thought, but functionalists would note that some simulations do produce the same output as the thing they copy, and so perform the same function. Much brain activity is itself a simulation of our macro reality, in a way that aids our survival. Simulating that simulation is doing the same work, in a way that simulating digestion is not. Mental content that has no correlate in reality, like pain, appears simulated. However, if minds can be simulated, this does suggest that characters in simulated realities might have feelings, which is hard to countenance. Indeed, as with the brain-in-a-vat thought experiment, it becomes difficult to discount that we are ourselves in a simulation. Nevertheless, it’s not clear that mind is immune to simulation.
So, on a functionalist understanding, Searle’s digestion analogy only stands if semantics derives from a process which does not produce the same outputs as the original. It’s plausible that the output of the Chinese Room is the only output required for semantics, but brain processes, in addition to delivering the intelligent response, also generate some heat, and maybe some biochemical and molecular changes. If these are necessary for true understanding, then only machines that produced them would understand as we do. Searle hasn’t established that, and his argument is inconclusive in discounting algorithmic processing as a route to meaning and mind.
Bibliography:
Wilkinson, R. (2002) Minds and Bodies (A211 Book 5), Milton Keynes, The Open University