Copeland on Searle: Tensions Between Competing Chinese Rooms

Nathaniel Braswell, CMC ’23
Staff Writer

In this essay, I present the Computational Theory of Mind and explain Searle’s objection to it via his Chinese Room Argument (CRA). I then present the “Systems Reply” to the CRA and Searle’s response to that reply, which I adapt from Copeland (2002) into the “Revised CRA.”1 Finally, I object to the CRA on the grounds that its two versions carry contradictory entailments.

The Computational Theory of Mind is the view that the brain is a naturally occurring computer. According to the theory, a computer is any system that undergoes rational transitions between representational states by manipulating symbols according to an algorithm. A primary aim of the computationalist view is to demonstrate that rational transitions between representational states give rise to cognition; accordingly, computationalism defines cognition as the ability to transition rationally between representational states by manipulating physical symbols according to an algorithm. To “transition rationally” is to employ rules that track reality, that is, rules under which transitions between representations preserve truth. If I see 2 bananas and 2 apples and need to count the pieces of fruit, I can sum the bananas and apples using the rule of addition, thereby rationally transitioning from 2 bananas and 2 apples to 4 pieces of fruit. Representational states occur when symbols pick out, or express something about, an object: “2” is a symbol that picks out a property and thereby constitutes a representational state. Thus, according to the computationalist, the physical manipulation of symbols according to an algorithm that effects rational transitions is sufficient for cognition. This makes computers, including naturally occurring computers such as the brain, inherent owners of cognition.
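To make this picture concrete, consider a minimal program sketch (an illustration of my own, not drawn from the computationalist literature) in which the representational states are explicit symbol structures and the rule of addition carries one state to another:

```python
# Representational states: symbols paired with what they pick out.
perception = {"bananas": 2, "apples": 2}

def count_fruit(state):
    # Rational transition via the rule of addition: the algorithm
    # manipulates the symbols for the quantities and, because addition
    # tracks reality, the resulting representation "4 fruit" is true.
    return {"fruit": sum(state.values())}

print(count_fruit(perception))  # {'fruit': 4}
```

The computationalist's claim is that nothing more than transitions of this kind, realized physically, is needed for cognition.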
In this way, computationalism casts the computer as a functional equivalent of the human mind: since minds just are computers and human minds possess understanding, computers possess understanding.

To object to computationalism, Searle’s Chinese Room Argument presents the scenario of an individual, hypothetically named ‘Gabby,’ who has no knowledge or recognition of Chinese. Gabby is placed in a room with a manual of Chinese characters, a filing cabinet, and an unlimited supply of paper and pencils. The manual provides if/then rules that tell Gabby what to write in response to any string of characters she might receive. One rule might read:

你為什麼要翻譯這個 → 我沒有線索

With this, Gabby knows that if she were to read “你為什麼要翻譯這個,” she would reply with a slip of paper that reads “我沒有線索.” Once placed in the room, Gabby starts receiving slips of paper with Chinese characters on them. For every slip that comes into the room, Gabby checks the manual, writes down the manual’s prescribed response, and slips the paper back out. For the sake of the thought experiment, the manual is limitless: Gabby could receive any string of Chinese characters and respond (the sketch following the argument below renders such a manual as a simple lookup table). The argument takes the following syllogistic form:

1. If computationalism is true, then Gabby understands Chinese.
2. Gabby doesn’t understand Chinese.
3. Therefore, computationalism is false.
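Before examining the premises, here is a minimal sketch (mine, not Searle’s) of Gabby’s manual as a lookup table of if/then rules. Its only entry is the rule quoted above, and no step in the procedure requires understanding what any character means:

```python
# Gabby's manual: if/then rules pairing incoming character strings
# with prescribed responses. Only the rule quoted above is included;
# the thought experiment's manual is limitless.
manual = {
    "你為什麼要翻譯這個": "我沒有線索",
}

def gabby(slip):
    # Pure symbol manipulation: match the incoming string against the
    # manual and copy out the prescribed response, understanding nothing.
    return manual.get(slip, "")

print(gabby("你為什麼要翻譯這個"))  # prints: 我沒有線索
```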
I now explain each premise in turn. Premise 1 claims that if computationalism is true,