

Copeland on Searle: Tensions Between Competing Chinese Rooms

Nathaniel Braswell, CMC ’23


Staff Writer

In this essay, I present the Computational Theory of Mind (CTM) and explain Searle's objection to it via his Chinese Room Argument (CRA). I then present the "Systems Reply" to the CRA and Searle's response to that reply, which I adapt from Copeland (2002) into the "Revised CRA."1 Finally, I object to the CRA on the grounds that its two versions have contradictory entailments.

The Computational Theory of Mind is the view that the brain is a naturally occurring computer. According to the theory, a computer is any system that undergoes rational transitions between representational states by manipulating symbols according to an algorithm. A primary aim of the computationalist view is to show that such rational transitions give rise to cognition; computationalism defines cognition as the ability to rationally transition between representational states using physical symbols according to an algorithm. To "rationally transition" is to employ rules that track reality, that is, rules that allow transitions between representations to preserve truth. If I see 2 bananas and 2 apples and need to count the pieces of fruit, I can sum the bananas and apples using the rule of addition, thereby rationally transitioning from 2 bananas and 2 apples to 4 pieces of fruit. Representational states occur when symbols aim to pick out or express something about an object; "2" is a symbol that picks out a property and is thereby a representational state. Thus, according to the computationalist, the physical manipulation of symbols by a rational, rule-governed algorithm is sufficient for cognition. This makes computers, including naturally occurring computers such as the brain, possessors of cognition. In this way, computationalism casts the computer as a functional equivalent of the human mind: since minds are just computers and minds possess understanding, computers possess understanding too.
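The definition can be pictured with a minimal sketch in Python. This is my own illustration rather than anything in the computationalist literature; the function name and the choice of symbols are invented for the example.

```python
# Illustrative sketch only (my example, not from the essay): a toy "computer"
# in the computationalist sense. The numerals are representational states,
# the addition rule is the algorithm, and applying the rule is a rational
# transition that tracks reality (2 bananas and 2 apples really are 4 fruits).

def addition_rule(symbol_a: str, symbol_b: str) -> str:
    """Manipulate numeral symbols according to the rule of addition."""
    return str(int(symbol_a) + int(symbol_b))

bananas, apples = "2", "2"                 # symbols standing for quantities
fruit_total = addition_rule(bananas, apples)
print(fruit_total)                         # prints "4"
```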

To object to computationalism, Searle's Chinese Room Argument presents the scenario of an individual (hypothetically named 'Gabby') with no knowledge or recognition of Chinese. Gabby is placed in a room with a manual of Chinese characters, a filing cabinet, and an unlimited supply of paper and pencils. The manual provides if/then statements that tell Gabby what to write in response to any string of characters she might receive. One rule might read:

你為什麼要翻譯這個 → 我沒有線索

With this, Gabby knows that if she were to read "你為什麼要翻譯這個," she would reply with a slip of paper that reads "我沒有線索." Once placed in the room, Gabby starts receiving slips of paper with Chinese characters on them. For every slip that comes into the room, Gabby checks the manual, writes down the manual's prescribed response, and slips the paper back out. For the sake of the thought experiment, the manual is theoretically limitless; Gabby could receive any number of Chinese symbols and respond (a brief code sketch of this lookup procedure appears after the syllogism below). The argument takes the following syllogistic form:

1. If computationalism is true, then Gabby understands Chinese.
2. Gabby doesn't understand Chinese.
3. Therefore, computationalism is false.
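Before turning to the premises, it may help to picture Gabby's procedure concretely. The sketch below is my own illustration (not Searle's), in Python, with a hypothetical one-entry rule book containing only the rule quoted above; the real manual is, of course, theoretically limitless.

```python
# Hypothetical sketch of the manual: a tiny fragment of Searle's rule book,
# mapping input strings to output strings by shape alone.
RULE_BOOK = {
    "你為什麼要翻譯這個": "我沒有線索",   # the single rule quoted above
}

def gabby_respond(incoming_slip: str) -> str:
    """Match the incoming characters and copy out the prescribed reply.
    Nothing here consults the meaning of the symbols, only their form."""
    return RULE_BOOK.get(incoming_slip, "")  # empty reply if no rule matches

print(gabby_respond("你為什麼要翻譯這個"))   # prints 我沒有線索
```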

I now explain each premise in turn. Premise 1 claims that if computationalism is true, Gabby understands Chinese. According to the definition of computationalism, to understand Chinese is to rationally transition between representational states using physical symbols according to an algorithm. Thus, if Gabby's presence in the Chinese Room meets the conditions of computation, Gabby should understand Chinese. The Chinese Room embodies all the conditions of a computer. Rational transitions between representational states are realized in Gabby's correspondence with someone outside the room; as Searle articulates, this person would view their correspondence with Gabby as indistinguishable from a conversation with any native speaker. The manipulation of physical symbols is realized in the physical Chinese symbols in the manual and on the input slips. The algorithm component is satisfied by Gabby's use of the manual, which lists if/then statements in terms of physical symbols (Chinese text). For any incoming symbols, Gabby finds the matching text in the manual and copies down the accompanying characters, thereby following a set of rules. Thus, Gabby in the Chinese Room embodies rational transitions between representational states by manipulating physical symbols according to an algorithm. Therefore, Gabby is a computer and understands Chinese, since, as defined earlier, understanding necessarily arises out of computation.

Premise 2 is that Gabby does not understand Chinese. To arrive here, say Gabby (who speaks English fluently) is given sets of English questions and Chinese questions. There is a difference in how she feels about her responses to each, even though her responses are equally understandable to native speakers of both languages. Gabby's first-person experience of speaking, writing, and thinking in English confirms that she understands it. If Gabby is bilingual in English and Spanish, she might not have the same feeling in her L2 (her second-learned language) as in her L1, but she would still have the experience of understanding both, just to potentially varying degrees. Likewise, Gabby might be able to speak fluent Chinese but not read or write it, in which case she would have some experience of understanding spoken Chinese words. This differs from her experience of written Chinese words, which requires her to utilize the manual and thereby demonstrates that she does not understand Chinese.

Thus, if premises 1 and 2 are correct, computationalism is false. Searle concludes that Gabby can produce any number of responses and still not understand Chinese; accordingly, writing more responses does not correlate with a better understanding of the language.

A popular rebuttal to premise 1 of the Chinese Room Argument is the "Systems Reply," which concedes that Gabby doesn't understand Chinese but maintains that she is a member of a larger system, the whole of which understands Chinese. In other words, Gabby might be a sort of central processing unit, but the manual, the cabinet, the symbols, and the pencils and paper constitute a larger system of which Gabby is only a part. All these parts together do understand Chinese, even though any one of the individual items might not. Thus, the Chinese Room could still exhibit cognition (and understanding) without Gabby understanding.

At first, it might seem counterintuitive that a system of mainly inanimate parts could host cognition at all, but since computationalism defends computers (which are inanimate) as viable hosts of understanding, the systems reply likewise dispels the "animacy" requirement, reducing understanding to purely physical terms. The systems reply refutes premise 1; it is no longer true that, for CTM to be true, Gabby herself must understand Chinese.

An example is warranted. Say there are two people: one is incapable of addition but can multiply numbers, and the other cannot multiply but knows addition. If instructed to solve "5*5+6," the multiplication expert would perform the multiplication and give his product to the addition expert, who would compute 25+6. Neither person understands how to solve the original problem, but together, as a system, they do. The systems reply thus stands against premise 1, since it is not necessary for understanding that any individual element (Gabby) understand Chinese.
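To make the division of labor explicit, here is a minimal sketch of the two-expert system. It is my own illustration and the function names are invented; neither function alone can evaluate "5*5+6," yet their composition produces the answer.

```python
# Illustrative sketch: two "experts," each with only part of the competence.

def multiplication_expert(a: int, b: int) -> int:
    """Knows only how to multiply."""
    return a * b

def addition_expert(a: int, b: int) -> int:
    """Knows only how to add."""
    return a + b

# Neither expert alone can solve 5*5+6, but the system of the two can:
product = multiplication_expert(5, 5)   # 25
answer = addition_expert(product, 6)    # 31
print(answer)
```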

Searle responds by theorizing a scenario in which the entire system is internalized in Gabby’s head. Imagine a version of Gabby who has infinite long-term memory and has memorized the entire rulebook, replacing the need for a manual or filing cabinet. With these newfound abilities, Gabby could even walk around outside and respond to Chinese writing with her own. But Gabby’s production of relevant Chinese output is still a function of her ability to memorize and reproduce allegedly meaningless terms. Memorizing the rules does not magically expose the meaning of their symbols; they are mere syntactic specifications with no semantic value to Gabby. Internalizing the system only allows Gabby fluency in manipulating symbols, not in understanding them.

To show the weaknesses of this reply, I will systematize the argument into the following syllogistic form:

Premise 1: If CTM is true, then the system understands Chinese.
Premise 2: Gabby encompasses the system.
Premise 3: Gabby doesn't understand Chinese.
Conclusion: Thus, CTM is false.

This syllogism is invalid. Even if the whole system does not understand Chinese, it might still be true that part of the system understands Chinese. Premise 3 states that Gabby does not understand Chinese. It does not follow from this that the system does not understand Chinese: what if the system (which understands Chinese) exists as only a part of Gabby, while the rest of Gabby does not understand Chinese? Gabby encompasses the system, but this does not mean that every part of Gabby is identical to the system. Thus, what if Gabby does not understand, but a subpart of her does? Assuming this is possible (which I will defend momentarily), the error in Searle's response is the inverse of that in his original CRA. In the original CRA, claiming that part of the room does not understand fails to show that there is no understanding in the whole. In the Revised CRA, claiming that the whole of the room fails to understand does not deny understanding to a part. Saying that Gabby "encompasses the system" does not require Gabby's lack of understanding to apply to every part of the system. Searle thus needs to add a fourth premise to achieve validity:

Premise 1: If CTM is true, then the system understands Chinese.
Premise 2: Gabby encompasses the system.
Premise 3: Gabby doesn't understand Chinese.
Premise 4: If Gabby doesn't understand Chinese, then no part of her understands Chinese.
Conclusion: Thus, CTM is false.

I will refer to this fourth premise by Copeland's term, the "Part-Of Principle." If it is true, then, by contraposition, whenever a part of Gabby understands something, the whole of Gabby does too. Recall that under CRA premise 2, Gabby does not understand Chinese because her first-person experience of Chinese differs from her intuitive experience of understanding English. I will refer to this as Searle's "Incorrigibility Thesis." Thus, if the whole of Gabby understands something, we would expect her to have an experience of that understanding. However, it is conceivable that part of Gabby understands something in the absence of that experience.

An example of this is blindsight. Blindsight patients have lesions in their primary visual cortex that prevent conscious recognition of objects in their visual field. Despite this impairment, patients can detect, localize, and discriminate stimuli. In one instance, researchers set up obstacles (trash cans, filing cabinets, and the like) in a hallway and instructed a blindsight patient to walk to the end. Despite having no awareness of the items (and thus no experience of understanding them), he easily navigated around the hurdles.

For this to stand as a valid counterexample, two clarifications are warranted. The first is that the blindsight patient has no subjective experience of understanding that the hurdles exist. This is true in virtue of his lacking introspective access to the contents of his visual field. When asked why he didn't walk straight down the hallway, the participant appears confused and is unable to answer, indicating that he has no memory of experiencing an understanding of the obstacles. The contrast between this experience and that of a neurotypical participant facing the same task reveals that he does not understand that the obstacles are present, as per Searle's own logic under CRA premise 2.

The second is that the patient's visual system (and relevant motor areas) understands that the obstacles are to be navigated around. According to the CRA, the system would need a subjective experience of understanding for this to count. This is obviously wrong. Understanding arises anytime we causally relate encoded information to implied goals. When learning how to skip rocks, I understand when and only when the information I have acquired from practice and observation translates into a consistent and effective execution of my goal of skipping a rock. Patients with severe anterograde amnesia (such as the famous Patient H.M.) can still learn skills without recalling specific episodes of their learning or having any broader awareness of their progression.2 In one notable case, Patient H.M. was trained on a mirror tracing task, which requires participants to trace the outline of a star while looking at their drawing through a mirror. This task is generally difficult and requires multiple attempts to complete adequately.3 After a few practice sessions at delayed intervals, Patient H.M. successfully completed the mirror tracing task without any memory of his practice sessions.4

From this, we might still say that the amnesiac understands rock skipping, because they have acquired information about it that enables them to achieve the associated goal. The causal link between acquired information and pre-determined goals is distinct from any experience of the acquisition. It is therefore conceivable that the visual system understands the presence of obstacles: it has causally related representations of the hurdles to its goal of minimizing harm.

Thus, part of a person can understand something without the person intuitively experiencing understanding. If so, a part can understand something while the whole does not, since, by Searle's Incorrigibility Thesis, a whole entity's understanding goes hand in hand with its experience of that understanding. Searle now faces a dilemma. On one hand, he could maintain his Part-Of Principle and claim that a part's understanding necessitates the whole entity's understanding. If so, the entity's understanding no longer aligns with its experience of understanding, defeating the Incorrigibility Thesis needed for the original CRA. On the other hand, Searle could admit that, despite the part, the whole entity does not understand, thus defeating his Part-Of Principle. Searle must either abandon CRA premise 2 or Revised CRA premise 3. Either way, Searle cannot claim an absence of understanding of Chinese. It is no longer clear that computation in the Chinese Room is insufficient for understanding, or that additional ingredients are necessary to produce cognition. Searle's conclusion that computationalism is false is unwarranted.

Endnotes

1. Jack B. Copeland, "Hypercomputation in the Chinese Room," in Unconventional Models of Computation, Lecture Notes in Computer Science, vol. 2509 (2002), https://doi.org/10.1007/3-540-45833-6_2.
2. Larry R. Squire, "The Legacy of Patient H.M. for Neuroscience," Neuron 61, no. 1 (2009): 8, https://doi.org/10.1016/j.neuron.2008.12.023.
3. Mona Sharon Julius and Esther Adi-Japha, "A Developmental Perspective in Learning the Mirror-Drawing Task," Frontiers in Human Neuroscience 10, art. 83 (March 2016), https://doi.org/10.3389/fnhum.2016.00083.
4. Squire, "Legacy of Patient H.M.," 8.
