
latter statement is evaluable (in fact, falsifiable) by an understood ('background') criterion for mountain height, like 'the summit is so many meters above sea level'. No such criteria exist for prettiness. Beyond this distinction, Searle thinks there are certain phenomena (including all conscious experiences) which are ontologically subjective, i.e. are experienced subjectively. For example, although it might be subjective or objective in the epistemic sense, a doctor's note that a patient suffers from back pain is an ontologically objective claim: it counts as a medical diagnosis only because the existence of back pain is "an objective fact of medical science".[17] But the pain itself is ontologically subjective: it is only experienced by the person having it. Searle goes on to affirm that "where consciousness is concerned, the appearance is the reality".[18] His view that the epistemic and ontological senses of objective/subjective are cleanly separable is crucial to his self-proclaimed biological naturalism.

Artificial intelligence

See also: Chinese room and philosophy of artificial intelligence

A consequence of biological naturalism is that if we want to create a conscious being, we will have to duplicate whatever physical processes the brain goes through to cause consciousness. Searle thereby means to contradict what he calls "Strong AI", defined by the assumption that as soon as a certain kind of software is running on a computer, a conscious being is thereby created.[19]

In 1980, Searle presented the "Chinese room" argument, which purports to prove the falsity of strong AI.[20] (Familiarity with the Turing test is useful for understanding the issue.) Assume you do not speak Chinese and imagine yourself in a room with two slits, a book, and some scratch paper. Someone slides some Chinese characters through the first slit; you follow the instructions in the book, write what it says on the scratch paper, and slide the resulting sheet out the second slit. To people on the outside, it appears that the room speaks Chinese (they slide Chinese statements in one slit and get valid responses in return), yet you do not understand a word of Chinese. According to Searle, this suggests that no computer can ever understand Chinese or English, because being able to 'translate' Chinese into English does not entail 'understanding' either Chinese or English: all that the person in the thought experiment, and hence a computer, is able to do is execute certain syntactic manipulations.[21]

Stevan Harnad argues that Searle's "Strong AI" is really just another name for functionalism and computationalism, and that these positions are the real targets of his critique.[22] Functionalists claim that consciousness can be defined as a set of informational processes inside the brain. It follows that anything that carries out the same informational processes as a human is also conscious. Thus, if we wrote a computer program that was conscious, we could run that computer program on, say, a system of ping-pong balls and beer cups, and the system would be equally conscious, because it was running the same information processes.

Searle argues that this is impossible, since consciousness is a physical property, like digestion or fire. No matter how good a simulation of digestion you build on the computer, it will not digest anything; no matter how well you simulate fire, nothing will get burnt. By contrast, informational processes are observer-relative: observers pick out certain patterns in the world and consider them information processes, but information processes are not things-in-the-world themselves. Since they do not exist at a physical level, Searle argues, they cannot have causal efficacy and thus cannot cause consciousness. There is no physical law, Searle insists, that can see the equivalence between a personal computer, a series of ping-pong balls and beer cans, and a pipe-and-water system all implementing the same program.[23]
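As a purely illustrative sketch of the point about syntactic manipulation, the exchange in the room can be modelled as nothing more than a lookup against a rule book. The rules and the Chinese phrases below are invented for this example (they are not from Searle); the point is only that the code, like the person in the room, matches symbol shapes and copies out prescribed replies without any notion of what the symbols mean.

```python
# Illustrative only: the Chinese room as a purely syntactic lookup.
# The "rule book" entries are hypothetical phrases invented for this sketch.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "The weather is nice."
}

def chinese_room(symbols_in: str) -> str:
    """Return whatever reply the rule book prescribes for the input symbols.

    Like the person in the room, this matches shapes against rules and
    copies out the prescribed answer; no meaning is ever consulted.
    """
    return RULE_BOOK.get(symbols_in, "对不起，我不明白。")  # "Sorry, I don't understand."

if __name__ == "__main__":
    # From outside, the exchange looks like fluent Chinese, yet the
    # program attaches no meaning to any of these strings.
    print(chinese_room("你好吗？"))
```

On Searle's view, replacing this toy table with an arbitrarily sophisticated program changes only the amount of syntactic manipulation, not the absence of understanding.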

Social reality

Searle extended his inquiries into observer-relative phenomena by trying to understand social reality. He begins by arguing that collective intentionality (e.g. "we're going for a walk") is a distinct form of intentionality, not simply reducible to individual intentionality (e.g. "I'm going for a walk with him and I think he thinks he's going for a walk

