The Chinese room is a thought experiment presented by John Searle (born 1932) to challenge the claim that it is possible for a digital computer running a program to have a "mind" and "consciousness"[a] in the same sense that people do, simply by virtue of running the right program. The experiment is intended to help refute a philosophical position that Searle named "strong AI":

"The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."[b]

To contest this view, Searle writes in his first description of the argument: "Suppose that I'm locked in a room and ... that I know no Chinese, either written or spoken". He further supposes that he has a set of rules in English that "enable me to correlate one set of formal symbols with another set of formal symbols", that is, the Chinese characters. These rules allow him to respond, in written Chinese, to questions, also written in Chinese, in such a way that the posers of the questions who do understand Chinese are convinced that Searle can actually understand the Chinese conversation too, even though he cannot. Similarly, he argues that if there is a computer program that allows a computer to carry on an intelligent conversation in a written language, the computer executing the program would not understand the conversation either.
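Searle never specifies the contents of his rule book, but the kind of purely formal symbol manipulation he describes can be sketched in a few lines of code. The following Python fragment is a hypothetical illustration only: the rule table, its example entries, and the chinese_room function are invented here, and a real conversational program would need vastly more elaborate rules. The point is that the operator of either version manipulates uninterpreted symbols in the same way.

```python
# A minimal, hypothetical sketch of purely syntactic symbol manipulation.
# The rule table and its entries are invented for illustration; Searle's
# argument does not depend on how the rules are actually written.

# Each rule pairs one string of Chinese characters with another.
# Nothing in the table records what either string means.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I am fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(question: str) -> str:
    """Look up the question as an uninterpreted symbol string and return
    the paired symbol string; no understanding is involved at any step."""
    return RULE_BOOK.get(question, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # prints: 我很好，谢谢。
```

However large the table or sophisticated the matching, the lookup itself never consults the meaning of the symbols, which is precisely what the thought experiment turns on.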

The experiment is the centerpiece of Searle's Chinese room argument, which holds that a program cannot give a computer a "mind", "understanding" or "consciousness", regardless of how intelligently the program may make the computer behave. The argument is directed against the philosophical positions of functionalism and computationalism, which hold that the mind may be viewed as an information-processing system operating on formal symbols. Although it was originally presented in reaction to the statements of artificial intelligence researchers, it is not an argument against the goals of AI research, because it does not limit the amount of intelligence a machine can display. The argument applies only to digital computers and does not apply to machines in general. This kind of argument against AI was described by John Haugeland as the "hollow shell" argument.[4]

Searle's argument first appeared in his paper "Minds, Brains, and Programs", published in Behavioral and Brain Sciences in 1980. It has been widely discussed in the years since.

Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being.

The question Searle wants to answer is this: does the machine literally "understand" Chinese? Or is it merely simulating the ability to understand Chinese?[c] Searle calls the first position "strong AI" and the latter "weak AI".[d]

Searle then supposes that he is in a closed room and has a book with an English version of the computer program, along with sufficient paper, pencils, erasers, and filing cabinets. Searle could receive Chinese characters through a slot in the door, process them according to the program's instructions, and produce Chinese characters as output. If the computer had passed the Turing test this way, it follows, says Searle, that he would do so as well, simply by running the program manually.

Searle asserts that there is no essential difference between the roles of the computer and himself in the experiment. Each simply follows a program, step-by-step, producing a behavior which is then interpreted as demonstrating intelligent conversation. However, Searle would not be able to understand the conversation. ("I don't speak a word of Chinese," he points out.) Therefore, he argues, it follows that the computer would not be able to understand the conversation either.

Searle argues that without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking", and since it does not think, it does not have a "mind" in anything like the normal sense of the word. Therefore, he concludes that "strong AI" is false.
