
Searle's "Chinese Room"

Published: 2006-02-10    Author: Unknown    Source: Dictionary of Philosophy of Mind, http://www.artsci.wustl.edu/~philos/Mi

 



Chinese room - An argument advanced by John Searle intended to show that the mind is not a computer and that the Turing Test is inadequate.

Introduction

Searle first formulated this problem in his paper "Minds, Brains, and Programs", published in 1980. Ever since, it has been a mainstay of debate over the possibility of what Searle called 'strong artificial intelligence'. Supporters of strong artificial intelligence believe that a correctly programmed computer isn't simply a simulation or model of a mind; it would actually count as a mind. That is, it understands, has cognitive states, and can think. Searle's argument (or more precisely, thought experiment) against this position, the Chinese room argument, goes as follows:

Suppose that, many years from now, we have constructed a computer which behaves as if it understands Chinese. In other words, the computer takes Chinese symbols as input, consults a large look-up table (as all computers can be described as doing), and then produces other Chinese symbols as output. Suppose that this computer performs this task so convincingly that it easily passes the Turing Test. In other words, it convinces a human Chinese speaker that it is a Chinese speaker. All the questions the human asks are responded to appropriately, such that the Chinese speaker is convinced that he or she is talking to another Chinese speaker. The conclusion proponents of strong AI would like to draw is that the computer understands Chinese, just as the person does.
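To make the mechanism concrete, here is a minimal Python sketch of the kind of look-up-table behaviour described above. The table entries, the fallback reply, and the function name chinese_room are illustrative assumptions, not anything specified in Searle's paper; a system that actually passed the Turing Test would need a vastly larger table (or an equivalent program).

```python
# Minimal sketch of a look-up-table "Chinese room" (illustrative only).
# The table maps strings of Chinese characters to other strings of Chinese
# characters; producing a reply is pure symbol matching, with no
# interpretation of what the characters mean.

LOOKUP_TABLE = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thank you."
    "你会说中文吗？": "当然会。",        # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(input_symbols: str) -> str:
    """Return the output symbols the table dictates for the input symbols."""
    # The fallback reply means "Please say that again."
    return LOOKUP_TABLE.get(input_symbols, "请再说一遍。")

if __name__ == "__main__":
    print(chinese_room("你好吗？"))   # prints: 我很好，谢谢。
```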

Now, Searle asks us to suppose that he is sitting inside the computer. In other words, he is in a small room in which he receives Chinese symbols, looks them up in a look-up table, and returns the Chinese symbols that the table indicates. Searle notes, of course, that he doesn't understand a word of Chinese. Furthermore, his lack of understanding goes to show, he argues, that computers don't understand Chinese either, because they are in the same situation as he is. They are mindless manipulators of symbols, just as he is - and they don't understand what they're 'saying', just as he doesn't.

Replies

The two most popular replies to this argument (both of which Searle (1980) considers) are the 'systems reply' and the 'robot reply'. Briefly, the systems reply is simply that though Searle himself doesn't understand Chinese in the thought experiment, it is perfectly correct to say that Searle plus the look-up table understands Chinese. In other words, the entire computer would understand Chinese, though perhaps the central processor or any other part might not. It is the entire system that matters for attributing understanding. In response, Searle claims that if we simply imagine the person in the Chinese room memorizing the look-up table, we have produced a counterexample to this reply.

The robot reply is similar in spirit. It notes that the reason we don't want to attribute understanding to the room, or to a computer as described by Searle, is that the system doesn't interact properly with its environment. This is also a reason to think the Turing Test is not adequate for attributing thinking or understanding. If, however, we fixed this problem - i.e. we put the computer in a robot body that could interact with the environment, perceive things, move around, etc. - we would then be in a position to attribute understanding properly. In reply, Searle notes that proponents of this reply have partially given up the tenet of AI that cognition is symbol manipulation. More seriously, he proposes that he could be inside a Chinese robot just as easily as a Chinese room, and that he still wouldn't understand Chinese.

