Grammars, Programs and the Chinese Room

DRAFT 1.0—6.3.2006; for comments

Cem Bozsahin
Middle East Technical University
bozsahin@metu.edu.tr

Searle (1980), in his Chinese Room thought experiment, sets out to show that a purely formalist account of mind is not possible. The particular claim he was arguing against is strong AI, which he identified with the slogan “the computer is not merely a tool in the study of mind; rather the appropriately programmed computer really is a mind in the sense that computers given the right programs can be literally said to understand and have cognitive states.” This view, according to Searle, is bound to fail in its aspirations because the kind of computation it envisages is formal, i.e. it operates over symbols with no content, whereas the mind sets up relations between intentional states and the world via the causal powers of the brain.

In the same article (and subsequently in 1990), Searle addresses possible objections to his claim, many of which were companions to his target article; they mainly stem from the assumptions of the experiment and from what we understand by understanding. Those that question the experimental setup are concerned with what is embodied in the Chinese room: an extended notion of body that includes not just Searle-in-the-box but the entire room, a robot with perceptual and motor skills, a brain simulator, a parallel rather than serial architecture, or a combination of the above. All these objections are constitutional.

It is interesting that the debate continued between Searle, philosophers, psychologists and practitioners of artificial intelligence, with almost no argument from linguistics (but cf. Carleton 1984). I offer one in this paper from the philosophy of linguistics, to question whether the Chinese room as imagined by Searle is possible. My argument is not constitutional; it is about what Searle considers computational, and about the linguistic conception of the same notion.

First, a summary of Searle’s argument, from a more recent restatement (Searle, 2001): Imagine a native speaker of English, who has no knowledge of Chinese, locked in a room full of boxes of Chinese symbols (a database), together with a book of instructions in English (the program), which (s)he can interpret, for manipulating the symbols. More Chinese symbols are sent into the room (questions), which the person in the room correctly answers in Chinese symbols by following the instructions for matching the database symbols with the symbols in the questions. The person passes the Turing (1950) test for understanding Chinese, for the outsiders cannot peek inside the room to see that the answers are simply look-ups. Yet (s)he does not understand a word of Chinese. The program and the database add no understanding of Chinese to the person, even though (s)he already knows how to interpret symbols in one language, namely English. By extension, computers cannot understand Chinese (or any human language) by purely formal manipulation of symbols.

The linguistic aspect of the experiment, I think, is as follows. What is Chinese in the Chinese room is the database, and the fragments of the program that contain Chinese symbols and their abstractions (the program is in English, but it is about Chinese symbol correspondences). The program cannot be of infinite size (otherwise it would not be a program); therefore the correspondences in the program cannot be phrase-to-phrase matchings, for we know that there are (countably) infinitely many Chinese expressions.
Hence the program must contain finitely characterizable symbols and their program-internal abstractions, such as calling a group of symbols a certain kind of variable, calling certain combinations of variables other kinds of variables, and so on; in other words, a generative grammar of Chinese. In the thought experiment we must assume that the program contains a generative grammar because
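To make this point concrete, here is a minimal sketch, in Python, of a generative grammar in the relevant sense: a finite set of rules over program-internal variables (nonterminals) that characterizes a countably infinite set of expressions. The rules and the small pinyin vocabulary are invented here purely for illustration; nothing hinges on their being good Chinese.

```python
import itertools

# A toy generative grammar: finitely many rules, yet infinitely many sentences.
# Nonterminals ("S", "NP", "VP", "V") are the program-internal "variables" of
# the text; any symbol not listed as a key is a terminal. Vocabulary (rough
# glosses, invented for illustration): zhangsan/lisi = names, kanjian = "see",
# xihuan = "like", shuo = "say (that)".
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["zhangsan"], ["lisi"]],
    "VP": [["V", "NP"], ["shuo", "S"]],   # "shuo S" embeds a clause: recursion
    "V":  [["kanjian"], ["xihuan"]],
}

def expand(symbol, depth):
    """Yield every sentence derivable from `symbol` within `depth` rule applications."""
    if symbol not in GRAMMAR:             # terminal symbol: yields itself
        yield [symbol]
        return
    if depth == 0:                        # rewriting budget exhausted
        return
    for alternative in GRAMMAR[symbol]:
        # Combine every expansion of every symbol in the chosen alternative.
        for parts in itertools.product(
                *(list(expand(s, depth - 1)) for s in alternative)):
            yield [word for part in parts for word in part]

# The rule set never changes, but raising the depth bound keeps producing new,
# longer sentences, e.g. "zhangsan shuo lisi kanjian zhangsan"
# ("Zhangsan says Lisi sees Zhangsan"):
for depth in (3, 5, 7):
    print(depth, sum(1 for _ in expand("S", depth)))   # prints: 3 8 / 5 24 / 7 56
```

The finite dictionary GRAMMAR plays the role of the book of instructions: it mentions no sentence of the language explicitly, yet through the recursive rule it stands in a determinate correspondence with infinitely many of them, which is why a finite program cannot be a phrase-to-phrase lookup table.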