The Chinese room experiment: computers with minds?
This thought experiment leads us to wonder if a computer can ever have a mind.
The Chinese room thought experiment is a hypothetical situation posed by the American philosopher John Searle to show that the ability to manipulate a set of symbols in an orderly fashion does not necessarily imply linguistic understanding or comprehension of those symbols. In other words, understanding does not arise from syntax alone, which calls into question the computational paradigm that the cognitive sciences developed to explain the workings of the human mind.
In this article we will see what exactly this thought experiment consists of and what kind of philosophical debates it has generated.
The Turing machine and the computational paradigm
The development of artificial intelligence is one of the great 20th-century attempts to understand and even replicate the human mind through the use of computer programs. In this context, one of the most popular models has been that of the Turing machine.
Alan Turing (1912-1954) wanted to show that a programmed machine can hold conversations like a human being. To this end, he proposed a hypothetical situation based on imitation: if we program a machine to imitate the linguistic capacity of human speakers and then put it before a panel of judges, and it manages to convince 30% of those judges that they are talking to a real person, this would be sufficient evidence that a machine can be programmed to replicate the mental states of human beings; and, conversely, it would also serve as an explanatory model of how human mental states work.
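As a minimal sketch of that pass criterion as the article presents it (the 30% threshold is from the description above, but the judge counts below are illustrative assumptions), the test's verdict reduces to a simple ratio check:

```python
# A minimal sketch of the pass criterion described above: the machine
# "passes" if at least 30% of the judges believe they spoke with a real
# person. The judge counts used below are illustrative assumptions.
def passes_imitation_game(judges_fooled: int, total_judges: int,
                          threshold: float = 0.30) -> bool:
    """Return True when the fraction of fooled judges meets the threshold."""
    return judges_fooled / total_judges >= threshold

print(passes_imitation_game(4, 10))  # True: 40% of judges were fooled
print(passes_imitation_game(2, 10))  # False: only 20% were fooled
```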
Within the computational paradigm, part of the cognitivist current suggests that the most efficient way to acquire knowledge about the world is through an increasingly refined reproduction of the rules of information processing. On this view, the mind is an exact copy of reality: it is the place of knowledge par excellence and the tool for representing the outside world, allowing us, regardless of subjectivity or individual history, to function and respond in society.
After Turing's machine, some computational systems were programmed to try to pass the test. One of the first was ELIZA, designed by Joseph Weizenbaum, which responded to users by matching their messages against patterns previously registered in a database, leading some interlocutors to believe they were talking to a person.
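To give a sense of how shallow such matching can be, here is a minimal ELIZA-style responder; this is a hedged sketch with made-up rules, not Weizenbaum's original DOCTOR script. Each rule pairs a text pattern with a canned reply template, and no meaning is represented anywhere.

```python
import re

# A minimal ELIZA-style responder. Each rule pairs a regular expression
# with a canned reply template; the rules below are made-up examples,
# not Weizenbaum's original script.
RULES = [
    (re.compile(r"\bi am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.*)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT_REPLY = "Please, go on."

def respond(message: str) -> str:
    """Return the first matching canned reply, or a generic prompt.

    The program matches surface patterns only; it represents no meaning.
    """
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(match.group(1))
    return DEFAULT_REPLY

print(respond("I am sad about my exams"))
# -> Why do you say you are sad about my exams?
```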
Among the most recent inventions similar to Turing's test are, for example, the CAPTCHA used to detect spam, or Siri in the iOS operating system. But just as there have been those who try to prove that Turing was right, there have also been those who doubt it.
The Chinese room: does the mind work like a computer?
Based on the experiments that sought to pass the Turing test, John Searle distinguishes between Weak Artificial Intelligence (that which simulates understanding but has no intentional states; that is, it describes the mind but does not match it) and Strong Artificial Intelligence (when the machine has mental states like those of human beings: for example, when it can understand stories as a person does).
For Searle, it is impossible to create Strong Artificial Intelligence. He set out to show this by means of a thought experiment known as the Chinese room (or Chinese piece). The experiment poses a hypothetical situation as follows: a native speaker of English, who does not know Chinese, is locked in a room and must answer questions about a story that has been told to him in Chinese.
How does he answer them? By means of a book of rules, written in English, that orders the Chinese symbols syntactically without explaining their meaning, only how they should be used. Through this exercise, the person inside the room answers the questions correctly, even though he has not understood their content.
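A toy sketch of such a rule book (the symbol strings below are hypothetical placeholders, not real Chinese) makes the point concrete: the answering procedure is a purely syntactic lookup, with no semantics involved at any step.

```python
# A toy "rule book": a purely syntactic mapping from question strings to
# answer strings. The symbols below are hypothetical placeholders, not
# real Chinese, and the table encodes no meanings whatsoever.
RULE_BOOK = {
    "A1 B2 C3 ?": "D4 E5",
    "F6 G7 ?": "H8",
}

def answer(question: str) -> str:
    """Apply the rule book mechanically: look up the input, copy the output.

    Nothing in this procedure requires knowing what any symbol means,
    yet an outside observer sees fluent question answering.
    """
    return RULE_BOOK.get(question, "H8")  # fallback reply, also meaningless

print(answer("A1 B2 C3 ?"))  # -> D4 E5
```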
Now suppose there is an outside observer: what does he see? That the person inside the room behaves exactly like a person who does understand Chinese.
For Searle, this shows that a computer program can mimic a human mind, but it does not follow that the program is the same as a human mind, because it has no semantic capacity and no intentionality.
Impact on the understanding of the human mind
Taken to the human realm, this means that the process by which we develop the ability to understand a language goes beyond having a set of symbols; other elements are necessary that computer programs cannot have.
Not only that: building on this experiment, studies on how meaning is constructed, and where that meaning resides, have expanded. The proposals are very diverse, ranging from cognitivist perspectives, which hold that meaning lies in each person's head, derived from a set of mental states or given innately, to more constructionist perspectives, which ask how historically situated systems of rules and practices are socially constructed and confer social meaning (a term has a meaning not because it is in people's heads, but because it enters into a set of practices and rules of language).
Criticisms of the Chinese room thought experiment
Some researchers who disagree with Searle consider the experiment invalid because, even if the person inside the room does not understand Chinese, it may be that, together with the surrounding elements (the room itself, the furniture, the rule manual), the system as a whole understands Chinese.
To this, Searle responds with a new hypothetical situation: even if we remove the elements surrounding the person inside the room and ask him to memorize the rule manuals for manipulating the Chinese symbols, this person would still not understand Chinese; and neither does a computational processor, which only ever follows rules of that kind.
The response to this same criticism has been that the Chinese room is a technically impossible experiment. To which the reply, in turn, has been that the fact that something is technically impossible does not mean it is logically impossible.
Another of the best-known criticisms comes from Dennett and Hofstadter, who apply it not only to Searle's experiment but to the whole body of thought experiments developed in recent centuries: their reliability is doubtful because, rather than resting on rigorous empirical evidence, they are speculative and close to common sense, making them above all "intuition pumps".
Bibliographical references:
- González, R. (2012). The Chinese Piece: a thought experiment with Cartesian bias? Revista Chilena de Neuropsicología, 7(1): 1-6.
- Sandoval, J. (2004). Representation, discursivity and situated action. A critical introduction to the social psychology of knowledge. Valparaíso, Chile: Universidad de Valparaíso.
- González, R. (n.d.). "Intuition pumps", mind, materialism and dualism: Verification, refutation or epoché? University of Chile Repository. [Online]. Accessed April 20, 2018. Available at http://repositorio.uchile.cl/bitstream/handle/2250/143628/Bombas%20de%20intuiciones.pdf?sequence=1.