Fifteen years ago, John Searle posed a challenge to “strong” artificial intelligence (the program to create in an artificial medium intelligence comparable to that of humans). He confidently proclaimed his challenge would withstand the test of time, including any possible advances in computer speed, memory, and robotic appliances. His challenge, the so-called Chinese Room thought experiment, attracted considerable interest in its heyday, second in controversial appeal only to Alan Turing’s famous “Turing Test,” whose themes it reflected as if in a funhouse mirror. Today the piece has become something of a historical curiosity. The “strong” AI program is now widely acknowledged as a failure (although for somewhat different reasons than Searle argued), so it would seem that resurrecting Searle’s rhetorical tour de force would be akin to applying electrical shocks to assembled cadaver parts best left in peace. As the metaphor suggests, however, the Chinese Room is not so much dead as undead; while its ostensible purpose is moribund, the presuppositions and unconscious assumptions that inform it are still very much alive. I think the Chinese Room is worth a second look not for the force of its argument but for what it reveals about contemporary ideas on what constitutes the essence of the human, especially intelligence, consciousness, and meaning. Excavating these and juxtaposing them with current controversies over the boundaries of the human will enable us to see what has changed, why it has changed, and what the change signifies in the decade and a half that has passed since Searle delivered the coup de grace that failed to deliver.
The Chinese Room experiment is easily explained. Suppose, Searle writes, that “you are locked in a room, and in this room are several baskets full of Chinese symbols” (32). You do not understand Chinese, but you have a rule book (in English) that tells you how to manipulate the symbols. “So the rule might say: ‘Take a squiggle-squiggle sign out of basket number one and put it next to a squoggle-squoggle sign from basket number two’” (32). “Now suppose that some other Chinese symbols are passed into the room, and that you are given further rules for passing back Chinese symbols out of the room” (32). The instructions have been cleverly constructed so that an onlooker would think that after questions are passed into the room, appropriate answers issue forth. This perception, however, is an illusion, for on “the basis of the situation as I have described it there is no way you could learn any Chinese simply by manipulating these formal symbols” (32). The point he hastens to drive home is that this is just what a so-called “intelligent” computer program does. “If going through the appropriate computer program for understanding Chinese is not enough to give you an understanding of Chinese, then it is not enough to give any other digital computer an understanding of Chinese” (33). Why? “Because no digital computer, just by virtue of running a program, has anything that you don’t have. All that the computer has, as you have, is a formal program for manipulating uninterpreted Chinese symbols. . . a computer has a syntax, but no semantics” (33).
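The procedure Searle describes is, at bottom, a lookup operation. A minimal sketch makes the point concrete; the symbols and rules below are invented stand-ins (nothing here comes from Searle's text), but the structure is faithful to his scenario: match the shape of the input, emit the prescribed output, understand nothing.

```python
# A toy "Chinese Room": the operator follows a rule book without understanding.
# The rule book is a hypothetical lookup table mapping incoming symbol strings
# to outgoing symbol strings; nothing in it encodes what any symbol means.
RULE_BOOK = {
    "squiggle squoggle": "squoggle squiggle squiggle",
    "squoggle": "squiggle",
}

def chinese_room(message: str) -> str:
    """Pass a string through the slot; return whatever the rules dictate."""
    # Purely formal symbol manipulation: syntax with no semantics.
    return RULE_BOOK.get(message, "squiggle")  # default rule for unknown input

print(chinese_room("squiggle squoggle"))  # prints "squoggle squiggle squiggle"
```

To an onlooker outside the room, the function appears responsive; inside, there is only pattern-matching, which is exactly the gap between syntax and semantics that Searle's argument turns on.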
We begin our interrogation by asking what role Chinese plays in this scenario. Why not French, Swedish, or Thai? Presumably Chinese appears because, as a non-alphabetic, non-Indo-European language, it will defeat any attempt by “you” (an English speaker) to guess at word meanings through cognates or common roots. Chinese is also presumed to be so foreign to “you” that Searle’s English-speaking audience will immediately sympathize with his description of Chinese ideograms as “squiggle-squiggles” and “squoggle-squoggles.” Chinese thus functions as the inscrutable other, that which stands outside and apart from a reader’s cultural context and experience. Furthermore, this alienness is presumed to be absolute. No amount of “manipulating symbols” will ever give “you” any hint about what words mean through their association with one another: which symbol strings might function as nouns, for example, and which as verbs. The Orientalist function of Chinese thus reinforces an absolutely crucial point for Searle: that no bridge can ever be created between syntax and semantics on the basis of associating and manipulating symbols. Put another way, no bridge can be constructed between formal symbol manipulation and meaning. Genuine (human) intelligence, Searle insists, has more than syntax; it also has content. Thoughts and feelings are about something, and it is this “aboutness” (or intentionality, a term philosophers use to denote the “aboutness” quality) that marks the meaning-seeking drive essential to humans.
Given this narrative, one might object that, with enough time, “you” may be quite likely to make associations and hence to find meaning, however nascent, in the symbols “you” manipulate. The division between syntax and semantics can remain inviolable only if no context can ever be created that would bridge the two, yet the simple act of mechanically matching symbols would be the first step in building such a context. Also in play is the locus of human meaning-making. By placing a human in a locked room and inviting readerly identification by using the second person, Searle performs an act of literary violence upon “you,” reducing “your” capacity for human understanding to a rote mechanical act. Nothing in the room is represented as extending your own cognitive capacities, which remain so severely stunted that “you” can function only as a non-comprehending automaton. This painful reduction of human capacity is then equated with a digital computer, a construction that has the effect of making the computer signify as an extremely inadequate person, an idiot savant incapable of ever attaining a properly full and rich human understanding. Only so does the odd construction make sense in which Searle writes that if you cannot understand Chinese, then no other digital computer can either. The formula goes like this: reduce the human to an automaton; equate the human automaton with the digital computer; imply that this reduction of human capacity is the natural (and only) state the computer occupies.
What does this construction suppress or make difficult to see? Perhaps most importantly, it implies that the computer’s cognitive capacities reside solely in the CPU, the Central Processing Unit for which the encapsulated human stands. But of course a computer is more than this, including memory, data storage and retrieval mechanisms, user interface, and so on. Searle attempts to respond to this objection by saying that even if one includes the Chinese characters, baskets, rule books, etc. as part of the computational system, the divide between semantics and syntax still remains absolute. He reasons that since none of these artifacts understand Chinese any more than “you” do, nothing essential changes.
Here his assumptions contrast starkly with contemporary theories of emergence in which systems exhibit behaviors that are more than the sum of their parts. In particular, his assumptions are refuted by the idea of an extended cognitive system, a model that Andy Clark in Supersizing the Mind: Embodiment, Action, and Cognitive Extension refers to as EXTENDED. Departing from fellow travelers such as Edwin Hutchins who argue that extended cognitive systems serve as scaffolding for human cognition, Clark performs the radically heuristic move of considering them as part of human cognition, a conclusion in direct contradiction to the model he calls BRAINBOUND. Although Searle has a more capacious view of cognition than many BRAINBOUND theorists, in that he considers feelings as well as thoughts to be part of mind, he participates in a BRAINBOUND worldview in many ways, for example by focusing his attention on the human automaton as the CPU, while relegating the rest of the room’s artifacts to non-cognitive status.
If, on the contrary, we adopt the EXTENDED view that everything in the room is part of “your” extended cognitive system, the room can be said to “know” Chinese, at least in a behavioral sense. As this qualification suggests, EXTENDED shifts the meaning of key terms, including “know,” “think,” and “understand.” Once non-human cognizers are admitted as part of the system, self-aware consciousness can no longer be an adequate measure of what it means for the system to “know” Chinese. The challenge posed by EXTENDED to Searle’s experiment is to question his assumptions that such terms must be understood in the context of self-aware consciousness, or at least embodied thought as represented by a human mind.
Another key term is meaning. To see how it shifts in EXTENDED, consider the intimate relation between meaning and context. As Othello discovers to his horror, meanings of words are notoriously context-dependent. The context of full human life, at once evoked and reduced in the figure of “you” locked in the room, becomes in EXTENDED a cascading series of contexts. The rule book, for example, knows which symbols match with which, the basket knows which symbols are incoming and outgoing, and so forth. The EXTENDED model implies that context is not self-identical or solely human-centered but rather a chain of partially overlapping, partially discrete contexts interacting with each other as different cognizers within the system coordinate their activities. In particular, meaning is tied in with the contexts in which information flows are processed. As Edward Fredkin succinctly observes, “The meaning of information is given by the processes that interpret it.” For a cell, these processes would include, for example, the flow of nutrients in and out of the cell walls, its metabolic activities, its expulsion of waste, and other such processes. To acknowledge the non-conscious nature of such activities, the EXTENDED model typically replaces consciousness with the broader term “cognition.” As I am using the term here, cognition requires as a minimum an information flow, embodied processes that interpret the flow, and contexts that support and extend the interpretive activities. In the Chinese Room, the basket counts as a cognizer, the rule book as another. Even the door slot through which the strings are passed can be considered a cognizer, for it receives information in the form of incoming characters, interprets these characters through processes that allow them to pass from outside to inside and from inside to outside, and constructs a context through its shape and position.
How does this view of cognition impact the absolute separation between syntax and semantics that Searle decrees? As we have seen, Searle associates syntax with mechanical ordering, whereas semantics implies that thoughts and feelings have contents, and moreover that these contents are crucial components of understanding and knowledge. If we agree with Searle that mind somehow emerges from brain, how does mind get from the brain’s non-conscious firing of neurons, mechanical operations of neurochemical gradients, and other non-conscious activities to visions of God or mathematical proofs? Contemporary answers to this age-old question, although still incomplete and controversial, nearly always involve recursive feedback and feedforward loops, widely acknowledged as mechanisms necessary for emergence to occur. A range of cognitive models as diverse as Maturana and Varela’s autopoiesis, Churchland’s neural nets, Edelman’s neuronal group selection and re-entry, and Hofstadter’s fluid analogy computer programs incorporate recursive loops as central features. The loops are crucial because they provide the means to bootstrap from relatively simple components to interactions complex enough to generate emergence.
Emergence does an end-run around Searle’s absolute distinction between syntax and semantics, for it implies that syntactical moves, if combined in structures making use of recursive loops and employing complex dynamics, can indeed bootstrap into semantics.
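A small illustration suggests how purely syntactic operations can begin to generate semantic traction. The corpus and symbols below are invented for the sketch; the technique is simple distributional co-occurrence counting, one crude instance of the bootstrapping gestured at above: by tracking only which symbols appear together, the system discovers that some symbols pattern alike, a first hint of categories such as nouns and verbs without any symbol ever being "understood."

```python
from collections import Counter
from itertools import combinations

# Hypothetical corpus of symbol strings passed through the room's slot.
corpus = [
    ["A", "B"], ["A", "C"], ["D", "B"], ["D", "C"],
]

# Purely syntactic bookkeeping: count which symbols co-occur.
cooccur = Counter()
for sentence in corpus:
    for x, y in combinations(sorted(sentence), 2):
        cooccur[(x, y)] += 1

def profile(sym):
    """Distributional profile: which symbols co-occur with sym, how often."""
    return {other: n for (x, y), n in cooccur.items()
            for other in ((y,) if x == sym else (x,) if y == sym else ())}

# "A" and "D" have identical profiles: they pattern alike, the germ
# of a syntactic category, derived from nothing but formal matching.
print(profile("A") == profile("D"))  # prints True
```

Nothing in this sketch understands anything, yet the co-occurrence structure already encodes information about how symbols function relative to one another; iterated and recursively fed back on itself at scale, this is the kind of mechanism emergence theorists have in mind.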
Although Searle positions his thought experiment as a refutation of strong AI, he shares with it certain key assumptions, particularly the emphasis on individual human thought. Strong AI took as its model the single person thinking; this was the phenomenon researchers sought to duplicate in artificial media. It is not surprising, then, that the strong AI literature is replete with scenarios in which intelligent computers compete with or supersede intelligent humans (Searle rehearses some of these in his argument), for in this view computers and humans seek to occupy the same ecological niche. In the EXTENDED view, the emphasis on the individual is transformed by recognizing that the boundaries separating person from environment are permeable in both directions, inner to outer and outer to inner. Always already a collective, the individual human is less a self-evident entity than the result of a certain focus of attention. Shift the focus, and the scene modulates from “you” locked in a room to an extended cognitive system in which “knowing” Chinese is a systemic property rather than something inside “your” head.
Would this construction be likely to satisfy Searle? Probably not, for it involves re-thinking and re-positioning terms he takes for granted. From my point of view, this is precisely the point. The value of re-visiting his thought experiment is not to argue, once again, whether it is right or wrong but to use it as a touchstone to gauge how far contemporary models have moved from his assumptions: from cognition in the head to cognition distributed throughout a system; from the individual as the privileged unit of analysis to the complex dynamics of interacting components; from the language-specific unitary context of a culturally-bound and situated person (“you”) to a cascading series of overlapping contexts operating at macro- and micro-scale embodied specificities; from the human as a self-contained and self-determined person defined by its contrast to an automaton to the human as an assemblage containing both biological and non-biological parts, some of which may be automatons considered in isolation but which are capable of emergence when enrolled in an extended cognitive system. In brief, the shift is from a person defined by individual consciousness to a collective defined by emergence.
This is the transformation through which our society is currently living. Many would still identify with Searle’s assumptions, but EXTENDED poses a strong challenge that, at the very least, invites us to reconsider what constitutes the essence of the human. Just as individual consciousness was the lynchpin of the liberal humanist subject, so Deleuzian assemblages, cognitive collectives, and dispersed subjectivities are the hallmarks of an age when globalization is blurring the boundaries of nationhood, transnational economies are transforming socioeconomic relations, and computer technologies are creating networks that make global communication an everyday fact of life. As Gilles Deleuze observes in “Postscript on the Societies of Control,” the question is not whether the current configuration is better or worse than liberal humanism but rather what opportunities, challenges, and resistances are specific to the new models. The first step in answering these questions is to recognize what those specificities are. For that, we could do worse than to re-visit the Chinese room, excavating its assumptions as a measure of where we have come from so as better to decide where we want to go.
The “strong” AI proponents that Searle cites include Herbert Simon, Allen Newell, Freeman Dyson, Marvin Minsky, and John McCarthy.
Churchland, Paul. 1995. The Engine of Reason, the Seat of the Soul: A Philosophical Journey into the Brain. Cambridge: MIT Press.
Clark, Andy. 2008. Supersizing the Mind: Embodiment, Action, and Cognitive Extension. New York: Oxford University Press.
Deleuze, Gilles. 1992. “Postscript on the Societies of Control,” October 59 (Winter), 3-7.
Edelman, Gerald M. 1993. Bright Air, Brilliant Fire: On the Matter of the Mind. New York: Basic Books.
Fredkin, Edward. 2007. “Informatics and Information Processing versus Mathematics and Physics.” Presentation at the Institute for Creative Technologies, Marina del Rey, May 25.
Hofstadter, Douglas. 1995. Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought. New York: Basic Books.
Hutchins, Edwin. 1996. Cognition in the Wild. Cambridge: MIT Press.
Maturana, Humberto and Francisco Varela. 1973. Autopoiesis and Cognition: The Realization of the Living. Robert S. Cohen and Marx W. Wartofsky (Eds). Boston Studies in the Philosophy of Science 42. Dordrecht: D. Reidel Publishing Co.
Searle, John. 1984. Minds, Brains, and Science. Cambridge: Harvard University Press.
Turing, Alan. 1950. “Computing Machinery and Intelligence,” Mind: A Quarterly Review of Psychology and Philosophy, 59.236: 433-460.