Turing, Searle, and Thought
Nick Bourbaki [m]
In a recent issue of Scientific American, John Searle presented what is known as the "Chinese Room" argument to refute the validity of the Turing Test, and his article has engendered a good deal of criticism from the AI community. The purpose of this note is to summarize the positions of Searle and of some of his critics.
In the January 1990 issue of Scientific American, and in an earlier, more extensive article, John Searle presented an argument against using the Turing test as a definition of intelligence. His papers have engendered a good deal of criticism from the Artificial Intelligence (AI) community, some of whose members have accused him of a number of sins ranging from mysticism to bigotry. Since both Searle and his critics are engaged in a debate rather than a discussion, the issues involved in defining intelligence seem to be getting less clear over time. In a debate, neither party has any genuine interest in the opinions of the other, and each party's tendency to distort the views of opponents is generally confusing to innocent bystanders. The problem is further complicated because scientists and philosophers have different cultures and different modes of working and communicating - it's no wonder that a scientist and a philosopher in debate can hardly make each other out. The purpose of this article is to present, in as fair a manner as I can, the views of Searle and some of his critics.
Before discussing the views of Searle and his critics, let's remind ourselves about Alan Turing's test. In a paper written in 1950, Turing proposed an operational definition of intelligence that was chosen to be equally applicable to humans and machines. The test was described as a generalization of a parlor game Turing called the imitation game. The basic idea of this game is that an interrogator (or interrogators) attempts to determine the sex of one or more contestants by asking questions and receiving answers in writing. The goal of at least one contestant answering these questions is to cause an interrogator to make the wrong determination. No information is available to an interrogator other than the written answers, and at least one of the contestants answering questions is not obligated to tell the truth.
The Turing test, in its original form, is to replace by a machine one of the contestants of the imitation game who is not required to be truthful. If the results of the game are unaffected by the presence of this machine, then (by Turing's definition) this machine is said to be capable of thought. In other words, a machine that is indistinguishable from a human being solely on the basis of "written" interaction is considered to be capable of thought, where written interaction includes teletypes, questions transmitted via a third (human) party, typewritten questions and answers, etc.
In his paper, Turing motivates this definition by noting, in effect, that such written interactions are generally considered sufficient for assessing the intelligence (or lack of intelligence) of a person. He then discusses some of the implications of various computability results on this sort of definition. In particular, his discussion shows how this definition could be correct, provided you are willing to accept that the essential nature of intelligence is adequately captured by a formal system. [a] Turing also briefly considers a few objections to this definition, involving a range of topics from romantic love to extrasensory perception. While his treatment of these objections is not inflammatory, his attempts to critique his definition seem to me to be fairly weak.
For such a seminal work in a technical field, Turing's paper is unusually informal, but I expect that Turing thought it "obvious" that such a definition of intelligence was correct and saw no reason to be more formal. However, while Turing's definition is an obvious choice, it is quite possible to raise well-considered objections to it.
Searle's argument is subtle in a way that seems to confuse intelligent readers. To illustrate this subtlety, I want to propose a problem similar to the one Turing and Searle consider, but less weighty, or at least less emotionally charged. The problem is to find a definition of gender that applies equally to humans and machines. To properly imagine this problem, try thinking of Robbie the Robot rather than an IBM mainframe. We will find that there is no satisfactory operational definition of gender, for reasons that are plausible to scientists and that parallel Searle's argument against the Turing test as a valid definition of intelligence.
One approach is to specify a test based only on physical appearance. Such a test might work for humans (though apparently even that isn't the view of the International Olympic Committee), but it would probably fail for machines.
A second approach is to define gender in terms of observed behavior: gestures, mannerisms, clothing, and sexual interactions, for example. It is easy to imagine a machine that could act like a male or a female. However, in an era of changing sex roles and surgical techniques, this definition doesn't seem to distinguish true human males from true human females.
A third approach is to define gender in terms of both observed behavior and potential observed behavior. Here we can include giving birth or the potential to give birth. [b] However, there can exist women who have never given birth and never could. For example, defective organs or defective genetic material can prevent conception or birth. Repairing a definition based on potential observed behavior is difficult, because there are arbitrarily many reasons why a true female could never give birth, and it seems impossible to list them all or even to characterize them.
Many people consider the concept of gender inseparable from its fleshly and biological origin and nature. That is, it is not merely the fact that physical structures are a certain way, but that they came to be that way without the aid of surgery and are due to the presence of certain chromosomal features. Any definition of gender that applies equally well to men and machines would not satisfy such a person.
In short, it seems to me that an operational definition of gender is not possible. The difficulty is that the concept of gender involves not simply operational (or behavioral) characteristics but also structural and functional characteristics. A structural characteristic is one that depends on the way a mechanism is put together, and a functional characteristic is one that depends on the purpose and mechanism of operation of a particular structure or combination of structures.
Someone concerned with the biological aspects of gender might intellectually appreciate an implementation-independent definition of gender, and might use it occasionally, or might use it to learn about the true nature of gender, but to such a person this definition would be of something related to but other than gender.
Searle is a person who holds this view with regard to thinking.
In order to describe Searle's point of view, it will be expedient to introduce the idea of intentionality. "Intentionality is the characteristic of consciousness whereby that consciousness is aware of or directed toward an object." If you are thinking about what you had for lunch, trying to remember where your car is parked, or trying to recall your spouse's birthday, you are being intentional. If you bark your shin on a coffee table in the middle of the night, you don't start being intentional until you begin to wonder why you persist in keeping the table in the path to the kitchen.
Searle has also stated that intentionality is "that property of the mind (brain) by which it is able to represent other things." Here, the word 'represent' does not have quite the same meaning that computer scientists use. A 'representation' is a mental state that has a propositional content, which determines its conditions of satisfaction, and a propositional mode, which determines the 'direction of fit' of its propositional content. The direction of fit determines how the conditions of satisfaction of the propositional content can fail. If the mental state is a belief about some state of affairs, the direction of fit is that the state of affairs in the world must match that specified by the propositional content of the belief - the direction of fit is mind-to-world. For example, if I believe it is raining, then my belief is true if it is indeed raining and not if water is merely falling from the sky (which can happen even if it is not raining). Here the propositional mode tells us that the belief is false when it is not raining. The wish that it were raining is also a representation, but its conditions of satisfaction run the other way: it is satisfied not by any adjustment of the believer's mind but by the world coming to match the wish - the direction of fit is world-to-mind.
One important application of the concept of intentionality is to distinguish thought from mechanical action, so Searle's use of it in his argument is not surprising. Some attempts have been made to characterize intentionality in terms of properties, but it appears that these attempts are not generally regarded as successful,[c] and in Searle's argument the concept is treated as a primitive one.
Searle's "Chinese Room" argument is aimed against the principle of strong AI. The principle of strong AI is that an appropriately programmed computer is actually thinking, as opposed to simulating thought. This principle is equivalent to accepting the Turing test as a definition of thought, always provided a computer can actually pass the test. Searle's argument is quite simple. Imagine someone who understands only English sitting in a room with a set of rules that tells this person how to respond to questions written in Chinese in such a way as to pass the Turing test. In Searle's view, the person in the room does not "understand" Chinese, and so the Turing test is not testing for comprehension or thought in any realistic sense. [d] In other words, any test for thought which can be passed by systems which behave in purely formal ways can be reduced to something like the Chinese Room, and so is not an adequate test. The central point, to Searle, of the argument is that the person answering the Chinese questions is not engaged in intentional behavior with respect to these questions, and so cannot be said to be "thinking about" the answers. As a result, the test is not valid.
Searle goes on to contend that intentional behavior must be in response to real contingencies, not formal simulations of reality. If so, it would seem to follow that the capacity for intentional behavior is the result of what might be called phylogenetic contingencies, and so would only be found in products of real evolution. In short, what Searle is saying is that, just like other biological activities such as digestion and photosynthesis, thought is intrinsically dependent on the biochemistry of its origin. Just as a formal simulation of digestion is not really digesting, a formal simulation of thought is not really thinking. And to tie this to the gender thought experiment, a formal simulation of gender is not gender, because it lacks biological structural and functional characteristics.
I think at this point that it will be useful to clearly state some things that he does not contend. Searle does not hold that all machines are incapable of thought, because he holds that humans themselves are precisely such machines; in other words, Searle holds that humans are a certain kind of "digital computer" that is capable of thought. He also doesn't contend that no man-made machine can think. He holds that a sufficiently precise copy of a human, no matter how fabricated, would think. [e] Finally, Searle expresses no opposition to the attempt to create a machine that passes the Turing test, or to the idea that the understanding of such a machine could provide insight into the nature of human thought. He also believes that animals have intentionality. In other words, Searle is not taking refuge in the sort of vague, anti-scientific mysticism that contends that there is something inherently beyond understanding about thought and the mind.
Now let's return to the question of intentionality and how Searle uses it. According to Searle, intentionality and other mental states are 'caused by' and 'realized in' the structure of the brain. To understand this, Searle uses an analogy with liquid properties of water. No individual water molecule can be said to be wet, but the wetness of water is caused by the motions and other properties of those molecules as they interact, and wetness is realized in the structure of the material that is the collection of water molecules. That is, wetness is caused by the behavior of water molecules, and wetness is realized in the water - wetness is not something added to the collection of molecules, and it is not an epiphenomenon.
A key point of his analysis is that the causation of wetness by molecular behavior is possible because wetness is an abstraction - a property of the gross, statistical behavior of a large group of molecules. The only direct physical causality is molecule-to-molecule, but there is a level-shifted causation of the macro-properties by the micro-properties.
Intentionality is caused by and realized in the structure of the brain, but can it be caused by and realized in other structures? Searle appears to think this is possible in principle. A related question is whether "the same" intentionality can be caused by and realized in different structures. What I mean by this is: is there some set of properties [f] such that two different physical media or instrumentalities have those properties, and all states caused by and realized in one medium are also caused by and realized in the other, at least at the level of abstraction we are interested in? For example, water is liquid and so is mercury, but water molecules are not mercury atoms. In fact, scientists can construct novel molecules that, in bulk, behave as liquids.
Furthermore, if there are such different media, is it reasonable to conclude that that set of properties - rather than the media - represents an abstract description of the origins of intentionality?
This reminds me: There is a problem understanding the relationship between software, hardware, and the behavior that software causes hardware to exhibit.
Every programming language determines, at least in principle, a mapping between programs and their meanings, where the meaning can be expressed, for example, as the behavior of functions over specified domains. Scheme, for instance, has a denotational semantics that can be used to assign a meaning to any Scheme program. Compilers and interpreters take programs and map them into instructions and data structures that exhibit the behavior specified by the program. There is also a meaning function from the behavior exhibited by a computer running a program to the same semantic domain as is used to assign meanings to programs. A compiler is correct when the meaning of every program is the same as the meaning of the running compiled program.
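This correctness criterion can be made concrete. The following is a minimal sketch of my own (not from any source discussed here): a toy expression language with a direct evaluator serving as its semantics, a compiler to a tiny stack machine, and a check that the meaning of a program equals the meaning of the running compiled program.

```python
def evaluate(expr):
    """The language's semantics: map an expression to the number it denotes."""
    if isinstance(expr, (int, float)):
        return expr
    op, left, right = expr
    a, b = evaluate(left), evaluate(right)
    return a + b if op == "+" else a * b

def compile_expr(expr):
    """Compile an expression to postfix instructions for a stack machine."""
    if isinstance(expr, (int, float)):
        return [("push", expr)]
    op, left, right = expr
    return compile_expr(left) + compile_expr(right) + [(op, None)]

def run(instructions):
    """The machine's behavior: execute instructions, return the stack top."""
    stack = []
    for op, arg in instructions:
        if op == "push":
            stack.append(arg)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if op == "+" else a * b)
    return stack[0]

# Correctness in the sense above: the meaning of the program equals the
# meaning of the running compiled program.
program = ("+", 1, ("*", 2, 3))          # represents 1 + (2 * 3)
assert evaluate(program) == run(compile_expr(program)) == 7
```

The point of the sketch is only that "meaning" lives in two places - assigned to the program text and recovered from the machine's behavior - and that a compiler is judged by whether the two agree.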
Computer hardware is designed to have well-defined behavior and to have suitable behaviors to implement the semantics of various programming languages. A large program - any program for that matter - is a structure that specifies a set of properties and relationships that, when the program is executed, cause a set of program states to exist and which are realized in the structure of the running program. Of course, as the program runs, the structure changes, where the changes are determined by the properties of the program. The implementation of the program on any particular computer is possible because the computer provides a medium that can be made to exhibit the properties that the program specifies.
The property of a state being caused by and realized in a physical medium is a means, according to Searle, to solve the mind-body or mind-brain problem. That problem is, approximately, about the relation between mental events and physical action. For example, how does my intending to raise my arm cause my arm to raise? If mental events are not physical, how can they cause physical events? And if mental events are really just physical events, what is a mental event anyway?
To Searle this dilemma is neatly solved because mental events are caused by and realized in physical (biological) structures.
The only apparent discrepancy between the model of mental events and computational events is that today's computers are not nondeterministic in ways that the brain might be.
The nature of the semantics (or meaning) of a computer program is interesting. Searle maintains that a program can contain no semantics because it is formal and subject to many interpretations.  On the other hand he maintains that a person can "impose intentionality on [his] utterances by intentionally conferring on them certain conditions of satisfaction which are the conditions of satisfaction of certain psychological states." 
Programs written in programming languages have "certain conditions of satisfaction" (semantics) intentionally conferred on them by reference to the semantic rules of the language, which are always written down, and often in precise language.
One of Searle's arguments is this:
What, in fact, are the differences between animal brains and computer systems that enable the Chinese room argument to work against computers but not against brains? The most obvious difference is that the processes that define something as a computer - computational processes - are completely independent of any reference to a specific type of hardware implementation. 
Of course, wetness, as we saw, is also independent of any reference to a specific type of molecule, so perhaps wetness is a poor analogy to use to explain the nature of intentionality.
Cogent Arguments Against Searle
Most of the critics of Searle's argument fail to respond directly to his points. This is actually not all that surprising in that most of the critics are scientists and Searle's argument is philosophical, but it means that doing a point by point critique of the argument is unlikely to be very useful. In this section I'll just discuss what I feel are the main (cogent) points culled from their responses, and I'll do so in no particular order. Also, since the sources of these counterarguments range from published articles to electronic mailing lists to informal conversations, I will make no attempt at attribution, except to say that none of the ideas in this section is original with me.
The most commonly raised objection is that intentionality, at least as defined by Searle, should not be considered necessary for thought. The idea is that intentionality is associated with the thought of biological entities, but requiring it to be part of all thought has the effect of arbitrarily restricting the use of the term. It is common practice in science to allow a concept to be applied to a new phenomenon, even one which is "essentially" different, provided it is not operationally different. For example, the insulin taken by a diabetic may be produced by an "artificial" means, but few people would regard it as being other than real insulin. Therefore, the fact that the thought of a man-made machine is not intentional may raise a warning flag, but it doesn't mean that it isn't thought.
Another objection is that restricting intentionality to biological systems is arbitrary and suspect. [g] While the capacity for intentional behavior certainly does arise from evolutionary processes, it need not do so exclusively. Searle's argument for this exclusivity is based on his thought experiment using the Chinese Room, but one can object that this thought experiment is seriously, perhaps fatally, flawed. If someone suggested a thought experiment involving a paper cup containing all of the electrons in the universe, we might well balk until we understood how a paper cup was to be made without electrons. By the same token, a set of rules that could pass the Turing test in Chinese would be huge and tremendously complex, and if the rules were exercised manually by a single person, it is very doubtful that the person would ever live long enough to answer a question, or even a "fraction" of a question. [h] The idea is that any system complex and elaborate enough to actually pass the Turing test must surely have a structure and behavior of a richness to rival that of a human brain, and anyone considering a brain in terms of the sequential behavior of individual neurons might easily be unable to perceive intentionality, just as we need more information to understand wetness than we can gather from a single water molecule. [i] Since the intelligence of a Turing-smart machine may be very different from our own, we had best wait until we can realistically study such a machine before we conclude anything about its possession of such subtle properties.
Others object that intentionality is not a primitive concept, but is simply a combination of subjective state and context. In other words, if, unbeknownst to someone, you remove that person's brain and place it in a machine which creates a perfect simulation of the world around it, is the person's thought still intentional? If so, then how is it any different if some other machine takes the person's place in the simulation, and if not, doesn't that just show that intentionality is entirely subjective? To the extent that intentionality is a function of context, we may propose different requirements on the Turing test, but we can't dispose of it. To the extent that intentionality is subjective, we must examine these states in any machine as we would in any person, and this, too, is just an alteration of the Turing test. In either case an operational definition of thinking can still exist.
The next objection is what seems to me an obvious refinement of Turing's discussion of his test. In this view, intentionality is totally irrelevant. A theory of mind and thought need not concern itself overly with notions such as understanding and meaning, as these concepts are too vague and coarse to be basic. Just as chemists needed to abandon concepts such as phlogiston, so must scientists studying thought recognize that intentionality is at best a vague intuition and not a fundamental concept. [j] The tradition in science is that all fundamental concepts must have an operational or at least testable and observable definition, and so it must be in AI. [k]
This objection arises at least partly from reflection on the question "how do we know intentionality exists?" As Searle himself notes, "we can't make sense of ... behavior otherwise." Searle apparently believes that not all of the properties of neurobiological matter are captured by current computational systems, but he never makes it clear exactly which mental phenomenon convinces him of this.
Silly Arguments Against Searle
Having dealt with the cogent objections in the previous section, let's look at a couple of the "hysterical" ones. A silly objection that I have seen contends that Searle's argument reveals that he is a bigot. Even though Searle concludes that some essential quality of thought is missing from the "mental" activity of a computer, I think that it is premature to assume that he will stand in the way should his great-granddaughter decide to marry a robot.
I have also seen responses that claim Searle is a mystic, and these I treat more seriously, not because I think that they have more merit, but because such accusations are less likely to be regarded as frivolous. There is a danger that scientists may internalize the philosophical bases of their discipline so thoroughly that they mistake their point of view as "truth" and conclude that anyone with a different perspective is a mystic. Certainly I'm not suggesting that one should take seriously the views of every crank with a perpetual motion theory in hand-illustrated tracts, but it is possible to take exception to conventional views without being nuts.
This is really the point of my paper: I don't agree with Searle, but I think that he has some good points to make. What is the difference between thinking and simulating thought? What would cause intentionality in a computer system? Is intentionality inextricably linked with biological systems? Are the aims of AI too quixotic? I am convinced that the essential properties of intelligence are, in fact, purely formal, and that there can be a suitable purely operational definition of intelligence. I am also convinced, though, that this question is not closed, and that I might be wrong.
Many people I know find Searle's skepticism a refreshing alternative to the nonsense and hubris of the AI "boosters" who have loudly predicted a Turing-smart machine a constant five years in the future for the past thirty years, and pointed at some trivial, trick program they had their graduate students write for them as evidence that they were really on the right track. If Searle is somewhat motivated in his critique by a desire to burst their self-inflated bubble, then good for him.
Others say he is just whistling in the dark, uneasy about the prospect of a world in which the human monopoly on intelligence is broken. Perhaps that is so, but what difference does it make? I think that his arguments stand on their own, and whatever moves him to seek truth is less important than what he finds.
One of the problematic things about Searle's argument is that it has become caricatured by the Chinese Room. [l] This particular caricature is easy to ridicule and rebut, but the real argument is much more compelling and is backed up by many years of strong philosophical work.
I do not believe that we will be able to produce Turing-smart artifacts within the foreseeable future, and I also don't happen to believe that computers and programming languages as we know them today are the best vehicles for creating such machines, but I believe this because I know about these languages and computers, and about the techniques available to AI researchers or under development, not because Searle has cooked up a trivial, trick three-line proof of it.
In spite of the failure of Searle's arguments to change my mind, I think that this work is useful to science. Like all good philosophy, it can serve to help keep science honest, if scientists will let it.
 John R. Searle, "Is the Brain's Mind a Computer Program?," Scientific American, Vol. 262, No. 1, January 1990, pp. 26-31.
 John R. Searle, "Minds, brains, and programs," The Behavioral and Brain Sciences, Vol. 3, 1980, pp. 417-457.
 John R. Searle, Intentionality: An Essay in the Philosophy of Mind, Cambridge University Press, 1983.
 Alan Turing, "Computing Machinery and Intelligence," Mind, Vol. 59, Number 236, October 1950, pp. 433-460.
[a] If the mind can be viewed as a formal system, then you must either accept that the mind is equivalent to a Turing machine, or show that Church's Thesis - that every effectively computable function can be computed by a Turing machine - is false. A Turing machine is a formal definition of a computing engine; it has a finite control and an infinite tape that can be read and written. Every modern computer is equivalent to a Turing machine if you assume it has unbounded storage capacity. And if the mind can be viewed as a formal system, the definition of any mental attribute must also be formal (or, equivalently, operational), and therefore Turing's definition of thought is at least a reasonable start.
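The formal object Turing defined is small enough to sketch directly. The simulator and rule table below are illustrative inventions of mine, not from Turing's paper: a finite rule table (the "finite control") driving an unbounded tape, here programmed to increment a binary number.

```python
def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    """Run a Turing machine. rules maps (state, symbol) ->
    (new_state, write_symbol, move), move in {-1, 0, +1}. Halts on 'halt'."""
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, blank)
        state, write, move = rules[(state, symbol)]
        tape[pos] = write
        pos += move
    cells = [tape.get(i, blank) for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip(blank)

# Rule table: walk right to the end of the number, then carry back leftward.
rules = {
    ("start", "0"): ("start", "0", +1),
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("carry", "_", -1),
    ("carry", "1"): ("carry", "0", -1),
    ("carry", "0"): ("halt", "1", 0),
    ("carry", "_"): ("halt", "1", 0),
}
```

Running `run_tm(rules, "1011")` yields `"1100"` (eleven plus one is twelve): the machine, like the man in the Chinese Room, follows each rule with no grasp of binary arithmetic.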
[b] Clearly some sort of definition of children and birth that was equally applicable to humans and machines and that was operational would be needed, but that doesn't seem insurmountable.
[c] A cynic might say that the attempts are seen as failures precisely because they allow intentionality to occur in machines.
[d] Some argue that it is the room or entire system that understands Chinese. This objection is easily rebutted because one could modify the argument so that the person memorized the rules or had the rules surgically or chemically implanted in his or her brain.
Searle really wants to know which mechanism is exhibiting intentionality. The whole room is not, because we have no account of how the parts, each of which is oblivious to the task at hand, could contribute to an epiphenomenon that attends to understanding Chinese. Furthermore, Searle rejects the suggestion that intentionality is an epiphenomenon at all.
An epiphenomenon is an attending or secondary phenomenon that appears in connection with another, but which does not or cannot have a causal effect on the primary phenomenon. Because consciousness is clearly something we experience, it could be regarded as a phenomenon, but there might be no sense in which a mental event can cause a physical one (within the brain or nervous system). To people who hold this view, consciousness is an epiphenomenon of brain or nervous system activity. An epiphenomenon has to be distinguished from a description or interpretation. An observer could conclude that another individual has consciousness because the second individual acts in a way that can be described or interpreted only as "conscious". However, because I experience consciousness directly, I am fairly certain it is a phenomenon, and my conclusion that other people are conscious is a combination of my observation of their behavior - including reports of their own experiences - along with the knowledge of my own consciousness. To Searle, there is possibly a third condition, which is the biochemical nature of other brains and nervous systems.
An interesting side observation based on this argument is that much human behavior is precisely of this nature: For example, most people perform long division by following rules, where the only intentional aspect is knowing that the rules will provide the answer to a question about how many times one number 'goes into' another.
When we start to think about how much human activity is rule-derived in ways similar to long division, it is clear that a lot of that activity is non-intentional. One interesting question might be what fraction of activity has to be intentional for a person to be judged "thinking"? Perhaps "potentially intentional" is sufficient.
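Long division makes the point concrete: the procedure below is a sketch of the grade-school rules (my rendering, not anything from Searle or Turing), and it produces correct answers even though each step is applied blindly.

```python
def long_division(dividend, divisor):
    """Return (quotient, remainder) by processing one digit at a time,
    exactly as the paper-and-pencil rules prescribe."""
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):
        # Rule 1: bring down the next digit.
        remainder = remainder * 10 + int(digit)
        # Rule 2: ask how many times the divisor "goes into" the partial value.
        quotient_digits.append(remainder // divisor)
        # Rule 3: subtract, and carry the remainder along.
        remainder = remainder % divisor
    quotient = int("".join(map(str, quotient_digits)))
    return quotient, remainder

assert long_division(1234, 7) == (176, 2)   # 7 * 176 + 2 == 1234
```

Whether the person executing these three rules by hand is "thinking" about division, or merely knows (intentionally) that the rules will deliver the answer, is exactly the question raised above.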
[e] On the other hand, Searle reveals some requirement for biological origin of intentionality when he notes that it is irresistible to attribute intentionality to dogs because: "First, we can see that the causal basis of the dog's intentionality is very much like our own, e.g., these are the dog's eyes, this is his skin, those are his ears, etc. Second, we can't make sense of his behavior otherwise."  This second reason is interesting, because it is nothing less than the Turing test.
[f] I include as properties the structural connections and relationships among elements.
[g] It is not obvious to me that Searle makes this restriction, at least when one looks at the whole of Searle's essays on the topic over the last 10 years. It is more likely that he is simply pointing out that for now the biological origin of thought is necessary.
[h] For example, it is not unreasonable to believe that a Turing-smart machine would be about a million times more powerful, both in size and speed, than today's fast workstations. Take as an example a 10 MIPS machine with 500 megabytes of disk storage, and assume that one deep question could take 5 minutes for such a machine to answer. Searle recommends using baskets full of Chinese symbols as the medium for manipulation, so let's assume that 2 bytes is sufficient to hold a Chinese character, and that each physical symbol should be about 2 inches on a side. The "table" that the man in the Chinese room will need is about 500 miles on a side (about the area of Texas), and if the man can do 1 rule (as Searle defines rules) per second, which is fast considering the need to move up to 700 miles to execute one rule, the man would take about 100,000 millennia to answer one question. Using everyone alive on Earth would take about 10 days, assuming there is a way to coordinate 5 billion people.
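These estimates can be checked with a little arithmetic. The sketch below simply recomputes them from the note's own assumptions (10 MIPS scaled up a million-fold, 5-minute questions, 500 MB of rules, 2-byte symbols two inches on a side); the assumptions themselves are the note's, not established facts.

```python
ips = 10e6 * 1e6             # a 10 MIPS workstation, scaled up a million-fold
rules_to_run = ips * 5 * 60  # one "deep" question: 5 minutes of machine time
SECONDS_PER_YEAR = 3.15e7

# One person executing 1 rule per second:
years = rules_to_run / SECONDS_PER_YEAR
print(f"one person: about {years / 1000:,.0f} millennia")  # roughly the 100,000 millennia above

# Everyone on Earth (5 billion people, the note's figure):
days = rules_to_run / 5e9 / 86_400
print(f"5 billion people: about {days:.0f} days")          # close to the 10 days above

# The "table": 500 MB scaled up a million-fold, 2 bytes per symbol,
# each symbol 2 inches on a side.
symbols = 500e6 * 1e6 / 2
square_inches = symbols * 4
side_miles = (square_inches ** 0.5) / 63_360               # 63,360 inches per mile
print(f"table: about {side_miles:.0f} miles on a side")    # the 500 miles above
```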
[i] We need to know temperature, pressure, density, and maybe something about gravity.
[j] Phlogiston was an imaginary element thought to cause combustion and to be given off during combustion. At the time of the introduction of the theory (by Stahl around 1700) it had good explanatory power for combustion - it was the first theory that accounted for the similarity between combustion and the oxidation of metals. The chief flaw with the theory has to do with phlogiston as an element rather than a set of properties - the weight of material before and after combustion argues against phlogiston being a substance, unless it is a substance with unusual properties (levity). Taken as a proposed element, it is wrong; taken as a set of properties, it is simply too complex and not useful.
[k] Computer programming illustrates the efficacy of this principle. When a bug is discovered in a program, the source of the bug must be found. The programmer has a model of the program that is a set of properties, constraints, relations, etc., often time-varying. This model is akin to a philosophical theory - it is derived from studying the program in various theoretical lights but generally with minimal direct observation of the program running. This model is most often used to guide the search for the error, by pointing out possible sources but also by eliminating large portions of the program from consideration as sources. Every programmer soon learns that some bugs cannot be fixed unless parts of that model about which he is absolutely certain are abandoned. So it is with science and philosophy: If a philosophical theory or belief is contrary to observed facts, it must be abandoned, and sometimes such a theory or belief needs to be abandoned to even begin to make scientific progress. And sometimes a scientific theory must be abandoned to make further scientific progress.
[l] Searle himself contributed to this caricature by focusing too strongly on it in his Scientific American article.
[m] Nick Bourbaki, a regular contributor to AI Expert and consultant to Lucid, Inc., is a self-taught computer scientist. He wrote this article while vacationing at the Salish Lodge in Snoqualmie, Washington.
Abelson, R. P. (1980). Searle’s argument is just a set of Chinese symbols. Behavioral and Brain Sciences, 3(3), 424–425. (Peer commentary on Searle, 1980).
Anderson, D. (1987). Is the Chinese room the real thing? Philosophy, 62(3), 389–393.
Arthur, R. (1999). On thought experiments as a priori science. International Studies in the Philosophy of Science, 13(3), 215–229.
Ben-Yami, H. (1993). A note on the Chinese room. Synthese, 95(2), 169–172.
Bennett, J. (2003). A philosophical guide to conditionals. New York, NY: Oxford University Press.
Brooks, D. H. M. (1994). The method of thought experiment. Metaphilosophy, 25(1), 71–83.
Brooks, R. A. (1999). Cambrian intelligence. Cambridge, MA: Bradford Books/MIT Press.
Brooks, R. A. (2002). Robot: The future of flesh and machines. London, UK: Penguin.
Brown, J. R. (1991). The laboratory of the mind: Thought experiments in the natural sciences. London and New York: Routledge (1993 paperback edition).
Bunzl, M. (1996). The logic of thought experiments. Synthese, 106(2), 227–240.
Clark, A. (1987). Being there: Why implementation matters to cognitive science. Artificial Intelligence Review, 1(4), 231–244.
Cole, D. (1984). Thought and thought experiments. Philosophical Studies, 45(3), 431–444.
Cole, D. (1991). Artificial intelligence and personal identity. Synthese, 88(3), 399–417.
Copeland, B. J. (1993). Artificial intelligence: A philosophical introduction. Oxford, UK: Blackwell.
Copeland, B. J. (2000). The Turing test. Minds and Machines, 10(4), 519–539.
Copeland, B. J. (2002a). The Chinese room from a logical point of view. In J. Preston & M. Bishop (Eds.), Views into the Chinese room: Essays on Searle and artificial intelligence (pp. 109–122). Oxford, UK: Clarendon Press.
Copeland, B. J. (2002b). Hypercomputation. Minds and Machines, 12(4), 461–502.
Damper, R. I. (2004). The Chinese room argument: Dead but not yet buried. Journal of Consciousness Studies, 11(5–6), 159–169.
Damper, R. I. (2006). Thought experiments can be harmful. The Pantaneto Forum, Issue 26. http://www.pantaneto.co.uk.
Dennett, D. (1980). The milk of human intentionality. Behavioral and Brain Sciences, 3(3), 428–430. (Peer commentary on Searle, 1980).
Dennett, D. (1991). Consciousness explained. Boston, MA: Little, Brown and Company.
DeRose, K. (1991). Epistemic possibilities. Philosophical Review, 100(4), 581–605.
Dietrich, E. (1990). Computationalism. Social Epistemology, 4(2), 135–154.
French, R. M. (1990). Subcognition and the limits of the Turing test. Mind, 99(393), 53–65.
French, R. M. (2000a). The Chinese room: Just say “no”! In Proceedings of the 22nd annual Cognitive Science Society conference, Philadelphia, PA (pp. 657–662). Mahwah, NJ: Lawrence Erlbaum Associates.
French, R. M. (2000b). The Turing test: The first 50 years. Trends in Cognitive Science, 4(3), 115–122.
Gabbay, D. (1998). Elementary logics: A procedural perspective. Hemel Hempstead, UK: Prentice Hall Europe.
Gendler, T. S. (2000). Thought experiment: On the powers and limits of imaginary cases. New York, NY: Garland Press.
Gendler, T. S., & Hawthorne, J. (Eds.) (2002). Conceivability and possibility. Oxford, UK: Clarendon Press.
Gomila, A. (1991). What is a thought experiment? Metaphilosophy, 22(1–2), 84–92.
Hacking, I. (1967). Possibility. Philosophical Review, 76(2), 143–168.
Hacking, I. (1975). All kinds of possibility. Philosophical Review, 84(3), 321–337.
Häggqvist, S. (1996). Thought experiments in philosophy. Stockholm, Sweden: Almqvist & Wiksell.
Harnad, S. (1989). Minds, machines and Searle. Journal of Experimental and Theoretical Artificial Intelligence, 1(1), 5–25.
Harnad, S. (2002). Minds, machines and Searle 2: What’s wrong and right about the Chinese room argument. In J. Preston & M. Bishop (Eds.), Views into the Chinese room: Essays on Searle and artificial intelligence (pp. 294–307). Oxford, UK: Clarendon Press.
Haugeland, J. (2002). Syntax, semantics, physics. In J. Preston & M. Bishop (Eds.), Views into the Chinese room: Essays on Searle and artificial intelligence (pp. 379–392). Oxford, UK: Clarendon Press.
Hofstadter, D. (1980). Reductionism and religion. Behavioral and Brain Sciences, 3(3), 433–434. (Peer commentary on Searle, 1980).
Hofstadter, D. R., & Dennett, D. C. (1981). The mind’s I: Fantasies and reflections on self and soul. Brighton, UK: Harvester Press.
Horowitz, T., & Massey, G. (Eds.) (1991). Thought experiments in science and philosophy. Lanham, MD: Rowman and Littlefield.
Jacquette, D. (1989). Adventures in the Chinese room. Philosophy and Phenomenological Research, 49(4), 605–623.
Lewis, C. I. (1918). A survey of symbolic logic. Berkeley, CA: University of California Press.
Lewis, D. (1973). Counterfactuals. Cambridge, MA: Harvard University Press.
Lycan, W. (1980). The functionalist reply (Ohio State). Behavioral and Brain Sciences, 3(3), 434–435. (Peer commentary on Searle, 1980).
Maloney, J. C. (1987). The right stuff. Synthese, 70(3), 349–372.
McCarthy, J. (1979). Ascribing mental qualities to machines. In M. Ringle (Ed.), Philosophical perspectives in artificial intelligence (pp. 161–195). Atlantic Highlands, NJ: Humanities Press.
McFarland, D., & Bösser, T. (1993). Intelligent behavior in animals and robots. Cambridge, MA: Bradford Books/MIT Press.
Melnyk, A. (1996). Searle’s abstract argument against strong AI. Synthese, 108(3), 391–419.
Moor, J. H. (1976). An analysis of the Turing test. Philosophical Studies, 30(4), 249–257.
Moural, J. (2003). The Chinese room argument. In B. Smith (Ed.), John Searle (pp. 214–260). Cambridge, UK: Cambridge University Press.
Newell, A. (1973). Artificial intelligence and the concept of mind. In R. C. Schank & K. M. Colby (Eds.), Computer models of thought and language (pp. 1–60). San Francisco, CA: Freeman.
Newell, A. (1980). Physical symbol systems. Cognitive Science, 4(2), 135–183.
Norton, J. (1996). Are thought experiments just what you always thought? Canadian Journal of Philosophy, 26(3), 333–366.
Peijnenburg, J., & Atkinson, D. (2003). When are thought experiments poor ones? Journal for General Philosophy of Science, 34(2), 305–322.
Pfeifer, R., & Scheirer, C. (1999). Understanding intelligence. Cambridge, MA: MIT Press.
Preston, J. (2002). Introduction. In J. Preston & M. Bishop (Eds.), Views into the Chinese room: Essays on Searle and artificial intelligence (pp. 1–50). Oxford, UK: Clarendon Press.
Preston, J., & Bishop, M. (Eds.) (2002). Views into the Chinese room: Essays on Searle and artificial intelligence. Oxford, UK: Clarendon Press.
Putnam, H. (1975). The meaning of ‘meaning’. In K. Gunderson (Ed.), Language, mind and knowledge (pp. 131–193). Minneapolis, MN: University of Minnesota Press.
Rapaport, W. J. (1986). Searle’s experiments with thought. Philosophy of Science, 53(2), 271–279.
Reiss, J. (2002). Causal inference in the abstract or seven myths about thought experiments. Technical Report CTR 03/02, Centre for Philosophy of Natural and Social Science, London School of Economics, London, UK.
Russow, L.-M. (1984). Unlocking the Chinese room. Nature and System, 6, 221–227.
Saygin, A. P., Cicekli, I., & Akman, A. (2000). Turing test: 50 years later. Minds and Machines, 10(4), 463–518.
Schank, R. C., & Abelson, R. P. (1977). Scripts, plans, goals, and understanding. Hillsdale, NJ: Lawrence Erlbaum Associates.
Scheutz, M. (Ed.) (2002). Computationalism: New directions. Cambridge, MA: Bradford Books/MIT Press.
Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–457. (Including peer commentary).
Searle, J. R. (1983). Intentionality: An essay in the philosophy of mind. Cambridge, UK: Cambridge University Press.
Searle, J. R. (1984). Minds, brains and science: The 1984 Reith lectures. London, UK: Penguin.
Searle, J. R. (1997). The mystery of consciousness. London, UK: Granta.
Searle, J. R. (2002). Twenty one years in the Chinese room. In J. Preston & M. Bishop (Eds.), Views into the Chinese room: Essays on Searle and artificial intelligence (pp. 51–59). Oxford, UK: Clarendon Press.
Seddon, G. (1972). Logical possibility. Mind, 81(324), 481–494.
Siegelmann, H. T. (1999). Neural networks and analog computation: Beyond the Turing limit. Boston, MA: Birkhäuser.
Sloman, A., & Croucher, M. (1980). How to turn an information processor into an understander. Behavioral and Brain Sciences, 3(3), 447–448. (Peer commentary on Searle, 1980).
Smith, B. (Ed.) (2003). John Searle. Cambridge, UK: Cambridge University Press.
Sorensen, R. A. (1992). Thought experiments. New York, NY: Oxford University Press.
Sorensen, R. A. (1998). Review of Sören Häggqvist’s “Thought experiments in philosophy”. Theoria, 64(1), 108–118.
Souder, L. (2003). What are we to think about thought experiments? Argumentation, 17(2), 203–217.
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.
Wakefield, J. C. (2003). The Chinese room argument reconsidered: Essentialism, indeterminacy and strong AI. Minds and Machines, 13(2), 285–319.
Weiss, T. (1990). Closing the Chinese room. Ratio, 3(2), 165–181.
Wilensky, R. (1983). Planning and understanding: A computational approach to human reasoning. Reading, MA: Addison-Wesley.
Wilkes, K. V. (1988). Real people: Personal identity without thought experiments. Oxford, UK: Clarendon.
Wilks, Y. (1982). Searle’s straw men. Behavioral and Brain Sciences, 5(2), 343–344. (Continuing peer commentary on Searle, 1980).
Yablo, S. (1993). Is conceivability a guide to possibility? Philosophy and Phenomenological Research, 53(1), 1–42.