The Nature of Consciousness

Piero Scaruffi

(Copyright © 2013 Piero Scaruffi)

These are excerpts and elaborations from my book "The Nature of Consciousness"

The Turing Test Revisited

The Turing Test has been credited with starting a whole new branch of science. That is surprising, given that the test itself is not formulated in a scientific manner at all.

First of all, it is not clear whether Turing was concerned with intelligence, mind or consciousness. Is his test supposed to reveal whether a machine is intelligent, cognitive or conscious? The three are quite different, yet nowhere does Turing bother to distinguish among them. Intelligence comes in degrees: animals are intelligent, to some degree, yet it is debatable whether they are capable of thinking, i.e. conscious. Conversely, a person with a severe intellectual disability may not be very intelligent, but she is probably conscious. Turing does not discriminate among these notions and therefore does not tell us what his test is supposed to measure.

Second, when one proposes a test to the scientific community, one must be specific about the setting (e.g., what instruments will be used). Turing’s test uses a human being to decide whether a machine is as good as another human being. Thus both the instrument and one of the quantities to be measured are humans: good scientific practice would have required him to be specific about both. He provides no definition of, or prescription for, who the observer must be. Can a person with an intellectual disability perform the test? Can somebody under the influence of drugs perform it? Or does it have to be the most intelligent human available? (The result of the test will obviously vary wildly depending on which one Turing chooses.) As for the human to be tested against the machine, Turing does not specify which type of human he wants to test: a priest, an attorney, an Australian aborigine, an avid reader of porno magazines, a librarian, a physician, an economist...?

The observer has to determine whether the answers to her questions come from a human or a machine. If the unknown “answerer” is the machine, and the observer is led to think that it is a human, then the machine qualifies as “intelligent” (or cognitive, or conscious: we are not sure which). But Turing does not tell us what conclusions to draw if the unknown answerer is a human and the observer is led by the answers to think that she is a machine. In other words, if a machine fails the test, Turing concludes that it is not intelligent; but what does he conclude if a human fails the test? That humans are not intelligent?

Therefore, one of the reasons why Turing’s paper has led to so much controversy is that Turing was not clear enough about what he was saying. 

Can a machine be a cognitive system? If one restricts cognition to the processes of remembering, learning and reasoning, most scientists would agree that, yes, it is possible. Does that make the machine "intelligent"? In the sense in which most people use that word, probably yes. Does it also make the machine aware of being what it is? Not necessarily. These are different questions, for which there are probably different answers. But each of them, if well formulated, is a question on whose answer most people would agree.

Even if it were restated in a scientific manner, the Turing test per se would probably not amount to much: it is not right or wrong, but simply meaningless. Even if a machine could answer all questions, what would that prove? If we found a "thing" that can answer all our questions but does not eat, move, feel emotions and so forth, we would just consider it a very sophisticated machine, not a human being. That is what the Turing test measures: how good the machine is at answering questions, nothing more.

The thesis of Turing’s test can be restated as: “Can a machine be built that will fool a human being into believing it is another human being?” Nowhere in his writings did Turing prove the equivalence of this question with the question “Can a machine think?” If we answer “yes” to the first question, we don’t necessarily answer “yes” to the second.

As the computer scientist Stuart Russell remarked, Turing's definition is at the same time too weak and too strong: too weak because it excludes "intelligent" behavior such as dodging bullets, and too strong because it includes unintelligent beings such as the occupant of Searle's Chinese Room. Most children, who cannot answer many questions that an adult could answer, would not pass the test, but that does not make them machines. Turing's is a partial extensional definition that fails to capture the intensional definition of intelligence.

Would we consider as our peer an object or being that is not alive? To be conscious without being alive is nonsense. Before we ask whether machines can think, we should therefore ask whether they can be alive.

Biological systems undergo growth; machines cannot undergo physical growth. In a machine only the "mind" can grow over time, whereas in a biological system the "mind" grows with the rest of the body. A machine's "mind" may never decay, whereas a biological mind decays with the rest of the body. The mind is closely tied to the body: for most people a mind without a body (a body that grows with it) is just not a mind.

