The Nature of Consciousness

Piero Scaruffi

(Copyright © 2013 Piero Scaruffi)

These are excerpts and elaborations from my book "The Nature of Consciousness"

The Turing Test

In 1950 Turing proposed a test to determine whether a machine is intelligent: a computer can be said to be intelligent if its answers are indistinguishable from those of a human being. The test can be performed on the machine alone or on the machine and a human together. In the former case, the observer must be led to believe, by the machine’s cunning answers, that the thing being tested is a human. In the latter case, the observer must be unable to tell which answers come from the human and which from the machine.

Turing’s article (“Computing Machinery and Intelligence”) started the quest for the "intelligent" computer that led to Artificial Intelligence.

Regardless of what Artificial Intelligence has achieved so far, a debate has raged over whether an intelligent machine is possible at all. After the invention of the computer, a number of thinkers from various disciplines (Herbert Simon, Allen Newell, Noam Chomsky, Hilary Putnam, Jerry Fodor) adopted a paradigm modeled on the relationship between a computer’s hardware and its software: they basically reduced “thinking” to the execution of an algorithm in the brain.

John Searle is the foremost opponent of Artificial Intelligence. He argues that computers are purely syntactical and therefore cannot be said to think. In his thought experiment of the "Chinese room", a man who does not speak Chinese, but is provided with formal rules for constructing perfectly sensible Chinese answers, would pass the analogous Turing test for understanding Chinese, even though he never knows what those questions and answers are about. That opened the floodgates to arguments that computation per se will never lead to intelligence. Searle's Chinese room argument can be summarized as follows: computer programs are syntactical; minds have a semantics; syntax is not by itself sufficient for semantics. Whatever a computer is computing, the computer does not "know" that it is computing it: only a mind can look at it and tell what it is.
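The Chinese room can be sketched in a few lines of code. The toy "rulebook" below is a hypothetical stand-in for Searle's formal rules: the program produces fluent Chinese replies by pure symbol lookup, and nothing in it represents what the symbols mean.

```python
# A minimal sketch of Searle's Chinese room: answers are produced by
# matching symbol strings against a rulebook. To the program, the
# entries are opaque sequences of characters, not meanings.
# (The specific question/answer pairs here are illustrative inventions.)

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I am fine, thanks."
    "今天天气如何？": "今天天气很好。",   # "How is the weather?" -> "Very nice today."
}

def chinese_room(question: str) -> str:
    """Return a 'sensible' answer by matching symbols, never meanings."""
    return RULEBOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))
```

An observer outside the room sees competent Chinese conversation; inside, there is only string matching. This is exactly the gap between syntax and semantics that Searle's argument turns on.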

Paraphrasing Fred Dretske: a computer does not know what it is doing, therefore “that” is not what it is doing. For example, a computer does not compute 1+1; it simply manipulates symbols that eventually yield the result of 1+1.
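Dretske's point can be made concrete with a toy "adder" (an illustrative sketch, not from the original text) that works on tally marks. The program merely concatenates strings of the symbol "|"; only an outside observer interprets the output as the number 2.

```python
# A sketch of Dretske's point: this machine "adds" by rewriting strings
# of tally marks. Nothing in it represents the concept of addition;
# it concatenates symbols, and only we read the result as 1+1=2.

def add_unary(a: str, b: str) -> str:
    """Concatenate two tally strings, e.g. '|' and '|' -> '||'."""
    return a + b

result = add_unary("|", "|")
print(result)  # the observer calls this "2"; the program calls it nothing
```

The symbol manipulation is extensionally correct arithmetic, yet the semantics (that "|" stands for one, and concatenation for addition) lives entirely in the interpreter, not in the machine.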

Countless replies have been offered. Some have observed that while the man may not “know” Chinese, the room as a whole (the man plus the rules for speaking Chinese) does qualify as a fluent Chinese speaker. Others have found a flaw in the premises: the argument sets out to prove what is already assumed in its premises, namely that the man does not understand Chinese.

And, ultimately, it all depends on the definition of the word "understand". In a sense, Searle has simply slowed down and broken down the process of understanding: what we do when we understand something is precisely what the man does in the room. Searle’s objection is thus really about the quantity of information and the speed of its processing, and we would all assume that the man understands Chinese if he performed his task in a few milliseconds with the help of miniaturized microfilms invisible to us. Searle's objection sounds more like: if you can tell what the mechanism is that produces "understanding", then that cannot be true "understanding".

Searle does concede that a brain is a machine and that, in principle, we could build a totally equivalent machine that would then have consciousness. He does not agree that a computer is such a machine: computation as defined by Turing is not sufficient to guarantee the presence of thinking.

The simulation of a mind is not itself a mind.

