Intelligence is not Artificial

by piero scaruffi
(Copyright © 2018 Piero Scaruffi)

(These are excerpts from my book "Intelligence is not Artificial")

A Brief History of Artificial Intelligence / Prequel

One can start way back in the past with the ancient Greek and Chinese automata of two thousand years ago, or with the first electromechanical machines of a century ago, but to me a history of machine intelligence begins in earnest with the "universal machine", originally conceived in 1936 by the British mathematician Alan Turing. He did not personally build it, but Turing realized that one could create the perfect mathematician by simulating the way logical problems are solved: by manipulating symbols. The first computers were not Universal Turing Machines (UTM), but most computers built since the ENIAC (1946), including all the laptops and smartphones that are available today, are UTMs. Because it was founded on predicate logic, which only admits two values ("true" and "false"), the computer at the heart of any "intelligent" machine relies on binary logic (ones and zeroes).
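Turing's idea of computation as symbol manipulation can be sketched in a few lines of Python (a hypothetical modern illustration, not Turing's own formalism): a finite rule table maps each (state, symbol) pair to a write-move-state action, and the machine repeats that single step until it halts.

```python
# Minimal sketch of a Turing machine (illustrative, not Turing's notation).
# The rule table maps (state, symbol) -> (symbol to write, move, next state).

def run(rules, tape, state="start", head=0, halt="halt"):
    cells = dict(enumerate(tape))          # the tape as a sparse dictionary
    while state != halt:
        symbol = cells.get(head, "_")      # "_" is the blank symbol
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example rule table: flip every bit until a blank is reached.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run(flip, "1011_"))  # -> 0100_
```

A "universal" machine is simply one whose rule table interprets another machine's rule table read off the tape, which is exactly what a stored-program computer does.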

Cybernetics (which can be dated back to the 1943 paper "Behavior, Purpose and Teleology", co-written by the MIT mathematician Norbert Wiener, the physiologist Arturo Rosenblueth and the engineer Julian Bigelow) did much to show the relationship between machines and living organisms. One can argue that machines are a form of life or, vice versa, that living organisms are forms of machinery.

However, "intelligence" is commonly considered one or many steps above the merely "alive": humans are generally considered intelligent (by fellow humans), whereas worms are not.

Using digital electronic computers to mimic the brain is particularly tempting because around 1891 Santiago Ramon y Cajal had shown that the brain is made of discrete cells, neurons, which behave like on/off switches: they "fire" when the cumulative signal that they receive from other neurons exceeds a certain threshold value, and otherwise they don't. Plenty of mathematicians felt vindicated by that discovery: binary logic, introduced in 1854 by the British philosopher George Boole in the book "The Laws of Thought", does seem to lie at the very foundation of human thinking. In 1943 an unlikely pair at the University of Chicago wed the digital electronic computer and the neuron: the psychiatrist and part-time poet Warren McCulloch and the young homeless runaway and math prodigy Walter Pitts described mathematically an "artificial" neuron that can only be in one of two possible states, and showed how to connect a population of such artificial binary neurons into a very intricate network that mimics the way the brain works. When signals are sent into the network, they spread to its neurons according to a simple rule: any neuron receiving enough positive signals from other neurons sends a signal to other neurons. It gets better: their seminal paper "A Logical Calculus of the Ideas Immanent in Nervous Activity" (1943) proved that such a network of binary neurons is equivalent to a Universal Turing Machine. As Von Neumann put it (in a speech at the Hixon Symposium of 1948), the McCulloch-Pitts theorem proved that "anything that can be completely and unambiguously put into words is ipso facto realizable by a suitable finite neural network".
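The firing rule described above can be sketched in a few lines of Python (a modern illustration, not the notation of the 1943 paper; the weights and thresholds here are chosen by hand, since these networks had no learning rule):

```python
# A McCulloch-Pitts binary neuron: it fires (outputs 1) when the weighted
# sum of its binary inputs reaches a fixed threshold, otherwise it stays
# silent (outputs 0).

def mp_neuron(inputs, weights, threshold):
    """Fire iff the cumulative input meets the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With suitable fixed weights and thresholds, such neurons realize Boolean
# gates -- the building blocks of the paper's "logical calculus":
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    mp_neuron([a],    [-1],   threshold=0)  # inhibitory input

print(AND(1, 1), OR(0, 1), NOT(1))  # -> 1 1 0
```

Since any Boolean circuit can be wired out of such gates, a large enough network of these neurons can compute anything a digital computer can, which is the intuition behind the equivalence with a Universal Turing Machine.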

McCulloch, influenced by the mathematical logic of Leibniz that Pitts had described to him, and by the physiologist Arturo Rosenblueth (befriended at a Macy conference), was looking for the logic of the nervous system; and Pitts, who was studying (unofficially) with the influential philosopher of logic Rudolf Carnap, found a way to apply Boolean algebra to the behavior of the brain's neurons. The result was that the brain seemed to work according to the logical calculus developed by Russell and Whitehead in the "Principia Mathematica". (Except that, of course, things are much more complicated in real neurons). Their paper cited only three works: Carnap's "Logical Syntax of Language", Russell and Whitehead's "Principia Mathematica", and David Hilbert and Wilhelm Ackermann's "Foundations of Theoretical Logic". Carnap had moved from Europe to the University of Chicago, and Carnap's hostility towards metaphysics may have helped Pitts to think in terms of a new logical language, so much so that Pitts' "logical calculus of ideas" feels like an application of Carnap's logical calculus to the functioning of the nervous system. Note that McCulloch-Pitts networks do not learn: there was no learning rule to automatically change the weights of the connections.

When in 1945 John Von Neumann published the lengthy and highly influential "First Draft of a Report on the EDVAC" about the "stored-program architecture" (which is still the architecture of today's programmable computers), he cited only one paper: the McCulloch-Pitts paper. He didn't even mention his colleagues at the University of Pennsylvania's Moore School of Electrical Engineering, notably John Mauchly and Presper Eckert, the designers of the ENIAC, who had provided Von Neumann with the idea. Opening a panel titled "The Design of Machines to Simulate the Behavior of the Human Brain" at the Institute of Radio Engineers' convention, held in New York in 1955, McCulloch would confidently state that "we need not ask, theoretically, whether machines can be built to do what brains can do" because, in creating our own brain, Nature already showed us that it is possible. (Of course, the McCulloch-Pitts theorem still fails because of Goedel's theorem, but that's a detail). Pitts, instead, moved to MIT and worked with Wiener, and the two had the prescient intuition of turning the deterministic laws of neurons into the statistical laws of behavior, but it was too early for the world to appreciate it. McCulloch and Von Neumann were working in a world that had not yet seen the first digital electronic computer for sale (that would happen in 1951). The intellectual motivation was not "computer science" but cybernetics. The founders of cybernetics convened regularly from 1946 until 1953 at the Macy Conferences on Cybernetics, organized by the Macy Foundation of New York when Willard Rappleye was its president. These conferences, which sometimes took place twice a year, were truly interdisciplinary.
Speakers at the first conference were: John von Neumann (computer science), Norbert Wiener (mathematics), Walter Pitts (mathematics), Arturo Rosenblueth (physiology), Rafael Lorente de No (neurophysiology), Ralph Gerard (neurophysiology), Warren McCulloch (neuropsychiatry), Gregory Bateson (anthropology), Margaret Mead (anthropology), Heinrich Kluever (psychology), Molly Harrower (psychology), Lawrence Kubie (psychoanalysis), Filmer Northrop (philosophy), Lawrence Frank (sociology), and Paul Lazarsfeld (sociology). It was at the Second Cybernetic Conference in 1947 that Pitts announced that he was writing his doctoral dissertation on probabilistic three-dimensional neural networks. Unfortunately, he burned his unfinished doctoral dissertation.

In January 1951 a cybernetic conference titled "Calculating Machines and Human Thought" was held in Paris, organized by Joseph Peres with help from Louis Couffignal (who one year later published the book "The Thinking Machines"). The participants included Norbert Wiener from MIT, Howard Aiken from Harvard University, Warren McCulloch from the University of Illinois, Maurice Wilkes from the University of Cambridge (the project leader of the EDSAC), Ross Ashby from Barnwood House Hospital (who in 1948 had designed the first homeostat), Albert Uttley from the Telecommunications Research Establishment, William Grey-Walter from the Burden Neurological Institute (who in 1949 had built electronic tortoises), the French physicist Louis de Broglie (the discoverer of the dualism of waves and matter), the Polish-born mathematician Benoit Mandelbrot (the future inventor of the science of fractals), Frederick Williams from the University of Manchester (who in 1947 had developed the Williams tube, an early form of random-access memory), Tom Kilburn from the University of Manchester (the engineer who in 1948 had run the first program on a stored-program computer), plus representatives from Ferranti (which was about to introduce the first commercial computer) and from IBM. At this conference Wiener played against the electromechanical chess-playing automaton of Leonardo Torres y Quevedo, a Spanish builder of cableways, first demonstrated in 1914. (Trivia: in 1951 food was still rationed in France, and three months later France, West Germany, Italy, Belgium, Holland and Luxembourg, meeting in Paris, created the European Coal and Steel Community, the forerunner of the European Union).

The impact of cybernetics extended beyond biology and engineering. For example, the Czech social scientist Karl Deutsch at Harvard applied cybernetic methods to sociopolitical problems in a series of articles beginning with "Mechanism, Organism, and Society" (1951), later collected in the book "The Nerves of Government" (1963) in which he tried to reduce even politics to neural networks.

The idea that computers were "giant brains" wasn't just a myth invented by the media. Some psychologists enthusiastically signed on to this metaphor. George Miller was a psychologist at Harvard University who in 1950 visited the Institute for Advanced Study in Princeton, one of the pioneering centers in computer science. The following year he was hired by MIT to lead the psychology group at the newly formed Lincoln Laboratory (a hotbed of military technology for the Cold War) and published an influential book titled "Language and Communication", in which he launched the program of studying the human mind using the information theory just developed by Claude Shannon at Bell Labs in his article "A Mathematical Theory of Communication" (1948).

The "Turing Test", introduced by the same Alan Turing in his paper "Computing Machinery and Intelligence" (1950), has often been presented as the kind of validation that a machine has to pass in order to be considered "intelligent": if a human observer, asking all sorts of questions, cannot tell whether the agent providing the answers is human or mechanical, then the machine has become intelligent (or, better, as intelligent as the human being).

To start with, Claude Shannon at Bell Labs, shortly after writing his groundbreaking "A Mathematical Theory of Communication", delivered a lecture titled "Programming a Computer for Playing Chess" at the national IRE (Institute of Radio Engineers) convention of March 1949 in New York. According to Shannon's biographer Rob Goodman, later in life Shannon (a jazz player and a juggling unicyclist himself) planned a memorial parade for his own funeral, featuring a jazz combo, a 417-instrument marching band, acrobats, a chess-playing computer and juggling robots.
