(These are excerpts from my book "Intelligence is not Artificial")
A Brief History of Artificial Intelligence/ Prequel
One can start way back with the ancient Greek and Chinese automata of two thousand years ago, or with the first electromechanical machines of a century ago, but to me a history of machine intelligence begins in earnest with the "universal machine", conceived in 1936 by the British mathematician Alan Turing. Turing never built it himself, but he realized that one could create the perfect mathematician by simulating the way logical problems are solved: by manipulating symbols. The first computers were not Universal Turing Machines (UTMs), but most computers built since the ENIAC (1946), including all of today's laptops and smartphones, are. And because it was founded on a logic that admits only two values ("true" and "false"), the computer at the heart of any "intelligent" machine relies on binary logic (ones and zeroes).
Cybernetics (which can be dated back to the 1943 paper "Behavior, Purpose and Teleology", co-written by the MIT mathematician Norbert Wiener, the physiologist Arturo Rosenblueth and the engineer Julian Bigelow) did much to show the relationship between machines and living organisms. One can argue that machines are a form of life or, vice versa, that living organisms are forms of machinery.
However, "intelligence" is commonly considered one or many steps above the merely "alive": humans are generally considered intelligent (by fellow humans), whereas worms are not.
Using digital electronic computers to mimic the brain is particularly tempting because Santiago Ramon y Cajal had discovered around 1891 that the brain is made of discrete cells, neurons, which behave like on/off switches. They "fire" when the cumulative signal that they receive from other neurons exceeds a certain threshold value, otherwise they don't. Plenty of mathematicians felt vindicated by that discovery: binary logic, introduced in 1854 by the British mathematician George Boole in his book "The Laws of Thought", does seem to lie at the very foundation of human thinking.
In 1943 an unlikely pair at the University of Chicago wed the digital electronic computer and the neuron: the psychiatrist and part-time poet Warren McCulloch and the young runaway math prodigy Walter Pitts described mathematically an "artificial" neuron that can only be in one of two possible states, and showed how to connect a population of such binary neurons into an intricate network that mimics the way the brain works. When signals are fed into the network, they spread to its neurons according to a simple rule: any neuron receiving enough positive signals from other neurons sends a signal to other neurons. It gets better: their seminal paper "A Logical Calculus of the Ideas Immanent in Nervous Activity" (1943) proved that such a network of binary neurons is equivalent to a Universal Turing Machine.
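The McCulloch-Pitts neuron is simple enough to sketch in a few lines of code. The following is a minimal illustration, not the paper's original notation (the function names and weights are mine): a binary neuron fires when the weighted sum of its binary inputs reaches a threshold, and the elementary logic gates fall out as special cases, which is the kernel of the equivalence with Boolean logic.

```python
# A sketch of a McCulloch-Pitts binary neuron: it outputs 1 ("fires")
# when the cumulative signal from its inputs reaches a threshold,
# and 0 otherwise. Inhibitory inputs are modeled with negative weights.

def mp_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Basic logic gates, each realized by a single threshold neuron:
def AND(a, b): return mp_neuron([a, b], [1, 1], threshold=2)
def OR(a, b):  return mp_neuron([a, b], [1, 1], threshold=1)
def NOT(a):    return mp_neuron([a],    [-1],   threshold=0)

# Gates compose into networks that compute any Boolean function:
def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))
```

Composing such neurons into layered networks yields any Boolean circuit, which is why a (suitably idealized) network of binary neurons can match the power of a Universal Turing Machine.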
McCulloch, influenced by the mathematical logic of the time and by the physiologist Arturo Rosenblueth (befriended at a Macy conference), was looking for the logic of the nervous system; Pitts, who was studying (unofficially) with the influential philosopher of logic Rudolf Carnap, found a way to apply Boolean algebra to the behavior of the brain's neurons. The result was that the brain seemed to work according to the logical calculus developed by Russell and Whitehead in the "Principia Mathematica" (except that, of course, things are much more complicated in real neurons). Their paper cited only three works: Carnap's "Logical Syntax of Language", Russell's and Whitehead's "Principia Mathematica", and Hilbert's and Ackermann's "Foundations of Theoretical Logic". Carnap had moved from Europe to the University of Chicago, and his hostility towards metaphysics may have helped Pitts think in terms of a new logical language, so much so that Pitts' "logical calculus of ideas" feels like an application of Carnap's logical calculus to the functioning of the nervous system.
When in 1945 John von Neumann published the lengthy and highly influential "First Draft of a Report on the EDVAC" about the "stored-program architecture" (which is still the architecture of today's programmable computers), he cited only one paper: the McCulloch-Pitts paper. He didn't even mention his colleagues at the University of Pennsylvania's Moore School of Electrical Engineering, notably John Mauchly and Presper Eckert, the designers of the ENIAC, who had provided von Neumann with the idea.
Upon opening a panel titled "The Design of Machines to Simulate the Behavior of the Human Brain" at the Institute of Radio Engineers' convention, held in New York in 1955, McCulloch would confidently state that "we need not ask, theoretically, whether machines can be built to do what brains can do" because, in creating our own brain, Nature already showed us that it is possible. (Of course, the McCulloch-Pitts theorem still fails because of Goedel's theorem, but that's a detail). Pitts, instead, moved to MIT and worked with Wiener, and the two had the prescient intuition of turning the deterministic laws of neurons into the statistical laws of behavior, but it was too early for the world to appreciate it.
McCulloch and von Neumann were working in a world that had not yet seen a digital electronic computer for sale (that would happen in 1951).
The intellectual motivation was not "computer science" but cybernetics.
The founders of cybernetics regularly convened from 1946 until 1953 at the Macy Conferences on Cybernetics, organized by the Macy Foundation of New York when Willard Rappleye was its president. These conferences, which sometimes occurred twice a year, were truly interdisciplinary. The speakers at the first conference were: John von Neumann (computer science), Norbert Wiener (mathematics), Walter Pitts (mathematics), Arturo Rosenblueth (physiology), Rafael Lorente de No (neurophysiology), Ralph Gerard (neurophysiology), Warren McCulloch (neuropsychiatry), Gregory Bateson (anthropology), Margaret Mead (anthropology), Heinrich Kluever (psychology), Molly Harrower (psychology), Lawrence Kubie (psychoanalysis), Filmer Northrop (philosophy), Lawrence Frank (sociology), and Paul Lazarsfeld (sociology).
It was at the Second Cybernetic Conference in 1947 that Pitts announced that he was writing his doctoral dissertation on probabilistic three-dimensional neural networks. Unfortunately, he burned his unfinished doctoral dissertation.
The impact of cybernetics extended beyond biology and engineering. For example, the Czech social scientist Karl Deutsch at Harvard applied cybernetic methods to sociopolitical problems in a series of articles beginning with "Mechanism, Organism, and Society" (1951), later collected in the book "The Nerves of Government" (1963).
The "Turing Test", introduced by Alan Turing himself in his paper "Computing Machinery and Intelligence" (1950), has often been presented as the kind of validation that a machine has to pass in order to be considered "intelligent": if a human observer, asking all sorts of questions, cannot tell whether the agent providing the answers is human or mechanical, then the machine has become intelligent (or, better, as intelligent as the human being).
To start with, Claude Shannon at Bell Labs, shortly after writing his groundbreaking "A Mathematical Theory of Communication", delivered a lecture titled "Programming a Computer for Playing Chess" at the national IRE (Institute of Radio Engineers) convention of March 1949 in New York. According to Shannon's biographer Rob Goodman, later in life Shannon (a jazz player and a juggling unicyclist himself) planned a memorial parade for his own funeral, featuring a jazz combo, a 417-instrument marching band, acrobats, a chess-playing computer and juggling robots.