Intelligence is not Artificial

by piero scaruffi



(These are excerpts from my book "Intelligence is not Artificial")

The Symbolic School (Knowledge-based Systems)

The practitioners of Artificial Intelligence quickly split into two fields. One, pioneered by Herbert Simon and his student Allen Newell at Carnegie Mellon University with their "Logic Theorist" (1956), basically understood intelligence as the pinnacle of mathematical logic, and focused on symbolic processing. Logic Theorist (written by Clifford Shaw, who implemented Newell's Information Processing Language or IPL) effortlessly proved 38 of the first 52 theorems in chapter 2 of Bertrand Russell's "Principia Mathematica." A proof that differed from Russell's original was submitted to the Journal of Symbolic Logic, the first case of a paper co-authored by a computer program. In 1958 the trio also presented a program to play chess, NSS (the initials of the three).

In 1955 Arthur Samuel at IBM in New York wrote not only one of the first computer programs that could play checkers but also the first self-learning program. That program implemented the alpha-beta search algorithm that would dominate A.I. for the next 20 years. In February 1956 a television channel broadcast a checkers game played by an IBM 701 computer running Samuel's program against a human expert. A few years later Samuel devised another learning method, which he called "learning by generalization" and which was an embryonic version of the "temporal-difference learning" method ("Some Studies in Machine Learning Using the Game of Checkers", 1959).
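
For illustration, here is a minimal Python sketch of alpha-beta pruning on an abstract game tree with invented leaf scores; it conveys the idea of the algorithm, not Samuel's actual checkers code:

  # A minimal sketch of alpha-beta pruning (illustrative only).
  # A "node" is either a number (a leaf score) or a list of child nodes.

  def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
      if isinstance(node, (int, float)):   # leaf: return its heuristic score
          return node
      if maximizing:
          value = float("-inf")
          for child in node:
              value = max(value, alphabeta(child, alpha, beta, False))
              alpha = max(alpha, value)
              if alpha >= beta:            # cutoff: the opponent avoids this branch
                  break
          return value
      else:
          value = float("inf")
          for child in node:
              value = min(value, alphabeta(child, alpha, beta, True))
              beta = min(beta, value)
              if beta <= alpha:            # cutoff: the maximizer avoids this branch
                  break
          return value

  # Example: a depth-2 tree (maximizer moves, then minimizer).
  tree = [[3, 5], [6, 9], [1, 2]]
  print(alphabeta(tree))   # 6, and the last leaf (2) is never examined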

The first breakthrough in this branch of A.I. (the "symbolic" branch) was probably John McCarthy's article "Programs with Common Sense" (1958): McCarthy (then at MIT) understood that someday machines would easily be better than humans at many repetitive and computational tasks, but "common sense" is what really makes someone "intelligent", and common sense comes from knowledge of the world. That article spawned the discipline of "knowledge representation": how can a machine learn about the world and use that knowledge to make inferences? McCarthy's approach relied on symbolic logic, notably first-order predicate logic (true/false predicates), to describe human knowledge in a formal language that the computer (a binary-logic machine) can process.
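
The flavor of the approach can be conveyed with a toy Python sketch (an illustration of the general idea, not McCarthy's proposed program; the names and facts are invented): knowledge of the world is stored as predicates, and a rule of inference derives knowledge that was never explicitly stated.

  # Facts: parent(x, y) means "x is a parent of y" (invented examples).
  parent = {("alice", "bob"), ("bob", "carol"), ("alice", "dan")}

  # Rule: grandparent(x, z) <- parent(x, y) and parent(y, z)
  grandparent = {(x, z)
                 for (x, y1) in parent
                 for (y2, z) in parent
                 if y1 == y2}

  print(("alice", "carol") in grandparent)   # True: inferred, never stated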

This approach was somehow "justified" by the idea, introduced by MIT linguist Noam Chomsky in "Syntactic Structures" (1957), that language competence is due to grammatical rules that express which sentences are correct in a language. The grammatical rules express "knowledge" of how a language works, and, once you have that knowledge (and a vocabulary), you can produce any sentence in that language, including sentences you have never heard or read before. Chomsky had studied with Zellig Harris and wed Harris' "rules of transformation" (discussed in 1952 in two seminal papers, "Culture and Style" and "Discourse Analysis") with the production systems invented in 1943 by Emil Post ("Formal Reductions of the General Combinatorial Decision Problem", 1943), a mathematician at City College of New York who had done his dissertation on Bertrand Russell's "Principia Mathematica" (and who had almost invented the Turing Machine before Turing). In fact, the first English parser, running on a Univac 1, was completed in 1959 at the University of Pennsylvania within the Transformations and Discourse Analysis Project (TDAP) directed by Zellig Harris.
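
A toy Python sketch conveys the generative idea (the grammar and vocabulary here are invented for illustration): a handful of rewrite rules plus a small vocabulary suffice to produce sentences never seen before.

  import random

  # Each production rewrites a symbol into one of several sequences of symbols.
  grammar = {
      "S":   [["NP", "VP"]],
      "NP":  [["Det", "N"]],
      "VP":  [["V", "NP"]],
      "Det": [["the"], ["a"]],
      "N":   [["linguist"], ["machine"], ["sentence"]],
      "V":   [["parses"], ["produces"]],
  }

  def generate(symbol="S"):
      if symbol not in grammar:              # terminal: an actual word
          return [symbol]
      expansion = random.choice(grammar[symbol])
      return [word for part in expansion for word in generate(part)]

  print(" ".join(generate()))   # e.g. "a machine parses the sentence"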

A.I. was helped by Sputnik, the first artificial satellite, launched by the Soviet Union in October 1957. The US government panicked at the thought that the Soviet Union was technologically ahead, and the newborn discipline of Artificial Intelligence was ready to accept the military funding that started flowing into any technology that seemed remotely promising.

At the peak of the Cold War, McCarthy himself wrote a chess program that challenged a Soviet chess program, the one originally developed in 1963 by Alexander Kronrod's team at the Institute for Theoretical and Experimental Physics (ITEP) in Moscow. The Soviets won that match.

The first compendium of research in Artificial Intelligence was compiled by two young UC Berkeley researchers, Edward Feigenbaum and Julian Feldman, both former students of Herbert Simon at the Carnegie Institute of Technology: "Computers and Thought" (1963). It included articles by Minsky, Simon and Newell (the Logic Theorist), Samuel, Selfridge (the Pandemonium), Shaw, and Turing himself, plus Earl Hunt's and Carl Hovland's model of concept learning (CLS) and Feigenbaum's own Elementary Perceiver and Memorizer (EPAM), both early experiments with decision trees.

The rapid development of computer programming helped this field take off, as computers were getting better and better at processing symbols: knowledge was represented in symbolic structures and "reasoning" was reduced to a matter of processing symbolic expressions. This line of research led to "knowledge-based systems" (or "expert systems"), such as Ed Feigenbaum's Dendral (1965) at Stanford, which consisted of an "inference engine" (the repertory of legitimate reasoning techniques recognized by the mathematicians of the world) and a "knowledge base" (the "common sense" knowledge). This technology relied on acquiring knowledge from domain experts in order to create "clones" of such experts (machines that performed as well as the human experts). The limitation of expert systems was that they were "intelligent" only in one specific domain.
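
The architecture can be sketched in a few lines of Python (a hypothetical toy, not Dendral's actual rules): a knowledge base of if-then rules kept separate from a generic inference engine that fires them.

  # Knowledge base: each rule is (set of premises, conclusion).
  # The rules are invented for illustration.
  knowledge_base = [
      ({"has_feathers"}, "is_bird"),
      ({"is_bird", "cannot_fly", "swims"}, "is_penguin"),
  ]

  def inference_engine(observations):
      # Fire rules repeatedly until no rule adds a new conclusion.
      derived = set(observations)
      while True:
          new = {conclusion for premises, conclusion in knowledge_base
                 if premises <= derived and conclusion not in derived}
          if not new:
              return derived
          derived |= new

  print(inference_engine({"has_feathers", "cannot_fly", "swims"}))
  # includes "is_bird" and "is_penguin", both derived rather than observed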

Gyorgy Polya may have been an indirect influence on the idea of using "common sense" or "heuristics" to match the "intelligence" of human experts. This Hungarian mathematician, born in what was then the Austro-Hungarian empire, was hired by Stanford in 1942 and became famous for his lessons on using intuition to solve mathematical problems, demonstrated in the book "How to Solve It" (1945). He basically founded a new discipline, the study of the nature and source of heuristics, which remained popular at Stanford.

Their "knowledge" was expressed in a formal language that can be subjected to logical inference: the language of first-order predicate logic, invented by mathematicians to express the relationships between objects. The beauty of expert systems is that, being based on first-order predicate logic, they can explain their conclusions: it is always possible to "backtrack" and find out the chain of logical steps that led to the conclusions.

After Dendral, most expert systems were written in LISP, the language that John McCarthy invented in 1958, when he was still at MIT, based on Alonzo Church's lambda calculus, and a vast improvement over Newell's IPL.

On the other hand, mathematical intuition seemed to constitute a higher level of intelligence than knowledge-based inference. Herbert Simon and his student Kenneth Kotovsky at Carnegie Mellon University studied sequence extrapolation: how do you come up with the next number in a sequence? ("Human Acquisition of Concepts for Sequential Patterns", 1963). Walter Reitman, a colleague of Newell and Simon at the Carnegie Institute of Technology, built an early system of analogical reasoning, Argus ("An Information-processing Model of Thinking", 1964). Reitman was unhappy with the rigid, mechanical reasoning of the General Problem Solver (GPS) and wanted to model the more chaotic approach to creativity of the human mind. The GPS could not account for human creativity in fields such as art and music, where the problem to be solved is not well-defined as it is in mathematical logic. It is in fact very ill-defined: which problems are we trying to solve when we compose a piece of music? Alas, Argus was written in the same hopelessly complicated programming language, IPL. Analogical reasoning in a microdomain of geometry was the subject of a program called Analogy (1968), developed by Thomas Evans at the research laboratories of the Air Force near Boston ("A Heuristic Program to Solve Geometric-analogy Problems", 1968).
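
Sequence extrapolation lends itself to a simple illustration. The following Python sketch (a minimal toy, not Simon and Kotovsky's actual model) induces a rule from the given terms and applies it to predict the next one:

  def extrapolate(seq):
      # Try a few candidate rules, from simplest to more complex.
      diffs = [b - a for a, b in zip(seq, seq[1:])]
      if len(set(diffs)) == 1:                   # constant difference
          return seq[-1] + diffs[0]
      ratios = [b / a for a, b in zip(seq, seq[1:]) if a != 0]
      if len(set(ratios)) == 1:                  # constant ratio
          return seq[-1] * ratios[0]
      second = [b - a for a, b in zip(diffs, diffs[1:])]
      if len(set(second)) == 1:                  # constant second difference
          return seq[-1] + diffs[-1] + second[0]
      return None                                # no rule found in this tiny space

  print(extrapolate([2, 4, 6, 8]))      # 10   (add 2)
  print(extrapolate([3, 6, 12, 24]))    # 48.0 (double)
  print(extrapolate([1, 4, 9, 16]))     # 25   (squares: second difference 2)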
