Intelligence is not Artificial

Why the Singularity is not Coming any Time Soon And Other Meditations on the Post-Human Condition and the Future of Intelligence

by piero scaruffi

(These are excerpts from, and additions to, my book "Intelligence is not Artificial")


More Footnotes: Machine Learning and Common Sense

Machine learning lay mostly dormant until the 1970s. Earl Hunt, then a psychologist at UCLA, had developed a Concept Learning System, first described in his book "Concept Learning" (1962), for inductive learning, i.e. learning of concepts. In 1975 that was extended by an Australian, Ross Quinlan, into ID3. Patrick Winston's thesis at the MIT with Minsky was "Learning Structural Descriptions From Examples" (1970). Polish-born Ryszard Michalski at University of Illinois built the first practical system that learned from examples, AQ11 (1978). This lent credibility to the goal of creating a program capable of scientific discovery. At the time there was a debate on this topic, started by the Austrian philosopher Karl Popper with his book "The Logic of Scientific Discovery" (1965), which basically claimed that there is no logic of scientific discovery, to which Herbert Simon responded at Carnegie Mellon University (CMU) with "Models of Discovery and Other Topics in the Methods of Science" (1977). In that year his student Pat Langley unveiled Bacon, a system capable of discovering scientific laws. Another school was started at Stanford by Bruce Buchanan, who had worked on Dendral, following his paper "Model-directed learning of production rules" (1978). His student Tom Mitchell graduated with a thesis on "Version Spaces" (1978), a model-based method for concept learning (as opposed to Michalski's data-based method). In 1981 Allen Newell and Paul Rosenbloom at Carnegie Mellon University formulated the "chunking theory of learning" to model the so-called "power law of practice", and in 1983 John Laird and Paul Rosenbloom started building a system called Soar that implemented chunking.
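
To make "learning from examples" concrete, here is a minimal sketch, in Python, of ID3-style decision-tree induction: pick the attribute with the highest information gain, split the examples on it, and recurse. The toy dataset, the attribute names and the "bird" concept are invented for illustration and are not taken from Hunt's or Quinlan's work.

# A minimal sketch of ID3-style decision-tree induction, in the spirit of
# Hunt's Concept Learning System and Quinlan's ID3. Toy data, illustration only.
from collections import Counter
from math import log2

def entropy(labels):
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def information_gain(examples, labels, attribute):
    remainder = 0.0
    for value in set(e[attribute] for e in examples):
        subset = [l for e, l in zip(examples, labels) if e[attribute] == value]
        remainder += (len(subset) / len(labels)) * entropy(subset)
    return entropy(labels) - remainder

def id3(examples, labels, attributes):
    if len(set(labels)) == 1:            # all examples agree: a leaf
        return labels[0]
    if not attributes:                   # nothing left to split on: majority vote
        return Counter(labels).most_common(1)[0][0]
    best = max(attributes, key=lambda a: information_gain(examples, labels, a))
    tree = {best: {}}
    for value in set(e[best] for e in examples):
        idx = [i for i, e in enumerate(examples) if e[best] == value]
        tree[best][value] = id3([examples[i] for i in idx],
                                [labels[i] for i in idx],
                                [a for a in attributes if a != best])
    return tree

# Toy concept to be learned from examples: "is it a bird?"
examples = [{"flies": "yes", "legs": "two"}, {"flies": "no", "legs": "four"},
            {"flies": "yes", "legs": "two"}, {"flies": "no", "legs": "two"}]
labels = ["bird", "not-bird", "bird", "not-bird"]
print(id3(examples, labels, ["flies", "legs"]))   # {'flies': {'yes': 'bird', 'no': 'not-bird'}}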

Then came the "explanation-based Learning systems" such as Lex2 (1986), developed by Tom Mitchell at CMU , and Kidnap (1986), developed at the University of Illinois by Gerald De Jong, whose thesis at Yale University had been the Frump system of natural language processing based on Schank╬Ú╬¸s scripts; and the "learning apprentice systems" such as Leap (1985) by Tom Mitchell at CMU and Disciple (1986) by Yves Kodratoff in France.

In 1991 Satinder Singh of the University of Massachusetts published "Transfer of Learning by Composing Solutions of Elemental Sequential Tasks" and Lorien Pratt of the Colorado School of Mines published "Direct Transfer of Learned Information Among Neural Networks". By then, Tom Mitchell's group at Carnegie Mellon University had become the world's center of excellence in transfer learning and multitask learning, as documented by Sebastian Thrun's "Is Learning the N-Th Thing Any Easier Than Learning the First?" (1996) and Rich Caruana's "Multitask Learning" (1997). But not much has improved since Sebastian Thrun and Lorien Pratt edited the book "Learning to Learn" (1998).
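
As a rough illustration of what "multitask learning" means, here is a minimal sketch of the shared-representation idea (the layer sizes and task names are invented): several tasks use the same hidden layer, and each task owns only a small output head, so whatever the shared layer learns for one task is available to the others. Only the forward pass is shown; in actual training the losses of all tasks would jointly update the shared weights, which is where the transfer happens.

# A minimal sketch of the multitask-learning architecture: one hidden layer
# shared by every task, plus one small output head per task. Layer sizes and
# task names are invented for illustration; only the forward pass is shown.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_hidden = 8, 4

# Shared representation: a single weight matrix used by all tasks.
W_shared = rng.normal(size=(n_inputs, n_hidden))

# Task-specific heads: each task owns only its own output layer.
heads = {"task_A": rng.normal(size=(n_hidden, 1)),
         "task_B": rng.normal(size=(n_hidden, 1))}

def forward(x, task):
    hidden = np.tanh(x @ W_shared)    # features computed once, shared across tasks
    return hidden @ heads[task]       # prediction of the chosen task

x = rng.normal(size=(1, n_inputs))
print(forward(x, "task_A"), forward(x, "task_B"))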

Common sense, besides learning, was another missing ingredient. Humans naturally employ several forms of inference that are not deduction, and therefore are not exact. In general, we specialize in "plausible reasoning", not the "exact reasoning" of mathematicians. Finding exact solutions to problems is often pointless: it would take too long. If a tiger attacks you, you don't start calculating the most efficient trajectory: you would be dead by the time you finished your calculations. This became a popular subject of research after the publication of "Plausible Reasoning" (1976) by the German-born philosopher Nicholas Rescher at the University of Pittsburgh and of "Logic and Conversation" (1975) by the British-born philosopher Paul Grice at UC Berkeley.

Most of our statements are actually uncertain. "The sky is blue" is obviously just an approximation; so is "blood is red". My height is actually not 171 cm: it is probably something like 171.46234782673... cm. Hence in 1965 the Azerbaijani-born mathematician Lotfi Zadeh invented Fuzzy Logic at UC Berkeley. Almost everything we say has a margin of uncertainty and approximation. Even if we don't know Bayes' theorem, we use probabilities all the time (and in most cases it is not the "probability" that mathematicians use). We unconsciously side with the French physicist Pierre Duhem: the certainty that a proposition is true decreases with any increase of its precision. I am certain of being 171 cm tall until you ask me to be more precise: then i am less certain whether i am 171.1 cm tall or 171.2 or 171.3 or...
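
A minimal sketch of what fuzzy membership looks like in code (the 160-190 cm ramp for "tall" is an arbitrary choice for illustration): the statement "this person is tall" is not true or false but true to a degree between 0 and 1, and "short" is simply its complement.

# A minimal sketch of Zadeh-style fuzzy membership: "tall" holds to a degree
# between 0 and 1. The 160-190 cm ramp is an arbitrary illustrative choice.
def membership_tall(height_cm):
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30.0          # linear ramp between the two anchors

def membership_short(height_cm):
    return 1.0 - membership_tall(height_cm)  # fuzzy complement

h = 171
print(f"{h} cm is tall to degree {membership_tall(h):.2f} "
      f"and short to degree {membership_short(h):.2f}")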

We are also very good at changing our conclusions: if you made plans to have dinner at a restaurant and it turns out that the restaurant has gone out of business, you effortlessly change your plans. Hence in 1979 Drew McDermott at Yale University worked out "Nonmonotonic Logic" and John McCarthy at Stanford published "Circumscription".
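
The restaurant example can be sketched as a default rule that gets retracted when a new fact arrives, which is exactly what makes the reasoning non-monotonic: adding information can remove a conclusion. The encoding below is purely illustrative, not McDermott's or McCarthy's formalism.

# A minimal sketch of non-monotonic (default) reasoning: a conclusion drawn
# by default is withdrawn when new facts arrive. Illustrative encoding only.
def plan_dinner(facts):
    if "booked_restaurant" in facts and "restaurant_closed" not in facts:
        return "eat at the restaurant"       # default conclusion
    if "restaurant_closed" in facts:
        return "cook at home instead"        # the default is retracted
    return "no plan yet"

print(plan_dinner({"booked_restaurant"}))                       # default holds
print(plan_dinner({"booked_restaurant", "restaurant_closed"}))  # new fact, new plan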

We normally deal with objects, not with elementary particles or waves. The world that we encounter in our daily lives is a world of objects, and we intuitively know how to operate with objects. For example, water can certainly have all sorts of temperatures, but the important thing is that at a certain temperature it freezes and at a certain temperature it boils. We deal with "qualities" (such as "hot" and "cold") rather than with quantities (such as 32.6 degrees Celsius and -4 degrees Celsius). And these "qualities" are "fuzzy": my height is both short and tall, depending on the people around me. To some extent i am short and to some extent i am tall. There are other simple laws of causality connecting our actions and our objects that don't require any knowledge of theoretical Physics. Hence Pat Hayes in Britain published the "Naive Physics Manifesto" (1978), and "qualitative reasoning" was pioneered by two theses published at the MIT, first by Johan de Kleer ("Causal and Teleological Reasoning in Circuit Recognition", 1979), who had worked on the Sophie project with Brown and Burton at BBN, and then by Kenneth Forbus ("Qualitative Reasoning about Physical Processes", 1981). In 1984 Doug Lenat started the project Cyc to catalog commonsense knowledge. (I have written a lengthy survey of these commonsense theories in my other book "Thinking about Thought".)
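
A minimal sketch of qualitative reasoning about water, in the spirit of naive physics (the thresholds are the familiar sea-level ones, and the rule is invented for illustration): the exact temperature is mapped to a qualitative state, and commonsense rules are stated over those states rather than over numbers.

# A minimal sketch of qualitative reasoning: replace the exact quantity with
# the qualitative state it implies, then reason over states, not numbers.
def qualitative_state(temp_celsius):
    if temp_celsius <= 0:
        return "frozen"        # below the freezing landmark
    if temp_celsius >= 100:
        return "boiling"       # above the boiling landmark
    return "liquid"            # qualitatively the same everywhere in between

def can_pour(temp_celsius):
    # An illustrative commonsense rule stated over qualities.
    return qualitative_state(temp_celsius) in ("liquid", "boiling")

print(qualitative_state(32.6), qualitative_state(-4))   # the quantities in the text
print(can_pour(32.6), can_pour(-4))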
