Piero Scaruffi (Copyright © 2013 Piero Scaruffi)
These are excerpts and elaborations from my book "The Nature of Consciousness"
Programs That Learn: Induction
Intelligence, though, is often regarded as "learning" knowledge rather than simply using it. One of the most glaring failures of the field of expert systems lies in the inability of such systems to learn on their own the knowledge they need in order to operate. The performance of a human being improves with experience, and the rate of improvement is often considered a measure of the person’s intelligence. In an expert system, performance changes (and does not necessarily improve) only when a new knowledge base is installed. While the intelligence of an expert system is purely deductive, human intelligence is also inductive: new knowledge is continuously inferred.
In order to be capable of “learning”, an expert system should continuously change its knowledge base to reflect the outcome of its actions.
Learning can be done according to two opposite paradigms: an inductive paradigm and an analytic (or deductive) paradigm.
The former constructs the symbolic description of a concept from a set of positive and negative instances (instances that belong and instances that do not belong to that concept). Usually, the symbolic description takes the form of a “discrimination” rule: if a new instance satisfies such a rule, then it belongs to the concept. This view goes back to the US psychologist Jerome Bruner’s theory of concepts: a concept is defined by a set of features which are individually necessary and jointly sufficient for an instance to belong to that concept. The corresponding algorithm for learning and refining a concept, since Patrick Winston’s influential work ("Learning Structural Descriptions From Examples", 1970), looks like this: for every new positive instance, build a generalization of the discrimination rule that the new instance will also satisfy; for every new negative instance, build a specialization of the rule that the new instance will not satisfy. In other words, Winston views learning as a heuristic search in a space of symbolic descriptions, driven by an incremental process of specialization and generalization.
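Winston’s generalize/specialize loop can be sketched in a few lines of Python. This is an illustrative toy, not Winston’s actual program: the representation (a dictionary of required feature values plus a set of forbidden feature values) and the feature names are invented for the example, and the specialization step uses a simple “near miss” heuristic.

```python
# Toy sketch of Winston-style incremental concept learning.
# A discrimination rule is a set of required (feature, value) pairs,
# plus "must-not" constraints accumulated from near misses.

def learn(examples):
    """examples: list of (feature_dict, is_positive), seeded by a positive.
    Returns (required, forbidden) constraints defining the concept."""
    first, label = examples[0]
    assert label, "seed the rule with a positive instance"
    required = dict(first)        # the discrimination rule so far
    forbidden = set()             # "must-not" constraints from near misses
    positives = [first]
    for inst, positive in examples[1:]:
        if positive:
            positives.append(inst)
            # generalize: drop any constraint this positive violates
            required = {f: v for f, v in required.items()
                        if inst.get(f) == v}
        elif all(inst.get(f) == v for f, v in required.items()):
            # the negative still satisfies the rule, so specialize:
            # forbid a feature value that no positive instance has shown
            for f, v in inst.items():
                if all(p.get(f) != v for p in positives):
                    forbidden.add((f, v))
                    break
    return required, forbidden

def matches(inst, required, forbidden):
    """Does the instance satisfy the learned discrimination rule?"""
    return (all(inst.get(f) == v for f, v in required.items())
            and not any((f, v) in forbidden for f, v in inst.items()))
```

For instance, after a flat-topped brick and a flat-topped wedge (positives) and a flat-topped cylinder (a near miss), the rule generalizes to “flat top” while forbidding the cylinder shape.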
Earl Hunt, then a psychologist at UCLA, developed the Concept Learning System for inductive learning (i.e., the learning of concepts), first described in his book "Concept Learning" (1962). In 1975 the Australian Ross Quinlan extended it into ID3. The Polish-born Ryszard Michalski at the University of Illinois built the first practical system that learned from examples, AQ11 (1978).
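The core idea behind ID3 — recursively splitting the examples on the attribute that yields the highest information gain — can be sketched as follows. This is a minimal reconstruction of the well-known algorithm, with an invented toy dataset; Quinlan’s system handled much more (e.g., large datasets and noise).

```python
# Minimal sketch of ID3-style decision-tree induction.
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def id3(rows, labels, features):
    """rows: list of attribute dicts; labels: parallel class labels.
    Returns a leaf label, or (feature, {value: subtree}) for a split."""
    if len(set(labels)) == 1:            # pure node: stop
        return labels[0]
    if not features:                     # no attributes left: majority vote
        return Counter(labels).most_common(1)[0][0]

    def gain(f):
        # information gain = entropy before split - weighted entropy after
        remainder = 0.0
        for v in set(r[f] for r in rows):
            sub = [l for r, l in zip(rows, labels) if r[f] == v]
            remainder += len(sub) / len(rows) * entropy(sub)
        return entropy(labels) - remainder

    best = max(features, key=gain)       # most informative attribute
    branches = {}
    rest = [f for f in features if f != best]
    for v in set(r[best] for r in rows):
        sub_rows = [r for r in rows if r[best] == v]
        sub_labels = [l for r, l in zip(rows, labels) if r[best] == v]
        branches[v] = id3(sub_rows, sub_labels, rest)
    return (best, branches)

def classify(tree, row):
    """Walk the tree from the root down to a leaf label."""
    while isinstance(tree, tuple):
        feature, branches = tree
        tree = branches[row[feature]]
    return tree
```

On a four-row toy weather table, the tree splits first on the attribute with the higher gain and classifies unseen rows by walking down the branches.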
Inductive systems include Ryszard Michalski's "conceptual clustering" and Tom Mitchell's "version space". Michalski’s method is “data-driven” like Winston’s: the symbolic description is built bottom-up from the set of instances.
Mitchell’s method is instead “model-driven”: symbolic descriptions are predefined and instances select the most appropriate one. In Mitchell’s case, all concepts and their abstractions are represented in a space which is partially ordered by the relation of generality. An incremental process of refinement narrows down the space to one description. New instances “shrink” the space by generalizing the set of minimal elements and specializing the set of maximal elements. As instances keep arriving, the concept gets defined more and more accurately, until the two sets of minimal and maximal elements coincide. Mitchell’s “version space” seems to be psychologically plausible because concepts are indeed learned and refined over a period of time, their vagueness slowly turning into crispness.
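The shrinking of the version space between its two boundaries can be sketched as a simplified candidate-elimination loop. Here a hypothesis is a tuple of attribute values where '?' matches anything; S is the minimal (most specific) boundary and G the maximal (most general) boundary. The data are invented, and the sketch omits refinements of the full algorithm (e.g., pruning G members that are more specific than others).

```python
# Simplified sketch of candidate elimination over conjunctive hypotheses.
# A hypothesis is a tuple of attribute values; '?' matches any value.

def consistent(h, x):
    """Does hypothesis h cover instance x?"""
    return all(hv == '?' or hv == xv for hv, xv in zip(h, x))

def candidate_elimination(examples):
    """examples: list of (attribute_tuple, is_positive).
    Returns the boundaries (S, G) of the version space."""
    n = len(examples[0][0])
    S = None              # most specific boundary (starts: matches nothing)
    G = [('?',) * n]      # most general boundary (matches everything)
    for x, positive in examples:
        if positive:
            # generalize S just enough to cover x; drop G members
            # that fail to cover the positive instance
            S = x if S is None else tuple(
                s if s == v else '?' for s, v in zip(S, x))
            G = [g for g in G if consistent(g, x)]
        else:
            # specialize every G member that wrongly covers the negative
            new_G = []
            for g in G:
                if not consistent(g, x):
                    new_G.append(g)
                    continue
                for i in range(n):
                    # fill in a '?' with S's value so g excludes x
                    if g[i] == '?' and S and S[i] != '?' and S[i] != x[i]:
                        new_G.append(g[:i] + (S[i],) + g[i + 1:])
            G = new_G
    return S, G
```

With two positive instances and one negative, the boundaries converge: S generalizes upward, G specializes downward, and the concept is fully determined when they meet.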