Piero Scaruffi (Copyright © 2013 Piero Scaruffi | Legal restrictions)
These are excerpts and elaborations from my book "The Nature of Consciousness"
Learning a single concept is actually not a big deal. Many concepts must be learned to perform even the simplest of daily tasks, and, once learned, they must be combined into a “theory” of the domain if they are to make any sense at all. Given a theory of the domain, an individual or a system can plan meaningful actions in that domain. Theory formation turns out to be quite tricky: a group of concepts can be combined in infinite ways, and most combinations are not very useful. Physics is a good example of a theory: concepts abound, from mass to electricity, but they are held together by just a few laws.
Douglas Lenat believed that theories can be built only by using “rules of thumb” about what a theory is and what it usually looks like. In other words, some concepts are more interesting than others, and some relations between concepts are more interesting than others. Lenat’s heuristics play the role of a scientist’s intuition.
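The flavor of heuristic-guided discovery can be illustrated with a toy sketch (an assumption for illustration only, not Lenat’s actual AM program): candidate concepts are generated by specializing a seed concept, and a rule of thumb decides which ones are “interesting” enough to keep.

```python
# Toy heuristic-guided concept discovery (illustrative only, not Lenat's AM).
# Candidate concepts specialize the seed concept "number"; a rule of thumb
# keeps those that are neither empty nor trivially all-inclusive.

UNIVERSE = range(1, 51)  # assumed toy universe of discourse

# Hypothetical specializing predicates over the seed concept.
specializers = {
    "even":   lambda n: n % 2 == 0,
    "square": lambda n: int(n ** 0.5) ** 2 == n,
    "gt_100": lambda n: n > 100,  # empty in this universe, hence uninteresting
}

def interesting(pred):
    """Rule of thumb: a concept with some examples, but not almost all."""
    examples = [n for n in UNIVERSE if pred(n)]
    return 0 < len(examples) < 0.9 * len(UNIVERSE)

kept = [name for name, pred in specializers.items() if interesting(pred)]
print(kept)  # -> ['even', 'square']
```

The “interestingness” test is of course the crux: in a real discovery system it would be a large library of such heuristics, standing in for the scientist’s intuition.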
Lenat’s approach to building theories was model-driven. Pat Langley’s approach, instead, was data-driven: given experimental data, build a hierarchy of hypotheses and eventually a full-fledged theory that explains them. The only rule of thumb is that regularity matters and everything else does not: any theory is a theory of the regularities that occur in a domain.
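A data-driven search for regularities can be sketched in a few lines (a minimal illustration in the spirit of Langley’s BACON, not his actual system): given paired measurements, try simple combinations of the variables and report one that stays constant.

```python
# Minimal BACON-style regularity search (illustrative sketch, not Langley's code).
# Try simple combinations (product, ratio) of two measured variables and
# return the first one that is (nearly) invariant across the data.

def find_regularity(xs, ys, tol=1e-6):
    """Return (name, constant) for an invariant combination, or None."""
    candidates = {
        "x*y": [x * y for x, y in zip(xs, ys)],
        "x/y": [x / y for x, y in zip(xs, ys)],
    }
    for name, values in candidates.items():
        mean = sum(values) / len(values)
        if all(abs(v - mean) <= tol * abs(mean) for v in values):
            return name, mean
    return None

# Boyle's law: at fixed temperature, pressure * volume is constant.
pressures = [1.0, 2.0, 4.0, 8.0]
volumes   = [8.0, 4.0, 2.0, 1.0]
print(find_regularity(pressures, volumes))  # -> ('x*y', 8.0)
```

Even this trivial searcher “rediscovers” Boyle’s law from the data alone, which is exactly the sense in which such a program treats a theory as nothing but the regularities in a domain.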
Either way, one needs heuristics (intuition, rules of thumb, common sense) in order to learn a new theory. Both Lenat and Langley became intrigued by the origins of heuristics and started studying how heuristics themselves can be learned. In other words: how does one progress from being a novice, who moves blindly around the environment and is capable only of applying rigid rules, to being an expert, who relies on intuition and rules of thumb? For Lenat this meant progressing from weak, general-purpose methods to domain-specific methods through a process of generate and test (generate a strategy, test it, tweak it, and so forth). Tom Mitchell’s approach was similar, but aimed at generating the version space of hypotheses consistent with the examples.
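Mitchell’s version-space idea can be sketched in simplified form (an assumption for illustration: only the specific boundary is tracked, in the style of the Find-S algorithm, rather than the full candidate-elimination procedure): start from the most specific hypothesis and minimally generalize it to cover each positive example.

```python
# Simplified version-space sketch (Find-S style: specific boundary only,
# not Mitchell's full candidate-elimination algorithm).
# A hypothesis is a tuple of attribute values; "?" matches anything and
# "0" (the most specific hypothesis) matches nothing.

def generalize(h, x):
    """Minimally generalize hypothesis h to cover positive example x."""
    return tuple(xv if hv == "0" else (hv if hv == xv else "?")
                 for hv, xv in zip(h, x))

def find_s(examples):
    """Return the most specific hypothesis covering all positive examples."""
    h = ("0",) * len(examples[0][0])
    for x, label in examples:
        if label:
            h = generalize(h, x)
    return h

# Hypothetical training data: (attributes, is-positive).
data = [
    (("sunny", "warm", "high"), True),
    (("sunny", "warm", "low"),  True),
    (("rainy", "cold", "high"), False),
]
print(find_s(data))  # -> ('sunny', 'warm', '?')
```

The learned hypothesis generalizes exactly as far as the positive examples force it to, which is the sense in which the version space narrows from rigid, example-bound rules toward a general concept.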
All of these are attempts at building machines that can learn. All of them are extremely limited in how and what they can learn.
Notwithstanding these attempts at building knowledge-based programs that can learn, learning has remained a liability, not an asset, of the field, especially when compared with the achievements of neural networks.