Piero Scaruffi (Copyright © 2013 Piero Scaruffi | Legal restrictions)
These are excerpts and elaborations from my book "The Nature of Consciousness"
An "expert system" is simply a software system that has a knowledge base and some inference methods that can be applied to that knowledge base.
The knowledge base describes the rules that apply to the domain of expertise. The “inference engine” is capable of inferring from those rules the appropriate action in the face of a specific situation. The combination of a knowledge base and an inference engine should therefore yield a machine that behaves just like a human expert (i.e., one who makes the same decisions in the same circumstances) within the domain of expertise represented in the knowledge base.
Since all methods commonly employed to represent knowledge can be reduced to some variant of Predicate Logic, logic can provide the inference techniques required to draw conclusions from the knowledge base. For example, some representation systems (the “production systems”) simply encode knowledge in production rules, and a production rule basically asserts a new fact within a knowledge base whenever some other facts have been asserted. In the presence of a new situation (i.e., of a set of new facts), a number of production rules will “fire” and assert another set of new facts, which in turn will trigger more production rules, and so forth recursively (“forward chaining”). Vice versa, one can prove the truth of a statement by looking up which production rules would assert it and what has to be true in order for them to fire, and so forth recursively (“backward chaining”). This way of reasoning belongs to deduction, the most studied and reliable form of logical reasoning.
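Both chaining strategies can be sketched in a few lines. The following toy production system is illustrative only (the rule and fact names are invented, not taken from any real expert system): forward chaining fires rules until no new fact appears, while backward chaining proves a goal by finding a rule that asserts it and proving its premises.

```python
# A minimal production system. Each rule is (premises, conclusion):
# when all premises are in the fact base, the conclusion may be asserted.
RULES = [
    ({"has_fever", "has_cough"}, "has_flu"),
    ({"has_flu"}, "needs_rest"),
]

def forward_chain(facts, rules):
    """Forward chaining: fire every applicable rule, recursively,
    until no rule adds a new fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts, rules):
    """Backward chaining: a goal is proven if it is a known fact, or if
    some rule concludes it and all of that rule's premises are provable."""
    if goal in facts:
        return True
    return any(conclusion == goal and
               all(backward_chain(p, facts, rules) for p in premises)
               for premises, conclusion in rules)

print(forward_chain({"has_fever", "has_cough"}, RULES))
print(backward_chain("needs_rest", {"has_fever", "has_cough"}, RULES))
```

Note that forward chaining is data-driven (new facts trigger rules) while backward chaining is goal-driven (the desired conclusion selects which rules to examine), which is why the two suit different tasks (monitoring vs. diagnosis).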
An influential paradigm was introduced by the US economist Herbert Simon and the US mathematician Allen Newell. Just as the rules of grammar provide a simple means to generate all possible sentences of a given language, the recursive application of rules can provide a simple means to generate all possible actions in a given “space”. The issue then shifts to finding out which particular sequence of actions leads to a solution: the machine has to be able to “search” for that correct sequence. Thus the process performed by an expert system can also be viewed as a “search” in a space of all possible solutions.
Each logical step corresponds to a step in the search through that abstract space for the solution to the current problem. The search can be “blind” or “heuristic”: the former recursively applies a set of algorithms (the same ones regardless of the type of problem at hand, such as “modus ponens” or “reductio ad absurdum”), hoping that eventually it will stumble into the solution; the latter employs “clues” about the problem at hand (or “domain heuristics”) in order to find short-cuts. The algorithms employed during a heuristic search can be either “weak” methods, such as “hill climbing” and “means-end analysis”, which are relatively independent of the domain, or methods which are entirely domain-specific.
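Hill climbing, the simplest of the “weak” methods, can be sketched as follows. The problem (guessing a hidden word, letter by letter) and all names are invented for the example; the method itself is generic: repeatedly move to the best-scoring neighboring state, and stop when no neighbor scores better.

```python
TARGET = "expert"  # the hidden state the search is climbing toward

def score(guess):
    """Heuristic evaluation: how many characters match the target."""
    return sum(a == b for a, b in zip(guess, TARGET))

def neighbors(guess):
    """All states reachable by changing a single character."""
    for i in range(len(guess)):
        for c in "abcdefghijklmnopqrstuvwxyz":
            yield guess[:i] + c + guess[i + 1:]

def hill_climb(start):
    """Move to the best neighbor until no neighbor improves the score."""
    current = start
    while True:
        best = max(neighbors(current), key=score)
        if score(best) <= score(current):
            return current  # a (local) maximum of the heuristic
        current = best

print(hill_climb("aaaaaa"))
```

The weakness of the method is visible in the stopping condition: it halts at any local maximum, which for harder problems need not be the solution; the heuristic trades completeness for speed.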
Pioneering expert systems include: Newell and Simon's "General Problem Solver" (1957), Edward Feigenbaum’s "Dendral" (1965) for analyzing chemical compounds, Bruce Buchanan’s "Mycin" (1972) for diagnosing diseases, and John McDermott’s "Xcon" (1980) for configuring computers. Since the 1980s a growing number of them have entered the workforce. But they were far from exhibiting any “intelligence”, other than what one expects from machines.
Note that Newell and Simon had basically mechanized psychology: they had mechanized the very self that (in humans) thinks, searches and finds solutions. Indirectly, their architecture implied that the conscious self is an epiphenomenon, a side effect, an outcome and not a cause, of intelligent behavior.