Piero Scaruffi (Copyright © 2013 Piero Scaruffi | Legal restrictions)
These are excerpts and elaborations from my book "The Nature of Consciousness"
Experience vs Knowledge
Inspired by Edmund Husserl’s phenomenology, another US philosopher and critic of Artificial Intelligence, Hubert Dreyfus, argues that comprehension can never be separated from the context in which it occurs. The information in the environment is fundamental to a being's intelligence.
Dreyfus criticized the four fundamental assumptions of Artificial Intelligence: biological (that the brain must operate as a symbolic processor), psychological (that the mind must obey a heuristic program), epistemological (that there must be a theory of practical activity) and ontological (that the data necessary for intelligent behavior must be discrete, explicit and determinate). In his opinion, none of them is plausible. Furthermore, Dreyfus emphasizes the role of the body in intelligent behavior, a role that knowledge-based systems neglect. Human experience is intelligible only insofar as it is organized in terms of a situation (as a function of human needs).
Dreyfus presents a five-stage model of how humans acquire a skill. First, we are born novices: we simply follow the rules (an instructor, a manual). The moves of novices are neither confident nor fluid, although they can be technically correct. Sometimes applying a rule is plain silly, but the novice will still do so because he doesn't know better. Then we become advanced beginners. At this stage we are capable of modifying rules based on the situation. Our behavior is still driven by rules, but it no longer looks as awkward. Competent humans, the next stage, follow rules in a very fluid manner, and their rules are much more plastic: the competent human knows that she can modify the rules, and she will feel guilty if something goes wrong even though she followed the proper rules. Proficient performers do not follow rules anymore: they act by reflex. The fact that they have encountered similar situations many times matters more than the original rules. Experts, the final stage, do not even remember the rules; if asked to articulate them, they often cannot. They act based on their expertise and their intuition, and they are often not even aware of what they are doing. An expert driver does not realize that she is shifting gears, or at which point she is shifting them. She just shifts gears when it is appropriate.
Dreyfus points out that failure usually results in degradation: you step back to a lower stage to understand what went wrong. An expert does not even remember the rules, but, if she cannot start the car, she gradually walks down the ladder from expert to merely competent, all the way down to novice, and will finally pick up the driver's manual.
Only novices behave like expert systems. Human experts behave in a radically different way. An expert has synthesized experience into an unconscious behavior that reacts instantaneously to a complex situation. What the expert knows cannot be decomposed into rules or any other type of discrete knowledge representation; therefore it cannot be emulated by an expert system.
The foundation of Dreyfus's argument is that minds do not use a theory about the everyday world; and the reason is that there is no set of “context-free” primitives of understanding. Human knowledge is skilled "know-how", as opposed to the logical representations that expert systems have to rely upon, or "know-that".
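To make the contrast concrete, here is a minimal sketch (my illustration, not from the text) of the kind of "know-that" representation Dreyfus criticizes: an expert system as a set of explicit, discrete condition-action rules applied by forward chaining. The diagnostic facts and rule contents are invented for illustration.

```python
# Toy rule-based "expert system": explicit, discrete "know-that" rules
# (invented car-diagnosis rules, purely illustrative).
RULES = [
    # (condition over known facts, conclusion to add)
    ({"engine_cranks": False, "lights_work": False}, "battery_dead"),
    ({"engine_cranks": True, "engine_starts": False}, "check_fuel"),
]

def forward_chain(facts):
    """Apply rules repeatedly until no new conclusion follows."""
    conclusions = set()
    changed = True
    while changed:
        changed = False
        for condition, conclusion in RULES:
            if conclusion not in conclusions and all(
                facts.get(k) == v for k, v in condition.items()
            ):
                conclusions.add(conclusion)
                changed = True
    return conclusions

print(forward_chain({"engine_cranks": False, "lights_work": False}))
```

On Dreyfus's account, a human expert never consults such a discrete rule base: her "know-how" cannot be decomposed into condition-action pairs of this kind in the first place.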
Also drawing from Martin Heidegger's phenomenology, the US computer scientist Terry Winograd is skeptical that intelligence can arise from processes of the production-system type, i.e. from the systematic manipulation of representations. Intelligent systems act, they don't think. People are “thrown” into the real world and cannot afford to deal with all the possible alternatives of a situation. They think only when action does not yield the desired result. Only then do they pause to picture the situation in its complexity, decompose it into its constituents, and try to infer action from knowledge. But, again, this behavior is more typical of the novice than of the expert.
Another way to see the same argument is to consider what makes an expert so much more efficient at solving a problem: the first few seconds. A chess champion wins the game against a novice because of the first few moves, not because of the huge body of knowledge that the champion has and could use against the novice. That huge knowledge is, of course, important in determining the first moves (the ones that deliver the crippling blow to the novice).
The US computer scientist Rodney Brooks (“A Robust Layered Control System for a Mobile Robot”, 1986) offered an alternative way of achieving Artificial Intelligence, one that significantly revised the foundations of the symbol-processing program: he argued that intelligence cannot be separated from the body. Intelligence is not only a process of the brain; it is embodied in the physical world. Every part of the body performs an action that contributes to the overall “functioning” of the organism in the environment. There is no need for a central representation of the world, so long as all component tasks help each other operate in the world. Cognition is grounded in physical interactions with the world. Intelligence "is" about moving in a physical world and cannot exist without one.
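The layered-control idea can be sketched as follows. This is a minimal illustration of the principle (assumptions and names are mine, not Brooks's actual design): each layer maps sensor readings directly to an action, no central world model is built, and one layer's output can suppress another's.

```python
# Sketch of layered, representation-free control in the spirit of
# Brooks's subsumption architecture (illustrative, not his code).

def avoid(sensors):
    """Avoidance layer: reflexively turn away from nearby obstacles."""
    if sensors["obstacle_distance"] < 1.0:
        return "turn"
    return None  # no opinion; defer to other layers

def wander(sensors):
    """Wander layer: otherwise just keep moving."""
    return "forward"

def control(sensors, layers=(avoid, wander)):
    # Layers are tried in priority order; the first one that produces
    # an action suppresses the rest. No plan, no model of the world:
    # just direct couplings from sensing to acting.
    for layer in layers:
        action = layer(sensors)
        if action is not None:
            return action
    return "stop"

print(control({"obstacle_distance": 0.5}))  # obstacle nearby
print(control({"obstacle_distance": 5.0}))  # clear path
```

The point of the sketch is architectural: competence emerges from the interaction of simple sensor-to-action layers with the environment, not from reasoning over an internal representation.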
Dreyfus and Winograd created the schism between “Cartesian” and “Heideggerian” Artificial Intelligence programs. A requirement of the latter is the ability to work in “thrown” situations without building internal representations. Philip Agre built the first “Heideggerian AI”, Pengi, a system that played the arcade videogame Pengo.