Piero Scaruffi (Copyright © 2013 Piero Scaruffi | Legal restrictions)
These are excerpts and elaborations from my book "The Nature of Consciousness"
Compared with knowledge-based systems, neural networks offer not only different algorithms but also a different view of mental life. Knowledge-based systems rely on Jerry Fodor’s model of cognition: knowledge is represented and then computation is performed on that knowledge yielding some kind of action.
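To make the represent-then-compute model concrete, here is a minimal sketch (the facts, rules and names are invented for illustration, not taken from the text): knowledge is stored as explicit symbolic facts, and a computation over those facts yields an action.

```python
# Hypothetical sketch of a knowledge-based system in the Fodorian mold:
# explicit representations, plus computation over them, yielding action.

# Knowledge is represented as a set of symbolic facts.
knowledge = {("raining", True), ("have-umbrella", True)}

# Rules pair symbolic conditions with actions.
rules = [
    ({("raining", True), ("have-umbrella", True)}, "take-umbrella"),
    ({("raining", True), ("have-umbrella", False)}, "stay-inside"),
]

def act(kb, rules):
    # Computation: fire the first rule whose conditions all hold in the
    # knowledge base; the result is an action.
    for conditions, action in rules:
        if conditions <= kb:  # subset test: all conditions are known facts
            return action
    return "do-nothing"

assert act(knowledge, rules) == "take-umbrella"
```

The point of the sketch is the architecture, not the content: representation and computation are separate, and the action is caused by symbol manipulation.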
The British philosopher Andy Clark, instead, is an advocate of neural networks and argues that they provide a more plausible model of cognition than Fodor's "representations". Clark views neural networks (and connectionism in general) as a shift of perspective in how we view the mind: away from a "static" view of mental representations and toward a fluid view of the mind's cognitive activity, toward the process, not just the structure.
Jerry Fodor's representational theory of mind was meant to explain how thoughts become "causes". Fodor assumes that propositional attitudes ("I believe that", "I hope that", "I fear that", "I desire that") are computations on mental representations (e.g., a concept such as "my name is Piero"), which, in turn, can be objects of computation because they are symbolic expressions. Each kind of propositional attitude (e.g., "belief") expresses a different kind of role and therefore a different kind of computation. Thus, "I believe that my name is Piero" is different from "I hope that my name is Piero" because the computation performed on the mental representation is different. The human brain knows how to represent and compute because it comes equipped with a "language of thought" that works just like the language of mathematical logic.
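The idea can be sketched in a few lines of code (a toy illustration under my own assumptions; the representation scheme and function names are hypothetical, not Fodor's formalism): the same symbolic expression becomes different mental states depending on which attitude-computation is applied to it.

```python
# A mental representation in the "language of thought" is a symbolic
# expression (here, a tuple encoding "my name is Piero").
proposition = ("name-of", "self", "Piero")

# Each propositional attitude is a different computation (a different
# functional role) applied to the same symbolic representation.
def believe(p):
    return ("belief", p)   # held as true

def hope(p):
    return ("hope", p)     # desired to be true

belief = believe(proposition)
wish = hope(proposition)

# Same representation, different attitude, hence different mental state.
assert belief[1] == wish[1]   # identical symbolic content
assert belief[0] != wish[0]   # distinct attitude, distinct computation
```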
Clark does not believe such a language exists in the mind and does not believe that Fodor's vision of the mind can account for the "process" of thinking. In Fodor's model, learning is a secondary phenomenon and is largely independent of the environment. Clark, instead, advocates a model in which learning is a fundamental feature of the mind and learning is largely dependent on the environment. That is precisely the difference between knowledge-based models and connectionist models. Moreover, Fodor clearly distinguishes between the computation and the representation, whereas Clark believes that process and representation are one and the same. In a neural network they are.
Clark highlights three key features that make connectionism a more plausible model of cognition: superposition, context sensitivity and representational change. Superposition is the ability to represent two things with the same structure: the same neural network can be trained (by changing the weights of its connections) to recognize multiple items. Context sensitivity follows: because those weights encode multiple items, the "representation" of any one of them is automatically context-sensitive. Fodor's symbols are always the same regardless of the context in which they occur (context is expressed via relationships between symbols), whereas neural networks embody the context of what they represent (context is expressed internally). Representational change is not only the ability to create new representations (Fodor's models can do that by combining symbolic expressions into new symbolic expressions) but also the acquisition of new representational capacities. The difference is that Fodor's model learns by combining pre-existing, internal expressions, whereas a connectionist model learns by being trained by an external environment.
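Superposition can be shown with a toy Hopfield-style associative memory (a hypothetical sketch, not an example from the text): two patterns are stored in one and the same weight matrix via Hebbian learning, and either can be recalled from those shared weights.

```python
def store(patterns, n):
    # Hebbian learning: sum of outer products of the patterns,
    # with the diagonal zeroed (no self-connections).
    w = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, cue):
    # One synchronous update: sign of the weighted input sum.
    n = len(w)
    return [1 if sum(w[i][j] * cue[j] for j in range(n)) >= 0 else -1
            for i in range(n)]

# Two (orthogonal) patterns superposed in the SAME weights.
p1 = [1, 1, -1, -1]
p2 = [1, -1, 1, -1]
w = store([p1, p2], 4)

# The single weight matrix recovers either pattern from its cue.
assert recall(w, p1) == p1
assert recall(w, p2) == p2
```

No individual weight "stands for" either pattern; both representations live in the whole matrix at once, which is exactly the superposition Clark has in mind.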
Clark also points to general considerations on biological systems. Complex biological systems have evolved subject to the constraints of “gradualistic holism”: the evolution of a complex system is possible only insofar as that system is the last or latest link in a chain of structures, such that at each stage the chain involves only a small change (gradualism) and each stage yields a structure that is itself a viable whole (holism). This is precisely the way neural networks grow: at each point in time a neural network is a working network.
To Clark, the process is the key. One cannot explain how a network does what it does by inspecting it, because its competence depends on the "process" of learning: looking only at the result of learning is not enough to understand how the network performs the task it has learned to perform. It is like watching a man ride a bike without having watched him learn to ride it: the process of learning is what explains how he is now capable of riding the bike. If we try to analyze the action of riding the bike itself, we are in effect trying to reduce it to a set of symbols, which is a contradiction in terms because, again, riding a bike is not achieved by computing symbols; it is achieved by learning how to ride a bike.
Clark points out that, ideally, neural networks should also be able to undergo what the developmental psychologist Annette Karmiloff-Smith calls "redescriptions", or complete reorganizations of knowledge that open up new cognitive abilities and lead to a new developmental stage (as happens during child development).
On the other hand, since they are trained by a set of data that comes from the environment, connectionist systems depend on luck: they can only learn if the set of data includes enough statistical information (technically: associative learning is heavily dependent on the statistical distribution of input data). Our brain somehow learns even in a hostile environment that does not provide enough data about this or that concept, but neural networks fail badly to learn anything unless the set of input data is favorable to the desired training (their success depends on the continued availability of a friendly training environment). Clark's suggestion is that the human mind is a neural network that has evolved over thousands of years and therefore has absorbed huge amounts of innate knowledge. In other words, connectionism is not all there is to human cognition: evolution is another big piece of the story, because it predisposes the network.
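The statistical dependence can be illustrated with a deliberately simple associative learner (a sketch under my own assumptions; the features and labels are invented): it can only form associations that actually occur in its training sample, so a skewed sample leaves it with nothing to say about the missing concept.

```python
from collections import Counter

def learn(pairs):
    # Associative learning: count how often each feature
    # co-occurs with each label in the training data.
    table = {}
    for feature, label in pairs:
        table.setdefault(feature, Counter())[label] += 1
    return table

def predict(table, feature):
    # Answer with the most frequently associated label, if any was seen.
    if feature not in table:
        return None  # the training data never provided the association
    return table[feature].most_common(1)[0][0]

# A favorable sample covers both concepts...
rich = [("stripes", "zebra"), ("stripes", "zebra"), ("trunk", "elephant")]
m = learn(rich)
assert predict(m, "trunk") == "elephant"

# ...a skewed sample does not: nothing about elephants was learnable.
poor = [("stripes", "zebra"), ("stripes", "zebra"), ("stripes", "zebra")]
m2 = learn(poor)
assert predict(m2, "trunk") is None
```

The learner is not broken in the second case; its environment simply never supplied the statistics it needed, which is the sense in which connectionist systems "depend on luck".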
While Fodor views concepts as the building blocks of thoughts, represented by fixed structures and causing action through their relationships, Clark views a concept as a set of skills that a network learns, and the effect of those skills as the "behavior" of that network. Folk psychology creates the belief that there are such things as concepts when in reality there are only sets of learned skills. To ascribe a concept to a person is to ascribe a set of skills to that person; the set of skills defines the potential behavior of that person or network. Thus "concepts" are basically an illusion created by the language of folk psychology.