Anderson James & Rosenfeld Edward:
NEUROCOMPUTING (MIT Press, 1988)





A comprehensive collection of historical papers on brain anatomy, cognitive psychology, cybernetics and neural networks.
William James had a number of powerful intuitions: that the brain is built to ensure survival in the world; that cognitive functions cannot be abstracted from the environment that they deal with; that the brain is organized as an associative network; that associations are governed by a rule of reinforcement.
Warren McCulloch's and Walter Pitts' 1943 "A logical calculus of the ideas immanent in nervous activity" is a seminal paper that laid the foundations for the computational theory of the brain. Their binary neuron can be in only one of two possible states, has a fixed threshold below which it never fires, receives inputs through excitatory and inhibitory synapses, and integrates its input signals at discrete intervals of time. If no inhibitory synapse is active and the sum of the active excitatory synapses equals or exceeds the threshold, the neuron fires. A network of such binary neurons is computationally equivalent to a universal Turing machine: any finite logical proposition can be realized by such a network, and therefore any computer program can in principle be implemented as a neural net.
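The behavior of such a unit fits in a few lines of code; the following is a minimal sketch, with illustrative function names and gate examples that are not taken from the paper:

```python
# A McCulloch-Pitts binary neuron: inhibition is absolute, and the unit
# fires when the count of active excitatory inputs equals or exceeds the
# fixed threshold. Names and examples are illustrative.

def mp_neuron(excitatory, inhibitory, threshold):
    """Return 1 (fire) or 0 (stay silent) for one discrete time step."""
    if any(inhibitory):                 # any active inhibitory synapse vetoes firing
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# Logical gates realized as single binary neurons:
AND = lambda a, b: mp_neuron([a, b], [], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [], threshold=1)
NOT = lambda a: mp_neuron([1], [a], threshold=1)
print(AND(1, 1), OR(0, 1), NOT(1))      # prints: 1 1 0
```

Since the elementary gates can be built this way, any finite logical circuit can be assembled out of such units, which is the substance of McCulloch and Pitts' claim.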
Featured are the main forefathers of today's neural architectures. Oliver Selfridge's 1958 "Pandemonium" employs a battery of independent units to analyze the input, each specialized in a different recognition task, so that the input is progressively identified through a number of hierarchical layers, each one relying on the conclusions of the lower ones.
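Loosely, a toy version of that layering might look like the sketch below; the "demons" and their scoring functions are invented for illustration and are not Selfridge's:

```python
# A toy Pandemonium: feature "demons" each score one property of the input,
# cognitive "demons" combine those scores into evidence for a category, and
# a decision "demon" picks the loudest shout. All demons are hypothetical.

def pandemonium(image, feature_demons, cognitive_demons):
    features = {name: demon(image) for name, demon in feature_demons.items()}
    shouts = {label: demon(features) for label, demon in cognitive_demons.items()}
    return max(shouts, key=shouts.get)          # decision demon: loudest shout wins

# Tiny example: tell a vertical bar from a horizontal bar.
feature_demons = {
    "rows_filled": lambda img: sum(1 for row in img if all(row)),
    "cols_filled": lambda img: sum(1 for col in zip(*img) if all(col)),
}
cognitive_demons = {
    "horizontal": lambda f: f["rows_filled"],
    "vertical":   lambda f: f["cols_filled"],
}
image = [[0, 1, 0],
         [0, 1, 0],
         [0, 1, 0]]
print(pandemonium(image, feature_demons, cognitive_demons))   # prints: vertical
```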
Frank Rosenblatt's 1958 "Perceptron", based on a non-linear model of memory, was probably the first artificial neural network for learning concepts.
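A stripped-down sketch of the idea is shown below: a threshold unit whose weights are nudged whenever it misclassifies an example. The learning rate, the number of epochs and the logical-OR data set are illustrative choices; this is the abstract learning procedure, not Rosenblatt's original photoperceptron.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Threshold unit trained by error-driven weight updates."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(X, y):
            pred = 1 if x @ w + b > 0 else 0
            w += lr * (target - pred) * x   # change weights only on mistakes
            b += lr * (target - pred)
    return w, b

# Learn a linearly separable concept (logical OR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1])
w, b = train_perceptron(X, y)
print([1 if x @ w + b > 0 else 0 for x in X])   # [0, 1, 1, 1]
```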
Bernard Widrow's and Marcian Hoff's 1960 "Adaptive switching circuits" yielded the ADALINE, a variation on the perceptron based on a supervised learning rule, the "error correction rule", that could learn in a faster and more accurate way: synaptic strengths are changed in proportion to the error (the difference between what the output is and what it should have been) times the input.
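In code the rule reduces to a one-line weight update; the learning rate, the number of epochs and the logical-AND example below are illustrative choices:

```python
import numpy as np

def train_adaline(X, y, lr=0.1, epochs=100):
    """Widrow-Hoff (LMS) rule: weight change = lr * error * input, where the
    error is measured on the linear output, not on a thresholded decision."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(X, y):
            error = target - (x @ w + b)
            w += lr * error * x
            b += lr * error
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1], dtype=float)      # bipolar targets for logical AND
w, b = train_adaline(X, y)
print(np.sign(X @ w + b))                       # ~ [-1. -1. -1.  1.] once the weights settle
```

The difference from the perceptron is visible in the update: the ADALINE corrects its weights in proportion to how wrong the linear output is, not merely on whether the final decision was wrong, which is what makes the learning faster and more graded.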
Briefly mentioned are also Teuvo Kohonen's linear model for memory and Stephen Grossberg's non-linear quantitative descriptions of brain processes.
John Hopfield's 1982 "Neural networks and physical systems with emergent collective computational abilities" developed a model inspired by "spin glass" materials: it resembles a one-layer neural network in which the weights are symmetric, the learning rule is Hebbian, the neurons are binary and each neuron is connected to every other neuron. As they learn, Hopfield's nets develop configurations that are dynamically stable (or "ultrastable"). Their dynamics is dominated by a tendency towards a very high number of locally stable states (or "attractors"). Every memory is a local "minimum" of an energy function similar to potential energy.
Hopfield's nets exhibit the ability to correct incomplete or incorrect information, because states that deviate from a local minimum are attracted back towards that minimum, so the deviations are canceled out. Compared with the perceptron, a Hopfield net is asynchronous (which is a more plausible model of the nervous system) and employs backward coupling (feedback connections). In a later paper (also included here) Hopfield replaced the binary neuron with a more biologically plausible neuron with a continuous, graded response.
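A compact sketch of such a network is given below: Hebbian storage of a couple of patterns, asynchronous binary updates, and the energy function whose local minima play the role of the memories. The pattern sizes, the amount of noise and the number of update steps are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def store(patterns):
    """Hebbian, symmetric weights with no self-connections."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0)
    return W

def energy(W, s):
    return -0.5 * s @ W @ s              # E = -1/2 * sum_ij w_ij * s_i * s_j

def recall(W, s, steps=200):
    s = s.copy()
    for _ in range(steps):               # asynchronous updates: one random unit at a time
        i = rng.integers(len(s))
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
W = store(patterns)
noisy = patterns[0].copy()
noisy[:2] *= -1                          # corrupt two bits of the first memory
restored = recall(W, noisy)
print(energy(W, noisy), energy(W, restored))   # the energy drops towards a local minimum
print(restored)                                # ideally the original pattern is recovered
```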
More than anything else, Hopfield proved that, despite Minsky's critique, neural networks are feasible and can even be useful.
Kunihiko Fukushima's 1983 "Neocognitron" is a multi-layered network with strong self-organizing properties, based on Hubel and Wiesel's model of the visual system. A number of modules are triggered by a retina of photoreceptors. Each module contains a layer of simple "S-cells", which extract features from the previous stage, and a layer of more complex "C-cells", driven by the S-cell layer, which abstract (i.e., generalize over the position of) the features that the S-cells pick up.
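A toy rendering of one such module is sketched below: the "S" stage matches a local feature template everywhere in the image, and the "C" stage pools nearby S-responses so that the exact position of the feature matters less. The template, the image and the pooling window are invented for illustration and are not Fukushima's parameters.

```python
import numpy as np

def s_layer(image, template):
    """Simple cells: respond wherever the local patch matches the feature template."""
    h, w = template.shape
    H, W = image.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = max(0.0, np.sum(image[i:i + h, j:j + w] * template))
    return out

def c_layer(s_map, pool=2):
    """Complex cells: keep the strongest S-response in each small neighbourhood."""
    out = np.zeros((s_map.shape[0] // pool, s_map.shape[1] // pool))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = s_map[i * pool:(i + 1) * pool, j * pool:(j + 1) * pool].max()
    return out

image = np.zeros((6, 6)); image[2, 1:5] = 1.0           # a short horizontal bar
horizontal_edge = np.array([[1.0, 1.0], [-1.0, -1.0]])  # hypothetical feature template
print(c_layer(s_layer(image, horizontal_edge)))         # coarse map of where the feature occurs
```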
In Geoffrey Hinton's and Terrence Sejnowski's 1985 "A learning algorithm for Boltzmann machines" Hopfield's basic architecture (binary neurons, an energy function and so on) is retained, but Hopfield's learning rule is replaced with simulated annealing (start the system off at a very high "temperature" and then gradually lower the temperature towards zero), which Kirkpatrick and others had just proposed as a general-purpose optimization technique. The new model, the Boltzmann machine, is more stable than Hopfield's model in that, if the temperature is lowered slowly enough, it settles into a global minimum (the lowest-energy state) rather than getting trapped in a local one.
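The annealing part of the story can be sketched as below: stochastic ±1 units flip with a probability that depends on the energy change and on a temperature that is gradually lowered, so the network can jump out of poor local minima early on and settle into a deep minimum at the end. The random weights, the network size and the cooling schedule are illustrative, and the correlation-based weight-learning phase of the actual Boltzmann machine is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def energy(W, s):
    return -0.5 * s @ W @ s

def energy_gap(W, s, i):
    # Energy of unit i being "off" (-1) minus energy of it being "on" (+1):
    # a positive gap means the "on" state is energetically preferred.
    return 2.0 * (W[i] @ s - W[i, i] * s[i])

def anneal(W, s, T_start=5.0, T_end=0.05, sweeps=200):
    s = s.copy()
    for T in np.geomspace(T_start, T_end, sweeps):   # gradually lower the temperature
        for i in rng.permutation(len(s)):
            z = np.clip(energy_gap(W, s, i) / T, -50, 50)
            p_on = 1.0 / (1.0 + np.exp(-z))          # stochastic update at temperature T
            s[i] = 1 if rng.random() < p_on else -1
    return s

W = rng.normal(size=(12, 12))
W = (W + W.T) / 2                                    # symmetric weights, as in Hopfield's model
np.fill_diagonal(W, 0)
s0 = rng.choice([-1, 1], size=12)
print(energy(W, s0), energy(W, anneal(W, s0)))       # the annealed state has much lower energy
```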
David Rumelhart's and Geoffrey Hinton's "back-propagation" algorithm, originally proposed in 1986 and considerably faster than the Boltzmann machine, quickly became the most popular learning rule for multi-layered networks. The generalized "delta rule" is basically an adaptation of the Widrow-Hoff error correction rule to multi-layered networks: the error signal is propagated backwards from the output layer towards the input layer, adjusting the weights of each layer along the way. This was also the definitive answer to Minsky's critique, since multi-layered networks trained this way could solve the problems (such as the exclusive-or) that single-layer perceptrons could not.
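As a minimal illustration, the sketch below trains a single hidden layer of sigmoid units on the exclusive-or, the textbook example of a problem a single-layer perceptron cannot solve; the layer size, learning rate and number of epochs are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # exclusive-or

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)       # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)       # hidden -> output
lr = 0.5

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)                         # forward pass
    out = sigmoid(h @ W2 + b2)
    delta_out = (out - y) * out * (1 - out)          # error signal at the output layer
    delta_h = (delta_out @ W2.T) * h * (1 - h)       # propagated backwards to the hidden layer
    W2 -= lr * h.T @ delta_out; b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_h;   b1 -= lr * delta_h.sum(axis=0)

print(out.round(2).ravel())                          # should approach [0, 1, 1, 0]
```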

Copyright © 2005 Piero Scaruffi