(These are excerpts from my book "Intelligence is not Artificial")
Footnote (or Part 5.2): Recurrent Neural Networks
Neural networks, up to this point, were good for recognizing patterns (e.g., that this particular object is an apple) but not for learning events in time.
Neural networks, in principle, have no sense of time.
Recurrent neural networks (and, in general, connectionist models) have a rudimentary sense of persistence, of "memory". The question soon arose of how to represent time in connectionist models, particularly in relation to natural language processing. Traditional approaches to representing time basically used a spatial metaphor for time: create a representation in space of the sequence of events. The difficulty in using neural networks for analyzing language is that the connectionist representation of a neural network has none of the compositionality of syntax. However, while not syntactically compositional, a connectionist representation can still be functionally compositional.
Michael Jordan extended recurrent neural networks to the analysis of sequences by feeding the output into a "context" layer which is then fed back into the middle layer; i.e. the network is fed with the input of a step and with the output of the previous step ("Attractor Dynamics and Parallelism in a Connectionist Sequential Machine", 1986). Following in his footsteps, Jeffrey Elman at UC San Diego invented what he called a "simple recurrent network" (SRN), in which the context layer instead holds a copy of the hidden layer's previous state: another model for processing sequences and not just patterns ("Finding Structure in Time", 1990). If recurrent neural networks can operate on sequences, it means that they can model relationships in time, something crucial for processing natural language.
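The mechanics of an Elman-style network can be sketched in a few lines of Python with NumPy. This is a minimal illustration, not Elman's actual implementation: all names, sizes and random weights are arbitrary. The key idea is that the "context" fed into the hidden layer at each step is simply the hidden layer's own state from the previous step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions chosen arbitrarily for illustration
n_in, n_hidden, n_out = 3, 5, 2

# Weight matrices: input->hidden, context->hidden, hidden->output
W_xh = rng.normal(scale=0.5, size=(n_hidden, n_in))
W_hh = rng.normal(scale=0.5, size=(n_hidden, n_hidden))  # acts on the context layer
W_hy = rng.normal(scale=0.5, size=(n_out, n_hidden))

def elman_step(x, h_prev):
    """One time step of an Elman-style network: the 'context' is the
    hidden state of the previous step, fed back into the hidden layer."""
    h = np.tanh(W_xh @ x + W_hh @ h_prev)
    y = W_hy @ h
    return h, y

# Process a sequence of four input vectors, carrying the context forward
sequence = rng.normal(size=(4, n_in))
h = np.zeros(n_hidden)        # the context starts out empty
outputs = []
for x in sequence:
    h, y = elman_step(x, h)
    outputs.append(y)
```

A Jordan network differs only in what is fed back: the previous *output* y rather than the previous hidden state h.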
Hava Siegelmann of Bar-Ilan University in Israel and Eduardo Sontag of Rutgers University proved an encouraging theorem: that recurrent neural networks can implement Turing machines if the weights of the connections are rational numbers; and that recurrent neural networks are even more powerful than a universal Turing machine if the weights are real numbers, i.e. if the network is analog ("On the Computational Power of Neural Networks", written in 1992 but published only in 1995).
Typical applications of RNNs are: image captioning, which turns an image into a sequence of words ("sequence output"); sentence classification, which turns a sequence of words into a category ("sequence input"); and sentence translation (sequence input and sequence output). The innovation in RNNs is a hidden layer that connects two points in time. In the traditional feed-forward structure, each layer of a neural network feeds into the next layer. In RNNs there is a hidden layer that feeds not only into the next layer but also into itself at the next time step. This recursion or cycle adds a model of time to the network; training it requires "unrolling" the cycle over the time steps and applying backpropagation to the unrolled network, a procedure known as "backpropagation through time".
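Backpropagation through time can be made concrete with a toy example: a one-neuron recurrent network unrolled over a short sequence, whose gradient is computed by walking the chain rule backwards through the time steps. This is an illustrative sketch (the network, sequence and target are made up); the analytic gradient is checked against a numerical finite-difference estimate.

```python
import numpy as np

def forward(w, u, xs, h0=0.0):
    """Run a one-neuron RNN, h_t = tanh(w*x_t + u*h_{t-1}), over sequence xs."""
    hs = [h0]
    for x in xs:
        hs.append(np.tanh(w * x + u * hs[-1]))
    return hs

def loss(w, u, xs, target):
    """Squared error on the final hidden state."""
    return 0.5 * (forward(w, u, xs)[-1] - target) ** 2

def bptt_grad_u(w, u, xs, target):
    """Backpropagation through time: unroll the recurrence and push the
    error gradient backwards, one time step at a time."""
    hs = forward(w, u, xs)
    dh = hs[-1] - target                   # dL/dh at the last step
    grad_u = 0.0
    for t in range(len(xs), 0, -1):
        dpre = dh * (1.0 - hs[t] ** 2)     # back through the tanh
        grad_u += dpre * hs[t - 1]         # contribution of u at step t
        dh = dpre * u                      # hand the gradient to step t-1
    return grad_u

xs = [0.5, -1.0, 0.8, 0.2]
w, u, target = 0.7, 0.9, 0.3

analytic = bptt_grad_u(w, u, xs, target)
eps = 1e-6
numeric = (loss(w, u + eps, xs, target) - loss(w, u - eps, xs, target)) / (2 * eps)
```

The two estimates agree to several decimal places, confirming that unrolling the recurrence is just ordinary backpropagation applied through time.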
Neural networks were being designed to solve one specific problem at a time, but in 1990 Robert Jacobs at the University of Massachusetts introduced the "mixture-of-experts" architecture that trains different neural networks simultaneously and lets them compete to learn, with the result that different networks end up learning different functions ("Task Decomposition Through Competition in a Modular Connectionist Architecture", 1990). Similarly, Tom Mitchell's student Rich Caruana at Carnegie Mellon University showed that multiple neural networks learning tasks in parallel could benefit from sharing what they learned ("Multitask Learning, a Knowledge-based Source of Inductive Bias", 1993). This would also be shown by Ronan Collobert's and Jason Weston's unified architecture for natural language processing of 2008 and in 2015 by Vijay Pande's group at Stanford with a multitask network for drug discovery whose accuracy improved as additional tasks and data were added ("Massively Multitask Networks for Drug Discovery", 2015).
Another elusive goal of machine learning was to equip machines with "transfer learning", i.e. the ability to transfer what they learn on one task to another task, and with "multitask learning", the ability to learn more than just one task. In 1991 Satinder Singh of the University of Massachusetts published "Transfer of Learning by Composing Solutions of Elemental Sequential Tasks" and Lorien Pratt of the Colorado School of Mines published "Direct Transfer of Learned Information Among Neural Networks". By then, Tom Mitchell's group at Carnegie Mellon University had become the world's center of excellence in transfer learning and multitask learning, as documented by Sebastian Thrun's "Is Learning the N-th Thing Any Easier Than Learning the First?" (1996) and Rich Caruana's "Multitask Learning" (1997). That school of research seemed to reach an insurmountable peak when Sebastian Thrun and Lorien Pratt edited the book "Learning to Learn" (1998).
A general problem of neural networks with many layers ("deep" neural networks), and of RNNs in particular, is the "vanishing gradient", already described in 1991 by Josef "Sepp" Hochreiter at the Technical University of Munich ("Investigations on Dynamic Neural Networks", 1991) and more famously in 1994 by Yoshua Bengio ("Learning Long-Term Dependencies with Gradient Descent is Difficult"). The expression "vanishing gradient" refers to the fact that the error signal used to update the weights becomes weaker and weaker as it is propagated back through the layers. It is a problem similar to calculating the probability of a chain of events: if you multiply a probability between 0 and 1 by another probability between 0 and 1 and so on many times, the result approaches zero, even when each of those numbers expresses a probability of 99%. A network with many layers is difficult to train because the weight updates for the earliest layers end up being vanishingly small.
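The arithmetic behind that analogy can be checked in a few lines of plain Python (purely illustrative):

```python
# Multiplying many factors smaller than 1 drives the product towards zero.
# This is what happens to the error gradient as it is propagated back
# through many layers (or, in an RNN, through many time steps).
product = 1.0
for _ in range(1000):
    product *= 0.99      # even a factor as close to 1 as 0.99...

print(product)           # ...has shrunk to roughly 4e-05 after 1000 steps
```

After 1000 multiplications the "gradient" has all but vanished, which is why the earliest layers of a very deep network barely learn at all.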
Machine learning was doing just about fine without any need to model the time dimension: support vector machines, logistic regression, and traditional as well as convolutional neural networks can recognize patterns. There are, however, applications that require a sequential analysis, i.e. that need to model the time dimension. Examples are speech recognition, image captioning, language translation, and handwriting recognition. Hidden Markov models have been used to model time but they become computationally impractical for modeling long-range dependencies (when you have to remember not just a few earlier steps but many earlier steps).
In 1997 Sepp Hochreiter and his professor Juergen Schmidhuber (by now director of the Istituto Dalle Molle di Studi sull'Intelligenza Artificiale, or IDSIA, in Switzerland) came up with a solution: the Long Short-Term Memory (LSTM) model. In this model, the unit of the neural network (the "neuron") is replaced by one or more memory cells. Each cell functions like a mini-Turing machine, performing simple operations of read, write, store and erase that are triggered by simple events. The big difference with Turing machines is that these are not binary decisions but "analog" decisions, represented by real numbers between 0 and 1, not just 0 and 1. For example, if the network is analyzing a text, a unit can store the information contained in a paragraph and apply this information to a subsequent paragraph. The reasoning behind the LSTM model is that a recurrent neural network contains two kinds of memory: there is a short-term memory about recent activity and there is a long-term memory, which is the traditional "weights" of the connections that change based on this recent activity. The weights change very slowly as the network is being trained. The LSTM model also tries to retain the information contained in the recent activity, which traditional networks use only to fine-tune the weights and then discard.
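A single LSTM step can be sketched in NumPy. This follows the now-standard formulation (the forget gate was a later refinement of the 1997 design), and all names, sizes and random weights are illustrative. The "analog decisions" of the text are the gates: sigmoid outputs between 0 and 1 that decide how much to erase, write and read from the memory cell.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_cell = 3, 4   # sizes chosen arbitrarily for illustration

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One weight matrix per gate, acting on [input, previous hidden state]
W_f, W_i, W_o, W_c = (rng.normal(scale=0.5, size=(n_cell, n_in + n_cell))
                      for _ in range(4))

def lstm_step(x, h_prev, c_prev):
    """One step of an LSTM cell. The gates are 'analog' decisions between
    0 and 1: forget erases, input writes, output reads the memory cell c."""
    z = np.concatenate([x, h_prev])
    f = sigmoid(W_f @ z)              # how much of the old memory to keep
    i = sigmoid(W_i @ z)              # how much new content to store
    o = sigmoid(W_o @ z)              # how much of the memory to expose
    c_tilde = np.tanh(W_c @ z)        # candidate new content
    c = f * c_prev + i * c_tilde      # update the memory cell
    h = o * np.tanh(c)                # hidden state read out of the cell
    return h, c

# Run the cell over a short sequence, carrying h and c forward
h = np.zeros(n_cell)
c = np.zeros(n_cell)
for x in rng.normal(size=(5, n_in)):
    h, c = lstm_step(x, h, c)
```

Because the cell state c is updated additively (erase a little, write a little) rather than squashed through a nonlinearity at every step, the gradient flowing through it decays far more slowly, which is precisely how LSTM sidesteps the vanishing gradient.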
LeCun's convolutional nets had solved the problem of how to train deep feed-forward neural nets; Long Short-Term Memory solved the problem of how to train deep recurrent neural nets.
At the same time that LSTMs were introduced, in 1997 Mike Schuster and Kuldip Paliwal at the Advanced Telecommunications Research Institute in Japan introduced "bidirectional recurrent neural networks", which process each sequence both forwards and backwards and can achieve a similar feat of persistence.
In the 2010s several other kinds of neural networks coupled with some long-term memories, similar to LSTMs, have been proposed: "Neural Turing Machines" (2014, by Alex Graves at Google's DeepMind in Britain) and "Memory Networks" (2014, by Jason Weston at Facebook's labs in New York). Just like LSTMs, these are neural networks that can perform complex inference. This seems to be a progressive shift towards a form of hybrid computing that may reconcile neural networks and the old knowledge-based school.
Purchase "Intelligence is not Artificial"