9. Recurrent neural networks
Slides: pdf
9.1. Recurrent neural networks
9.1.1. Problem with feedforward neural networks
Feedforward neural networks learn to associate an input vector to an output.
If you present a sequence of inputs \(\mathbf{x}_0, \mathbf{x}_1, \ldots, \mathbf{x}_t\) to a feedforward network, the outputs will be independent of each other:
Many problems depend on time series, such as predicting the future of a time series by knowing its past values:
Example: weather prediction, financial prediction, predictive maintenance, natural language processing, video analysis…
A naive solution is to aggregate (concatenate) inputs over a sufficiently long window and use it as a new input vector for the feedforward network.
Problem 1: How long should the window be?
Problem 2: Adding input dimensions dramatically increases the complexity of the classifier (VC dimension), hence the number of training examples required to avoid overfitting.
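As a rough sketch of this naive approach (the series, the window length \(T = 10\) and the shapes are purely illustrative):

```python
import numpy as np

# Hypothetical univariate time series (e.g. a temperature recording).
series = np.random.randn(1000)

T = 10  # window length: how far in the past do we look? (Problem 1)

# Each input vector concatenates T consecutive values; the target is
# the next value of the series.
X = np.array([series[i:i + T] for i in range(len(series) - T)])
y = series[T:]

print(X.shape, y.shape)  # (990, 10) (990,)
```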
9.1.2. Recurrent neural network
A recurrent neural network (RNN) uses its previous output as an additional input (context). All vectors have a time index \(t\) denoting the time at which this vector was computed.
The input vector at time \(t\) is \(\mathbf{x}_t\), the output vector is \(\mathbf{h}_t\):
\(\sigma\) is a transfer function, usually logistic or tanh. The input \(\mathbf{x}_t\) and previous output \(\mathbf{h}_{t-1}\) are multiplied by learnable weights:
\(W_x\) is the input weight matrix.
\(W_h\) is the recurrent weight matrix.
One can unroll a recurrent network: the output \(\mathbf{h}_t\) depends on the whole history of inputs from \(\mathbf{x}_0\) to \(\mathbf{x}_t\).
An RNN is considered part of deep learning, as there are many layers of weights between the first input \(\mathbf{x}_0\) and the output \(\mathbf{h}_t\). The only difference with a DNN is that the weights \(W_x\) and \(W_h\) are reused at each time step.
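A minimal NumPy sketch of this forward pass, directly following the formula above (tanh as transfer function, arbitrary dimensions, biases omitted as in the equations):

```python
import numpy as np

def rnn_step(x, h_prev, W_x, W_h):
    """One time step: h_t = tanh(W_x x_t + W_h h_{t-1})."""
    return np.tanh(W_x @ x + W_h @ h_prev)

d_in, d_out = 3, 5                          # arbitrary dimensions
W_x = np.random.randn(d_out, d_in) * 0.1    # input weight matrix
W_h = np.random.randn(d_out, d_out) * 0.1   # recurrent weight matrix

h = np.zeros(d_out)                         # initial context
for t in range(20):
    x = np.random.randn(d_in)               # input x_t
    h = rnn_step(x, h, W_x, W_h)            # h_t depends on the whole history
```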
9.1.3. BPTT: Backpropagation through time
The function between the history of inputs and the output at time \(t\) is differentiable: we can simply apply gradient descent to find the weights! This variant of backpropagation is called Backpropagation Through Time (BPTT). Once the loss between \(\mathbf{h}_t\) and its desired value is computed, one applies the chain rule to find out how to modify the weights \(W_x\) and \(W_h\) using the history \((\mathbf{x}_0, \mathbf{x}_1, \ldots, \mathbf{x}_t)\).
Let’s compute the gradient accumulated between \(\mathbf{h}_{t-1}\) and \(\mathbf{h}_{t}\):
As for feedforward networks, the gradient of the loss function is decomposed into two parts:
The first part only depends on the loss function (mse, cross-entropy):
The second part depends on the RNN itself:
The gradients w.r.t. the two weight matrices are given by this recursive relationship (product rule):
The derivative of the transfer function is denoted \(\mathbf{h'}_{t}\):
If we unroll the gradient, we obtain:
When updating the weights at time \(t\), we need to store in memory:
the complete history of inputs \(\mathbf{x}_0\), \(\mathbf{x}_1\), … \(\mathbf{x}_t\).
the complete history of outputs \(\mathbf{h}_0\), \(\mathbf{h}_1\), … \(\mathbf{h}_t\).
the complete history of derivatives \(\mathbf{h'}_0\), \(\mathbf{h'}_1\), … \(\mathbf{h'}_t\).
before computing the gradients iteratively, starting from time \(t\) and accumulating gradients backwards in time until \(t=0\). Each step backwards in time adds a bit to the gradient used to update the weights.
In practice, going back to \(t=0\) at each time step requires too many computations, which may not be needed. Truncated BPTT only accumulates the gradients over the last \(T\) steps: the gradients are computed backwards from \(t\) to \(t-T\), and the partial derivative at \(t-T-1\) is set to 0. This limits the horizon of BPTT: dependencies longer than \(T\) steps will not be learned, so \(T\) has to be chosen carefully for the task. It becomes yet another hyperparameter of your algorithm…
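A sketch of this backward accumulation for a tanh RNN, assuming the histories listed above have been stored:

```python
import numpy as np

def truncated_bptt(xs, hs, dL_dh, W_x, W_h, T):
    """Accumulate the gradients backwards in time over at most T steps.

    xs: history of inputs x_0 ... x_t.
    hs: history of outputs, with hs[0] the initial state (so hs[k+1] = h_k).
    dL_dh: gradient of the loss w.r.t. the last output h_t.
    Assumes h_t = tanh(W_x x_t + W_h h_{t-1}).
    """
    dW_x, dW_h = np.zeros_like(W_x), np.zeros_like(W_h)
    delta = dL_dh
    t = len(xs) - 1
    for k in range(t, max(t - T, -1), -1):    # at most T steps backwards
        delta = delta * (1 - hs[k + 1] ** 2)  # through the tanh: h'_k
        dW_x += np.outer(delta, xs[k])        # contribution to dL/dW_x
        dW_h += np.outer(delta, hs[k])        # contribution to dL/dW_h
        delta = W_h.T @ delta                 # one more step back in time
    return dW_x, dW_h
```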
9.1.4. Vanishing gradients
BPTT is able to find short-term dependencies between inputs and outputs: perceiving the inputs \(\mathbf{x}_0\) and \(\mathbf{x}_1\) allows the network to respond correctly at \(t = 3\).
But it fails to detect long-term dependencies because of:
the truncated horizon \(T\) (for computational reasons).
the vanishing gradient problem [Hochreiter, 1991].
Let’s look at the gradient w.r.t. the input weights:
At each iteration backwards in time, the gradients are multiplied by \(W_h\). If you expand how \(\frac{\partial \mathbf{h}_t}{\partial W_x}\) depends on \(\mathbf{x}_0\), you obtain something like:
If \(|W_h| > 1\), \(|(W_h)^t|\) increases exponentially with \(t\): the gradient explodes. If \(|W_h| < 1\), \(|(W_h)^t|\) decreases exponentially with \(t\): the gradient vanishes.
Exploding gradients are relatively easy to deal with: one just clips the norm of the gradient to a maximal value.
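A sketch of such norm clipping (the maximal norm is yet another hyperparameter):

```python
import numpy as np

def clip_gradient(grad, max_norm=5.0):
    """Rescale the gradient so that its norm never exceeds max_norm."""
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        grad = grad * (max_norm / norm)
    return grad
```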
But there is no solution to the vanishing gradient problem for regular RNNs: the gradient fades over time (backwards) and no long-term dependency can be learned. This is the same problem as for deep feedforward networks: an RNN is just a deep network rolled over itself. Its depth (number of layers) corresponds to the maximal number of steps back in time. In order to limit vanishing gradients and learn long-term dependencies, one has to use a more complex structure for the layer. This is the idea behind long short-term memory (LSTM) networks.
9.2. Long short-term memory networks - LSTM
Note
All figures in this section are taken from this great blog post by Christopher Olah, which is worth a read: https://colah.github.io/posts/2015-08-Understanding-LSTMs/
An LSTM layer [Hochreiter & Schmidhuber, 1997] is an RNN layer with the ability to control what it memorizes. In addition to the input \(\mathbf{x}_t\) and output \(\mathbf{h}_t\), it also has a state \(\mathbf{C}_t\) which is maintained over time. The state is the memory of the layer (sometimes called context). The layer also contains three multiplicative gates:
The input gate controls which inputs should enter the memory.
are they worth remembering?
The forget gate controls which memory should be forgotten.
do I still need them?
The output gate controls which part of the memory should be used to produce the output.
should I respond now? Do I have enough information?
The state \(\mathbf{C}_t\) can be seen as an accumulator integrating inputs (and previous outputs) over time. The gates learn to open and close through learnable weights.
9.2.1. State conveyor belt
By default, the cell state \(\mathbf{C}_t\) stays the same over time (conveyor belt). It can have the same number of dimensions as the output \(\mathbf{h}_t\), but does not have to. Its content can be erased by multiplying it with a vector of 0s, or preserved by multiplying it with a vector of 1s. A sigmoid function produces exactly such gating values between 0 and 1:
9.2.2. Forget gate
Forget weights \(W_f\) and a sigmoid function are used to decide if the state should be preserved or not.
\([\mathbf{h}_{t-1}; \mathbf{x}_t]\) is simply the concatenation of the two vectors \(\mathbf{h}_{t-1}\) and \(\mathbf{x}_t\). \(\mathbf{f}_t\) is a vector of values between 0 and 1, one per dimension of the cell state \(\mathbf{C}_t\).
9.2.3. Input gate
Similarly, the input gate uses a sigmoid function to decide if the state should be updated or not.
As for RNNs, the input \(\mathbf{x}_t\) and previous output \(\mathbf{h}_{t-1}\) are combined to produce a candidate state \(\tilde{\mathbf{C}}_t\) using the tanh transfer function.
9.2.4. Candidate state
The new state \(\mathbf{C}_t\) is computed as a part of the previous state \(\mathbf{C}_{t-1}\) (element-wise multiplication with the forget gate \(\mathbf{f}_t\)) plus a part of the candidate state \(\tilde{\mathbf{C}}_t\) (element-wise multiplication with the input gate \(\mathbf{i}_t\)).
Depending on the gates, the new state can be equal to the previous state (gates closed), the candidate state (gates opened) or a mixture of both.
9.2.5. Output gate
The output gate decides which part of the new state will be used for the output.
The output not only influences the decision, but also how the gates will be updated at the next step.
9.2.6. LSTM layer
The function between \(\mathbf{x}_t\) and \(\mathbf{h}_t\) is quite complicated, with many different weights, but everything is differentiable: BPTT can be applied.
Equations:
Forget gate: \(\mathbf{f}_t = \sigma(W_f \times [\mathbf{h}_{t-1}; \mathbf{x}_t] + \mathbf{b}_f)\)
Input gate: \(\mathbf{i}_t = \sigma(W_i \times [\mathbf{h}_{t-1}; \mathbf{x}_t] + \mathbf{b}_i)\)
Output gate: \(\mathbf{o}_t = \sigma(W_o \times [\mathbf{h}_{t-1}; \mathbf{x}_t] + \mathbf{b}_o)\)
Candidate state: \(\tilde{\mathbf{C}}_t = \text{tanh}(W_C \times [\mathbf{h}_{t-1}; \mathbf{x}_t] + \mathbf{b}_C)\)
New state: \(\mathbf{C}_t = \mathbf{f}_t \odot \mathbf{C}_{t-1} + \mathbf{i}_t \odot \tilde{\mathbf{C}}_t\)
Output: \(\mathbf{h}_t = \mathbf{o}_t \odot \text{tanh}(\mathbf{C}_t)\)
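These equations transcribe almost line by line into a NumPy sketch of a single LSTM step (the weight shapes are implied by the concatenation \([\mathbf{h}_{t-1}; \mathbf{x}_t]\)):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, C_prev, W_f, W_i, W_o, W_C, b_f, b_i, b_o, b_C):
    """One LSTM step, following the equations above."""
    z = np.concatenate([h_prev, x])     # [h_{t-1}; x_t]
    f = sigmoid(W_f @ z + b_f)          # forget gate
    i = sigmoid(W_i @ z + b_i)          # input gate
    o = sigmoid(W_o @ z + b_o)          # output gate
    C_tilde = np.tanh(W_C @ z + b_C)    # candidate state
    C = f * C_prev + i * C_tilde        # new state
    h = o * np.tanh(C)                  # output
    return h, C
```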
9.2.7. Vanishing gradients
How do LSTMs solve the vanishing gradient problem? Not all inputs are remembered by the LSTM: the input gate controls what comes in. If only \(\mathbf{x}_0\) and \(\mathbf{x}_1\) are needed to produce \(\mathbf{h}_{t+1}\), they will be the only ones stored in the state; the other inputs are ignored.
If the state stays constant between \(t=1\) and \(t\), the gradient of the error will not vanish when backpropagating from \(t\) to \(t=1\), because nothing happens!
The gradient is multiplied by exactly one when the gates are closed.
LSTMs are particularly good at learning long-term dependencies, because the gates protect the cell from vanishing gradients. The difficulty is to find out which inputs (e.g. \(\mathbf{x}_0\) and \(\mathbf{x}_1\)) should enter or leave the state memory.
Truncated BPTT is used to train all weights: the weights for the candidate state (as in a regular RNN) and the weights of the three gates. LSTMs are also subject to overfitting, so regularization (including dropout) can be used. The weights (including those of the gates) can be convolutional. The gates also have biases, which can be fixed by hand (but good values are hard to find). LSTM layers can be stacked to detect dependencies at different scales (deep LSTM network), as in the sketch below.
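For illustration, a stacked LSTM with dropout might look as follows in Keras (layer sizes and input shape are arbitrary assumptions):

```python
import tensorflow as tf

# Every LSTM layer except the last returns its full output sequence,
# so that the next layer receives one vector per time step.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, return_sequences=True, input_shape=(None, 10)),
    tf.keras.layers.Dropout(0.5),   # regularization, as noted above
    tf.keras.layers.LSTM(64),       # last layer returns only h_t
    tf.keras.layers.Dense(1),
])
```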
9.2.8. Peephole connections
A popular variant of LSTM adds peephole connections [Gers & Schmidhuber, 2000], where the three gates additionally have access to the state \(\mathbf{C}_{t-1}\).
\begin{align}
\mathbf{f}_t &= \sigma(W_f \times [\mathbf{C}_{t-1}; \mathbf{h}_{t-1}; \mathbf{x}_t] + \mathbf{b}_f) \\
\mathbf{i}_t &= \sigma(W_i \times [\mathbf{C}_{t-1}; \mathbf{h}_{t-1}; \mathbf{x}_t] + \mathbf{b}_i) \\
\mathbf{o}_t &= \sigma(W_o \times [\mathbf{C}_{t}; \mathbf{h}_{t-1}; \mathbf{x}_t] + \mathbf{b}_o) \\
\end{align}
It usually works better, but adds more weights.
9.2.9. GRU: Gated Recurrent Unit
Another variant is called the Gated Recurrent Unit (GRU) [Chung et al., 2014]. It directly uses the output \(\mathbf{h}_t\) as the state, and the forget and input gates are merged into a single update gate \(\mathbf{z}_t\) (a reset gate \(\mathbf{r}_t\) modulates the previous output in the candidate).
\begin{align}
\mathbf{z}_t &= \sigma(W_z \times [\mathbf{h}_{t-1}; \mathbf{x}_t]) \\
\mathbf{r}_t &= \sigma(W_r \times [\mathbf{h}_{t-1}; \mathbf{x}_t]) \\
\tilde{\mathbf{h}}_t &= \text{tanh}(W_h \times [\mathbf{r}_t \odot \mathbf{h}_{t-1}; \mathbf{x}_t]) \\
\mathbf{h}_t &= (1 - \mathbf{z}_t) \odot \mathbf{h}_{t-1} + \mathbf{z}_t \odot \tilde{\mathbf{h}}_t \\
\end{align}
It does not even need biases (they are mostly useless in LSTMs anyway). The GRU is much simpler to train than the LSTM, and almost as powerful.
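The same kind of transcription gives a NumPy sketch of a single GRU step (no biases, following the equations above):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, W_z, W_r, W_h):
    """One GRU step, following the equations above."""
    zx = np.concatenate([h_prev, x])                          # [h_{t-1}; x_t]
    z = sigmoid(W_z @ zx)                                     # update gate
    r = sigmoid(W_r @ zx)                                     # reset gate
    h_tilde = np.tanh(W_h @ np.concatenate([r * h_prev, x]))  # candidate
    return (1 - z) * h_prev + z * h_tilde                     # new output/state
```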
9.2.10. Bidirectional LSTM
A bidirectional LSTM learns to predict the output in two directions:
The forward line learns using the past context (classical LSTM).
The backward line learns using the future context (inputs are reversed).
The two state vectors are then concatenated at each time step to produce the output. This is only possible offline, as the future inputs must be known. It works better than a unidirectional LSTM on many problems, but is slower.
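In Keras, this amounts to wrapping an LSTM layer (a sketch; the layer size is arbitrary):

```python
import tensorflow as tf

# One LSTM reads the sequence forwards, a second one backwards;
# their outputs are concatenated at each time step.
layer = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(64, return_sequences=True),
    merge_mode="concat",
)
```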
9.3. word2vec
The most famous application of RNNs is Natural Language Processing (NLP): text understanding, translation, etc. Each word of a sentence has to be represented as a vector \(\mathbf{x}_t\) in order to be fed to an LSTM. Which representation should we use?
The naive solution is to use one-hot encoding, one element of the vector corresponding to one word of the dictionary.
One-hot encoding is not a good representation for words:
The vector size will depend on the number of words of the language:
English: 171,476 (Oxford English Dictionary), 470,000 (Merriam-Webster)… 20,000 in practice.
French: 270,000 (TILF).
German: 200,000 (Duden).
Chinese: 370,000 (Hanyu Da Cidian).
Korean: 1,100,373 (Woori Mal Saem).
Semantically related words have completely different representations (“endure” and “tolerate”).
The representation is extremely sparse (a lot of useless zeros).
word2vec [Mikolov et al., 2013] learns word embeddings by trying to predict the current word based on the context (CBOW, continuous bag-of-words) or the context based on the current word (skip-gram). See https://code.google.com/archive/p/word2vec/ and https://www.tensorflow.org/tutorials/representation/word2vec for more information.
It uses a three-layer autoencoder-like NN, where the hidden layer (latent space) will learn to represent the one-hot encoded words in a dense manner.
word2vec has three parameters:
the vocabulary size: number of words in the dictionary.
the embedding size: number of neurons in the hidden layer.
the context size: number of surrounding words to predict.
It is trained on huge datasets of sentences (e.g. Wikipedia). After learning, the hidden layer represents an embedding vector, which is a dense and compressed representation of each possible word (dimensionality reduction). Semantically close words (“endure” and “tolerate”) tend to appear in similar contexts, so their embedded representations will be close (Euclidean distance). One can even perform arithmetic operations on these vectors!
queen = king + woman - man
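A minimal sketch with the gensim library (assuming gensim ≥ 4; a real model needs a corpus of millions of sentences):

```python
from gensim.models import Word2Vec

# Tiny illustrative corpus: lists of tokenized sentences.
sentences = [["the", "king", "rules", "the", "kingdom"],
             ["the", "queen", "rules", "the", "kingdom"]]

model = Word2Vec(sentences,
                 vector_size=100,   # embedding size (hidden layer)
                 window=5,          # context size
                 min_count=1,
                 sg=0)              # 0 = CBOW, 1 = skip-gram

# With a real corpus, vector arithmetic recovers analogies:
# model.wv.most_similar(positive=["king", "woman"], negative=["man"])
```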
9.4. Applications of RNNs
9.4.1. Classification of LSTM architectures
Several architectures are possible using recurrent neural networks:
One to One: classical feedforward network.
Image \(\rightarrow\) Label.
One to Many: single input, many outputs.
Image \(\rightarrow\) Text.
Many to One: sequence of inputs, single output.
Video / Text \(\rightarrow\) Label.
Many to Many: sequence to sequence.
Text \(\rightarrow\) Text.
Video \(\rightarrow\) Text.
9.4.3. Next character/word prediction
Characters or words are fed one by one into an LSTM. The desired output is the next character or word in the text.
Example:
Inputs: To, be, or, not, to
Output: be
The text below was generated by an LSTM having read the entire writings of William Shakespeare, learning to predict the next letter (see http://karpathy.github.io/2015/05/21/rnn-effectiveness/). Each generated character is used as the next input; a minimal model sketch follows the sample.
PANDARUS:
Alas, I think he shall be come approached and the day
When little srain would be attain'd into being never fed,
And who is but a chain and subjects of his death,
I should not sleep.
Second Senator:
They are away this miseries, produced upon my soul,
Breaking and strongly should be buried, when I perish
The earth and thoughts of many states.
DUKE VINCENTIO:
Well, your wit is in the care of side and that.
Second Lord:
They would be ruled after this chamber, and
my fair nues begun out of the fact, to be conveyed,
Whose noble souls I'll have the heart of the wars.
Clown:
Come, sir, I will make did behold your worship.
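A minimal sketch of such a character-level model in Keras (vocabulary and layer sizes are illustrative assumptions, not Karpathy's exact setup):

```python
import tensorflow as tf

vocab_size = 65   # number of distinct characters in the corpus (assumption)

# Embed each character, run an LSTM over the sequence, and predict a
# probability distribution over the next character.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 32),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(vocab_size, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
```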
The SciFi movie Sunspring was written in a similar way. More info: http://www.thereforefilms.com/sunspring.html
9.4.4. Sentiment analysis
Sentiment analysis consists of attributing a value (positive or negative) to a text. A 1D convolutional layer “slides” over the text, each word being encoded using word2vec. A bidirectional LSTM computes a state vector for the complete text. A classifier (fully-connected layer) learns to predict the sentiment of the text (positive/negative).
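A sketch of this architecture in Keras (sizes are assumptions; the inputs are sequences of word2vec vectors):

```python
import tensorflow as tf

embedding_dim = 300  # dimension of the word2vec vectors (assumption)

model = tf.keras.Sequential([
    # 1D convolution sliding over the embedded words.
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu",
                           input_shape=(None, embedding_dim)),
    # Bidirectional LSTM summarizing the whole text.
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    # Fully-connected classifier: positive / negative.
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
```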
9.4.5. Question answering / Scene understanding
An LSTM can learn to associate an image (static) plus a question (sequence) with the answer (sequence). The image is abstracted by a CNN pretrained for object recognition.
9.4.6. seq2seq
The state vector obtained at the end of a sequence can be reused as an initial state for another LSTM. The goal of the encoder is to find a compressed representation of a sequence of inputs. The goal of the decoder is to generate a sequence from that representation. Sequence-to-sequence (seq2seq [Sutskever et al., 2014]) models are recurrent autoencoders.
The encoder learns, for example, to encode each word of a French sentence. The decoder learns to associate the final state vector to the corresponding English sentence. seq2seq allows automatic text translation between many languages, given enough data. Modern translation tools are based on seq2seq, but with attention.
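A sketch of such an encoder-decoder in Keras (vocabulary and state sizes are illustrative; modern systems add attention):

```python
import tensorflow as tf

vocab_src, vocab_tgt, latent = 10000, 10000, 256  # illustrative sizes

# Encoder: read the source sentence, keep only the final LSTM states.
enc_in = tf.keras.Input(shape=(None,))
enc_emb = tf.keras.layers.Embedding(vocab_src, latent)(enc_in)
_, state_h, state_c = tf.keras.layers.LSTM(latent, return_state=True)(enc_emb)

# Decoder: generate the target sentence starting from the encoder states.
dec_in = tf.keras.Input(shape=(None,))
dec_emb = tf.keras.layers.Embedding(vocab_tgt, latent)(dec_in)
dec_out = tf.keras.layers.LSTM(latent, return_sequences=True)(
    dec_emb, initial_state=[state_h, state_c])
probs = tf.keras.layers.Dense(vocab_tgt, activation="softmax")(dec_out)

model = tf.keras.Model([enc_in, dec_in], probs)
```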