This tutorial highlights the structure of common RNN algorithms by following and understanding the computations carried out by each model. It is intended for anyone familiar with the general deep learning workflow, but without prior understanding of RNNs. If you have never heard of RNNs before, you can read this post by Christopher Olah first.
The present post focuses on understanding the computations in each model step by step, without attempting to train anything useful. It is illustrated with Keras code and divided into five parts:
- TimeDistributed component,
- Simple RNN,
- Simple RNN with two hidden layers,