In NMT, we map the meaning of a sentence into a fixed-length vector representation and then generate a translation based on that vector. By not relying on things like n-gram counts and instead trying to capture the higher-level meaning of a text, NMT systems generalize to new sentences better than many other approaches.
If you plot the embeddings of different sentences in a low-dimensional space, using PCA or t-SNE for dimensionality reduction, you can see that semantically similar phrases end up close to each other.
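A minimal sketch of such a plot, assuming we already have a matrix of sentence embeddings taken from an NMT encoder; the random arrays and labels below are only stand-ins, not part of any particular model:

```python
# Project sentence embeddings to 2-D with PCA and t-SNE, then scatter-plot them.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(50, 256))        # stand-in for real encoder sentence vectors
labels = [f"sentence {i}" for i in range(50)]  # stand-in for the actual sentences

# PCA is a fast linear projection; t-SNE is non-linear and often separates
# semantic clusters more clearly at the cost of a slower fit.
pca_2d = PCA(n_components=2).fit_transform(embeddings)
tsne_2d = TSNE(n_components=2, perplexity=10, init="pca",
               random_state=0).fit_transform(embeddings)

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, points, title in [(axes[0], pca_2d, "PCA"), (axes[1], tsne_2d, "t-SNE")]:
    ax.scatter(points[:, 0], points[:, 1], s=12)
    ax.set_title(title)
plt.tight_layout()
plt.show()
```

With real embeddings, semantically similar sentences should form nearby points in either projection.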
Recurrent Neural Networks are known to have problems dealing with long-range dependencies of this kind, where a word in the translation may depend on a word far back in the source sentence. In theory, architectures like LSTMs should be able to deal with this, but in practice long-range dependencies are still problematic.
Reversing the source sentence is a “hack”: it makes things work better in practice, but it is not a principled solution. Moreover, there are languages (like Japanese) where the last word of a sentence can be highly predictive of the first word of the English translation. In that case, reversing the input would make things worse.
Is a single hidden state enough to capture everything about the sequence? No!
Attention
We allow the decoder to “attend” to different parts of the source sentence at each step of the output generation. Importantly, we let the model learn what to attend to based on the input sentence and what it has produced so far.
A big advantage of attention is that it gives us the ability to interpret and visualize what the model is doing.
The basic problem that the attention mechanism solves is that it allows the network to refer back to the input sequence, instead of forcing it to encode all information into one fixed-length vector.
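A minimal NumPy sketch of one decoder step of soft attention, using Bahdanau-style additive scoring; the weight names (W_enc, W_dec, v) and sizes are illustrative, not taken from any particular implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(encoder_states, decoder_state, W_enc, W_dec, v):
    """Return attention weights over the source and the resulting context vector.

    encoder_states: (T_src, d_enc) -- one hidden state per source token
    decoder_state:  (d_dec,)       -- current decoder hidden state
    """
    # Score each source position against the current decoder state.
    scores = np.tanh(encoder_states @ W_enc + decoder_state @ W_dec) @ v  # (T_src,)
    weights = softmax(scores)           # distribution over source positions
    context = weights @ encoder_states  # weighted sum of encoder states
    return weights, context

# Toy usage with random parameters.
rng = np.random.default_rng(0)
T_src, d_enc, d_dec, d_att = 6, 8, 8, 16
enc = rng.normal(size=(T_src, d_enc))
dec = rng.normal(size=(d_dec,))
W_enc = rng.normal(size=(d_enc, d_att))
W_dec = rng.normal(size=(d_dec, d_att))
v = rng.normal(size=(d_att,))

weights, context = attend(enc, dec, W_enc, W_dec, v)
print(weights.round(3), weights.sum())  # weights sum to 1 over the source positions
```

The `weights` vector is exactly what gets visualized in attention heat maps: one row of weights per generated target word.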
Downsides of the Attention Model
If you do character-level computations and deal with sequences consisting of hundreds of tokens, the above attention mechanism can become prohibitively expensive.
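A rough back-of-the-envelope illustration (the sequence lengths below are made up): soft attention computes a score for every (source position, target position) pair, so the work grows with the product of the two lengths.

```python
# Number of alignment scores per sentence pair grows as T_src * T_tgt.
for level, t_src, t_tgt in [("word-level", 30, 30), ("character-level", 300, 300)]:
    print(f"{level:15s}: {t_src * t_tgt:,} attention scores per sentence pair")
# word-level     : 900 attention scores per sentence pair
# character-level: 90,000 attention scores per sentence pair
```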
Intuitively, the point of attention is that by focusing on one thing we can neglect many other things. But that is not really what the model above does: it essentially looks at everything in detail before deciding what to focus on.
An alternative approach to attention is to use Reinforcement Learning to predict an approximate location to focus on.
Background Knowledge Needed
Bidirectional RNN
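A minimal sketch of a bidirectional RNN encoder, here using PyTorch's nn.GRU purely for illustration (sizes are arbitrary): the sequence is read both left-to-right and right-to-left, and the two hidden states at each position are concatenated.

```python
import torch
import torch.nn as nn

embed_dim, hidden_dim, seq_len, batch = 32, 64, 10, 2
encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True, bidirectional=True)

x = torch.randn(batch, seq_len, embed_dim)  # stand-in for embedded source tokens
outputs, _ = encoder(x)
print(outputs.shape)  # torch.Size([2, 10, 128]) -- forward and backward states concatenated
```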
References
http://www.wildml.com/2016/01/attention-and-memory-in-deep-learning-and-nlp/
https://medium.com/syncedreview/a-brief-overview-of-attention-mechanism-13c578ba9129