Autoencoders
Autoencoding is a data compression algorithm in which the compression and decompression functions are implemented with neural networks. Autoencoders are:
– Data-specific: An autoencoder trained on pictures of faces would do a rather poor job on pictures of trees.
– Lossy: Decompressed outputs will be degraded compared to the original inputs.
– Learned automatically: Easy to train specialized instances of the algorithm that will perform well on a specific type of input.
You need three things to train an autoencoder (sketched in code after this list):
– Encoding function
– Decoding function
– Distance (loss) function that measures how much information is lost between the original input and its decompressed reconstruction.
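A minimal sketch of these three pieces in plain Python/NumPy; the single-layer encoder/decoder and the MSE distance here are illustrative assumptions, not the only choices:

```python
import numpy as np

def encode(x, W_enc):
    # Encoding function: compress x into a smaller representation
    return np.maximum(0, x @ W_enc)          # e.g. a ReLU layer

def decode(h, W_dec):
    # Decoding function: reconstruct the input from the code
    return 1 / (1 + np.exp(-(h @ W_dec)))    # e.g. a sigmoid layer

def distance(x, x_hat):
    # Distance (loss) function between input and reconstruction
    return np.mean((x - x_hat) ** 2)         # e.g. mean squared error
```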
* In picture compression, it is pretty difficult to train an autoencoder that does a better job than a basic algorithm like JPEG
* You can only use them on data that is similar to what they were trained on, and they require lots of training data.
As a pretraining method, autoencoders fell out of fashion once careful random initialization was found to work just as well. Deeper network training was then enabled by batch normalization and residual networks, and autoencoder pretraining was out… Now they are used for:
– Data denoising
– Dimensionality reduction for data visualization: appropriate dimensionality and a sparsity constraint are required (sometimes learns better projections than PCA; for 2D plots, often used as a front end to t-SNE)
Example
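A minimal sketch of the basic setup, assuming tensorflow.keras and MNIST digits flattened to 784-dim vectors; the 32-dim code and binary cross-entropy loss follow the standard Keras example, while the epoch and batch settings are assumptions:

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.datasets import mnist

# Load MNIST and flatten 28x28 images to 784-dim vectors in [0, 1]
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

inputs = layers.Input(shape=(784,))
encoded = layers.Dense(32, activation="relu")(inputs)        # 32-dim code
decoded = layers.Dense(784, activation="sigmoid")(encoded)   # reconstruction

autoencoder = Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.fit(x_train, x_train,            # input is also the target
                epochs=50, batch_size=256, shuffle=True,
                validation_data=(x_test, x_test))
```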
loss 0.1064, val_loss 0.1045
With this, we lose quite a bit of detail…
We were constrained by the size of the hidden layer (32).
Adding a sparsity constraint (fewer units fire at any given time) is good: in Keras, use activity_regularizer on the encoding layer (see the sketch below).
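A sketch of the same model with an L1 activity penalty on the code layer; the regularization strength 1e-5 is an assumption:

```python
from tensorflow.keras import layers, regularizers, Model

inputs = layers.Input(shape=(784,))
encoded = layers.Dense(
    32, activation="relu",
    activity_regularizer=regularizers.l1(1e-5))(inputs)  # sparsity pressure
decoded = layers.Dense(784, activation="sigmoid")(encoded)

sparse_autoencoder = Model(inputs, decoded)
sparse_autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
```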
loss 0.1337, val_loss 0.1320 –> The loss is higher here because the reported number includes the L1 activity penalty on top of the reconstruction error, so it is not directly comparable to the unregularized run.
Adding depth is effective
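A sketch of a deeper stack; the 128-64-32 encoder mirrored by the decoder is one common choice, treat the exact sizes as assumptions:

```python
from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(784,))
x = layers.Dense(128, activation="relu")(inputs)
x = layers.Dense(64, activation="relu")(x)
encoded = layers.Dense(32, activation="relu")(x)     # 32-dim bottleneck
x = layers.Dense(64, activation="relu")(encoded)
x = layers.Dense(128, activation="relu")(x)
decoded = layers.Dense(784, activation="sigmoid")(x)

deep_autoencoder = Model(inputs, decoded)
deep_autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
```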
loss 0.1005, val_loss 0.0999
Using convolution to replace fully connected layers is effective too
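A convolutional sketch that keeps the 28x28x1 image shape instead of flattening; the filter counts (16 and 8) are assumptions:

```python
from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(28, 28, 1))
x = layers.Conv2D(16, (3, 3), activation="relu", padding="same")(inputs)
x = layers.MaxPooling2D((2, 2), padding="same")(x)            # 14x14
x = layers.Conv2D(8, (3, 3), activation="relu", padding="same")(x)
encoded = layers.MaxPooling2D((2, 2), padding="same")(x)      # 7x7 code

x = layers.Conv2D(8, (3, 3), activation="relu", padding="same")(encoded)
x = layers.UpSampling2D((2, 2))(x)                            # 14x14
x = layers.Conv2D(16, (3, 3), activation="relu", padding="same")(x)
x = layers.UpSampling2D((2, 2))(x)                            # 28x28
decoded = layers.Conv2D(1, (3, 3), activation="sigmoid", padding="same")(x)

conv_autoencoder = Model(inputs, decoded)
conv_autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
```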
loss 0.0976, val_loss 0.0960
For sequence inputs, you can use LSTMs (a sequence-to-sequence autoencoder, sketched below)
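A sequence-to-sequence sketch: one LSTM encodes the whole sequence into a single vector, which is repeated once per timestep and decoded back by a second LSTM; timesteps, input_dim, and latent_dim here are placeholder assumptions:

```python
from tensorflow.keras import layers, Model

timesteps, input_dim, latent_dim = 28, 28, 32

inputs = layers.Input(shape=(timesteps, input_dim))
encoded = layers.LSTM(latent_dim)(inputs)              # sequence -> vector
x = layers.RepeatVector(timesteps)(encoded)            # vector -> sequence
decoded = layers.LSTM(input_dim, return_sequences=True)(x)

seq_autoencoder = Model(inputs, decoded)
seq_autoencoder.compile(optimizer="adam", loss="mse")
```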
There are also Variational Autoencoders (VAEs), which learn a probabilistic latent space and come just before GANs among generative models.
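For reference, a minimal VAE sketch assuming tensorflow.keras: the encoder predicts the mean and log-variance of a latent Gaussian, a sample is drawn via the reparameterization trick, and the training loss adds a KL term to the reconstruction error (layer sizes and latent_dim are assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

latent_dim = 2

inputs = layers.Input(shape=(784,))
h = layers.Dense(256, activation="relu")(inputs)
z_mean = layers.Dense(latent_dim)(h)        # mean of q(z|x)
z_log_var = layers.Dense(latent_dim)(h)     # log-variance of q(z|x)

def sample(args):
    # Reparameterization trick: z = mean + sigma * epsilon
    mean, log_var = args
    eps = tf.random.normal(tf.shape(mean))
    return mean + tf.exp(0.5 * log_var) * eps

z = layers.Lambda(sample)([z_mean, z_log_var])

h_dec = layers.Dense(256, activation="relu")(z)
decoded = layers.Dense(784, activation="sigmoid")(h_dec)

class VAELoss(layers.Layer):
    # Adds reconstruction + KL divergence as the model's training loss
    def call(self, tensors):
        x, x_hat, mean, log_var = tensors
        recon = 784 * tf.reduce_mean(
            tf.keras.losses.binary_crossentropy(x, x_hat))
        kl = -0.5 * tf.reduce_mean(tf.reduce_sum(
            1 + log_var - tf.square(mean) - tf.exp(log_var), axis=-1))
        self.add_loss(recon + kl)
        return x_hat

outputs = VAELoss()([inputs, decoded, z_mean, z_log_var])
vae = Model(inputs, outputs)
vae.compile(optimizer="adam")   # loss is supplied by VAELoss.add_loss
```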