Latent space
A latent space can be understood as a compressed representation of the data. The compression preserves only the key features and discards unimportant information. Take an encoder-decoder network as an example: a fully convolutional network (FCN) first learns image features, and the dimensionality reduction performed during feature extraction can be viewed as a form of lossy compression. Because the decoder must reconstruct the data, the model is forced to learn how to keep all the relevant information while ignoring the noise. The benefit of this compression (dimensionality reduction) is that the model can discard redundant information and focus on the most critical features.
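The lossy-compression idea can be sketched with a linear stand-in for an encoder: truncated SVD keeps only the top components and discards the rest as noise. The data below (a 2-dimensional signal embedded in 5 dimensions) is purely illustrative, not from the article.

```python
import numpy as np

# Hypothetical data: 200 samples in 5 dimensions, but the real signal
# lives in a 2-dimensional subspace, plus a little noise.
rng = np.random.default_rng(0)
basis = rng.normal(size=(2, 5))            # the 2 "true" feature directions
codes = rng.normal(size=(200, 2))          # the underlying key features
X = codes @ basis + 0.01 * rng.normal(size=(200, 5))

# Lossy compression via truncated SVD (a linear analogue of an encoder):
# keep only the top-2 components, treat the rest as discarded noise.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T                          # 5D -> 2D latent codes
X_rec = Z @ Vt[:2] + X.mean(axis=0)        # 2D -> 5D reconstruction

# Most of the information survives the compression.
rel_err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)
```

Even after throwing away three of the five dimensions, the relative reconstruction error stays tiny, because only noise was discarded.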
Representation learning
In representation learning, the latent space transforms raw data from a more complex form into a simpler representation that is more useful for downstream processing. In the latent space, feature differences between similar samples are removed as redundant information, and only their core features are retained. So when data points are mapped into the latent space, points with similar features end up closer together. The figure below maps three-dimensional data into a two-dimensional latent space, where similar samples lie closer to each other.
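The "similar points end up closer" claim can be checked with a small sketch: two groups of similar samples in 3D are projected to 2D (PCA here is an illustrative linear stand-in for a learned encoder), and within-group distances stay much smaller than the distance between the groups.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two groups of similar samples in 3D (illustrative stand-in for raw data).
group_a = rng.normal(loc=[0, 0, 0], scale=0.3, size=(50, 3))
group_b = rng.normal(loc=[4, 4, 4], scale=0.3, size=(50, 3))
X = np.vstack([group_a, group_b])

# Map 3D -> 2D with PCA (a simple linear stand-in for a learned encoder).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T

# Spread of one group around its own centre vs. the gap between groups.
za, zb = Z[:50], Z[50:]
within = np.linalg.norm(za - za.mean(axis=0), axis=1).mean()
between = np.linalg.norm(za.mean(axis=0) - zb.mean(axis=0))
```

In the 2D latent space, `within` is far smaller than `between`: similar samples cluster together.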
Encoder-decoder structure
An autoencoder is a network model trained on the reconstruction of data through the latent space. Its goal is to make the output resemble the input, much like an identity function. The red part in the figure below is the latent space. The relevant features of the data are first stored in a compressed representation, and this representation is then reconstructed as accurately as possible; that is, a mapping from data space to latent space, followed by a mapping from latent space back to data space. In many NLP applications, we use the encoder's output (that is, the latent representation) as the vector representation of the input, precisely because the latent space retains the key features.
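A minimal linear autoencoder sketch of the encode-then-decode idea (an illustrative toy, not the article's exact model): 4D inputs are encoded to a 2D latent code and decoded back, with both weight matrices trained by gradient descent on the reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(2)
codes = rng.normal(size=(300, 2))
X = codes @ rng.normal(size=(2, 4))        # 4D data with 2D structure

W_enc = 0.5 * rng.normal(size=(4, 2))      # data space -> latent space
W_dec = 0.5 * rng.normal(size=(2, 4))      # latent space -> data space

lr = 0.01
for _ in range(5000):
    Z = X @ W_enc                          # encode into the latent space
    X_hat = Z @ W_dec                      # decode: reconstruct the input
    err = X_hat - X                        # reconstruction error
    # Gradients of the mean squared error w.r.t. both weight matrices.
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

mse = np.mean((X - (X @ W_enc) @ W_dec) ** 2)
```

After training, the reconstruction error is a small fraction of the data's variance: the 2D latent code has captured the key features of the 4D input.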
Interpolation
In a fully convolutional network, the data dimensionality is increased through deconvolution (transposed convolution), and in an encoder-decoder structure it is increased through the decoder; both can be understood as interpolation. The quality of the latent-space features is therefore crucial for a good interpolation result.
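Interpolation in the latent space can be sketched as decoding points along the line between two latent codes. The linear `decode` map below is an assumption for illustration; in practice it would be the trained decoder network.

```python
import numpy as np

# Hypothetical decoder: maps 2D latent codes to 3D data space.
decode = lambda z: z @ np.array([[1.0, 0.0, 2.0],
                                 [0.0, 1.0, -1.0]])

z1 = np.array([0.0, 0.0])                  # latent code of sample 1
z2 = np.array([1.0, 2.0])                  # latent code of sample 2

# Decode 5 evenly spaced points between z1 and z2: a smooth path in
# data space that morphs one sample into the other.
path = [decode((1 - t) * z1 + t * z2) for t in np.linspace(0, 1, 5)]
```

The endpoints of the path are exactly the decoded originals, and each intermediate point is a blend of the two, which is why well-structured latent features give smooth interpolations.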