[D] Should the decoder of an autoencoder be injective?
This is a somewhat vague, philosophical question; I'm looking for your ideas.
Should the decoder of an autoencoder be injective?
On one hand, if it is a deterministic AE with perfect reconstruction, the decoder is forced to be injective (at least on the image of the encoder), but that formal observation is not what I am asking about.
It seems strange that a map between spaces of hugely different dimension should still be injective. If the AE can also encode/decode arbitrary data, it must implement some complicated scrambling of the input space down to the latent space.
In PCA some dimensions are dropped when going to the "latent" space, but I guess the map in the inverse direction can still be injective.
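To make the PCA remark concrete, here is a minimal numpy sketch (toy data, my own variable names): the PCA "decoder" z ↦ zW + μ, where the rows of W are the principal components, is an affine map whose linear part has full row rank k, so distinct latent codes always decode to distinct reconstructions, even though the map lands in a higher-dimensional space.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))   # toy data: 200 samples, 10 dims
X = X - X.mean(axis=0)           # center, as PCA assumes

# PCA via SVD; keep k = 3 components.
k = 3
_, _, Vt = np.linalg.svd(X, full_matrices=False)
W = Vt[:k]                       # (k, 10): orthonormal principal directions

# The decoder's linear part z -> z @ W has full row rank k,
# so z1 @ W == z2 @ W implies z1 == z2: the decoder is injective.
assert np.linalg.matrix_rank(W) == k

# Concretely: two distinct latent codes give distinct reconstructions.
z1, z2 = rng.normal(size=(2, k))
assert not np.allclose(z1 @ W, z2 @ W)
```

So the asymmetry is all in the encoder: projecting down with W drops information and cannot be injective, while decoding back up loses nothing.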
submitted by /u/myobviousnic