This is a somewhat vague, philosophical question; I'm looking for your ideas.
Should the decoder of an autoencoder be injective?
On one hand, if it is a deterministic AE with perfect reconstruction, the decoder must be injective at least on the encoder's range (decode(encode(x)) = x forces distinct inputs to get distinct codes, and those codes to decode back to distinct inputs), but that is not what I am asking.
It seems strange that a mapping between spaces of hugely different dimension should still be injective. If the AE can also encode/decode arbitrary data, it must implement some complicated scramble of the input space down to the latent space.
In PCA some dimensions are dropped in going to the "latent" space, but I guess the inverse direction (latent back to data space) can still be injective.
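The PCA case can be made concrete: the PCA "decoder" is the linear map z ↦ zW + mean, where W holds the top-k principal components. Since W has full row rank k, this map from the k-dimensional latent space into the d-dimensional data space is injective even though the encoder threw dimensions away. A minimal sketch (toy data and variable names are my own, not from the post):

```python
import numpy as np

# Hypothetical toy data: 100 samples in d=5 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
mean = X.mean(axis=0)
Xc = X - mean

# Principal components via SVD; keep k=2 of them (dimensions dropped).
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
W = Vt[:2]  # shape (k=2, d=5), orthonormal rows

def decode(z):
    """Linear PCA decoder: latent vector (k,) -> data space (d,)."""
    return z @ W + mean

# W has full row rank k, so z -> z @ W is injective:
# distinct latent codes always decode to distinct points in data space.
print(np.linalg.matrix_rank(W))  # 2

z1, z2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(np.allclose(decode(z1), decode(z2)))  # False
```

The encoder x ↦ (x − mean)Wᵀ is the non-injective half (it collapses the d − k discarded directions), which matches the intuition in the question: dropping dimensions happens on the way down, while the way back up can remain one-to-one.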
submitted by /u/myobviousnic