
[D] Should the decoder of an autoencoder be injective?

This is a somewhat vague, philosophical question; I'm looking for your ideas.

Should the decoder of an autoencoder be injective?

On one hand, if it is a deterministic AE that reconstructs its inputs exactly, the decoder must be injective at least on the encoder's image, but that is not what I am asking about.

It seems strange that a mapping between spaces of hugely different dimension should still be injective. If the AE can also encode/decode arbitrary data, it must implement some complicated scrambling of the input space down to the latent space.
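As a toy illustration (a hypothetical one-layer decoder, not any particular AE architecture), note that a deterministic neural decoder is not automatically injective: a ReLU nonlinearity can collapse distinct latent codes to the same output, even when mapping a low-dimensional latent into a higher-dimensional data space.

```python
import numpy as np

# Hypothetical one-layer ReLU decoder from a 2-D latent to a 3-D data space.
# ReLU clips negative pre-activations to zero, so any two latent codes whose
# pre-activations are all negative decode to the same point: not injective.
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

def decode(z):
    """Decode a latent code z into data space via ReLU(W z)."""
    return np.maximum(W @ z, 0.0)

z1 = np.array([-1.0, -2.0])
z2 = np.array([-3.0, -0.5])
# Both pre-activations W @ z are entirely negative, so both decode to zero.
print(np.allclose(decode(z1), decode(z2)))  # True, despite z1 != z2
```

So injectivity of the decoder is a property of the trained weights (and activation), not something the deterministic architecture guarantees by itself.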

In PCA, some dimensions are dropped in going to the "latent" space, but I guess the inverse direction can still be injective.
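For the PCA case, this guess can be checked directly: the encoder (projection onto the top principal directions) is many-to-one, but the decoder multiplies by a matrix with orthonormal columns, which has full column rank and is therefore injective. A minimal numpy sketch (variable names are mine, not from any library API):

```python
import numpy as np

# PCA on toy data: encoder z = W.T (x - mu) drops dimensions (many-to-one),
# but decoder x_hat = mu + W z is injective because W has orthonormal
# columns: W z1 == W z2 implies z1 == z2, since W.T W = I.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))           # 100 samples in 5-D data space
mu = X.mean(axis=0)

# Principal directions from the SVD of the centered data; keep k = 2.
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
W = Vt[:2].T                            # 5 x 2, orthonormal columns

def decode(z):
    """PCA decoder: map a 2-D latent code back into the 5-D data space."""
    return mu + W @ z

z = rng.normal(size=2)
# Injectivity in action: the latent code is exactly recoverable from its
# decoded point by applying W.T, so no two codes share a decoded output.
print(np.allclose(W.T @ (decode(z) - mu), z))  # True
```

So in PCA the information loss lives entirely in the encoding direction; the decoder itself is a one-to-one map from the latent space onto a low-dimensional affine subspace of the data space.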

submitted by /u/myobviousnic