
[D] VAE: why we do not sample again after decoding and before reconstruction loss?

In many VAE schematics, and in the original paper, a sampling step is present after decoding and before the reconstruction loss, as shown in the image below. The image comes from Stanford CS231n.

![VAE scheme]1
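For concreteness, here is a minimal sketch of what that extra sampling step would amount to, assuming a Bernoulli decoder over binary pixels (one common choice; the function name is just a placeholder, not from the paper or the course):

```python
import tensorflow as tf

# decoder_probs: decoder output after the sigmoid, shape (batch, 784),
# interpreted as the per-pixel Bernoulli means of p(x | z).
def sample_reconstruction(decoder_probs):
    # The "sampling after decoding" step from the schematic:
    # draw a binary reconstruction x_hat ~ Bernoulli(decoder_probs).
    uniform = tf.random.uniform(tf.shape(decoder_probs))
    return tf.cast(uniform < decoder_probs, tf.float32)
```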

In many code implementations, though, this step is not present. For example, see the Keras implementation available here: https://keras.io/examples/variational_autoencoder/

In the latent space z they sample with a Lambda layer, but at the end of the decoder there is just a Dense layer with a sigmoid activation.
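A condensed sketch of that structure, assuming the standard MLP VAE on flattened images (layer sizes are illustrative, not copied from keras.io):

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

original_dim, intermediate_dim, latent_dim = 784, 64, 2

# Encoder: outputs the mean and log-variance of q(z | x).
inputs = keras.Input(shape=(original_dim,))
h = layers.Dense(intermediate_dim, activation="relu")(inputs)
z_mean = layers.Dense(latent_dim)(h)
z_log_var = layers.Dense(latent_dim)(h)

# The Lambda layer: the only explicit sampling step, done with the
# reparameterization trick z = mean + sigma * epsilon.
def sampling(args):
    z_mean, z_log_var = args
    epsilon = tf.random.normal(shape=tf.shape(z_mean))
    return z_mean + tf.exp(0.5 * z_log_var) * epsilon

z = layers.Lambda(sampling)([z_mean, z_log_var])

# Decoder: ends in a Dense layer with a sigmoid, i.e. per-pixel
# probabilities -- there is no second sampling step before the loss.
h_dec = layers.Dense(intermediate_dim, activation="relu")(z)
outputs = layers.Dense(original_dim, activation="sigmoid")(h_dec)

vae = keras.Model(inputs, outputs)
```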

Is the sigmoid doing something I don’t understand mathematically? Is the VAE math still valid without this sampling step?

And it is not only code implementations: some other schematics and textual material also seem to omit this step (see the next image).

![Second VAE scheme with no sampling]2

Anyway, I also created a question on Cross Validated: Link to the question. If you want to answer there and earn some points, go ahead!

submitted by /u/Magre94