In many VAE schematics, including the original paper, a sampling step appears after the decoder and before the reconstruction loss, as shown in the image below (from Stanford CS231n).
In many code implementations, though, this step is not present. For example, in the Keras implementation available here: https://keras.io/examples/variational_autoencoder/
In the latent space they sample z with a Lambda layer, but the decoder ends with just a Dense layer with a sigmoid activation.
Is the sigmoid doing something I don’t understand mathematically? Is the VAE math still valid without this sampling step?
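To make the setup concrete, here is a minimal NumPy sketch of the two places sampling could happen (the logits and pixel values are made up for illustration, not taken from the Keras example). The reconstruction loss is computed directly from the sigmoid outputs, treating them as the means of a per-pixel Bernoulli; an actual sample from the decoder distribution would only be drawn when generating images:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# hypothetical decoder pre-activations for one 4-pixel "image"
logits = np.array([2.0, -1.0, 0.5, -2.0])
p = sigmoid(logits)                # decoder output: per-pixel Bernoulli means

x = np.array([1.0, 0.0, 1.0, 0.0])  # hypothetical observed binary pixels

# training: reconstruction term -log p(x|z) (binary cross-entropy),
# computed deterministically from p, with no sampling after the decoder
nll = -np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

# generation only: draw an image from the decoder distribution
sample = rng.random(p.shape) < p
```

So my question is whether this deterministic use of the sigmoid output during training is the intended reading, or whether the schematics mean something else by the final sampling step.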
It is not only code implementations: some other schematics and textual material also seem to omit it (see the next image).
![Second VAE scheme with no sampling step]
Anyway, I also created a question on Cross Validated: Link to the question. If you want to answer there and earn some points, go for it!
submitted by /u/Magre94