Learn About Our Meetup

5000+ Members



Join our meetup, learn, connect, share, and get to know your Toronto AI community. 



Browse the latest deep learning, AI, and machine learning postings from Indeed for the GTA.



If you are looking to sponsor space, be a speaker, or volunteer, feel free to give us a shout.

[D] Confusion over Variational Autoencoders

My question is about what constitutes a perfect Variational Autoencoder (VAE).

I am a bit confused by Table 1 in the paper on IAF Variational Autoencoders; however, my question is about Variational Autoencoders in general.

The table looks roughly like the one below (the numbers differ from those in the paper):

VAE Model | VLB   | log p(x)
1         | -20.5 | -18.3
2         | -19.6 | -19.0
3         | -18.4 | -18.1

In the typical derivation of the Variational Autoencoder (VAE), we find that the model is optimal when our approximate posterior q(z|x; theta) matches the true posterior p(z|x), and that the gap between our VLB and log p(x) is exactly KL[q(z|x) || p(z|x)].
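Written out, the identity being referred to is the standard ELBO decomposition (using the same symbols as above):

```latex
\log p(x)
  = \underbrace{\mathbb{E}_{q(z|x)}\!\left[\log \frac{p(x, z)}{q(z|x)}\right]}_{\text{VLB}}
  \;+\; \mathrm{KL}\!\left[\, q(z|x) \,\|\, p(z|x) \,\right]
```

Since the KL term is non-negative, the VLB is always a lower bound on log p(x), and the bound is tight exactly when q(z|x) equals p(z|x).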

Table 1 shows values of both the VLB and log p(x) for different VAE models. It is clear why the VLB is less than log p(x), because of the gap between the approximate and true posteriors; but why is the true marginal log p(x) different for different models?

After you’ve closed the gap between the approximate and true posterior, shouldn’t you have the perfect model? So why is it that the true marginals differ across VAE models?
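To make the VLB / log p(x) gap concrete, here is a small numerical sketch (not from the post; a hypothetical one-dimensional Gaussian toy model) where the exact marginal likelihood is known in closed form, so the VLB, an importance-sampled estimate of log p(x), and the KL gap between them can all be checked:

```python
import numpy as np

def log_gauss(x, mu, var):
    """Log-density of a 1-D Gaussian N(mu, var) at x."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

# Toy latent-variable model: p(z) = N(0, 1), p(x|z) = N(z, 1).
# The exact marginal is p(x) = N(0, 2) and the true posterior is
# p(z|x) = N(x/2, 1/2).
x = 1.0
exact_log_px = log_gauss(x, 0.0, 2.0)

# Deliberately imperfect approximate posterior q(z|x): right mean, wrong variance.
mu_q, var_q = x / 2.0, 1.0

rng = np.random.default_rng(0)
z = rng.normal(mu_q, np.sqrt(var_q), size=200_000)
log_w = log_gauss(z, 0.0, 1.0) + log_gauss(x, z, 1.0) - log_gauss(z, mu_q, var_q)

vlb = log_w.mean()                 # Monte Carlo estimate of E_q[log p(x,z)/q(z|x)]
iw_log_px = np.logaddexp.reduce(log_w) - np.log(len(z))  # importance-sampled log p(x)

# vlb < exact_log_px, and the difference is KL[q(z|x) || p(z|x)]
```

With a richer posterior family (e.g. matching the variance too), the VLB would rise to meet log p(x); but log p(x) itself is a property of the generative model p(x, z), so it changes whenever that model changes.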

submitted by /u/mellow54

Toronto AI is a social and collaborative hub to unite AI innovators of Toronto and surrounding areas. We explore AI technologies in digital art and music, healthcare, marketing, fintech, vr, robotics and more. Toronto AI was founded by Dave MacDonald and Patrick O'Mara.