[D] Can someone help me understand the latent encoding space of a variational autoencoder?
So I trained a VAE on 1-D sparse data, and I'm trying to use the encoded latent variables as a similarity metric. However, the latent space has an extra dimension whose origin I don't understand, and I'm not sure which variable to use. I'm trying to use z_mean as my latent variables, but the shape of the output from the z_mean layer is somehow (8, 512), even though my latent size is 512. Can someone help me understand what's going on here? Thank you!
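To illustrate what I mean, here's a minimal NumPy sketch of the shape I'm seeing (the layer sizes and names here are made up, not my actual model). In most frameworks an encoder's z_mean output comes back as (batch_size, latent_dim), so a leading 8 would just be the number of samples in the batch:

```python
import numpy as np

batch_size, input_dim, latent_dim = 8, 1024, 512

rng = np.random.default_rng(0)
x = rng.normal(size=(batch_size, input_dim))       # one batch of 1-D inputs
W_mean = rng.normal(size=(input_dim, latent_dim))  # stand-in for the z_mean layer weights

z_mean = x @ W_mean          # shape: (batch_size, latent_dim)
print(z_mean.shape)          # (8, 512)

# For a per-sample similarity metric, each ROW is one latent vector:
z0 = z_mean[0]
print(z0.shape)              # (512,)
```

If that's what's happening in my case, then I'd compare rows of z_mean rather than the whole (8, 512) array.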
submitted by /u/that_one_ai_nerd