
[D] Why does pix2pix generate a conditional *distribution* instead of a delta function?

Apologies if this is too trivial.

Consider paired training data (x, y), where x is an edge image and y is the corresponding realistic image.

Pix2pix discriminator training is fully paired, i.e. for a given x, y is the real image and fake_y = G(x) is the fake image. Note that this deviates from the Conditional GAN paper, where for a single MNIST label, say 4, a whole distribution of real images is available.

Also note that the discriminator in pix2pix is conditioned on x, i.e. D(x, y) and D(x, fake_y) are to be discriminated. So, in principle, pix2pix could learn a 1-to-1 mapping from edges to the corresponding shoes in the training set, and I see no reason why it must produce a variety of shoes as shown in the paper.
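To make the setup above concrete, here is a minimal NumPy sketch of the paired, conditional discriminator loss being described: D scores a (condition, image) pair, the real pair is (x, y), and the fake pair is (x, G(x)). The names `d_loss`, `D`, and `G` are illustrative, not taken from the linked repo, and the real code operates on image tensors rather than the toy inputs shown here.

```python
import numpy as np

def sigmoid(z):
    # Logistic function mapping a discriminator logit to a probability.
    return 1.0 / (1.0 + np.exp(-z))

def d_loss(D, G, x, y):
    """Paired conditional discriminator loss (binary cross-entropy).

    D(x, img) returns a logit for the pair (condition, image);
    G(x) maps the edge image x to a fake realistic image.
    For each x there is exactly ONE real target y, which is the
    point the question is raising.
    """
    fake_y = G(x)
    p_real = sigmoid(D(x, y))       # D should score (x, y) as real...
    p_fake = sigmoid(D(x, fake_y))  # ...and (x, G(x)) as fake
    return -(np.log(p_real) + np.log(1.0 - p_fake)) / 2.0
```

Because each x is paired with a single y, nothing in this loss by itself forces G to spread mass over multiple plausible outputs for one x, which is exactly the puzzle posed above.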

Assuming this code is a correct implementation: https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/master/models/pix2pix_model.py

