Hi. I tried implementing CycleGAN from scratch and tested my implementation on a toy example where the first domain consists of normal MNIST images and the second domain consists of flipped MNIST images. The task sounds simple enough. However, while the learned translations generate sharp (normal/flipped) MNIST images, the mappings themselves are not correct at all (e.g., 7s are mapped to 1s or 6s). Please refer to the figures below. Has anyone encountered the same problem when using CycleGAN? I think my overall architecture is correct (it has both the adversarial loss and the cycle-consistency loss). There may be a few differences in my generators and discriminators, but I don't think those would make that much of a difference.
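For reference, here is what I mean by the cycle-consistency part of the loss. This is just a minimal NumPy sketch, not my actual training code: the generators `G` and `F` are toy stand-ins (horizontal flips, matching the flipped-MNIST setup), and the function name `cycle_consistency_loss` is mine.

```python
import numpy as np

def cycle_consistency_loss(G, F, x, y):
    """L1 cycle loss from the CycleGAN paper:
    ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1 (mean over the batch)."""
    return np.abs(F(G(x)) - x).mean() + np.abs(G(F(y)) - y).mean()

# Toy generators for the flipped-MNIST example: a horizontal flip
# is its own inverse, so G and F are the same operation here.
G = lambda imgs: imgs[..., ::-1]   # "normal" domain -> "flipped" domain
F = lambda imgs: imgs[..., ::-1]   # "flipped" domain -> "normal" domain

x = np.random.rand(8, 28, 28)      # batch standing in for normal MNIST images
y = np.random.rand(8, 28, 28)      # batch standing in for flipped MNIST images
loss = cycle_consistency_loss(G, F, x, y)
# A pair of exact inverse mappings gives zero cycle loss.
```

Note that this loss only constrains `F(G(x)) ≈ x`; it is satisfied by any pair of inverse mappings between the domains, regardless of which digit gets mapped to which.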