[P] Can a Generative Adversarial Network (GAN) learn to create new and weird icons?
I have been working on a customizable implementation of a vanilla DCGAN. I scraped an icon dataset from across the internet, converted the icons to MNIST-like images, and trained the GAN on them. Here is the link to my project.
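For anyone curious what "MNIST-like" means here: a minimal sketch of the kind of preprocessing involved, downsampling a square grayscale icon to 28×28 and scaling pixel values to [-1, 1] (the usual range for a tanh generator output). The function name and details are my own illustration, not taken from the actual repo:

```python
def to_mnist_like(pixels, size=28):
    """Downsample a square grayscale image (list of rows, values 0-255)
    to size x size by average pooling, then scale to [-1, 1]."""
    n = len(pixels)
    block = n // size  # assumes the side length is a multiple of `size`
    out = []
    for i in range(size):
        row = []
        for j in range(size):
            # average the block of source pixels covered by this output pixel
            total = sum(
                pixels[i * block + di][j * block + dj]
                for di in range(block)
                for dj in range(block)
            )
            mean = total / (block * block)
            row.append(mean / 127.5 - 1.0)  # map 0..255 -> -1..1
        out.append(row)
    return out
```

In practice you would do this with PIL/NumPy, but the idea is the same: small, single-channel, fixed-range inputs keep the GAN cheap to train.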
And the follow-up blog post:
During training, I tried different architectures and batch sizes. Some results were as follows:
– the network learned to create hand-drawn-style icons after ~70 epochs, but the generator failed after ~200 epochs
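A generator that produces good samples early and then degrades late in training often points to mode collapse or a discriminator that has become too confident. One cheap, commonly used stabilizer is one-sided label smoothing: train the discriminator against a real label of 0.9 instead of 1.0. A minimal sketch in plain Python (names and values are illustrative, not from the project):

```python
import math

def bce(prediction, target):
    """Binary cross-entropy for a single probability prediction."""
    eps = 1e-12  # clamp to avoid log(0)
    p = min(max(prediction, eps), 1.0 - eps)
    return -(target * math.log(p) + (1 - target) * math.log(1 - p))

def discriminator_loss(real_preds, fake_preds, real_label=0.9):
    """One-sided label smoothing: real targets are 0.9, fakes stay at 0.0.
    This discourages the discriminator from becoming overconfident on reals."""
    real_loss = sum(bce(p, real_label) for p in real_preds) / len(real_preds)
    fake_loss = sum(bce(p, 0.0) for p in fake_preds) / len(fake_preds)
    return real_loss + fake_loss
```

The same idea drops straight into a framework loss (e.g. filling the real-label tensor with 0.9 before calling the BCE loss). Note the smoothing is one-sided: fake labels are left at 0.0.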
Do you guys have any suggestions for improving the results?