[P] Can a Generative Adversarial Network (GAN) learn to create new and weird icons?
Hey folks,
I have been working on a customizable implementation of a vanilla DCGAN. I scraped an icon dataset from across the internet, converted the icons to MNIST-like images, and trained the GAN on them. Here is the link to my project:
https://github.com/NaxAlpha/xgan
And the follow-up blog post:
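For context on the architecture side, here is a quick pure-Python sanity check for a generator's transposed-conv stack reaching MNIST-like 28x28 output. The layer parameters (7x7 seed, kernel 4, stride 2, padding 1) are illustrative, not necessarily what the repo uses:

```python
# Sketch: verify that a hypothetical DCGAN generator's transposed-conv
# stack upsamples a 7x7 seed to 28x28 (MNIST-like icon resolution).
# Layer parameters below are illustrative, not taken from the project.

def conv_transpose_out(size, kernel, stride, padding):
    """Output spatial size of a 2D transposed convolution (square input)."""
    return (size - 1) * stride - 2 * padding + kernel

def generator_output_size(seed=7, layers=((4, 2, 1), (4, 2, 1))):
    size = seed
    for kernel, stride, padding in layers:
        size = conv_transpose_out(size, kernel, stride, padding)
    return size

print(generator_output_size())  # each (k=4, s=2, p=1) layer doubles: 7 -> 14 -> 28
```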
During training, I tried different architectures and batch sizes. Some results were as follows:
– the network learned to generate hand-drawn-style icons after ~70 epochs, but the generator failed after ~200 epochs
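One stabilization trick I have been reading about for exactly this kind of late-epoch generator failure is one-sided label smoothing, i.e. training the discriminator against a real-label target below 1.0 so it never becomes overconfident. A minimal sketch of the idea (the numbers are hypothetical, not from my training loop):

```python
import math

def bce(prediction, target, eps=1e-7):
    """Binary cross-entropy for a single predicted probability and target."""
    p = min(max(prediction, eps), 1 - eps)
    return -(target * math.log(p) + (1 - target) * math.log(1 - p))

# Standard DCGAN real-image target is 1.0; one-sided label smoothing
# replaces it with e.g. 0.9. The smoothed loss is minimized when the
# discriminator outputs 0.9 rather than 1.0, so it is penalized for
# becoming overconfident on real images.
real_pred = 0.95  # hypothetical discriminator output on a real icon
print(bce(real_pred, 1.0))  # hard target: rewards pushing p toward 1.0
print(bce(real_pred, 0.9))  # smoothed target: optimum sits at p = 0.9
```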
Do you guys have any suggestions to improve the results?
Thanks
submitted by /u/NaxAlpha