I am looking for a way to monitor the balance between the training of the discriminator (D) and the generator (G) in GANs. I am aware of numerous heuristic and non-heuristic ways to stabilize training (minibatch discrimination, label smoothing, crippling the discriminator, gradient penalty (GP), and many others), but I haven't found a method that would ideally provide a scalar value denoting the training balance.
The obvious approach is to compare the losses of D and G, or their gradients. However, these signals are extremely noisy, and I believe there should be a better/different way to measure this. A different take would be to use, for example, FID as the scalar value denoting the balance. If you know of any methods, let me know!
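For concreteness, here is a minimal sketch of the kind of smoothed scalar I have in mind: an exponential moving average (EMA) of log(L_D / L_G), which stays near zero when the two losses are roughly balanced and drifts when one network pulls ahead. The class name, decay value, and alert threshold are all illustrative assumptions, and it assumes non-negative losses (e.g., the standard non-saturating GAN loss), so a Wasserstein-style critic loss would need adaptation:

```python
import math


class BalanceMonitor:
    """Tracks an EMA of log(L_D / L_G) as a single D/G balance scalar.

    Illustrative sketch: a value near 0 suggests rough balance; a value
    drifting positive/negative suggests D or G is winning, respectively.
    """

    def __init__(self, decay: float = 0.99, eps: float = 1e-12):
        self.decay = decay   # higher decay = smoother, but slower to react
        self.eps = eps       # guards against log(0) and division by zero
        self.value = None    # EMA state; initialized on the first update

    def update(self, d_loss: float, g_loss: float) -> float:
        """Call once per training iteration with the scalar loss values."""
        log_ratio = math.log((d_loss + self.eps) / (g_loss + self.eps))
        if self.value is None:
            self.value = log_ratio
        else:
            self.value = self.decay * self.value + (1 - self.decay) * log_ratio
        return self.value


# Example usage inside a training loop (threshold is illustrative):
# monitor = BalanceMonitor(decay=0.99)
# balance = monitor.update(d_loss.item(), g_loss.item())
# if abs(balance) > 1.0:
#     print(f"warning: D/G balance drifting ({balance:+.2f})")
```

This only smooths the raw loss comparison rather than replacing it, so it inherits the usual caveat that GAN losses are not directly comparable across architectures or objectives; I would still be interested in a more principled scalar.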
submitted by /u/mw_molino