[D] Monitor the balance between the training of the discriminator and generator in GANs
I am looking for a way to monitor the balance between the training of the discriminator (D) and the generator (G) in GANs. I am aware of numerous heuristic as well as non-heuristic ways to stabilize training (minibatch discrimination, label smoothing, crippling the discriminator, gradient penalty (GP), and many others), but I haven't found a method that would ideally provide a single scalar value denoting how balanced the training is.
The obvious approach is to compare the losses of D and G, or their gradients. However, this is extremely noisy, and I believe there should be a better/different way to measure it. A different take would be to treat, for example, FID as the scalar value denoting the balance. If you know of any methods, let me know!
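For concreteness, here is roughly what I mean by the loss/gradient comparison. This is just my own sketch (PyTorch assumed; the EMA smoothing and the `BalanceMonitor` name are my own, not an established metric), where the ratio of smoothed D/G signals is used as the balance scalar:

```python
import torch

def grad_norm(model: torch.nn.Module) -> float:
    """Global L2 norm of the gradients currently stored on a model.

    Call this right after loss.backward() for D or G respectively.
    """
    total = 0.0
    for p in model.parameters():
        if p.grad is not None:
            total += p.grad.detach().pow(2).sum().item()
    return total ** 0.5

class BalanceMonitor:
    """EMA-smoothed ratio of a D signal to a G signal.

    The signals can be per-step losses or gradient norms. Roughly:
    ratio ~ 1  -> balanced, >> 1 -> D dominating, << 1 -> G dominating.
    This is a heuristic to de-noise the raw per-step comparison.
    """

    def __init__(self, beta: float = 0.99):
        self.beta = beta          # EMA decay; closer to 1 = smoother
        self.d_ema = None
        self.g_ema = None

    def update(self, d_signal: float, g_signal: float) -> float:
        if self.d_ema is None:    # initialize on the first step
            self.d_ema, self.g_ema = d_signal, g_signal
        else:
            self.d_ema = self.beta * self.d_ema + (1 - self.beta) * d_signal
            self.g_ema = self.beta * self.g_ema + (1 - self.beta) * g_signal
        return self.d_ema / (self.g_ema + 1e-8)

# In the training loop, after backward() on each network:
#   balance = monitor.update(grad_norm(D), grad_norm(G))
```

Even smoothed like this, the ratio still drifts with the loss formulation (e.g. non-saturating vs. Wasserstein), which is why I suspect a principled metric must exist.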