[R] [OC] Intrinsic Multi-scale Evaluation of Generative Models
Generative models are often used to sample high-dimensional data points from a manifold with small intrinsic dimension. Existing techniques for comparing generative models focus on global data properties such as mean and covariance; in that sense, they are extrinsic and uni-scale. We develop what is, to our knowledge, the first intrinsic and multi-scale method for characterizing and comparing the underlying data manifolds; it compares all data moments by lower-bounding a spectral notion of the Gromov-Wasserstein distance between manifolds. In a thorough experimental study, we demonstrate that our method effectively evaluates the quality of generative models; further, we showcase its efficacy in discerning the disentanglement process in neural networks.
TL;DR: We introduce a metric based on heat kernels that can compare data manifolds living in unaligned spaces. We test it on GAN evaluation (it works), and on tracking the disentanglement and training progress of NNs (also works!).
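For a rough idea of how a heat-kernel comparison of unaligned spaces can work, here is a minimal sketch: build a k-NN graph over each sample set, take the spectrum of its normalized graph Laplacian, and compare the heat-kernel traces across a range of diffusion scales. This is my own simplified illustration, not the authors' exact IMD implementation; the function names, the k-NN construction, and the max-over-scales comparison are all assumptions for illustration.

```python
import numpy as np
from scipy.spatial.distance import cdist

def laplacian_eigenvalues(X, k=5):
    # Symmetric k-NN graph over the samples, then eigenvalues of the
    # normalized graph Laplacian L = I - D^{-1/2} A D^{-1/2}.
    D = cdist(X, X)
    n = len(X)
    A = np.zeros((n, n))
    idx = np.argsort(D, axis=1)[:, 1:k + 1]  # skip self at position 0
    for i in range(n):
        A[i, idx[i]] = 1.0
    A = np.maximum(A, A.T)  # symmetrize: keep an edge if either endpoint has it
    deg = A.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    L = np.eye(n) - Dinv @ A @ Dinv
    return np.linalg.eigvalsh(L)

def heat_trace(eigvals, ts):
    # Multi-scale descriptor: trace of the heat kernel exp(-t L) at each scale t.
    # Small t probes local structure, large t probes global structure.
    return np.array([np.exp(-t * eigvals).sum() for t in ts])

def manifold_distance(X, Y, ts=np.logspace(-1, 1, 20), k=5):
    # Compare heat-trace curves. Only intrinsic graph spectra are used,
    # so the two sample sets need not live in aligned (or equal-dim) spaces.
    hx = heat_trace(laplacian_eigenvalues(X, k), ts)
    hy = heat_trace(laplacian_eigenvalues(Y, k), ts)
    return np.abs(hx - hy).max()
```

The key property is that nothing here depends on the ambient coordinates beyond pairwise distances within each set, so the two point clouds can come from different models with different latent spaces.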
Extra shoutout to /u/augustushimself who wrote a brilliant GAN post, and the discussion that followed on this sub. It gave a lot of inspiration for writing this.