[D] Why is it difficult to sample from Energy Based Models?
I have very little experience with generative models, so apologies if that is a trivial question.
My understanding of an energy-based model (EBM) is that it is an undirected graph defining the joint distribution over the vector X as p(X) ∝ exp(-E(X)), where E(X) is a sum of potentials defined over cliques.
The well-known Deep Learning book by Goodfellow et al. claims that sampling from an EBM is difficult:
To understand why drawing samples from an energy-based model (EBM) is difficult, consider the EBM over just two variables, defining a distribution p(a,b). In order to sample a, we must draw from p(a|b), and in order to sample b, we must draw it from p(b|a). It seems to be an intractable chicken-and-egg problem.
I find that perplexing. We already know p(a,b), so why can't we just compute the marginal p(a), sample a from it, and then sample b from p(b|a)?
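To make the question concrete, here is a tiny sketch (hypothetical energy values, two binary variables) where everything the marginalization approach needs — the partition function Z, the joint, the marginal p(a) — is computable by brute force, since there are only 4 joint states. The same brute-force sums are what become exponential in the number of variables:

```python
import numpy as np

# Toy EBM over two binary variables (a, b) with a hypothetical energy
# table E[a, b]. With only 2x2 = 4 joint states, everything below is
# tractable by brute force; with n binary variables there would be 2**n
# states, and these exact sums would blow up.
E = np.array([[0.5, 2.0],
              [1.5, 0.1]])

unnorm = np.exp(-E)      # unnormalized probabilities exp(-E(a, b))
Z = unnorm.sum()         # partition function: sum over ALL joint states
p_joint = unnorm / Z     # p(a, b) = exp(-E(a, b)) / Z

# Exact marginal p(a) = sum_b p(a, b): a full sum over every value of b.
p_a = p_joint.sum(axis=1)

# By contrast, a Gibbs-style conditional p(a | b=0) only needs to
# normalize over a single column, and Z cancels out entirely.
p_a_given_b0 = unnorm[:, 0] / unnorm[:, 0].sum()

print("p(a):", p_a)
print("p(a | b=0):", p_a_given_b0)
```

In this 2-variable case the marginal route clearly works; the question is what breaks when the brute-force sums over states are no longer affordable.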
submitted by /u/AddMoreLayers