[P] The Joy of Neural Painting – Learning Neural Painters Fast! using PyTorch and Fast.ai
- Blogpost with details: https://medium.com/libreai/the-joy-of-neural-painting-e4319282d51f
- The Code: Our implementation can be found at this Github repo: https://github.com/libreai/neural-painters-x
Neural Painters are a class of models that can be seen as a fully differentiable simulation of a particular non-differentiable painting program. In other words, the machine “paints” by successively generating brushstrokes (i.e., the actions that define a brushstroke) and applying them to a canvas, as an artist would.
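The idea above can be sketched as a tiny differentiable painting loop: a network maps each action vector to a stroke image, and strokes are composited onto the canvas one by one. The network architecture, action dimensionality, and blending rule below are illustrative assumptions for the sketch, not the exact design used in the repository.

```python
import torch
import torch.nn as nn

ACTION_DIM = 12   # assumed size of one brushstroke action (e.g., position, size, color)
CANVAS_SIZE = 64

class TinyPainter(nn.Module):
    """Hypothetical neural painter: maps an action vector to a stroke image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ACTION_DIM, 256), nn.ReLU(),
            nn.Linear(256, CANVAS_SIZE * CANVAS_SIZE), nn.Sigmoid(),
        )

    def forward(self, actions):
        return self.net(actions).view(-1, 1, CANVAS_SIZE, CANVAS_SIZE)

def paint(painter, actions, canvas=None):
    """Apply a sequence of brushstroke actions to a canvas via alpha blending."""
    if canvas is None:
        canvas = torch.zeros(1, 1, CANVAS_SIZE, CANVAS_SIZE)
    for a in actions:                                # one action = one brushstroke
        stroke = painter(a.unsqueeze(0))
        canvas = canvas * (1 - stroke) + stroke      # composite stroke onto canvas
    return canvas

painter = TinyPainter()
strokes = torch.rand(5, ACTION_DIM)                  # five random brushstroke actions
result = paint(painter, strokes)
print(result.shape)  # torch.Size([1, 1, 64, 64])
```

Because every step of the loop is differentiable, gradients can flow from the final canvas back to the actions, which is what makes the painter trainable end to end.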
Neural Painters are based on GANs, which are powerful generative models but are notoriously difficult to train, especially because they require large amounts of data and, consequently, significant GPU compute. They also take a long time to train and are sensitive to small hyperparameter variations.
To overcome these known GAN limitations and to speed up the Neural Painter training process, we leveraged the power of Transfer Learning.
The main steps are as follows:
(1) Pre-train the Generator with a non-adversarial loss, e.g., a feature loss (also known as a perceptual loss)
(2) Freeze the pre-trained Generator weights
(3) Pre-train the Critic as a binary classifier (i.e., non-adversarially), using the pre-trained Generator (in evaluation mode with frozen weights) to generate `fake` brushstrokes. That is, the Critic learns to discriminate between real images and generated ones. This step uses a standard binary classification loss, i.e., binary cross-entropy, not a GAN loss
(4) Transfer learning for adversarial training (GAN mode): continue training the Generator and Critic in a GAN setting. Faster!
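Step (1) can be sketched as follows: instead of an adversarial loss, the generator is trained to match target brushstroke images in the feature space of a frozen network. The small random conv net below stands in for a real pre-trained feature extractor (e.g., a VGG, as commonly used for perceptual losses); all shapes and names are illustrative assumptions, not the repository's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Frozen stand-in for a pre-trained feature extractor (assumption for the sketch).
feature_extractor = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1),
).eval()
for p in feature_extractor.parameters():
    p.requires_grad_(False)

def feature_loss(generated, target):
    """Feature (perceptual) loss: compare images in a frozen network's feature space."""
    return F.mse_loss(feature_extractor(generated), feature_extractor(target))

# Toy generator: action vector -> 16x16 brushstroke image (illustrative shapes).
generator = nn.Sequential(
    nn.Linear(8, 128), nn.ReLU(),
    nn.Linear(128, 16 * 16), nn.Sigmoid(),
)

actions = torch.rand(4, 8)              # brushstroke action vectors
target = torch.rand(4, 1, 16, 16)       # stand-in for ground-truth stroke images

opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
pred = generator(actions).view(-1, 1, 16, 16)
loss = feature_loss(pred, target)       # non-adversarial pre-training objective
opt.zero_grad()
loss.backward()                         # gradients flow through the frozen extractor
opt.step()
```

Gradients pass through the frozen extractor into the generator, so the generator learns to produce perceptually plausible strokes without any adversarial game.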
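Steps (2) and (3) can be sketched in a few lines: freeze the pre-trained generator, then train the critic as a plain binary classifier with binary cross-entropy on real versus generated samples. The network shapes and names below are illustrative assumptions, not the repository's implementation.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the pre-trained Generator and the Critic (assumed shapes).
generator = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 16))
critic = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))

# Step (2): freeze the pre-trained Generator and put it in evaluation mode.
generator.eval()
for p in generator.parameters():
    p.requires_grad_(False)

# Step (3): pre-train the Critic with standard binary cross-entropy.
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

real = torch.rand(32, 16)                 # stand-in for real brushstroke samples
with torch.no_grad():
    fake = generator(torch.randn(32, 8))  # 'fake' samples from the frozen Generator

logits = critic(torch.cat([real, fake]))
labels = torch.cat([torch.ones(32, 1), torch.zeros(32, 1)])  # real=1, fake=0
loss = loss_fn(logits, labels)

opt.zero_grad()
loss.backward()                           # only the Critic's weights are updated
opt.step()
```

After both networks are pre-trained this way, step (4) swaps in the adversarial loss and unfreezes the generator, so GAN training starts from a strong initialization rather than from scratch.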