[D] Evaluating “A Neural Algorithm of Artistic Style” by Gatys et al.
This is an exciting paper because it's the first to demonstrate artistic style transfer using pre-trained neural networks.
What's great is how the paper shows that content (e.g., shapes and contours) can be extracted from an image, while style can be captured from multiple layers (some layers capture fine-grained texture while others capture the overall style). Combining these two representations can yield realistic, professional-looking results.
The main issue is the time and compute required. The paper extracts content features from VGG19 and style features from multiple VGG19 layers, then iteratively optimizes a combined style and content loss. Because every iteration requires running the network for feature extraction and then updating the image against both losses, it takes 500-1000 iterations just to produce a low-resolution image. It would be ideal if the algorithm could produce results in a few iterations.
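To make the cost concrete, here is a minimal sketch of the optimization loop the paper describes: content is matched at one deep activation, style is matched via Gram matrices at several shallower activations, and the image pixels themselves are updated by gradient descent. To keep the sketch self-contained, a tiny randomly initialized conv stack stands in for pre-trained VGG19, and the layer choices and loss weights are illustrative assumptions (the paper uses conv4_2 for content and conv1_1 through conv5_1 for style).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in for pre-trained VGG19: a small frozen conv stack (assumption,
# purely so this sketch runs without downloading weights).
net = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
)
for p in net.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = [1, 3]   # shallower activations used for style (assumption)
CONTENT_LAYER = 5       # deeper activation used for content (assumption)

def extract(x):
    """Collect content and style activations in one forward pass."""
    content, styles = None, []
    for i, layer in enumerate(net):
        x = layer(x)
        if i in STYLE_LAYERS:
            styles.append(x)
        if i == CONTENT_LAYER:
            content = x
    return content, styles

def gram(f):
    """Style representation: normalized channel-by-channel correlations."""
    _, c, h, w = f.shape
    f = f.reshape(c, h * w)
    return (f @ f.t()) / (c * h * w)

content_img = torch.rand(1, 3, 32, 32)
style_img = torch.rand(1, 3, 32, 32)
with torch.no_grad():
    c_target, _ = extract(content_img)
    _, s_feats = extract(style_img)
    s_targets = [gram(f) for f in s_feats]

# Optimize the pixels of x directly, as in the paper. The paper needs
# hundreds of iterations; 50 here just to show the loss shrinking.
x = torch.rand(1, 3, 32, 32, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)
losses = []
for _ in range(50):
    opt.zero_grad()
    c, s = extract(x)
    loss = F.mse_loss(c, c_target) + 1e2 * sum(
        F.mse_loss(gram(f), g) for f, g in zip(s, s_targets))
    loss.backward()
    opt.step()
    losses.append(loss.item())

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Note that each of the 50 steps above is a full forward and backward pass through the network, which is exactly why the real algorithm, at 500-1000 iterations through VGG19 on full-size images, is so slow.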
- Has anybody tried ResNet or another state-of-the-art network instead of VGG19?
- Does anyone know of a transformer network that produces similar results to this paper?
- Any recommendations on better style transfer papers?