[P] Using StyleGAN to make a music visualizer
I’m excited to share this generative video project I worked on with Japanese electronic music artist Qrion for the release of her Sine Wave Party EP.
Here’s the first generated video – two more coming out soon.
This was created with StyleGAN, using transfer learning on a custom dataset of images curated by the artist. Qrion picked images that matched the mood of each song (things like clouds, lava hitting the ocean, forest interiors, and snowy mountains) and I generated latent-space interpolation videos for each track.
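For anyone curious what an interpolation video involves, here's a minimal sketch using the NVlabs TensorFlow StyleGAN release. The checkpoint filename and frame count are placeholders, and the actual project pipeline surely differs, but the core idea is just walking between points in latent space and saving a frame per step:

```python
import pickle
import numpy as np
import PIL.Image
import dnnlib.tflib as tflib  # from the NVlabs stylegan repo

tflib.init_tf()

# Load a fine-tuned generator (placeholder filename).
with open('network-snapshot-custom.pkl', 'rb') as f:
    _G, _D, Gs = pickle.load(f)

# Two random endpoints in latent space.
rnd = np.random.RandomState(42)
z0, z1 = rnd.randn(2, Gs.input_shape[1])

num_frames = 240  # e.g. 10 seconds at 24 fps
fmt = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True)

for i, t in enumerate(np.linspace(0.0, 1.0, num_frames)):
    z = (1.0 - t) * z0 + t * z1  # linear blend; slerp gives smoother motion
    images = Gs.run(z[np.newaxis], None, truncation_psi=0.7,
                    randomize_noise=False, output_transform=fmt)
    PIL.Image.fromarray(images[0]).save(f'frame_{i:04d}.png')
```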
The tempo of the GAN evolution is controlled by a few different things:

- the beat of the song
- a system I built that incorporates live input (you can tap the keyboard to add a jump to the playback in After Effects)
- a keyframeable overall playback speed fader

A sketch of the beat-driven part follows below.
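One plausible way to drive interpolation speed from the beat (my own sketch, not necessarily the system described above) is to turn an onset-strength envelope into a variable playback speed for the latent walk, using librosa; the filename is a placeholder:

```python
import numpy as np
import librosa

# Load the track and measure onset strength per audio frame.
y, sr = librosa.load('track.wav')          # placeholder filename
onset_env = librosa.onset.onset_strength(y=y, sr=sr, hop_length=512)
onset_env = onset_env / onset_env.max()    # normalize to [0, 1]

# Base drift plus a beat-driven boost; the cumulative sum gives a
# monotonically increasing position along the latent-interpolation path.
speed = 0.2 + 0.8 * onset_env
position = np.cumsum(speed)
position = position / position[-1]         # path ends exactly with the song
```

Each rendered video frame then samples the interpolation path at the corresponding entry of `position`, so the visuals lurch forward on strong beats and drift during quiet passages.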
I’ve also posted more images created with Qrion’s custom models here on my site, along with some further StyleGAN experiments if you’re interested.
Learning to use StyleGAN this way has been fascinating. As a visual effects artist, I’m over the moon at the sorts of things that are possible. Shout out to /u/C0D32, who shared an art-centered StyleGAN model that was really influential for me (thanks for that!), and of course to /u/gwern, who posted an incredible guide to using StyleGAN.
submitted by /u/AtreveteTeTe