Learn About Our Meetup

5000+ Members

Join our meetup, learn, connect, share, and get to know your Toronto AI community. 

Browse the latest deep learning, AI, and machine learning job postings from Indeed for the GTA.

Looking to sponsor space, be a speaker, or volunteer? Feel free to give us a shout.

[Discussion] StyleGAN2 uses Multi-Scale Gradients instead of Progressive Growing

Progressive growing has been immensely successful and has been used in a myriad of works since (number of citations :D). With Multi-Scale Gradients we were mostly trying to solve the issue of extensive hyperparameter tuning, but the StyleGAN2 paper points to a totally new problem altogether. This raises two questions for me, and I thought I'd ask them here:

1.) The problem of phase artifacts. The paper offers fairly little theoretical motivation for progressive growing being the cause of this problem; it is attributed to the network trying to output the highest-frequency details at all resolutions. It would be very helpful if fellow redditors could provide more insight into how these two connect.

2.) If this issue has existed since the beginning, what implications does the use of progressive growing have for state-of-the-art works in other domains, such as image2image, vid2vid, colourization networks, etc.?

For reference -> the MSG-GAN paper.
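For readers who haven't seen the multi-scale-gradient setup, here is a minimal PyTorch sketch of the core idea: the generator emits an RGB image at every intermediate resolution, and the discriminator re-injects each of those images at the matching scale, so gradients flow directly to every stage of the generator from the very first training step. This is an illustrative toy under assumed settings (the class names, channel widths, and the 16/32/64 resolution ladder are mine), not the official MSG-GAN or StyleGAN2 code.

```python
import torch
import torch.nn as nn

class MSGGenerator(nn.Module):
    """Toy generator that outputs an RGB image at every scale (16, 32, 64)."""
    def __init__(self, z_dim=128, ch=64):
        super().__init__()
        self.init = nn.ConvTranspose2d(z_dim, ch * 4, 16)        # 1x1 -> 16x16
        self.up1 = nn.ConvTranspose2d(ch * 4, ch * 2, 4, 2, 1)   # 16 -> 32
        self.up2 = nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1)       # 32 -> 64
        # One to-RGB head per scale, so every stage produces an image the
        # discriminator can score (and backpropagate through) directly.
        self.rgb = nn.ModuleList([nn.Conv2d(ch * 4, 3, 1),
                                  nn.Conv2d(ch * 2, 3, 1),
                                  nn.Conv2d(ch, 3, 1)])

    def forward(self, z):
        h16 = torch.relu(self.init(z.view(z.size(0), -1, 1, 1)))
        h32 = torch.relu(self.up1(h16))
        h64 = torch.relu(self.up2(h32))
        return [torch.tanh(r(h)) for r, h in zip(self.rgb, (h16, h32, h64))]

class MSGDiscriminator(nn.Module):
    """Toy discriminator that re-injects each scale's image at the matching stage."""
    def __init__(self, ch=64):
        super().__init__()
        self.from64 = nn.Conv2d(3, ch, 3, 1, 1)                  # 64x64 input
        self.down1 = nn.Conv2d(ch, ch * 2, 4, 2, 1)              # 64 -> 32
        self.down2 = nn.Conv2d(ch * 2 + 3, ch * 4, 4, 2, 1)      # 32 -> 16 (+3 for injected 32x32 image)
        self.head = nn.Conv2d(ch * 4 + 3, 1, 16)                 # 16 -> logit (+3 for injected 16x16 image)

    def forward(self, imgs):
        img16, img32, img64 = imgs
        h = torch.relu(self.from64(img64))
        h = torch.relu(self.down1(h))
        h = torch.cat([h, img32], dim=1)   # gradient path straight to the 32x32 stage
        h = torch.relu(self.down2(h))
        h = torch.cat([h, img16], dim=1)   # gradient path straight to the 16x16 stage
        return self.head(h).view(-1)

G, D = MSGGenerator(), MSGDiscriminator()
logits = D(G(torch.randn(4, 128)))  # all scales are trained jointly from step one
```

Note the design consequence: nothing grows or fades during training, so the whole progressive-growing schedule, and the hyperparameter tuning that comes with it, simply disappears.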

submitted by /u/akanimax

Toronto AI is a social and collaborative hub to unite AI innovators of Toronto and surrounding areas. We explore AI technologies in digital art and music, healthcare, marketing, fintech, VR, robotics, and more. Toronto AI was founded by Dave MacDonald and Patrick O'Mara.