
[R][P] StarGAN v2: Diverse Image Synthesis for Multiple Domains

*Figure: Diverse image synthesis results on the CelebA-HQ dataset and our newly collected animal faces (AFHQ) dataset. The first column shows input images while the remaining columns are images synthesized by StarGAN v2.*

*Figure: StarGAN v2 can transform a source image into an output image reflecting the style (e.g., hairstyle and makeup) of a given reference image. Additional high-quality videos can be found at the link below.*

arXiv: https://arxiv.org/abs/1912.01865

github: https://github.com/clovaai/stargan-v2

video: shorturl.at/eACS9

Abstract

A good image-to-image translation model should learn a mapping between different visual domains while satisfying the following properties: 1) diversity of generated images and 2) scalability over multiple domains. Existing methods address either of the issues, having limited diversity or multiple models for all domains. We propose StarGAN v2, a single framework that tackles both and shows significantly improved results over the baselines. Experiments on CelebA-HQ and a new animal faces dataset (AFHQ) validate our superiority in terms of visual quality, diversity, and scalability. To better assess image-to-image translation models, we release AFHQ, high-quality animal faces with large inter- and intra-domain variations. The code, pretrained models, and dataset will be released for reproducibility.
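The abstract's two properties can be illustrated with a toy sketch of the core idea: a mapping network with per-domain branches turns a random latent code into a domain-specific style code, and a single generator consumes that style code, so one model covers many domains (scalability) while different latents yield different outputs (diversity). This is a minimal NumPy sketch of the concept only; all dimensions, weight shapes, and function names here are illustrative assumptions, not the paper's actual architecture (see the GitHub repo above for the real implementation).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen for illustration only.
LATENT_DIM, STYLE_DIM, NUM_DOMAINS, IMG_DIM = 16, 8, 3, 32

# Mapping network: shared layers plus per-domain output branches.
# The per-domain branches are what let a single model serve many
# domains instead of training one model per domain pair.
W_shared = rng.normal(size=(LATENT_DIM, LATENT_DIM))
W_domain = rng.normal(size=(NUM_DOMAINS, LATENT_DIM, STYLE_DIM))

def mapping_network(z, domain):
    h = np.tanh(z @ W_shared)       # shared trunk
    return h @ W_domain[domain]     # domain-specific style code

# Toy "generator": injects the style code into the image features.
W_img = rng.normal(size=(IMG_DIM, IMG_DIM))
W_style = rng.normal(size=(STYLE_DIM, IMG_DIM))

def generator(x, style):
    return np.tanh(x @ W_img + style @ W_style)

x = rng.normal(size=(IMG_DIM,))     # stand-in for an input image
z1, z2 = rng.normal(size=(2, LATENT_DIM))

# Same input and domain, different latents: different outputs (diversity).
out1 = generator(x, mapping_network(z1, domain=0))
out2 = generator(x, mapping_network(z2, domain=0))

# Same latent, different target domain: a third distinct output (scalability).
out3 = generator(x, mapping_network(z1, domain=1))
```

With random weights the three outputs differ, mirroring how sampling new latents gives diverse translations while switching the domain label retargets the same model to another domain.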

submitted by /u/yunjey