A Pigment of Your Imagination: GauGAN AI Art Tool Receives “Best of Show,” “Audience Choice” Awards at SIGGRAPH
NVIDIA’s viral real-time AI art application, GauGAN, won two major SIGGRAPH awards on Tuesday.
From amateur doodlers to leading digital artists, creators are coming out in droves to produce masterpieces with NVIDIA’s most popular research demo: GauGAN.
And the demo has been a smash hit at the SIGGRAPH professional graphics conference as well, winning both the “Best of Show” and “Audience Choice” awards at the conference’s Real Time Live competition after NVIDIA’s Ming-Yu Liu, Chris Hebert, Gavriil Klimov and UC Berkeley researcher Taesung Park presented the application to enthusiastic applause.
More than 500,000 images have been created with GauGAN since the beta version was made publicly available just over a month ago on the NVIDIA AI Playground.
Art directors and concept artists from top film studios and video game companies are among the creative professionals already harnessing GauGAN as a tool to prototype ideas and make rapid changes to synthetic scenes.
“GauGAN popped on the scene and interrupted my notion of what I might be able to use to inspire me,” said Colie Wertz, a concept artist and modeler whose credits include Star Wars, Transformers and Avengers movies. “It’s not something I ever imagined having at my disposal.”
Wertz, using a GauGAN landscape as a foundation, recently created an otherworldly ship design shared on social media.
“Real-time updates to my environments with a few brush strokes is mind-bending. It’s like instant mood,” said Wertz, who uses NVIDIA RTX GPUs for his creative work. “This is forcing me to reinvestigate how I approach a concept design.”
Attendees of this week’s SIGGRAPH conference can experience GauGAN for themselves in the NVIDIA booth, where it’s running on an HP RTX workstation powered by NVIDIA Quadro RTX GPUs that feature Tensor Cores. NVIDIA researchers will also present GauGAN during a live event at the prestigious computer graphics show.
Unleash Your AI Artist
GauGAN, named for post-Impressionist painter Paul Gauguin, creates photorealistic images from segmentation maps, which are labeled sketches that depict the layout of a scene.
People can use paintbrush and paint bucket tools to design their own landscapes with labels including river, grass, rock and cloud. A style transfer algorithm allows creators to apply filters, modifying the color composition of a generated image, or turning it from a photorealistic scene to a painting.
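Under the hood, a painted scene like this is just a grid of class labels. A minimal sketch of the idea, using numpy (the label IDs, canvas size and class count here are hypothetical, not GauGAN's actual palette):

```python
import numpy as np

# Hypothetical label IDs -- GauGAN's real label set differs.
SKY, CLOUD, GRASS, RIVER = 0, 1, 2, 3
H, W = 64, 64

# "Paint" a simple scene: sky everywhere, grass below,
# a horizontal river band, and one rectangular cloud.
seg = np.full((H, W), SKY, dtype=np.int64)
seg[40:, :] = GRASS
seg[50:58, :] = RIVER
seg[8:14, 20:36] = CLOUD

# Image-synthesis networks typically consume such a map as a
# one-hot tensor: one channel per semantic class.
num_classes = 4
one_hot = np.eye(num_classes, dtype=np.float32)[seg]  # shape (H, W, 4)
```

The generator then translates that one-hot map into a photorealistic image, so changing a few labeled pixels (moving the river, adding a cloud) changes the rendered scene.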
“As researchers working on image synthesis, we’re always pursuing new techniques to create images with higher fidelity and higher resolution,” said NVIDIA researcher Ming-Yu Liu. “That was our original goal for the project.”
But when the demo was introduced at our GPU Technology Conference in Silicon Valley, it took on a life of its own. Attendees flocked to a tablet on the show floor where they could try it out for themselves, creating stunning scenes of everything from sun-drenched ocean landscapes to idyllic mountain ranges shrouded by clouds.
The latest iteration of the app, on display at SIGGRAPH, lets users upload their own filters to layer onto their masterpieces — adopting the lighting of a perfect sunset photo or emulating the style of a favorite painter.
They can even upload their own landscape images. The AI will convert source images into a segmentation map, which can then be used as a foundation for the user’s artwork.
“We want to make an impact with our research,” Liu said. “This work creates a channel for people to express their creativity and create works of art they wouldn’t be able to do without AI. It’s enabling them to make their imagination come true.”
While the researchers anticipated that game developers, landscape designers and urban planners would benefit from this technology, interest in GauGAN has been far more widespread — including from a healthcare organization exploring its use as a therapeutic, stress-mitigating tool for patients.
AI That Captures the Imagination
Developed using the PyTorch deep learning framework, the neural network behind GauGAN was trained on a million images using the NVIDIA DGX-1 deep learning system. The demo shown at GTC ran on an NVIDIA TITAN RTX GPU, while the web app is hosted on NVIDIA GPUs through Amazon Web Services.
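The published technique behind GauGAN conditions the generator's normalization layers on the segmentation map, so every spatial location is modulated according to its semantic label. A minimal numpy sketch of that idea (shapes are simplified, and the random gamma/beta maps stand in for what the real model predicts with learned convolutions over the label map):

```python
import numpy as np

rng = np.random.default_rng(0)

def spatially_adaptive_norm(x, gamma, beta, eps=1e-5):
    """Normalize activations x of shape (C, H, W) per channel, then
    modulate each spatial location with gamma/beta maps that, in the
    real model, are derived from the segmentation map."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    x_norm = (x - mean) / (std + eps)
    return gamma * x_norm + beta  # element-wise, spatially varying

C, H, W = 8, 16, 16
x = rng.normal(size=(C, H, W))
# Stand-ins for the learned, label-map-conditioned modulation maps.
gamma = rng.normal(size=(C, H, W))
beta = rng.normal(size=(C, H, W))
y = spatially_adaptive_norm(x, gamma, beta)
```

Because gamma and beta vary across the image rather than being single per-channel scalars, the semantic layout survives normalization instead of being washed out — which is what lets a few brush strokes on the label map reshape the generated scene.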
Liu developed the deep neural network and accompanying app along with researchers Taesung Park, Ting-Chun Wang and Jun-Yan Zhu.
The team has publicly released source code for the neural network behind GauGAN, making it available for non-commercial use by other developers to experiment with and build their own applications.
GauGAN is available on the NVIDIA AI Playground for visitors to experience the demo firsthand.
Note: This post has been updated from the original to reflect the results of Tuesday’s Real Time Live competition at SIGGRAPH.