Blog

Category: NVIDIA

Applications Open for $50,000 NVIDIA Graduate Fellowship Awards

Bringing together the world’s brightest minds and the latest GPU technology leads to powerful research breakthroughs.

That’s why we’re taking applications for the 19th annual NVIDIA Graduate Fellowship Program, seeking students doing outstanding GPU-based research. Our goal: Provide them with grants, mentors and technical support so they can help solve the world’s biggest research problems.

We’re especially seeking doctoral students working in artificial intelligence, machine learning, autonomous vehicles, robotics, AI for healthcare, high performance computing and related fields. Our Graduate Fellowship awards are up to $50,000 per student.

Since its inception in 2002, the Graduate Fellowship Program has awarded over 160 grants worth nearly $5 million.

We’re looking for students who have completed their first year of Ph.D.-level studies at the time of application. Candidates need to be studying computer science, computer engineering, system architecture, electrical engineering or a related area. Applicants must also be investigating innovative ways to use GPUs.

The NVIDIA Graduate Fellowship Program for the 2020-2021 academic year is open to applicants worldwide. The deadline for submitting applications is Sept. 13, 2019. An internship at NVIDIA preceding the fellowship year is now mandatory — eligible candidates should be available for the internship in summer 2020.

For more on eligibility and how to apply, visit the program website or email fellowship@nvidia.com.

Gut Feeling: Endoscopy Startup Uses AI to Spot Stomach, Colon Cancer

Even the most experienced doctors can’t catch every tiny polyp during an endoscopy, a screening of the digestive system.

But even in routine exams, the stakes are high — missing an early warning sign of cancer can lead to delayed diagnosis and treatment, lowering a patient’s chances for recovery.

To cut down on the rate of missed precancerous lesions, one Japanese endoscopist is turning to AI. His startup, AIM (short for AI Medical Service), is building a GPU-powered AI system that will analyze endoscopy video feeds in real time, spotting lesions and helping doctors identify which are cancerous or at risk of becoming so.

AI screening could also help clinicians manage a demanding workload: Japanese endoscopists must check more than 3,000 medical images a day, on average. Stomach and colon cancer are two of the three leading causes of cancer-related deaths in the country.

“Coming from 23 years of experience as an actual endoscopist, I saw firsthand the challenges facing experts in the field,” said Tomohiro Tada, CEO of AIM. “GPU-powered AI can help manage the overwhelming demand for checking endoscopic images, while improving the overall accuracy of lesion detection.”

A quarter of precancerous lesions are overlooked in endoscopy screenings, according to one Japanese study. In preclinical research trials, AIM’s AI model achieved 92 percent sensitivity in detecting stomach cancer lesions from endoscopy videos. The startup’s deep learning tool could help endoscopists better distinguish hard-to-spot lesions and improve consistency across different clinics.
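
For reference, sensitivity is the fraction of actual lesions a model flags. A minimal illustration of the metric (the counts below are made up, not AIM’s trial data):

```python
def sensitivity(true_positives, false_negatives):
    """Fraction of actual lesions that the model detected."""
    return true_positives / (true_positives + false_negatives)

# Hypothetical example: 92 lesions detected out of 100 true lesions -> 0.92 sensitivity.
print(sensitivity(true_positives=92, false_negatives=8))
```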

AI Powers a Better Gut Check 

During an upper gastrointestinal endoscopy, a doctor examines a patient’s esophagus, stomach and upper region of the small intestine using a long tube with a small camera attached to it. The video feed from this camera is displayed on a larger screen for the clinician, who looks for bleeding, cancer or other conditions.

While doctors examine the endoscopy video footage live to check for polyps, they also check still images after the procedure. Having an AI to assist in real-time detection during a procedure could help doctors save time spent on secondary screening, Tada said.

AIM plans to deploy its AI model, which can identify different kinds of stomach lesions, in an NVIDIA Quadro RTX 4000 GPU-powered device that connects to existing endoscope systems. The device would receive the live endoscopy video feed and simultaneously process the footage to assist doctors during the procedure.
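
For illustration, here is a rough sketch of what frame-by-frame screening of a live video feed can look like in PyTorch; the model file, its outputs, the capture source and the confidence threshold are hypothetical stand-ins, not AIM’s actual pipeline:

```python
import cv2      # OpenCV, assumed available for video capture and drawing
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical: a lesion detector exported to TorchScript; not AIM's real model.
model = torch.jit.load("lesion_detector.pt", map_location=device).eval()

cap = cv2.VideoCapture(0)  # stand-in for the endoscope feed exposed as a capture device
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocess: BGR uint8 frame -> normalized float tensor with a batch dimension.
    x = torch.from_numpy(frame).permute(2, 0, 1).float().div(255).unsqueeze(0).to(device)
    with torch.no_grad():
        boxes, scores = model(x)  # hypothetical outputs: candidate lesion boxes and confidences
    for box, score in zip(boxes, scores):
        if float(score) > 0.5:    # flag likely lesions for the clinician
            x1, y1, x2, y2 = (int(v) for v in box)
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
    cv2.imshow("assisted endoscopy", frame)
    if cv2.waitKey(1) == 27:      # Esc exits the loop
        break
cap.release()
cv2.destroyAllWindows()
```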

The startup uses a variety of NVIDIA GPUs, including the TITAN Xp and Quadro P6000, to train its deep learning models. It’s using an NVIDIA Quadro mobile workstation for inference in the prototype of its real-time AI device.

AIM’s deep-learning based object detection and classification algorithms are developed using tens of thousands of annotated endoscopy images from Tada’s clinic and from research partners including Japan’s Cancer Institute Hospital and the University of Tokyo Hospital.

Perfect Harmony: Pharma’s MELLODDY Consortium Joins Forces with NVIDIA to Supercharge AI Drug Discovery

Pharmaceutical companies have traditionally kept their data close to the vest because collaboration’s side effects may include compromising intellectual property and losing the edge over competitors.

But sharing data has major perks: The more data a pharma company has at its disposal, the better equipped its researchers are to quickly identify and develop promising new drugs. This can ultimately improve drug candidate success rates and reduce treatment costs.

Bringing a drug to market takes on average 13 years and close to $2 billion, said Hugo Ceulemans, project leader of MELLODDY — a new drug-discovery consortium that hopes to eliminate the tradeoff between data sharing and security.

The project will use cloud-based NVIDIA GPUs and a distributed approach known as federated learning to train AI models on data from multiple pharmaceutical companies while preserving IP.

An acronym for Machine Learning Ledger Orchestration for Drug Discovery, MELLODDY brings together 17 partners: 10 leading pharmaceutical companies, such as Amgen, Bayer, GSK, Janssen Pharmaceutica and Novartis; top European universities KU Leuven and the Budapest University of Technology and Economics; four trailblazing startups; and NVIDIA’s AI computing platform.

Each pharmaceutical partner will use its own cluster of NVIDIA V100 Tensor Core GPUs hosted on Amazon Web Services. MELLODDY developers will create a distributed deep learning model that can travel among these distinct cloud clusters, training on annotated data for an unprecedented 10 million chemical compounds.

Individual pharmaceutical companies will be able to fine-tune the AI model, tailoring it to their specific field of inquiry. As part of the data security mission of MELLODDY, each organization will keep its research projects confidential.

“We’re looking forward to becoming better at virtualizing drug discovery to bring more efficient, efficacious and safer therapies to patients,” said Ceulemans, scientific director of Discovery Data Sciences at Janssen Pharmaceutica. “When it comes to machine learning and data science, there’s no single industry that can afford to stand on the sidelines.”

Federated Learning: A New Frontier

MELLODDY aims to demonstrate how federated learning techniques could give pharmaceutical partners the best of both worlds: the ability to leverage the world’s largest collaborative drug compound dataset for AI training without sacrificing data privacy.

The $20 million project will run for three years, at which point the consortium will share learnings with the public.

Federated learning is a method of decentralized machine learning in which training data doesn’t have to be pooled into a single aggregating server. Instead, the machine learning model learns from data stored at different geographic locations, ensuring that each pharmaceutical company’s private dataset stays within its own secure infrastructure.

“The data is never put at risk,” said Mathieu Galtier, project coordinator for Owkin, a startup developing MELLODDY’s federated learning system. “The data sits in its own GPU server, while the algorithms travel from one to the other for training.”
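
For a sense of the mechanics Galtier describes, here is a minimal federated averaging sketch: copies of a shared model train on each partner’s local data, and only the updated weights travel back to be averaged. The toy model, random data and three-site setup are illustrative assumptions, not MELLODDY’s actual system:

```python
import copy
import torch
import torch.nn as nn

def local_update(shared_model, loader, epochs=1, lr=1e-3):
    """Train a copy of the shared model on one partner's private data; only weights leave the site."""
    local = copy.deepcopy(shared_model)
    optimizer = torch.optim.SGD(local.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for x, y in loader:                  # the data itself never leaves the partner's servers
            optimizer.zero_grad()
            loss_fn(local(x), y).backward()
            optimizer.step()
    return local.state_dict()

def federated_average(state_dicts):
    """Average the partners' weight updates into the next shared model."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg

def make_toy_loader(n=256):
    """Stand-in for a partner's private dataset: random compound fingerprints and activity labels."""
    x = torch.randn(n, 2048)
    y = (torch.rand(n, 1) > 0.5).float()
    return torch.utils.data.DataLoader(torch.utils.data.TensorDataset(x, y), batch_size=32)

shared = nn.Sequential(nn.Linear(2048, 256), nn.ReLU(), nn.Linear(256, 1))
partner_loaders = [make_toy_loader() for _ in range(3)]   # e.g. three partner sites
for _ in range(10):                                       # each round, the model "visits" every site
    updates = [local_update(shared, loader) for loader in partner_loaders]
    shared.load_state_dict(federated_average(updates))
```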

Pharmaceutical datasets consist of historical information about different chemical compounds and their attributes. With the versatile MELLODDY federated learning model, each partner will be able to create anonymized queries about specific drug compounds. The query will be sent to each of the partner organizations’ data repositories to identify any potential matches.

MELLODDY will also employ a blockchain ledger system so pharmaceutical partners can maintain visibility and control over the use of their datasets.

By enabling pharmaceutical companies to learn from each other’s findings without providing traditional competitors direct access to proprietary datasets, the consortium aims to improve the predictive performance of AI-based drug discovery. With smarter models comes speedier and cheaper drug development.

Perfect Harmony: Pharma’s MELLODDY Consortium Joins Forces with NVIDIA to Supercharge AI Drug Discovery

Pharmaceutical companies have traditionally kept their data close to the vest because collaboration’s side effects may include compromising intellectual property and losing the edge over competitors.

But sharing data has major perks: The more data a pharma company has at its disposal, the better equipped its researchers are to quickly identify and develop promising new drugs. This can ultimately improve drug candidate success rates and reduce treatment costs.

Bringing a drug to market takes on average 13 years and close to $2 billion, said Hugo Ceulemans, project leader of MELLODDY — a new drug-discovery consortium that hopes to eliminate the tradeoff between data sharing and security.

The project will use cloud-based NVIDIA GPUs and a distributed approach known as federated learning to train AI models on data from multiple pharmaceutical companies while preserving IP.

An acronym for Machine Learning Ledger Orchestration for Drug Discovery, MELLODDY brings together 17 partners: 10 leading pharmaceutical companies, such as Amgen, Bayer, GSK, Janssen Pharmaceutica and Novartis; top European universities KU Leuven and the Budapest University of Technology and Economics; four trailblazing startups; and NVIDIA’s AI computing platform.

Each pharmaceutical partner will use its own cluster of NVIDIA V100 Tensor Core GPUs hosted on Amazon Web Services. MELLODDY developers will create a distributed deep learning model that can travel among these distinct cloud clusters, training on annotated data for an unprecedented 10 million chemical compounds.

Individual pharmaceutical companies will be able to finetune the AI model, tailoring it to their specific field of inquiry. As part of the data security mission of MELLODDY, each organization will keep its research projects confidential.

“We’re looking forward to becoming better at virtualizing drug discovery to bring more efficient, efficacious and safer therapies to patients,” said Ceulemans, scientific director of Discovery Data Sciences at Janssen Pharmaceutica. “When it comes to machine learning and data science, there’s no single industry that can afford to stand on the sidelines.”

Federated Learning: A New Frontier

MELLODDY aims to demonstrate how federated learning techniques could give pharmaceutical partners the best of both worlds: the ability to leverage the world’s largest collaborative drug compound dataset for AI training without sacrificing data privacy.

The $20 million project will run for three years, at which point the consortium will share learnings with the public.

Federated learning is a method of decentralized machine learning in which training data doesn’t have to be pooled into a single aggregating server. Instead, the machine learning model learns from data stored at different geographic locations, ensuring that each pharmaceutical company’s private dataset stays within its own secure infrastructure.

“The data is never put at risk,” said Mathieu Galtier, project coordinator for Owkin, a startup developing MELLODDY’s federated learning system. “The data sits in its own GPU server, while the algorithms travel from one to the other for training.”

Pharmaceutical datasets consist of historical information about different chemical compounds and their attributes. With the versatile MELLODDY federated learning model, each partner will be able to create anonymized queries about specific drug compounds. The query will be sent to each of the organization’s data repositories to identify any potential matches.

MELLODDY will also employ a blockchain ledger system so pharmaceutical partners can maintain visibility and control over the use of their datasets.

By enabling pharmaceutical companies to learn from each other’s findings without providing traditional competitors direct access to proprietary datasets, the consortium aims to improve the predictive performance of AI-based drug discovery. With smarter models comes speedier and cheaper drug development.

The post Perfect Harmony: Pharma’s MELLODDY Consortium Joins Forces with NVIDIA to Supercharge AI Drug Discovery appeared first on The Official NVIDIA Blog.

Forget Storming Area 51, AI’s Helping Astronomers Scour the Skies for Habitable Planets

Imagine staring into the high-beams of an oncoming car. Now imagine trying to pick out a speck of dust in the glare of the headlights.

That’s the challenge Olivier Guyon and Damien Gratadour face as they try to find the dull glint of an exoplanet — a planet orbiting a star outside our solar system — beside the bright light of its star.

The pair — Guyon is an instrument developer for Japan’s Subaru Telescope and an astronomer at the University of Arizona, and Gratadour is an associate professor at the Observatoire de Paris and an instrument scientist at the Australian National University — spoke with AI Podcast host Noah Kravitz about how they’re using GPU-powered extreme adaptive optics in very large telescopes to image nearby habitable planets.

Sighting an exoplanet is difficult because its light is “millions or a billion times fainter than the star around which it orbits,” according to Guyon.

Then comes the issue of the Earth’s atmosphere. The telescopes that Guyon and Gratadour work with are based on the ground. So their images experience atmospheric turbulence. The effect, Gratadour explains, is “similar to what you see above a hot road during the summer.”

Adaptive optics algorithms — accelerated by GPUs — can correct for this turbulence by using high performance computing, sharpening an image in real time. These corrections occur through a mechanical process called compensation, in which a deformable mirror behind the focus of the telescope is adjusted every millisecond. The result is a near-perfect image.
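
In control terms, the loop looks roughly like this: read the wavefront sensor, map the residual error to actuator commands through a calibrated reconstructor, and nudge the deformable mirror with a small gain, about once per millisecond. The sketch below is a toy integrator with made-up matrix sizes and gain, not the Subaru Telescope’s real controller:

```python
import numpy as np

n_sensors, n_actuators = 1200, 800   # illustrative wavefront-sensor and deformable-mirror sizes
gain = 0.4                           # integrator gain; too high and the loop oscillates
reconstructor = 1e-3 * np.random.randn(n_actuators, n_sensors)  # stand-in for the calibrated control matrix
dm_commands = np.zeros(n_actuators)

def read_wavefront_sensor():
    """Placeholder for the sensor readout: residual slopes measured each millisecond."""
    return np.random.randn(n_sensors)

for step in range(1000):                 # about one second of closed-loop operation at 1 kHz
    slopes = read_wavefront_sensor()     # turbulence the mirror has not yet corrected
    correction = reconstructor @ slopes  # map sensor measurements into actuator space
    dm_commands -= gain * correction     # integrate: nudge the mirror toward a flat wavefront
    # apply_to_deformable_mirror(dm_commands)  # hardware call in a real system, omitted here
```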

Astronomers can use this image to separate the faint light of an exoplanet from its star. Then, they can take a spectrum, or a graph of the different colors of light coming from the planet. Spectra can reveal the planet’s composition along with the presence of “water, methane and even plant life,” according to Guyon.

Guyon works on the Subaru Telescope in Japan, but this process is occurring at several very large telescopes. “Multiple teams are essentially racing,” he says. “We are all extremely impatient, because we know the planets are out there and we want to be able to image it.”

Gratadour is working on the next generation of telescopes, which should be ready for use in 2025. Today’s very large telescopes have primary mirrors 8 to 10 meters in diameter. The next generation of telescopes will be 4 to 5x as large and will require roughly 25x as much computing power as their predecessors.

Temperate exoplanets bring up the possibility of extraterrestrial life. Asked about the existence of aliens, Guyon and Gratadour say there’s almost certainly life beyond our planet. The real questions to ask, Guyon says: “How frequent is it? How frequently does it evolve from bacteria or very simple forms of life to things that are much more complex like us? What does it become?”

To learn more about the work of scientists like Guyon and Gratadour, check out the websites of very large telescopes like the Subaru and Gemini.

Help Make the AI Podcast Better

Have a few minutes to spare? Fill out this short listener survey. Your answers will help us make a better podcast.

How to Tune in to the AI Podcast

Get the AI Podcast through iTunes, Castbox, DoggCatcher, Overcast, PlayerFM, Podbay, PodBean, Pocket Casts, PodCruncher, PodKicker, Stitcher, Soundcloud and TuneIn. Your favorite not listed here? Email us at aipodcast [at] nvidia [dot] com.

Featured image credit: NASA/JPL-Caltech

Ultrasounds Where You Need Them: How AI Is Improving Diagnoses at Point of Care

In medical emergencies, a quick diagnosis based on the information at hand can be a matter of life or death.

The same can be true for non-emergencies. When limited medical equipment is available, delays in diagnosis can turn these situations into emergencies.

Ultrasound enables the accurate, efficient and non-invasive diagnosis of a host of ailments, including appendicitis, heart abnormalities and many urological and gynecological conditions.

But most emergency responders and medical professionals either aren’t trained or aren’t equipped to use the technology.

DESKi, a member of the NVIDIA Inception program, based in Bordeaux, France, is using AI to make ultrasound technologies more effective at the point of care for these personnel and their patients.

“The fact that two-thirds of the world population have no access to medical imaging technologies is a public health issue,” said Bertrand Moal, CEO at DESKi. “Ultrasound is non-invasive, affordable and can be used to diagnose ailments of multiple organs, which makes it the perfect tool to support diagnosis at the point of care, by non-specialists.”

Benefits of Ultrasound

To help those on the medical front lines make more accurate diagnoses and better informed decisions about patient care, DESKi has created DeepEcho.

This system combines deep learning algorithms, trained on NVIDIA DGX Station, and cutting-edge handheld ultrasound devices, which can be linked up to mobile phones and tablets, to deliver the expertise of cardiac health specialists in emergency situations.

Using a wealth of training data from leading cardiology units, DESKi has developed a series of neural networks that can determine whether or not the DeepEcho’s ultrasound probe is in the correct position for acquiring accurate and insightful views of the heart.

The company is also training its algorithms to automatically measure the left ventricle ejection fraction, which can help diagnose heart failure.
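
Ejection fraction itself is a simple ratio, the share of blood the left ventricle pumps out on each beat; the hard part DESKi is automating is measuring the two volumes from ultrasound. A minimal illustration of the formula, with made-up volumes:

```python
def ejection_fraction(end_diastolic_volume_ml, end_systolic_volume_ml):
    """LVEF (%) = (EDV - ESV) / EDV * 100."""
    return (end_diastolic_volume_ml - end_systolic_volume_ml) / end_diastolic_volume_ml * 100

# Made-up measurements: an EDV of 120 ml and an ESV of 50 ml give roughly 58 percent,
# which is in the normal range; values below about 40 percent are commonly
# associated with heart failure.
print(ejection_fraction(120, 50))
```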

“By deploying ultrasound in the field with AI software, we’re helping to bring medical imaging expertise to those who need it most,” said Moal.

 

Protecting Patient Privacy

To train its deep learning algorithms, DESKi needs to collect high-quality data that has been reviewed and interpreted by cardiology experts.

Recently, the startup entered into a framework agreement with Bordeaux University Hospital for the development of AI projects, including DeepEcho.

Over 20,000 cardiac ultrasound examinations are performed by experienced cardiologists every year at the hospital. DESKi uses anonymized data from these examinations to train its deep learning algorithms.

To accelerate the training, DESKi turned to the power of NVIDIA DGX Station. The portability of the deskside supercomputer enabled them to build the initial framework in-house; when it was time to deploy, they transported the system to the hospital itself.

“By deploying NVIDIA’s DGX Station onsite in the hospital, we’re able to combine cutting-edge AI technology with cardiology expertise, all while ensuring that patient data is secure and never comes off premises,” noted Victor Ferrand, co-founder and CTO at DESKi.

In the future, DESKi plans to expand its tools to other specialties such as gynecology, gastroenterology and urology.

Learn more about their work with Bordeaux University Hospital in our webinar “Deep Learning for Automatic Cardiac Ultrasound Analysis.”

AI-Based Virtualitics Demystifies Data Science with VR

The words “data science” often inspire feelings of dread or confusion.

But Virtualitics, an AI-based analytics platform, is bringing creativity and excitement to the field through machine learning and immersive visualization.

Head of Machine Learning Projects Aakash Indurkhya spoke with AI Podcast host Noah Kravitz about why combining AI and VR can be so useful.

“Just comparing two variables against each other is no longer good enough,” Indurkhya says.

And as datasets grow, it is no longer intuitive what variables should be plotted against each other.

Even expert data scientists could take hours — or even weeks — trying to ascertain the most useful visualizations and models to make sense of the data.

Virtualitics Immersive Platform, or VIP, has a two-pronged approach to simplifying data science.

First, there are embedded machine learning routines, which include a Smart Mapping tool that determines the best way of plotting data and identifies drivers of the client’s Key Performance Indicator, or KPI.

Indurkhya explains that, using AI, the software “immediately ranks your features in terms of which ones matter to your KPI and then also automatically generates a visualization so you can start looking at how those different combinations of features actually shape the relationship with the KPI.”
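
As a rough sketch of that kind of automated ranking, one can fit a model of the KPI against all candidate features, rank the features by importance and plot the top ones against the KPI. The example below uses generic scikit-learn on synthetic data and is not Virtualitics’ Smart Mapping implementation:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in dataset: candidate feature columns plus a KPI column.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(500, 5)),
                  columns=["ad_spend", "price", "region_score", "seasonality", "noise"])
df["kpi"] = 3 * df["ad_spend"] - 2 * df["price"] + rng.normal(scale=0.5, size=500)

features = df.drop(columns="kpi")
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(features, df["kpi"])

# Rank features by how much they matter to the KPI, then plot the top two against it.
ranking = pd.Series(model.feature_importances_, index=features.columns).sort_values(ascending=False)
print(ranking)

top_x, top_y = ranking.index[:2]
plt.scatter(df[top_x], df["kpi"], alpha=0.5, label=top_x)
plt.scatter(df[top_y], df["kpi"], alpha=0.5, label=top_y)
plt.xlabel("feature value")
plt.ylabel("KPI")
plt.legend()
plt.show()
```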

The second part of Virtualitics’ solution is its Shared Virtual Office, or SVO, available both on desktop and in virtual reality. The technology is built on top of the Unity engine and works with all major VR providers, such as Oculus and Windows MR devices.

VIP not only creates interactive and colorful visuals, but allows clients to have their own avatars through which they can, “like Iron Man,” collaboratively interact with their data.

For those who are less experienced with data science, this bridges the gap created by a lack of formal training, allowing them to identify clusters or detect anomalies on their own in a matter of seconds. And for expert data scientists, who deal with high demand and complex tasks, it gives them the technology to demonstrate to stakeholders what they are doing.

In the future, Virtualitics will be working on visualizing networks, which are the common thread between technologies like IoT, blockchain, and social media.

“Network data is all around us but we lack intuitive and visual tools to properly make sense of them,” Indurkhya says. “With VR, we get the depth perception and interaction that’s lost when constrained to 2D screens. This is going to change how people think about networks.”

The applications range from improving disease classification and monitoring cybersecurity threats to identifying bad actors in social networks.

To learn more about Virtualitics, sign up for a demo, or watch their webinars, visit their website.

Help Make the AI Podcast Better

Have a few minutes to spare? It’d help us if you filled out this short listener survey.

Your answers will help us learn more about our audience, what we can do better and what we’re doing right, so we can deliver podcasts that meet your needs.

How to Tune in to the AI Podcast

Our AI Podcast is available through iTunes, Castbox, DoggCatcher, Google Play Music, Overcast, PlayerFM, Podbay, PodBean, Pocket Casts, PodCruncher, PodKicker, Stitcher, Soundcloud and TuneIn.

If your favorite isn’t listed here, email us at aipodcast [at] nvidia [dot] com.

 

A Pigment of Your Imagination: Over Half-Million Images Created with GauGAN AI Art Tool

From amateur doodlers to leading digital artists, creators are coming out in droves to produce masterpieces with NVIDIA’s most popular research demo: GauGAN.

The AI painting web app — which turns rough sketches into stunning, photorealistic scenes — was built to demonstrate NVIDIA research on harnessing generative adversarial networks.

More than 500,000 images have been created with GauGAN since the beta version was made publicly available just over a month ago on the NVIDIA AI Playground.

Art directors and concept artists from top film studios and video game companies are among the creative professionals already harnessing GauGAN as a tool to prototype ideas and make rapid changes to synthetic scenes.

“GauGAN popped on the scene and interrupted my notion of what I might be able to use to inspire me,” said Colie Wertz, a concept artist and modeler whose credits include Star Wars, Transformers and Avengers movies. “It’s not something I ever imagined having at my disposal.”

Wertz, using a GauGAN landscape as a foundation, recently created an otherworldly ship design shared on social media.

AI Work of Art: Senior concept artist Colie Wertz created this ship design with a GauGAN landscape as a foundation.

“Real-time updates to my environments with a few brush strokes is mind-bending. It’s like instant mood,” said Wertz, who uses NVIDIA RTX GPUs for his creative work. “This is forcing me to reinvestigate how I approach a concept design.”

Attendees of this week’s SIGGRAPH conference can experience GauGAN for themselves in the NVIDIA booth, where it’s running on an HP RTX workstation powered by NVIDIA Quadro RTX GPUs that feature Tensor Cores. NVIDIA researchers will also present GauGAN during a live event at the prestigious computer graphics show.

Users can share their GauGAN creations on Twitter with #SIGGRAPH2019, #GauGAN and @NVIDIADesign to enter our AI art contest, judged by Wertz. The winner will receive an NVIDIA Quadro RTX 6000 GPU.

Unleash Your AI Artist

GauGAN, named for post-Impressionist painter Paul Gauguin, creates photorealistic images from segmentation maps, which are labeled sketches that depict the layout of a scene.

People can use paintbrush and paint bucket tools to design their own landscapes with labels including river, grass, rock and cloud. A style transfer algorithm allows creators to apply filters, modifying the color composition of a generated image, or turning it from a photorealistic scene to a painting.
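
Under the hood, the painted sketch is a segmentation map: a grid of integer class labels, one ID per material, that the generator conditions on. The toy example below builds such a map and feeds its one-hot encoding to a placeholder network; the label IDs and the tiny generator are stand-ins, not NVIDIA’s released SPADE model:

```python
import numpy as np
import torch
import torch.nn.functional as F

# Label IDs here are arbitrary; the real label set ships with the released GauGAN/SPADE code.
SKY, WATER, ROCK, GRASS = 0, 1, 2, 3
H, W, n_classes = 256, 256, 4

# "Paint" a simple scene: sky on top, water below, a rock outcrop, grass in one corner.
seg = np.full((H, W), SKY, dtype=np.int64)
seg[H // 2:, :] = WATER
seg[H // 2 - 20:H // 2 + 40, 60:120] = ROCK
seg[H - 60:, W - 90:] = GRASS

# The generator consumes the map as a one-hot tensor of shape (1, n_classes, H, W).
one_hot = F.one_hot(torch.from_numpy(seg), n_classes).permute(2, 0, 1).unsqueeze(0).float()

# Placeholder generator standing in for a SPADE-style network; in the real model the
# segmentation map modulates normalization layers at every scale of the generator.
generator = torch.nn.Sequential(
    torch.nn.Conv2d(n_classes, 3, kernel_size=3, padding=1),
    torch.nn.Tanh(),
)
fake_image = generator(one_hot)   # (1, 3, 256, 256) image with values in [-1, 1]
print(fake_image.shape)
```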

“As researchers working on image synthesis, we’re always pursuing new techniques to create images with higher fidelity and higher resolution,” said NVIDIA researcher Ming-Yu Liu. “That was our original goal for the project.”

But when the demo was introduced at our GPU Technology Conference in Silicon Valley, it took on a life of its own. Attendees flocked to a tablet on the show floor where they could try it out for themselves, creating stunning scenes of everything from sun-drenched ocean landscapes to idyllic mountain ranges shrouded by clouds.

The latest iteration of the app, on display at SIGGRAPH, lets users upload their own filters to layer onto their masterpieces — adopting the lighting of a perfect sunset photo or emulating the style of a favorite painter.

They can even upload their own landscape images. The AI will convert source images into a segmentation map, which can then be used as a foundation for the user’s artwork.

“We want to make an impact with our research,” Liu said. “This work creates a channel for people to express their creativity and create works of art they wouldn’t be able to do without AI. It’s enabling them to make their imagination come true.”

While the researchers anticipated game developers, landscape designers and urban planners to benefit from this technology, interest in GauGAN has been far more widespread — including from a healthcare organization exploring its use as a therapeutic, stress-mitigating tool for patients.

AI That Captures the Imagination 

Developed using the PyTorch deep learning framework, the neural network behind GauGAN was trained on a million images using the NVIDIA DGX-1 deep learning system. The demo shown at GTC ran on an NVIDIA TITAN RTX GPU, while the web app is hosted on NVIDIA GPUs through Amazon Web Services.

Liu developed the deep neural network and accompanying app along with researchers Taesung Park, Ting-Chun Wang and Jun-Yan Zhu.

The team has publicly released source code for the neural network behind GauGAN, making it available for non-commercial use by other developers to experiment with and build their own applications.

GauGAN is available on the NVIDIA AI Playground for visitors to experience the demo firsthand.

A Pigment of Your Imagination: GauGAN AI Art Tool Receives “Best of Show,” “Audience Choice” Awards at SIGGRAPH

NVIDIA’s viral real-time AI art application, GauGAN, won two major SIGGRAPH awards on Tuesday.

From amateur doodlers to leading digital artists, creators are coming out in droves to produce masterpieces with NVIDIA’s most popular research demo: GauGAN.

And the demo has been a smash hit at the SIGGRAPH professional graphics conference as well, winning both the “Best of Show” and “Audience Choice” awards at the conference’s Real Time Live competition after NVIDIA’s Ming-Yu Liu, Chris Hebert, Gavriil Klimov and UC Berkeley researcher Taesung Park presented the application to enthusiastic applause.

The AI painting web app — which turns rough sketches into stunning, photorealistic scenes — was built to demonstrate NVIDIA research on harnessing generative adversarial networks.

More than 500,000 images have been created with GauGAN since the beta version was made publicly available just over a month ago on the NVIDIA AI Playground.

Art directors and concept artists from top film studios and video game companies are among the creative professionals already harnessing GauGAN as a tool to prototype ideas and make rapid changes to synthetic scenes.

“GauGAN popped on the scene and interrupted my notion of what I might be able to use to inspire me,” said Colie Wertz, a concept artist and modeler whose credits include Star Wars, Transformers and Avengers movies. “It’s not something I ever imagined having at my disposal.”

Wertz, using a GauGAN landscape as a foundation, recently created an otherworldly ship design shared on social media.

AI Work of Art: Senior concept artist Colie Wertz created this ship design with a GauGAN landscape as a foundation.

“Real-time updates to my environments with a few brush strokes is mind-bending. It’s like instant mood,” said Wertz, who uses NVIDIA RTX GPUs for his creative work. “This is forcing me to reinvestigate how I approach a concept design.”

Attendees of this week’s SIGGRAPH conference can experience GauGAN for themselves in the NVIDIA booth, where it’s running on an HP RTX workstation powered by NVIDIA Quadro RTX GPUs that feature Tensor Cores. NVIDIA researchers will also present GauGAN during a live event at the prestigious computer graphics show.

Users can share their GauGAN creations on Twitter with #SIGGRAPH2019, #GauGAN and @NVIDIADesign to enter our AI art contest, judged by Wertz. The winner will receive an NVIDIA Quadro RTX 6000 GPU.

Unleash Your AI Artist

GauGAN, named for post-Impressionist painter Paul Gauguin, creates photorealistic images from segmentation maps, which are labeled sketches that depict the layout of a scene.

People can use paintbrush and paint bucket tools to design their own landscapes with labels including river, grass, rock and cloud. A style transfer algorithm allows creators to apply filters, modifying the color composition of a generated image, or turning it from a photorealistic scene to a painting.

“As researchers working on image synthesis, we’re always pursuing new techniques to create images with higher fidelity and higher resolution,” said NVIDIA researcher Ming-Yu Liu. “That was our original goal for the project.”

But when the demo was introduced at our GPU Technology Conference in Silicon Valley, it took on a life of its own. Attendees flocked to a tablet on the show floor where they could try it out for themselves, creating stunning scenes of everything from sun-drenched ocean landscapes to idyllic mountain ranges shrouded by clouds.

The latest iteration of the app, on display at SIGGRAPH, lets users upload their own filters to layer onto their masterpieces — adopting the lighting of a perfect sunset photo or emulating the style of a favorite painter.

They can even upload their own landscape images. The AI will convert source images into a segmentation map, which can then be used as a foundation for the user’s artwork.

“We want to make an impact with our research,” Liu said. “This work creates a channel for people to express their creativity and create works of art they wouldn’t be able to do without AI. It’s enabling them to make their imagination come true.”

While the researchers anticipated game developers, landscape designers and urban planners to benefit from this technology, interest in GauGAN has been far more widespread — including from a healthcare organization exploring its use as a therapeutic, stress-mitigating tool for patients.

AI That Captures the Imagination 

Developed using the PyTorch deep learning framework, the neural network behind GauGAN was trained on a million images using the NVIDIA DGX-1 deep learning system. The demo shown at GTC ran on an NVIDIA TITAN RTX GPU, while the web app is hosted on NVIDIA GPUs through Amazon Web Services.

Liu developed the deep neural network and accompanying app along with researchers Taesung Park, Ting-Chun Wang and Jun-Yan Zhu.

The team has publicly released source code for the neural network behind GauGAN, making it available for non-commercial use by other developers to experiment with and build their own applications.

GauGAN is available on the NVIDIA AI Playground for visitors to experience the demo firsthand.

Note: This post has been updated from the original to reflect the results of Tuesday’s Real Time Live competition at SIGGRAPH.

SIGGRAPH Showcases Amazing NVIDIA Research Breakthroughs, NVIDIA Wins Best in Show Award

Get ready to dig in this week.

SIGGRAPH is here and we’re helping graphics professionals, researchers, developers and students of all kinds take advantage of the latest advances in graphics, including new possibilities in real-time ray tracing, AI, and augmented reality.

SIGGRAPH is the most important computer graphics conference in the world, and our research team and collaborators from top universities and many industries are here with us.

At the top of the list: ray tracing, using NVIDIA’s RTX platform, which fuses ray tracing, deep learning and rasterization. We’re directly involved in 34 of 50 ray tracing-related technical sessions this week — far more than any other company. And our talks are drawing luminaries from around the industry, with four technical Academy Award winners participating in NVIDIA-sponsored sessions.

Beyond the technical sessions, we’ll be showcasing new developer tools and giving attendees a first-hand look at some of our most exciting work. One great example is NVIDIA GauGAN, an interactive paint program that uses GANs (generative adversarial networks) to create works of art from simple brush strokes. Now everybody can be an artist.

Never been to the moon? A stunning new demo virtually transports visitors to the Apollo 11 landing site using never-before-shown AI pose estimation that captures their body movements in real time. This was all made possible by combining NVIDIA Omniverse technology, AI and RTX ray tracing.

The story behind all these stories: our 200-person-strong NVIDIA Research team — spread across 11 worldwide locations. The group embodies NVIDIA’s commitment to bringing innovative new ideas to customers in everything from machine learning, computer vision and self-driving cars to robotics, graphics, computer architecture and programming systems.

A Host of Papers, Talks, Tutorials

We’ll be leading or participating in six SIGGRAPH courses that detail various facets of the next-generation graphics technologies we’ve played a leading role in bringing to market.

These courses cover everything from an introduction to real-time ray tracing and the use of the NVIDIA OptiX API to Monte Carlo and quasi-Monte Carlo sampling techniques, the latest path tracing techniques, open problems in real-time rendering, and the future of ray tracing as a whole.

The common denominator: RTX. The real-time ray-tracing capabilities RTX unleashes offer far more realistic lighting effects than traditional real-time rendering techniques.

We’re also sponsoring seven courses on topics ranging from deep learning for content creation and real-time rendering to GPU ray tracing for film and design.

And we’re presenting technical papers that detail how our latest near-eye AR display demo works, and that take the next leap in denoising Monte Carlo rendering with convolutional neural networks — a cornerstone of AI — greatly reducing the time required to generate realistic images.

The Eyes Have It: Prescription-Embedded AR Display Wins Best in Show Award

You’ll be able to get hands-on with our latest technology in SIGGRAPH’s Emerging Technologies area. That’s where we have a pair of wearable augmented reality display technologies you need to see, especially if you don’t see very well without regular eyeglasses.

The first, “Prescription AR,” is a prescription-embedded AR display that won a SIGGRAPH Best in Show Emerging Technology award Monday.

The display is many times thinner and lighter than current-generation AR devices and has a wider field of view. Virtual objects appear throughout the natural field of view instead of being clustered in the center, and if you wear corrective optics, your prescription is built right into the display. This is much closer to the goal of comfortable, practical and socially acceptable AR displays than anything currently available.

 

The second research demonstration, “Foveated AR,” is a headset that adapts to your gaze in real time using deep learning. It adjusts the resolution of the images it displays and their focal depth to match wherever you are looking and gives both sharper images and a wider field of view than any previous AR display.

To do this, it combines two different displays per eye: a high-resolution display with a small field of view that presents images to the portion of the human retina where visual acuity is highest, and a low-resolution display for peripheral vision. The result is a high-quality visual experience with reduced power and computation.
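
The hardware prototype uses two physical displays, but the same principle appears in software as foveated rendering, where sampling density falls off with angular distance from the tracked gaze point. A toy sketch with illustrative thresholds, not NVIDIA’s prototype:

```python
import math

def shading_rate(degrees_from_gaze_point):
    """Relative sampling rate for a screen region at a given angular distance from the gaze point."""
    if degrees_from_gaze_point < 5:    # foveal region: render at full resolution
        return 1.0
    if degrees_from_gaze_point < 20:   # near periphery: a quarter of the samples
        return 0.25
    return 0.0625                      # far periphery: coarse resolution is enough

def degrees_from_gaze(px, py, gaze_x, gaze_y, pixels_per_degree=30):
    """Convert on-screen distance from the tracked gaze point into visual degrees."""
    return math.hypot(px - gaze_x, py - gaze_y) / pixels_per_degree

# Example: a pixel 600 px from the gaze point at 30 px/deg sits 20 degrees out,
# so it falls in the far periphery and gets the coarsest rate.
print(shading_rate(degrees_from_gaze(1600, 500, 1000, 500)))
```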

TITAN RTX Giveaway

Finally, NVIDIA is thanking the student volunteer community at SIGGRAPH with a daily giveaway of a TITAN RTX GPU while the exhibit hall is open. These students are the future of one of the world’s most vibrant professional communities, a community we’re privileged to be a part of.


Toronto AI is a social and collaborative hub to unite AI innovators of Toronto and surrounding areas. We explore AI technologies in digital art and music, healthcare, marketing, fintech, VR, robotics and more. Toronto AI was founded by Dave MacDonald and Patrick O'Mara.