
Lean, Green, AI Machines: 5 Projects Using GPUs for a Better Planet

Earth Day is a good day for AI.

And the benefits are felt all year, all around the world, as deep learning and NVIDIA GPUs aid our understanding of ecosystems and climate patterns, help preserve plants and animals, and improve waste management.

Here are five ways companies, researchers and scientists are using GPUs for a better planet:

Into the Woods AI Goes

Whether in a rainforest or urban green spaces, life on Earth relies heavily on trees. But manually monitoring forested areas to track potential risks to plant health is time consuming and costly.

A Portugal-based startup is using AI to monitor forests from satellite imagery in a fraction of the time currently required. It uses NVIDIA GPUs in-house and in the cloud to process some 100TB of new satellite data daily, helping clients analyze tree species, growth and productivity.

A Cloudy Picture of Climate Change

The Earth is warming, but at what rate? Climate models vary in their projections of global temperature rise in the coming years, from 1.5 degrees to more than 3 degrees Celsius by 2100. This variation is largely due to the difficulty of representing clouds in global climate models.

Neural networks can be used to address this cloud resolution challenge, researchers from Columbia University, UC Irvine and the University of Munich found. Developed using an assortment of NVIDIA GPUs, their deep learning model improved performance and provided better predictions for precipitation extremes than the original climate model. This detailed view can improve scientists’ ability to predict regional climate impact.

One Person’s Trash Is an AI’s Treasure

Trying to correctly sort the remains of a lunch into compost, recycling and landfill is a hard enough task for the average person. But if different types of waste are collected together and sent to recycling centers, the trash often can’t be sorted and it all ends up in a landfill. Only 29 percent of the municipal waste generated in Europe in 2017 was recycled.

Smart recycling startup Bin-e hopes to raise the recycling rate with deep learning. Using the NVIDIA Jetson TX1, the startup has created a smart recycling bin that automatically recognizes, sorts and compresses waste. Its AI, trained on NVIDIA TITAN Xp GPUs, takes an image of each piece of trash and determines whether it’s paper, aluminum, plastic or e-waste before depositing it into the correct bin.

Sequencing on Land and at Sea

DNA sequencing isn’t just for the human genome. Nanopore sequencing, one such technique, can be used to analyze the genomes of plants and microorganisms. UK startup Oxford Nanopore Technologies is using recurrent neural networks to help scientists detect pathogens in cassava plant genomes.

It’s also analyzed the DNA of microbial sea life off the coast of Alaska, giving researchers a better understanding of ocean biodiversity and the effects of climate change on marine microorganisms.

Oxford Nanopore’s MinIT hand-held AI supercomputer is powered by NVIDIA AGX, enabling researchers to run sequence analysis in the field.

Whale, AI’ll Say

Due to centuries of whaling by humans, just 500 North Atlantic right whales still exist. Those left have been forced by climate change to adopt a new migration path — exposing them to a new threat: commercial shipping vessels that can accidentally strike whales as they pass through shipping lanes.

Autonomous drone company Planck Aerosystems is working with Transport Canada, the national transportation department, to identify whales from aerial drone imagery with AI and NVIDIA GPUs. The tool can help biologists narrow down thousands of images to identify the few containing whales, so ships can slow down and avoid the endangered creatures.

Learn more about how GPU technology is driving applications with social impact, including environmental projects.

The post Lean, Green, AI Machines: 5 Projects Using GPUs for a Better Planet appeared first on The Official NVIDIA Blog.

How to Train Your Robot: Robot Learning Guru Shares Advice for College Students

Whether you’re a robot or a college student, it helps to start with the fundamentals, says a leading robotics researcher.

While robots can do amazing things, compare even the most advanced robots to a three-year-old and they can come up short.

Pieter Abbeel, a professor at the University of California, Berkeley, and cofounder of an AI company, has pioneered the idea that deep learning could be the key to bridging that gap: creating robots that can learn how to move through the world more fluidly and naturally.

Or as Abbeel refers to it, building “brains for robots.”

Teaching robots new skills is similar to taking classes in college, Abbeel explained on the latest episode of the AI Podcast.

While college courses may not immediately qualify a student for a job, the classes are still important in helping students develop fundamental skills they can apply in all kinds of situations.

Abbeel uses the same approach in his robotics research. At last month’s GPU Technology Conference, he showed a robot learning to navigate a building it had never been in before. His talk will be available here starting May 1.

The robot was able to do that because it was applying principles it had learned by navigating other buildings. “What were [the robot’s] courses in its college curriculum were the many other buildings that it was also learning to navigate,” he said. “So it learned a generic skill of navigating new buildings.”

Similarly, college students should look for classes that can teach them skills they can apply broadly.

Getting Physical

For younger students interested in getting a head start in AI and deep learning, Abbeel encourages them to look into physics.

“When I think about the foundations, the things you would learn early on, that will help a lot — they’re essentially mathematics and computer science and physics,” said Abbeel.

“And the reason I say ‘physics,’ which might be slightly more unexpected in the lineup, is that physics is all about looking at the world and building abstractions of how the world works,” he said.

Abbeel also recommends getting involved in research.

“It’s a lot about taking initiative, trying things,” Abbeel said. “The research cycle is a lot about just trying things people haven’t tried before and trying them quickly and understanding how to simplify things.”

How to Tune into the AI Podcast

Our AI Podcast is available through iTunes, Castbox, DoggCatcher, Google Play Music, Overcast, PlayerFM, Podbay, PodBean, Pocket Casts, PodCruncher, PodKicker, Stitcher, Soundcloud and TuneIn.

If your favorite isn’t listed here, email us at aipodcast [at] nvidia [dot] com.


Going Against the Grain: How Lucidyne Is Revolutionizing Lumber Grading with Deep Learning

Talk about a knotty problem.

Lucidyne Technologies has been using AI since the late 1980s to detect defects in lumber products.

But no matter how much technology it’s employed, finding imperfections in wood boards — a process that’s critical to categorizing lumber and thus maximizing its value — has remained a challenge.

“This isn’t like being in a factory and scanning cogs. These are all like snowflakes,” said Patrick Freeman, CTO of the small, Corvallis, Oregon-based company. “There’s never been a knot that looks like another one.”

It’s a job tailor-made for AI, and Lucidyne has jumped in with both feet by building a cutting-edge scanning system for lumber mills that’s powered by GPU-enabled deep learning.

With lumber flying through at speeds of up to 35 mph, the company’s GradeScan system — which physically resembles a mashup of an assembly line station and an MRI machine — scans two boards a second. It detects and collects visual data on 70 different types of defects, such as knots, fire scars, pitch pockets and seams.

Lucidyne’s system detects numerous kinds of defects in lumber: orange = bark pocket; forest green = fire scar; red = live knots; bright green = dead knots; blue = pitch pockets; pink = minor seam.

It then applies a deep learning model trained on a combination of NVIDIA GPUs, with a dataset of hundreds of thousands of scanned boards across 16 tree species, all of which have been classified by a team of lumber-grading experts.

To generate the most revenue, the model’s underlying algorithm determines the optimal way to cut each board — navigating around defects measuring as little as 8/1,000th of an inch. Those instructions are then sent to the mill’s saws.

Each mill’s findings are fed back into Lucidyne’s dataset, continuously improving the accuracy and precision of its deep learning model. Thus, there’s no end to how much mills will be able to learn about the lumber they’re milling.

Unprecedented Accuracy and Precision

A typical scanning application might involve categorizing lumber into one of six grade types, with grade 1 being the most valuable, for example. After scanning a 20-foot board, Lucidyne’s system might determine that the best cut will remove a 2-foot defective section near the center, leaving two 8-foot grade 1 and 2 sections on either side, and an additional 2-foot section of trim, which might be sold to a sawdust manufacturer.
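The cut-selection step described above can be illustrated as a small dynamic program over a board divided into 1-foot segments. This is only a sketch under simplifying assumptions: the grade labels, per-foot prices and the "a piece's grade is its worst foot" rule here are hypothetical, and Lucidyne's actual optimizer is certainly far more sophisticated.

```python
def best_cuts(grades, price):
    """Choose cut points to maximize the total value of a board.

    grades: per-foot grade along the board; larger number = worse
            (here 1 = grade 1, 3 = trim/defective).
    price:  dollars per foot for each grade.
    Returns (max_value, pieces) where each piece is a (start, end) span in feet.
    """
    n = len(grades)
    best = [0.0] + [float("-inf")] * n   # best[i] = max value of first i feet
    prev = [0] * (n + 1)                 # where the last piece started
    for i in range(1, n + 1):
        for j in range(i):
            piece_grade = max(grades[j:i])            # worst foot dominates
            value = best[j] + price[piece_grade] * (i - j)
            if value > best[i]:
                best[i] = value
                prev[i] = j
    # Walk back through prev[] to recover the chosen pieces.
    pieces, i = [], n
    while i > 0:
        pieces.append((prev[i], i))
        i = prev[i]
    return best[n], pieces[::-1]

# A 20-foot board with a 2-foot defective stretch near the center:
grades = [1] * 9 + [3] * 2 + [1] * 9
value, pieces = best_cuts(grades, {1: 4.0, 2: 2.5, 3: 0.25})
# value = 72.5, pieces = [(0, 9), (9, 11), (11, 20)]:
# two grade-1 sections plus a 2-foot trim piece, as in the example above.
```

The dynamic program considers every possible last cut point, so it naturally trades off merging segments (longer pieces) against isolating defects (higher grade per piece).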

This level of detail separates Lucidyne from the competition by enabling mills to drastically improve the precision of their lumber-grading efforts.

“Going to deep learning has allowed us to be a lot more accurate, and our customers produce packs that are 2 percent below or above grade,” said Dan Robin, Lucidyne’s software engineering manager. “No one else is coming even close to that.”

Lucidyne’s GradeScan system.

Lucidyne started deploying GradeScan systems, powered by its Perceptive Sight software, in 2017, with each unit performing inference on NVIDIA P4 GPUs. The company is now deploying systems with newer NVIDIA T4 GPUs.

Freeman said the new system is delivering 16x the data processing speed, and at a higher image resolution to boot.

The upshot is that Lucidyne’s decision to travel a deep learning path toward increasingly detailed identification of defects has paid off exactly as it hoped.

Raising the Bar

“We wanted to up our game,” said Freeman. “We sought to improve our accuracy on currently detected defects, to correctly classify defects we had never been able to call before, while at the same time delivering more timely solutions and to a larger customer base.”

To that end, the company is working with NVIDIA to develop customized software that extends fine-grain inferencing capabilities using semantic segmentation.

In the meantime, Lucidyne is riding every wave of increased computing power to zoom in on smaller and more subtle defects. It has recently begun grading redwood, which is much harder to scan because of its color variations. It’s also looking to expand into hardwoods and eventually hopes to tackle other challenges faced by mills.

All of this innovation has Lucidyne’s technical leaders feeling that they’re onto something bigger. As a result, they have an eye on disrupting other sectors where inspection of organic materials is involved.

Said Freeman, “What we’re doing that we think is unique is taking industrial deep learning inspection to the next level.”


How UnitedHealth Group Is Infusing Deep Learning Into Healthcare Services

In a massive healthcare organization, even a small improvement in workflow can translate to major gains in efficiency. That means lower costs for the healthcare provider and better, faster care for patients.

UnitedHealth Group, one of the largest healthcare companies in the U.S., is turning to GPU-powered AI for these kinds of enhancements. In a talk at the GPU Technology Conference last month, two of the organization’s AI developers shared how it’s adopting deep learning for a variety of applications — from prior authorization of medical procedures to directing phone calls.

“The datasets required to solve these problems are enormous,” said Dima Rekesh, senior distinguished engineer at Optum, the health services platform of UnitedHealth Group. “Deep learning is uniquely suited to solve some of these hard problems through its ability to parse large amounts of data.”

The key challenge for an AI to be usable is getting error rates low enough, Rekesh said. “When you develop a model, you need to cross a threshold of accuracy to the point where you can trust it — to the point where it’s a pleasant experience for someone, whether it’s a call center representative or a medical professional looking at a model’s predictions.”

Deep learning models can meet that high bar, he says.

“AI solutions actually impact not just the operational costs for our company, but also patient services,” said Julie Zhu, chief data scientist and distinguished engineer at Optum. “We could make decisions much earlier, with more accurate treatment recommendations and earlier detection of disease.”

Optum is using a number of NVIDIA GPUs, including a cluster of V100 GPUs and the NVIDIA DGX-1, to power its deep learning work.

This Procedure Is AI Approved

Healthcare providers often need prior authorization, or advance approval from a patient’s insurance plan, before moving forward with a procedure or filling a prescription. Manually approving procedures currently costs Optum hundreds of labor hours and millions of dollars a year.

In addition to checking whether or not a patient’s insurance plan covers a treatment, the healthcare provider must gather information from several sources to confirm that it’s necessary for a given patient to have a procedure or take a particular medication. With deep learning models, much of this decision-making could eventually be done automatically.

Zhu and her colleagues are developing neural networks that can conduct prior authorization in real time. The AI is currently in production and is being benchmarked against the manual process.

The team found its deep learning model outperforms the traditional machine learning model by a significant margin on a high volume of cases.

“When you have a million cases per year, the impact is really big,” Zhu said. UnitedHealth Group serves 126 million individuals and 80 percent of U.S. hospitals. “Even a small percentage improvement in accuracy will have a huge impact.”

Deep Learning on the Other End of the Line

More than a million people dial UnitedHealth Group each day. As with any large organization, callers are greeted by an automatic voice response system — a phone tree interface with prompts like “Press 1 to reach the emergency department” or “Press 6 for radiology.”

This process can be streamlined with deep learning.

By implementing AI in its call system, UnitedHealth Group can use natural language processing models to understand what callers are looking for and answer automatically, or route them to the right department or service representative.
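At its core, replacing a phone tree amounts to intent classification: map a caller's utterance to a department or service. A minimal keyword-scoring sketch of that routing step is below; the departments and keyword lists are hypothetical, and a production system like the ones described here would use trained NLP models rather than keyword matching.

```python
# Hypothetical department keyword lists -- illustrative only.
INTENTS = {
    "claims":    {"claim", "bill", "charge", "reimbursement"},
    "pharmacy":  {"prescription", "refill", "medication", "pharmacy"},
    "radiology": {"mri", "x-ray", "xray", "scan", "imaging"},
}

def route(utterance: str, default: str = "representative") -> str:
    """Score the caller's words against each department's keywords and
    route to the best match, falling back to a human representative."""
    words = set(utterance.lower().replace("?", "").split())
    scores = {dept: len(words & keywords) for dept, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else default

route("I need a refill on my medication")  # -> "pharmacy"
route("hello there")                       # -> "representative" (no match)
```

A learned model replaces the keyword sets with representations trained on real call transcripts, but the surrounding logic, including confidence-based fallback to a human agent, follows the same shape.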

Rekesh is working on developing neural networks that can accomplish these tasks, with the goals of reducing call length and connecting patients and customers to answers more quickly. To do so, he’s using OpenSeq2Seq, an open-source toolkit for NLP and speech recognition developed by NVIDIA researchers.

“In NLP, deep learning is the only option,” he said. “Other solutions just aren’t accurate enough.”

Deep learning models can also be used to streamline the process of authenticating patients’ identities on the call. For customer representatives, an AI-powered interface can help them during the call by pulling up the patient’s records or providing recommendations on the agent’s computer screens.

Optum plans to deploy some of these deep learning models later this year. The organization is also working on neural network tools for multi-disease prediction and medical claim fraud detection.

Dig In: Startup Tills Satellite Data to Harvest Farm AI

The farm-to-fork movement is getting a taste of AI.

Startup OneSoil cultivates AI to help farmers boost their bounty. The company offers a GPU-enabled platform that turns satellite data into farm analytics for soil and crop conditions.

The Belarus-based company interprets satellite feeds to show how plants reflect different light waves, and it rates the state of plant growth based on this information for plots of land.

OneSoil’s free platform displays how areas of land measure up on the standard known as the NDVI (Normalized Difference Vegetation Index). Farmers can use this vegetation score to spot unhealthy crop areas that need inspection and to plan watering needs and the application of fertilizers.
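The NDVI itself is a simple ratio of near-infrared and red reflectance: healthy vegetation absorbs red light and strongly reflects near-infrared. A minimal sketch of the per-pixel computation follows; the sample reflectance values are hypothetical, and OneSoil's production pipeline obviously does much more than this.

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Ranges from -1 to 1. Dense, healthy vegetation typically scores
    high (roughly 0.6+), sparse or stressed vegetation lower, and
    water or bare surfaces near or below zero.
    """
    denom = nir + red
    if denom == 0:
        return 0.0  # guard against division by zero on empty pixels
    return (nir - red) / denom

# Hypothetical Sentinel-2 reflectance values (band 8 = near-infrared,
# band 4 = red):
healthy = ndvi(nir=0.45, red=0.05)   # 0.8 -> vigorous crop
stressed = ndvi(nir=0.30, red=0.20)  # 0.2 -> an area worth inspecting
```

Thresholding such scores across a field is what lets a platform flag low-productivity zones for a farmer to inspect.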

The field monitoring platform is available as an Android app and on the web.

OneSoil has developed its platform to cover North America, most of western Europe and some of central Europe. It aims to have coverage of the entire world by year’s end. The satellite data visualizations are updated every three to five days.

Satellite to Sprouts

OneSoil taps into free satellite data from the European Union’s Copernicus Earth observation program. The company manually marked out boundaries on nearly 400,000 fields for training data used on its convolutional neural networks. Its algorithms can now automatically create field boundaries from the satellite data.

It processed about 50 terabytes of Sentinel 2 satellite data using NVIDIA GPUs in Microsoft Azure to build out its boundaries of land for the map spanning much of the world.

“With Sentinel images, we need a lot of processing power to analyze those,” said Clement Matyuhov, director of business development at OneSoil.

OneSoil can automatically detect more than 20 different crop types.

Dig the Sensors

OneSoil has developed sensors to work with its platform. Customers can dig a hole and stick in one of its battery-powered sensors, which packs a SIM card to start sending data.

The sensors measure air humidity, soil moisture, the temperature of air and soil, and the level of light intensity for the nearby area.

The company has also developed a modem that can transfer data between agricultural equipment and the OneSoil platform over a mobile network.

OneSoil users can enter data, as well. They can make such entries as date of harvest, crop type, average yield, field boundaries and files documenting field work. They can use the app, which tracks location and provides field data, to go examine areas.

Prescriptive Agriculture

On the analytics side, OneSoil Maps makes it easy for farmers to make adjustments on their land. The maps provide a productivity rating of low, medium or high for different areas of the land.

“We can say there is a low productivity zone there, so go check it out. Within one field, the productivity can vary dramatically,” said Matyuhov.

Farmers can use the maps for the vegetation on their land to create prescription maps for fertilizer. These prescription maps, downloadable as files, can be uploaded into compatible tractors from John Deere and steering systems from Trimble, allowing tractors to go to specific GPS coordinates and treat the area as prescribed.

“It’s really an expert assessment for the farmer. The results for the yield can be substantial,” he said.

Image credit: Corn harvest with an IHC International combine harvester, Jones County, Iowa, U.S., by Bill Whittaker under Creative Commons license.


Medical Imaging Startup Uses AI to Classify Conditions from Sinus and Brain Scans

Radiologists are tasked with diagnosing some of the most serious medical conditions — but their workloads are becoming increasingly demanding as the volume of imaging studies such as CT and MRI has steadily gone up.

Houston-based InformAI is stepping in to help reduce fatigue and stress for radiologists by building deep learning tools that can help them analyze medical scans faster.

“We wanted to build diagnostic-assist tools for clinicians to speed up information workflow and decision-making at the point of care to benefit patients,” said InformAI CEO Jim Havelka.

InformAI trains its deep learning image classifiers and patient outcome predictors on NVIDIA V100 GPUs through the Microsoft Azure cloud platform and with an onsite NVIDIA DGX Station. The startup worked with data science consulting firm SFL Scientific to develop a convolutional neural network-based deep learning technology stack.

In less than 30 seconds, InformAI’s image classifier scans for 20 sinus conditions and flags which ones might be present in a patient’s 3D CT scan. This AI tool has also formed the basis for other image classification applications that analyze 3D scans of soft tissue — including detecting common brain cancers from MRI scans.

AI Spots Sinus Conditions

Figuring out the structure of an individual’s sinuses is harder than it sounds. Each person’s sinus cavities look different, making it challenging for AI to determine if an infection or abnormal mass is present in the eight major sinus cavities and passageways that connect them.

Doctors perform around 700,000 sinus procedures each year in the United States. Using AI to speed up the diagnostics workflow can save on healthcare costs and shorten the time it takes to begin treatment.

InformAI and its healthcare partners built a training dataset consisting of approximately 6 million images from 20,000 patient studies. The scans were labeled by a team of radiologists and medical residents who worked with the company on the project.

Radiologists using the startup’s platform can examine and analyze 3D sinus CT scans while the predictor neural network is running. In under a minute, the AI results pop up for 20 sinus medical conditions, which the doctors can then use to assist in their diagnosis and treatment planning process.

InformAI is deploying the sinus classifier this spring at a hospital and several clinics to test its effectiveness as an assist tool for radiologists and ear, nose and throat physicians. The team is also going through the regulatory process required for the AI to be certified as a direct diagnostic tool.

A Neural Network for Neurological Disorders

In general terms, the sinus classification neural network extracts 3D segments from a CT scan to analyze whether a particular disease or set of diseases is present in those image segments, Havelka said. Since the network was trained on such a large medical dataset, it can be repurposed using transfer learning to solve image classification problems for a broad range of soft tissue medical applications.

The startup is doing just that. Using transfer learning, the team trained a neural network to detect disease from another kind of soft tissue: the brain.
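The transfer-learning pattern described here is: keep the already-trained feature extractor frozen, and train only a fresh classification head on the new task. The toy sketch below illustrates that pattern in miniature. Everything in it is a hypothetical stand-in: the "frozen" extractor is a fixed toy function playing the role of the sinus CNN's convolutional layers, and the four-point dataset stands in for the new task's labeled scans.

```python
import math

def frozen_features(x):
    """Stand-in for a pretrained, frozen feature extractor.

    In InformAI's setting this role is played by the convolutional
    layers of the sinus classifier; here it is a fixed toy function
    whose weights are never updated.
    """
    return [x[0] + x[1], x[0] - x[1], x[0] * x[1]]

def train_new_head(data, epochs=200, lr=0.5):
    """Train only a new logistic-regression head on top of the frozen
    extractor -- the essence of transfer learning: reuse the learned
    features, relearn the final layer for the new task."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            f = frozen_features(x)            # frozen: no gradient flows here
            z = sum(wi * fi for wi, fi in zip(w, f)) + b
            p = 1.0 / (1.0 + math.exp(-z))    # sigmoid
            g = p - y                         # gradient of the log loss
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    z = sum(wi * fi for wi, fi in zip(w, frozen_features(x))) + b
    return 1 if z > 0 else 0

# Toy "new task": label is 1 when the two inputs share a sign.
data = [((1, 1), 1), ((-1, -1), 1), ((1, -1), 0), ((-1, 1), 0)]
w, b = train_new_head(data)
```

Because the extractor already produces a feature (here, the product of the inputs) that separates the new classes, the tiny head learns the task with far less data than training everything from scratch, which is exactly why transfer learning is attractive when labeled medical scans are scarce.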

When a tumor or lesion is identified in the brain, “it can be life-and-death for patients,” said Havelka. “Early detection and classification are critical in providing the best treatment options and outcome for patients.”

But different brain tumors and lesions can look alike, and can also resemble other neurological disorders with different treatments. As a result of this classification complexity, a patient’s treatment plan can evolve over time.

When radiologists are unable to make a conclusive diagnosis from a brain MRI scan, physicians turn to invasive brain biopsies to obtain additional information. An AI tool that can assist radiologists in making an earlier and more certain diagnosis could reduce the number of required biopsies.

Using a 3D CNN, InformAI is developing a tool that analyzes brain MRI scans to detect whether a tumor or lesion is present, and can classify an abnormal scan as one of four conditions: glioblastoma, metastatic brain tumor, multiple sclerosis or lymphoma.

The deep learning model for brain cancer detection, which is still under development, was initially trained on around 100,000 image scans from 1,000 patient studies.

Founded in 2017, InformAI is a member of the NVIDIA Inception virtual accelerator program. To learn more about the company’s work, read this recent white paper.


How AI Is Transforming Healthcare

Healthcare is a multitrillion-dollar global industry, growing each year as average life expectancy rises — and with nearly unlimited facets and sub-specialties.

For medical professionals, new technologies can change the way they work, enable more accurate diagnoses and improve care. For patients, healthcare innovations lessen suffering and save lives.

Deep learning can be implemented at every stage of healthcare, creating tools that doctors and patients can take advantage of to raise the standard of care and quality of life.

How AI Is Changing Patient Care

Providing patient care is a series of critical choices, from decisions made on a 911 call to the recommendations a primary physician makes at an annual physical. The challenge is getting the right treatments to patients as fast and efficiently as possible.

Nearly half the countries and territories in the world have less than one physician per 1,000 people, a third of the threshold value to deliver quality healthcare, according to a 2018 study in The Lancet. Meanwhile, as healthcare data goes digital, the amount of information medical providers collect and refer to is growing.

In intensive care units, these factors come together in a perfect storm — patients who need round-the-clock attention; large, continuous data feeds to interpret; and a crucial need for fast, accurate decisions.

Researchers at MIT’s Computer Science and Artificial Intelligence Lab developed a deep learning tool called ICU Intervene, which uses hourly vital sign measurements to predict eight hours in advance whether patients will need treatments to help them breathe, require blood transfusions or need interventions to improve heart function.

Corti, a Denmark-based startup, is stepping in at another time-sensitive interaction: phone calls with emergency services. The company is using an NVIDIA Jetson TX2 module to analyze emergency call audio and help dispatchers identify cardiac arrest cases in under a minute.

LexiconAI, a member of the NVIDIA Inception program, is helping doctors spend more time with their patients every day. The startup built a mobile app that uses speech recognition to capture medical information from doctor-patient conversations — making it possible to automatically fill in electronic health records.

How AI Is Changing Pathology

Just as millions of medical scans are taken each year, so too are hundreds of millions of tissue biopsies. While pathologists have long used physical slides to analyze specimens and make diagnoses, these slides are increasingly being scanned to create digital pathology datasets.

Inception startup Proscia uses deep learning to analyze these digital slides, scoring over 99 percent accuracy for classifying three common skin pathologies. Using AI can help standardize diagnoses, which matters because, depending on the type and stage of disease, two pathologists looking at the same tissue may disagree on a diagnosis more than half the time.

SigTuple, another Inception startup, developed an AI microscope to analyze blood and bodily fluids. The microscope scans physical slides under a lens and uses GPU-accelerated deep learning to analyze the digital images either on SigTuple’s AI platform in the cloud or on the microscope itself.

Compared to scanners that automatically convert glass slides to digital images and interpret the results, SigTuple’s microscope does this at a fraction of the cost. The company hopes its tool will address the global pathologist shortage, a crucial problem in many countries.

How AI Is Changing Predictive Health

A host of AI tools are being developed to detect risk factors for diseases months before symptoms appear. These will help doctors make earlier diagnoses, conduct longevity studies or take preventative action. Taking advantage of the ability of deep learning models to spot patterns in large datasets, these tools may extract insights from electronic health records, physical features or genetic information.

One mobile app, Face2Gene, uses facial recognition and AI to identify about 50 known genetic syndromes from photos of patients’ faces. It’s used by around 70 percent of geneticists worldwide and could help cut down the time it takes to get an accurate diagnosis.

Another deep learning tool, developed by researchers at NYU, analyzes lab tests, X-rays and doctors’ notes to predict ailments like heart failure, severe kidney disease and liver problems three months faster than traditional methods.

Using AI and a wide range of electronic health records helped the researchers draw new connections among hundreds of health measurements that could predict diseases like diabetes.

How AI Is Enabling Healthcare Apps

Healthcare doesn’t start and end at the doctor’s office. And with wearables, smartphones and IoT devices, there’s no shortage of devices to monitor health from anywhere.

A service called SpiroCall, for example, makes it possible for patients to check lung function by breathing into a smartphone, either by dialing a toll-free number or recording a sound file on an app. The data is sent to a central server, which uses a deep learning model to assess lung health.

For athletes at risk of suffering concussions on the playing field, an AI-powered app is using a smartphone camera to analyze how an athlete’s pupils respond to light, a metric medical professionals use to diagnose brain injury.

And in the realm of mental health, Canadian startup Aifred Health is using GPU-accelerated deep learning to better tailor depression treatments to individual patients. Using data on a patient’s symptoms, demographics and medical test results, the neural network helps doctors as they prescribe treatments.

How AI Is Enabling Devices for People with Disabilities

A billion people around the world experience some form of disability. AI-powered technology can provide some of them with a greater level of independence, making it easier to perform daily tasks or get around.

Aira, a member of the Inception program, has created an AI platform that connects to smart glasses, helping people with impaired vision with tasks like reading labels on medication bottles. And a professor at Ohio State University is using GPUs and deep learning to create a hearing aid that can bump the volume of speech while filtering out background noise.

Researchers at OSU and Battelle, a nonprofit research organization, are developing a brain-computer interface powered by neural networks that can read thoughts and restore movement to paralyzed limbs.

And a team at Georgia Tech developed an AI prosthetic hand that helped jazz musician Jason Barnes play piano for the first time in five years. The prosthesis uses electromyogram sensors to recognize muscle movement and allows for individual finger control.

See the NVIDIA healthcare page for more.

Main image licensed from iStock.


NVIDIA CEO Ties AI-Driven Medical Advances to Data-Driven Leaps in Every Industry

Radiology. Autonomous vehicles. Supercomputing. The changes sweeping through all these fields are closely related. Just ask NVIDIA CEO Jensen Huang.

Speaking in Boston at the World Medical Innovation Forum to more than 1,800 of the world’s top medical professionals, Huang tied Monday’s news — that NVIDIA is collaborating with the American College of Radiology to bring AI to thousands of hospitals and imaging centers — to the changes sweeping through fields as diverse as autonomous vehicles and scientific research.

In a conversation with Keith Dreyer, vice chairman of radiology at Massachusetts General Hospital, Huang asserted that data science — driven by a torrent of data, new algorithms and advances in computing power — is becoming a fourth pillar of scientific discovery, alongside theoretical work, experimentation and simulation.

Putting data science to work, however, will require enterprises of all kinds to learn how to handle data in new ways. In the case of radiology, patient data privacy is paramount and the expertise is local, Huang told the audience. “You want to put computing at the edge,” he said.

As a result, the collaboration between NVIDIA and the American College of Radiology promises to enable thousands of radiologists nationwide to use AI for diagnostic radiology in their own facilities, using their own data, to meet their own clinical needs.

Huang began the conversation by noting that the Turing Award, “the Nobel Prize of computing,” had just been given to the three researchers who kicked off today’s AI boom: Yoshua Bengio, Geoffrey Hinton and Yann LeCun.

“The takeaway from that is that this is probably not a fad, that deep learning and this data-driven approach where software and the computer is writing software by itself, that this form of AI is going to have a profound impact,” Huang said.

Huang drew parallels between radiology and other industries putting AI to work, such as automotive, where Huang sees an enormous need for computing power in autonomous vehicles that can put multiple intelligences to work, in real time, as they travel through the world.

Similarly, in medicine, putting one — or more — AI models to work will only enhance the capabilities of the humans guiding these models.

These models can also guide those doing cutting-edge work at the frontiers of science, Huang said, citing Monday’s announcement that the Accelerating Therapeutics for Opportunities in Medicine, or ATOM, consortium will collaborate with NVIDIA to scale ATOM’s AI-driven drug discovery program.

The big idea: to pair data science with more traditional scientific methods, using neural networks to “filter” the vast space of possible molecules and decide which ones to simulate, narrowing the field to candidates for in vitro testing, Huang explained.
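That filtering funnel can be sketched generically. The scorer below is a toy stand-in for ATOM's trained network, and the "molecules" are placeholder strings; only the shape of the idea — rank cheaply with a learned model, then simulate only the top candidates — is taken from the text:

```python
import heapq

def shortlist(candidates, predicted_affinity, k):
    """Use a fast learned scorer to keep only the k most promising
    molecules before running expensive physics-based simulation.
    `predicted_affinity` stands in for a trained neural network."""
    return heapq.nlargest(k, candidates, key=predicted_affinity)

# Toy scorer: pretend longer SMILES-like strings bind better.
mols = ["CCO", "CCCCO", "CO", "CCCCCCO"]
top = shortlist(mols, predicted_affinity=len, k=2)
```

Only `top` would then move on to the simulation stage, which is what makes the screening tractable at scale.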

Software Is Automation, AI Is the Automation of Automation

Huang sees such techniques being used in all fields of human endeavor — from science to front-line healthcare and even to running a technology company. As part of that process, NVIDIA has built one of the world’s largest supercomputers, SATURNV, to support its own efforts to train AI models with a broad array of capabilities. “We use this for designing chips, for improving our systems, for computer graphics,” Huang said.

Such techniques promise to revolutionize every field of human endeavor, Huang said, asserting that AI is “software that writes software,” and that software’s “fundamental purpose is automation.”

“AI therefore is the automation of automation,” Huang said. “And if we can harness the automation of automation, imagine what good we could do.”



The post NVIDIA CEO Ties AI-Driven Medical Advances to Data-Driven Leaps in Every Industry appeared first on The Official NVIDIA Blog.

Wasting Away: Winnow Slims Down Commercial Food Waste

Food is too valuable to waste.

But nearly $100 billion of it is thrown away in the hospitality sector every year.

When you’re catering for an unknown number of guests, you can’t afford to be underprepared. In many cases, this can lead kitchen staff to the other extreme — preparing too many meals. All of the extra, unused ingredients ultimately end up in the bin.

Winnow, a U.K.-based company, is using AI to take a bite out of food waste by empowering commercial kitchens to reduce the amount of food they dump.

AI for Reducing Food Waste

Around one-third of the food produced globally for human consumption is wasted every year. That amounts to a staggering 1.3 billion tonnes.

Winnow is helping professional chefs curb those numbers with its latest product, Winnow Vision, which automatically detects, identifies and measures food at the point it is thrown out.

The system involves a set of digital weighing scales on top of which sits a standard kitchen bin. Mounted above this is a camera and compute unit containing an NVIDIA Jetson TX2 supercomputer-on-a-module.

The module takes the images captured by the camera, along with the weight recorded by the scales, and determines what is being thrown out and in what quantity. The neural networks running on the Jetson TX2 are trained in TensorFlow on AWS instances with NVIDIA V100 GPUs. To identify the wide variety of food the system may encounter, a huge amount of training data is needed — up to 1,000 images per food item.
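The edge step pairs two independent readings: a weight delta from the scales and a label from the image classifier. A minimal sketch in plain Python — the `(label, confidence)` prediction tuple and the event fields are illustrative assumptions, not Winnow's actual data model:

```python
from dataclasses import dataclass

@dataclass
class WasteEvent:
    food: str         # label predicted by the image classifier
    grams: float      # weight change read from the scales
    confidence: float # classifier's confidence in the label

def make_event(prev_grams, curr_grams, prediction):
    """Pair a scale reading with a classifier prediction.

    `prediction` is a hypothetical (label, confidence) tuple, a
    stand-in for whatever the network running on the Jetson emits.
    """
    label, confidence = prediction
    return WasteEvent(food=label,
                      grams=curr_grams - prev_grams,
                      confidence=confidence)

# A throw of roughly 350 g that the model labels as rice.
event = make_event(1200.0, 1550.0, ("rice", 0.93))
```

Because both readings are produced on-device, the event exists the moment the food hits the bin, with no round trip to the cloud.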

The collected data is sent to the cloud for processing and regular reports are then created and shared with kitchen staff. The reports detail quantities and types of food being tossed, as well as recommendations as to how the kitchen can reduce waste.
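The reporting step could look like the following sketch. The event format and the ranking by total weight are illustrative assumptions, not Winnow's actual report schema:

```python
from collections import defaultdict

def waste_report(events, top_n=3):
    """Aggregate per-throw events into a kitchen summary: total grams
    tossed per food type, biggest offenders first."""
    totals = defaultdict(float)
    for food, grams in events:
        totals[food] += grams
    return sorted(totals.items(), key=lambda kv: -kv[1])[:top_n]

report = waste_report([("rice", 350), ("bread", 120),
                       ("rice", 500), ("lettuce", 80)])
```

A report like this is what lets staff see, say, that rice is the single largest source of waste and adjust prep quantities accordingly.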

Winnow co-founder and CEO Marc Zornes explains why the real-time deep learning results the Jetson TX2 delivers onsite — what’s known as “inference at the edge” — are key.

“It’s really important to us that the customer receives immediate results, in an environment that cannot guarantee a reliable and fast internet connection,” said Zornes. “Using the Jetson TX2 devices in the field enables us to provide, in real time, a ‘better than human’ understanding of what is being thrown into the bin on the edge, live, in the kitchen.”

The Jetson TX2 module can run multiple processes. Having a complete system on the edge means the Winnow team can reuse knowledge gained from working in the cloud and apply it to an edge paradigm. The Jetson platform is powerful enough to encompass current and future workloads, and flexible enough for Winnow to experiment and design new solutions.

Business Sense

Winnow Vision has already surpassed human accuracy, correctly identifying more than 80 percent of the food that ends up in the trash. That rate will improve over time as more data is collected.

The system is already installed in over 75 kitchens and Winnow plans to roll out the technology to thousands more in the coming years. IKEA and Emaar are among the companies that have implemented Winnow Vision in their kitchens.

Reducing the amount of food waste isn’t the only benefit for businesses. Automating the process increases efficiency in the kitchen, too. Staff require less training on food management and need to spend less time adjusting their menus.

Winnow has shown that by arming teams with analytics, food waste can be cut in half. The company estimates it has already helped commercial kitchens save more than $30 million in annualized food costs. That equates to preventing over 23 million meals from going in the trash.

With the advent of its new technology, Winnow has announced that it aims to save kitchens $1 billion by 2025.

The post Wasting Away: Winnow Slims Down Commercial Food Waste appeared first on The Official NVIDIA Blog.

SETI Phone Home: Harnessing AI in Search of Aliens

We’ve all read the science fiction, we’ve wondered about suspicious objects in the sky, and we’ve even speculated over mysterious crop circles. But we still don’t know what’s out there.

Gerry Zhang, a graduate researcher at the Berkeley SETI Research Center, at the University of California, Berkeley, is working to detect signs of extraterrestrials through radio frequencies using AI.

“The idea is that if there are advanced civilizations out there, they could be sending us signals, either intentionally or unintentionally. And we could try to detect them,” said Zhang in a conversation with AI Podcast host Noah Kravitz.

The Berkeley SETI team collaborates with the Breakthrough Listen Initiative, a Breakthrough Initiatives program dedicated to searching for evidence of intelligent life across over 1 million stars and 100 galaxies. SETI stands for the search for extraterrestrial intelligence.

Taking data from radio telescopes, Zhang and his team create spectrograms, which are visual representations of how a signal’s frequency content varies with time. According to Zhang, radio frequencies are well suited to interstellar communication because space is transparent to a broad range of them.
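A spectrogram can be sketched in a few lines of NumPy. This is a generic short-time FFT, not Breakthrough Listen's production pipeline, and the frame and hop sizes are arbitrary choices for illustration:

```python
import numpy as np

def spectrogram(signal, sample_rate, frame_len=256, hop=128):
    """Split the signal into overlapping windowed frames and FFT each.

    Returns (times, freqs, power), where power[i, j] is the energy at
    frequency freqs[j] during frame i -- the 2-D image that gets
    searched for unusual signals.
    """
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sample_rate)
    times = np.arange(n_frames) * hop / sample_rate
    return times, freqs, power

# One second of a pure 1 kHz tone sampled at 8 kHz: the spectrogram
# should show a single bright horizontal line near 1 kHz.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 1000 * t)
times, freqs, power = spectrogram(tone, sr)
peak_freq = freqs[power.mean(axis=0).argmax()]
```

A narrowband artificial transmitter would appear as a similarly sharp line (often drifting slightly in frequency), which is the kind of pattern the team's neural networks learn to pick out of noisy telescope data.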

“[SETI] is an idea that other civilizations might have developed similar technology as ours. But in reality, we obviously don’t know for sure, right? So, one idea is to search for anomalous signals that look different from anything on Earth. AI can certainly help with that.”

AI helps sort through the data collected from radio frequency transmissions, separating signals from the noise.

“On Earth, we make a lot of transmissions in radio frequency and …  [we can’t] immediately identify [the signals] to an unknown source,” said Zhang. “Part of the job that AI can do is help us sort through the signals and try to characterize them.”

Zhang also held a session at the 2019 GPU Technology Conference in San Jose, Calif., discussing Berkeley SETI and Breakthrough Listen’s work with AI. A recording of the talk will be available here starting May 1.

When asked about his career journey, Zhang credits “the universality of artificial intelligence” as the driving force behind his passion and work ethic.

“The same [AI] technique can be applied from camera images to generating voice to writing music to finding aliens.”

How to Tune in to the AI Podcast

Our AI Podcast is available through iTunes, Castbox, DoggCatcher, Google Play Music, Overcast, PlayerFM, Podbay, PodBean, Pocket Casts, PodCruncher, PodKicker, Stitcher, SoundCloud and TuneIn.

If your favorite isn’t listed here, email us at aipodcast [at] nvidia [dot] com.

Featured image credit: NASA

The post SETI Phone Home: Harnessing AI in Search of Aliens appeared first on The Official NVIDIA Blog.
