
Category: NVIDIA

5G Meets AI: NVIDIA CEO Details ‘Smart Everything Revolution,’ EGX for Edge AI, Partnerships with Leading Companies

The smartphone revolution that’s swept the globe over the past decade is just the start, NVIDIA CEO Jensen Huang declared Monday.

Next up: the “smart everything revolution,” Huang told a crowd of hundreds from telcos, device manufacturers, developers, and press at his keynote ahead of the Mobile World Congress gathering in Los Angeles this week.

“The smartphone revolution is the first of what people will realize someday is the IoT revolution, where everything is intelligent, where everything is smart,” Huang said. He squarely positioned NVIDIA to power AI at the edge of enterprise networks and in the virtual radio access networks – or vRANs – powering next-generation 5G wireless services.

Among the dozens of leading companies cited as customers and partners during Huang’s 90-minute address are Walmart — which is already building NVIDIA’s latest technologies into its showcase Intelligent Retail Lab — along with BMW, Ericsson, Microsoft, NTT, Procter & Gamble, Red Hat and Samsung Electronics.

Anchoring NVIDIA’s story: the NVIDIA EGX edge supercomputing platform, a high-performance cloud-native edge computing platform optimized to take advantage of three key revolutions – AI, IoT and 5G – providing the world’s leading companies the ability to build next-generation services.

“The smartphone moment for edge computing is here and a new type of computer has to be created to provision these applications,” said Huang, speaking at the LA Convention Center. He noted that if the global economy can be made just a little more efficient with such pervasive technology, the opportunity can be measured in “trillions of dollars per year.”

Ericsson Exec Joins Huang on Stage to Mark 5G Collaboration


A key highlight: a new collaboration on 5G with Ericsson to build high-performance, software-defined radio access networks.

Joining Huang on stage was Ericsson’s Fredrik Jejdling, executive vice president and head of business area networks. The company is a leader in the radio access network industry, one of the key building blocks for high-speed wireless networks.

“As an industry we’ve, in all honesty, been struggling to find alternatives that are better and higher performance than our current bespoke environment,” Jejdling said. “Our collaboration is figuring out an efficient way of providing that, combining your GPUs with our heritage.”

The collaboration brings Ericsson’s expertise in radio access network technology together with NVIDIA’s leadership in high-performance computing to fully virtualize the 5G Radio, giving telcos unprecedented flexibility.

Together NVIDIA and Ericsson are innovating to fuse 5G, supercomputing and AI for a revolutionary communications platform that will someday support trillions of always-on devices.

Red Hat, NVIDIA to Create Carrier-Grade Telecommunications Infrastructure


Huang also announced a new collaboration with Red Hat to build carrier-grade, cloud-native telecom infrastructure with EGX for AI, 5G RAN and other workloads. The enterprise software provider already serves 120 telcos around the world and powers every member of the Fortune 500.

Together, NVIDIA and Red Hat will bring carrier-grade Kubernetes — which automates the deployment, scaling and management of applications — to telcos so they can orchestrate and manage 5G RANs in a truly software-defined mobile edge.
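For a sense of what that looks like in practice, here is a minimal sketch using the official Kubernetes Python client to declare a GPU-backed edge workload. It assumes a cluster with the NVIDIA device plugin installed; the names and container image are hypothetical placeholders, not anything NVIDIA or Red Hat ships.

    # Minimal sketch: declaring a GPU-backed edge workload with the official
    # Kubernetes Python client. Assumes the NVIDIA device plugin is installed
    # on the cluster; names and image are hypothetical placeholders.
    from kubernetes import client, config

    config.load_kube_config()  # use load_incluster_config() inside a pod

    container = client.V1Container(
        name="edge-inference",                        # hypothetical name
        image="registry.example.com/edge-ai:latest",  # hypothetical image
        resources=client.V1ResourceRequirements(
            limits={"nvidia.com/gpu": "1"}            # one GPU per pod
        ),
    )

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="edge-inference"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # Kubernetes keeps three replicas running
            selector=client.V1LabelSelector(match_labels={"app": "edge-inference"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "edge-inference"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment("default", deployment)

The point of carrier-grade orchestration is that a declaration like this, multiplied across thousands of edge sites, is reconciled automatically: if a pod or node fails, the scheduler restores the declared state without an operator touching it.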

“Red Hat is joining us to integrate everything we’re working on and make it a carrier grade stack,” Huang said. “The rest of the industry has joined us as well, every single data center computer maker, the world’s leading enterprise software makers, have all joined us to take this platform to market.”


NVIDIA Aerial to Accelerate 5G

For carriers, Huang also announced NVIDIA Aerial, a CUDA-X software developer kit running on top of EGX.

Aerial allows telecommunications companies to build completely virtualized 5G radio access networks that are highly programmable, scalable and energy efficient — enabling telcos to offer new AI services such as smart cities, smart factories, AR/VR and cloud gaming.

Technology for the Enterprise Edge

In addition to telcos, enterprises will increasingly need high-performance edge servers to make real-time decisions from large amounts of data using AI.

EGX combines NVIDIA CUDA-X software, a collection of NVIDIA libraries that give developers a flexible, high-performance programming model, with NVIDIA-certified GPU servers and devices.

The result enables companies to harness rapidly streaming data — from factory floors to manufacturing inspection lines to city streets — delivering AI and other next-generation services.

Microsoft, NVIDIA Technology Collaboration

To offer customers an end-to-end solution from edge to cloud, Microsoft and NVIDIA are working together in a new collaboration to more closely integrate Microsoft Azure with EGX. In addition, NVIDIA T4 GPUs are featured in a new form factor of Microsoft’s Azure Data Box edge appliance.

Other top technology companies collaborating with NVIDIA on the EGX platform include Cisco, Dell Technologies, Hewlett Packard Enterprise, Mellanox and VMware.

Walmart Adopts EGX to Create Store of the Future

Huang cited Walmart as an example of EGX’s power.

The retail giant is deploying it in its Levittown, New York, Intelligent Retail Lab, a unique, fully operating grocery store where the company explores the ways AI can further improve in-store shopping experiences.


Using EGX’s advanced AI and edge capabilities, Walmart can process in real time the more than 1.6 terabytes of data the lab generates each second. This helps it automatically alert associates to restock shelves, open new checkout lanes, retrieve shopping carts and ensure product freshness in the meat and produce departments.

Squeezing just half a percent of added efficiency out of the $30 trillion retail market represents an enormous opportunity, Huang noted. “The opportunity for using automation to improve efficiency in retail is extraordinary,” he said.

BMW, Procter & Gamble, Samsung, Among Leaders Adopting EGX

That power is already being harnessed for a dizzying array of real-world applications across the world:

  • Korea’s Samsung Electronics, in another early EGX deployment, is using AI at the edge for highly complex semiconductor design and manufacturing processes.
  • Germany’s BMW is using intelligent video analytics and EGX edge servers in its South Carolina manufacturing facility to automate inspection.
  • Japan’s NTT East uses EGX in its data centers to develop new AI-powered services in remote areas through its broadband access network.
  • The U.S.’s Procter & Gamble, the world’s top consumer goods company, is working with NVIDIA to develop AI-enabled applications on top of the EGX platform for the inspection of products and packaging.

Cities, too, are grasping the opportunity. Las Vegas uses EGX to capture vehicle and pedestrian data to ensure safer streets and expand economic opportunity. And San Francisco’s prime shopping area, the Union Square Business Improvement District, uses EGX to capture real-time pedestrian counts for local retailers.

Stunning New Possibilities

To demonstrate the possibilities, Huang punctuated his keynote with demos showing what AI can unleash in the world around us.

In a flourish that stunned the crowd, Huang made a red McLaren Senna prototype — which carries a price of a hair under $1 million — materialize on stage in augmented reality. It could be viewed from any angle — including from the inside — on a smartphone streaming data over Verizon’s 5G network from a Verizon data center in Los Angeles.

The technology behind the demo: Autodesk VRED running in a virtual machine on a Quadro RTX 8000 server. On the phone: a 5G client built with NVIDIA’s CloudXR client application software development kit for mobile devices and head-mounted displays.

And, in a video, Huang showed how the Jarvis multi-modal AI was able to follow queries from two different speakers conversing on different topics, the weather and restaurants, as they drove down the road, reacting to what the computer sees as well as what is said.

In another video, Jarvis guided a shopper through a purchase in a real-world store.

“In the future, these kinds of multi-modal AIs will make the conversation and the engagement you have with the AI much, much better,” Huang said.

Cloud Gaming Goes Global

Huang also detailed how NVIDIA is expanding its cloud gaming network through partnerships with global telecommunications companies.

GeForce NOW, NVIDIA’s cloud gaming service, transforms underpowered or incompatible devices into powerful GeForce gaming PCs with access to popular online game stores.

Taiwan Mobile joins industry leaders rolling out GeForce NOW, including Korea’s LG U+, Japan’s Softbank, and Russia’s Rostelecom in partnership with GFN.RU. Additionally, Telefonica will kick off a cloud gaming proof of concept in Spain.

Huang showed what’s now possible with a real-time demo of a gamer playing Assetto Corsa Competizione on GeForce NOW — as a cameraman watched over his shoulder — on a smartphone over a 5G network. The gamer navigated the demanding racing game’s action with no noticeable lag.

The mobile version of GeForce NOW for Android devices is available in Korea and will be available widely later this year, with a preview on display at Mobile World Congress Los Angeles.

“These servers are going to be the same servers that run intelligent agriculture and intelligent retail,” Huang said. “The future is software defined and these low latency services that need to be deployed at the edge can now be provisioned at the edge with these servers.”

A Trillion New Devices

The opportunities for AI, IoT, cloud gaming, augmented reality and 5G network acceleration are huge — with a trillion new IoT devices to be produced between now and 2035, according to industry estimates.

And GPUs are up to the challenge: GPU computing power has grown 300,000x since 2013, driving down the cost per teraflop even as gains in CPU performance level off, Huang said.

NVIDIA is well positioned to help telcos and enterprises make the most of this by helping customers combine AI algorithms, powerful GPUs, smart NICs (network interface cards), cloud-native technologies, the NVIDIA EGX accelerated edge computing platform and 5G high-speed wireless networks.

Huang compared all these elements to the powerful “infinity stones” featured in Marvel’s movies and comic books.

“What you’re looking at are the six miracles that will make it possible to put 5G at the edge, to virtualize the 5G data center and create a world of smart everything,” Huang said, and that, in turn, will add intelligence to everything in the world around us.

“This will be a pillar, a foundation for the smart everything revolution,” Huang said.


Put AI Label on It: Startup Aids Annotators of Healthcare Training Data

Deep learning applications are data hungry. The more high-quality labeled data a developer feeds an AI model, the more accurate its inferences.

But creating robust datasets is the biggest obstacle for data scientists and developers building machine learning models, says Gaurav Gupta, CEO of TrainingData.io, a member of the NVIDIA Inception virtual accelerator program.

The startup has created a web platform to help researchers and companies manage their data labeling workflow and use AI-assisted segmentation tools to improve the quality of their training datasets.

“When the labels are accurate, then the AI models learn faster and they reach higher accuracy faster,” said Gupta.

The company’s web interface, which runs on NVIDIA T4 GPUs for inference in Google Cloud, helped one healthcare radiology customer speed up labeling by 10x and decrease its labeling error rate by more than 15 percent.

The Devil Is in the Details 

The higher the data quality, the less data needed to achieve accurate results. A machine learning model can produce the same results after training on a million images with low-accuracy labels, Gupta says, or just 100,000 images with high-accuracy labels.

Getting data labeling right the first time is no easy task. Many developers outsource data labeling to companies or crowdsourced workers. It may take weeks to get back the annotated datasets, and the quality of the labels is often poor.

A rough annotated image of a car on the street, for example, may have a segmentation polygon around it that also includes part of the pavement, or doesn’t reach all the way to the roof of the car. Since neural networks parse images pixel by pixel, every mislabeled pixel makes the model less precise.
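A toy example makes the cost concrete. The sketch below builds a synthetic ground-truth mask and a sloppy one whose polygon spills a few rows onto the pavement, then measures the overlap; all shapes and numbers are illustrative.

    # Toy illustration of label quality: a mask that spills past the object
    # lowers intersection-over-union, and every stray pixel is a training
    # signal pointing the wrong way. Synthetic data, for intuition only.
    import numpy as np

    H, W = 100, 100
    truth = np.zeros((H, W), dtype=bool)
    truth[40:70, 20:80] = True    # the actual car: 1,800 pixels

    sloppy = np.zeros((H, W), dtype=bool)
    sloppy[40:75, 20:80] = True   # polygon spills 5 rows onto the pavement

    iou = np.logical_and(truth, sloppy).sum() / np.logical_or(truth, sloppy).sum()
    mislabeled = np.logical_xor(truth, sloppy).mean()

    print(f"IoU: {iou:.3f}")                       # 0.857 for this example
    print(f"Mislabeled pixels: {mislabeled:.1%}")  # 3.0% of the image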

That margin of error is unacceptable for training a neural network that will eventually interact with people and objects in the real world — for example, identifying tumors from an MRI scan of the brain or controlling an autonomous vehicle.

Developers can manage their data labeling through TrainingData.io’s web interface, while administrators can assign image labeling tasks to annotators, view metrics about individual data labelers’ performance and review the actual image annotations.

Using AI to Train Better AI 

When a data scientist first runs a machine learning model, it may only be 60 percent accurate. The developer then iterates several times to improve the performance of the neural network, each time adding new training data.

TrainingData.io is helping AI developers across industries use their early-stage machine learning models to ease the process of labeling new training data for future versions of the neural networks — a process known as active learning.

With this technique, the developer’s initial machine learning model can take the first pass at annotating the next set of training data. Instead of starting from scratch, annotators can just go through and tweak the AI-generated labels, saving valuable time and resources.
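In outline, that pre-labeling step might look like the sketch below; the prediction call and confidence threshold are stand-ins for whatever model and review policy a team actually uses, not TrainingData.io’s API.

    # Hedged sketch of active-learning pre-labeling: the current model
    # proposes labels, and only low-confidence items go to a human reviewer.
    # `model` and `unlabeled_images` are hypothetical stand-ins.
    def prelabel(model, unlabeled_images, confidence_threshold=0.9):
        auto_accepted, needs_review = [], []
        for image in unlabeled_images:
            label, confidence = model.predict(image)  # hypothetical interface
            if confidence >= confidence_threshold:
                auto_accepted.append((image, label))  # annotator spot-checks
            else:
                needs_review.append((image, label))   # annotator tweaks label
        return auto_accepted, needs_review

As the model improves with each iteration, more items clear the threshold and the human share of the labeling work shrinks.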

The startup offers active learning for data labeling across multiple industries. For healthcare data labeling, its platform integrates with the NVIDIA Clara Deploy SDK, allowing customers to use the software toolkit for AI-assisted segmentation of healthcare datasets.

Choose Your Own Annotation Adventure

TrainingData.io chose to deploy its platform on cloud-based GPUs to easily scale usage up and down based on customer demand. Researchers and companies using the tool can choose whether to use the interface online, connected to the cloud backend, or instead use a containerized application running on their own on-premises GPU system.

“It’s important for AI teams in healthcare to be able to protect patient information,” Gupta said. “Sometimes it’s necessary for them to manage the workflow of annotating data and training their machine learning models within the security of their private network. That’s why we provide Docker images to support on-premises annotation on local datasets.”

Balzano, a Swiss startup building deep learning models for radiologists, is using TrainingData.io’s platform linked to an on-premises server of NVIDIA V100 Tensor Core GPUs. To develop training datasets for its musculoskeletal orthopedics AI tools, the company labels a few hundred radiology images each month. Adopting TrainingData.io’s interface saved the company a year’s worth of engineering effort compared to building a similar solution from scratch.

“TrainingData.io’s features allow us to annotate and segment anatomical features of the knee and cartilage more efficiently,” said Stefan Voser, chief operating officer and product manager at Balzano, which is also an Inception program member. “As we ramp up the annotation process, this platform will allow us to leverage AI capabilities and ensure the segmented images are high quality.”

Balzano and TrainingData.io will showcase their latest demos in NVIDIA booth 10939 at the annual meeting of the Radiological Society of North America, Dec. 1-6 in Chicago.


The Buck Starts Here: NVIDIA’s Ian Buck on What’s Next for the AI Revolution

AI is still young, but software is available to help even relatively unsophisticated users harness it.

That’s according to Ian Buck, general manager of NVIDIA’s accelerated computing group, who shared his views in our latest AI Podcast.

Buck, who helped lay the foundation for GPU computing as a Stanford doctoral candidate, will deliver the keynote address at GTC DC on Nov. 5. His talk will give an audience inside the Beltway a software-flavored update on the status and outlook of AI.

Like the tech industry, the U.S. government is embracing deep learning. “A few years ago, there was still some skepticism, but today that’s not the case,” said Buck.

Federal planners have “gotten the message for sure. You can see from the executive orders coming out and the work of the Office of Science and Technology Policy that they are putting out mandates and putting money into budgets — it’s great to see that literally billions of dollars are being invested,” he said.

The next steps will include nurturing a wide variety of AI projects to come.

“We have the mandate and budget, now we have to help all the agencies and parts of the government down to state and local levels help take advantage of this disruptive technology in areas like predictive maintenance, traffic congestion, power-grid management and disaster relief,” Buck said.

From Computer Vision to Tougher Challenges

On the commercial horizon, users already deeply engaged in AI are moving from work in computer vision to tougher challenges in natural language processing. The neural network models needed to understand human speech can be hundreds of thousands of times larger than the early models used, for example, to identify breeds of cats in the seminal 2012 ImageNet contest.

“Conversational AI represents a new level of complexity and a new level of opportunity with new use cases,” Buck said.

AI is definitely hard, he said. The good news is that companies like NVIDIA are bundling 80 percent of the software modules users need to get started into packages tailored for specific markets such as Clara for healthcare or Metropolis for smart cities.

Unleashing GPUs

Software is a field close to Ian Buck’s heart. As part of his PhD work, he developed the Brook language to harness the power of GPUs for parallel computing. His efforts evolved into CUDA, GPU programming tools at the foundation of offerings such as Clara, Metropolis and NVIDIA DRIVE software for automated vehicles.

Users “can program down at the CUDA level” or at the higher level of frameworks such as PyTorch and TensorFlow, “or go up the stack to work with our vertical market solutions,” Buck said.

It’s a journey that’s just getting started.

“AI will be pervasive all the way down to the doorbell and thermostat. NVIDIA’s mission is to help enable that future,” Buck said.

To hear our full conversation with Buck and other AI luminaries, tune into our AI Podcast wherever you download your podcasts.

(You can see Buck’s keynote live by attending GTC DC. Use the promotional code GMPOD for a 20 percent discount.) 

Help Make the AI Podcast Better

Have a few minutes to spare? Fill out this short listener survey. Your answers will help us make a better podcast.

How to Tune in to the AI Podcast

Get the AI Podcast through iTunes, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, Podkicker, Soundcloud, Stitcher and TuneIn. Your favorite not listed here? Email us at aipodcast [at] nvidia [dot] com.


Heard Mentality: AI Voice Startup Helps Hear Customer Pain Points

Eleven years ago, Carnegie Mellon University alumni Anthony Gadient, Edward Lin and Rob Rutenbar were hunkered down in a garage, chowing pizza over late nights of coding. Eighteen months later, voice startup Voci emerged as a spinout from CMU.

Like the ventures of many early AI researchers, Voci became a reality as a startup because of breakthroughs in deep neural networks paired with advances in GPU computing.

“Our academic roots are based in this idea that you can do better by taking advantage of application-specific hardware such as NVIDIA GPUs,” said Gadient, Voci’s chief strategy officer and co-founder.

Automated Speech Recognition 

Voci’s V-Blaze automated speech recognition offers real-time speech-to-text and audio analytics to analyze conversations between customers and call center representatives. The data can be used by customers to understand the sentiment and emotion of speakers.

Voci can provide customers with an open API to pipe the data into customer experience and sales applications.

Companies can use Voci to track what customers are saying about competitive products and different features offered elsewhere.

“There’s valuable data in those call center communications,” said Gadient.

AI Closes Deal

Voci’s automated speech recognition indicates how well sales representatives are handling calls, allowing companies to improve interactions with real-time feedback on best practices. The metadata Voci extracts also drives products from analytics companies.

“Sales is very interesting in terms of understanding what message is effective and what is the reaction emotionally on the part of the potential buyer to different messaging,” he said.

Understanding the underlying emotion and sentiment is valuable for a number of these applications, said Gadient.

Voci’s customers include analytics companies such as Clarabridge, Call Journey and EpiAnalytics, which tap into the startup’s API for metadata that can highlight issues for customers.

Biometrics for Voice 

Voci is also addressing a problem that plagues automated customer service systems: caller verification. Many of these systems ask callers a handful of verification questions and then ask those same questions again if live support is required or if the call gets transferred.

Instead, Voci has developed an API for “voiceprints” that can identify people by voice, bypassing the maze of verification questions.

“Biometrics for voice is a problem worth solving, if only for our collective sanity. It enables machine verification of callers in the background instead of those maddening repeated questions you can face when handed off from operator to operator in a call center,” said Gadient.
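In outline, the idea reduces to comparing fixed-length speaker embeddings. The sketch below assumes some model has already produced those embeddings; the similarity threshold is an illustrative guess, not Voci’s API.

    # Minimal sketch of voiceprint matching: compare a caller's live speaker
    # embedding against the one captured at enrollment. Embedding source and
    # threshold are assumptions for illustration, not Voci's implementation.
    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def same_speaker(enrolled, incoming, threshold=0.75):
        # Runs in the background during the call, with no questions asked.
        return cosine_similarity(enrolled, incoming) >= threshold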

GPU-Accelerated NLP 

Voci uses a multitude of neural networks and techniques to offer its natural language processing services. The service is offered either on premises or in the cloud and taps into NVIDIA V100 Tensor Core GPUs for inference.

For example, the company uses convolutional neural networks to process audio data and recurrent neural networks for language modeling to make predictions about text.
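As a rough illustration of those two network families, and emphatically not Voci’s production architecture, a convolutional encoder over spectrograms and a recurrent next-word model might look like this:

    # Toy sketch of the two network families described above: a CNN that
    # processes audio spectrograms and an RNN language model that predicts
    # the next token of a transcript. All sizes are illustrative.
    import torch.nn as nn

    class AcousticEncoder(nn.Module):
        """CNN over spectrograms shaped (batch, 1, mel_bins, time)."""
        def __init__(self, hidden=128):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, hidden, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d((1, None)),  # collapse the frequency axis
            )

        def forward(self, spectrogram):
            # Returns (batch, time, hidden): one feature vector per frame.
            return self.conv(spectrogram).squeeze(2).transpose(1, 2)

    class LanguageModel(nn.Module):
        """RNN that scores likely next words in the transcript."""
        def __init__(self, vocab_size=10000, hidden=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, hidden)
            self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
            self.head = nn.Linear(hidden, vocab_size)

        def forward(self, tokens):
            out, _ = self.lstm(self.embed(tokens))
            return self.head(out)  # (batch, time, vocab) logits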

Developers at Voci trained their networks on more than 20,000 hours of audio from customers seeking results for their businesses.

“It took approximately one month to train the neural nets on a network of machines running a combination of NVIDIA P100 and V100 GPUs,” said Gadient.

Voci is a member of NVIDIA Inception, a virtual accelerator program that helps startups get to market faster.

 


NVIDIA Collaborates with UCSF on AI Center for Radiology

University of California, San Francisco, one of the world’s top medical schools for research, today unveiled a center to develop AI tools for clinical radiology — leveraging the NVIDIA Clara healthcare toolkit and the powerful NVIDIA DGX-2 AI system.

As a founding partner of the Center for Intelligent Imaging, known as ci2, NVIDIA is working with UCSF to foster an ecosystem of industry and academic collaboration in healthcare. In addition to contributing technology tools, NVIDIA developers will work with UCSF researchers on several AI projects, including brain tumor segmentation, liver segmentation and clinical deployment.

Integrating AI into the radiology workflow can help medical institutions keep pace with an ever-growing stream of medical imaging data. The number of images acquired during common studies like MRI and CT scans has swelled in recent years from tens of images each to hundreds or thousands. It’s a challenge compounded by a rise in the number of patients being imaged.

“It makes for an absolutely overwhelming volume of information to digest,” said Christopher Hess, chair of the UCSF Department of Radiology and Biomedical Imaging. “We’re hoping to use AI to help radiologists better navigate and interact with data, to derive more meaning out of images, and to improve the value of medical imaging for the individual patient.”

Hess says the university also plans to use AI for quantitative imaging, predictive analytics and resource scheduling — giving medical professionals access to insights that were once too time-consuming to calculate or impossible to find without deep learning methods.

UCSF Adopts NVIDIA Clara and DGX Systems

UCSF’s Center for Intelligent Imaging will use the NVIDIA DGX-2 AI system to power several radiology tools. From right to left: the author, UCSF’s Hess, Sharmila Majumdar, a professor and vice chair of the radiology department at UCSF, and Mona Flores, global lead for hospitals and clinical partnerships at NVIDIA.

A leading healthcare institution with more than a century of work in radiology, UCSF has long been an innovator in medical imaging. Its radiology department collaborated with industry partners in the 1970s to develop the first MRI systems, now used worldwide to diagnose a variety of conditions, including spinal fractures and brain and heart diseases.

Close to half a million imaging studies are performed at UCSF annually. The medical center has amassed at least a petabyte of imaging data over the years — ranging from small X-ray images to much larger PET/MRI studies. These bigger files can take up gigabytes or now even terabytes of data storage.

Training deep learning models on these massive datasets requires immense computational power. By adopting the high-performance NVIDIA DGX-2, Hess estimates UCSF researchers could cut the time to train AI models from months or days down to hours or even minutes.

The DGX-2 will also enable UCSF to harness multimodal data sources to develop more sophisticated deep learning models to accelerate the radiology workflow.

“We’re interested in integrating data from not only imaging, but also from medical records, genetics and other information sources in the healthcare system,” said Hess. “When we talk about computation at scale, we need access to a high-throughput, highly efficient and computationally sophisticated platform like DGX-2 to accelerate our development cycle.”

UCSF has also adopted the NVIDIA Clara developer toolkit for medical imaging. Its researchers are using the Clara Train SDK to train deep learning models that reconstruct and analyze CT and MRI scans, and the Clara Deploy SDK to optimize integration with the center’s clinical infrastructure.

“We’re really focusing on developing ways in which to implement algorithms from the modality to the reading room,” said Hess. “NVIDIA Clara will be an essential platform to create this ecosystem to implement, validate and use AI algorithms.”

Weaving AI Into the Clinical Workflow 

NVIDIA and UCSF are working together to develop AI models that can be deployed into the medical center’s imaging workflow, starting with deep learning models to analyze scans of the brain and liver.

When doctors treat brain cancer patients, MRI scans provide critical information about how a tumor is responding to radiation treatment and chemotherapy. Today, radiologists analyze scans visually with manual tools. AI can instead provide a quantitative measurement, calculating the precise volume of a tumor. By tracking how a tumor’s volume changes from scan to scan, clinicians can better assess how a patient is responding to treatment over time.
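The quantitative measurement itself is simple once a segmentation exists: volume is a voxel count scaled by voxel size. A minimal sketch, with the mask and spacing as assumed inputs:

    # Hedged sketch: computing tumor volume from a binary segmentation mask
    # and the scan's voxel spacing. The mask would come from a segmentation
    # model; both inputs here are assumptions for illustration.
    import numpy as np

    def tumor_volume_ml(mask: np.ndarray, voxel_spacing_mm=(1.0, 1.0, 1.0)) -> float:
        voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))
        return mask.astype(bool).sum() * voxel_volume_mm3 / 1000.0  # mm^3 to mL

    # Response to treatment is then a scan-to-scan comparison:
    # change_ml = tumor_volume_ml(followup, sp) - tumor_volume_ml(baseline, sp)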

The team is also developing an AI model that can segment and measure the left and right lobes of an organ donor’s liver from CT images. These metrics are critical for doctors planning liver transplants from a living donor to a patient, and take up to two hours to delineate and compute by hand. With deep learning, Hess estimates, it could be done in seconds.

UCSF and NVIDIA will also collaborate on tools that could improve the quality, efficiency and reproducibility of medical imaging exams. AI can be used to denoise medical images so that scans can be taken faster and are less susceptible to patient motion during scanning.

Beyond the day-to-day medical imaging workflow, the collaboration will explore predictive analytics tools to provide radiologists and other physicians insights from imaging scans, medical records and even patient sensors.

Additional deep learning algorithms will be created to improve operational efficiency at UCSF, helping its technologists optimize how the medical center’s fleet of imaging scanners is used.


Top Experts from Government, Industry Join to Take On Critical AI Issues at GTC DC

Influential leaders and industry experts will give an inside look at AI policy matters at GTC DC, the largest AI conference in Washington, from Nov. 4-6.

Key topics include the national AI strategy, cybersecurity, healthcare, workforce training and diversity.

Can’t-miss AI policy panels taking place at GTC DC include:

AI in America

U.S. CTO Michael Kratsios will kick off a series of panels on AI policy with a keynote addressing how the federal government is supporting American leadership in AI.

Kratsios headed the development of the executive order on AI and leads the White House’s Select Committee on Artificial Intelligence. He’ll share updates from the administration on how the order is being implemented.

The next panel will focus on national AI strategy. Experts involved with the executive order will delve into the details of how it’s being applied, and how private citizens can bring AI to their businesses.

The panel, moderated by David Luebke, vice president of research at NVIDIA, will share firsthand knowledge of the state of federal AI adoption and the investments being made in R&D, and discuss policies that are accelerating the implementation of AI in businesses and government agencies.

Panelists include:

  • Jason Matheny, founding director of Georgetown’s Center for Security and Emerging Technology and a commissioner on the National Security Commission on Artificial Intelligence
  • Lynne Parker, assistant director for AI at the White House Office of Science and Technology Policy
  • Elham Tabassi, chief of staff of the IT Lab at the National Institute of Standards and Technology
  • Robert Atkinson, president at the Information Technology and Innovation Foundation

Hindering the Hackers: AI and Cybersecurity

As technology improves, so do cyberattacks and massive data breaches. But cybersecurity experts will take part in a panel on how AI can help.

Moderated by Iain Cunningham, vice president of intellectual property and cybersecurity at NVIDIA, the panel features leaders in data security who will pinpoint how AI can prevent cyberattacks and how AI policy can safeguard data.

Panelists include:

  • Moira Bergin, subcommittee director for cybersecurity and infrastructure protection for the House Committee on Homeland Security
  • Coleman Mehta, senior director of U.S. policy at Palo Alto Networks
  • Daniel Kroese, associate director of the national risk management center at the Cybersecurity and Infrastructure Security Agency
  • Joshua Patterson, general manager of data science at NVIDIA

The Future Is AI

Healthcare experts will discuss how AI is changing the industry to provide better service and patient outcomes in a panel moderated by Kimberly Powell, vice president of healthcare at NVIDIA.

They’ll share examples of how they’ve built programs for AI in healthcare and present strategies for using AI to accelerate the improvement of healthcare quality, cost and access.

Panelists include:

  • Gil Alterovitz, director of AI at the U.S. Department of Veterans Affairs
  • Susan Gregurick, director of the biophysics, biomedical technology, and computational biosciences division at the National Institutes of Health
  • Jorge Cardoso, CTO at the London Medical Imaging and AI Centre

AI is also changing the future of the workforce, which business leaders will discuss in a panel moderated by Tonie Hansen, who heads corporate social responsibility at NVIDIA.

Panelists will focus on how sensible policies can help create opportunities for current and future generations of workers. They’ll share tangible advice on reskilling and upskilling employees into data science and IT roles, and preparing computer scientists for AI and machine learning, concentrating on how to do so across socioeconomic, racial and ethnic groups for a more diverse workforce.

Panelists include:

  • Laura Montoya, founder and managing partner at Accel AI
  • Charles Eaton, executive vice president of social innovation at CompTIA
  • Rhonda Foxx, former chief of staff for U.S. Representative Alma Adams of North Carolina

View descriptions of these AI policy panels in more detail on the GTC DC website and register for the conference. Media may request a complimentary pass here.


Bird’s-AI View: How Deep Learning Helps Ornithologists Track Migration Patterns

Billions of birds in North America make the trek south each fall, migrating in pursuit of warmer winter temperatures. But at least a quarter of them don’t make it back to northern breeding grounds in the spring, falling victim to predators, weather or man-made hazards like oil pits and cell towers.

Many of these migratory birds fly under the cover of night, making it challenging for birdwatchers and ornithologists to observe them and track long-term trends. But the need to monitor avian population levels is critical.

Recent research estimates that the number of birds in North America has fallen by 3 billion in the past 50 years, impacted by climate change, habitat loss, hunting and pesticides. Spring migration has declined by 14 percent in the last decade.

To better understand how and why bird populations are changing over time, researchers at the University of Massachusetts, Amherst are using AI to analyze more than two decades of data from the national weather radar network. These insights can also improve forecasts of future bird migration and aid conservation efforts.

Two Birds with One Dataset 

A network of more than 100 weather radars has been online in the U.S. since the mid-’90s, scanning the atmosphere day and night, adding new measurements roughly every 10 minutes to a public data archive in the cloud.

While the radar network’s original purpose was to inform meteorologists, the instruments also capture flocks of birds (and even patches of insects) in flight, creating a vast trove of data for ornithologists.

Traditional methods for avian monitoring include observing and counting birds in the wild, weighing and measuring them, or tagging them with identification numbers or GPS trackers.

Radar, on the other hand, provides a detailed view of migration trends on a continental scale — giving ornithologists a way to track bird populations as they migrate thousands of miles year after year. But it’s hard to separate the signal from the noise.

When a radar image captures a flock of birds migrating across the skies, an untrained viewer may confuse the pattern for rain or snow. While both humans and AI can learn to tell the difference between birds and precipitation in radar images, using deep learning methods accelerates the process of analyzing an ever-growing dataset of more than 200 million images.

Flocking to AI 

Led by Daniel Sheldon, an associate professor of computer science, researchers at UMass Amherst used transfer learning and a dataset of 200,000 radar images from the National Weather Service to develop a neural network that could differentiate between migrating birds and precipitation.

Ph.D. student Tsung-Yu Lin (lead author on the paper) and assistant professor Subhransu Maji developed the model with support from the Cornell Lab of Ornithology.

The team used a cluster of four NVIDIA GPUs to train the deep learning model, which provides an estimate of how much biomass is present in a given radar image. From that figure, ornithologists can approximate the number of birds migrating. Named MistNet, the tool correctly identifies at least 96 percent of the birds within a test set of radar images, the researchers found.
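The paper’s exact setup isn’t reproduced here, but the transfer-learning recipe it draws on generally looks like the sketch below: start from an ImageNet-pretrained CNN, freeze its features and retrain only the final layer to separate birds from precipitation.

    # Hedged sketch of transfer learning for radar scans (not the authors'
    # code): reuse pretrained visual features and retrain only the final
    # classification layer.
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    for param in model.parameters():
        param.requires_grad = False                # freeze pretrained features
    model.fc = nn.Linear(model.fc.in_features, 2)  # birds vs. precipitation

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    # A training loop would iterate over labeled radar images shaped
    # (batch, 3, H, W), minimizing loss_fn against the 0/1 labels.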

MistNet can be run on every radar image in the public archive to summarize how much migration is occurring at different elevation levels, the direction of the birds and how fast they’re flying. Additional data sources like observations from birdwatchers or the geographic coordinates of the radar image can be used to determine which species of bird corresponds to a radar data trail.

Insights on the Horizon

The researchers have so far analyzed around 28 million scans and found that a large proportion of migration happens in a very concentrated time span. Just one night accounted for 10 percent of migration over Houston last spring.

Looking at these migration spikes over the two decades of available data could help scientists track how bird migration patterns are changing in response to climate change. The team discovered that as food becomes available earlier in the spring, bird migration dates are shifting earlier, particularly for flocks that settle in breeding grounds further north.

Since radar data is updated every few minutes, this work also can be used to project bird migration in the near term. Sheldon works with BirdCast, a collaboration among the Cornell Lab of Ornithology, UMass Amherst and Oregon State University that uses radar data to provide a real-time bird migration map, as well as three-day forecasts.

“These forecasts are exciting because they allow bird watchers to look out and see what’s going to happen, and get excited about big migration events,” he said. “But it also has significant uses in conservation.”

For example, to help birds as they fly through the night, cities could turn off distracting light sources when major migrations are forecast. Artificial lights from skyscrapers or radio towers can distract and disorient migrating birds, impairing their navigation strategies.

Main image by Frank Boston, licensed from Flickr under CC BY 2.0


Answering the Call: NVIDIA CEO to Detail How AI Will Revolutionize 5G, IoT

Highlighting the growing excitement at the intersection of AI, 5G and IoT, NVIDIA CEO Jensen Huang kicks off Mobile World Congress Los Angeles 2019 on Monday, Oct. 21.

The keynote, NVIDIA’s debut at the wireless industry’s highest-profile gathering in the U.S., will be the first of a slate of talks and training sessions from NVIDIA and its partners.

The AI revolution is spurring a wave of progress across the mobile technology industry that’s unleashing unprecedented capabilities and new opportunities.

NVIDIA is at the center of this, thanks to AI and accelerated computing capabilities that have been adopted by industries across the globe.

Jensen Huang to Deliver Agenda-Setting Keynote

Huang will detail how the latest AI and accelerated computing innovations will transform the wireless industry in a keynote that’s open to all on Monday, Oct. 21, at the Los Angeles Convention Center’s Petree Hall.

If you’re not registered for MWC-LA, RSVP for our keynote.

Get Trained with DLI

Our Deep Learning Institute — one of the largest training programs in the world for AI and accelerated computing — has partnered with the show’s organizer, the GSMA.

Together, we’re offering hands-on training to the show’s attendees in the South Hall, booth 1743.

The training is on a first-come, first-served basis. No need to sign up in advance.

Get Inspired at NVIDIA Booth 1745

If you’re attending the event, our booth will serve as a hub for the innovations we’re bringing to the show.

At the booth, you’ll find NVIDIA Inception partners using our Metropolis platform to showcase a variety of real-world applications that demand GPUs at the edge.

Get Oriented at the NVIDIA Theater

Want to dig into the nitty-gritty of delivering services such as these? Stop by the NVIDIA Theater to hear speakers from NVIDIA, our partners and our customers.

Among the highlights, Saurabh Jain, director of products and strategic partnerships at NVIDIA, will detail how edge computing brings compute and storage closer to the point of action.

That’s critical for smart cities, and it’s opening up new business and service revenue opportunities for the telecom industry.

Visit NVIDIA booth 1745 at 1:30 pm on Oct. 23 to hear his talk, and stick around for others from key industry leaders.


AI Space Odyssey: Deep Learning Aids Astronomers Study Galaxies

The Milky Way is on a collision course with the neighboring Andromeda galaxy. But no need to revise your will — the two star systems won’t meet for around 4 billion years.

“At some point in every galaxy’s life, it’ll undergo one of these mergers,” said William Pearson, a Ph.D. student at the Netherlands Institute for Space Research and the University of Groningen, Netherlands. “It’s part of our understanding of how we think the universe works. These galaxies tend to find and crash into each other.”

Using convolutional neural networks developed on NVIDIA GPUs, Pearson is studying galaxy mergers based on both simulations and observational data from telescope images.

When two galaxies merge, the resulting fused galaxy mixes together all the gas, dust and other matter from the original star systems. Astronomers are interested in how the shape of galaxies change as a result, how the process can cause stars to form at a higher rate, and how the moving matter interacts with the supermassive black holes lying at the center of large galaxies.

By using AI to identify and analyze galaxy mergers across the universe, scientists can better understand how this phenomenon could affect our corner of the universe in the future.

Hubble Up: Analyzing Galaxy Mergers with AI 

For the most part, it’s not rocket science to visually determine whether two galaxies are in the thick of a collision.

This image, taken by the Hubble Space Telescope, shows a collision between two spiral galaxies in the constellation of Hercules, around 450 million light-years from Earth. Image credit: NASA, ESA, the Hubble Heritage Team (STScI/AURA)-ESA/Hubble Collaboration and K. Noll (STScI). Licensed under CC BY 4.0.

Just looking at a telescope image, it’s easy to spot tidal tails, sweeping arcs of gas and dust being pulled from one galaxy to another by gravity.

The main challenge is classifying galaxies that are just starting to interact or, at the other end of the spectrum, are in the very final stages of a merger.

And then there’s the sheer volume of data.

Crowdsourced projects like Galaxy Zoo have relied on citizen scientists to classify a database of more than a million galaxy images from various ground-based and satellite telescopes. But that’s just a fraction of an estimated 100 billion galaxies in the universe.

And the available data is just getting larger. Projects like the under-construction Large Synoptic Survey Telescope are expected to capture images of billions of galaxies.

“There’s not enough people in the world to classify all these,” Pearson said. “As astronomers, we need another technique.”

While citizen scientist projects are a powerful tool, it still takes a long time for results to come through, he says. Deep learning models can help researchers keep pace with the many ground- and space-based telescopes busy collecting images of the universe, most of which are publicly available for analysis.

Using an NVIDIA GPU for inference, Pearson’s AI was able to categorize 300,000 galaxies in about 15 minutes. Even at an unheard-of rate of one classification per second, it would have taken an individual two working weeks to accomplish the task.

Trained using the TensorFlow deep learning framework and images from the Sloan Digital Sky Survey, the deep learning model identifies galaxies as merging or not merging with 92 percent accuracy. Pearson hopes for future versions of the CNN to look at more specific details, such as the size of the galaxies and how far along the merging process is.
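A minimal TensorFlow sketch of such a binary classifier, with the input size and layers as illustrative assumptions rather than the published architecture, might look like this:

    # Minimal sketch (not the published model): a small Keras CNN that
    # labels a galaxy image cutout as merging or not merging.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 64, 3)),        # galaxy cutout
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # P(merging)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_images, train_labels, epochs=10, validation_split=0.1)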

From this data, researchers can make statistical assessments of broad trends in galaxy mergers — or take a closer look at specific galaxies of interest.

Main image shows two merging galaxies, nicknamed “The Mice,” located 300 million light-years away. Image credit: NASA, Holland Ford (JHU), the ACS Science Team and ESA. Licensed under CC BY 4.0.


GauGAN Rocket Man: Conceptual Artist Uses AI Tools for Sci-Fi Modeling

Have you ever wondered what it takes to produce the complex imagery in films like Star Wars or Transformers? The man behind the magic, Colie Wertz, is here to explain.

Wertz is a conceptual artist and modeler who works on film, television and video games. He sat down with AI Podcast host Noah Kravitz to explain his specialty in hard modeling, in which he produces digital models of objects with hard surfaces like vehicles, robots and computers.

To make these images, Wertz has taken to using AI art tools such as GauGAN, a real-time painting web app that allows users to create realistic landscapes using generative adversarial networks.

Rather than use GauGAN in the traditional manner, Wertz makes the tools “trick themselves” by putting a mountain in the sky, or snow falling at the bottom of the page, to create a unique image. Then he incorporates his signature spaceships into the scene.


Wertz appreciates how easily GauGAN builds a background. He says, “Coming from the hard surface world, that’s the kind of stuff that’s kind of always been a curveball for me, like matte painting and background composition.” Now, Wertz is able to focus on the ship and how to “integrate it into a background.”

For some of his creations, Wertz uses the GauGAN landscape to inspire his ship designs. He views AI art as a “creative partner” rather than a replacement for more traditional forms of art.

Wertz’s artistic career took off after he left an architectural design firm in South Carolina and moved to Los Angeles to develop his digital art skills. There, he entered one of his spaceship models, created with Photoshop, into a contest put on by visual effects production company Electric Image.


The judges were impressed, and Wertz ended up with a job at Industrial Light & Magic, a visual effects company founded by George Lucas. Wertz’s first job was working on the rerelease of Return of the Jedi, building digital models for matte painters.

Listeners curious about Wertz’s current work can look at his portfolio, visit his website or follow him on Instagram.

Help Make the AI Podcast Better

Have a few minutes to spare? Fill out this short listener survey. Your answers will help us make a better podcast.

How to Tune in to the AI Podcast

Get the AI Podcast through iTunes, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, Podkicker, Soundcloud, Stitcher and TuneIn. Your favorite not listed here? Email us at aipodcast [at] nvidia [dot] com.

Image credit: Colie Wertz



 


Toronto AI is a social and collaborative hub to unite AI innovators of Toronto and surrounding areas. We explore AI technologies in digital art and music, healthcare, marketing, fintech, VR, robotics and more. Toronto AI was founded by Dave MacDonald and Patrick O'Mara.