
Category: NVIDIA

AI’s Mild Ride: RoadBotics Puts AI on Pothole Patrol

National Pothole Day is Jan. 15. Its timing is no accident.

All over the Northern Hemisphere, potholes are at their suspension-wrecking, spine-shaking worst this month.

Thanks to AI, one startup is working all year long to alleviate this menace. Benjamin Schmidt, president and co-founder of RoadBotics, is using the tech to pave the way to better roads.

His startup is identifying areas at risk of potholes, so city governments can improve roads before damage worsens.

Schmidt spoke with AI Podcast host Noah Kravitz about how RoadBotics is working with over 160 governments across the world to collect and analyze video data to improve preventative maintenance.

Key Points From This Episode:

  • Using smartphones placed against car windshields, RoadBotics collects and analyzes video data to assign each road a score, which local governments can use to inform infrastructure decisions (a simplified sketch of this scoring idea follows this list).
  • RoadBotics protects privacy by blurring people, cars and other sensitive data so only roads are analyzed.
  • Early this year, RoadBotics will release an app so anyone can use a smartphone to collect data and submit it to the company’s neural network to help improve analysis.
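
As a rough illustration of that scoring idea (a simplified sketch, not RoadBotics’ actual code), imagine a trained classifier that assigns each video frame a pavement-distress score between 0 and 1; a segment’s rating then falls out of simple aggregation:

```python
from statistics import mean

# Simplified sketch, not RoadBotics' code: each frame of windshield video
# is assumed to carry a pavement-distress score in [0, 1] from a trained
# classifier; a road segment's rating is the average of its frame scores.

def rate_segment(frame_scores):
    """Map a segment's mean distress score to a coarse rating."""
    avg = mean(frame_scores)
    if avg <= 0.2:
        return avg, "good"
    if avg <= 0.5:
        return avg, "monitor"
    return avg, "repair soon"

# Scores for one hypothetical block of road, one per video frame.
print(rate_segment([0.1, 0.3, 0.6, 0.7]))  # (0.425, 'monitor')
```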

Tweetables:

“The sooner you can detect [surface distresses], the sooner you can put a cheaper intervention in now that really just saves the life of the road.” — Benjamin Schmidt [5:00]

“RoadBotics was founded at exactly the right moment with the right tech, the right hardware. So we’re now in this sweet spot where we can actually deploy a solution” — Benjamin Schmidt [6:46]

You Might Also Like

How Deep Learning Will Reshape Our Cities

Lynn Richards, president and CEO of the Congress for New Urbanism, and Charles Marohn, president and co-founder of Strong Towns, weigh in on the benefits of using AI to design cities, and simulating designs in VR prior to construction.

How AI Will Revolutionize Driving

Danny Shapiro, senior director of automotive at NVIDIA, explains the capabilities necessary for autonomous driving, from object detection to AI to high performance computing.

Where Is Deep Learning Going Next?

Bryan Catanzaro, head of applied deep learning research at NVIDIA, explains his journey in AI from UC Berkeley, to Baidu, to NVIDIA. He’s striving for AI that works so seamlessly that users don’t even notice it, and he explains how GPUs are helping to make that happen.

Featured image credit: Santeri Viinamäki, some rights reserved.

The post AI’s Mild Ride: RoadBotics Puts AI on Pothole Patrol appeared first on The Official NVIDIA Blog.

New Online Course Gets IT Up to Speed on AI

Enterprise use of AI has grown 270 percent over the past four years, according to a 2019 survey of CIOs by Gartner.

To stay competitive in the face of this rapid adoption, organizations need to build data center infrastructure that is scalable and flexible enough to meet quickly evolving and expanding AI workloads.

IT plays a critical role in this. IT professionals across data center management, development operations (DevOps), security compliance and data governance need to be set up for success.

To help, the NVIDIA Deep Learning Institute has released an online, self-paced Introduction to AI in the Data Center course designed for IT professionals. (Enroll today — the course is free through Jan. 31 with code DLI_IT_BLOG_PROMO.)

The course explores AI concepts and terminology, NVIDIA’s AI software architecture and how to implement and scale AI workloads in the data center.

In this focused four-hour course, IT professionals will learn how AI is transforming industry and how to deploy GPU-accelerated computing in the data center to support it in their own organizations. Plus, participants will earn a digital badge of completion to support their professional growth.

For enterprises weighing which resources and training they need, this course will start IT teams off on the right foot. At a high level, the course covers:

  • GPU Computing in the Data Center
  • Introduction to Artificial Intelligence
  • Introduction to GPUs
  • GPU Software Ecosystem
  • Server-Level Considerations
  • Rack-Level Considerations
  • Data Center-Level Considerations

The Deep Learning Institute offers hands-on training in AI, accelerated computing and accelerated data science. Developers, data scientists, researchers, students, and now IT professionals can enroll in DLI courses to get the practical experience they need to learn how to deploy AI in their work. To date, the DLI has trained more than 200,000 people.

Enroll in the “Introduction to AI in the Data Center” course for free with code DLI_IT_BLOG_PROMO before Jan. 31.

To learn more about NVIDIA technology, read about our data center and AI solutions. Plus, join us in March at NVIDIA’s GPU Technology Conference in Silicon Valley for hands-on training, expert-led talks, and opportunities to network with others in the industry.

The post New Online Course Gets IT Up to Speed on AI appeared first on The Official NVIDIA Blog.

AI Meets 5G at the Edge: The Innovation Center at MWC 2020 in Barcelona

AI is alive at the edge of the network, where it’s already transforming everything from car makers to supermarkets. And we’re just getting started.

NVIDIA’s AI Edge Innovation Center, a first for this year’s Mobile World Congress (MWC) in Barcelona, will put attendees at the intersection of AI, 5G and edge computing. There, they can hear about best practices for AI at the edge and get an update on how NVIDIA GPUs are paving the way to better, smarter 5G services.

It’s a story that’s moving fast.

AI was born in the cloud to process the vast amounts of data needed for jobs like recommending new products and optimizing news feeds. But most enterprises interact with their customers and products in the physical world at the edge of the network — in stores, warehouses and smart cities.

The need to sense, infer and act in real time as conditions change is driving the next wave of AI adoption at the edge. That’s why a growing list of forward-thinking companies are building their own AI capabilities using the NVIDIA EGX edge-computing platform.

Walmart, for example, built a smart supermarket it calls its Intelligent Retail Lab. Jakarta uses AI in a smart city application to manage its vehicle registration program. BMW and Procter & Gamble automate inspection of their products in smart factories. They all use NVIDIA EGX along with our Metropolis application framework for video and data analytics.

For conversational AI, the NVIDIA Jarvis developer kit enables voice assistants geared to run on embedded GPUs in smart cars or other systems. WeChat, the world’s most popular smartphone app, accelerates conversational AI using NVIDIA TensorRT software for inference.

All these software stacks ride on our CUDA-X libraries, tools, and technologies that run on an installed base of more than 500 million NVIDIA GPUs.

Carriers Make the Call

At MWC Los Angeles this year, NVIDIA founder and CEO Jensen Huang announced Aerial, software that rides on the EGX platform to let telecommunications companies harness the power of GPU acceleration.

Ericsson’s Fredrik Jejdling, executive vice president and head of business area networks, joined NVIDIA CEO Jensen Huang on stage at MWC LA to announce their collaboration.

With Aerial, carriers can both increase the spectral efficiency of their virtualized 5G radio-access networks and offer new AI services for smart cities, smart factories, cloud gaming and more — all on the same computing platform.

In Barcelona, NVIDIA and partners including Ericsson will give an update on how Aerial will reshape the mobile edge network.

Verizon is already using NVIDIA GPUs at the edge to deliver real-time ray tracing for AR/VR applications over 5G networks.

It’s one of several ways telecom applications can be taken to the next level with GPU acceleration. Imagine having the ability to process complex AI jobs on the nearest base station with the speed and ease of making a cellular call.

Your Dance Card for Barcelona

For a few days in February, we will turn our innovation center — located at Fira de Barcelona, Hall 4 — into a virtual university on AI with 5G at the edge. Attendees will get a world-class deep dive on this strategic technology mashup and how companies are leveraging it to monetize 5G.

Sessions start Monday morning, Feb. 24, and include AI customer case studies in retail, manufacturing and smart cities. Afternoon talks will explore consumer applications such as cloud gaming, 5G-enabled cloud AR/VR and AI in live sports.

We’ve partnered with the organizers of MWC on applied AI sessions on Tuesday, Feb. 25. These presentations will cover topics like federated learning, an emerging technique for collaborating on the development and training of AI models while protecting data privacy.

Wednesday’s schedule features three roundtables where attendees can meet executives working at the intersection of AI, 5G and edge computing. The week also includes two instructor-led sessions from the NVIDIA Deep Learning Institute, which trains developers on best practices.

See Demos, Take a Meeting

For a hands-on experience, check out our lineup of demos based on the NVIDIA EGX platform. These will highlight applications such as object detection in a retail setting, ways to unsnarl traffic congestion in a smart city and our cloud-gaming service GeForce Now.

To learn more about the capabilities of AI, 5G and edge computing, check out the full agenda and book an appointment here.

The post AI Meets 5G at the Edge: The Innovation Center at MWC 2020 in Barcelona appeared first on The Official NVIDIA Blog.

Deep Learning Shakes Up Geologists’ Tools to Study Seismic Fault Systems

Fifteen years after a magnitude 9.1 earthquake and tsunami struck off the coast of Indonesia, killing more than 200,000 people in over a dozen countries, geologists are still working to understand the complex fault systems that run through Earth’s crust.

While major faults are easy for geologists to spot, these large features are connected to other, smaller faults and fractures in the rock. Identifying these smaller faults is painstaking, requiring weeks to study individual slices from a 3D image.

Researchers at the University of Texas at Austin are shaking up the process with deep learning models that identify geologic fault systems from 3D seismic images, saving scientists time and resources. The developers used NVIDIA GPUs and synthetic data to train neural networks that spot small, subtle faults typically missed by human interpreters.

Examining fault systems helps scientists to determine which seismic features are older than others and to study regions of interest like continental margins, where a continental plate meets an oceanic one.

Seismic analyses are also used in the energy sector to plan drilling and rigging activities to extract oil and natural gas, as well as the opposite process of carbon sequestration — injecting carbon dioxide back into the ground to mitigate the effects of climate change.

“Deep learning isn’t just a little bit more accurate — it’s on a whole different level both in accuracy and efficiency.” – Sergey Fomel

“Sometimes you want to drill into the fractures, and sometimes you want to stay away from them,” said Sergey Fomel, geological sciences professor at UT Austin. “But in either case, you need to know where they are.”

Tracing Cracks in Earth’s Upper Crust

Seismic fault systems are so complex that researchers analyzing real-world data by hand miss some of the finer cracks and fissures connected to a major fault. As a result, a deep learning model trained on human-annotated datasets will also miss these smaller fractures.

To get around this limitation, the researchers created synthetic data of seismic faults. Using synthetic data meant the scientists already knew the location of each major and minor fault in the dataset. This ground-truth baseline enabled them to train an AI model that surpasses the accuracy of manual labeling.

The team’s deep learning model parses 3D volumetric data to determine the probability that there’s a fault at every pixel within the image. Geologists can then go through the regions the neural network has flagged as having a high probability of faults present to conduct their analyses.
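
To make the per-voxel idea concrete, here is a toy PyTorch stand-in (not the FaultSeg3D architecture from the paper credited below): a few 3D convolutions map a seismic amplitude volume to a same-shaped volume of fault probabilities.

```python
import torch
import torch.nn as nn

# Toy sketch of per-voxel fault prediction, not the FaultSeg3D model:
# a tiny 3D conv net maps a seismic amplitude volume to a fault-probability
# volume of the same shape. In the researchers' approach, such a network is
# trained on synthetic volumes whose fault locations are known exactly.
model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv3d(8, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv3d(8, 1, kernel_size=1),  # one logit per voxel
    nn.Sigmoid(),                    # fault probability per voxel
)

volume = torch.randn(1, 1, 64, 64, 64)   # batch, channel, depth, height, width
fault_prob = model(volume)               # same spatial shape, values in (0, 1)
flagged = (fault_prob > 0.9).nonzero()   # high-probability voxels for review
print(fault_prob.shape, flagged.shape)
```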

Fomel’s team uses 3D seismic volumes like these to map seismic fault systems. (Figure courtesy of Xinming Wu, from the paper “FaultSeg3D: Using synthetic data sets to train an end-to-end convolutional neural network for 3D seismic fault segmentation.”)

“Geologists help explain what happened throughout the history of geologic time,” he said. “They still need to analyze the AI model’s results to create the story, but we want to relieve them from the labor of trying to pick these features out manually. It’s not the best use of geologists’ time.”

Fomel said fault systems that take just seconds to process with the team’s CNN-based model, using an NVIDIA GPU for inference, can take up to a month to analyze by hand. Previous automated methods took hours and were much less accurate.

“Deep learning isn’t just a little bit more accurate — it’s on a whole different level both in accuracy and efficiency,” Fomel said. “It’s a game changer in terms of automatic interpretation.”

The researchers trained their neural networks on the Texas Advanced Computing Center’s Maverick2 system, powered by NVIDIA GPUs. Their deep learning models were built using the PyTorch and TensorFlow deep learning frameworks, as well as the Madagascar software package for geophysical data analysis.

Besides faults, these algorithms can be used to detect other features geologists examine, including salt bodies, sedimentary layers and channels. The researchers are also designing neural networks to calculate relative geologic time from seismic data — a measure that gives scientists detailed information about geologic structures.

The post Deep Learning Shakes Up Geologists’ Tools to Study Seismic Fault Systems appeared first on The Official NVIDIA Blog.

Saved by the Spell: Serkan Piantino’s Company Makes AI for Everyone

Spell, founded by Serkan Piantino, is making machine learning as easy as ABC.

Piantino, CEO of the New York-based startup and former director of engineering for Facebook AI Research, explained to AI Podcast host Noah Kravitz how he’s bringing compute power to those who don’t have easy access to GPU clusters.

Spell provides access to hardware, as well as a software interface that accelerates execution. Piantino reported that interest in Spell spans a wide variety of industries, from healthcare to retail, along with researchers and academics.

Key Points From This Episode

  • Spell’s basic tool is a command line, which has users type “spell run” before code that they previously would’ve run locally. Spell will then snapshot the code, find any necessary data and move that computation onto relevant hardware in the cloud (see the sketch after this list).
  • Spell’s platform provides a collaborative workspace in which clients within an organization can work together on their Jupyter Notebooks and Labs.
  • Users can choose what type of GPU they require for their machine learning experiment, and Spell will run it on the corresponding hardware in the cloud.
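
The mechanics behind that one-line workflow can be pictured as a snapshot-and-dispatch loop. The sketch below is purely conceptual; CloudCluster and snapshot() are invented for this illustration and are not Spell’s actual API.

```python
import hashlib
import io
import tarfile

# Illustrative only: a toy version of the snapshot-and-dispatch flow the
# episode describes. CloudCluster and snapshot() are invented for this
# sketch; they are not Spell's real API.

def snapshot(workdir: str) -> bytes:
    """Bundle the user's code so it can be replayed on cloud hardware."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        tar.add(workdir, arcname=".")
    return buf.getvalue()

class CloudCluster:
    """Stand-in for a scheduler that owns a pool of cloud GPU machines."""

    def submit(self, bundle: bytes, command: str, gpu: str = "V100") -> str:
        # A real service would upload the bundle, locate any needed
        # datasets and queue the command on a machine with the chosen GPU.
        return hashlib.sha1(bundle + command.encode()).hexdigest()[:8]

# Morally equivalent to typing: spell run python train.py
run_id = CloudCluster().submit(snapshot("."), "python train.py", gpu="V100")
print("queued run", run_id)
```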

Tweetables

“You know there’s some upfront cost to running an experiment, but if you get that cost down low enough, it disappears mentally” — Serkan Piantino [11:52]

“Providing access to hardware and making things easier — giving everybody the same sort of beautiful compute cluster that giant research organizations work on — was a really powerful idea” — Serkan Piantino [18:36]

You Might Also Like

NVIDIA Chief Scientist Bill Dally on How GPUs Ignited AI, and Where His Team’s Headed Next

Deep learning icon and NVIDIA Chief Scientist Bill Dally reflects on his career in AI and offers insight into the AI revolution made possible by GPU-driven deep learning. He shares his predictions on where AI is going next: more powerful algorithms for inference, and neural networks that can train on less data.

Speed Reader: Evolution AI Accelerates Data Processing with AI

Across industries, employees spend valuable time processing mountains of paperwork. Evolution AI, a U.K. startup and NVIDIA Inception member, has developed an AI platform that extracts and understands information rapidly. Evolution AI Chief Scientist Martin Goodson explains the variety of problems that the company can solve.

Striking a Chord: Anthem Helps Patients Navigate Healthcare with Ease

Health insurance company Anthem helps patients personalize and better understand their healthcare information through AI. Rajeev Ronanki, senior vice president and chief digital officer at Anthem, explains how the company gives users the opportunity to schedule video consultations and book doctor’s appointments virtually.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.


Make Our Podcast Better

Have a few minutes to spare? Fill out this short listener survey. Your answers will help us make a better podcast.

The post Saved by the Spell: Serkan Piantino’s Company Makes AI for Everyone appeared first on The Official NVIDIA Blog.

BERT Does Europe: AI Language Model Learns German, Swedish

BERT is at work in Europe, tackling natural-language processing jobs in multiple industries and languages with help from NVIDIA’s products and partners.

The AI model formally known as Bidirectional Encoder Representations from Transformers debuted just last year as a state-of-the-art approach to machine learning for text. Though new, BERT is already finding use in avionics, finance, semiconductor and telecom companies on the continent, said developers optimizing it for German and Swedish.

“There are so many use cases for BERT because text is one of the most common data types companies have,” said Anders Arpteg, head of research for Peltarion, a Stockholm-based developer that aims to make the latest AI techniques such as BERT inexpensive and easy for companies to adopt.

Natural-language processing will outpace today’s AI work in computer vision because “text has way more apps than images — we started our company on that hypothesis,” said Milos Rusic, chief executive of deepset in Berlin. He called BERT “a revolution, a milestone we bet on.”

Deepset is working with PricewaterhouseCoopers to create a system that uses BERT to help strategists at a chip maker query piles of annual reports and market data for key insights. In another project, a manufacturing company is using NLP to search technical documents to speed maintenance of its products and predict needed repairs.

Peltarion, a member of NVIDIA’s Inception program that nurtures startups with access to its technology and ecosystem, packed support for BERT into its tools in November. It is already using NLP to help a large telecom company automate parts of its process for responding to product and service requests. And it’s using the technology to let a large market research company more easily query its database of surveys.

Work in Localization

Peltarion is collaborating with three other organizations on a three-year, government-backed project to optimize BERT for Swedish. Interestingly, a new model from Facebook called XLM-R suggests training on multiple languages at once could be more effective than optimizing for just one.

“In our initial results, XLM-R, which Facebook trained on 100 languages at once, outperformed a vanilla version of BERT trained for Swedish by a significant amount,” said Arpteg, whose team is preparing a paper on their analysis.

Nevertheless, the group hopes to have a first version of a high-performing Swedish BERT model ready before summer, said Arpteg, who headed an AI research group at Spotify before joining Peltarion three years ago.

An analysis by deepset of its German version of BERT.

In June, deepset released as open source a version of BERT optimized for German. Although its performance is only a couple percentage points ahead of the original model, two winners in an annual NLP competition in Germany used the deepset model.

Right Tool for the Job

BERT also benefits from optimizations for specific tasks such as text classification, question answering and sentiment analysis, said Arpteg. Peltarion researchers plan to publish in 2020 the results of an analysis of gains from tuning BERT for areas with their own vocabularies, such as medicine and law.

The question-answering task has become so strategic for deepset that it created Haystack, a version of its FARM transfer-learning framework built to handle the job.

In hardware, the latest NVIDIA GPUs are among the favorite tools both companies use to tame big NLP models. That’s not surprising given NVIDIA recently broke records lowering BERT training time.

“The vanilla BERT has 100 million parameters and XLM-R has 270 million,” said Arpteg, whose team recently purchased systems using NVIDIA Quadro and TITAN GPUs with up to 48GB of memory. It also has access to NVIDIA DGX-1 servers because “for training language models from scratch, we need these super-fast systems,” he said.

More memory is better, said Rusic, whose German BERT models weigh in at 400MB. Deepset taps into NVIDIA V100 Tensor Core GPUs on cloud services and uses another NVIDIA GPU locally.
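
Those figures square with simple arithmetic: at four bytes per FP32 parameter, a 100-million-parameter model needs roughly 400MB for its weights alone, matching the size Rusic quotes.

```python
# Model-size arithmetic: FP32 weights cost 4 bytes per parameter.
BYTES_PER_PARAM = 4

for name, params in [("vanilla BERT", 100e6), ("XLM-R", 270e6)]:
    megabytes = params * BYTES_PER_PARAM / 1e6
    print(f"{name}: {params / 1e6:.0f}M params ~ {megabytes:,.0f} MB of weights")

# vanilla BERT: 100M params ~ 400 MB of weights
# XLM-R: 270M params ~ 1,080 MB of weights
```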

The post BERT Does Europe: AI Language Model Learns German, Swedish appeared first on The Official NVIDIA Blog.

Playing Pod: Our Top 5 AI Podcast Episodes of 2019

If it’s worth doing, it’s worth doing with AI.

2019 was the year deep-learning-driven AI made the leap from the rarefied world of elite computer scientists and the world’s biggest web companies to the rest of us.

Everyone from startups to hobbyists to researchers is picking up powerful GPUs and putting this new kind of computing to work. And they’re doing amazing things.

And with NVIDIA’s AI Podcast, now in its third year, we’re bringing the people behind these wonders to more listeners than ever, with the podcast reaching more than 650,000 downloads in 2019.

Here are the episodes that were listener favorites in 2019.

A Man, a GAN, and a 1080 Ti: How Jason Antic Created ‘De-Oldify’

You don’t need to be an academic or to work for a big company to get into deep learning. You can just be a guy with an NVIDIA GeForce 1080 Ti and a generative adversarial network. Jason Antic, who describes himself as “a software guy,” began digging deep into GANs. Next thing you know, he’s created an increasingly popular tool that colors old black-and-white shots. Interested in digging into AI for yourself? Listen and get inspired.

Sort Circuit: How GPUs Helped One Man Conquer His Lego Pile

At some point in life, every man faces the same great challenge: sorting out his children’s Lego pile. Thanks to GPU-driven deep learning, Francisco “Paco” Garcia is one of the few men who can say they’ve conquered it. Here’s how.

UC Berkeley’s Pieter Abbeel on How Deep Learning Will Help Robots Learn

Robots can do amazing things. Compare even the most advanced robots to a three-year-old, however, and they can come up short. UC Berkeley Professor Pieter Abbeel has pioneered the idea that deep learning could be the key to bridging that gap: creating robots that can learn how to move through the world more fluidly and naturally. We caught up with Abbeel, who is director of the Berkeley Robot Learning Lab and cofounder of Covariant AI, a Bay Area company developing AI software that makes it easy to teach robots new and complex skills, at GTC 2019.

How the Breakthrough Listen Harnessed AI in the Search for Aliens

UC Berkeley’s Gerry Zhang talks about his work using deep learning to analyze signals from space for signs of intelligent extraterrestrial civilizations. And while we haven’t found aliens yet, the doctoral student has already made some extraordinary discoveries.

How AI Helps GOAT Keep Sneakerheads a Step Ahead

GOAT Group helps sneaker enthusiasts get their hands on authentic Air Jordans, Yeezys and a variety of old-school kicks with the help of AI. Michael Hall, director of data at GOAT Group, explains how in a conversation with AI Podcast host and raging sneakerhead Noah Kravitz.

Tune in to the AI Podcast

We’re available through iTunes, Google Play Music, Google Podcasts, Castbox, Castro, DoggCatcher, Overcast, Podbay, Pocket Casts, PodCruncher, PodKicker, Spotify, Stitcher and Soundcloud. If your favorite isn’t listed here, drop us a note.


Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.

The post Playing Pod: Our Top 5 AI Podcast Episodes of 2019 appeared first on The Official NVIDIA Blog.

As AI Universe Keeps Expanding, NVIDIA CEO Lays Out Plan to Accelerate All of It

With the AI revolution spreading across industries everywhere, NVIDIA founder and CEO Jensen Huang took the stage Wednesday to unveil the latest technology for speeding its mass adoption.

His talk — to more than 6,000 scientists, engineers and entrepreneurs gathered for this week’s GPU Technology Conference in Suzhou, two hours west of Shanghai — touched on advancements in AI deployment, as well as NVIDIA’s work in the automotive, gaming, and healthcare industries.

“We build computers for the Einsteins, Leonardo da Vincis and Michelangelos of our time,” Huang told the crowd, which overflowed into the aisles. “We build these computers for all of you.”

Huang explained that demand is surging for technology that can accelerate the delivery of AI services of all kinds. And NVIDIA’s deep learning platform — which the company updated Wednesday with new inferencing software — promises to be the fastest, most efficient way to deliver these services.

It’s the latest example of how NVIDIA achieves spectacular speedups by applying a combination of GPUs optimized for parallel computation, work across the entire computing stack, and algorithm and ecosystem expertise in focused vertical markets.

“It is accepted now that GPU-accelerated computing is the path forward as Moore’s law has ended,” Huang said.

Real-Time Recommendations: Baidu and Alibaba

The latest challenge for accelerated computing: driving a new generation of powerful systems, known as recommender systems, able to connect individuals with what they’re looking for in a world where the options available to them are spiraling exponentially.

“The era of search has ended: if I put out a trillion, billion, million things and they’re changing all the time, how can you find anything?” Huang asked. “The era of search is over. The era of recommendations has arrived.”

Baidu — one of the world’s largest search companies — is harnessing NVIDIA technology to power advanced recommendation engines.

“It solves this problem of taking this enormous amount of data, and filtering it through this recommendation system so you only see 10 things,” Huang said.

With GPUs, Baidu can now train the models that power its recommender systems 10x faster, reducing costs and, over the long term, increasing the accuracy of its models and the quality of its recommendations.

Another example of such systems’ power: Alibaba, which relies on NVIDIA technology to help power the recommendation engines behind the success of Singles Day.

This shopping festival, which takes place on Nov. 11 — or 11.11 — generated $38 billion in sales last month. That’s up nearly a quarter from last year’s $31 billion, and more than double the online sales on Black Friday and Cyber Monday combined.

Helping to drive its success are recommender systems that display items that match user preferences, improving the click-through rate — which is closely watched in the e-commerce industry as a key sales driver. Its systems need to run in real-time and at an incredible scale, something that’s only possible with GPUs.

“Deep learning inference is wonderful for deep recommender systems and these recommender systems will be the engine for the Internet,” Huang said. “Everything we do in the future, everything we do now, passes through a recommender system.”

Real-Time Conversational AI

Huang also announced groundbreaking new inference software enabling smarter, real-time conversational AI.

NVIDIA TensorRT 7 — the seventh generation of the company’s inference software development kit — features a new deep learning compiler designed to automatically optimize and accelerate the increasingly complex recurrent and transformer-based neural networks needed for complex new applications, such as AI speech.

This speeds the components of conversational AI by 10x compared to CPUs, driving latency below the 300-millisecond threshold considered necessary for real-time interactions.

“To have the ability to understand your intention, make recommendations, do searches and queries for you, and summarize what they’ve learned to a text-to-speech system… that loop is now possible,” Huang said. “It is now possible to achieve very natural, very rich, conversational AI in real time.”
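
To make the 300-millisecond figure concrete, here is a back-of-the-envelope budget. The per-stage latencies are assumptions invented for this sketch, not NVIDIA benchmarks; the point is simply that a uniform 10x speedup can move a multi-second CPU pipeline under the interactive threshold.

```python
# Illustrative latency budget for a conversational loop. The per-stage
# numbers below are assumptions for this sketch, not measured benchmarks.
stages_cpu_ms = {
    "speech recognition": 800,
    "language understanding": 1200,
    "text to speech": 800,
}

total_cpu = sum(stages_cpu_ms.values())  # 2800 ms on CPUs in this example
total_gpu = total_cpu / 10               # the claimed 10x speedup

print(f"CPU pipeline: {total_cpu} ms, GPU pipeline: {total_gpu:.0f} ms")
print("interactive?", total_gpu < 300)   # True: under the 300 ms threshold
```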

Accelerating Automotive Innovations

Huang also announced NVIDIA will provide the transportation industry with source access to its NVIDIA DRIVE deep neural networks (DNNs) for autonomous vehicle development.

NVIDIA DRIVE has become a de facto standard for AV development, used broadly by automakers, truck manufacturers, robotaxi companies, software companies and universities.

Now, NVIDIA is providing source access to its pre-trained AI models and training code to AV developers. Using a suite of NVIDIA AI tools, the ecosystem can freely extend and customize the models to increase the robustness and capabilities of their self-driving systems.

In addition to providing source access to the DNNs, Huang announced the availability of a suite of advanced tools so developers can customize and enhance NVIDIA’s DNNs using their own data sets and target feature sets. These tools allow the training of DNNs with active learning, federated learning and transfer learning, Huang said.

Huang also announced NVIDIA DRIVE AGX Orin, the world’s highest-performance and most advanced system-on-a-chip. It delivers 7x the performance and 3x the efficiency per watt of Xavier, NVIDIA’s previous-generation automotive SoC. Orin — which will be available for customer production runs in 2022 — boasts 17 billion transistors, 12 CPU cores, and is capable of over 200 trillion operations per second.
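
As a quick consistency check (taking the 30 TOPS commonly cited for Xavier as an outside assumption, since the figure isn’t stated in this post), 7x Xavier lines up with Orin’s claimed throughput:

```python
# Sanity check: 7x Xavier's commonly cited 30 TOPS (an assumption here,
# not a number from this post) is consistent with Orin's "over 200
# trillion operations per second."
xavier_tops = 30
orin_tops = 7 * xavier_tops
print(orin_tops, "TOPS")  # 210, i.e. over 200 trillion ops per second
```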

Orin will be woven into a stack of products — all running a single architecture and compatible with software developed on Xavier — able to scale from simple Level 2 autonomy all the way up to full Level 5 autonomy.

And Huang announced that Didi — the world’s largest ride-hailing company — will adopt NVIDIA DRIVE to bring robotaxis and intelligent ride-hailing services to market.

“We believe everything that moves will be autonomous some day,” Huang said. “This is not the work of one company, this is the work of one industry, and we’ve created an open platform so we can all team up together to realize this autonomous future.”

Game On

Adding to NVIDIA’s growing footprint in cloud gaming, Huang announced a collaboration with Tencent Games.

“We are going to extend the wonderful experience of PC gaming to all the computers that are underpowered today. The opportunity is quite extraordinary,” Huang said. “We can extend PC gaming to the other 800 million gamers in the world.”

NVIDIA’s technology will power Tencent Games’ START cloud gaming service, which began testing earlier this year. START gives gamers access to AAA games on underpowered devices anytime, anywhere.

Huang also announced that six leading game developers will join the ranks of studios around the world using the real-time ray-tracing capabilities of NVIDIA GeForce RTX to transform the image quality and lighting effects of their upcoming titles.

Ray tracing is a graphics rendering technique that brings real-time, cinematic-quality rendering to content creators and game developers. NVIDIA GeForce RTX GPUs contain specialized processor cores designed to accelerate ray tracing so the visual effects in games can be rendered in real time.

The upcoming games include a mix of blockbusters, new franchises, triple-A titles and indie fare — all using real-time ray tracing to bring ultra-realistic lighting models to their gameplay.

They include Boundary, from Surgical Scalpels Studios; Convallaria, from LoongForce; F.I.S.T., from Shanghai TiGames; an unnamed project from miHoYo; Ring of Elysium, from Tencent; and Xuan Yuan Sword VII, from Softstar.

Accelerating Medical Advances, 5G

This year, Huang said, NVIDIA has added two major new applications to CUDA — 5G vRAN and genomic processing. With each, NVIDIA is supported by world leaders in their respective industries — Ericsson in telecommunications and BGI in genomics.

Since the first human genome was sequenced in 2003, the cost of whole genome sequencing has steadily shrunk, far outstripping the pace of Moore’s law. That’s led to an explosion of genomic data, with the total amount of sequence data doubling every seven months.

“The ability to sequence the human genome in its totality is incredibly powerful,” Huang said.

To put this data to work — and unlock the promise of truly personalized medicine — Huang announced that NVIDIA is working with Beijing Genomics Institute.

BGI is using NVIDIA V100 GPUs and software from Parabricks — an Ann Arbor, Michigan-based startup acquired by NVIDIA earlier this month — to build the highest-throughput genome sequencer yet, potentially driving down the cost of genomics-based personalized medicine.

“It took 15 years to sequence the human genome for the first time,” Huang said. “It is now possible to sequence 16 whole genomes per day.”

Huang also announced the availability of the NVIDIA Parabricks Genomic Analysis Toolkit on NGC, NVIDIA’s hub for GPU-optimized software for deep learning, machine learning and high-performance computing.

Accelerated Robotics with NVIDIA Isaac

As the talk wound to a close, Huang announced a new version of NVIDIA’s Isaac software development kit. The Isaac SDK achieves an important milestone in establishing a unified robotic development platform — enabling AI, simulation and manipulation capabilities.

The showstopper: Leonardo, a robotic arm with exquisite articulation created by NVIDIA researchers in Seattle, which not only performed a sophisticated task — recognizing and rearranging four colored cubes — but responded almost tenderly to the actions of the people around it in real time. It purred out a deep squeak, seemingly something out of a Steven Spielberg movie.

As the audience watched, the robotic arm gently plucked a yellow block from Huang’s hand and set it down. It then rearranged the four colored blocks, stacking them with fine precision.

The feat was the result of sophisticated simulation and training that lets the robot learn in virtual worlds before being put to work in the real world. “And this is how we’re going to create robots in the future,” Huang said.

Accelerating Everything

Huang finished his talk by recapping NVIDIA’s sprawling accelerated computing story, one that spans ray tracing, cloud gaming, recommender systems, real-time conversational AI, 5G, genomics analysis, autonomous vehicles, robotics and more.

“I want to thank you for your collaboration to make accelerated computing amazing and thank you for coming to GTC,” Huang said.

The post As AI Universe Keeps Expanding, NVIDIA CEO Lays Out Plan to Accelerate All of It appeared first on The Official NVIDIA Blog.

AI, Accelerated Computing Drive Shift to Personalized Healthcare

Genomics is finally poised to go mainstream, with help from deep learning and accelerated-computing technologies from NVIDIA.

Since the first human genome was sequenced in 2003, the cost of whole genome sequencing has steadily shrunk, far faster than suggested by Moore’s law. From sequencing the genomes of newborn babies to conducting national population genomics programs, the field is gaining momentum and getting more personal by the day.

Advances in sequencing technology have led to an explosion of genomic data. The total amount of sequence data is doubling every seven months. This breakneck pace could see genomics in 2025 surpass by 10x the amount of data generated by other big data sources such as astronomy, Twitter and YouTube — hitting the double-digit exabyte range.
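
Doubling every seven months compounds dramatically, as a short calculation shows; the starting archive size below is a hypothetical chosen only for illustration.

```python
# Compounding implied by "sequence data doubles every seven months".
doubling_months = 7
months = 5 * 12                          # look five years ahead
factor = 2 ** (months / doubling_months)
print(f"growth over {months} months: {factor:,.0f}x")  # ~381x

start_pb = 40  # hypothetical archive size today, in petabytes
print(f"{start_pb} PB today -> {start_pb * factor / 1000:.1f} EB")  # ~15.2 EB
```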

New sequencing systems, like the DNBSEQ-T7 from BGI Group, the world’s largest genomics research group, are pushing the technology into broad use. The system generates a whopping 60 genomes per day, equaling 6 terabytes of data.

With advancements in BGI’s flow cell technology and acceleration by a pair of NVIDIA V100 Tensor Core GPUs, DNBSEQ-T7 sequencing is sped up 50x, making it the highest throughput genome sequencer to date.

As costs decline and sequencing times accelerate, more use cases emerge, such as the ability to sequence a newborn in intensive care where every minute counts.

Getting Past the Genome Analysis Bottleneck: GPU-Accelerated GATK


The genomics community continues to extract new insights from DNA. Recent breakthroughs include single-cell sequencing to understand mutations at a cellular level, and liquid biopsies that detect and monitor cancer using blood for circulating DNA.

But genomic analysis has traditionally been a computational bottleneck in the sequencing pipeline — one that can be surmounted using GPU acceleration.

To deliver a roadmap of continuing GPU acceleration for key genomic analysis pipelines, the team at Parabricks — an Ann Arbor, Michigan-based developer of GPU software for genomics — is joining NVIDIA’s healthcare team, NVIDIA founder and CEO Jensen Huang shared today onstage at GTC China.

Working with BGI, Parabricks showed its software can analyze a genome in under an hour. Using a server with eight NVIDIA T4 Tensor Core GPUs, BGI showed the throughput could lower the cost of genome sequencing to $2 — less than half the cost of existing systems.

See More, Do More with Smart Medical Devices

New medical devices are being invented across the healthcare industry. United Imaging Healthcare has introduced two industry-first medical devices. The uEXPLORER is the world’s first total body PET-CT scanner. Its pioneering ability to image an individual in one position enables it to carry out fast, continuous tracking of tracer distribution over the entire body.

A full body PET/CT image from uEXPLORER. Courtesy of United Imaging.

The total-body coverage of uEXPLORER can significantly shorten scan time. Scans as brief as 30 seconds provide good image quality, compared to traditional systems requiring over 20 minutes of scan time. uEXPLORER is also setting a new benchmark in tracer dose — imaging at about 1/50 of the regular dose, without compromising image quality.

The FDA-approved system uses 16 NVIDIA V100 Tensor Core GPUs and eight 56 GB/s InfiniBand network links from Mellanox to process movie-like scans that can acquire up to a terabyte of data. The system is already deployed in the U.S. at the University of California, Davis, where scientists helped design the system. It’s also the subject of an article in Nature, as well as videos watched by nearly half a million viewers on YouTube.

United’s other groundbreaking system, the uRT-Linac, is the first instrument to support a full radiation therapy suite, from detection to prevention.

With this instrument, a patient from a remote village can make the long trek to the nearest clinic just once to get diagnostic tests and treatment. The uRT-Linac combines CT imaging, AI processing to assist in treatment planning, and simulation with the radiation therapy delivery system. Using multi-modal technologies and AI, United has changed the nature of delivering cancer treatment.

Further afield, a growing number of smart medical devices are using AI for enhanced signal and image processing, workflow optimizations and data analysis.

And on the horizon are patient monitors that can sense when a patient is in danger and smart endoscopes that can guide surgeons during surgery. It’s no exaggeration to state that, in the future, every sensor in the hospital will have AI-infused capabilities.

Our recently announced NVIDIA Clara AGX developer kit helps address this trend. Clara AGX comprises hardware based on NVIDIA Xavier SoCs and Volta Tensor Core GPUs, along with a Clara AGX software development kit, to enable the proliferation of smart medical devices that make healthcare both smarter and more personal.

Apply for early access to Clara AGX.

The post AI, Accelerated Computing Drive Shift to Personalized Healthcare appeared first on The Official NVIDIA Blog.

All the Way to 11: NVIDIA GPUs Accelerate 11.11, World’s Biggest Online Shopping Event

Putting AI to work on a massive scale, Alibaba recently harnessed NVIDIA GPUs to serve its customers on 11/11, the year’s largest shopping event.

Singles Day, as the Nov. 11 shopping event is also known, generated $38 billion in sales. That’s up nearly a quarter from last year’s $31 billion, and more than double online sales on Black Friday and Cyber Monday combined.

Singles Day — which has grown from $7 million a decade ago — illustrates the massive scale AI has reached in global online retail, where no player is bigger than Alibaba.

Each day, over 100 million shoppers comb through billions of available products on its site. Activity skyrockets on peak shopping days, when Alibaba’s systems field hundreds of thousands of queries a second.

And AI keeps things humming along, according to Lingjie Xu, Alibaba’s director of heterogeneous computing.

“To ensure these customers have a great user experience, we deploy state-of-the-art AI technology at massive scale using the NVIDIA accelerated computing platform, including T4 GPUs, cuBLAS, customized mixed precision and inference acceleration software,” he said.

“The platform’s intuitive search capabilities and reliable recommendations allow us to support a model six times more complex than in the past, which has driven a 10 percent improvement in click-through rate. Our largest model shows 100 times higher throughput with T4 compared to CPU,” he said.

One key application for Alibaba and other modern online retailers: recommender systems that display items that match user preferences, improving the click-through rate — which is closely watched in the e-commerce industry as a key sales driver.

Every small improvement in click-through rate directly impacts the user experience and revenues. A 10 percent improvement from advanced recommender models that can run in real time, and at incredible scale, is only possible with GPUs.

Alibaba’s teams employ NVIDIA GPUs to support a trio of optimization strategies around resource allocation, model quantization and graph transformation to increase throughput and responsiveness.

This has enabled NVIDIA T4 GPUs to accelerate Alibaba’s wide and deep recommendation model and deliver 780 queries per second. That’s a huge leap from CPU-based inference, which could only deliver three queries per second.
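
For readers unfamiliar with the pattern, below is a toy scoring pass over the general wide-and-deep idea, with made-up dimensions and random weights rather than anything resembling Alibaba’s production model: a linear “wide” path memorizes cross features, a small “deep” MLP generalizes from embeddings, and their combined logit yields a predicted click-through rate.

```python
import numpy as np

# Toy "wide & deep" scoring pass: the general architecture named in the
# post, not Alibaba's production model. All shapes and weights are made up.
rng = np.random.default_rng(0)

wide_x = rng.random(20)   # e.g. one-hot cross features (memorization path)
deep_x = rng.random(8)    # e.g. concatenated user/item embeddings

w_wide = rng.normal(size=20)
W1, W2 = rng.normal(size=(16, 8)), rng.normal(size=16)

deep_h = np.maximum(W1 @ deep_x, 0)   # one ReLU hidden layer (generalization path)
logit = w_wide @ wide_x + W2 @ deep_h
ctr = 1 / (1 + np.exp(-logit))        # predicted click-through probability
print(f"predicted CTR: {ctr:.3f}")
```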

Alibaba has also deployed NVIDIA GPUs to accelerate its systems for automatic ad banner generation, ad recommendation, image processing to help identify fake products, language translation and speech recognition, among others. As the world’s third-largest cloud service provider, Alibaba Cloud provides a wide range of heterogeneous computing products capable of intelligent scheduling, automatic maintenance and real-time capacity expansion.

Alibaba’s far-sighted deployment of NVIDIA’s AI platform is a straw in the wind, indicating what more is to come in a burgeoning range of industries.

Just as its tools filter billions of products for millions of consumers, AI recommenders running on NVIDIA GPUs will find a place among countless other digital services — app stores, news feeds, restaurant guides and music services among them — keeping customers happy.

Learn more about NVIDIA’s AI inference platform.

The post All the Way to 11: NVIDIA GPUs Accelerate 11.11, World’s Biggest Online Shopping Event appeared first on The Official NVIDIA Blog.