Learn About Our Meetup

5000+ Members



Join our meetup, learn, connect, share, and get to know your Toronto AI community. 



Browse the latest deep learning, AI and machine learning job postings from Indeed for the GTA.



Are you looking to sponsor space, be a speaker, or volunteer? Feel free to give us a shout.


AI Calling: How to Kick Off a Career in Data Science

Paul Mahler remembers the day in May 2013 he decided to make the switch.

The former economist was waiting at a bus stop in Washington, D.C., reading the New York Times on his smartphone. He was struck by the story of a statistics professor who wrote an app that let computers review screenplays. It launched the academic into a lucrative new career in Hollywood.

“That seemed like a monumental breakthrough. I decided I wanted to get into data science, too,” said Mahler. Today, he’s a senior data scientist in Silicon Valley, helping NVIDIA’s customers use AI to make their own advances.

Like Mahler, Eyal Toledano made a big left turn a decade into his career. He describes “an existential crisis … I thought if I have any talent, I should try to do something I’m really proud of that’s bigger than myself and even if I fail, I will love every minute,” he said.

Then “an old friend from my undergrad days told me about his diving accident in a remote area and how no one could read his X-rays. He said we should build a database of images [using AI] to facilitate diagnoses in situations where people need this help — it was the first time I devoted myself to a seed of an idea that came from someone else,” Toledano recalled.

The two friends co-founded Zebra Medical Vision in 2014 to apply AI to medical imaging. For Toledano, there was only one way into the emerging field of deep learning.

“Roll up your sleeves, shovel some dirt and join the effort, that’s what helped me — in data science, you really need to get dirty,” he said.

Plenty of Room in the Sandbox

The field is still wide open. Data scientist tops the list of best jobs in America, according to a 2019 ranking from Glassdoor, a service that connects 67 million monthly visitors with 12 million job postings. It pegged median base salary for an entry-level data scientist at $108,000, job satisfaction at 4.3 out of 5 and said there are 6,510 job openings.

The job of data engineer was not far behind at $100,000, 4.2 out of 5 and 4,524 openings.

A 2018 study by recruiters at Burtch Works adds detail to the picture. It estimated starting salaries range from $95,000 to $168,000, depending on skill level. Data scientists come to the job with a wide range of academic backgrounds including math/statistics (25%), computer science and physical science (20% each), engineering (18%) and general business (8%). Nearly half had Ph.D.s and 40 percent held master’s degrees.

“Now that data is the new oil, data science is one of the most important jobs,” said Alen Capalik, co-founder and chief executive of a startup that develops GPU software backed in part by NVIDIA. “Demand is incredible, so the unemployment in data science is zero.”

Like Mahler and Toledano, Capalik jumped in head first. “I just read a lot to understand data, the data pipeline and how customers use their data — different verticals use data differently,” he said.

The Nuts and Bolts

Data scientists are hybrid creatures. Some are statisticians who learned to code. Some are Python wizards learning the nuances of data analytics and machine learning. Others are domain experts who wanted to be part of the next big thing in computing.

All face a common flow of tasks. They must:

  • Identify business problems suited for big data
  • Set up and maintain tool chains
  • Gather large, relevant datasets
  • Structure datasets to address business concerns
  • Select an appropriate AI model family
  • Optimize model hyperparameters
  • Postprocess machine learning models
  • Critically analyze the results
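The model-selection and hyperparameter-optimization steps in the list above can be sketched as a toy random search. This is a minimal illustration, not any particular library's API: the `evaluate` function is a hypothetical stand-in for actually training and validating a model on held-out data.

```python
import random

# Hypothetical search space pairing a model family with its
# hyperparameters -- the names here are illustrative only.
SEARCH_SPACE = {
    "model": ["logistic_regression", "random_forest", "gradient_boosting"],
    "learning_rate": [0.001, 0.01, 0.1],
    "max_depth": [3, 5, 7],
}

def evaluate(config, trial):
    """Stand-in for train-and-validate: returns a mock validation score.

    A real pipeline would fit `config` on training data and score it
    on a held-out validation set.
    """
    return random.Random(trial).uniform(0.5, 0.95)

def random_search(n_trials=20, seed=0):
    rng = random.Random(seed)
    best_score, best_config = float("-inf"), None
    for trial in range(n_trials):
        # Sample one candidate configuration from the search space.
        config = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        score = evaluate(config, trial)
        if score > best_score:
            best_score, best_config = score, config
    return best_config, best_score

best_config, best_score = random_search()
```

Real workflows swap the mock `evaluate` for cross-validated training and often use smarter strategies (Bayesian optimization, successive halving), but the loop structure is the same.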

“The unicorn data scientists do it all, from setting up a server to presenting to the board,” said Mahler.

But the reality is the field is quickly segmenting into subtasks. Data engineers work on the front end of the process, massaging datasets through the so-called extract, transform and load process.

Big operations may employ data librarians, privacy experts and AI pipeline engineers who ensure systems deliver time-sensitive recommendations fast.

“The proliferation of titles is another sign the field is maturing,” said Mahler.

Play a Game, Learn the Job

One of the fastest, most popular ways into the field is to have some fun with AI by entering Kaggle contests, said Mahler. The online matches provide forums with real-world problems and code examples to get started. “People on our NVIDIA RAPIDS product team are continually on Kaggle contests,” he said.

Wins can lead to jobs, too. Owkin, an NVIDIA partner that designs AI software for healthcare, declares on its website, “Our data scientists are among the best in the world, with several Kaggle Masters.”

These days, at least some formal study is recommended. Online courses aim to give experienced programmers a jumpstart into deep learning. Rachel Thomas, co-founder of one such course provider, maintains a list of her talks encouraging everyone, especially women, to get into data science.

We compiled our own list of online courses in data science given by the likes of MIT, Google and NVIDIA’s Deep Learning Institute.

“Having a strong grasp of linear algebra, probability and statistical modeling is important for creating and interpreting AI models,” said Mahler. “A lot of employers require a degree in data or computer science and a strong understanding of Python,” he added.

“I was never one to look for degrees,” Capalik countered. “Having real-world experience is better because the first day on a job you will find out things people never showed you in school,” he said.

Both agreed the best data scientists have a strong creative streak. And employers covet data scientists who are imaginative problem solvers.

Getting Picked for a Job

One startup gives job candidates a test of technical skills, but the test is just part of the screening process, said Capalik.

“I like to just look someone in the eye and ask a few questions,” he said. “You want to know if they are a problem solver and can work with a team because data science is a team effort — even Michael Jordan needed a team to win.”

To pass the test and get an interview with Capalik, “you need to know what the data pipeline looks like, how data is collected, where it’s stored and how to work around the nuances and inefficiencies to solve problems with algorithms,” he said.

Toledano of Zebra is suspicious of candidates with pat answers.

“This is an experimental science,” he said. “The results are asymptotic to your ability to run many experiments, so you need to come up with different pieces and ideas quickly and test them in training experiments over and over again.”

“People who want to solve a problem once might be very intelligent, but they will probably miss things. Don’t build a bow and arrow, build a catapult to throw a gazillion arrows — each one a potential solution you can evaluate quickly,” he added.

Chris Rowen, a veteran entrepreneur and chief executive of AI startup BabbleLabs, is impressed by candidates who can explain their work. “Understand the theory about why models work on which problems and why,” he advised.

The Developer’s Path

Unlike the pure digital world of IT where answers are right or wrong, data science challenges often have no definitive answer, so they invite the curious who like to explore options and tradeoffs.

Indeed, IT and data science are radically different worlds.

IT departments use carefully structured processes to check code in and out and verify compliance. They write apps once that may be used for years. Data science teams, on the other hand, conduct experiments continuously with models based on probability curves and frequently massage models and datasets.

“Software engineering is more of a straight line while data science is a loop,” said James Kobielus, a veteran market watcher and lead AI analyst at Wikibon.

That said, it’s also true that “data science is the core of the next-generation developer, really,” Kobielus said. Although many subject matter experts jump into data science and learn how to code, “even more people are coming in from general app development,” he said, in part because that’s where the money is these days.

Clouds, Robots and Soft Skills

Whatever path you take, data scientists need to be familiar with the cloud. Many AI projects are born on remote servers using containers and modern orchestration techniques.

And you should understand the latest mobile and edge hardware and its constraints.

“There’s a lot of work going on in robotics using trial-and-error algorithms for reinforcement learning. This is beyond traditional data science, so personnel shortages are more acute there — and computer vision in cameras could not be hotter,” Kobielus said.

A diplomat’s skills for negotiation come in handy, too. Data scientists are often agents of change, disrupting jobs and processes, so it’s important to make allies.

A Philosophical Shift

It sounds like a lot of work, but don’t be intimidated.

“I don’t know that I’ve made such a huge shift,” said Rowen of BabbleLabs, his first startup to leverage data science.

“The nomenclature has changed. The idea that the problem’s specs are buried in the data is a philosophical shift, but at the root of it, I’m doing something analogous to what I’ve done in much of my career,” he said.

In the past Rowen explored the “computational profile of a problem and found the processor to make it work. Now, we turn that upside down. We look at what’s at the heart of a computation and what data we need to do it — that insight carried me into deep learning,” he said.

In a May 2018 talk, Thomas was equally encouraging. Using transfer learning, you can do excellent AI work by training just the last few layers of a neural network, she said. And you don’t always need big data. For example, one system was trained to distinguish images of baseball from cricket using just 30 pictures.
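Thomas’s point about transfer learning — keeping a pretrained network frozen and retraining only the final layers — can be sketched with NumPy. Here a fixed random projection stands in for the frozen feature extractor (purely an assumption for illustration; a real system would reuse, say, an ImageNet-pretrained backbone), and only a small logistic-regression head is trained on 30 labeled samples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "backbone": a fixed random projection standing in for a
# pretrained feature extractor. Its weights are never updated.
W_frozen = rng.normal(size=(64, 16))
W_snapshot = W_frozen.copy()  # kept only to verify nothing changed

def features(x):
    return np.tanh(x @ W_frozen)

# Tiny labeled dataset: 30 samples, echoing the small-data example above.
X = rng.normal(size=(30, 64))
y = (X[:, 0] > 0).astype(float)  # mock binary labels

# Trainable head: a single logistic-regression layer on top.
w, b = np.zeros(16), 0.0
lr = 0.5
losses = []
for _ in range(200):
    f = features(X)
    p = 1.0 / (1.0 + np.exp(-(f @ w + b)))
    losses.append(-np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9)))
    grad = p - y
    w -= lr * f.T @ grad / len(X)  # only the head's weights move
    b -= lr * grad.mean()
```

The training loss falls while the backbone stays byte-for-byte identical, which is the whole appeal: far fewer parameters to fit, so far less data is needed.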

“The world needs more people in AI, and the barriers are lower than you thought,” she added.

The post AI Calling: How to Kick Off a Career in Data Science appeared first on The Official NVIDIA Blog.

How AI Accelerates Blood Cell Analysis at Taiwan’s Largest Hospital

Blood tests tell doctors about the function of key organs, and can reveal countless medical conditions, including heart disease, anemia and cancer. At major hospitals, the number of blood cell images awaiting analysis can be overwhelming.

With over 10,000 beds and more than 8 million outpatient visits annually, Taiwan’s Chang Gung Memorial Hospital collects at least a million blood cell images each year. Its clinicians must be on hand 24/7, since blood analysis is key in the emergency department. To improve its efficiency and accuracy, the health care network — with seven hospitals across the island — is adopting deep learning tools developed on AI-Ready Infrastructure, or AIRI.

An integrated architecture from Pure Storage and NVIDIA, AIRI is based on the NVIDIA DGX POD reference design and powered by NVIDIA DGX-1 in combination with Pure Storage FlashBlade. The hospital’s AIRI solution is equipped with four NVIDIA DGX-1 systems, delivering over one petaflop of AI compute performance per system. Each DGX-1 integrates eight of the world’s fastest data center accelerators: the NVIDIA V100 Tensor Core GPU.

Chang Gung Memorial’s current blood cell analysis tools are capable of automatically identifying five main types of white blood cells, but still require doctors to manually identify other cell types, a time-consuming and expensive process.

Its deep learning model provides a more thorough analysis, classifying 18 types of blood cells from microscopy images with 99 percent accuracy. Having an AI tool that identifies a wide variety of blood cells also boosts doctors’ abilities to classify rare cell types, improving disease diagnosis. Using AI can help reduce clinician workloads without compromising on test quality.

To accelerate the training and inference of its deep learning models, the hospital relies on the integrated infrastructure design of AIRI, which incorporates best practices for compute, networking, storage, power and cooling.

AI Runs in This Hospital’s Blood

After a patient has blood drawn, Chang Gung Memorial uses automated tools to sample the blood, smear it on a glass microscope slide and stain it, so that red blood cells, white blood cells and platelets can be examined. The machine then captures an image of the slide, known as a blood film, so it can be analyzed by algorithms.

Using transfer learning, the hospital trained its convolutional neural networks on a dataset of more than 60,000 blood cell images on AIRI.

The AI takes just two seconds to interpret a set of 25 images using a server of NVIDIA T4 GPUs for inference — a task that’s more than a hundred times faster than the usual procedure involving a team of three medical experts spending up to five minutes.

In addition to providing faster blood test results, deep learning can reduce physician fatigue and enhance the quality of blood cell analysis.

“AI will improve the whole medical diagnosis process, especially the doctor-patient relationship, by solving two key problems: time constraints and human resource costs,” said Chang-Fu Kuo, director of the hospital’s Center for Artificial Intelligence in Medicine.

Some blood cell types are very rare, leading to an imbalance in the training dataset. To augment the number of example images for rare cell types and to improve the model’s performance, the researchers are experimenting with generative adversarial networks, or GANs.

The hospital is also using AIRI for fracture image identification, genomics and immunofluorescence projects. While the current AI tools focus on identifying medical conditions, future applications could be used for disease prediction.


How American Express Uses Deep Learning for Better Decision Making

Financial fraud is on the rise. As the number of global transactions increases and digital technology advances, the complexity and frequency of fraudulent schemes are keeping pace.

Security company McAfee estimated in a 2018 report that cybercrime annually costs the global economy some $600 billion, or 0.8 percent of global gross domestic product.

One of the most prevalent — and preventable — types of cybercrime is credit card fraud, which is exacerbated by the growth in online transactions.

That’s why American Express, a global financial services company, is developing deep learning generative and sequential models to prevent fraudulent transactions.

“The most strategically important use case for us is transactional fraud detection,” said Dmitry Efimov, vice president of machine learning research at American Express. “Developing techniques that more accurately identify and decline fraudulent purchase attempts helps us protect our customers and our merchants.”

Cashing into Big Data

The company’s effort spanned several teams that conducted research on using generative adversarial networks, or GANs, to create synthetic data based on sparsely populated segments.

In most financial fraud use cases, machine learning systems are built on historical transactional data. The systems use deep learning models to scan incoming payments in real time, identify patterns associated with fraudulent transactions and then flag anomalies.

In some instances, like new product launches, GANs can produce additional data to help train and develop more accurate deep learning models.

Given its global integrated network with tens of millions of customers and merchants, American Express deals with massive volumes of structured and unstructured data sets.

Using several hundred data features, including the time stamps for transactional data, the American Express teams found that sequential deep learning techniques, such as long short-term memory and temporal convolutional networks, can be adapted for transaction data to produce superior results compared to classical machine learning approaches.
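The sequential techniques named above (long short-term memory networks and temporal convolutional networks) share one property that matters for transaction data: the output at time t may depend only on transactions at or before t. A minimal NumPy sketch of a causal, left-padded 1-D convolution — the building block of a TCN — illustrates this. The kernel here is random, purely for illustration; a trained model would learn its weights.

```python
import numpy as np

def causal_conv1d(x, kernel):
    """1-D convolution whose output at time t uses only x[: t + 1]."""
    k = len(kernel)
    padded = np.concatenate([np.zeros(k - 1), x])  # pad on the left only
    # out[t] = sum_j kernel[j] * x[t - j], i.e. no future inputs leak in.
    return np.array([padded[t:t + k] @ kernel[::-1] for t in range(len(x))])

rng = np.random.default_rng(0)
amounts = rng.normal(size=10)   # mock per-transaction feature over time
kernel = rng.normal(size=3)     # random weights, for illustration only
out = causal_conv1d(amounts, kernel)
```

Changing a future transaction leaves all earlier outputs untouched — the property that lets such models score each incoming payment in real time as it arrives.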

The results have paid dividends.

“These techniques have a substantial impact on the customer experience, allowing American Express to improve speed of detection and prevent losses by automating the decision-making process,” Efimov said.

Closing the Deal with NVIDIA GPUs 

Due to the huge amount of customer and merchant data American Express works with, the company selected NVIDIA DGX-1 systems, which contain eight NVIDIA V100 Tensor Core GPUs, to build models with both TensorFlow and PyTorch.

Its NVIDIA GPU-powered machine learning techniques are also used to forecast customer default rates and to assign credit limits.

“For our production environment, speed is extremely important with decisions made in a matter of milliseconds, so the best solution to use are NVIDIA GPUs,” said Efimov.

As the systems go into production in the next year, the teams plan on using the NVIDIA TensorRT platform for high-performance deep learning inference to deploy the models in real time, which will help improve American Express’ fraud and credit loss rates.

Efimov will be presenting his team’s work at the GPU Technology Conference in San Jose in March. To learn more about credit risk management use cases from American Express, register for GTC, the premier AI conference for insights, training and direct access to experts on the key topics in computing across industries.


AI Came, AI Saw, AI Conquered: How Vysioneer Improves Precision Radiation Therapy

Of the millions diagnosed with cancer each year, over half receive some form of radiation therapy.

Deep learning is helping radiation oncologists make the process more precise by automatically labeling tumors from medical scans in a process known as contouring.

It’s a delicate balance.

“If oncologists contour too small an area, then radiation doesn’t treat the whole tumor and it could keep growing,” said Jen-Tang Lu, founder and CEO of Vysioneer, a Boston-based startup with an office in Taiwan. “If they contour too much, then radiation can harm the neighboring normal tissues.”

A member of the NVIDIA Inception startup accelerator program, Vysioneer builds AI tools to automate the time-consuming process of tumor contouring. To ensure the efficacy and safety of radiotherapy, radiation oncologists can easily spend hours contouring tumors from medical scans, Lu said.

The company’s first product, VBrain, can identify the three most common types of brain tumors from CT and MRI scans. Trained on NVIDIA V100 Tensor Core GPUs in the cloud and NVIDIA Quadro RTX 8000 GPUs on premises, the tool can speed up the contouring task by more than 6x — from over an hour to less than 10 minutes.

Vysioneer showcased its latest demos in the NVIDIA booth at the annual meeting of the Radiological Society of North America last week in Chicago. It’s one of more than 50 NVIDIA Inception startups that attended the conference.

Targeting Metastatic Brain Tumors

A non-invasive treatment, precision radiation therapy uses a high dosage of X-ray beams to destroy tumors without harming neighboring tissues.

Due to the availability of public datasets, most AI models that identify brain cancer from medical scans focus on gliomas, which are primary tumors — ones that originate in the brain.

VBrain, trained on more than 1,500 proprietary CT and MRI scans, identifies the vastly more common metastatic type of brain tumors, which occur when cancer spreads to the brain from another part of the body. Metastatic brain tumors typically occur in multiple parts of the brain at once, and can be tiny and hard to spot from medical scans.

VBrain integrates seamlessly into radiation oncologists’ existing clinical workflow, processing scans in just seconds using an NVIDIA GPU for inference. The tool could reduce variability among radiation oncologists, Lu says, and can also identify tiny lesions that radiologists or clinicians might miss.

The company has deployed its solution in a clinical trial at National Taiwan University Hospital, running on an on-premises server of NVIDIA GPUs.

In one case at the hospital, a patient had lung cancer that spread to the brain. During diagnosis, the patient’s radiologist identified a single large lesion from the brain scan. But VBrain revealed another two tiny lesions. This additional information led the oncologists to alter the patient’s radiation treatment plan.

Vysioneer is working towards FDA clearance for VBrain and plans to launch contouring AI models for medical images of other parts of the body. The company also plans to make VBrain available on NGC, a container registry that provides startups with streamlined deployment, access to the GPU compute ecosystem and a robust distribution channel.

NVIDIA tests and optimizes healthcare AI applications, like VBrain, to operate with the NVIDIA EGX platform, which enables fleets of devices and multiple physical locations of edge nodes to be remotely managed easily and securely, meeting the needs of data security and real-time intelligence in hospitals.

NVIDIA Inception helps startups during critical stages of product development, prototyping and deployment. Every Inception member receives a custom set of ongoing benefits, such as NVIDIA Deep Learning Institute credits, go-to-market support and hardware technology discounts that give startups fundamental tools to help them grow.

Lu says the technical articles, newsletters and better access to GPUs have helped the company — founded just six months ago — to efficiently build out its AI solution.

Lu previously was a member of the MGH & BWH Center for Clinical Data Science, where he led the development of DeepSPINE, an AI system to automate spinal diagnosis, trained on an NVIDIA DGX-1 system.

Main image shows VBrain-generated 3D tumor rendering (left) and tumor contours (right) for radiation treatment planning.


2D or Not 2D: NVIDIA Researchers Bring Images to Life with AI

Close your left eye as you look at this screen. Now close your right eye and open your left — you’ll notice that your field of vision shifts depending on which eye you’re using. That’s because while each eye captures a flat, two-dimensional image, the images from your two retinas are combined to provide depth and produce a sense of three-dimensionality.

Machine learning models need this same capability so that they can accurately understand image data. NVIDIA researchers have now made this possible by creating a rendering framework called DIB-R — a differentiable interpolation-based renderer — that produces 3D objects from 2D images.

The researchers will present their model this week at the annual Conference on Neural Information Processing Systems (NeurIPS), in Vancouver.

In traditional computer graphics, a pipeline renders a 3D model to a 2D screen. But there’s information to be gained from doing the opposite — a model that could infer a 3D object from a 2D image would be able to perform better object tracking, for example.

NVIDIA researchers wanted to build an architecture that could do this while integrating seamlessly with machine learning techniques. The result, DIB-R, produces high-fidelity rendering by using an encoder-decoder architecture, a type of neural network that transforms input into a feature map or vector that is used to predict specific information such as shape, color, texture and lighting of an image.

It’s especially useful when it comes to fields like robotics. For an autonomous robot to interact safely and efficiently with its environment, it must be able to sense and understand its surroundings. DIB-R could potentially improve those depth perception capabilities.

It takes two days to train the model on a single NVIDIA V100 GPU, whereas it would take several weeks to train without NVIDIA GPUs. At that point, DIB-R can produce a 3D object from a 2D image in less than 100 milliseconds. It does so by altering a polygon sphere — the traditional template that represents a 3D shape. DIB-R alters it to match the real object shape portrayed in the 2D images.
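The template-deformation idea — start from a polygon sphere and predict per-vertex displacements until it matches the shape in the image — can be sketched in NumPy. Here a per-axis least-squares fit stands in for the learned network (an assumption for illustration; DIB-R regresses deformations from image features via its encoder-decoder):

```python
import numpy as np

rng = np.random.default_rng(0)

# Template: points on a unit sphere, standing in for the polygon sphere.
n = 200
v = rng.normal(size=(n, 3))
sphere = v / np.linalg.norm(v, axis=1, keepdims=True)

# Target shape: the same points on an ellipsoid, standing in for the
# object observed in a 2D image.
true_scale = np.array([1.5, 1.0, 0.6])
target = sphere * true_scale

# "Prediction": a per-axis least-squares fit of the deformation,
# standing in for the neural network's regressed vertex offsets.
scale_est = (sphere * target).sum(axis=0) / (sphere ** 2).sum(axis=0)
deformed = sphere * scale_est

mean_err = np.linalg.norm(deformed - target, axis=1).mean()
```

The real system optimizes a far richer deformation (plus color, texture and lighting) through a differentiable renderer, but the geometry of the idea — template in, displaced vertices out — is the same.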

The team tested DIB-R on four 2D images of birds (far left). The first experiment used a picture of a yellow warbler (top left) and produced a 3D object (top two rows).

NVIDIA researchers trained their model on several datasets, including a collection of bird images. After training, DIB-R could take an image of a bird and produce a 3D portrayal with the proper shape and texture of a 3D bird.

“This is essentially the first time ever that you can take just about any 2D image and predict relevant 3D properties,” says Jun Gao, one of a team of researchers who collaborated on DIB-R.

DIB-R can transform 2D images of long-extinct animals like a Tyrannosaurus rex or chubby dodo bird into a lifelike 3D image in under a second.

Built on PyTorch, a machine learning framework, DIB-R is included as part of Kaolin, NVIDIA’s newest 3D deep learning PyTorch library that accelerates 3D deep learning research.

The entire NVIDIA research paper, “Learning to Predict 3D Objects with an Interpolation-Based Renderer,” can be found here. The NVIDIA Research team consists of more than 200 scientists around the globe, focusing on areas including AI, computer vision, self-driving cars, robotics and graphics.


Pod Squad: Descript Uses AI to Make Managing Podcasts Quicker, Easier

You can’t have an AI podcast and not interview someone using AI to make podcasts better.

That’s why we reached out to serial entrepreneur Andrew Mason to talk to him about what he’s doing now. His company, Descript Podcast Studio, uses AI, natural language processing and automatic speech synthesis to make podcast editing easier and more collaborative.

Mason, Descript’s CEO, who is perhaps best known as Groupon’s founder, spoke with AI Podcast host Noah Kravitz about his company and the newest beta service it offers, called Overdub.


Key Points From This Episode

  • Descript works like a collaborative word processor. Users record audio, which Descript converts to text. They can then edit and rearrange text, and the program will change the audio.
  • Overdub, created in collaboration with Descript’s AI research division, eliminates the need to re-record audio. Type in new text, and Overdub creates audio in the user’s voice.
  • Descript 3.0 launched in November, adding new features such as a detector that can identify and remove vocalized pauses like “um” and “uh” as well as silence.
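The core trick in the first bullet — editing audio by editing text — rests on a word-level alignment between transcript and recording: each word carries its start and end time, so deleting words yields a list of audio spans to keep and splice together. A minimal sketch (the data model here is assumed for illustration, not Descript’s actual format):

```python
# Each transcript word is aligned to (start, end) times in seconds.
words = [
    ("welcome", 0.0, 0.4),
    ("um",      0.4, 0.7),
    ("to",      0.7, 0.9),
    ("the",     0.9, 1.0),
    ("show",    1.0, 1.5),
]

def keep_segments(words, deleted_indices):
    """Return merged (start, end) audio spans for the words kept."""
    segments = []
    for i, (_, start, end) in enumerate(words):
        if i in deleted_indices:
            continue
        if segments and abs(segments[-1][1] - start) < 1e-9:
            # Word is contiguous with the previous span: extend it.
            segments[-1] = (segments[-1][0], end)
        else:
            segments.append((start, end))
    return segments

# Delete the filler word "um" (index 1): two spans remain to splice.
spans = keep_segments(words, {1})
```

Deleting “um” leaves the spans (0.0, 0.4) and (0.7, 1.5); an audio engine then crossfades across the cut, which is where Overdub’s prosodic smoothing, quoted below, comes in.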


“We’re trying to use AI to automate the technical heavy lifting components of learning to use editors — as opposed to automating the craft — and we leave space for the user to display and refine their craft” — Andrew Mason [07:10]

“What’s really unique to us is a kind of tonal or prosodic connecting of the dots, where we’ll analyze the audio before and after whatever you’re splicing in with Overdub, and make sure that it sounds continuous in a natural transition” — Andrew Mason [10:30]

You Might Also Like

The Next Hans Zimmer? How AI May Create Music for Video Games, Exercise Routines

Imagine Wolfgang Amadeus Mozart as an algorithm or the next Hans Zimmer as a computer. Pierre Barreau and his startup, Aiva Technologies, are using deep learning to compose music. Their algorithm can create a theme in four minutes flat.

How Deep Learning Can Translate American Sign Language

Rochester Institute of Technology computer engineering major Syed Ahmed, a research assistant at the National Technical Institute for the Deaf, uses AI to translate between American sign language and English. Ahmed trained his algorithm on 1,700 sign language videos.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.


Make Our Podcast Better

Have a few minutes to spare? Fill out this short listener survey. Your answers will help us make a better podcast.


AWS Outposts Station a GPU Garrison in Your Data Center

All the goodness of GPU acceleration on Amazon Web Services can now also run inside your own data center.

AWS Outposts powered by NVIDIA T4 Tensor Core GPUs are generally available starting today. They bring cloud-based Amazon EC2 G4 instances inside your data center to meet user requirements for security and latency in a wide variety of AI and graphics applications.

With this new offering, AI is no longer a research project.

Most companies still keep their data inside their own walls because they see it as their core intellectual property. But for deep learning to transition from research into production, enterprises need the flexibility and ease of development the cloud offers — right beside their data. That’s a big part of what AWS Outposts with T4 GPUs now enables.

With this new offering, enterprises can install a fully managed rack-scale appliance next to the large data lakes stored securely in their data centers.

AI Acceleration Across the Enterprise

To train neural networks, every layer of software needs to be optimized, from NVIDIA drivers to container runtimes and application frameworks. AWS services like SageMaker and Elastic MapReduce, along with many others designed on custom-built Amazon Machine Images, require model development to start with training on large datasets. With the introduction of NVIDIA-powered AWS Outposts, those services can now run securely in enterprise data centers.

The GPUs in Outposts accelerate deep learning as well as high performance computing and other GPU applications. They all can access software in NGC, NVIDIA’s hub for GPU-optimized software, which is stocked with applications, frameworks, libraries and SDKs that include pre-trained models.

For AI inference, the NVIDIA EGX edge-computing platform also runs on AWS Outposts and works with the AWS Elastic Kubernetes Service. Backed by the power of NVIDIA T4 GPUs, these services are capable of processing orders of magnitude more information than CPUs alone. They can quickly derive insights from vast amounts of data streamed in real time from sensors in an Internet of Things deployment, whether it’s in manufacturing, healthcare, financial services, retail or any other industry.

On top of EGX, the NVIDIA Metropolis application framework provides building blocks for vision AI, geared for use in smart cities, retail, logistics and industrial inspection, as well as other AI and IoT use cases, now easily delivered on AWS Outposts.

Alternatively, the NVIDIA Clara application framework is tuned to bring AI to healthcare providers whether it’s for medical imaging, federated learning or AI-assisted data labeling.

The T4 GPU’s Turing architecture uses TensorRT to accelerate the industry’s widest set of AI models. Its Tensor Cores support multi-precision computing that delivers up to 40x more inference performance than CPUs.

Remote Graphics, Locally Hosted

Users of high-end graphics have choices, too. Remote designers, artists and technical professionals who need to access large datasets and models can now get both cloud convenience and GPU performance.

Graphics professionals can benefit from the same NVIDIA Quadro technology that powers most of the world’s professional workstations not only on the public AWS cloud, but on their own internal cloud now with AWS Outposts packing T4 GPUs.

Whether they’re working locally or in the cloud, Quadro users can access the same set of hundreds of graphics-intensive, GPU-accelerated third-party applications.

The Quadro Virtual Workstation AMI, available in AWS Marketplace, includes the same Quadro driver found on physical workstations. It supports hundreds of Quadro-certified applications such as Dassault Systèmes SOLIDWORKS and CATIA; Siemens NX; Autodesk AutoCAD and Maya; ESRI ArcGIS Pro; and ANSYS Fluent, Mechanical and Discovery Live.

Learn more about AWS and NVIDIA offerings and check out our booth 1237 and session talks at AWS re:Invent.

The post AWS Outposts Station a GPU Garrison in Your Data Center appeared first on The Official NVIDIA Blog.

Healthcare Regulators Open the Tap for AI

Approvals for AI-based healthcare products are streaming in from regulators around the globe, with medical imaging leading the way.

It’s just the start of what’s expected to become a steady flow as submissions rise and the technology becomes better understood.

More than 90 medical imaging products using AI are now cleared for clinical use, thanks to approvals from at least one global regulator, according to Signify Research Ltd., a U.K. consulting firm in healthcare technology.

Regulators in Europe and the U.S. are setting the pace, each having issued about 60 approvals to date. Asia is making its mark, with South Korea and Japan recently issuing their first approvals.

Entrepreneurs are at the forefront of the trend to apply AI to healthcare.

At least 17 companies in NVIDIA’s Inception program, which accelerates startups, have received regulatory approvals. They include some of the first companies in Israel, Japan, South Korea and the U.S. to get regulatory clearance for AI-based medical products. Inception members get access to NVIDIA’s experts, technologies and marketing channels.

“Radiology AI is now ready for purchase,” said Sanjay Parekh, a senior market analyst at Signify Research.

The pipeline promises significant growth over the next few years.

“A year or two ago this technology was still in the research and validation phase. Today, many of the 200+ algorithm developers we track have either submitted or are close to submitting for regulatory approval,” said Parekh.

Startups Lead the Way

Trends in clearances for AI-based products will be a hot topic at the gathering this week of the Radiological Society of North America, Dec. 1-6 in Chicago. The latest approvals span products from startups around the globe that will address afflictions of the brain, heart and bones.

In mid-October, Inception partner LPIXEL Inc. won one of the first two approvals for an AI-based product from Japan’s Pharmaceuticals and Medical Devices Agency. LPIXEL’s product, called EIRL aneurysm, uses deep learning to identify suspected aneurysms in brain MRI scans. The startup employs more than 30 NVIDIA GPUs, delivering more accurate results faster than traditional approaches.

In November, Inception partner ImageBiopsy Lab (Vienna) became the first company in Austria to receive 510(k) clearance for an AI product from the U.S. Food and Drug Administration. The Knee Osteoarthritis Labelling Assistant (KOALA) uses deep learning to process within seconds radiological data on knee osteoarthritis, a malady that afflicts 70 million patients worldwide.

In late October, HeartVista (Los Gatos, Calif.) won FDA 510(k) clearance for its One Click MRI acquisition software. The Inception partner’s AI product enables adoption for many patients of non-invasive cardiac MRIs, replacing an existing invasive process.

Regulators in South Korea cleared products from two Inception startups — Lunit and Vuno. They were among the first four companies to get approval to sell AI-based medical products in the country.

In China, a handful of Inception startups are in the pipeline to receive the country’s first class-three approvals needed to let hospitals pay for a product or service. They include companies such as 12Sigma and Shukun that already have class-two clearances.

Healthcare giants are fully participating in the trend, too.

Earlier this month, GE Healthcare won clearance for its Deep Learning Image Reconstruction engine that uses AI to improve reading confidence for head, whole body and cardiovascular images. It’s one of several medical imaging apps on GE’s Edison system, powered by NVIDIA GPUs.

Coming to Grips with Big Data

Zebra Medical Vision, in Israel, is among the most experienced AI startups in dealing with global regulators. European regulators approved more than a half dozen of its products, and the FDA has approved three with two more submissions pending.

AI creates new challenges regulators are still working through. “The best way for regulators to understand the quality of the AI software is to understand the quality of the data, so that’s where we put a lot of effort in our submissions,” said Eyal Toledano, co-founder and CTO at Zebra.

The shift to evaluating data has its pitfalls. “Sometimes regulators talk about data used for training, but that’s a distraction,” said Toledano.

“They may get distracted by looking at the training data; it is sometimes difficult to accept that you can train a model on noisy data in large quantities and still generalize well. I really think they should focus on evaluation and test data,” he said.

In addition, it can be hard to make fair comparisons between new products that use deep learning and legacy products that don’t. Until recently, products only published performance metrics and were allowed to keep their datasets hidden as trade secrets. As a result, companies submitting new AI products that would like to measure themselves against rivals or other AI algorithms cannot compare apples to apples, as is done in public challenges.

Zebra participated in feedback programs the FDA created to get a better understanding of the issues in AI. The company currently focuses on approvals in the U.S. and Europe because their agencies are seen as leaders with robust processes that other countries are likely to follow.

A Tour of Global Regulators

Breaking new ground, the FDA published in June a 20-page proposal for guidelines on AI-based medical products. It opens the door for the first time to products that improve as they learn.

It suggested products “follow pre-specified performance objectives and change control plans, use a validation process … and include real-world monitoring of performance once the device is on the market,” said FDA Commissioner Scott Gottlieb in an April statement.

AI has “the potential to fundamentally transform the delivery of health care … [with] earlier disease detection, more accurate diagnosis, more targeted therapies and significant improvements in personalized medicine,” he added.

For its part, the European Medicines Agency, Europe’s equivalent of the FDA, released in October 2018 a report on its goals through 2025. It includes plans to set up a dedicated AI test lab to gain insight into ways to support data-driven decisions. The agency is holding a November workshop on the report.

China’s National Medical Products Administration also issued in June technical guidelines for AI-based software products. It set up in April a special unit to set standards for approving the products.

Parekh, of Signify, recommends companies use data sets that are as large as possible for AI products and train algorithms for different types of patients around the world. “An algorithm used in China may not be applicable in the U.S. due to different population demographics,” he said.

Overall, automating medical processes with AI is a dual challenge.

“Quality needs to be not only as good as what a human can do, but in many cases it must be much better,” said Toledano, of Zebra. In addition, “to deliver value, you can’t just build an algorithm that detects something; it needs to deliver actionable results and many insights for many stakeholders, such as both general practitioners and specialists,” he added.

You can see six approved AI healthcare products from Inception startups — including CureMetrix, Subtle Medical and others — as well as NVIDIA’s technologies at our booth at the RSNA event.

The post Healthcare Regulators Open the Tap for AI appeared first on The Official NVIDIA Blog.

NVIDIA Clara Federated Learning to Deliver AI to Hospitals While Protecting Patient Data

With over 100 exhibitors at the annual Radiological Society of North America conference using NVIDIA technology to bring AI to radiology, 2019 looks to be a tipping point for AI in healthcare.

Despite AI’s great potential, a key challenge remains: gaining access to the huge volumes of data required to train AI models while protecting patient privacy. Partnering with the industry, we’ve created a solution.

Today at RSNA, we’re introducing NVIDIA Clara Federated Learning, which takes advantage of a distributed, collaborative learning technique that keeps patient data where it belongs — inside the walls of a healthcare provider.

Clara Federated Learning (Clara FL) runs on our recently announced NVIDIA EGX intelligent edge computing platform.

Federated Learning — AI with Privacy

Clara FL is a reference application for distributed, collaborative AI model training that preserves patient privacy. Running on NVIDIA NGC-Ready for Edge servers from global system manufacturers, these distributed client systems can perform deep learning training locally and collaborate to train a more accurate global model.

Here’s how it works: The Clara FL application is packaged into a Helm chart to simplify deployment on Kubernetes infrastructure. The NVIDIA EGX platform securely provisions the federated server and the collaborating clients, delivering everything required to begin a federated learning project, including application containers and the initial AI model.

NVIDIA Clara Federated Learning uses distributed training across multiple hospitals to develop robust AI models without sharing patient data.

Participating hospitals label their own patient data using the NVIDIA Clara AI-Assisted Annotation SDK, which is integrated into medical viewers like 3D Slicer, MITK, Fovia and Philips IntelliSpace Discovery. Using pre-trained models and transfer learning techniques, NVIDIA AI assists radiologists in labeling, reducing the time for complex 3D studies from hours to minutes.

NVIDIA EGX servers at participating hospitals train the global model on their local data. The local training results are shared back to the federated learning server over a secure link. This approach preserves privacy by sharing only partial model weights, never patient records, to build a new global model through federated averaging.

The process repeats until the AI model reaches its desired accuracy. This distributed approach delivers exceptional performance in deep learning while keeping patient data secure and private.
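The federated averaging step described above can be sketched in a few lines. Everything here is a simplified illustration (two hypothetical hospitals, a one-layer "model"); production systems like Clara FL also handle secure transport, many layers and partial weight updates:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg: average each layer's weights, weighted by local dataset size."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(n_layers)
    ]

# Two hypothetical hospitals share weights only, never patient records.
hospital_a = [np.array([1.0, 2.0])]   # trained on 100 local studies
hospital_b = [np.array([3.0, 4.0])]   # trained on 300 local studies
new_global = federated_average([hospital_a, hospital_b], client_sizes=[100, 300])
print(new_global[0])  # [2.5 3.5], pulled toward the larger hospital
```

Each round, the server redistributes the new global weights and the clients train again on local data until accuracy converges.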

US and UK Lead the Way

Healthcare giants around the world — including the American College of Radiology, MGH and BWH Center for Clinical Data Science, and UCLA Health — are pioneering the technology. They aim to develop personalized AI for their doctors, patients and facilities where medical data, applications and devices are on the rise and patient privacy must be preserved.

ACR is piloting NVIDIA Clara FL in its AI-LAB, a national platform for medical imaging. The AI-LAB will allow the ACR’s 38,000 medical imaging members to securely build, share, adapt and validate AI models. Healthcare providers that want access to the AI-LAB can choose a variety of NVIDIA NGC-Ready for Edge systems, including from Dell, Hewlett Packard Enterprise, Lenovo and Supermicro.

UCLA Radiology is also using NVIDIA Clara FL to bring the power of AI to its radiology department. As a top academic medical center, UCLA can validate the effectiveness of Clara FL and extend it in the future across the broader University of California system.

Partners HealthCare in New England also announced a new initiative using NVIDIA Clara FL. Massachusetts General Hospital and Brigham and Women’s Hospital’s Center for Clinical Data Science will spearhead the work, leveraging data assets and clinical expertise of the Partners HealthCare system.

In the U.K., NVIDIA is partnering with King’s College London and Owkin to create a federated learning platform for the National Health Service. The Owkin Connect platform running on NVIDIA Clara enables algorithms to travel from one hospital to another, training on local datasets. It provides each hospital a blockchain-distributed ledger that captures and traces all data used for model training.

The project is initially connecting four of London’s premier teaching hospitals, offering AI services to accelerate work in areas such as cancer, heart failure and neurodegenerative disease, and will expand to at least 12 U.K. hospitals in 2020.

Making Everything Smart in the Hospital 

With the rapid proliferation of sensors, medical centers like Stanford Hospital are working to make every system smart. To make sensors intelligent, devices need a powerful, low-power AI computer.

That’s why we’re announcing NVIDIA Clara AGX, an embedded AI developer kit that can handle image and video processing at high data rates, bringing AI inference and 3D visualization to the point of care.

NVIDIA Clara AGX scales from small, embedded devices to sidecar systems to full-size servers.

Clara AGX is powered by NVIDIA Xavier SoCs, the same processors that control self-driving cars. They consume as little as 10W, making them suitable for embedding inside a medical instrument or running in a small adjacent system.

A perfect showcase of Clara AGX is Hyperfine, the world’s first portable point-of-care MRI system. The revolutionary Hyperfine system will be on display in NVIDIA’s booth at this week’s RSNA event.

Hyperfine’s system is among the first of many medical instruments, surgical suites, patient monitoring devices and smart medical cameras expected to use Clara AGX. We’re witnessing the beginning of an AI-enabled internet of medical things.

Hyperfine’s mobile MRI system uses an NVIDIA GPU and will be on display at NVIDIA’s booth.

The NVIDIA Clara AGX SDK will be available soon through our early access program. It includes reference applications for two popular uses — real-time ultrasound and endoscopy edge computing.


Visit NVIDIA and our many healthcare partners in booth 10939 in the RSNA AI Showcase. We’ll be showing our latest AI-driven medical imaging advancements, including keeping patient data secure with AI at the edge.

Find out from our deep learning experts how to use AI to advance your research and accelerate your clinical workflows. See the full lineup of talks and learn more on our website.


The post NVIDIA Clara Federated Learning to Deliver AI to Hospitals While Protecting Patient Data appeared first on The Official NVIDIA Blog.

Read ‘em and Reap: 6 Success Factors for AI Startups

Now that data is the new oil, AI software startups are sprouting across the tech terrain like pumpjacks in Texas. A whopping $80 billion in venture capital is fueling as many as 12,000 new companies.

Only a few will tap a gusher. Those who do, experts say, will practice six key success factors.

  1. Master your domain
  2. Gather big data fast
  3. See (a little) ahead of the market
  4. Make a better screwdriver
  5. Scale across the clouds
  6. Stay flexible

Some of the biggest wins will come from startups with AI apps that “turn an existing provider on its head by figuring out a new approach for call centers, healthcare or whatever it is,” said Rajeev Madhavan who manages a $300 million fund at Clear Ventures, nurturing nine AI startups.

1. Master Your Domain

Madhavan sold his electronic design automation startup Magma Design in 2012 to Synopsys for $523 million. His first stop on the way to becoming a VC was to take Andrew Ng’s Stanford course in AI.

“For a brief period in Silicon Valley every startup’s pitch would just throw in jargon on AI, but most of them were just doing collaborative filtering,” he said. “The app companies we look for have to be heavy on AI, but success comes down to how good a startup is in its domain space,” he added.

Chris Rowen agrees. The veteran entrepreneur who in 2013 sold his startup Tensilica to Cadence Design for $380 million considers domain expertise the top criteria for an AI software startup’s success.

Rowen’s latest startup, BabbleLabs, uses AI to filter noise from speech in real time. “At the root of it, I’m doing something analogous to what I’ve done in much of my career — work on really hard real-time computing problems that apply to mass markets,” Rowen said.

Overall, “deep learning is still at the stage where people are having challenges understanding which problems can be handled with this technique. The companies that recognize a vertical-market need and deliver a solution for it have a bigger chance of getting early traction. Over time, there will be more broad, horizontal opportunities,” he added.

Jeff Herbst nurtures more than 5,000 AI startups under the NVIDIA Inception program that fuels entrepreneurs with access to its technology and market connections. But the AI tag is just shorthand.

In a way, it’s like a rerun of The Invasion of the DotComs. “We call them AI companies today, but they are all in specialized markets — in the not-so-distant future, every company will be an AI company,” said Herbst, vice president of business development at NVIDIA.

Today’s AI software landscape looks like a barbell to Herbst. Lots of activity by a handful of cloud-computing giants at one end and a bazillion startups at the other.

2. Get Big Data Fast

Collecting enough bits to fill a data lake is perhaps the hardest challenge for an AI startup.

Among NVIDIA’s Inception startups, Zebra Medical Vision uses AI on medical images to make faster, smarter diagnoses. To get the data it needed, it partnered both with Israel’s largest healthcare provider as well as Intermountain Healthcare, which manages 215 clinics and 24 hospitals in the U.S.

“We understood data was the most important asset we needed to secure, so we invested a lot in the first two years of the startup not only in data but also in developing all kinds of algorithms in parallel,” said Eyal Toledano, co-founder and CTO of Zebra. “To find one good clinical solution, you have to go through many candidates.”

Getting access to 20 years of digital data from top drawer healthcare organizations “took a lot of convincing” both from Zebra’s chief executive and Toledano.

“My contribution was showing how security, compliance and anonymity could be done. There was a lot of education and co-development so they would release the data and we could do research that could contribute back to their patient population in return,” he added.

It’s working. To date Zebra has raised $50 million, received FDA approvals on three products with two more pending “and a few other submissions are on the way,” he said.

Toledano also gave kudos to NVIDIA’s Inception program.

“We had many opportunities to examine new technologies before they became widely used. We saw the difference in applying new GPUs to current processes, and looked at inference in the hospital with GPUs to improve the user experience, especially in time-critical applications,” he said.

“We also got some good know-how and ideas to improve our own infrastructure with training and infrastructure libraries to build projects. We tried quite a lot of the NVIDIA technologies and some were really amazing and fruitful, and we adopted a DGX server and decreased our development and training time substantially in many evaluations,” he added.

Six Steps to AI Startup Gold

Success Factor | Call to Action | Startups Using It
Master your domain | Have deep expertise in your target application | BabbleLabs
Gather big data fast | Tap partners, customers to gather data and refine models | Zebra Medical Vision, Scale
See (a little) ahead of the market | Find solutions to customer pain points before rivals see them | Netflix
Make a better screwdriver | Create tools that simplify the work of data scientists | Scale, Dataiku
Scale across the clouds | Support private and multiple public cloud services |
Stay flexible | Follow changing customer pain points to novel solutions | Keyhole Corp.

Another Inception startup, Scale, which provides training and validation data for self-driving cars and other platforms, got on board with Toyota and Lyft. “Working with more people makes your algorithms smarter, and then more people want to work with you — you get into a cycle of success,” said Herbst.

Reflektion, one of Madhavan’s startups, now has a database of 200 million unique shoppers, the third largest retail database after Amazon and Walmart. It started with zero. Getting big took three years and a few great partners.

Rowen’s BabbleLabs applied a little creativity and elbow grease to get a lot of data cheaply and fast. It siphoned speech data from free sources as diverse as YouTube and the Library of Congress. When it needed specialized data, it activated a network of global contractors “quite economically,” he said.

“You can find low-cost, low-quality data sources, then use algorithms to filter and curate the data. Controlling the amount of noise associated with the speech helped simplify training,” he added.
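BabbleLabs hasn't published its curation pipeline; as a hedged sketch of the general idea, one could score each clip with a crude signal-to-noise estimate and keep only those above a threshold (the frame size and the 15 dB cutoff here are arbitrary assumptions):

```python
import numpy as np

def estimate_snr_db(clip, frame=1024):
    """Crude SNR: loudest frame energy (speech) vs. quietest (background)."""
    frames = [clip[i:i + frame] for i in range(0, len(clip) - frame + 1, frame)]
    energies = np.array([np.mean(f ** 2) for f in frames])
    return 10 * np.log10(energies.max() / (energies.min() + 1e-12))

def curate(clips, min_snr_db=15.0):
    """Keep only clips whose estimated SNR clears the threshold."""
    return [c for c in clips if estimate_snr_db(c) >= min_snr_db]

# Synthetic example: a tone over a quiet background vs. pure noise.
rng = np.random.default_rng(0)
t = np.arange(16000) / 16000.0
clean = np.sin(2 * np.pi * 220 * t) * (t > 0.5) + 0.001 * rng.standard_normal(16000)
noisy = 0.5 * rng.standard_normal(16000)
print(len(curate([clean, noisy])))  # 1: only the clean clip survives
```

Real pipelines would use better voice-activity and noise estimators, but the shape is the same: cheap data in, algorithmic filter, curated data out.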

“In AI, access to data no one else has is the big win,” said Herbst. “The world has a lot of open source frameworks and tools, but a lot of the differentiation comes from proprietary access to the data that does the programming,” he added.

When seeking data-rich customers and partners “the fastest way to get in the door is knowing what their pain points are,” said Alen Capalik, founder of

Work in high-frequency trading on Wall Street taught Capalik the value of GPUs. When he came up with an idea for using them to ingest real-time data fast for any application, he sought out Herbst at NVIDIA in 2017.

“He almost immediately wrote me a check for $1.5 million,” Capalik said.

3. See (a Little) Ahead of the Market

Today, his startup is poised for a Series A financing round to fuel its recently released PlasmaENGINE, which already has two customers and over 20 more in the pipeline. “I think we are 12-18 months ahead of the market, which is a great spot to be in,” said Capalik, whose product can process as much data as 100 Spark instances.

That wasn’t the position Capalik found himself in his last time out. His cybersecurity startup — GoSecure, formerly CounterTack — pioneered the idea of end-point threat detection as much as six years before it caught on.

“People told me I was crazy. Palo Alto Networks and FireEye were doing perimeter security, and users thought they’d never install agents again because they slowed systems down. So, we struggled for a while and had to educate the market a lot,” he said.

Education and awareness are the kinds of jobs established corporations tackle. For startups, being visionary is like Steve Jobs unveiling an iPhone — “show them what they didn’t know they wanted,” he said.

“Netflix went after video streaming before there was enough bandwidth or end points — they skated to where the puck was going,” said Herbst.

4. Make a Better Screwdriver

AI holds opportunities for arms dealers, too — the kind who sell the software tools data scientists use to tighten down the screws on their neural networks.

The current Swiss Army knife of AI is the workbench. It’s a software platform for developing and deploying machine-learning models in today’s DevOps IT environment.

Jupyter notebooks could be seen as a sort of two-blade model you get for free as open source. Giants such as AWS, IBM and Microsoft, along with dozens of startups such as Dataiku, are rolling out versions with more forks, corkscrews and toothpicks.

Despite all the players and a fast-moving market, there are still opportunities here, said James Kobielus, a lead analyst for AI and data science at Wikibon. Start as a plug-in for a popular workbench, he suggested.

Startups can write modules to support emerging frameworks and languages, or a mod to help a workbench tap into the AI goodness embedded in the latest smartphones. Alternatively, you can automate streaming operations or render logic automatically into code, the former IBM data-science evangelist advised.

If workbenches aren’t for you, try robotic process automation, another emerging category trying to make AI easier for more people to use. “You can clean up if you can democratize RPA for makers and kids — that’s exciting,” Kobielus said.

There’s a wide-open opportunity for tools that cram neural nets into the kilobytes of memory on devices such as smart speakers, appliances and even thermostats, BabbleLabs’ Rowen said. His company aims to run its speech models on some of the world’s smallest microcontrollers.

“We need compilers that take trained models and do quantization, model compression and optimized model generation to fit into the skinny memory of embedded systems — nothing solves this problem yet,” he said.
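No single compiler yet solves the whole problem Rowen describes, but one ingredient he names, quantization, can be sketched in a deliberately minimal form: affine post-training quantization of a weight tensor down to 8 bits (real toolchains add per-channel scales, calibration data and model compression):

```python
import numpy as np

def quantize_uint8(weights):
    """Affine post-training quantization of a float tensor to 8 bits."""
    lo, hi = float(weights.min()), float(weights.max())
    scale = (hi - lo) / 255.0 or 1.0  # guard against constant tensors
    q = np.round((weights - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    """Map the 8-bit codes back to approximate float weights."""
    return q.astype(np.float32) * scale + lo

w = np.random.default_rng(1).standard_normal((64, 64)).astype(np.float32)
q, scale, lo = quantize_uint8(w)
w_hat = dequantize(q, scale, lo)
print(q.nbytes, w.nbytes)                # 4096 16384: 4x smaller
print(np.abs(w - w_hat).max() <= scale)  # True: error bounded by one step
```

Shrinking FP32 weights to 8-bit codes is what lets a model fit in the kilobytes of memory on a microcontroller, at the cost of a small, bounded reconstruction error.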

5. Expand Across the Clouds

The playing field is very competitive with more startups than ever because it’s easier than ever to start a company, said Herbst, who worked closely with entrepreneurs as a corporate and IP attorney even before he joined NVIDIA 18 years ago.

All you need to get started today is an idea, a laptop, a cup of coffee and a cloud-computing account. “All the infrastructure is a service now,” he said.

But if you get lucky and scale, that one cloud-computing account can become a bottleneck and your biggest cost after payroll.

“That’s a good problem to have, but to hit breakeven and make it easier for customers, you need your software running on any cloud,” said Madhavan.

The need is so striking, he wound up funding a startup to address it, one expert in stateful and stateless workloads that helps companies become cloud-agnostic. “We have been extremely successful with 5G telcos going cloud native and embracing containers,” he said.

6. Stay Flexible as a Yogi

Few startups wind up where they thought they were going. Apple planned to make desktop computers; Amazon aimed to sell books online.

Over time “they pivot one way or another. They go in with a problem to solve, but as they talk to customers the smart ones learn from those interactions how to re-target or tailor themselves,” said Herbst, who gives an example from his pre-AI days.

Keyhole Corp. wanted to provide 3D mapping services, initially for real estate agents and other professionals. Its first product was distributed on CDs.

As a veteran of early search startup AltaVista, “I thought this startup belonged more to a Yahoo! or some other internet company. I realized it was not a professional but a major consumer app,” said Herbst, who was happy to fund them as one of NVIDIA’s first investments outside gaming.

In time, Google agreed with Herbst and acquired the company. Keyhole’s technology became part of the underpinnings of Google Maps and Google Earth.

“They had a nice exit, their people went on to have rock-star careers at Google, and I believe were among the original creators of Pokemon Go,” he said.

The lesson is simple: Follow good directions — like the six success factors for AI software startups — and there’s no telling where you may end up.

The post Read ‘em and Reap: 6 Success Factors for AI Startups appeared first on The Official NVIDIA Blog.

Toronto AI is a social and collaborative hub to unite AI innovators of Toronto and surrounding areas. We explore AI technologies in digital art and music, healthcare, marketing, fintech, vr, robotics and more. Toronto AI was founded by Dave MacDonald and Patrick O'Mara.