How American Express Uses Deep Learning for Better Decision Making

Financial fraud is on the rise. As the number of global transactions increases and digital technology advances, the complexity and frequency of fraudulent schemes are keeping pace.

Security company McAfee estimated in a 2018 report that cybercrime annually costs the global economy some $600 billion, or 0.8 percent of global gross domestic product.

One of the most prevalent — and preventable — types of cybercrime is credit card fraud, which is exacerbated by the growth in online transactions.

That’s why American Express, a global financial services company, is developing deep learning generative and sequential models to prevent fraudulent transactions.

“The most strategically important use case for us is transactional fraud detection,” said Dmitry Efimov, vice president of machine learning research at American Express. “Developing techniques that more accurately identify and decline fraudulent purchase attempts helps us protect our customers and our merchants.”

Cashing into Big Data

The company’s effort spanned several teams that conducted research on using generative adversarial networks, or GANs, to create synthetic data based on sparsely populated segments.

In most financial fraud use cases, machine learning systems are built on historical transactional data. The systems use deep learning models to scan incoming payments in real time, identify patterns associated with fraudulent transactions and then flag anomalies.
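
In outline, that scoring loop is a model producing a fraud probability and a threshold deciding whether to flag the payment. A toy sketch in Python; the feature names, weights and threshold below are illustrative assumptions, not American Express's actual model:

```python
import math

# Illustrative weights for a toy logistic scorer -- production systems
# use deep models trained on historical transaction data.
WEIGHTS = {"amount_zscore": 1.8, "new_merchant": 0.9, "foreign_ip": 1.2}
BIAS = -3.0
THRESHOLD = 0.5  # flag transactions scoring above this probability

def fraud_probability(txn):
    """Score a single incoming transaction in real time."""
    z = BIAS + sum(WEIGHTS[k] * txn.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link

def flag(txn):
    return fraud_probability(txn) > THRESHOLD

print(flag({"amount_zscore": 4.0, "new_merchant": 1, "foreign_ip": 1}))  # True
print(flag({"amount_zscore": 0.1, "new_merchant": 0, "foreign_ip": 0}))  # False
```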

In some instances, like new product launches, GANs can produce additional data to help train and develop more accurate deep learning models.
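
Conceptually, a GAN pairs a generator that fabricates samples with a discriminator that judges them, and the two train against each other until generated samples resemble real ones. A minimal PyTorch sketch of that idea; the network sizes, data and training schedule are invented for illustration and bear no relation to American Express's actual models:

```python
import torch
import torch.nn as nn

N_FEATURES, LATENT = 16, 8  # illustrative sizes

# Generator maps random noise to synthetic transaction-like feature vectors.
G = nn.Sequential(nn.Linear(LATENT, 32), nn.ReLU(), nn.Linear(32, N_FEATURES))
# Discriminator scores how "real" a feature vector looks.
D = nn.Sequential(nn.Linear(N_FEATURES, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

real = torch.randn(64, N_FEATURES)  # stand-in for a sparse real segment

for _ in range(10):
    # Discriminator step: push real samples toward 1, fakes toward 0.
    fake = G(torch.randn(64, LATENT)).detach()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + loss_fn(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to fool the discriminator.
    fake = G(torch.randn(64, LATENT))
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

synthetic = G(torch.randn(100, LATENT))  # extra data for model training
print(synthetic.shape)  # torch.Size([100, 16])
```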

Given its global integrated network with tens of millions of customers and merchants, American Express deals with massive volumes of structured and unstructured data sets.

Using several hundred data features, including the time stamps for transactional data, the American Express teams found that sequential deep learning techniques, such as long short-term memory and temporal convolutional networks, can be adapted for transaction data to produce superior results compared to classical machine learning approaches.
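
A sequential model of this kind consumes a customer's recent transactions in time order and scores the latest one. A hedged PyTorch sketch with invented dimensions (the article mentions several hundred features; the architecture here is illustrative only, not the production model):

```python
import torch
import torch.nn as nn

class TxnLSTM(nn.Module):
    """Toy sequential scorer: a customer's recent transactions in,
    fraud probability for the latest one out. Sizes are illustrative."""
    def __init__(self, n_features=300, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):            # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)        # out: (batch, seq_len, hidden)
        return torch.sigmoid(self.head(out[:, -1]))  # score the last step

model = TxnLSTM()
batch = torch.randn(4, 20, 300)  # 4 customers, 20 transactions each
scores = model(batch)
print(scores.shape)  # torch.Size([4, 1])
```

A temporal convolutional network would plug into the same interface: a sequence of feature vectors in, one score out.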

The results have paid dividends.

“These techniques have a substantial impact on the customer experience, allowing American Express to improve speed of detection and prevent losses by automating the decision-making process,” Efimov said.

Closing the Deal with NVIDIA GPUs 

Due to the huge amount of customer and merchant data it works with, American Express selected NVIDIA DGX-1 systems, which each contain eight NVIDIA V100 Tensor Core GPUs, to build models with both TensorFlow and PyTorch.

Its NVIDIA GPU-powered machine learning techniques are also used to forecast customer default rates and to assign credit limits.

“For our production environment, speed is extremely important with decisions made in a matter of milliseconds, so the best solution to use are NVIDIA GPUs,” said Efimov.

As the systems go into production in the next year, the teams plan on using the NVIDIA TensorRT platform for high-performance deep learning inference to deploy the models in real time, which will help improve American Express’ fraud and credit loss rates.

Efimov will be presenting his team’s work at the GPU Technology Conference in San Jose in March. To learn more about credit risk management use cases from American Express, register for GTC, the premier AI conference for insights, training and direct access to experts on the key topics in computing across industries.

The post How American Express Uses Deep Learning for Better Decision Making appeared first on The Official NVIDIA Blog.

AI Came, AI Saw, AI Conquered: How Vysioneer Improves Precision Radiation Therapy

Of the millions diagnosed with cancer each year, over half receive some form of radiation therapy.

Deep learning is helping radiation oncologists make the process more precise by automatically labeling tumors from medical scans in a process known as contouring.

It’s a delicate balance.

“If oncologists contour too small an area, then radiation doesn’t treat the whole tumor and it could keep growing,” said Jen-Tang Lu, founder and CEO of Vysioneer, a Boston-based startup with an office in Taiwan. “If they contour too much, then radiation can harm the neighboring normal tissues.”

A member of the NVIDIA Inception startup accelerator program, Vysioneer builds AI tools to automate the time-consuming process of tumor contouring. To ensure the efficacy and safety of radiotherapy, radiation oncologists can easily spend hours contouring tumors from medical scans, Lu said.

The company’s first product, VBrain, can identify the three most common types of brain tumors from CT and MRI scans. Trained on NVIDIA V100 Tensor Core GPUs in the cloud and NVIDIA Quadro RTX 8000 GPUs on premises, the tool can speed up the contouring task by more than 6x — from over an hour to less than 10 minutes.

Vysioneer showcased its latest demos in the NVIDIA booth at the annual meeting of the Radiological Society of North America last week in Chicago. It’s one of more than 50 NVIDIA Inception startups that attended the conference.

Targeting Metastatic Brain Tumors

A non-invasive treatment, precision radiation therapy uses a high dosage of X-ray beams to destroy tumors without harming neighboring tissues.

Due to the availability of public datasets, most AI models that identify brain cancer from medical scans focus on gliomas, which are primary tumors — ones that originate in the brain.

VBrain, trained on more than 1,500 proprietary CT and MRI scans, identifies the vastly more common metastatic type of brain tumors, which occur when cancer spreads to the brain from another part of the body. Metastatic brain tumors typically occur in multiple parts of the brain at once, and can be tiny and hard to spot from medical scans.

VBrain integrates seamlessly into radiation oncologists’ existing clinical workflow, processing scans in just seconds using an NVIDIA GPU for inference. The tool could reduce variability among radiation oncologists, Lu says, and can also identify tiny lesions that radiologists or clinicians might miss.
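
One common step downstream of a contouring model is separating the predicted tumor mask into distinct lesions via connected-component analysis. A minimal 2D illustration in pure Python (real tools like VBrain operate on 3D volumes; this is not Vysioneer's code):

```python
from collections import deque

def count_lesions(mask):
    """Count 4-connected components of 1s in a 2D binary tumor mask."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    lesions = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                lesions += 1
                q = deque([(y, x)])  # flood-fill this component
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return lesions

scan = [[0, 1, 1, 0, 0],
        [0, 1, 0, 0, 1],
        [0, 0, 0, 0, 1],
        [1, 0, 0, 0, 0]]
print(count_lesions(scan))  # 3 -- including two small lesions easy to miss
```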

The company has deployed its solution in a clinical trial at National Taiwan University Hospital, running on an on-premises server of NVIDIA GPUs.

In one case at the hospital, a patient had lung cancer that spread to the brain. During diagnosis, the patient’s radiologist identified a single large lesion from the brain scan. But VBrain revealed another two tiny lesions. This additional information led the oncologists to alter the patient’s radiation treatment plan.

Vysioneer is working towards FDA clearance for VBrain and plans to launch contouring AI models for medical images of other parts of the body. The company also plans to make VBrain available on NGC, a container registry that provides startups with streamlined deployment, access to the GPU compute ecosystem and a robust distribution channel.

NVIDIA tests and optimizes healthcare AI applications, like VBrain, to operate with the NVIDIA EGX platform, which enables fleets of devices and edge nodes across multiple physical locations to be managed remotely, easily and securely, meeting hospitals’ needs for data security and real-time intelligence.

NVIDIA Inception helps startups during critical stages of product development, prototyping and deployment. Every Inception member receives a custom set of ongoing benefits, such as NVIDIA Deep Learning Institute credits, go-to-market support and hardware technology discounts, giving startups the fundamental tools to grow.

Lu says the technical articles, newsletters and better access to GPUs have helped the company — founded just six months ago — to efficiently build out its AI solution.

Lu previously was a member of the MGH & BWH Center for Clinical Data Science, where he led the development of DeepSPINE, an AI system to automate spinal diagnosis, trained on an NVIDIA DGX-1 system.

Main image shows VBrain-generated 3D tumor rendering (left) and tumor contours (right) for radiation treatment planning.

2D or Not 2D: NVIDIA Researchers Bring Images to Life with AI

Close your left eye as you look at this screen. Now close your right eye and open your left — you’ll notice that your field of vision shifts depending on which eye you’re using. That’s because each retina captures its own flat, two-dimensional image, and your brain combines the two to provide depth and produce a sense of three-dimensionality.

Machine learning models need this same capability so that they can accurately understand image data. NVIDIA researchers have now made this possible by creating a rendering framework called DIB-R — a differentiable interpolation-based renderer — that produces 3D objects from 2D images.

The researchers will present their model this week at the annual Conference on Neural Information Processing Systems (NeurIPS), in Vancouver.

In traditional computer graphics, a pipeline renders a 3D model to a 2D screen. But there’s information to be gained from doing the opposite — a model that could infer a 3D object from a 2D image would be able to perform better object tracking, for example.

NVIDIA researchers wanted to build an architecture that could do this while integrating seamlessly with machine learning techniques. The result, DIB-R, produces high-fidelity rendering by using an encoder-decoder architecture, a type of neural network that transforms input into a feature map or vector that is used to predict specific information such as shape, color, texture and lighting of an image.
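
As a rough illustration of that encoder-decoder idea, the sketch below encodes an image into a latent vector, then decodes per-vertex properties for a sphere template. Everything here (the class name, layer sizes, the 642-vertex template) is a hypothetical stand-in, not the actual DIB-R implementation:

```python
import torch
import torch.nn as nn

class Sketch3DPredictor(nn.Module):
    """Hypothetical stand-in for an encoder-decoder like DIB-R's: encode
    a 2D image into a latent vector, then decode per-vertex position
    offsets and colors for a sphere template. Sizes are illustrative."""
    def __init__(self, n_vertices=642, latent=128):
        super().__init__()
        self.n_vertices = n_vertices
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, latent))
        self.shape_head = nn.Linear(latent, n_vertices * 3)  # xyz offsets
        self.color_head = nn.Linear(latent, n_vertices * 3)  # rgb values

    def forward(self, img):
        z = self.encoder(img)  # image -> feature vector
        offsets = self.shape_head(z).view(-1, self.n_vertices, 3)
        colors = self.color_head(z).view(-1, self.n_vertices, 3)
        return offsets, colors

# Deforming the sphere template by the predicted offsets yields the 3D shape.
offsets, colors = Sketch3DPredictor()(torch.randn(1, 3, 64, 64))
print(offsets.shape)  # torch.Size([1, 642, 3])
```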

It’s especially useful when it comes to fields like robotics. For an autonomous robot to interact safely and efficiently with its environment, it must be able to sense and understand its surroundings. DIB-R could potentially improve those depth perception capabilities.

It takes two days to train the model on a single NVIDIA V100 GPU, whereas it would take several weeks without NVIDIA GPUs. Once trained, DIB-R can produce a 3D object from a 2D image in less than 100 milliseconds. It does so by deforming a polygon sphere, the traditional template for representing a 3D shape, until it matches the real object portrayed in the 2D image.

The team tested DIB-R on four 2D images of birds (far left). The first experiment used a picture of a yellow warbler (top left) and produced a 3D object (top two rows).

NVIDIA researchers trained their model on several datasets, including a collection of bird images. After training, DIB-R could take an image of a bird and produce a 3D portrayal with the proper shape and texture of a 3D bird.

“This is essentially the first time ever that you can take just about any 2D image and predict relevant 3D properties,” says Jun Gao, one of a team of researchers who collaborated on DIB-R.

DIB-R can transform 2D images of long-extinct animals, like a Tyrannosaurus rex or chubby Dodo bird, into a lifelike 3D model in under a second.

Built on PyTorch, a machine learning framework, DIB-R is included as part of Kaolin, NVIDIA’s newest 3D deep learning PyTorch library that accelerates 3D deep learning research.

The entire NVIDIA research paper, “Learning to Predict 3D Objects with an Interpolation-Based Renderer,” can be found here. The NVIDIA Research team consists of more than 200 scientists around the globe, focusing on areas including AI, computer vision, self-driving cars, robotics and graphics.

Pod Squad: Descript Uses AI to Make Managing Podcasts Quicker, Easier

You can’t have an AI podcast and not interview someone using AI to make podcasts better.

That’s why we reached out to serial entrepreneur Andrew Mason to talk to him about what he’s doing now. His company, Descript Podcast Studio, uses AI, natural language processing and automatic speech synthesis to make podcast editing easier and more collaborative.

Mason, Descript’s CEO and perhaps best known as Groupon’s founder, spoke with AI Podcast host Noah Kravitz about his company and the newest beta service it offers, called Overdub.

 

Key Points From This Episode

  • Descript works like a collaborative word processor. Users record audio, which Descript converts to text. They can then edit and rearrange text, and the program will change the audio.
  • Overdub, created in collaboration with Descript’s AI research division, eliminates the need to re-record audio. Type in new text, and Overdub creates audio in the user’s voice.
  • Descript 3.0 launched in November, adding new features such as a detector that can identify and remove vocalized pauses like “um” and “uh” as well as silence.
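
The word-processor metaphor works because each transcript word carries a timestamp span pointing back into the audio, so an edit in the text implies cuts in the recording. A minimal sketch of that mapping; the data format and function below are hypothetical, not Descript's API:

```python
# Each transcript word keeps its (start, end) span in the original audio.
transcript = [
    {"word": "thanks",    "start": 0.0, "end": 0.4},
    {"word": "um",        "start": 0.4, "end": 0.7},  # vocalized pause
    {"word": "for",       "start": 0.7, "end": 0.9},
    {"word": "listening", "start": 0.9, "end": 1.5},
]

def render_edit(words, keep):
    """Turn an edited word list back into audio cut points:
    deleting a word in the text deletes its span from the audio."""
    return [(w["start"], w["end"]) for w in words if w["word"] in keep]

# Removing the filler "um" in the text editor...
segments = render_edit(transcript, keep={"thanks", "for", "listening"})
print(segments)  # [(0.0, 0.4), (0.7, 0.9), (0.9, 1.5)]
```

An audio engine would then concatenate just those spans, yielding the recording without the "um."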

Tweetables

“We’re trying to use AI to automate the technical heavy lifting components of learning to use editors — as opposed to automating the craft — and we leave space for the user to display and refine their craft” — Andrew Mason [07:10]

“What’s really unique to us is a kind of tonal or prosodic connecting of the dots, where we’ll analyze the audio before and after whatever you’re splicing in with Overdub, and make sure that it sounds continuous in a natural transition” — Andrew Mason [10:30]

You Might Also Like

The Next Hans Zimmer? How AI May Create Music for Video Games, Exercise Routines

Imagine Wolfgang Amadeus Mozart as an algorithm or the next Hans Zimmer as a computer. Pierre Barreau and his startup, Aiva Technologies, are using deep learning to compose music. Their algorithm can create a theme in four minutes flat.

How Deep Learning Can Translate American Sign Language

Rochester Institute of Technology computer engineering major Syed Ahmed, a research assistant at the National Technical Institute for the Deaf, uses AI to translate between American sign language and English. Ahmed trained his algorithm on 1,700 sign language videos.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

  

Make Our Podcast Better

Have a few minutes to spare? Fill out this short listener survey. Your answers will help us make a better podcast.

AWS Outposts Station a GPU Garrison in Your Data Center

All the goodness of GPU acceleration on Amazon Web Services can now also run inside your own data center.

AWS Outposts powered by NVIDIA T4 Tensor Core GPUs are generally available starting today. They bring cloud-based Amazon EC2 G4 instances inside your data center to meet user requirements for security and latency in a wide variety of AI and graphics applications.

With this new offering, AI is no longer a research project.

Most companies still keep their data inside their own walls because they see it as their core intellectual property. But for deep learning to transition from research into production, enterprises need the flexibility and ease of development the cloud offers — right beside their data. That’s a big part of what AWS Outposts with T4 GPUs now enables.

With this new offering, enterprises can install a fully managed rack-scale appliance next to the large data lakes stored securely in their data centers.

AI Acceleration Across the Enterprise

To train neural networks, every layer of software needs to be optimized, from NVIDIA drivers to container runtimes and application frameworks. AWS services like Amazon SageMaker and Elastic MapReduce, along with many others built on custom Amazon Machine Images, let model development start with training on large datasets. With the introduction of NVIDIA-powered AWS Outposts, those services can now run securely in enterprise data centers.

The GPUs in Outposts accelerate deep learning as well as high performance computing and other GPU applications. They all can access software in NGC, NVIDIA’s hub for GPU-accelerated software optimization, which is stocked with applications, frameworks, libraries and SDKs that include pre-trained models.

For AI inference, the NVIDIA EGX edge-computing platform also runs on AWS Outposts and works with the Amazon Elastic Kubernetes Service. Backed by the power of NVIDIA T4 GPUs, these services can process orders of magnitude more information than CPUs alone. They can quickly derive insights from vast amounts of data streamed in real time from sensors in an Internet of Things deployment, whether it’s in manufacturing, healthcare, financial services, retail or any other industry.

On top of EGX, the NVIDIA Metropolis application framework provides building blocks for vision AI, geared for use in smart cities, retail, logistics and industrial inspection, as well as other AI and IoT use cases, now easily delivered on AWS Outposts.

Alternatively, the NVIDIA Clara application framework is tuned to bring AI to healthcare providers whether it’s for medical imaging, federated learning or AI-assisted data labeling.

The T4 GPU’s Turing architecture, paired with TensorRT, accelerates the industry’s widest set of AI models. Its Tensor Cores support multi-precision computing that delivers up to 40x more inference performance than CPUs.
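
Multi-precision inference rests on quantization: representing FP32 weights as INT8 codes plus a scale factor, trading a little accuracy for much higher throughput. A toy illustration of symmetric linear quantization in plain Python (not the TensorRT API):

```python
def quantize_int8(values):
    """Symmetric linear quantization of floats to int8 codes."""
    scale = max(abs(v) for v in values) / 127.0  # map the largest magnitude to 127
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from int8 codes."""
    return [v * scale for v in q]

weights = [0.52, -1.3, 0.07, 0.9]
q, scale = quantize_int8(weights)
print(q)  # [51, -127, 7, 88]
print([round(v, 3) for v in dequantize(q, scale)])  # close to the originals
```

Hardware like the T4's Tensor Cores can then execute the arithmetic on the compact INT8 codes instead of full FP32 values.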

Remote Graphics, Locally Hosted

Users of high-end graphics have choices, too. Remote designers, artists and technical professionals who need to access large datasets and models can now get both cloud convenience and GPU performance.

Graphics professionals can benefit from the same NVIDIA Quadro technology that powers most of the world’s professional workstations not only on the public AWS cloud, but on their own internal cloud now with AWS Outposts packing T4 GPUs.

Whether they’re working locally or in the cloud, Quadro users can access the same set of hundreds of graphics-intensive, GPU-accelerated third-party applications.

The Quadro Virtual Workstation AMI, available in AWS Marketplace, includes the same Quadro driver found on physical workstations. It supports hundreds of Quadro-certified applications such as Dassault Systèmes SOLIDWORKS and CATIA; Siemens NX; Autodesk AutoCAD and Maya; ESRI ArcGIS Pro; and ANSYS Fluent, Mechanical and Discovery Live.

Learn more about AWS and NVIDIA offerings and check out our booth 1237 and session talks at AWS re:Invent.

Healthcare Regulators Open the Tap for AI

Approvals for AI-based healthcare products are streaming in from regulators around the globe, with medical imaging leading the way.

It’s just the start of what’s expected to become a steady flow as submissions rise and the technology becomes better understood.

More than 90 medical imaging products using AI are now cleared for clinical use, thanks to approvals from at least one global regulator, according to Signify Research Ltd., a U.K. consulting firm in healthcare technology.

Regulators in Europe and the U.S. are setting the pace. Each has issued about 60 approvals to date. Asia is making its mark, with South Korea and Japan recently issuing their first approvals.

Entrepreneurs are at the forefront of the trend to apply AI to healthcare.

At least 17 companies in NVIDIA’s Inception program, which accelerates startups, have received regulatory approvals. They include some of the first companies in Israel, Japan, South Korea and the U.S. to get regulatory clearance for AI-based medical products. Inception members get access to NVIDIA’s experts, technologies and marketing channels.

“Radiology AI is now ready for purchase,” said Sanjay Parekh, a senior market analyst at Signify Research.

The pipeline promises significant growth over the next few years.

“A year or two ago this technology was still in the research and validation phase. Today, many of the 200+ algorithm developers we track have either submitted or are close to submitting for regulatory approval,” said Parekh.

Startups Lead the Way

Trends in clearances for AI-based products will be a hot topic at the gathering this week of the Radiological Society of North America, Dec. 1-6 in Chicago. The latest approvals span products from startups around the globe that will address afflictions of the brain, heart and bones.

In mid-October, Inception partner LPIXEL Inc. won one of the first two approvals for an AI-based product from the Pharmaceuticals and Medical Devices Agency in Japan. LPIXEL’s product, called EIRL aneurysm, uses deep learning to identify suspected aneurysms using a brain MRI. The startup employs more than 30 NVIDIA GPUs, delivering more accurate results faster than traditional approaches.

In November, Inception partner ImageBiopsy Lab (Vienna) became the first company in Austria to receive 510(k) clearance for an AI product from the U.S. Food and Drug Administration. The Knee Osteoarthritis Labelling Assistant (KOALA) uses deep learning to process within seconds radiological data on knee osteoarthritis, a malady that afflicts 70 million patients worldwide.

In late October, HeartVista (Los Gatos, Calif.) won FDA 510(k) clearance for its One Click MRI acquisition software. The Inception partner’s AI product brings non-invasive cardiac MRI to many patients, replacing an existing invasive process.

Regulators in South Korea cleared products from two Inception startups — Lunit and Vuno. They were among the first four companies to get approval to sell AI-based medical products in the country.

In China, a handful of Inception startups are in the pipeline to receive the country’s first class-three approvals needed to let hospitals pay for a product or service. They include companies such as 12Sigma and Shukun that already have class-two clearances.

Healthcare giants are fully participating in the trend, too.

Earlier this month, GE Healthcare won clearance for its Deep Learning Image Reconstruction engine, which uses AI to improve reading confidence for head, whole body and cardiovascular images. It’s one of several medical imaging apps on GE’s Edison system, powered by NVIDIA GPUs.

Coming to Grips with Big Data

Zebra Medical Vision, in Israel, is among the most experienced AI startups in dealing with global regulators. European regulators approved more than a half dozen of its products, and the FDA has approved three with two more submissions pending.

AI creates new challenges regulators are still working through. “The best way for regulators to understand the quality of the AI software is to understand the quality of the data, so that’s where we put a lot of effort in our submissions,” said Eyal Toledano, co-founder and CTO at Zebra.

The shift to evaluating data has its pitfalls. “Sometimes regulators talk about data used for training, but that’s a distraction,” said Toledano.

“They may get distracted by looking at the training data; sometimes it is difficult to accept that you can train your model on noisy data in large quantities and still generalize well. I really think they should focus on evaluation and test data,” he said.

In addition, it can be hard to make fair comparisons between new products that use deep learning and legacy products that don’t. Until recently, products published only performance metrics and were allowed to keep their datasets hidden as trade secrets. As a result, companies submitting new AI products that would like to measure themselves against legacy products or other AI algorithms can’t compare apples to apples the way public challenges allow.

Zebra participated in feedback programs the FDA created to get a better understanding of the issues in AI. The company currently focuses on approvals in the U.S. and Europe because their agencies are seen as leaders with robust processes that other countries are likely to follow.

A Tour of Global Regulators

Breaking new ground, the FDA published in June a 20-page proposal for guidelines on AI-based medical products. It opens the door for the first time to products that improve as they learn.

It suggested products “follow pre-specified performance objectives and change control plans, use a validation process … and include real-world monitoring of performance once the device is on the market,” said FDA Commissioner Scott Gottlieb in an April statement.

AI has “the potential to fundamentally transform the delivery of health care … [with] earlier disease detection, more accurate diagnosis, more targeted therapies and significant improvements in personalized medicine,” he added.

For its part, the European Medicines Agency, Europe’s equivalent of the FDA, released in October 2018 a report on its goals through 2025. It includes plans to set up a dedicated AI test lab to gain insight into ways to support data-driven decisions. The agency is holding a November workshop on the report.

China’s National Medical Products Administration also issued in June technical guidelines for AI-based software products. It set up in April a special unit to set standards for approving the products.

Parekh, of Signify, recommends companies use data sets that are as large as possible for AI products and train algorithms for different types of patients around the world. “An algorithm used in China may not be applicable in the U.S. due to different population demographics,” he said.

Overall, automating medical processes with AI is a dual challenge.

“Quality needs to be not only as good as what a human can do, but in many cases it must be much better,” said Toledano, of Zebra. In addition, “to deliver value, you can’t just build an algorithm that detects something; it needs to deliver actionable results and insights for many stakeholders, such as both general practitioners and specialists,” he added.

You can see six approved AI healthcare products from Inception startups — including CureMetrix, Subtle Medical and Viz.ai — as well as NVIDIA’s technologies at our booth at the RSNA event.

NVIDIA Clara Federated Learning to Deliver AI to Hospitals While Protecting Patient Data

With over 100 exhibitors at the annual Radiological Society of North America conference using NVIDIA technology to bring AI to radiology, 2019 looks to be a tipping point for AI in healthcare.

Despite AI’s great potential, a key challenge remains: gaining access to the huge volumes of data required to train AI models while protecting patient privacy. Partnering with the industry, we’ve created a solution.

Today at RSNA, we’re introducing NVIDIA Clara Federated Learning, which takes advantage of a distributed, collaborative learning technique that keeps patient data where it belongs — inside the walls of a healthcare provider.

Clara Federated Learning (Clara FL) runs on our recently announced NVIDIA EGX intelligent edge computing platform.

Federated Learning — AI with Privacy

Clara FL is a reference application for distributed, collaborative AI model training that preserves patient privacy. Running on NVIDIA NGC-Ready for Edge servers from global system manufacturers, these distributed client systems can perform deep learning training locally and collaborate to train a more accurate global model.

Here’s how it works: The Clara FL application is packaged into a Helm chart to simplify deployment on Kubernetes infrastructure. The NVIDIA EGX platform securely provisions the federated server and the collaborating clients, delivering everything required to begin a federated learning project, including application containers and the initial AI model.

NVIDIA Clara Federated Learning uses distributed training across multiple hospitals to develop robust AI models without sharing patient data.

Participating hospitals label their own patient data using the NVIDIA Clara AI-Assisted Annotation SDK, integrated into medical viewers like 3D Slicer, MITK, Fovia and Philips IntelliSpace Discovery. Using pre-trained models and transfer learning techniques, NVIDIA AI assists radiologists in labeling, reducing the time for complex 3D studies from hours to minutes.

NVIDIA EGX servers at participating hospitals train the global model on their local data. The local training results are shared back to the federated learning server over a secure link. This approach preserves privacy by only sharing partial model weights and no patient records in order to build a new global model through federated averaging.
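
Federated averaging itself is conceptually simple: each site trains locally and submits weight updates, and the server averages them per layer; no patient records ever leave a site. A stripped-down sketch (the data layout is hypothetical, not Clara FL's actual format):

```python
def federated_average(client_weights):
    """Average per-layer weights submitted by multiple hospitals.
    Each client sends {layer_name: [floats]}; patient data stays local."""
    n = len(client_weights)
    return {
        layer: [sum(cw[layer][i] for cw in client_weights) / n
                for i in range(len(client_weights[0][layer]))]
        for layer in client_weights[0]
    }

# Two hospitals' locally trained weights for one round:
hospital_a = {"conv1": [0.25, 0.5], "fc": [1.0]}
hospital_b = {"conv1": [0.75, 0.0], "fc": [3.0]}
print(federated_average([hospital_a, hospital_b]))
# {'conv1': [0.5, 0.25], 'fc': [2.0]}
```

The averaged weights become the new global model, which is sent back to every site for the next round of local training.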

The process repeats until the AI model reaches its desired accuracy. This distributed approach delivers exceptional performance in deep learning while keeping patient data secure and private.

US and UK Lead the Way

Healthcare giants around the world — including the American College of Radiology, MGH and BWH Center for Clinical Data Science, and UCLA Health — are pioneering the technology. They aim to develop personalized AI for their doctors, patients and facilities where medical data, applications and devices are on the rise and patient privacy must be preserved.

ACR is piloting NVIDIA Clara FL in its AI-LAB, a national platform for medical imaging. The AI-LAB will allow the ACR’s 38,000 medical imaging members to securely build, share, adapt and validate AI models. Healthcare providers that want access to the AI-LAB can choose a variety of NVIDIA NGC-Ready for Edge systems, including from Dell, Hewlett Packard Enterprise, Lenovo and Supermicro.

UCLA Radiology is also using NVIDIA Clara FL to bring the power of AI to its radiology department. As a top academic medical center, UCLA can validate the effectiveness of Clara FL and extend it in the future across the broader University of California system.

Partners HealthCare in New England also announced a new initiative using NVIDIA Clara FL. Massachusetts General Hospital and Brigham and Women’s Hospital’s Center for Clinical Data Science will spearhead the work, leveraging data assets and clinical expertise of the Partners HealthCare system.

In the U.K., NVIDIA is partnering with King’s College London and Owkin to create a federated learning platform for the National Health Service. The Owkin Connect platform running on NVIDIA Clara enables algorithms to travel from one hospital to another, training on local datasets. It provides each hospital a blockchain-distributed ledger that captures and traces all data used for model training.

The project is initially connecting four of London’s premier teaching hospitals, offering AI services to accelerate work in areas such as cancer, heart failure and neurodegenerative disease, and will expand to at least 12 U.K. hospitals in 2020.

Making Everything Smart in the Hospital 

With the rapid proliferation of sensors, medical centers like Stanford Hospital are working to make every system smart. To make sensors intelligent, devices need a powerful, low-power AI computer.

That’s why we’re announcing NVIDIA Clara AGX, an embedded AI developer kit that can handle image and video processing at high data rates, bringing AI inference and 3D visualization to the point of care.

NVIDIA Clara AGX scales from small, embedded devices to sidecar systems to full-size servers.

Clara AGX is powered by NVIDIA Xavier SoCs, the same processors that control self-driving cars. They consume as little as 10W, making them suitable for embedding inside a medical instrument or running in a small adjacent system.

A perfect showcase of Clara AGX is Hyperfine, the world’s first portable point-of-care MRI system. The revolutionary Hyperfine system will be on display in NVIDIA’s booth at this week’s RSNA event.

Hyperfine’s system is among the first of many medical instruments, surgical suites, patient monitoring devices and smart medical cameras expected to use Clara AGX. We’re witnessing the beginning of an AI-enabled internet of medical things.

Hyperfine’s mobile MRI system uses an NVIDIA GPU and will be on display at NVIDIA’s booth.

The NVIDIA Clara AGX SDK will be available soon through our early access program. It includes reference applications for two popular uses — real-time ultrasound and endoscopy edge computing.

NVIDIA at RSNA 2019

Visit NVIDIA and our many healthcare partners in booth 10939 in the RSNA AI Showcase. We’ll be showing our latest AI-driven medical imaging advancements, including keeping patient data secure with AI at the edge.

Find out from our deep learning experts how to use AI to advance your research and accelerate your clinical workflows. See the full lineup of talks and learn more on our website.

 

The post NVIDIA Clara Federated Learning to Deliver AI to Hospitals While Protecting Patient Data appeared first on The Official NVIDIA Blog.

Read ‘em and Reap: 6 Success Factors for AI Startups

Now that data is the new oil, AI software startups are sprouting across the tech terrain like pumpjacks in Texas. A whopping $80 billion in venture capital is fueling as many as 12,000 new companies.

Only a few will tap a gusher. Those who do, experts say, will practice six key success factors.

  1. Master your domain
  2. Gather big data fast
  3. See (a little) ahead of the market
  4. Make a better screwdriver
  5. Scale across the clouds
  6. Stay flexible

Some of the biggest wins will come from startups with AI apps that “turn an existing provider on its head by figuring out a new approach for call centers, healthcare or whatever it is,” said Rajeev Madhavan, who manages a $300 million fund at Clear Ventures, nurturing nine AI startups.

1. Master Your Domain

Madhavan sold his electronic design automation startup Magma Design in 2012 to Synopsys for $523 million. His first stop on the way to becoming a VC was to take Andrew Ng’s Stanford course in AI.

“For a brief period in Silicon Valley every startup’s pitch would just throw in jargon on AI, but most of them were just doing collaborative filtering,” he said. “The app companies we look for have to be heavy on AI, but success comes down to how good a startup is in its domain space,” he added.

Chris Rowen agrees. The veteran entrepreneur, who in 2013 sold his startup Tensilica to Cadence Design for $380 million, considers domain expertise the top criterion for an AI software startup’s success.

Rowen’s latest startup, BabbleLabs, uses AI to filter noise from speech in real time. “At the root of it, I’m doing something analogous to what I’ve done in much of my career — work on really hard real-time computing problems that apply to mass markets,” Rowen said.

Overall, “deep learning is still at the stage where people are having challenges understanding which problems can be handled with this technique. The companies that recognize a vertical-market need and deliver a solution for it have a bigger chance of getting early traction. Over time, there will be more broad, horizontal opportunities,” he added.

Jeff Herbst nurtures more than 5,000 AI startups under the NVIDIA Inception program that fuels entrepreneurs with access to its technology and market connections. But the AI tag is just shorthand.

In a way, it’s like a rerun of The Invasion of the DotComs. “We call them AI companies today, but they are all in specialized markets — in the not-so-distant future, every company will be an AI company,” said Herbst, vice president of business development at NVIDIA.

Today’s AI software landscape looks like a barbell to Herbst. Lots of activity by a handful of cloud-computing giants at one end and a bazillion startups at the other.

2. Gather Big Data Fast

Collecting enough bits to fill a data lake is perhaps the hardest challenge for an AI startup.

Among NVIDIA’s Inception startups, Zebra Medical Vision uses AI on medical images to make faster, smarter diagnoses. To get the data it needed, it partnered both with Israel’s largest healthcare provider and with Intermountain Healthcare, which manages 215 clinics and 24 hospitals in the U.S.

“We understood data was the most important asset we needed to secure, so we invested a lot in the first two years of the startup not only in data but also in developing all kinds of algorithms in parallel,” said Eyal Toledano, co-founder and CTO of Zebra. “To find one good clinical solution, you have to go through many candidates.”

Getting access to 20 years of digital data from top-drawer healthcare organizations “took a lot of convincing” from both Zebra’s chief executive and Toledano.

“My contribution was showing how security, compliance and anonymity could be done. There was a lot of education and co-development so they would release the data and we could do research that could contribute back to their patient population in return,” he added.

It’s working. To date Zebra has raised $50 million, received FDA approvals on three products with two more pending “and a few other submissions are on the way,” he said.

Toledano also gave kudos to NVIDIA’s Inception program.

“We had many opportunities to examine new technologies before they became widely used. We saw the difference in applying new GPUs to current processes, and looked at inference in the hospital with GPUs to improve the user experience, especially in time-critical applications,” he said.

“We also got some good know-how and ideas to improve our own infrastructure with training and infrastructure libraries to build projects. We tried quite a lot of the NVIDIA technologies and some were really amazing and fruitful, and we adopted a DGX server and decreased our development and training time substantially in many evaluations,” he added.

Six Steps to AI Startup Gold

  • Master your domain — Have deep expertise in your target application (BabbleLabs)
  • Gather big data fast — Tap partners and customers to gather data and refine models (Zebra Medical Vision, Scale)
  • See (a little) ahead of the market — Find solutions to customer pain points before rivals see them (FASTDATA.io, Netflix)
  • Make a better screwdriver — Create tools that simplify the work of data scientists (Scale, Dataiku)
  • Scale across the clouds — Support private and multiple public cloud services (Robin.io)
  • Stay flexible — Follow changing customer pain points to novel solutions (Keyhole Corp.)

Another Inception startup, Scale, which provides training and validation data for self-driving cars and other platforms, got on board with Toyota and Lyft. “Working with more people makes your algorithms smarter, and then more people want to work with you — you get into a cycle of success,” said Herbst.

Reflektion, one of Madhavan’s startups, now has a database of 200 million unique shoppers, the third largest retail database after Amazon and Walmart. It started with zero. Getting big took three years and a few great partners.

Rowen’s BabbleLabs applied a little creativity and elbow grease to get a lot of data cheaply and fast. It siphoned speech data from free sources as diverse as YouTube and the Library of Congress. When it needed specialized data, it activated a network of global contractors “quite economically,” he said.

“You can find low-cost, low-quality data sources, then use algorithms to filter and curate the data. Controlling the amount of noise associated with the speech helped simplify training,” he added.
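BabbleLabs hasn’t published its curation pipeline, but the “filter and curate” step Rowen describes can be approximated with something as simple as an energy-based signal-to-noise screen: estimate each clip’s SNR from its loudest and quietest frames, and discard clips below a threshold. The estimator, threshold and synthetic clips below are all hypothetical:

```python
import numpy as np

def estimate_snr_db(clip, frame=512):
    """Crude SNR estimate: ratio of the loudest frames (speech)
    to the quietest frames (noise floor), in decibels."""
    n = len(clip) // frame * frame
    energy = np.mean(clip[:n].reshape(-1, frame) ** 2, axis=1)
    noise_floor = np.percentile(energy, 10) + 1e-12
    speech = np.percentile(energy, 90) + 1e-12
    return 10 * np.log10(speech / noise_floor)

def curate(clips, min_snr_db=15.0):
    """Keep only clips whose estimated SNR clears the threshold."""
    return [c for c in clips if estimate_snr_db(c) >= min_snr_db]

# Toy check: a tone-plus-silence "speech" clip vs. pure white noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 16000)
speechy = np.concatenate([np.sin(2 * np.pi * 220 * t), np.zeros(16000)]) \
          + rng.normal(scale=0.01, size=32000)
noisy = rng.normal(scale=0.5, size=32000)
kept = curate([speechy, noisy])   # only the speech-like clip survives
```

Real pipelines would layer on voice-activity detection and transcription checks, but the principle — cheap data in, algorithmic filters out the junk — is the same.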

“In AI, access to data no one else has is the big win,” said Herbst. “The world has a lot of open source frameworks and tools, but a lot of the differentiation comes from proprietary access to the data that does the programming,” he added.

When seeking data-rich customers and partners “the fastest way to get in the door is knowing what their pain points are,” said Alen Capalik, founder of FASTDATA.io.

Work in high-frequency trading on Wall Street taught Capalik the value of GPUs. When he came up with an idea for using them to ingest real-time data fast for any application, he sought out Herbst at NVIDIA in 2017.

“He almost immediately wrote me a check for $1.5 million,” Capalik said.

3. See (a Little) Ahead of the Market

Today, FASTDATA.io is poised for a Series A financing round to fuel its recently released PlasmaENGINE, which already has two customers and over 20 more in the pipeline. “I think we are 12-18 months ahead of the market, which is a great spot to be in,” said Capalik, whose product can process as much data as 100 Spark instances.

That wasn’t the position Capalik found himself in his last time out. His cybersecurity startup — GoSecure, formerly CounterTack — pioneered the idea of end-point threat detection as much as six years before it caught on.

“People told me I was crazy. Palo Alto Networks and FireEye were doing perimeter security, and users thought they’d never install agents again because they slowed systems down. So, we struggled for a while and had to educate the market a lot,” he said.

Education and awareness are the kinds of jobs established corporations tackle. For startups, being visionary is like Steve Jobs unveiling an iPhone — “show them what they didn’t know they wanted,” he said.

“Netflix went after video streaming before there was enough bandwidth or end points — they skated to where the puck was going,” said Herbst.

4. Make a Better Screwdriver

AI holds opportunities for arms dealers, too — the kind who sell the software tools data scientists use to tighten down the screws on their neural networks.

The current Swiss Army knife of AI is the workbench. It’s a software platform for developing and deploying machine-learning models in today’s DevOps IT environment.

Jupyter notebooks could be seen as a sort of two-blade model you get for free as open source. Giants such as AWS, IBM and Microsoft and dozens of startups such as H20.ai and Dataiku are rolling out versions with more forks, corkscrews and toothpicks.

Despite all the players and a fast-moving market, there are still opportunities here, said James Kobielus, a lead analyst for AI and data science at Wikibon. Start as a plug-in for a popular workbench, he suggested.

Startups can write modules to support emerging frameworks and languages, or a mod to help a workbench tap into the AI goodness embedded in the latest smartphones. Alternatively, you can automate streaming operations or render logic automatically into code, the former IBM data-science evangelist advised.

If workbenches aren’t for you, try robotic process automation, another emerging category trying to make AI easier for more people to use. “You can clean up if you can democratize RPA for makers and kids — that’s exciting,” Kobielus said.

There’s a wide-open opportunity for tools that cram neural nets into the kilobytes of memory on devices such as smart speakers, appliances and even thermostats, BabbleLabs’ Rowen said. His company aims to run its speech models on some of the world’s smallest microcontrollers.

“We need compilers that take trained models and do quantization, model compression and optimized model generation to fit into the skinny memory of embedded systems — nothing solves this problem yet,” he said.
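The quantization step Rowen mentions can be sketched in a few lines. Below is a hedged illustration of symmetric post-training int8 quantization — one float scale per tensor, 4x smaller weights — not any particular compiler’s method:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric post-training quantization: map float weights to int8
    plus a single float scale factor per tensor."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for accuracy checks."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(2).normal(size=(64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
err = np.max(np.abs(w - w_hat))   # rounding error is bounded by scale / 2
```

The int8 tensor occupies a quarter of the float32 memory, which is exactly the kind of saving that makes a model fit in the kilobytes of an embedded microcontroller; production toolchains add per-channel scales, model compression and calibration on top.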

5. Scale Across the Clouds

The playing field is very competitive with more startups than ever because it’s easier than ever to start a company, said Herbst, who worked closely with entrepreneurs as a corporate and IP attorney even before he joined NVIDIA 18 years ago.

All you need to get started today is an idea, a laptop, a cup of coffee and a cloud-computing account. “All the infrastructure is a service now,” he said.

But if you get lucky and scale, that one cloud-computing account can become a bottleneck and your biggest cost after payroll.

“That’s a good problem to have, but to hit breakeven and make it easier for customers, you need your software running on any cloud,” said Madhavan.

The need is so striking, he wound up funding a startup to address it. Robin.io is an expert in stateful and stateless workloads, helping companies become cloud-agnostic. “We have been extremely successful with 5G telcos going cloud native and embracing containers,” he said.

6. Stay Flexible as a Yogi

Few startups wind up where they thought they were going. Apple planned to make desktop computers; Amazon aimed to sell books online.

Over time “they pivot one way or another. They go in with a problem to solve, but as they talk to customers the smart ones learn from those interactions how to re-target or tailor themselves,” said Herbst, who gives an example from his pre-AI days.

Keyhole Corp. wanted to provide 3D mapping services, initially for real estate agents and other professionals. Its first product was distributed on CDs.

As a veteran of early search startup AltaVista, “I thought this startup belonged more to a Yahoo! or some other internet company. I realized it was not a professional but a major consumer app,” said Herbst, who was happy to fund them as one of NVIDIA’s first investments outside gaming.

In time, Google agreed with Herbst and acquired the company. Keyhole’s technology became part of the underpinnings of Google Maps and Google Earth.

“They had a nice exit, their people went on to have rock-star careers at Google, and I believe were among the original creators of Pokemon Go,” he said.

The lesson is simple: Follow good directions — like the six success factors for AI software startups — and there’s no telling where you may end up.

The post Read ‘em and Reap: 6 Success Factors for AI Startups appeared first on The Official NVIDIA Blog.

Speaking the Same Language: How Oracle’s Conversational AI Serves Customers

At Oracle, customer service chatbots use conversational AI to respond to consumers with more speed and complexity.

Suhas Uliyar, vice president for product management for digital assistance and AI at Oracle, stopped by to talk to AI Podcast host Noah Kravitz about how the newest wave of conversational AI can keep up with the nuances of human conversation.

Many chatbots frustrate consumers because of their static nature. Asking an unexpected question or using the wrong keyword confuses the bot and prompts it to start over or make the wrong selection.

Uliyar says that Oracle’s digital assistant uses a sequence-to-sequence algorithm to understand the intricacies of human speech, and react to unexpected responses.

Their chatbots can “switch the context, keep the memory, give you the response and then you can carry on with the conversation that you had. That makes it natural, because we as humans fire off on different tangents at any given moment.”
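The post doesn’t describe Oracle’s implementation, but the behavior Uliyar describes — suspend the current task, handle the tangent, then resume with memory intact — is essentially a context stack. A toy, hypothetical illustration (the tasks, slots and replies are invented):

```python
class DialogueManager:
    """Toy context stack: a tangent suspends the current task instead of
    discarding it, so the conversation can resume where it left off."""

    def __init__(self):
        self.stack = []   # suspended conversation contexts, newest on top

    def start(self, task, slots):
        """Begin a task and remember its collected slot values."""
        self.stack.append({"task": task, "slots": dict(slots)})
        return f"Working on: {task}."

    def digress(self, answer):
        """Answer a tangent without losing the suspended task."""
        current = self.stack[-1]["task"]
        return f"{answer} (Your '{current}' request is still in progress.)"

    def resume(self):
        """Return to the most recently suspended task, memory intact."""
        ctx = self.stack[-1]
        return f"Back to {ctx['task']}, with {ctx['slots']}."

dm = DialogueManager()
dm.start("book a flight", {"destination": "Toronto"})
aside = dm.digress("Your card balance is $250.")   # user fired off a tangent
back = dm.resume()                                  # conversation picks up again
```

A single-intent bot would have thrown the flight booking away at the tangent; keeping the context is what makes the exchange feel natural.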

Key Points From This Episode:

  • The contextual questions that often occur in normal conversation stump single-intent systems, but the most recent iteration is capable of answering simple questions quickly and remembering customers.
  • The next stage in conversational AI, Uliyar believes, will allow bots to learn about users in order to give them recommendations or take action for them.
  • Learn more about Oracle’s digital assistant for enterprise applications and visit Uliyar’s Twitter.

Tweetable

“If machine learning is the rocket that’s going to take us to the next level, then data is the rocket fuel.” — Suhas Uliyar [15:59]

You Might Also Like

Charter Boosts Customer Service with AI

Jared Ritter, the senior director of wireless engineering at Charter Communications, describes their innovative approach to data collection on customer feedback. Rather than retroactively accessing the data to fix problems, Charter uses AI to evaluate data constantly to predict issues and address them as early as possible.

Using Deep Learning to Improve the Hands-Free, Voice Experience

What would the future of intelligent devices look like if we could bounce from using Amazon’s Alexa to order a new book to Google Assistant to schedule our next appointment, all in one conversation? Xuchen Yao, the founder of AI startup KITT.AI, discusses the toolkit that his company has created to achieve a “hands-free” experience.

AI-Based Virtualitics Demystifies Data Science with VR

Aakash Indurkha, head of machine learning projects at AI-based analytics platform Virtualitics, explains how the company is bringing creativity to data science using immersive visualization. Their software bridges the gap created by a lack of formal training to help inexperienced users identify anomalies on their own, and gives experts the technology to demonstrate their complex calculations.

Make Our Podcast Better

Have a few minutes to spare? Fill out this short listener survey. Your answers will help us make a better podcast.

The post Speaking the Same Language: How Oracle’s Conversational AI Serves Customers appeared first on The Official NVIDIA Blog.

NVIDIA and Microsoft Team Up to Aid AI Startups

NVIDIA and Microsoft are teaming up to provide the world’s most innovative young companies with access to their respective accelerator programs for AI startups.

Members of NVIDIA Inception and Microsoft for Startups can now receive all the benefits of both programs — including technology, training, go-to-market support and NVIDIA GPU credits in the Azure cloud — to continue growing and solving some of the world’s most complex problems.

The announcement was made at Slush, a startup event taking place this week in Helsinki.

With a variety of tools, technology and resources — including NVIDIA GPU cloud instances on Azure — AI startups can move into production and deployment faster.

NVIDIA and Microsoft will evaluate what startups in the joint program need, and how NVIDIA Inception and Microsoft for Startups can help them achieve their goals.

NVIDIA Inception members are eligible for the following benefits from Microsoft for Startups:

  • Free access to specific Microsoft technologies suited to every startup’s needs, including up to $120,000 in free credits in the Azure cloud
  • Go-to-market resources to help startups sell alongside Microsoft’s global sales channels

Microsoft for Startups members can access the following benefits from NVIDIA Inception:

  • Technology expertise on implementing GPU applications and hardware
  • Free access to NVIDIA Deep Learning Institute online courses, such as “Fundamentals of Deep Learning for Computer Vision” and “Accelerating Data Science”
  • Unlimited access to DevTalk, a forum for technical inquiries and community engagement
  • Go-to-market assistance and hardware discounts across the NVIDIA portfolio, from NVIDIA DGX AI systems to NVIDIA Jetson embedded computing platforms

Microsoft for Startups is a global program designed to support startups as they create and expand their companies. Since its launch in 2018, thousands of startups have applied and are active in the program. Microsoft for Startups members are on course to drive $1 billion in pipeline opportunity by the end of 2020.

NVIDIA Inception is a virtual accelerator program that supports startups harnessing GPUs for AI and data science applications during critical stages of product development, prototyping and deployment. Since its launch in 2016, the program has expanded to over 5,000 companies.

The post NVIDIA and Microsoft Team Up to Aid AI Startups appeared first on The Official NVIDIA Blog.
