In the AI of the Storm: Accelerating Disaster Relief Efforts with Artificial Intelligence

With lives at stake, and the clock ticking, mastering disaster may be the ultimate AI challenge.

Teams from Johns Hopkins University, Lockheed Martin, the U.S. Department of Defense’s Joint Artificial Intelligence Center and NVIDIA Wednesday outlined how they’re working to put AI to work speeding disaster relief to where it’s needed most.

The teams spoke about their work at GTC DC, the Washington edition of NVIDIA’s GPU Technology Conference, which brought together more than 3,500 registered attendees — policymakers, business leaders and researchers among them — to discuss and learn about the latest in AI and data science.

Their presentations underscored GTC DC’s role as Washington’s premier AI conference. They represent the latest efforts, detailed at the event over the past several years, to put the benefits of AI into the hands of policymakers and first responders.

Detecting Damage with Satellite Imagery

A team from the Johns Hopkins Applied Physics Laboratory and the Joint AI Center (JAIC) spoke about how they’re using GPU-powered deep learning algorithms to track the damage caused by major storms by processing airborne and satellite imagery.

Speakers included software engineer Beatrice Garcia and senior engineer Gordon Christie, both from the university’s Applied Physics Laboratory, and Captain Dominic Garcia, project lead at JAIC.

While their work hasn’t been deployed — yet — in disaster zones, their goal is to create AI systems that harness satellite and aerial imagery, along with other data, to point first responders and military and government decision-makers and analysts to where the need is greatest.

Such images will help first responders see, at a glance, where to deploy their resources, Christie said, as he showed an AI-enhanced map assessing the damage caused by a tornado that struck Joplin, Missouri, in 2011.

The lab and JAIC have applied deep learning algorithms to the imagery of a number of severe storms collected from airborne platforms to accelerate detection of flooding and damaged infrastructure.

Based on the algorithms they developed and techniques they learned, the joint team is now creating a scalable environment that would provide these capabilities to any analyst. Users would have access to AI and machine learning algorithms, enabling a faster response to a variety of natural disasters.
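The post doesn’t share the team’s code, but a minimal sketch can make the patch-classification approach concrete. The following PyTorch example is illustrative only: the backbone, tile size and damage classes are all assumptions, not the JHU/JAIC system.

```python
# Hypothetical sketch: classify aerial/satellite image tiles by damage type.
# Backbone, labels and tile size are assumptions, not the JHU/JAIC pipeline.
import torch
import torch.nn as nn
import torchvision.models as models

NUM_CLASSES = 3  # e.g., undamaged / flooded / damaged structure (assumed)

def build_damage_classifier() -> nn.Module:
    # Start from an ImageNet-pretrained backbone and swap the head,
    # a common recipe when labeled disaster imagery is scarce.
    net = models.resnet18(weights="DEFAULT")
    net.fc = nn.Linear(net.fc.in_features, NUM_CLASSES)
    return net

model = build_damage_classifier().eval()
tiles = torch.randn(8, 3, 224, 224)   # stand-in for 224x224 RGB image tiles
with torch.no_grad():
    logits = model(tiles)             # per-tile class scores
print(logits.argmax(dim=1))           # predicted damage class for each tile
```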

Lockheed Prepares with Earthquake Simulation

Andrew Walsh, a senior staff systems engineer at Lockheed Martin, explained how the company is building an open dataset that can be used to train AI for better responses to earthquakes.

Working with a team from NVIDIA, the company is building the dataset for multi-platform, multi-sensor machine learning research and development.

The dataset, focused on humanitarian assistance and disaster relief, is being developed using a combination of real-world data collection events and simulation. The current emphasis is on earthquake scenarios.

Walsh, joined by May Casterline, a senior solutions architect at NVIDIA, explained how they choreographed a real-world collection event that included multiple sensors, aircraft, ground vehicles and teams of actors in a series of simulated earthquake scenarios. They also detailed the effort required to spatiotemporally align all the disparate data sources and described the challenges of labeling such a massive dataset.
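As a hedged illustration of one alignment step, the sketch below pairs records from two timestamped streams onto a common timeline with pandas. The column names, sensors and tolerance are invented; the post doesn’t describe Lockheed Martin’s actual pipeline.

```python
# Illustrative only: align two timestamped sensor streams by nearest match.
# Sensor names, columns and the 500 ms tolerance are assumptions.
import pandas as pd

aircraft = pd.DataFrame({
    "t": pd.to_datetime(["2019-06-01 12:00:00.0", "2019-06-01 12:00:01.0"]),
    "frame_id": [101, 102],               # aerial camera frames
})
ground = pd.DataFrame({
    "t": pd.to_datetime(["2019-06-01 12:00:00.4", "2019-06-01 12:00:01.1"]),
    "lat": [34.05, 34.06],                # ground-vehicle GPS fixes
    "lon": [-118.25, -118.24],
})

# Nearest-timestamp join within a 500 ms window, so each aerial frame is
# paired with the closest ground-vehicle position report.
aligned = pd.merge_asof(
    aircraft.sort_values("t"), ground.sort_values("t"),
    on="t", direction="nearest", tolerance=pd.Timedelta("500ms"),
)
print(aligned)
```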

Their dataset will be used to train AI and machine learning systems to improve responses to real earthquakes.

Disaster Planning with Data Science

Sean Griffin, president of Disaster Intelligence, spoke late Wednesday afternoon about his company’s approach to disaster prevention and response. His D.C.-based firm is working to create a common web platform that collects datasets relevant to natural and manmade disasters, which are then displayed graphically.

Users — from first responders to everyday citizens — can access the data to make more educated choices before and after a disaster.

“We used to share situational awareness by PDF or SharePoint sites,” said Griffin. But high-performance computing is making it possible to update larger audiences with more relevant data.

“It’s our objective as a company to have complete saturation across the U.S. to have outage data in our platforms so that not only do we know that the power’s out, but that we can intersect that information with other key points of interest like healthcare facilities or water systems.”
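The post doesn’t describe Disaster Intelligence’s implementation, but the kind of intersection Griffin mentions can be sketched with basic geospatial tools. In this hypothetical example, hospitals falling inside a reported outage polygon are flagged; all names and coordinates are made up.

```python
# Hedged sketch: flag healthcare facilities inside an outage area.
# Geometry, names and coordinates are invented for illustration.
from shapely.geometry import Point, Polygon

outage_area = Polygon([(-77.05, 38.88), (-77.00, 38.88),
                       (-77.00, 38.92), (-77.05, 38.92)])  # rough bounding box

hospitals = {
    "General Hospital": Point(-77.02, 38.90),
    "Riverside Clinic": Point(-76.95, 38.89),
}

# Intersect the outage footprint with points of interest.
affected = [name for name, pt in hospitals.items() if outage_area.contains(pt)]
print(affected)  # -> ['General Hospital']
```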

Griffin presented two use cases. The first showed how Disaster Intelligence’s platform can model the consequences, cost and options for different disaster relief strategies. The second addressed how the platform improves coastal evacuations during hurricanes.

Route Planning with RAPIDS

NVIDIA is hosting a webinar on how RAPIDS, the company’s GPU-accelerated data science software stack, can help speed up route replanning for civilian and military disaster response assets. Register for the webinar, taking place Dec. 17 at 10 am PT, here.
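The webinar’s code isn’t published with this post, but RAPIDS’s graph library, cuGraph, follows a pandas-like pattern for problems like route replanning. The sketch below recomputes GPU-accelerated shortest paths after a road segment is removed; the toy graph is invented, and exact APIs can vary between RAPIDS releases.

```python
# Toy route-replanning sketch with RAPIDS cuGraph (APIs vary by release).
# The four-node road graph and weights are invented for illustration.
import cudf
import cugraph

edges = cudf.DataFrame({
    "src":    [0, 0, 1, 2],
    "dst":    [1, 2, 3, 3],
    "weight": [4.0, 1.0, 1.0, 5.0],   # e.g., travel time per road segment
})

def shortest_paths(edge_df, source=0):
    G = cugraph.Graph()
    G.from_cudf_edgelist(edge_df, source="src", destination="dst",
                         edge_attr="weight")
    return cugraph.sssp(G, source)    # distance and predecessor per vertex

print(shortest_paths(edges))

# Simulate a washed-out road (the 0->2 segment) and replan on the GPU.
passable = edges[~((edges["src"] == 0) & (edges["dst"] == 2))]
print(shortest_paths(passable))
```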

At GTC DC, Experts Describe Why Diversity in AI Makes a World of Difference

When Megan Gray, CEO of Moment AI, first tested one of her company’s services — a tool using AI to determine facial signs indicating a driver may have fallen asleep or suffered a medical issue — it didn’t work.

“The technology worked on our CTO, who is a white male. But then I tried it, and it couldn’t detect that my eyes were closed,” Gray said. “It didn’t work on me as an African-American woman.” This is just one example of how a lack of diversity in the field of AI affects the technologies that are created.

At GTC DC, this week’s Washington edition of the GPU Technology Conference, a range of events focused on sharing ideas on how workplaces can become more inclusive, and how researchers can improve their AI technology to avoid bias.

One of Forbes’ top conferences for women in tech, this year’s GTC DC was the most diverse yet. Over 20 percent of its 3,500 attendees were women.

The conference also featured an inaugural reception celebrating attendees from historically black colleges and universities and the Black in AI and LatinX in AI community groups.

As he opened the reception, Kevyn Orr, partner-in-charge at Jones Day, said, “You are the first generation that has the opportunity to make sure that development, that research and that algorithms are appropriately inclusive.”

‘Who’s Like Me?’: Finding Diverse Role Models

Catherine Ordun, senior data scientist at Booz Allen Hamilton, delivered the keynote at the GTC DC Women’s Early Career Accelerator.

GTC DC kicked off with the Women’s Early Career Accelerator, a day-long, invitation-only training and networking event attended by nearly 60 graduate students and early-career professionals.

Catherine Ordun, a senior data scientist at Booz Allen who presented the keynote at the accelerator, was honest about the challenges of being a woman in the field of AI.

“You’ll find yourself asking, ‘Who’s like me?’ And the truth is, there’s not a lot. Only 12 percent of people who do AI are women,” said Ordun, referencing a WIRED survey.

Events like the accelerator are helping to change that. After Ordun’s address, participants spent the day completing the NVIDIA Deep Learning Institute’s “Fundamentals of Deep Learning for Computer Vision” workshop, taught by Alex Qi, an enterprise solutions architect at NVIDIA.

The Women in AI Breakfast featured an AI ethics panel, with speakers (from left) Svetlana Matt, Emily Tait, Megan Gray and Tiffany Moore.

GTC DC also featured the third annual Women in AI Breakfast, hosted by Dell Technologies. Over quiche and coffee, a panel of experts in research, law and more discussed AI ethics.

Emily Tait, an intellectual property partner at Jones Day, provided a legal perspective on how companies can counter issues like the one Gray described. “The best companies are creating dedicated personnel and policies and cultures around diversity.” From there, they’re able to come up with more robust algorithms and identify biases in their technology.

And nearly 75 people filled the eighth floor of the Ronald Reagan Building and International Trade Center to attend the Black and Latinx Communities Reception, sponsored by Jones Day.

The reception recognized the 50 students who were selected from historically black colleges and universities, Black in AI and LatinX in AI. They received full passes to DLI courses and to the entirety of GTC DC.

Addressing a Changing Workforce

Andrew Ko, managing director for global education at AWS, spoke at the Workforce of the Future panel.

NVIDIA Senior Director of Corporate Social Responsibility Tonie Hansen moderated a panel of executives from government, nonprofits and business. They shared examples of how educational institutions, trade associations and companies can help employees prepare for modern jobs that incorporate AI and data science.

Andrew Ko, the managing director for global education at AWS, provided a corporate perspective and gave examples of career programs implemented by Amazon that help employees reskill.

Another panelist, Rhonda Foxx, a former chief of staff for U.S. Representative Alma Adams and founder of the diversity innovation house HBCU House, gave insight into how the federal government can help support HBCUs — historically black colleges and universities — which produce 47 percent of all black women engineers.

“With emerging technology and AI, we are on the precipice of the fourth revolution,” she said. “We all need to lean in right now and make sure there’s diversity of thought at the table as we move forward in these technological advances.”

Under the Microscope: Top Pathology Lab Fuses Data Sources to Develop Cancer-Detecting AI

Pathologists agreed just three-quarters of the time when diagnosing breast cancer from biopsy specimens, according to a recent study.

The difficult, time-consuming process of analyzing tissue slides is why pathology is one of the most expensive departments in any hospital.

Faisal Mahmood, assistant professor of pathology at Harvard Medical School and the Brigham and Women’s Hospital, leads a team developing deep learning tools that combine a variety of sources — digital whole slide histopathology data, molecular information, and genomics — to aid pathologists and improve the accuracy of cancer diagnosis.

Mahmood, who heads his eponymous Mahmood Lab in the Division of Computational Pathology at Brigham and Women’s Hospital, spoke this week about this research at GTC DC, the Washington edition of our GPU Technology Conference.

The variability in pathologists’ diagnosis “can have dire consequences, because an uncertain determination can lead to more biopsies and unnecessary interventional procedures,” he said in a recent interview. “Deep learning has the potential to assist with diagnosis and therapeutic response prediction, reducing subjective bias.”

Depending on the type of cancer and the pathologist’s level of experience, analyzing a single biopsy slide can take 15 minutes or more. If a single patient has a couple dozen slides, that time adds up quickly.

And to decide on a treatment plan, doctors also take into account other data sources like patient and familial medical history, as well as molecular and genomic data when it’s available.

Mahmood’s team uses NVIDIA GPUs on premises and in the cloud to develop its AI tools for pathology image analysis that incorporate all of these data sources.

“By working with whole slide images and fusing multimodal data sources, we are algorithmically moving closer and closer to the clinical workflow,” Mahmood said. “This will enable us to run prospective studies with AI-assisted pathology diagnosis tools that use multimodal data.”

AI Sees the Big Picture

Digitized whole slide images taken during a tissue biopsy are huge — each can be more than 100,000 by 100,000 pixels. To efficiently compute with such large files, deep learning developers often choose to chop a slide into individual patches, making it easier for a neural network to process. But this tactic makes it incredibly time-consuming for researchers to hand-label the training data.
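A rough sketch of that patching step, using the open-source OpenSlide reader, looks like the following. The patch size, pyramid level and file name are assumptions; the Mahmood Lab’s actual pipeline isn’t specified in this post.

```python
# Hedged sketch: tile a gigapixel whole-slide image into fixed-size patches.
# Patch size, level and the slide file are illustrative assumptions.
import openslide

PATCH = 256  # pixels per side; whole slides can exceed 100,000 x 100,000

def iter_patches(path, level=0, patch=PATCH):
    slide = openslide.OpenSlide(path)
    width, height = slide.level_dimensions[level]
    for y in range(0, height - patch + 1, patch):
        for x in range(0, width - patch + 1, patch):
            # read_region returns one RGBA PIL image tile at a time,
            # so the full slide never has to fit in memory at once.
            yield slide.read_region((x, y), level, (patch, patch))

# for tile in iter_patches("biopsy.svs"):   # hypothetical slide file
#     predictions.append(model(tile))       # hand each patch to a network
```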

The Mahmood Lab is developing deep learning models that parse whole tissue slides at once in a data-efficient method, using NVIDIA GPUs to accelerate training and inference of their neural networks. These models can be used for patient selection and stratification into treatment groups for precision therapies.

For prototyping their deep learning models, and for inference, the team relies on four on-prem machines with NVIDIA GPU clusters. To train graph convolutional networks and contrastive predictive coding models with large pathology images, the researchers use NVIDIA V100 Tensor Core GPUs in Google Cloud.

“The modern GPU is what gives us the ability to train deep learning models on whole slides,” said Max Lu, a researcher in the Mahmood Lab. “The benefit is that it doesn’t require modifying the current clinical workflow, because pathologists are analyzing and preparing reports for whole slides anyways.”

Joining Sources

Pathologists often make their determinations using a wealth of data, ranging from tissue slides and immunohistochemistry markers to genomic profiles. But most current deep learning-based diagnosis methods rely on a single data source or on trivial methods of fusing information.

This led Mahmood Lab researchers to develop mechanisms that combine microscope and genomic data in a much more heuristic and holistic manner. Initial results suggest that adding information from genomic profiles and graph convolutional networks can improve diagnostic and prognostic models.
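The lab’s published architectures aren’t detailed in this post; as a minimal sketch of the late-fusion idea, the PyTorch module below concatenates a slide-level embedding with an encoded genomic profile before classification. All dimensions are invented.

```python
# Minimal late-fusion sketch: combine a histology embedding with genomics.
# Dimensions and the fusion design are assumptions, not the lab's model.
import torch
import torch.nn as nn

class MultimodalFusion(nn.Module):
    def __init__(self, img_dim=512, gene_dim=200, n_classes=2):
        super().__init__()
        self.gene_encoder = nn.Sequential(nn.Linear(gene_dim, 128), nn.ReLU())
        self.classifier = nn.Sequential(
            nn.Linear(img_dim + 128, 256), nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, img_feat, gene_vec):
        # Concatenate the slide embedding with the encoded genomic profile.
        fused = torch.cat([img_feat, self.gene_encoder(gene_vec)], dim=1)
        return self.classifier(fused)

model = MultimodalFusion()
logits = model(torch.randn(4, 512), torch.randn(4, 200))
print(logits.shape)  # torch.Size([4, 2])
```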

Sliding into the Pathology Workflow

Mahmood sees two potential ways in which deep learning could be incorporated into pathologists’ workflow. AI-annotated slide images could be used as a second opinion for pathologists to help improve the quality and consistency of diagnoses.

Or, computational pathology tools could screen out all the negative cases, so that pathologists only need to review biopsy slides that are likely positive, significantly reducing their workloads. There’s a precedent for this: In the 1990s, hospitals began using third-party companies to scan and stratify Pap smear slides, throwing out all the negative cases.

“If there are 40,000 breast cancer tissue slides and 20,000 are negative, that half would be stratified out and the pathologist wouldn’t see it,” Mahmood said. “Just by reducing the pathologist’s burden, variability is likely to go down.”
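In code, that screening step reduces to thresholding model scores, as in this back-of-envelope sketch. The scores and the 0.5 cutoff are invented; a real deployment would tune the threshold to keep false negatives near zero.

```python
# Toy triage: send only slides scoring above a cutoff to the pathologist.
# Scores and the threshold are invented for illustration.
def triage(slide_scores, threshold=0.5):
    review   = {s: p for s, p in slide_scores.items() if p >= threshold}
    screened = {s: p for s, p in slide_scores.items() if p < threshold}
    return review, screened

scores = {"slide_001": 0.91, "slide_002": 0.04, "slide_003": 0.62}
review, screened = triage(scores)
print(f"{len(review)} of {len(scores)} slides go to the pathologist")
```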

To test and validate their algorithms, the researchers plan to conduct retrospective and prospective studies using biopsy data from the Dana Farber Cancer Institute. They will study whether a pathologist’s analysis of a biopsy slide changes after seeing the algorithm’s determination — and whether using AI reduces variation in diagnosis.

Mahmood Lab researchers will present their deep learning projects at the NeurIPS conference’s ML4H workshop in December.

Main image shows a whole slide of keratoacanthoma, a type of skin tumor. Image by Alex Brollo, licensed from Wikimedia Commons under CC BY-SA 3.0.

NVIDIA Shows Its Prowess in First AI Inference Benchmarks

Those who are keeping score in AI know that NVIDIA GPUs set the performance standards for training neural networks in data centers in December and again in July. Industry benchmarks released today show we’re setting the pace for running those AI networks in and outside data centers, too.

NVIDIA Turing GPUs and our Xavier system-on-a-chip posted leadership results in MLPerf Inference 0.5, the first independent benchmarks for AI inference. Before today, the industry was hungry for objective metrics on inference because it’s expected to be the largest and most competitive slice of the AI market.

Among a dozen participating companies, only the NVIDIA AI platform had results across all five inference tests created by MLPerf, an industry benchmarking group formed in May 2018. That’s a testament to the maturity of our CUDA-X AI and TensorRT software. They ease the job of harnessing all our GPUs that span uses from data center to the edge.

MLPerf defined five inference benchmarks that cover three established AI applications — image classification, object detection and translation. Each benchmark is measured under four scenarios. Server and offline scenarios are most relevant for data center use cases, while single- and multi-stream scenarios speak to the needs of edge devices and SoCs.

NVIDIA topped all five benchmarks for both data center scenarios (offline and server), with Turing GPUs providing the highest performance per processor among commercially available products.

NVIDIA Turing led among commercially available processors in MLPerf scenarios geared for the data center.¹

The offline scenario represents data center tasks such as tagging photos, where all the data is available locally. The server scenario reflects jobs such as online translation services, where data and requests are arriving randomly in bursts and lulls.
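To make the distinction concrete, here’s a toy, simulated harness in plain Python: the offline scenario is pure batch throughput, while the server scenario injects Poisson-distributed requests and reports tail latency. The 2 ms service time and arrival rate are invented numbers, not MLPerf results.

```python
# Toy simulation of MLPerf's two data center scenarios (invented timings).
import random

SERVICE_TIME = 0.002              # assume 2 ms per inference (illustrative)

def offline_throughput(num_samples):
    # Offline: all data is on hand; only total samples/second matters.
    return num_samples / (num_samples * SERVICE_TIME)

def server_p99_latency(num_requests, arrival_rate):
    # Server: requests arrive in bursts and lulls; queueing inflates latency.
    clock, free_at, latencies = 0.0, 0.0, []
    for _ in range(num_requests):
        clock += random.expovariate(arrival_rate)  # Poisson arrivals
        start = max(clock, free_at)                # wait if still busy
        free_at = start + SERVICE_TIME
        latencies.append(free_at - clock)
    return sorted(latencies)[int(0.99 * len(latencies))]

print(f"offline: {offline_throughput(10_000):.0f} samples/sec")
print(f"server p99 latency: {server_p99_latency(10_000, 400):.4f} s")
```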

For its part, Xavier ranked as the highest performer under both edge-focused scenarios (single- and multi-stream) among commercially available edge and mobile SoCs.

An industrial inspection camera identifying defects in a fast-moving production line is a good example of a single-stream task. The multi-stream scenario tests how many feeds a chip can handle — a key capability for self-driving cars that might use a half-dozen cameras or more.

NVIDIA Xavier led among commercially available edge and mobile SoCs in MLPerf scenarios geared for the edge.²

The results reveal the power of our CUDA and TensorRT software. They provide a common platform that enables us to show leadership results across multiple products and use cases, a capability unique to NVIDIA.

We competed in data center scenarios with two GPUs. Our TITAN RTX demonstrated the full potential of our Turing-class GPUs, especially in demanding tasks such as running a GNMT model used for language translation.

The versatile and widely used NVIDIA T4 Tensor Core GPU showed strong results across several scenarios. These 70-watt GPUs are designed to easily fit into any server with PCIe slots, enabling users to expand their computing power as needed for inference jobs known to scale well.

MLPerf has broad backing from industry and academia. Its members include Arm, Facebook, Futurewei, General Motors, Google, Harvard University, Intel, MediaTek, Microsoft, NVIDIA and Xilinx. To its credit, the new benchmarks attracted significantly more participants than two prior training competitions.

NVIDIA demonstrated its support for the work by submitting results in 19 of 20 scenarios, using three products in a total of four configurations. Our partner Dell EMC and our customer Alibaba also submitted results using NVIDIA GPUs. Together, we gave users a broader picture of the potential of our product portfolio than any other participant.

Fresh Perspectives, New Products

Inference is the process of running AI models in real-time production systems to filter actionable insights from a haystack of data. It’s an emerging technology that’s still evolving, and NVIDIA isn’t standing still.

Today we announced a low-power version of the Xavier SoC used in the MLPerf tests. At full throttle, Jetson Xavier NX delivers up to 21 TOPS while consuming just 15 watts. It aims to drive a new generation of performance-hungry, power-pinching robots, drones and other autonomous devices.

In addition to the new hardware, NVIDIA released new TensorRT 6 optimizations used in the MLPerf benchmarks as open source on GitHub. You can learn more about the optimizations in this MLPerf developer blog. We continuously evolve this software so our users can reap benefits from increasing AI automation and performance.
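The TensorRT optimizations themselves live in the GitHub repository linked above; as a generic, hedged example of the usual on-ramp, the sketch below exports a PyTorch model to ONNX, the format TensorRT’s parser typically consumes. It is not the MLPerf submission code.

```python
# Generic ONNX export, a common first step toward TensorRT deployment.
# This is an illustrative sketch, not NVIDIA's MLPerf submission code.
import torch
import torchvision.models as models

model = models.resnet50(weights="DEFAULT").eval()
dummy = torch.randn(1, 3, 224, 224)       # one example input fixes the shape

torch.onnx.export(
    model, dummy, "resnet50.onnx",
    input_names=["input"], output_names=["logits"],
    opset_version=13,
)
# TensorRT's ONNX parser can then build an optimized engine from this file,
# applying layer fusion and reduced-precision kernels for inference.
```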

Making Inference Easier for Many

One big takeaway from today’s MLPerf tests is that inference is hard. For instance, in actual workloads inference is even more demanding than in the benchmarks because it requires significant pre- and post-processing steps.

In his keynote address at GTC last year, NVIDIA founder and CEO Jensen Huang compressed the complexities into one word: PLASTER. Modern AI inference requires excellence in Programmability, Latency, Accuracy, Size-of-model, Throughput, Energy efficiency and Rate of Learning, he said.

That’s why users are increasingly embracing high-performance NVIDIA GPUs and software to handle demanding inference jobs. They include a who’s who of forward-thinking companies such as BMW, Capital One, Cisco, Expedia, John Deere, Microsoft, PayPal, Pinterest, P&G, Postmates, Shazam, Snap, Shopify, Twitter, Verizon and Walmart.

This week, the world’s largest delivery service — the U.S. Postal Service — joined the ranks of organizations using NVIDIA GPUs for both AI training and inference.

Hard-drive maker Seagate Technology expects to realize up to a 10 percent improvement in manufacturing throughput thanks to its use of AI inference running on NVIDIA GPUs. It anticipates up to a 300 percent return on investment from improved efficiency and better quality.

Pinterest relies on NVIDIA GPUs for training and evaluating its recognition models and for performing real-time inference across its 175 billion pins.

Snap uses NVIDIA T4 accelerators for inference on the Google Cloud Platform, increasing advertising effectiveness while lowering costs compared to CPU-only systems.

A Twitter spokesman nailed the trend: “Using GPUs made it possible to enable media understanding on our platform, not just by drastically reducing training time, but also by allowing us to derive real-time understanding of live videos at inference time.”

The AI Conversation About Inference

Looking ahead, conversational AI represents a giant set of opportunities and technical challenges on the horizon — and NVIDIA is a clear leader here, too.

NVIDIA already offers optimized reference designs for conversational AI services such as automatic speech recognition, text-to-speech and natural-language understanding. Our open-source optimizations for AI models such as BERT, GNMT and Jasper give developers a leg up in reaching world-class inference performance.

Top companies pioneering conversational AI are already among our customers and partners. They include Kensho, Microsoft, Nuance, Optum and many others.

There’s plenty to talk about. The MLPerf group is already working on enhancements to its current 0.5 inference tests. We’ll work hard to maintain the leadership we continue to show on its benchmarks.

  1. MLPerf v0.5 Inference results for data center server form factors and offline and server scenarios retrieved from www.mlperf.org on Nov. 6, 2019, from entries Inf-0.5-15, Inf-0.5-16, Inf-0.5-19, Inf-0.5-21, Inf-0.5-22, Inf-0.5-23, Inf-0.5-25, Inf-0.5-26, Inf-0.5-27. Per-processor performance is calculated by dividing the primary metric of total performance by the number of accelerators reported.
  2. MLPerf v0.5 Inference results for edge form factors and single-stream and multi-stream scenarios retrieved from www.mlperf.org on Nov. 6, 2019, from entries Inf-0.5-24, Inf-0.5-28, Inf-0.5-29.
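Footnote 1 spells out how the per-processor numbers are normalized: the total reported metric divided by the number of accelerators in the entry. A trivial sketch of that calculation, with invented totals:

```python
# Per-processor normalization as described in footnote 1 (invented numbers).
def per_processor(total_metric, num_accelerators):
    return total_metric / num_accelerators

entries = {
    "system_a": (480_000, 8),   # (total samples/sec, accelerators) assumed
    "system_b": (130_000, 2),
}
for name, (total, accels) in entries.items():
    print(name, f"{per_processor(total, accels):,.0f} samples/sec per chip")
```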

US Government CTO, CIO Among Leaders Flocking to GTC DC

The U.S. government’s CTO and CIO on Tuesday joined other key tech decision makers, lawmakers and industry leaders at the start of the two-day GPU Technology Conference in Washington, D.C.

U.S. CTO Michael Kratsios gave the conference’s policy day keynote on how the federal government is supporting U.S. AI leadership. And Federal CIO Suzette Kent led a panel of civilian agency leaders explaining how they’re using AI.

Another highlight: a panel on national AI strategy featuring Lynne Parker, assistant director for AI at the White House Office of Science and Technology Policy, and Jason Matheny, a commissioner on the National Security Commission on Artificial Intelligence.

The talks were among the more than 160 sessions — led by a cross-section of Washington leaders from government and industry — that have drawn more than 3,500 attendees to downtown D.C. this week.

GTC DC — hosted by NVIDIA and its partners, including Booz Allen Hamilton, Dell, IBM, Lockheed Martin and others — has quickly become the capital’s largest AI event. And it’s research, not rhetoric, attendees will tell you, that makes DC an AI accelerator like no other.

The conference is packed with representatives from more than a score of federal agencies — among them the U.S. Department of Energy, NASA and the National Institutes of Health — together able to marshal scientific efforts on a scale far beyond that of anywhere else in the world.

Putting AI to Work

The conference opened with a keynote from Ian Buck, NVIDIA’s vice president for accelerated computing.

Buck — known for creating the CUDA computing platform that puts GPUs to work powering everything from supercomputing to next-generation AI — detailed the broad range of AI tools NVIDIA makes available to help organizations advance their work.

“The challenge is how do we take AI from innovation to actually applying AI,” Buck said during his keynote address Tuesday morning. “Our challenge, NVIDIA’s challenge and my challenge is ‘How can I bring AI to industries and activate it?’”

Buck then joined Kratsios for a discussion about how the U.S. government — which has a decades-long history of supporting key technology advances — is working to extend U.S. technology leadership in the AI age.

“We fundamentally believe that AI is something that’s going to touch every industry in the United States,” Kratsios said. “We view artificial intelligence as a tool that can empower workers to do their jobs better, safer, faster and more effectively.”

Kratsios’s points were buttressed by the speakers on the national AI strategy panel — which included Parker and Matheny — who discussed the progress of the U.S. government’s strategy.

They touched on the federal government’s ongoing investment in R&D, its efforts to attract and train the highest-quality talent, and the implementation of AI across the federal government.

As part of that, Parker invited listeners to participate in the 30-day public comment period following the release of draft guidance on facilitating industry AI adoption from the U.S. Office of Management and Budget’s Office of Information and Regulatory Affairs.

Kent, who is leading federal AI adoption efforts, joined a panel of civilian agency leaders to discuss advancing AI adoption across the federal government.

“We’re using these AI capabilities to act faster,” Kent said. “In the areas where we’re keeping citizens safe, whether it’s reacting to weather or a problem caused by humans — the speed at which we help is increasing.”

Wrapping up the day, Moira Bergin, the House Committee on Homeland Security’s subcommittee director for cybersecurity and infrastructure protection, joined a discussion of how Congress and the administration are addressing new AI cybersecurity capabilities.

Bergin joined Coleman Mehta, senior director of U.S. policy at Palo Alto Networks; Daniel Kroese, associate director of the national risk management center at the Cybersecurity and Infrastructure Security Agency; and Joshua Patterson, general manager of data science at NVIDIA.

Bergin said she’s “excited” about the prospects for AI after what she described as a decade of underinvestment in R&D.

“There’s a lot of demystification that needs to happen about what AI actually is, what its capabilities are now and what its capabilities will be later,” Bergin said.

Scores more discussions took place throughout the conference, including packed sessions on policies to speed the adoption of AI in healthcare and on building an inclusive AI workforce across the country.

Underscoring the role AI can play for good, speakers from the Johns Hopkins University Applied Physics Laboratory and the U.S. Department of Defense’s Joint Artificial Intelligence Center will discuss how they’re harnessing AI to provide humanitarian assistance and disaster relief.

Expect their discussion — of how they harnessed airborne and satellite imagery data after Hurricane Florence hit North and South Carolina in 2018 — to point the way to more groundbreaking AI feats to come.

AI4Good: Canadian Lab Empowers Women in Computer Science

Doina Precup is applying Romanian wisdom to the gender gap in the fields of AI and computer science.

The associate professor at McGill University and research team lead at DeepMind spoke with AI Podcast host Noah Kravitz about her personal experiences, along with the AI4Good Lab she co-founded to give women more access to machine learning training.

Growing up in Romania, Precup attended a high school that specialized in computer science and a technical university. She didn’t experience gender disparity in these learning environments.

“If anything, programming was considered a very good job for women, because you did not need to be working in the fields,” she explained.

It made the gap in Canadian universities and companies even more noticeable. At McGill, Precup saw that female students were hesitant to speak up or pursue graduate studies.

Together with Angelique Mannella, CEO of AM Consulting and an Amazon employee, Precup was inspired to start the AI4Good Lab in 2017.

Key Points From This Episode:

  • Aimed at improving women’s access to advanced AI and machine learning, the AI4Good Lab brings together 30 women from across Canada every spring for a seven-week workshop.
  • Workshop participants take classes, hear from speakers, visit companies and work in small groups to create projects.
  • This year’s projects ranged from identifying fake news to using a café’s food supplies efficiently to helping people manage chronic pain.
  • Precup also shares her best sci-fi book recommendations; listen to the podcast for her guide to the genre.
  • Visit the AI4Good Lab website or Twitter to learn more about participants’ projects and to apply to next year’s workshop. And visit Precup’s Google Scholar page to see her most recent publications.

Tweetables:

“Emphasizing the creativity and the fun in computer science and algorithms is really important, for everybody” — Doina Precup [04:30]

“I also noticed that people were sometimes afraid to speak up in classes, even if they were really good based on their exams and their assignments and their projects” — Doina Precup [05:43]

You Might Also Like

UC Berkeley’s Pieter Abbeel on How Deep Learning Will Help Robots Learn

Robots can do amazing things. Compare even the most advanced robots to a three-year-old, however, and they can come up short. UC Berkeley Professor Pieter Abbeel has pioneered the idea that deep learning could be the key to bridging that gap: creating robots that can learn how to move through the world more fluidly and naturally.


Teaching Families to Embrace AI

Tara Chklovski is CEO and founder of Iridescent, a nonprofit that provides access to hands-on learning opportunities to prepare underrepresented children and adults for the future of work. We spoke with her about the UN’s AI for Good Global Summit last May in Geneva and the AI World Championship, part of the AI Family Challenge, also in May in Silicon Valley.

Good News About Fake News: AI Can Now Help Detect False Information

With “fake news” embedding itself into, well, our news, it’s become more important than ever to distinguish between content that is fake or authentic. That’s why Vagelis Papalexakis, a professor of computer science at the University of California, Riverside, developed an algorithm that detects fake news with 75 percent accuracy.

Make Our Podcast Better

Have a few minutes to spare? Fill out this short listener survey. Your answers will help us make a better podcast.

Special Delivery: With U.S. Postal Service on Board, NVIDIA to Enable AI Deployment, NVIDIA’s Ian Buck Says

The AI revolution is here — as near as the closest mailbox — and the time’s right to put AI to work solving your organization’s biggest challenges, NVIDIA’s Ian Buck said Tuesday.

Kicking off the Washington edition of our GPU Technology Conference, Buck, NVIDIA’s vice president for accelerated computing, detailed a new generation of technologies that will help companies put modern AI to work.

Buck also announced that the United States Postal Service — the world’s largest delivery service, with 146 billion pieces of mail processed and delivered annually — is adopting end-to-end AI technology from NVIDIA.

“The challenge is how do we take AI from innovation to actually applying AI,” Buck told an audience of more than 3,500 developers, CIOs and federal employees at the three-day GTC DC. “Our challenge, NVIDIA’s challenge, and my challenge is ‘How can I bring AI to industries and activate it?’”

Over the course of his hour-long talk, Buck explained how modern AI is trained and deployed, and described how NVIDIA is adapting AI for the automotive, healthcare, robotics and 5G industries, among others.

The U.S. Postal Service offers a glimpse at what’s possible.

Buck said the USPS will roll out a deep learning solution based on NVIDIA EGX to 200 processing facilities, with the deployment expected to be operational in 2020.

Using EGX, Buck said, the USPS will be able to process packages 10x faster with higher accuracy and enhance its ability to detect hazardous parcels.

“We’re very excited to see a U.S. agency really leaning into AI. They have a cool problem, processing your mail as quickly as possible, and by working with NVIDIA they’re accelerating that work,” Buck said.

A Day in the Life of AI

Buck, known for creating the CUDA computing platform while still a Stanford student, detailed how NVIDIA’s technologies accelerate every step in the process of putting AI to work, from data ingestion to AI training and, ultimately, deployment. The next step will be the creation of vertical AI platforms that experts in a wide variety of industries will be able to put to work fast, Buck said.

In healthcare, workers are being inundated with data. A typical radiology department views 8,000 images a day. Three papers per minute are published on the PubMed medical research hub, Buck said.

Buck detailed how the NVIDIA Clara software development kit uses a new generation of transfer learning models to help healthcare workers quickly adapt. It augments a pretrained model to tackle new tasks, such as looking for a particular kind of cancer, in minutes or hours, while using less training data.
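Clara’s own APIs aren’t shown in this post; the generic pattern Buck describes, reusing a pretrained network and retraining only a small head for a new task, looks roughly like this hedged PyTorch sketch. The backbone and the two-class tumor screen are assumptions.

```python
# Generic transfer learning sketch of the pattern described above; the
# backbone choice and the hypothetical two-class task are assumptions.
import torch.nn as nn
import torchvision.models as models

backbone = models.resnet34(weights="DEFAULT")
for p in backbone.parameters():
    p.requires_grad = False           # freeze the pretrained features

# Replace the classifier head for the new task, e.g. a hypothetical
# tumor / no-tumor screen; only this small layer gets trained.
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

trainable = sum(p.numel() for p in backbone.parameters() if p.requires_grad)
print(trainable, "trainable parameters")  # the head only, so less data needed
```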

Telecommunications is another industry that can benefit from AI as it races to adopt 5G wireless technology. On today’s 4G networks, downloading an entire season of “Stranger Things” to a mobile device takes about three hours; on 5G, it takes just three minutes. 5G is also more responsive, with 1 millisecond of latency versus 10 milliseconds for 4G. And it’s ultra-reliable.

“It is a revolution, and I don’t use that word lightly,” Buck said. “It gives us the opportunity to send enormous streams of data and respond in real time.”

To help telcos and their customers make the most of 5G, NVIDIA last month launched the NVIDIA EGX Edge Supercomputing platform. Scalable and secure, these servers pack as many as four NVIDIA CUDA Tensor Core GPUs.

Smart cities can benefit from EGX, too.

Running NVIDIA’s Metropolis Internet of Things application framework as part of a pilot program, public safety officials in Dubuque, Iowa, picked out a vehicle driving the wrong way onto a freeway.
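The Metropolis framework itself is a full video analytics stack; the core wrong-way check can be illustrated with a toy heading test like the one below. The vehicle tracks and lane direction are invented.

```python
# Toy wrong-way check: compare each tracked vehicle's motion against the
# lane's permitted direction. Tracks and the lane vector are invented.
LANE_DIRECTION = (1.0, 0.0)    # assumed permitted flow: +x along the ramp

def is_wrong_way(track):
    (x0, y0), (x1, y1) = track[0], track[-1]
    # Dot product of displacement with the lane direction: negative means
    # the vehicle is moving against the permitted flow.
    return (x1 - x0) * LANE_DIRECTION[0] + (y1 - y0) * LANE_DIRECTION[1] < 0

tracks = {
    "car_17": [(0, 0.0), (5, 0.2), (11, 0.1)],     # moving with traffic
    "car_42": [(30, 1.0), (24, 0.8), (17, 0.9)],   # moving against traffic
}
for vehicle, track in tracks.items():
    print(vehicle, "WRONG WAY" if is_wrong_way(track) else "ok")
```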

Of course, NVIDIA is working to help make cars smarter — safer — too. NVIDIA’s DRIVE autonomous vehicle platform spans everything from cars to trucks to robotaxis to industrial vehicles.

To enable safe development and deployment, NVIDIA built an end-to-end workflow to develop autonomous vehicles, including systems for collecting data, curating it, labeling it, training AI, replaying it and using it to simulate the performance of new systems in all kinds of scenarios.

“This is a much deeper, richer stack than just traditional inference and training,” Buck said.

Lastly, robots are another area where NVIDIA is building tools that are unleashing a new wave of innovation. In the past, robots were very good at repetitive tasks. The future of robots, however, “is all about interaction,” Buck said.

This new generation of robots is being put to work in retail, agriculture and the food delivery business — the last slated to grow to $100 billion by 2025.

To enable all this, NVIDIA built Jetson, an end-to-end robotics platform that lets companies deploy new kinds of robots more quickly. It includes a complete software stack built onto a range of powerful SoCs, starting with the $99 Jetson Nano.

“This is going to be an exciting time for robotics,” Buck said.

It’s yet another example of how NVIDIA is bringing AI to vertical industries, with NVIDIA Clara for healthcare, NVIDIA Metropolis for smart cities and retail, NVIDIA DRIVE for autonomous vehicles, NVIDIA Omniverse for design and media, and NVIDIA Aerial for telcos.

“What’s going to take it to the next level is vertical platforms that allow the healthcare data scientist or smart city engineer to get access to this technology,” Buck said.

It’s an effort that extends to workforce training through NVIDIA’s Deep Learning Institute. The DLI has just added 12 new courses and has already trained more than 180,000 AI workers. And it’s available to individuals, teams and universities.

“My goal and my mission is to help put AI to work not just to do amazing demos, but to help industries move to adopt AI,” Buck said.

Closing the AI Skills Gap: Deep Learning Institute Adds A Dozen New Courses

From finding the best sushi near you to improving the manufacturing process of industrial components to making the car you drive safer, AI is advancing convenience, productivity and reliability across industries.

But taking advantage of the power of AI is not feasible without a skilled workforce. In fact, industry research indicates that lack of AI skills is the primary reason companies are unable to achieve business value with AI.

This is why companies and government agencies around the world are swarming the job market to hire developers, data scientists, engineers and researchers with AI expertise. But there just aren’t enough AI-trained developers to meet the demand.

To help bridge that gap, NVIDIA created the Deep Learning Institute in 2016 to train developers with hands-on courses in both fundamental and advanced AI topics. In that time, more than 183,000 students have taken advantage of this program to advance their skills.

Today, DLI is expanding its portfolio with a dozen new courses, including several new instructor-led workshops.

Onsite workshops are one of the most effective ways to train teams of developers and data scientists. DLI has delivered instructor-led workshops on-site at organizations as diverse as Adobe, Baker Hughes, Booz Allen Hamilton, Cisco, Groupe PSA, and the U.S. Food and Drug Administration. Plus, DLI is working with companies like Lockheed Martin to provide this training at multiple sites across their enterprise.

“Lockheed Martin Corporation is committed to providing our employees with access to advanced training and tools,” said Matt Tarascio, chief data and analytics officer at Lockheed Martin. “The outstanding instructors and material of NVIDIA’s DLI program have been instrumental in helping to accelerate the adoption of modern data-driven AI across the corporation in applications such as deep learning, computer vision, natural language processing, intelligent video analytics and more.”

In addition to in-person training, DLI has launched new online, self-paced courses.

Many of the courses offer a certificate of competency to support professional growth. Plus, DLI offers resources to universities, including free DLI Teaching Kits to bring AI skills to their classrooms and the DLI Ambassador Program to teach DLI courses to students for free.

Enroll in online, self-paced courses or request an instructor-led workshop for your team.
