Category: NVIDIA

Bumper Crop of AI Helps Farmers Whack Weeds, Pesticide Use

Weeds compete with neighboring crops for light, water and nutrients, costing the farming industry billions each year in agricultural yield.

To keep a better eye on fields, improve crop yields and reduce the use of pesticides, farmers and agriculture researchers are turning to AI.

“We believe the digital agriculture revolution will help in reducing the use of chemical products in agriculture,” said Adel Hafiane, an associate professor at the Institut National des Sciences Appliquées, in France’s Centre Val de Loire. Hafiane is working with colleagues from the University of Orléans to develop AI that detects weeds from drone images of beet, bean and spinach crops.

“If farmers can map the location of weeds,” he said, “they don’t need to spray chemical products over an entire field — they can just target specific areas, intervening at the right time and site.”

Using the georeferenced coordinates of where an aerial image was captured, farmers can determine the location of weeds in a field. The insights provided by the researchers’ deep learning network could then be deployed in agricultural robots on the ground that can remove or spray weeds in large fields.
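As a rough illustration of that georeferencing step, here is a minimal sketch (not the researchers' code) that maps a weed's pixel position in a nadir drone image to approximate GPS coordinates, assuming a north-up image over flat ground and a known ground sample distance:

```python
import math

def pixel_to_latlon(px, py, img_w, img_h, center_lat, center_lon, gsd_m):
    """Map a pixel in a nadir drone image to approximate WGS84 coordinates.

    Assumes the image is taken pointing straight down, oriented north-up,
    over flat ground, with a known ground sample distance (gsd_m = metres
    per pixel) and the drone's geotag at the image centre.
    """
    # Offset from the image centre in metres (x grows east, y grows south).
    dx_m = (px - img_w / 2) * gsd_m
    dy_m = (py - img_h / 2) * gsd_m

    # Metres per degree: ~111,320 for latitude; longitude shrinks by cos(lat).
    dlat = -dy_m / 111_320.0  # image y grows southward, latitude northward
    dlon = dx_m / (111_320.0 * math.cos(math.radians(center_lat)))
    return center_lat + dlat, center_lon + dlon
```

With a 1 cm/pixel ground sample distance, a blob 1,000 pixels to the right of the image centre maps to a point roughly 10 metres east of the drone's recorded position.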

Hafiane and his colleagues used a cluster of NVIDIA Quadro GPUs to train the neural networks. Their work was supported by France’s Centre-Val de Loire region.

Deep Learning on Cropped Images

From a few hundred feet in the air, using low-resolution images, it’s not easy to tell the difference between weeds and crops — both are green and leafy. But with sufficient image resolution and enough training data, neural networks can learn to differentiate the two.

Using a dataset of tens of thousands of images for each crop (some labeled, some unlabeled), the team relied on transfer learning based on the popular ImageNet model to develop its deep learning models.

To partially automate the data labeling process, the researchers developed an algorithm that used geometric information in the images to label weeds and crops. Crops are often arranged in neat lines, with open patches of soil between the rows. When spots of green are visible in the space between crop rows, the AI knows it’s likely a weed.
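The row-geometry heuristic can be sketched in a few lines. This toy version (the researchers' actual algorithm is more involved) labels green blobs by their distance to fitted crop-row centrelines:

```python
def label_by_row_geometry(detections, row_centers, row_half_width):
    """Weak-label green detections as 'crop' or 'weed' from row geometry.

    Crops grow in straight rows; vegetation in the bare soil between rows
    is almost certainly a weed. `detections` are x-positions (pixels) of
    green blobs along the axis perpendicular to the rows; `row_centers`
    are the fitted x-positions of the crop rows.
    """
    labels = []
    for x in detections:
        inside_a_row = any(abs(x - c) <= row_half_width for c in row_centers)
        labels.append("crop" if inside_a_row else "weed")
    return labels
```

Blobs that fall in the soil gap between rows come back labeled "weed", giving the network training labels without manual annotation.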

A more complex challenge is detecting weeds within the crop rows. The researchers are working to improve their model’s results on spotting these trickier pests.

Developed using the TensorFlow and Caffe deep learning frameworks, the model recognizes weeds in fields of beets, spinach and beans. It performed best on beet crops, achieving a precision of 93 percent.

Hafiane says using NVIDIA Quadro GPUs shrunk training time from one week on a high-end CPU down to a few hours. While the dataset used large, 36-megapixel images, the researchers say further increasing the image resolution captured by the drones would help boost the performance of their neural networks.

The researchers are also using NVIDIA GPUs to train neural networks to detect crop diseases in vineyards, and plan to collaborate with international colleagues to develop similar solutions to monitor other crops.

The post Bumper Crop of AI Helps Farmers Whack Weeds, Pesticide Use appeared first on The Official NVIDIA Blog.

NoTraffic, No Problems: AI Startup Improves Intersections

Israel-based technology company NoTraffic is using AI to transform intersections from danger zones to intelligent decision makers, cutting time delays and carbon dioxide emissions.

Next week, NoTraffic’s Yoav Valinsky, a computer vision researcher, will be going to GTC DC to discuss how the company is operating at the edge in a presentation called, “From Theory to Practice: Computer Vision on Edge Devices for Real-Time Optimization.”

Founded by Tal Kreisler, Or Sela, and Uriel Katz, and a member of NVIDIA’s Inception startup incubator, NoTraffic uses AI sensor units at intersections to analyze traffic and optimize traffic lights accordingly.

This proactive approach contrasts with today's usual intersection technology, such as inductive-loop traffic detectors. Induction loops are installed underground, which makes them difficult to upgrade or replace, and work like metal detectors to sense cars.

Those traffic detectors are also constrained by the intersection’s fixed time plan. While they can minimize or maximize a light’s duration, the detectors can’t override a light — even if there are no cars coming in that direction.

Using AI at the edge, NoTraffic’s system reduces the cost of installation and maintenance, and gives intersections the ability to prepare for vehicles rather than just react to them.

Rewiring the Rules of the Road

NoTraffic starts by installing AI sensor units aimed in every direction of an intersection. The units use the NVIDIA Jetson platform and GPU-accelerated frameworks to fuse machine vision and radar, processing data roughly 15 times a second, according to Valinsky.
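How the vision and radar streams are fused isn't detailed publicly; one common approach is to associate camera detections with radar returns by bearing, roughly as in this hypothetical sketch (all field names are invented for illustration):

```python
def fuse_vision_radar(camera_dets, radar_tracks, max_angle_diff=3.0):
    """Attach radar range/speed to camera detections by nearest azimuth.

    camera_dets:  list of dicts like {"cls": "car", "azimuth": degrees}
    radar_tracks: list of dicts like {"azimuth": deg, "range_m": m,
                                      "speed_mps": v}
    Returns camera detections enriched with the closest radar track
    within max_angle_diff degrees, if any.
    """
    fused = []
    for det in camera_dets:
        best, best_diff = None, max_angle_diff
        for trk in radar_tracks:
            diff = abs(det["azimuth"] - trk["azimuth"])
            if diff <= best_diff:
                best, best_diff = trk, diff
        out = dict(det)
        if best is not None:
            out.update(range_m=best["range_m"], speed_mps=best["speed_mps"])
        fused.append(out)
    return fused
```

Run 15 times a second, a loop like this gives each visually classified road user the precise range and speed that radar measures well and cameras don't.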

The sensor units also integrate connected-vehicle capabilities based on dedicated short range communications (DSRC) and cellular vehicle-to-everything (CV2X). DSRC provides wireless communication channels between vehicles and infrastructure, while CV2X enables communication among vehicles, infrastructure and any related entities.

NoTraffic’s units can then detect and classify all road users — including cars, buses, trucks, bicycles, and pedestrians — at the edge. The processed data is then streamed to the Optimization Engine, installed in the traffic signal control cabinets that are already present at most intersections.

There, the data is used to optimize and manage traffic lights, both at the individual intersection and across the city grid. By placing compute closer to the point of action, NoTraffic’s edge system saves bandwidth and lowers latency for faster calculations.

NoTraffic securely sends data from each intersection to the cloud for further processing and city-grid optimization, and presents the information in a dashboard designed for city engineers. They can use it for big data analytics, remote monitoring of intersections, and the implementation of new traffic policies.

NoTraffic, No Problems

Analyzing data in real time enables capabilities such as collision prediction. NoTraffic's director of business development, Ilan Rozenberg, explained that the sensor units calculate the speed, acceleration and direction of vehicles, so the system can infer when two cars can't see each other and are likely to collide.
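A minimal kinematic version of such a collision check, assuming constant acceleration over a short horizon (the parameters and representation here are invented, not NoTraffic's implementation), might look like this:

```python
def predict_collision(a, b, horizon_s=3.0, step_s=0.1, threshold_m=2.0):
    """Predict whether two road users are on course to collide.

    Each road user is a dict {"x", "y", "vx", "vy", "ax", "ay"} in metres,
    m/s and m/s^2. Extrapolates constant-acceleration motion and flags a
    collision if predicted separation drops below threshold_m within the
    horizon. Returns (hit, time_of_closest_breach_or_None).
    """
    t = 0.0
    while t <= horizon_s:
        def pos(v):
            return (v["x"] + v["vx"] * t + 0.5 * v["ax"] * t * t,
                    v["y"] + v["vy"] * t + 0.5 * v["ay"] * t * t)
        (ax_, ay_), (bx_, by_) = pos(a), pos(b)
        if ((ax_ - bx_) ** 2 + (ay_ - by_) ** 2) ** 0.5 < threshold_m:
            return True, round(t, 2)
        t += step_s
    return False, None
```

Two vehicles approaching the same point from perpendicular streets, each blind to the other, would trip the check a second or more before impact, leaving time to adjust the signal.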

The sensors’ ability to classify vehicles also makes it possible to prioritize certain road users. If a city wanted to prioritize public transportation or pedestrians at the intersections surrounding schools on weekday mornings, city engineers would input that policy in their dashboards. NoTraffic’s system would make the changes autonomously.
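In spirit, such a policy reduces to a rule like the following sketch; the time window, user classes and bonus seconds are invented for illustration, not taken from NoTraffic's product:

```python
from datetime import datetime

def green_extension(detected_users, policy, now):
    """Extra green seconds for an approach under a city-defined policy.

    `policy` maps a road-user class to bonus seconds of green. The rule
    applies only on weekday mornings, a stand-in for the kind of schedule
    a city engineer might enter in the dashboard.
    """
    if now.weekday() >= 5 or not (7 <= now.hour < 9):
        return 0  # outside the policy window: no priority adjustment
    return sum(policy.get(user, 0) for user in detected_users)
```

On a Monday at 8 a.m., an approach with a bus and a pedestrian waiting would earn their combined bonus; the same detections on a Saturday would earn nothing.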

The company is currently focused on the U.S., and is conducting pilot projects in several cities and counties across the country. Annually, NoTraffic is reducing delays by 2,700 hours and preventing 33 tons of emissions per intersection.

The company looks forward to making their technology even smarter — the longer it’s implemented, the better it’ll be able to predict vehicle and pedestrian patterns.

To learn more about the AI powering NoTraffic, come out to GTC DC, Nov. 4-6.

The post NoTraffic, No Problems: AI Startup Improves Intersections appeared first on The Official NVIDIA Blog.

Picture-Perfect Product Help: AI Startup Brings Computer Vision to Customer Service

When your appliances break, the last thing you want to do is spend an hour on the phone trying to reach a customer service representative.

Using computer vision, Drishyam.AI is eliminating service lines to help consumers more quickly.

Satish Mandalika, the CEO and founder of the deep learning-based image recognition platform, spoke with AI Podcast host Noah Kravitz about the company.

“Customer support is ripe for disruption,” Mandalika said. Drishyam.AI is changing the game by giving customers an app that they use to take a picture of the product they need help with at any time of day or night, rather than calling a help line.

Using computer vision, Drishyam.AI analyzes the issue and communicates directly with manufacturers, rather than going through retail outlets. This is more efficient because a product's lifetime warranty is usually held by the company that made it, rather than the stores that sell it, such as Home Depot and Lowe's.

Since its founding two years ago, Drishyam.AI has pursued relationships only with manufacturers, but as the company collects more and more data, that could change, Mandalika said. "We build that intelligence across product lines in a domain, and then we can turn around and help the consumer directly," Mandalika said.

A member of NVIDIA's Inception startup incubator, Drishyam.AI is running pilot projects with two large faucet manufacturers, which will soon be converted into paying clients.

The home improvement domain is Drishyam.AI's beachhead, given the number of products in that field that carry lifetime warranties and require customer support. However, the company is expanding into a variety of fields.

Mandalika’s vision for Drishyam.AI is that eventually, “You should be able to get support for any product that you need by just pointing your mobile device at it. And platforms like ours will then help you identify the products, troubleshoot, and even order parts and all that.”

To find out more about Drishyam.AI, visit its website or Twitter.

How to Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Your favorite not listed here? Email us at aipodcast [at] nvidia [dot] com or fill out this short listener survey.

The post Picture-Perfect Product Help: AI Startup Brings Computer Vision to Customer Service appeared first on The Official NVIDIA Blog.

Connecting the Dots: Domino Data Lab Drops Into Data Science Wave

Event opportunity: Join Josh Poduska, Domino Data Lab’s chief data scientist, who will be presenting at GTC DC on Tuesday, Nov. 5

As Wall Street was morphing into a game of quants, Nick Elprin, Christopher Yang and Matthew Granade saw something big shaping up on the horizon: a data science wave swelling across industries.

So, the three left Bridgewater Associates, the world’s largest hedge fund, and shortly thereafter started Domino Data Lab, an open source data science platform now making a splash with AI developers worldwide. 

“My co-founders and I built a lot of the internal platforms and technology that those quants used at Bridgewater to do their quantitative research — what the rest of the world now calls data science,” said Elprin, the company’s CEO.

The San Francisco company, a member of the NVIDIA Inception program that helps startups scale, in August landed on Inc. magazine’s annual list of the fastest-growing private companies.

Bridgewater to Domino

After leaving Bridgewater in 2013, the three found that what companies lacked most was an industrialized platform for data science teams, so they started Domino Data Lab to fill the void.

“The experience and perspective at Bridgewater let us see the white space in the market to see what technology and products could do,” said Elprin.

Domino’s software platform automates infrastructure for data scientists, enabling users to accelerate research, deploy models and track projects.

Under Domino’s Hood

A data science supercharger, Domino’s customizable environment provides users with data science tools to speed workflows.

Its Domino Analytics Distribution offers a scientific computing stack for programming in Python, R, Julia and other popular languages. Domino offers access to commonly used interactive tools and notebooks, including Jupyter, RStudio, Zeppelin and Beaker.

Domino also provides deep learning packages and GPU drivers, including access to frameworks such as TensorFlow, Theano and Keras. The platform enables access to any NVIDIA GPUs in the cloud.

“Working with NVIDIA has helped Domino build products that allow our mutual customers to automate deployment of workloads to GPUs,” Elprin said. “NVIDIA Inception has also helped us grow our Fortune 500 customer base through podcasts and conference talks.”

Customer Domino Effect

Companies are lining up. Red Hat, Dell, Bayer, Allstate, Gap and Bristol-Myers Squibb are all using Domino to accelerate their data science workflows.

“Our investment in Domino has really paid off — probably a return around 10x in terms of efficiency of our data science community,” said Heidi Lanford, vice president of enterprise data and analytics at Red Hat, in a video.

Image credit: Photo by Shalom Jacobovitz, licensed under Creative Commons.

The post Connecting the Dots: Domino Data Lab Drops Into Data Science Wave appeared first on The Official NVIDIA Blog.

DC Startup Casts an AI Net to Stop Phishing and Malware

When the price went way up on a key service a small Washington, D.C., firm was using to protect its customers’ internet connectivity, the company balked.

After not finding a suitable alternative, the company decided to build its own. The result was a whole new business, called DNSFilter, which is casting a wide net around the market to combat phishing and malware.

Its innovation: It ditched the crowdsourcing model that has served for more than a decade as the bedrock for identifying whether websites are valid or corrupt. It opted, instead, for GPU-powered AI to make web surfing safer by identifying threats and objectionable content much faster than traditional offerings.

“We figured that if we built a whole new DNS from the ground up, built on artificial intelligence and machine learning, we could find threats faster and more effectively,” said Rustin Banks, chief revenue officer and one of four principals at DNSFilter.

Spinning Up Phishing Protection

DNS, or domain name system, is the naming system for computers, phones and services that connect to the internet. DNSFilter’s aim is to protect these assets from malicious websites and attacks.

The company’s algorithm takes seconds to compare websites to a machine learning model generated from 30,000 known phishing sites. To date, its AI blocks over 90 percent of new requests to potentially corrupt sites.
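DNSFilter hasn't published its features, but domain classifiers of this kind typically score lexical signals of the domain name itself. This toy sketch, with invented features and weights, shows the shape of the idea:

```python
import math
from collections import Counter

def domain_features(domain):
    """Toy lexical feature vector of the kind a phishing model might use."""
    name = domain.lower()
    counts = Counter(name)
    # Character entropy: randomly generated domains tend to score higher.
    entropy = -sum((c / len(name)) * math.log2(c / len(name))
                   for c in counts.values())
    return {
        "length": len(name),
        "digits": sum(ch.isdigit() for ch in name),
        "hyphens": name.count("-"),
        "subdomains": name.count("."),
        "entropy": round(entropy, 2),
    }

def suspicion_score(features, weights=None):
    """Linear score; higher means more phishing-like. Weights are made up."""
    w = weights or {"length": 0.05, "digits": 0.3, "hyphens": 0.4,
                    "subdomains": 0.5, "entropy": 0.2}
    return sum(w[k] * features[k] for k in w)
```

A long, hyphen-laden lookalike such as "secure-login-paypal.example-verify.com" scores far above a short legitimate domain, which is exactly the separation a trained model learns from its 30,000 labeled examples.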

It’s this speed that largely separates DNSFilter from the rest of the industry, Banks said. It gets results in near real time, while competitors typically take around 24 hours.

The company’s algorithm has been built and trained in the cloud using NVIDIA P4 GPU clusters.

“NVIDIA GPUs allow us to rapidly train AI, while being able to use cutting-edge frameworks. It’s not a job I would want to do without them,” said Adam Spotton, chief data scientist at DNSFilter.

Inferencing occurs at 48 locations worldwide, hosted by 10 vendors who’ve passed DNSFilter’s rigorous security standards.

Banks said the company’s rivals primarily use a company in the Philippines that has a team of 150 people classifying sites all day. But for DNSFilter, the more corrupt sites it identifies, the faster and more accurate its algorithm becomes. (Disclosure: NVIDIA is one of the company’s biggest customers.)

Moreover, DNSFilter’s solution works at the network level so there’s no plug-in necessary and the solution works with any email client, protecting organizations regardless of where employees are or what device they’re using.

“If the CFO uses his Yahoo mail on his mobile device, it doesn’t matter,” said Banks. “It’s built right into the fabric of the internet request.”

Upping the Ante

Banks estimates that DNS filtering represents a billion-dollar market, and he’s confident that the $10 billion firewall market is in play for DNSFilter.

Already, the startup is fielding more than a billion DNS requests a day. Banks foresees that number rising to 10 billion by the end of 2020. He also expects accuracy will come to exceed 99 percent as the dataset of corrupt sites grows.

The company isn’t stopping there. More services are planned, including a logo-analysis product currently in beta. It scans logos on sites linked from phishing emails and compares them against a database of approved sites to determine whether the logo is real. It then blocks phishing sites in real time.
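One common way to compare a scraped logo against a database of known marks is a perceptual hash. This tiny average-hash is an illustration of the technique, not DNSFilter's actual method:

```python
def average_hash(pixels):
    """Tiny perceptual hash of a grayscale image (list of rows of 0-255 ints).

    Each pixel becomes one bit: 1 if brighter than the image mean. Two
    logos whose hashes differ in only a few bits are likely the same mark,
    even after resizing or mild recompression.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))
```

A phishing page reusing a bank's logo would hash within a few bits of the genuine mark, while matching a domain the bank doesn't own, a combination that flags the site.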

Eventually, Banks said, the company intends to evolve from its current machine learning feedback loop to a neural network with sufficient cognition to identify things that its algorithms can’t find.

This, he said, would be like having an extra pair of eyes inside an organization’s security team, constantly monitoring suspicious web surfing wherever employees may be working.

“This is taking phishing protection to a new level,” said Banks. “It’s like network-level protection that comes with you wherever you go.”

The post DC Startup Casts an AI Net to Stop Phishing and Malware appeared first on The Official NVIDIA Blog.

AI’s Latest Adventure Turns Pets into GANimals

Imagine your Labrador’s smile on a lion or your feline’s finicky smirk on a tiger. Such a leap is easy for humans to perform, with our memories full of images. But the same task has been a tough challenge for computers — until the GANimal.

A team of NVIDIA researchers has defined new AI techniques that give computers enough smarts to see a picture of one animal and recreate its expression and pose on the face of any other creature. The work is powered in part by generative adversarial networks (GANs), an emerging AI technique that pits one neural network against another.

You can try it for yourself with the GANimal app. Input an image of your dog or cat and see its expression and pose reflected on dozens of breeds and species from an African hunting dog and Egyptian cat to a Shih-Tzu, snow leopard or sloth bear.

I tried it, using a picture of my son’s dog, Duke, a mixed-breed mutt who resembles a Golden Lab. My fave — a dark-eyed lynx wearing Duke’s dorky smile.

There’s potential for serious applications, too. Someday movie makers may video dogs doing stunts and use AI to map their movements onto, say, less tractable tigers.

The team reports its work this week in a paper at the International Conference on Computer Vision (ICCV) in Seoul. The event is one of three seminal conferences for researchers in the field of computer vision.

Their paper describes what the researchers call FUNIT, “a Few-shot, UNsupervised Image-to-image Translation algorithm that works on previously unseen target classes that are specified, at test time, only by a few example images.”

“Most GAN-based image translation networks are trained to solve a single task. For example, translate horses to zebras,” said Ming-Yu Liu, a lead computer-vision researcher on the NVIDIA team behind FUNIT.

“In this case, we train a network to jointly solve many translation tasks where each task is about translating a random source animal to a random target animal by leveraging a few example images of the target animal,” Liu explained. “Through practicing solving different translation tasks, eventually the network learns to generalize to translate known animals to previously unseen animals.”

Before this work, network models for image translation had to be trained using many images of the target animal. Now, one picture of Rover does the trick, in part thanks to a training function that includes many different image translation tasks the team adds to the GAN process.

The work is the next step in Liu’s overarching goal of finding ways to code human-like imagination into neural networks. “This is how we make progress in technology and society by solving new kinds of problems,” said Liu.

The team — which includes seven of NVIDIA’s more than 200 researchers — wants to expand the new FUNIT tool to include more kinds of images at higher resolutions. They are already testing it with images of flowers and food.

Liu’s work in GANs hit the spotlight earlier this year with GauGAN, an AI tool that turns anyone’s doodles into photorealistic works of art.

The GauGAN tool has already been used to create more than a million images. Try it for yourself on the AI Playground.

At the ICCV event, Liu will present a total of four papers in three talks and one poster session. He’ll also chair a paper session and present at a tutorial on how to program the Tensor Cores in NVIDIA’s latest GPUs.

The post AI’s Latest Adventure Turns Pets into GANimals appeared first on The Official NVIDIA Blog.

Clean Sweep: Tokyo Robotics Company Builds Tidying Robots

Though creating an autonomous robot that can tidy a room seems like enough of an achievement, Tokyo-based Preferred Networks goes one step further. By integrating natural language processing into their technology, their robots respond to commands and adjust their actions.

Jun Hatori, a software engineer at Preferred Networks, spoke with AI Podcast host Noah Kravitz about the company’s latest developments.

To create robots that can understand how to clean up a room and respond to demands, Hatori described two main obstacles.

“I started to realize that robots can’t do as much as we can instruct,” he said. While NLP technology allows robots to understand the commands being given, their hardware isn’t always advanced enough to carry out the tasks.

The second challenge is crafting a robot that can understand the nuances of human language. “If you’re going to give it a command — like, ‘Pick up that white stuff’ — then the robot basically has to know what kind of items are there, how they’re placed, and what the word ‘white’ means,” Hatori said.
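At its simplest, grounding a command like that means intersecting the words heard with the objects seen. This toy sketch (the scene format and matching rule are invented; Preferred Networks' system is far richer) illustrates the problem:

```python
def ground_command(command, scene):
    """Map a spoken command like 'pick up that white stuff' to detections.

    `scene` is a list of detected objects, e.g. {"name": "cup",
    "color": "white"}. The robot must know which objects are present and
    what a word like 'white' refers to; here an object matches if its
    color or name appears in the command.
    """
    words = command.lower().replace(".", "").split()
    return [obj for obj in scene
            if obj["color"] in words or obj["name"] in words]
```

Given a scene holding a white cup and a red ball, "pick up that white stuff" resolves to the cup, while deictic words like "that" on their own resolve to nothing, which is the hard part the company's gesture recognition helps with.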

Preferred Networks has overcome these challenges to craft a robot with computer vision and object detection technology, as well as human-robot interaction capabilities such as gesture recognition and spoken language interpretation.

Their robot first assesses the room and creates a task list based on the objects that are out of place. Using a “paragrip” — a pinching hand — the robot grasps objects and puts them away.

By integrating NLP capabilities, users can instruct the robot to put objects elsewhere.

Preferred Networks has also applied this human-robot interaction technology in the biohealth, industrial and automobile domains.

But their focus is still on personal robots. “Everyone knows it has huge potential if someone can build something actually usable,” Hatori said. “In the coming years, I think there’s going to be very big competition among many companies and research groups.”

You can see Preferred Networks’ cleaning robot in action along with their other projects at their website.

How to Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. Your favorite not listed here? Email us at aipodcast [at] nvidia [dot] com or fill out this short listener survey.

The post Clean Sweep: Tokyo Robotics Company Builds Tidying Robots appeared first on The Official NVIDIA Blog.

AI Gold Seen in Healthcare’s Mountain of Waste

A new report estimates the cost of waste in the U.S. healthcare system alone ranges as high as $935 billion a year, about 25 percent of total healthcare spending.

A growing army of startups and established practitioners sees the inefficiencies as a trillion-dollar opportunity to apply AI.

The U.S. spends about 18 percent of its gross domestic product on healthcare, more than any other country. A report published online by the Journal of the American Medical Association surveyed 54 studies to estimate annual waste figures in six broad categories, including failures from choosing ineffective treatments (up to $166 billion), failures from coordinating multiple treatments ($78 billion), fraud and abuse ($84 billion) and administrative complexity ($266 billion).

“Implementation of effective measures to eliminate waste represents an opportunity to reduce the continued increases in U.S. health care expenditures,” the report concluded.

MICCAI Heard the Call

Researchers echoed that theme at a major medical imaging conference in Shenzhen, China, recently.


Catherine Mohr, vice president of strategy at Intuitive Surgical, reviewed the history of medtech with an eye on “how to think about distinguishing price from value when developing the next generation of medical devices,” in a keynote at this year’s International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI).

Attendees also got an update on the state of the art in using AI in medical imaging in a keynote from Shiyuan Liu, president of the Chinese Medical Imaging AI Innovation Alliance. Liu called for practitioners, vendors and academics to work together to drive AI forward.

700+ AI Healthcare Startups

Opportunities span the waterfront. “Every single type of health professional” will be impacted by AI, said Eric Topol, founder and director of the Scripps Research Translational Institute, in a keynote at NVIDIA’s GTC event in Silicon Valley earlier this year. AI will help practitioners provide “better, faster, cheaper” care, said the author of the recently released book, “Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again.”

That message has not been lost on entrepreneurs. A recent healthcare event sponsored by a major Wall Street bank was “crawling with tech VCs, and five years ago that was not the case,” said Jeff Herbst, vice president of business development at NVIDIA.

With more than 700 startups, healthcare represents the largest category in NVIDIA’s Inception accelerator program that provides AI training and tools to fuel their growth. Herbst calls out Biotrillion as one to watch. The startup generates digital biomarkers to detect disease using its own analytics on sensor data from a user’s smartphone and smartwatch.

“The biggest opportunity in healthcare is in using AI to keep people well — this is the most exciting area to me,” he said.

There’s no shortage of other examples. San Francisco-based Fathom is developing deep learning tools to automate the painstaking medical coding process while increasing accuracy. Its tools use NVIDIA P100 and V100 Tensor Core GPUs in Google Cloud for both training and inference, reducing human time spent on medical coding by as much as 90 percent.

Houston-based InformAI helps reduce fatigue and stress for radiologists by building deep learning tools that can help them analyze medical scans faster. Its image classifiers and patient-outcome predictors run on both NVIDIA V100 GPUs in the Microsoft Azure cloud platform and an onsite NVIDIA DGX Station. In just 30 seconds, they can analyze a patient's 3D CT scan for 20 sinus conditions.

Subtle Medical of Menlo Park, CA, announced this week that it received FDA clearance for SubtleMR, its deep learning solution for improving the image quality of MRIs. The Inception member’s first product, SubtlePET, which can produce PET images in as little as a quarter of the scanning time of current systems, received FDA clearance last year. Both products are trained on DGX-1 and DGX Station and enabled by TensorRT.

Major Players Embrace AI

Medical imaging is one of the biggest areas in healthcare AI, with startups scattered around the globe. They include South Korean startup Lunit and InferVISION, one of China’s top medical imaging startups, focusing on lung nodule analysis and prediction from CT scans.

Major providers and vendors are also embracing AI. Two developers from UnitedHealth Group, one of the largest healthcare companies in the U.S., shared in a talk at GTC earlier this year how the provider is adopting AI for tasks that span prior authorization of medical procedures to directing phone calls.

In June, Siemens Healthineers and NVIDIA shared their latest work in AI for medical imaging at the Society for Imaging Informatics in Medicine annual conference. Siemens Healthineers is using an NVIDIA GPU-based supercomputing infrastructure to develop AI software for generating organ segmentations that enable precision radiation therapy.

“The area that will have the biggest impact in AI is healthcare,” said Ian Buck, vice president of NVIDIA’s Accelerated Computing Group in a recent interview.

“The healthcare industry is chock full of data … there are many obstacles ahead, but I am truly hopeful AI can help cure diseases and save lives — that makes me excited about the work we do,” Buck said.

The post AI Gold Seen in Healthcare’s Mountain of Waste appeared first on The Official NVIDIA Blog.

AI’s New Onramp: Meet the Data Science PC

The trip to AI and big-data analytics is now just a click away. Starting today, three NVIDIA partners are selling online a new class of computers we call data science PCs.

The systems bundle the hardware and software data scientists need to hit an “on” button and start managing datasets and models to make AI predictions. Data science PCs tap NVIDIA TITAN RTX GPUs and RAPIDS software to deliver 3-6x speed-ups compared to CPU-only desktops.

Three experts in building high-end PCs — Digital Storm, Maingear and Puget Systems — are offering the products now. They’re targeting an expanding class of independent data scientists to help them achieve better results faster.

A data science PC handled extract-transform-load (ETL) and XGBoost training on a dataset derived from New York City taxis, delivering end-to-end predictions in one-sixth the time of a CPU-only desktop.

Some of the world’s largest and most innovative organizations are already using GPU-accelerated servers and workstations to tackle their demanding data-science jobs.

For example, Walmart’s supermarket of the future can process in real time more than 1.6 terabytes of data generated per second, using NVIDIA’s EGX platform. The Summit system at Oak Ridge National Laboratory can tap its 27,648 NVIDIA V100 Tensor Core GPUs to drive 3.3 exaflops of mixed-precision horsepower on AI tasks.

But data science isn’t just for large enterprises. Startups, researchers, students and enthusiasts are jumping into this burgeoning field. They’re contributing to the corporate momentum making the role of data scientist one of the fastest growing jobs in the U.S.

The data science PC aims to fuel this growing class of independent data science practitioners. The combination of powerful, pre-configured systems and a tested software stack can jumpstart their work.

The Speeds and Feeds

Under the hood, a data science PC includes one or two TITAN RTX GPUs, each with up to 24GB of memory. NVLink high-speed interconnect technology connects the two GPUs to tackle datasets that demand more GPU memory.

The systems can accommodate 48-128GB of main memory and storage options include drives that range up to 10TB.

Each data science PC will ship with Linux and RAPIDS, NVIDIA’s data science software stack, powered by its popular CUDA-X AI programming libraries.

NVIDIA RAPIDS eases the job of porting existing code for GPU acceleration. Its APIs are modeled after popular libraries used in data science. In many cases, it’s only necessary to change a few lines of code in order to tap the potential of GPU acceleration.

Here are some of the key elements of RAPIDS:

  • cuDF is a Python GPU data-frame library for loading, joining, aggregating, filtering and otherwise manipulating data. The API is designed to be similar to Pandas, so existing code easily maps to the GPU.
  • cuML accelerates popular machine learning algorithms, including XGBoost, PCA, k-means, k-nearest neighbors and more. It is closely aligned with scikit-learn.
  • cuGraph is a library of graph algorithms, similar to NetworkX, that works with data stored in a GPU data frame.
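The few-lines-of-code porting pattern can be sketched with pandas, whose API cuDF mirrors. This is a minimal illustration rather than RAPIDS code itself: on a GPU system, swapping the import below for `import cudf as pd` is typically the only change needed to move the same dataframe operations onto the GPU.

```python
# Minimal sketch of the RAPIDS porting pattern. On a GPU system this
# line would become `import cudf as pd`; the rest stays unchanged.
import pandas as pd

df = pd.DataFrame({
    "fare":       [7.5, 12.0, 3.25, 52.0],
    "passengers": [1,   2,    1,    3],
})

# Groupby/aggregate calls use the same API in pandas and cuDF.
avg_fare = df.groupby("passengers")["fare"].mean()
print(avg_fare.to_dict())  # average fare keyed by passenger count
```

The same principle applies across the stack: cuML mirrors scikit-learn's fit/predict interface and cuGraph mirrors NetworkX, so existing pipelines keep their shape while the heavy lifting moves to the GPU.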

An ecosystem of startups in Inception, NVIDIA’s virtual accelerator program for startups focused on AI and data science, provides applications and services that run on top of RAPIDS. They include companies such as Graphistry and OmniSci, which offer big-data visualization tools.

Data scientists can also use NVIDIA’s data science developer forum to ask questions and learn more about data science on GPUs.

The data science PC is here, ready to propel you to an AI future. Learn more from our partners Digital Storm, Maingear and Puget Systems.

The post AI’s New Onramp: Meet the Data Science PC appeared first on The Official NVIDIA Blog.

Keys to the (Smart) City: NVIDIA Powers Mini AI Metropolis at MWC Los Angeles

This neighborhood has shopping, cafes, even a place to go to school. But it fits neatly inside a 2,000-square-foot trade show booth, showing off smart city technology.

With a miniature town erected on the show floor, NVIDIA welcomed more than 22,000 telecom industry professionals attending the Mobile World Congress in Los Angeles.

Crowds jammed into the booth to see how pervasive AI and connectivity can elevate experiences in the world around us. While most cities are powered by elaborate power and water grids, this one’s infrastructure is built on NVIDIA AI technologies.

They include the just-introduced NVIDIA EGX Edge Supercomputer Platform, the NVIDIA Metropolis smart city developer kit and NVIDIA Xavier, the world’s most powerful system-on-a-chip, among a multitude of others.

A Vibrant Ecosystem

Front and center, a huge display monitoring the heart of our virtual city told the stories of a diverse array of NVIDIA EGX-powered technologies.

Qwake Technologies, for example, creates augmented reality maps to guide firefighters. Volvo Trucks has built Vera, the first cabin-less autonomous truck to move cargo. Blue River Technologies is using AI to apply tiny doses of pesticides with incredible precision.

Shopping Spree

Steps away, showgoers could stroll into a convenience store stocked with everything from Fuji spring water to KitKats to Chex Mix. Thanks to startup AIFi’s EGX-powered systems, customers could simply grab what they needed and go, with the purchase charged automatically.

Another startup, AnyVision, showed how it’s using EGX to analyze customers’ shopping behavior in real time, giving brick-and-mortar stores the same kind of insights long enjoyed by online ones.

Nearby, Malong Technologies showed how its GPU-powered system allows shoppers to grab a bunch of grapes or a banana, and have the checkout system instantly recognize it — no bar codes needed.

Around the corner, food delivery service Postmates showed off what it describes as the first socially aware food delivery robot. Its diminutive yellow robot, equipped with powerful lidar sensors and a playful digital face, is powered by NVIDIA EGX servers running in a data center, NVIDIA Xavier and the NVIDIA JetPack software development kit.

No modern city is complete without a gaming café. The NVIDIA Edge Cafe’s games, however, are hosted on a data center miles away and beamed to devices over Wi-Fi and Verizon’s 5G network.

The result: inexpensive, lightweight laptops and smartphones equipped with game controllers could play the latest games in stunning high definition at 60 frames per second.

Gawking at an Invisible Car

Of course no great street scene — or trade show booth — is complete without a car. In a witty twist, this car’s invisible until you pick up an ordinary smartphone.

Looking through the phone’s screen, you can check out a million-dollar, cherry-red McLaren Senna sports coupe mounted on a pedestal at the front of the booth.

“Okay, so that was pretty cool,” said Danny Miller, a car aficionado who works in sales and marketing for a media company, after taking a long look at the curvaceous virtual coupe.

The demo relied on the NVIDIA CloudXR software development kit, which lets enterprises deliver virtual and augmented reality experiences over 5G networks. Showgoers saw a virtual car built from 28 million polygons and rendered on an NVIDIA Quadro RTX 8000 GPU.

No City’s Complete Without Top-Rated Schools

This city is even equipped with not one, but two places where you can go to school to learn more.

The NVIDIA theater features speakers from around the industry — like Kundana Palagiri, principal program manager for Microsoft Azure; and Usman Sadiq, deep learning product manager at Cisco — who shared their real-world experience with scores of listeners.

Around the corner, NVIDIA’s Deep Learning Institute had set up a bank of 15 laptops, offering developers, data scientists, researchers and students hands-on training in AI and accelerated computing to solve real-world problems, led by expert instructors.

Attendees from marketing, security and customer service companies, among others, inspired by what they’d heard at the show, signed up for hands-on training.

Join the Crowd

This tiny city, in short, has almost anything you could need. The only downside: like any bustling city, there’s plenty of traffic. All of it, in this case, on foot.

Scores of attendees crowded into the booth to gawk, snap photos and grab black-shirted NVIDIA employees to ask questions and exchange business cards.

In town? Stop by booth 1745 in the South Hall of the Los Angeles Convention Center.

The post Keys to the (Smart) City: NVIDIA Powers Mini AI Metropolis at MWC Los Angeles appeared first on The Official NVIDIA Blog.