Run ONNX models with Amazon Elastic Inference

At re:Invent 2018, AWS announced Amazon Elastic Inference (EI), a new service that lets you attach just the right amount of GPU-powered inference acceleration to any Amazon EC2 instance. This is also available for Amazon SageMaker notebook instances and endpoints, bringing acceleration to built-in algorithms and to deep learning environments.

In this blog post, I show how to use models from the ONNX Model Zoo on GitHub to perform inference with MXNet, using an Elastic Inference Accelerator (EIA) as the backend.

The benefits of Amazon Elastic Inference

Amazon Elastic Inference allows you to attach low-cost GPU-powered acceleration to Amazon EC2 and Amazon SageMaker instances to reduce the cost of running deep learning inference by up to 75 percent.

Amazon Elastic Inference provides support for Apache MXNet, TensorFlow, and ONNX models. ONNX is an open standard format for deep learning models that enables interoperability between deep learning frameworks such as Apache MXNet, Caffe2, Microsoft Cognitive Toolkit (CNTK), PyTorch, and more. This means that you can use any of these frameworks to train a model, export the model in ONNX format, and then import it into Apache MXNet for inference.
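For example, here is a minimal sketch of that workflow using PyTorch and torchvision (both assumed to be installed; the model and file names are illustrative). The tutorial below skips this step and uses a pre-exported model from the ONNX Model Zoo instead.

import torch
import torchvision
from mxnet.contrib.onnx.onnx2mx.import_model import import_model

# Train or load a model in PyTorch, then export it to ONNX
model = torchvision.models.resnet18(pretrained=True)
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)  # sample input that defines the input shape
torch.onnx.export(model, dummy_input, 'resnet18.onnx')

# Import the exported ONNX model into MXNet for inference
sym, arg_params, aux_params = import_model('resnet18.onnx')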

You can see the collection of pre-trained, state-of-the-art models in ONNX format at the ONNX Model Zoo on GitHub.

Getting started with inference using the ResNet-152v1 model

To start the tutorial, I use an AWS Deep Learning AMI (DLAMI), which already provides support for Apache MXNet, EIA, ONNX, and the other required libraries. You can review the Elastic Inference prerequisites, and for detailed instructions on how to launch a DLAMI with an Elastic Inference Accelerator, see the Elastic Inference documentation. I use the standard ResNet-152v1 ONNX model from the ONNX Model Zoo for inference in MXNet.

Step 1: Activate the MXNet EI environment

To begin the tutorial, log in to your Deep Learning AMI with Conda console. Activate the Python 3 MXNet EI environment.

source activate amazonei_mxnet_p36

Step 2: Import dependencies and download the model files

From the ONNX Model Zoo, download both the ResNet-152v1 model and the synset.txt file, which contains the class labels. The code also downloads a sample image to run inference on.

import mxnet as mx
import matplotlib.pyplot as plt
import numpy as np
from mxnet.gluon.data.vision import transforms
from mxnet.contrib.onnx.onnx2mx.import_model import import_model
import os
# Download model and synset.txt files containing class labels
mx.test_utils.download('https://s3.amazonaws.com/onnx-model-zoo/resnet/resnet152v1/resnet152v1.onnx')
mx.test_utils.download('https://s3.amazonaws.com/onnx-model-zoo/synset.txt')
with open('synset.txt', 'r') as f:
    labels = [l.rstrip() for l in f]

# Download image for inference
img_path = mx.test_utils.download('https://s3.amazonaws.com/onnx-mxnet/examples/mallard_duck.jpg')

Step 3: Import the ONNX model into MXNet and perform inference

Import the ONNX model into MXNet using the ONNX-MXNet import API.

# Enter path to the ONNX model file
model_path = 'resnet152v1.onnx'
sym, arg_params, aux_params = import_model(model_path)

Load the ResNet-152v1 network for inference using the CPU context.

# Determine and set context
ctx = mx.cpu()
# Load module
mod = mx.mod.Module(symbol=sym, context=ctx, label_names=None)
mod.bind(for_training=False, data_shapes=[('data', (1,3,224,224))], 
         label_shapes=mod._label_shapes)
mod.set_params(arg_params, aux_params, allow_missing=True, allow_extra=True)

Define a predict function that preprocesses the input image, runs a forward pass, and prints the top five predicted classes.

# Preprocess input image
def preprocess(img):
    transform_fn = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])
    img = transform_fn(img)
    img = img.expand_dims(axis=0)
    return img

def predict(img):
    img = preprocess(img)
    # Run forward pass
    mod.predict(mx.nd.array(img))
    # Take softmax to generate probabilities
    scores = mx.ndarray.softmax(mod.get_outputs()[0]).asnumpy()
    # Print the top-5 predicted classes
    scores = np.squeeze(scores)
    a = np.argsort(scores)[::-1]
    for i in a[0:5]:
        print('class=%s ; probability=%f' % (labels[i], scores[i]))

Load and display the input image for inference.

img = mx.image.imread(img_path)
plt.imshow(img.asnumpy())

Step 4: Generate predictions for the input image

Call the predict function on the loaded image. The top five classes for the displayed image, along with their probabilities, are shown below.

predict(img)

Result:
class=n01847000 drake ; probability=0.999519
class=n02018207 American coot, marsh hen, mud hen, water hen, Fulica americana ; probability=0.000230
class=n01855032 red-breasted merganser, Mergus serrator ; probability=0.000130
class=n01855672 goose ; probability=0.000044
class=n09332890 lakeside, lakeshore ; probability=0.000022

Evaluate your output and improve performance

Inference on this model takes approximately 131 milliseconds on a c5.4xlarge instance, so 100,000 inference requests would cost about $2.46. This can be expensive for production use cases, so let's look at how Amazon Elastic Inference can help.

Amazon Elastic Inference is available in the following three sizes, making it efficient for a wide range of inference workloads, including computer vision, natural language processing, and speech recognition.

  • eia1.medium: 8 teraflops of mixed-precision performance
  • eia1.large: 16 teraflops of mixed-precision performance
  • eia1.xlarge: 32 teraflops of mixed-precision performance

This lets you select the best price-to-performance ratio for your application. I ran inference on the same model using both GPU and EIA contexts to compare cost and performance.

To run the model with the mx.eia() context, you only need to make two minor changes to the code (a sketch of the changes follows this list):

  1. When you use either the Symbol API or the Module API with the EIA context, make sure you set for_training=False.
  2. Bind your model with the context set to ctx=mx.eia().
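
For example, the module setup from Step 3 stays the same except for the context. This is a minimal sketch of the EIA version, reusing the symbol and parameters already imported from the ONNX file:

# Use the Elastic Inference accelerator instead of the CPU
ctx = mx.eia()
# Bind the module for inference only; for_training=False is required with EIA
mod = mx.mod.Module(symbol=sym, context=ctx, label_names=None)
mod.bind(for_training=False, data_shapes=[('data', (1,3,224,224))],
         label_shapes=mod._label_shapes)
mod.set_params(arg_params, aux_params, allow_missing=True, allow_extra=True)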

EI reduces what the host instance has to do by offloading inference to the EI accelerator, but some pre- and post-processing must still run on the host. Depending on your application's compute and memory requirements, select the instance type that is most appropriate.

I evaluated the performance of this model on C5 and M5 instances and found that it needed more CPU memory than the C5 instances offered; the M5 instances, with more RAM, were the most cost-effective solution. I ran tests on a few different M5 instance sizes with an eia1.medium accelerator and observed that instance sizes larger than the m5.xlarge didn't materially improve latency. Next, I tested the m5.xlarge with different EI accelerator sizes. Inference calls with an eia1.large accelerator were significantly faster than with an eia1.medium, but the eia1.medium, at 50 ms per inference request, met my requirements, so I didn't need more horsepower.

Based on my requirements, I decided on an m5.xlarge with an eia1.medium accelerator as the right infrastructure combination for my workload. Comparing hourly costs: a p2.xlarge costs $0.90 per hour, the m5.xlarge + eia1.medium costs $0.32 per hour, and the c5.4xlarge costs $0.68 per hour. Let's also compare the cost to perform 100,000 inferences, which incorporates both hourly cost and performance to give a meaningful comparison. The p2.xlarge costs $1.23 to execute 100,000 inferences, whereas the EI-based combination costs $0.45, a whopping 74 percent reduction in cost while sacrificing just 2 percent in speed. The c5.4xlarge costs $2.47 and is 2.5x slower than the m5.xlarge with eia1.medium.
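As a sanity check, here is a back-of-the-envelope sketch of how those per-100,000-inference costs follow from the hourly prices and measured latencies quoted above (the p2.xlarge latency is approximated from the "2 percent faster" figure; all numbers come from this post, not official pricing):

# Approximate cost of 100,000 sequential inference requests:
# cost = latency_in_seconds * 100,000 / 3600 * hourly_price
configs = {
    'c5.4xlarge':              (0.131, 0.68),  # ~131 ms per inference, $0.68/hr
    'm5.xlarge + eia1.medium': (0.050, 0.32),  # ~50 ms per inference, $0.32/hr
    'p2.xlarge':               (0.049, 0.90),  # roughly 2% faster than 50 ms, $0.90/hr
}
for name, (latency_s, price_per_hr) in configs.items():
    cost = latency_s * 100000 / 3600 * price_per_hr
    print('%-25s ~$%.2f per 100,000 inferences' % (name, cost))
# Prints roughly $2.47, $0.44, and $1.23, in line with the figures above.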

Conclusion

As you can see from this tutorial, Amazon Elastic Inference lets you select the best price-to-performance ratio for your application. For inference with the ONNX ResNet-152 model, the m5.xlarge with an eia1.medium accelerator is 2.5x faster and 81 percent cheaper than the c5.4xlarge. And with ONNX support, you can export models trained in different deep learning frameworks and run inference with EIA using Apache MXNet as the backend.

For general information about how to use EI, see Working with Amazon EI in the EC2 user guide. You can also find more information about ONNX support in MXNet in the ONNX API documentation on the MXNet website.


About the Authors

Roshani Nagmote is a Software Developer for AWS Deep Learning. She focuses on building distributed deep learning systems and innovative tools to make deep learning accessible to all. In her spare time, she enjoys hiking and exploring new places, and she is a huge dog lover.

Vandana Kannan is a Software Developer for AWS Deep Learning focusing on building scalable deep learning systems. In her spare time, she enjoys painting, learning Indian classical dance, and spending time with family and friends.

Hagay Lupesko is an Engineering Manager for AWS Deep Learning. He focuses on building Deep Learning tools that enable developers and scientists to build intelligent applications. In his spare time he enjoys reading, hiking and spending time with his family.