Modernizing wound care with Spectral MD, powered by Amazon SageMaker

Spectral MD, Inc. is a clinical research stage medical device company that describes itself as “breaking the barriers of light to see deep inside the body.” Its technology was recently designated a “Breakthrough Device” by the FDA, and Spectral MD provides an impressive solution to wound care using cutting-edge multispectral imaging and deep learning technologies. This Dallas-based company relies on AWS services including Amazon SageMaker and Amazon Elastic Compute Cloud (Amazon EC2) to support their unprecedented wound care analysis efforts. With AWS as their cloud provider, the Spectral MD team can focus on healthcare breakthroughs, knowing their data is stored and processed swiftly and effectively.

“We chose AWS because it gives us access to the computational resources we need to rapidly train, optimize, and validate the state-of-the-art deep learning algorithms used in our medical device,” explained Kevin Plant, the software team lead at Spectral MD. “AWS also serves as a secure repository for our clinical dataset that is critical for the research, development and deployment of the algorithm.”

The algorithm is the 10-year-old company’s proprietary DeepView Wound Imaging System, which uses a non-invasive digital approach that allows clinical investigators to see hidden ailments without ever coming in contact with the patient. Specifically, the technology combines visual inputs with digital analyses to understand complex wound conditions and predict a wound’s healing potential. The portable imaging device, in combination with computational power from AWS, allows clinicians to capture a precise snapshot of what is hidden to the human eye.

Spectral MD’s revolutionary solution is possible thanks to a number of AWS services for both core computational power and machine learning finesse. The company stores data captured by their device on Amazon Simple Storage Service (Amazon S3), with the metadata living in Amazon DynamoDB. From there, they back up all the data in Amazon S3 Glacier. This data fuels their innovation with AWS machine learning (ML).

To manage the training and deployment of their image classification algorithms, Spectral MD uses Amazon SageMaker and Amazon EC2. These services also help the team to achieve improved algorithm performance and to conduct deep learning algorithm research.

Spectral MD particularly appreciates that using AWS services saves the data science team a tremendous amount of time. Plant described, “The availability of AWS on-demand computational resources for deep learning algorithm training and validation has reduced the time it takes to iterate algorithm development by 80%. Instead of needing weeks for full validation, we’re now able to cut the time to 2 days. AWS has enabled us to maximize our algorithm performance by rapidly incorporating the latest developments in the state-of-the-art of deep learning into our algorithm.”

That faster timeline in turn benefits the end patients, for whom time is of the essence. Diagnosing burns quickly and accurately is critical for accelerating recovery and can have important long-term implications for the patient. Yet current medical practice (without Spectral MD) suffers a 30% diagnostic error rate, meaning that some patients are unnecessarily treated with surgery while others who would have benefited from surgery are not offered that option.

Spectral MD’s solution takes advantage of the natural patterns in the chemicals and tissues that compose human skin – and the natural pattern-matching abilities of ML. Their model has been trained on thousands of images of accurately diagnosed burns. Now, it is precise enough that the company can create datasets from scratch that differentiate pathologies from healthy skin, operating to a degree that is impossible for the human eye.

These datasets are labeled by expert clinicians using Amazon SageMaker Ground Truth. Spectral MD has extended Amazon SageMaker Ground Truth with the ability to review clinical reference data stored in Amazon S3. During the labeling process, this provides clinicians with the ideal information set to maximize the accuracy of the diagnostic ground truth labels.

Going forward, Spectral MD plans to push the boundaries of ML and of healthcare. Their team has recently been investigating the use of Amazon SageMaker Neo for deploying deep learning algorithms to edge hardware. In Plant’s words, “There are many barriers to incorporating new technology into medical devices. But AWS continually improves how easy it is for us to take advantage of new and powerful features; no one else can keep pace with AWS.”


About the Author

Marisa Messina is on the AWS ML marketing team, where her job includes identifying the most innovative AWS-using customers and showcasing their inspiring stories. Prior to AWS, she worked on consumer-facing hardware and then university-facing cloud offerings at Microsoft. Outside of work, she enjoys exploring the Pacific Northwest hiking trails, cooking without recipes, and dancing in the rain.


Authenticate users with one-time passwords in Amazon Lex chatbots

Today, many companies use one-time passwords (OTP) to authenticate users. An application asks you for a password to proceed. This password is sent to you via text message to a registered phone number. You enter the password to authenticate. It is an easy and secure approach to verifying user identity. In this blog post, we’ll describe how to integrate the same OTP functionality in an Amazon Lex chatbot.

Amazon Lex lets you easily build life-like conversational interfaces into your existing applications using both voice and text.

Before we jump into the details, let’s take a closer look at OTPs. An OTP is usually a sequence of numbers that is valid for only one login session or transaction. The OTP expires after a certain time period, after which a new one has to be generated. OTPs can be used on a variety of channels such as web, mobile, or other devices.

In this blog post, we’ll show how to authenticate your users using an example of a food-ordering chatbot on a mobile device. The Amazon Lex bot will place the order for users only after they have been authenticated by OTP.

Let’s consider the following conversation that uses OTP.

To achieve the interaction we just described, we first build a food delivery bot with the following intents: GetFoodMenu and OrderFood. The OTP is used in intents that involve transactions, such as OrderFood.

We’ll show you two different implementations of capturing the OTP – one via voice and the other via text. In the first implementation, the OTP is captured directly by Amazon Lex through the voice or text modality, and the OTP value is sent directly to Amazon Lex as a slot value. In the second implementation, the OTP is captured by the client application (using the text modality). The client application captures the OTP from a dialog box on the client and sends it to Amazon Lex as a session attribute. Session attributes can be encrypted.

It is important to note that all API calls made to the Amazon Lex runtime are encrypted using HTTPS. The encryption of the OTP when used via session attributes provides an extra level of security. Amazon Lex passes the OTP received via the session attribute or slot value to an AWS Lambda function that can verify the OTP.
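For illustration, here is a minimal Python sketch of how a client could pass the OTP to Amazon Lex as a session attribute with the PostText runtime API. The bot name, alias, user ID, and attribute key are assumptions for this example, not values from a real deployment.

import boto3

lex_runtime = boto3.client('lex-runtime')

# Hypothetical bot name, alias, and session attribute key -- adjust to your own bot.
response = lex_runtime.post_text(
    botName='FoodOrderBot',
    botAlias='Prod',
    userId='user-1234',
    inputText='I would like to order some pasta',
    sessionAttributes={
        # The client places the (optionally encrypted) OTP here; the validation
        # Lambda function reads it from the event's sessionAttributes.
        'encryptedPin': 'base64-ciphertext-or-plaintext-otp'
    }
)
print(response['message'])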

Application architecture

The bot has an architecture that is based on the following AWS services:

  • Amazon Lex for building the conversational interface.
  • AWS Lambda to run data validation and fulfillment.
  • Amazon DynamoDB to store and retrieve data.
  • Amazon Simple Notification Service (SNS) to publish SMS messages.
  • AWS Key Management Service (KMS) to encrypt and decrypt the OTP.
  • Amazon Cognito identity pool to obtain temporary AWS credentials to use KMS.

The following diagram illustrates how the various services work together.

Capturing the OTP using voice modality

When the user first starts an interaction with the bot, the user’s email or other metadata are passed from the frontend to the Amazon Lex runtime.

An AWS Lambda validation code hook is used to perform the following tasks:

  1. AWS Lambda generates an OTP and stores it in the DynamoDB table.
  2. AWS Lambda sends the OTP to the user’s mobile phone using SNS (see the sketch after this list).
  3. The user inputs the OTP into the client application, which sends it to Amazon Lex as a slot value.
  4. AWS Lambda verifies the OTP, and, if the authentication is successful, it signals Amazon Lex to proceed with the conversation.
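As referenced in step 2 above, sending the OTP comes down to a single Amazon SNS publish call. Here is a minimal Python sketch; the phone number format and message text are illustrative:

import boto3

sns = boto3.client('sns')

def send_otp(phone_number, otp):
    # Deliver the one-time password to the user's registered phone as an SMS message.
    # phone_number is expected in E.164 format, for example '+15555550100'.
    sns.publish(
        PhoneNumber=phone_number,
        Message='Your one-time password is {}'.format(otp)
    )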

After the user is authenticated, they can place an order with the Amazon Lex bot.

Capturing the OTP using text modality

Similar to the first implementation, the user’s email or other metadata are sent to the Amazon Lex runtime from the front end.

In the second implementation, an AWS Lambda validation code hook is used to perform the following tasks:

  1. AWS Lambda generates an OTP and stores it in the DynamoDB table.
  2. AWS Lambda sends the OTP to the user’s mobile phone using SNS.
  3. The user enters the OTP into the dialog box of the client application.
  4. The client application encrypts the OTP entered by the user and sends it to the Amazon Lex runtime in the session attributes.

Note: Session attributes can be encrypted.

  5. AWS Lambda verifies the OTP, and, if the authentication is successful, it signals Amazon Lex to proceed with the conversation.

Note: If the OTP is encrypted, the Lambda function needs to decrypt it first.

After the user is authenticated, the user can place an order with the Amazon Lex bot.

Generating an OTP

There are many methods of generating an OTP. In our example, we generate a random six-digit number as an OTP that is valid for one minute and store it in a DynamoDB table. To verify the OTP, we compare the value entered by the user with the value in the DynamoDB table.
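A minimal Python sketch of this generate-and-verify logic, assuming the onetimepin DynamoDB table described later in this post, with uuid as its primary key; the expiry attribute name is an assumption for illustration:

import time
import random
import boto3

dynamodb = boto3.resource('dynamodb')
otp_table = dynamodb.Table('onetimepin')

def generate_otp(uuid):
    # Create a random six-digit OTP that is valid for one minute and store it under the given uuid.
    otp = '{:06d}'.format(random.SystemRandom().randint(0, 999999))
    otp_table.put_item(Item={
        'uuid': uuid,
        'otp': otp,
        'expires_at': int(time.time()) + 60   # hypothetical expiry attribute, checked at verification time
    })
    return otp

def verify_otp(uuid, user_input):
    # Compare the value the user entered with the stored OTP and reject expired pins.
    item = otp_table.get_item(Key={'uuid': uuid}).get('Item')
    if not item or int(time.time()) > item['expires_at']:
        return False
    return user_input == item['otp']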

Deploying the OTP bot

Use this AWS CloudFormation button to launch the OTP bot in the AWS Region us-east-1:

The source code is available in our GitHub repository.

Open the AWS CloudFormation console, and on the Parameters page, enter a valid phone number. This is the phone number the OTP is sent to.

Choose Next twice to display the Review page.

Select the acknowledgement checkbox, and choose Create to deploy the ExampleBot.

The CloudFormation stack creates the following resources in your AWS account:

  • Amazon S3 buckets to host the ExampleBot web UIs.
  • Amazon Lex Bot to provide natural language processing.
  • AWS Lambda functions used to send and validate the OTP.
  • AWS IAM roles for the Lambda function.
  • Amazon DynamoDB tables to store session data.
  • AWS KMS key for encrypting and decrypting data.
  • Amazon Cognito identity pool configured to authenticate clients and provide temporary AWS credentials.

When the deployment is complete (after about 15 minutes), the stack Output tab shows the following:

  • ExampleBotURL: Click on this URL to interact with ExampleBot.

Let’s create the bot

This blog post builds upon the bot building basics covered in Building Better Bots Using Amazon Lex. Following the guidance from that blog post, we create an Amazon Lex bot with two intents: GetFoodMenu and OrderFood.

The GetFoodMenu intent does not require authentication. The user can ask the bot what food items are on the menu such as:

Please recommend something.

Show me your menu please.

What is on the menu?

What kind of food do you have?

The bot returns a list of food items the user can order when the GetFoodMenu intent is invoked.

If the user already knows which food item they want to order, they can order the food item with the following input text to invoke the OrderFood intent:

I would like to order some pasta.

Can I order some food please?

Cheese burger!

Amazon Lex uses the Lambda code hook to check if the user is authenticated. If the user is authenticated, Amazon Lex adds the food item to the user’s current order.

If the user has not been authenticated yet, the interaction looks like this:

User: I would like to order some pasta.

Bot: It seems that you are not authenticated yet. We have sent an OTP to your registered phone number. Please enter the OTP.

User: 812734

Note: If the user is using the text modality, the user’s OTP input can be encrypted.

Bot: Thanks for entering the OTP. We’ve added pasta to your order. Would you like anything else?

If the user is not authenticated, Amazon Lex initiates the multifactor authentication (MFA) process. AWS Lambda queries the DynamoDB table for that user’s mobile phone number and delivery address. After DynamoDB returns the values, AWS Lambda generates an OTP based on the user’s metadata, saves it in a DynamoDB table with a UUID as the primary key, and stores that UUID in the session attributes. Then AWS Lambda uses SNS to send the OTP to the user and elicits the pin using the pin slot in the OrderFood intent.
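For reference, here is a minimal sketch of the code-hook response the Lambda function might return so that Amazon Lex elicits the pin slot of the OrderFood intent; the prompt text is illustrative:

def elicit_pin(session_attributes, slots):
    # Code-hook response that asks Amazon Lex to prompt the user for the 'pin' slot.
    return {
        'sessionAttributes': session_attributes,   # carries uuid, auth flag, currentOrder, and so on
        'dialogAction': {
            'type': 'ElicitSlot',
            'intentName': 'OrderFood',
            'slots': slots,
            'slotToElicit': 'pin',
            'message': {
                'contentType': 'PlainText',
                'content': 'We have sent an OTP to your registered phone number. Please enter the OTP.'
            }
        }
    }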

After the user inputs the OTP, Amazon Lex uses the Lambda code hook to validate the pin. AWS Lambda queries the DynamoDB table with the UUID in the session attributes to verify the OTP. If the pin is correct, the Lambda function queries DynamoDB for the secret data; if the pin is incorrect, Lambda performs the validation step again.

Implementation details

The following tables and screenshots show you the different slot types and intents, and how you can use the AWS Management Console to specify the ones you want.

Slots

Slot name    Slot type
Food         Amazon.Food
Pin          Amazon.Number

Intents

Intent name: OrderFood
Sample utterances:

  • I would like to order some {Food}
  • {Food}
  • I would like {Food}
  • order {Food}
  • Can I order some food please
  • Can I order some {Food} please

Intent name: GetFoodMenu
Sample utterances:

  • Please recommend something.
  • Show me your menu, please.
  • What is on the menu?
  • What kind of food do you have?

The GetFoodMenu intent uses the GetMenu Lambda function as the initialization and validation code hook to perform the logic, whereas the OrderFood intent uses the OrderFood Lambda function as the initialization and validation code hook to perform the logic.

These are the steps the Lambda function follows:

  1. The Lambda function first checks which intent the user has invoked.
  2. If the payload is for the GetFoodMenu intent:
    1. We’re assuming that the client will send the following items in the session attributes for the first Amazon Lex runtime API call. Because we cannot pass session attributes in the Lex console, for testing purposes our Lambda function will create the following session attributes if they are empty.
      {'email': 'user@domain.com', 'auth': 'false', 'uuid': None, 'currentOrder': None, 'encryptedPin': None}

      • 'email' is the email address of the user.
      • 'auth': 'false' means the user is unauthenticated.
      • 'uuid' is used later as the primary key for storing the OTP in DynamoDB.
      • 'currentOrder' keeps track of the food items ordered by the user.
      • 'encryptedPin' is used by the frontend client to send the encrypted OTP. If the implementation does not require the OTP to be encrypted, this attribute is optional.
    2. The Lambda function will return a list of food items and ask the user which food items they wish to order. In other words, the Lambda function will call elicitSlot for the Food slot in the OrderFood intent.
  3. If the payload is for the OrderFood intent:
    1. As we stated earlier, for testing purposes our Lambda function will create the following session attributes if they are empty.
      {'email': 'user@domain.com', 'auth': 'false', 'uuid': None, 'currentOrder': None, 'encryptedPin': None}
    2. If the user is authenticated, the Lambda function will add the requested food item to currentOrder in the session attributes.
    3. The Lambda function will query the phoneNumbers DynamoDB table using the email address to look up the user’s phone number.
      1. If DynamoDB is not able to return a phone number matching that email address, the Lambda function will tell the user it wasn’t able to find a phone number associated with that email and will ask the user to contact support.
    4. The Lambda function will generate an OTP and a uuid. The uuid is stored in the session attributes, and the key-value pair {uuid: OTP} is stored as a record in the onetimepin DynamoDB table.
    5. The Lambda function will use SNS to send the OTP to the user’s phone number and ask the user to enter the one-time pin they received by eliciting the pin slot in the OrderFood intent.
    6. After the user enters the pin, the Lambda function will query the onetimepin DynamoDB table for the record with the uuid stored in the sessionAttributes.
      1. If the user enters an incorrect pin, the Lambda function will generate a new OTP, store it in DynamoDB, update the uuid in Session Attributes, send this new OTP to the user via SNS again, and ask the user to enter the pin again. The following screenshot illustrates this.
      2. If the pin is correct Lambda will validate the food item the user is requesting.
      3. If the “Food” slot is null, Lambda will ask the user which food item they want by eliciting the “Food” slot in the “OrderFood” intent.
      4. Lambda will add the requested food item to ‘currentOrder’ in session attributes.
  4. After the user is authenticated, the subsequent items they want to add to the order will not require authentication as long as the session does not expire.

OTP encryption and decryption

In this section, we’ll show you how to encrypt the OTP from the client side before sending the OTP as a session attribute to Amazon Lex. We’ll also show you how to decrypt the session attribute in the Lambda function.

To encrypt the OTP, the frontend needs to use Amazon Cognito identity pool to assume an unauthenticated role that has the permissions to perform the encrypt action using KMS before sending the OTP through to Amazon Lex as a session attribute. For more information on Amazon Cognito identity pool, see the documentation.

After the Lambda function receives the OTP, if it is encrypted, Lambda uses KMS to decrypt it and then queries a DynamoDB table to confirm that the OTP is correct.

Please refer to the documentation linked here for the prerequisites:

  1. Create an Amazon Cognito identity pool. Ensure that unauthenticated identities are enabled.
  2. Create a KMS key.
  3. Lock down access to the KMS key to the unauthenticated role created in step 1.
    1. Allow the unauthenticated Amazon Cognito role to use this KMS key to perform the encrypt action.
    2. Allow the Lambda function’s IAM role the decrypt KMS action.
    3. Here is an example of the two key policy statements that illustrate this.
      {
            "Sid": "Allow use of the key to encrypt",
            "Effect": "Allow",
            "Principal": {
              "AWS": [
                "arn:aws:iam::<your account>:role/<unauthenticated_role>"
              ]
            },
            "Action": [
              "kms:Encrypt"
            ],
            "Resource": "arn:aws:kms:AWS_region:AWS_account_ID:key/key_ID"
          }

      {
            "Sid": "Allow use of the key to decrypt",
            "Effect": "Allow",
            "Principal": {
              "AWS": [
                "arn:aws:iam::<your account>:role/<Lambda_functions_IAM_role>"
              ]
            },
            "Action": [
              "kms:Decrypt"
            ],
            "Resource": "arn:aws:kms:AWS_region:AWS_account_ID:key/key_ID"
          }

Now we are ready to use the frontend to encrypt the OTP:

  1. In the frontend client, use the GetCredentialsForIdentity API to get the temporary AWS credentials for the unauthenticated Amazon Cognito role. These temporary credentials are used by the frontend to access the AWS KMS service.
  2. The frontend uses the KMS Encrypt API to encrypt the OTP.
  3. The encrypted OTP is sent to Amazon Lex in the session attributes.
  4. The Lambda function uses the KMS Decrypt API to decrypt the encrypted OTP.
  5. After the OTP is decrypted, the Lambda function validates the OTP value (see the sketch following this list).
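As a rough illustration of steps 2 and 4, the following Python sketch shows the underlying KMS Encrypt and Decrypt calls. In a browser frontend you would make the equivalent calls with the AWS SDK for JavaScript using the Amazon Cognito credentials; the key ARN below is the placeholder from the policy example above.

import base64
import boto3

kms = boto3.client('kms')
KMS_KEY_ID = 'arn:aws:kms:AWS_region:AWS_account_ID:key/key_ID'

def encrypt_otp(plaintext_otp):
    # Client side: encrypt the OTP. Session attributes are strings, so base64-encode the ciphertext.
    result = kms.encrypt(KeyId=KMS_KEY_ID, Plaintext=plaintext_otp.encode('utf-8'))
    return base64.b64encode(result['CiphertextBlob']).decode('utf-8')

def decrypt_otp(encrypted_pin):
    # Lambda side: decrypt the session attribute before comparing it with the stored OTP.
    ciphertext = base64.b64decode(encrypted_pin)
    result = kms.decrypt(CiphertextBlob=ciphertext)
    return result['Plaintext'].decode('utf-8')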

Conclusion

In this post we showed how to use OTP functionality on an Amazon Lex bot using a simple example. In our design we used AWS Lambda to run data validation and fulfillment; DynamoDB to store and retrieve data; SNS to publish SMS messages; KMS to encrypt and decrypt the OTP; and Amazon Cognito identity pool to obtain temporary AWS credentials to use KMS.

It’s easy to incorporate the OTP functionality described here into any bot. You can pass the OTP from the frontend to Amazon Lex either as a slot value or a session attribute value in your intent. Then send the OTP and perform the validation using a Lambda function, and your bot is ready to accept OTPs!


About the Author

Kun Qian is a Cloud Support Engineer at AWS. He enjoys providing technical guidance to customers, and helping them troubleshoot and design solutions on AWS.

AWS DeepRacer League weekly challenges – compete in the AWS DeepRacer League virtual circuit to win cash prizes and a trip to re:Invent 2019!

The AWS DeepRacer League is the world’s first global autonomous racing league, open to anyone. Developers of all skill levels can get hands-on with machine learning in a fun and exciting way, racing for prizes and glory at 21 events globally and online using the AWS DeepRacer console. The Virtual Circuit launched at the end of April, allowing developers to compete from anywhere in the world via the console – no car or track required – for a chance to top the leaderboard and score points in one of the six monthly competitions. The top prize is an expenses-paid trip to re:Invent to compete in the 2019 Championship Cup finals, but that is not the only prize.

More chances to win with weekly challenges

The 2019 virtual racing season ends in the last week of October. Between now and then, the AWS DeepRacer League will run weekly challenges, offering more opportunities to win prizes and compete for a chance to advance to the Championship Cup on an expenses-paid trip to re:Invent 2019. Multiple challenges will launch each week, providing cash prizes in the form of AWS credits that will help you keep rolling your way up the leaderboard as you continue to tune and train your model.

More detail on each of the challenges

The Rookie – Even the best in the league had to start somewhere! To help those brand new to AWS DeepRacer and machine learning, there are rewards for those who make their debut on the leaderboard. When you submit your first model of the week to the virtual leaderboard, you’re in the running for a chance to win prizes, even if the top spot seems out of reach. If you win, you will receive AWS credits to help you build on these newfound skills and climb the leaderboard. Anything is possible once you know how, and this could be the boost you need to take that top prize of a trip to re:Invent 2019.

The Most Improved – Think the top spot is out of reach and the only way to win? Think again. A personal record is an individual achievement that should be rewarded. This challenge is designed to help developers, new and existing, reach new heights in their machine learning journey. The drivers who improve their lap time the most each week will receive AWS credits to help them continue to train, improve, and win!

The Quick Sprint challenge – In this challenge, you have the opportunity to put your machine learning skills to the test and see how quickly you can create your model for success. Those who train a model in the fastest time and complete a successful lap of the track have the chance to win cash prizes!

New challenges every week

The AWS DeepRacer League will be rolling out new challenges throughout the month of August, including rewards for hitting that next milestone. Time Target 50 rewards those who finish a lap closest to 50 seconds.

Current open challenges:

The August track, Shanghai Sudu, is open now, so what are you waiting for? Start training in the AWS DeepRacer console today, submit a model to the leaderboard, and you will automatically be entered to win! You can also learn more about the points and prizes up for grabs on the AWS DeepRacer League points and prizes page.


About the Author

Alexandra Bush is a Senior Product Marketing Manager for AWS AI. She is passionate about how technology impacts the world around us and enjoys being able to help make it accessible to all. Out of the office she loves to run, travel and stay active in the outdoors with family and friends.


Kinect Energy uses Amazon SageMaker to Forecast energy prices with Machine Learning

The Amazon ML Solutions Lab worked with Kinect Energy recently to build a pipeline to predict future energy prices based on machine learning (ML). We created an automated data ingestion and inference pipeline using Amazon SageMaker and AWS Step Functions to automate and schedule energy price prediction.

The process makes special use of the Amazon SageMaker DeepAR forecasting algorithm. By using a deep learning forecasting model to replace the current manual process, we saved Kinect Energy time and put a consistent, data-driven methodology into place.

The following diagram shows the end-to-end solution.

The data ingestion is orchestrated by a Step Functions state machine, which loads and processes data daily and deposits it into a data lake in Amazon S3. The data is then passed to Amazon SageMaker, which handles inference generation via a batch transform call that triggers an inference pipeline model.

Project motivation

The natural power market depends on a range of sources for production—wind, hydro-reservoir generation, nuclear, coal, and oil & gas—to meet consumer demand. The actual mix of power sources used to satisfy that demand depends on the price of each energy component on a given day. That price depends on that day’s power demand. Investors then trade the price of electricity in an open market.

Kinect Energy buys and sells energy to clients, and an important piece of their business model involves trading financial contracts derived from energy prices. This requires an accurate forecast of the energy price.

Kinect Energy wanted to improve and automate the process of forecasting—historically done manually—by using ML. The spot price is the current commodity price, as opposed to the future or forward price—the price at which a commodity can be bought or sold for future delivery. Comparing predicted spot prices and forward prices provides opportunities for the Kinect Energy team to hedge against future price movements based on current predictions.

Data requirements

In this solution, we wanted to predict spot prices for a four-week outlook on an hourly interval. One of the major challenges for the project involved creating a system to gather and process the required data automatically. The pipeline required two main components of the data:

  • Historic spot prices
  • Energy production and consumption rates and other external factors that influence the spot price

(We denote the production and consumption rates as external data.)

To build a robust forecasting model, we had to gather enough historical data to train the model, preferably spanning multiple years. We also had to update the data daily as the market generates new information. The model also needed access to a forecast of the external data components for the entire period over which the model forecasts.

Vendors update hourly spot prices to an external data feed daily. Various other entities provide data components on production and consumption rates, publishing their data on different schedules.

The analysts of the Kinect Energy team require the spot price forecast at a specific time of the day to shape their trading strategy. So, we had to build a robust data pipeline that periodically calls multiple API actions. Those actions collect data, perform the necessary preprocessing, and then store it in an Amazon S3 data lake where the forecasting model accesses it.

The data ingestion and inference generation pipeline

The pipeline consists of three main steps orchestrated by an AWS Step Functions state machine: data ingestion, data storage, and inference generation. An Amazon CloudWatch Events rule triggers the state machine to run on a daily schedule to prepare the consumable data.

The flowchart above details the individual steps that make up the state machine. The Step Functions workflow coordinates downloading new data, updating the historical data, and generating new inferences so that the whole process runs as a single continuous workflow.

Although we built the state machine around a daily schedule, it supports two modes of data retrieval. By default, the state machine downloads data daily. A user can also manually trigger the process to download the full historical data on demand, for setup or recovery. The step function calls multiple API actions to gather the data, each with different latencies, and the data gathering processes run in parallel. This step also performs all the required preprocessing and stores the data in S3, organized by time-stamped prefixes.
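As a rough sketch of that last step, the following Python snippet writes one day's preprocessed data to the data lake under a time-stamped prefix. The bucket layout and the choice of CSV are illustrative assumptions, not the project's exact conventions.

from datetime import datetime
import boto3
import pandas as pd

s3 = boto3.client('s3')

def store_daily_data(df, bucket, component):
    # Hypothetical layout: s3://<bucket>/<component>/YYYY/MM/DD/data.csv
    key = '{}/{}/data.csv'.format(component, datetime.utcnow().strftime('%Y/%m/%d'))
    s3.put_object(Bucket=bucket, Key=key, Body=df.to_csv(index=False).encode('utf-8'))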

The next step updates the historical data for each component by appending the respective daily elements. Additional processing prepares it in the format that DeepAR requires and sends the data to another designated folder.

The model then triggers an Amazon SageMaker batch transform job that pulls the data from that location, generates the forecast, and finally stores the result in another time-stamped folder. An Amazon QuickSight dashboard picks up the forecast and displays it to the analysts.

Packaging required dependencies into AWS Lambda functions

We set up Python pandas and the scikit-learn (sklearn) library to handle most of the data preprocessing. These libraries aren’t available by default for import into a Lambda function that the Step Functions state machine calls. To adapt, we packaged the Lambda function Python script and its necessary imports into a .zip file.

cd ../package
pip install requests --target .
pip install pandas --target .
pip install lxml --target .
pip install boto3 --target .
zip -r9 ../lambda_function_codes/lambda_all.zip .
cd -
zip -g lambda_all.zip util.py lambda_data_ingestion.py

This additional code uploads the .zip file to the target Lambda function:

aws lambda update-function-code \
  --function-name update_history \
  --zip-file fileb://lambda_all.zip

Exception handling

One of the common challenges of writing robust production code is anticipating possible failure mechanisms and mitigating them. Without instructions to handle unusual events, the pipeline can fall apart.

Our pipeline presents two major potentials for failure. First, our data ingestion relies on external API data feeds, which could experience downtime, leading our queries to fail. In this case, we set a fixed number of retry attempts before the process marks the data feed temporarily unavailable. Second, feeds may not provide updated data and instead return old information. In this case, the API actions do not return errors, so our process needs the ability to decide for itself if the information is new.

Step Functions provides a retry option to automate the process. Depending on the nature of the exception, we can set the interval between two successive attempts (IntervalSeconds) and the maximum number of times to try the action (MaxAttempts). The parameter BackoffRate=1 spaces the attempts at a regular interval, whereas BackoffRate=2 means every interval is twice the length of the previous one.

"Retry": [
    {
      "ErrorEquals": [ "DataNotAvailableException" ],
      "IntervalSeconds": 3600,
      "BackoffRate": 1.0,
      "MaxAttempts": 8
    },
    {
      "ErrorEquals": [ "WebsiteDownException" ],
      "IntervalSeconds": 3600,
      "BackoffRate": 2.0,
      "MaxAttempts": 5
    }
]

Flexibility in data retrieval modes

We built the Step Function state machine to provide functionality for two distinct data retrieval modes:

  • A historical data pull to grab the entire existing history of the data
  • A refreshed data pull to grab the incremental daily data

The state machine normally has to extract the historical data only once, at the beginning, and store it in the S3 data lake. The stored data grows as the state machine appends new daily data. You can refresh the entire dataset by setting the full_history_download environment variable to True in the Lambda function that the CheckHistoricalData step calls.

import json
from datetime import datetime
import boto3
import os

def lambda_handler(payload, context):
    if os.environ['full_history_download'] == 'True':
        print("manual historical data download required")
        return { 'startdate': payload['firstday'], 'pull_type': 'historical' }

    s3_bucket_name = payload['s3_bucket_name']
    historical_data_path = payload['historical_data_path']

    s3 = boto3.resource('s3')
    bucket = s3.Bucket(s3_bucket_name)
    objs = list(bucket.objects.filter(Prefix=historical_data_path))
    print(objs)

    if (len(objs) > 0) and (objs[0].key == historical_data_path):
        print("historical data exists")
        return { 'startdate': payload['today'], 'pull_type': 'daily' }
    else:
        print("historical data does not exist")
        return { 'startdate': payload['firstday'], 'pull_type': 'historical' }

Building the forecasting model

We built the ML model in Amazon SageMaker. After putting together a collection of historical data on S3, we cleaned and prepared it using popular Python libraries such as pandas and sklearn.

A separate Amazon SageMaker built-in algorithm, principal component analysis (PCA), was used to perform the feature engineering. To reduce the dimensionality of our feature space while preserving information and creating desirable features, we applied PCA to our dataset before training our forecasting model.
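Here is a minimal sketch of training the built-in PCA algorithm with the SageMaker Python SDK. The feature matrix, number of components, instance type, and IAM role are illustrative stand-ins, not the values used in the project.

import numpy as np
import sagemaker
from sagemaker import PCA

sagemaker_session = sagemaker.Session()
role = 'arn:aws:iam::123456789012:role/SageMakerExecutionRole'   # hypothetical execution role

# Illustrative stand-in for the scaled external-data feature matrix (rows = hours, columns = features).
features = np.random.rand(1000, 40).astype('float32')

pca = PCA(role=role,
          train_instance_count=1,
          train_instance_type='ml.c4.xlarge',
          num_components=10,              # illustrative choice
          sagemaker_session=sagemaker_session)

# record_set uploads the array to S3 in the RecordIO-protobuf format the algorithm expects.
pca.fit(pca.record_set(features))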

We used a separate Amazon SageMaker ML algorithm called DeepAR as the forecasting model. DeepAR is a custom forecasting algorithm that specializes in processing time-series data. Amazon originally used the algorithm for product demand forecasting. Its ability to predict consumer demand based on temporal data and various external factors made the algorithm a strong choice to predict the fluctuations in energy price based on usage.

The following figure demonstrates some modelling results. We tested the model on available 2018 data after training it on historical data. A benefit of using the DeepAR model is that it returns a confidence interval (from the 10th to the 90th percentile), providing a forecast range. Zooming in to different time periods of the forecast, we can see that DeepAR excels at reproducing past periodic temporal patterns when compared with the actual price records.

The figure above compares the values predicted by the DeepAR model with the actual values over the test set (January to September 2018).

Amazon SageMaker also provides a straightforward way to perform hyperparameter optimization (HPO). After model training, we tuned the hyperparameters of the model to extract incrementally better model performance. Amazon SageMaker HPO uses Bayesian optimization to search the hyperparameter space and identify the ideal parameters for different models.

The Amazon SageMaker HPO API makes it simple to specify resource constraints such as the number of training jobs and computing power allocated to the process. We chose to test ranges for common parameters important to the DeepAR structure, such as the dropout rate, embedding dimension, and the number of layers in the neural network.

from sagemaker.tuner import IntegerParameter, CategoricalParameter, ContinuousParameter, HyperparameterTuner

objective_metric_name = 'test:RMSE'

hyperparameter_ranges = {'num_layers': IntegerParameter(1, 4),
                        'dropout_rate': ContinuousParameter(0.05, 0.2),
                        'embedding_dimension': IntegerParameter(5, 50)}
                       
tuner = HyperparameterTuner(estimator_DeepAR,
                    objective_metric_name,
                    hyperparameter_ranges,
                    objective_type = "Minimize",
                    max_jobs=30,
                    max_parallel_jobs=2)
                    
data_channels = {"train": "{}{}/train/".format(s3_data_path, model_name),
                "test": "{}{}/test/".format(s3_data_path, model_name)}
                
 
tuner.fit(inputs=data_channels, wait=False)

Packaging modeling steps into an Amazon SageMaker inference pipeline with sklearn containers

To implement and deploy an ML model effectively, we had to ensure that the data input format and processing from the inference matched up with the format and processing used for model training.

Our model pipeline uses sklearn functions for data processing and transformations, as well as a PCA feature engineering step before training using DeepAR. To preserve this process in an automated pipeline, we used prebuilt sklearn containers within Amazon SageMaker and the Amazon SageMaker inference pipelines model.

Within the Amazon SageMaker SDK, a set of sklearn classes handles end-to-end training and deployment of custom sklearn code. For example, the following code shows a sklearn Estimator executing an sklearn script in a managed environment. The managed sklearn environment is an Amazon Docker container that executes functions defined in the entry_point Python script. We supplied the preprocessing script as a .py file path. After fitting the Amazon SageMaker sklearn model on the training data, we can ensure that the same pre-fit model processes the data at inference time.

from sagemaker.sklearn.estimator import SKLearn

script_path = 'sklearn_preprocessing.py'

sklearn_preprocessing = SKLearn(
    entry_point=script_path,
    train_instance_type="ml.c4.xlarge",
    role=Sagemaker_role,
    sagemaker_session=sagemaker_session)
 
sklearn_preprocessing.fit({'train': train_input})

After this, we strung together the modeling sequence using Amazon SageMaker inference pipelines. The PipelineModel class within the Amazon SageMaker SDK creates an Amazon SageMaker model with a linear sequence of two to five containers to process requests for data inferences. With this, we can define and deploy any combination of trained Amazon SageMaker algorithms or custom algorithms packaged in Docker containers.

Like other Amazon SageMaker model endpoints, the process handles pipeline model invocations as a sequence of HTTP requests. The first container in the pipeline handles the initial request, and then the second container handles the intermediate response, and so on. The last container in the pipeline eventually returns the final response to the client.

A crucial consideration when constructing a PipelineModel is the data format for both the input and output of each container. For example, the DeepAR model requires a specific data structure for input data during training and a JSON Lines data format for inference. This format differs from that of typical supervised ML models because a forecasting model requires additional metadata, such as the starting date and the time interval of the time-series data.
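For reference, each time series in the JSON Lines input is a JSON object carrying the start timestamp, the target history, and any dynamic features; the values below are purely illustrative. (A real-time endpoint request additionally wraps such objects in an instances array together with a configuration block that selects the quantiles to return.)

import json

series = {
    'start': '2018-01-01 00:00:00',                         # start timestamp of the hourly series
    'target': [31.2, 30.8, 29.5, 28.9],                     # historical spot prices (toy values)
    'dynamic_feat': [[0.11, 0.12, 0.09, 0.10, 0.13, 0.14]]  # len(target) + prediction_length (4 + 2 here)
}

# One JSON object per line is what the JSON Lines input file looks like.
print(json.dumps(series))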

from sagemaker.model import Model
from sagemaker.pipeline import PipelineModel

sklearn_inference_model = sklearn_preprocessing.create_model()

PCA_model_loc="s3://..."
PCA_inference_model = Model(model_data=PCA_model_loc,
                               image=PCA_training_image, 
                               name="PCA-inference-model", 
                               sagemaker_session=sagemaker_session)


DeepAR_model_loc="s3://..."
DeepAR_inference_model = Model(model_data=DeepAR_model_loc,
                               image=deepAR_training_image, 
                               name="DeepAR-inference-model", 
                               sagemaker_session=sagemaker_session)

DeepAR_pipeline_model_name = "Deep_AR_pipeline_inference_model"

DeepAR_pipeline_model = PipelineModel(
    name=DeepAR_pipeline_model_name, role=Sagemaker_role, 
    models=[sklearn_inference_model, PCA_inference_model, DeepAR_inference_model])

After creation, the pipeline model gave us a single model that could handle inference generation end to end. Besides ensuring that the preprocessing applied to training data matches the preprocessing applied at inference time, it lets us deploy a single endpoint that runs the entire workflow from data input to inference generation, as sketched below.
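Although this project used batch transform (described in the next section), the same pipeline model could also back a single real-time endpoint. A minimal sketch, continuing from the PipelineModel created above; the endpoint name is illustrative:

# Deploy the sklearn -> PCA -> DeepAR pipeline behind one real-time endpoint.
DeepAR_pipeline_model.deploy(
    initial_instance_count=1,
    instance_type='ml.c4.xlarge',
    endpoint_name='energy-price-forecast-pipeline')

# The endpoint can then be invoked with the SageMaker runtime InvokeEndpoint API;
# the containers are called in sequence for each request.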

Model deployment using batch transform

We used Amazon SageMaker batch transform to handle inference generation. You can deploy a model in Amazon SageMaker in one of two ways:

  • Create a persistent HTTPS endpoint where the model provides real-time inference.
  • Run an Amazon SageMaker batch transform job that starts an endpoint, generates inferences on the stored dataset, outputs the inference predictions, and then shuts down the endpoint.

Due to the specifications of this energy forecasting project, the batch transform technique proved the better choice. Instead of real-time predictions, Kinect Energy wanted to schedule daily data collection and forecasting, and use this output for their trading analysis. With this solution, Amazon SageMaker takes care of starting up, managing, and shutting down the required resources.

DeepAR best practices suggest providing the entire historical time series for the target and dynamic features to the model at both training and inference time, because the model uses data points further back in time to generate lagged features. As a result, input datasets can grow very large.

To avoid the usual 5-MB request body limit, we created a batch transform using the inference pipeline model and set the limit for input data size using the max_payload argument. Then, we generated input data using the same function we used on the training data and added it to an S3 folder. We could then point the batch transform job to this location and generate an inference on that input data.

input_location = "s3://...input"
output_location = "s3://...output"

DeepAR_pipelinetransformer=sagemaker.transformer.Transformer(
    base_transform_job_name='Batch-Transform',
    model_name=DeepAR_pipeline_model_name,
    instance_count=1,
    instance_type='ml.c4.xlarge',
    output_path=output_location,
    max_payload=100)
    
DeepAR_pipelinetransformer.transform(input_location, content_type="text/csv")

Automating inference generation

Finally, we created a Lambda function that generates daily forecasts. To do this, we converted the code to the Boto3 API so Lambda can use it.

The Amazon SageMaker SDK library lets us access and invoke the trained ML models, but it is far larger than the 50-MB limit for including in a Lambda function deployment package. Instead, we used the natively available Boto3 library.

# Create the json request body
batch_params = {
    "MaxConcurrentTransforms": 1,
    "MaxPayloadInMB": 100,
    "ModelName": model_name,
    "TransformInput": {
        "ContentType": "text/csv",
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": input_data_path
            }
        },
    },
    "TransformJobName": job_name,
    "TransformOutput": {
        "S3OutputPath": output_data_path
    },
    "TransformResources": {
        "InstanceCount": 1,
        "InstanceType": 'ml.c4.xlarge'
    }
}

# Create the SageMaker Boto3 client and send the payload
sagemaker = boto3.client('sagemaker')
ret = sagemaker.create_transform_job(**batch_params)

Conclusion

Along with the Kinect Energy team, we were able to create an automated data ingestion and inference generation pipeline. We used AWS Lambda and AWS Step Functions to automate and schedule the entire process.

In the Amazon SageMaker platform, we built, trained, and tested a DeepAR forecasting model to predict electricity spot prices. Amazon SageMaker inference pipelines combined preprocessing, feature engineering, and model output steps. A single Amazon SageMaker batch transform job could put the model into production and generate an inference. These inferences now help Kinect Energy make more accurate predictions of spot prices and improve their electricity price trading capabilities.

The Amazon ML Solutions Lab engagement model provided the opportunity to deliver a production-ready ML model. It also gave us the chance to train the Kinect Energy team on data science practices so that they can maintain, iterate, and improve upon their ML efforts. With the resources provided, they can expand to other possible future use cases.

Get started today! You can learn more about Amazon SageMaker and kick off your own machine learning solution by visiting the Amazon SageMaker console.


About the Authors

Han Man is a Data Scientist with AWS Professional Services. He has a PhD in engineering from Northwestern University and has several years of experience as a management consultant advising clients across many industries. Today he is passionately working with customers to develop and implement machine learning, deep learning, & AI solutions on AWS. He enjoys playing basketball in his spare time and taking his bulldog, Truffle, to the beach.

 

 

Arkajyoti Misra is a Data Scientist working in AWS Professional Services. He loves to dig into Machine Learning algorithms and enjoys reading about new frontiers in Deep Learning.

 

 

 

Matt McKenna is a Data Scientist focused on machine learning in Amazon Alexa, and is passionate about applying statistics and machine learning methods to solve real world problems. In his spare time, Matt enjoys playing guitar, running, craft beer, and rooting for Boston sports teams.

 

 

 

Many thanks to the Kinect Energy team who worked on the project. Special thanks to the following leaders from Kinect Energy who encouraged and reviewed the blog post.

  • Tulasi Beesabathuni: Tulasi is the squad lead for both Artificial Intelligence / Machine Learning and Box / OCR (content management) at World Fuel Services. Tulasi oversaw the initiation, development and deployment of the Power Prediction Model and employed his technical and leadership skillsets to complete the team’s first use case using new technology.
  • Andrew Stypa: Andrew is the lead business analyst for the Artificial Intelligence / Machine learning squad at World Fuel Services. Andrew used his prior experiences in the business to initiate the use case and ensure that the trading team’s specifications were met by the development team.


Managing Amazon Lex session state using APIs on the client

Anyone who has tried building a bot to support interactions knows that managing the conversation flow can be tricky. Real users (people who obviously haven’t rehearsed your script) can digress in the middle of a conversation. They could ask a question related to the current topic or take the conversation in an entirely new direction. Natural conversations are dynamic and often cover multiple topics.

In this post, we review how APIs can be used to manage a conversation flow that involves switching to a new intent or returning to a prior one. The following screenshot shows an example with a real user and a human customer service agent.

In the example above, the user query regarding the balance (“Wait – What’s the total balance on the card?”) is a digression from the main goal of making a payment. Changing topics comes easily to people. Bots, however, have to store the state of the conversation when the digression occurs, answer the question, and then return to the original intent, reminding the user of what they’re waiting on.

For example, they have to remember that the user wants to make a payment on the card. After they store the data related to making a payment, they switch contexts to pull up the information on the total balance on the same card. After responding to the user, they continue with the payment. Here’s how the conversation can be broken down into two separate parts:

Figure 2: Digress and Resume

For a tastier example, consider how many times you’ve said, “Does that come with fries?” and think about the ensuing conversation.

Now, let’s be clear: you could detect a digression with a well-constructed bot. You could switch intents on the server side using Lambda functions, persist conversation state with Amazon ElastiCache or Amazon DynamoDB, or return to the previous intent with pre-filled slots and a new prompt. You could do all of this today. But you’d have to write and manage code for a real bot that does more than just check the weather, which is no easy task. (Not to pick on weather bots here, but I find myself going on tangents just to find the right city!)

So, what are you saying?

Starting today, you can build your Amazon Lex bots to address these kinds of digressions and other interesting redirects using the new Session State API. With this API, you can now manage a session with your Amazon Lex bot directly from your client application for granular control over the conversation flow.

To implement the conversation in this post, you would issue a GetSession API call to Amazon Lex to retrieve the intent history for the previous turns in the conversation. Next, you would direct the Dialog Manager to use the correct intent to set the next dialog action using the PutSession operation. This would allow you to manage the dialog state, slot values, and attributes to return the conversation to a previous step.

In the earlier example, when the user queries about the total balance, the client can handle the digression by placing calls to GetSession followed by PutSession to continue the payment. The response from the GetSession operation includes a summary of the state of the last three intents that the user interacted with. This includes the intents MakePayment (accountType: credit, amount: $100), and AccountBalance. The following diagram shows the GetSession retrieval of the intent history.

A GetSession request object in Python contains the following attributes:

response = client.get_session(
 botName='BankBot',
 botAlias='Prod',
 userId='ae2763c4'
)

A GetSession response object in Python contains the following attributes:

{
 'recentIntentSummaryView': [
  {
   'intentName': 'AccountBalance',
   'slots': {
    'accountType': 'credit'
   },
   'confirmationStatus': 'None',
   'dialogActionType': 'Close',
   'fulfillmentState': 'Fulfilled'
  },
  {
   'intentName': 'MakePayment',
   'slots': {
    'accountType': 'credit',
    'amount': '100'
   },
   'confirmationStatus': 'None',
   'dialogActionType': 'ConfirmIntent'
  },
  {
   'intentName': 'Welcome',
   'slots': {},
   'confirmationStatus': 'None',
   'dialogActionType': 'Close',
   'fulfillmentState': 'Fulfilled'
  }
 ],
 'sessionAttributes': {},
 'sessionId': 'XXX',
 'dialogAction': {
  'type': 'Close',
  'intentName': 'AccountBalance',
  'slots': {
   'accountType': 'credit'
   },
  'fulfillmentState': 'Fulfilled'
 }
}

Then, the application selects the previous intent and calls PutSession for MakePayment followed by Delegate. The following diagram shows that PutSession resumes the conversation.

A PutSession request object in Python for the MakePayment intent contains the following attributes:

response = client.put_session(
 botName='BankBot',
 botAlias='Prod',
 userId='ae2763c4',
 dialogAction={
  'type':'ElicitSlot',
  'intentName':'MakePayment',
  'slots': {
   'accountType': 'credit'
  },
  'message': 'Ok, so let’s continue with the payment. How much would you like to pay?',
  'slotToElicit': 'amount',
  'messageFormat': 'PlainText'
 },
 accept = 'text/plain; charset=utf-8'
)

 A PutSession response object in Python contains the following attributes:

{
 'contentType': 'text/plain;charset=utf-8',
 'intentName': 'MakePayment',
 'slots': {
  'amount': None,
  'accountType': 'credit'
 },
 'message': 'Ok, so let’s continue with the payment. How much would you like to pay?',
 'messageFormat': 'PlainText',
 'dialogState': 'ElicitSlot',
 'slotToElicit': 'amount',
 'sessionId': 'XXX'
}

You can also use the Session State API operations to have the bot start the conversation. Create a “Welcome” intent with no slots and a response message that greets the user with “Welcome. How may I help you?” Then call the PutSession operation, set the intent to “Welcome”, and set the dialog action to Delegate.

A PutSession request object in Python for the “Welcome” intent contains the following attributes:

 

response = client.put_session(
  botName='BankBot',
  botAlias='Prod',
  userId='ae2763c4',
  dialogAction={
    'type':'Delegate',
    'intentName':'Welcome'
  },
  accept='text/plain; charset=utf-8'
)

A PutSession response object in Python contains the following attributes:

{
 'contentType': 'text/plain;charset=utf-8',
 'intentName': 'Welcome',
 'message': 'Welcome to the Banking bot. How may I help you?',
 'messageFormat': 'PlainText',
 'dialogState': 'Fulfilled',
 'sessionId': 'XXX'
}

Session State API operations are now available using the SDK.

For more information about incorporating these techniques into real bots, see the Amazon Lex documentation and FAQ page. Want to learn more about designing bots using Amazon Lex? See the two-part tutorial, Building Better Bots Using Amazon Lex! Check out the Alexa Design Guide for tips and tricks. Got .NET?  Fret not. We’ve got you covered with Bots Just Got Better with .NET and the AWS Toolkit for Visual Studio.


About the Authors

Minaxi Singla works as a Software Development Engineer in Amazon AI, contributing to microservices that enable human-like experiences through chatbots. When not working, she can be found reading about software design or poring over the Harry Potter series one more time.

 

 

Pratik Raichura is a Software Development Engineer with Amazon Lex team. He works on building scalable distributed systems that enhance Lex customer experience. Outside work, he likes to spend time reading books on software architecture and making his home smarter with AI.


Adding a data labeling workflow for named entity recognition with Amazon SageMaker Ground Truth

Launched at AWS re:Invent 2018, Amazon SageMaker Ground Truth enables you to efficiently and accurately label the datasets required to train machine learning (ML) systems. Ground Truth provides built-in labeling workflows that take human labelers step-by-step through tasks and provide tools to help them produce good results. Built-in workflows are currently available for object detection, image classification, text classification, and semantic segmentation labeling jobs.

Today, AWS launched support for a new use case: named entity recognition (NER). NER involves sifting through text data to locate noun phrases called named entities, and categorizing each with a label, such as “person,” “organization,” or “brand.” So, in the statement “I recently subscribed to Amazon Prime,” “Amazon Prime” would be the named entity and could be categorized as a “brand.”

You can broaden this use case to label longer spans of text and categorize those sequences with any pre-specified labels. For example, the following screenshot identifies spans of text in a performance review that demonstrate the Amazon leadership principle “Customer Obsession.”

Overview

In this post, I walk you through the creation of a NER labeling job:

  1. Gather a dataset.
  2. Create the labeling job.
  3. Select a workforce.
  4. Create task instructions.

For this exercise, your NER labeling task is to identify brand names from a dataset. I have provided a sample dataset of ten tweets from the Amazon Twitter account. Alternatively, feel free to bring your own dataset, and define a specific NER labeling task that is relevant to your use case.

Prerequisites

To follow the steps outlined in this post, you need an AWS account and access to AWS services.

Step 1: Gather your dataset and store data in Amazon S3

Gather the dataset to label, save it to a text file, and upload the file to Amazon S3. For example, I gathered 10 tweets, saved them to a text file with one tweet per return-separated line, and uploaded the text file to an S3 bucket called “ner-blog.” For your reference, the following box contains the uploaded tweets from the text file.

Don’t miss the 200th episode of Today’s Deals Live TODAY! Tune in to watch our favorite moments and celebrate our 200th episode milestone! #AmazonLive (link: https://amzn.to/2JQ2vDm) amzn.to/2JQ2vDm
It's the thought that counts, but our Online Returns Center makes gift exchanges and returns simple (just in case!) https: (link: https://amzn.to/2l6qYKG) amzn.to/2l6qYKG
Did you know you can trade in select Apple, Samsung, and other tablets? With the Amazon Trade-in program, you can receive an Amazon Gift Card + 25% off toward a new Fire tablet when you trade in your used tablets. (link: https://amzn.to/2Ybdu1Y) amzn.to/2Ybdu1Y
Thank you, Prime members, for making this #PrimeDay the largest shopping event in our history! You purchased more than 175 million items, from devices to groceries!
Hip hip for our Belei charcoal mask! This staple in our skincare line is a @SELFMagazine 2019 Healthy Beauty Award winner.
Looking to take your photography skills to the next level? Check out (link: http://amazon.com/primeday) amazon.com/primeday for an amazing camera deal.
Is a TV on your #PrimeDay wish list? Keep your eyes on (link: http://amazon.com/primeday) amazon.com/primeday for a TV deal soon.
Improve your musical talents by staying in tune on (link: http://amazon.com/primeday) amazon.com/primeday for an acoustic guitar deal launching soon.
.@LadyGaga’s new makeup line, @HausLabs, is available now for pre-order! #PrimeDay (link: http://amazon.com/hauslabs) amazon.com/hauslabs
#PrimeDay ends tonight, but the parade of deals is still going strong. Get these deals while they’re still hot! (link: https://amzn.to/2lgqZM3) amzn.to/2lgqZM3
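
If you prefer to script the upload rather than use the console, a minimal boto3 sketch follows. The local file name is hypothetical; the bucket name matches the one used in this post.

import boto3

s3 = boto3.client('s3')
# Upload the text file of tweets gathered in Step 1 to the "ner-blog" bucket.
# 'tweets.txt' is a hypothetical local file name.
s3.upload_file('tweets.txt', 'ner-blog', 'tweets.txt')

In Step 2, the console’s Create manifest file option converts this file into the JSON Lines manifest that the labeling job reads, with one entry per tweet.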

Step 2: Create a labeling job

  1. In the Amazon SageMaker console, choose Labeling jobs, Create labeling job.
  2. To set up the input dataset location, choose Create manifest file.
  3. Point to the S3 location of the text file that you uploaded in Step 1, and select Text, Create.
  4. After the creation process finishes, choose Use this manifest, and complete the following fields:
    • Job name—Custom value.
    • Input dataset location—S3 location of the text file to label. (The previous step should have populated this field.)
    • Output dataset location—S3 location to which Amazon SageMaker sends labels and job metadata.
    • IAM Role—A role that has read and write permissions for this task’s Input dataset and Output dataset locations in S3.
  5. Under Task type, for Task Category, choose Text.
  6. For Task selection, select Named entity recognition.

Step 3: Select a labeling workforce

The Workers interface offers three Worker types: Public (an Amazon Mechanical Turk workforce), Private (a workforce made up of your own workers), and Vendor (a third-party workforce available through the AWS Marketplace).

The console includes other Workers settings, including Price per task and the optional Number of workers per dataset object.

For this demo, use Public. Set Price per task at $0.024. Mechanical Turk workers should complete the relatively straightforward task of identifying brands in a tweet in 5–7 seconds.

Use the default value for Number of workers per dataset object (in this case, a single tweet), which is 3. SageMaker Ground Truth asks three workers to label each tweet and then consolidates those three workers’ responses into one high-fidelity label. To learn more about consolidation approaches, see Annotation Consolidation.
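
For reference, these workforce settings map to fields of the HumanTaskConfig structure used by the CreateLabelingJob API. The fragment below only illustrates those two fields; the remaining required fields (workteam ARN, UI template, task title, and so on) are filled in for you when you create the job from the console.

# Illustrative fragment of the HumanTaskConfig portion of a CreateLabelingJob request.
human_task_config = {
    # Three workers label each tweet; Ground Truth consolidates their answers into one label.
    'NumberOfHumanWorkersPerDataObject': 3,
    # $0.024 per task, expressed as dollars, cents, and tenths of a cent.
    'PublicWorkforceTaskPrice': {
        'AmountInUsd': {'Dollars': 0, 'Cents': 2, 'TenthFractionsOfACent': 4}
    },
}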

Step 4: Create the labeling task instructions

Effective labeling instructions are critically important, and they often require significant iteration and experimentation. To learn about best practices for creating high-quality instructions, see Create high-quality instructions for Amazon SageMaker Ground Truth labeling jobs. Our exercise focuses on identifying brand names in tweets. If a specific tweet contains no brand names, the labeler has the option of indicating that there are no brands in the tweet.

An example of labeling instructions is shown on the following screenshot.

Conclusion

In this post, I introduced Amazon SageMaker Ground Truth data labeling. I showed you how to gather a dataset, create a NER labeling job, select a workforce, create instructions, and launch the job. This is a small labeling job with only 10 tweets and should be completed within one hour by Mechanical Turk workers. Visit the AWS Management Console to get started.

As always, AWS welcomes feedback. Please submit comments or questions below.


About the Author

Vikram Madan is the Product Manager for Amazon SageMaker Ground Truth. He focuses on delivering products that make it easier to build machine learning solutions. In his spare time, he enjoys running long distances and watching documentaries.


Use Amazon Lex as a conversational interface with Twilio Media Streams

Businesses use the Twilio platform to build new ways to communicate with their customers: whether it’s fully automating a restaurant’s food orders with a conversational Interactive Voice Response (IVR) or building a next generation advanced contact center. With the launch of Media Streams, Twilio is opening up their Voice platform by providing businesses access to the raw audio stream of their phone calls in real time.

You can use Media Streams to increase productivity in the call center by transcribing speech in real time with Amazon Transcribe Streaming WebSockets or to automate end-user interactions and make recommendations to agents based on the caller’s intent using Amazon Lex.

In this blog post, we show you how to use Amazon Lex to integrate conversational interfaces (chatbots) into your voice application using the raw audio stream provided by Twilio Media Streams. Amazon Lex uses deep learning to do the heavy lifting required to recognize the intent of human speech, so that you can easily build engaging user experiences and lifelike conversations.

The solution follows these steps:

  1. Receive audio stream from Twilio
  2. Send the audio stream to a voice activity detection component to determine voice in audio
  3. Start streaming the user data to Amazon Lex when voice is detected
  4. Stop streaming the user data to Amazon Lex when silence is detected
  5. Update the ongoing Twilio call based on the response from Amazon Lex

The voice activity detection (VAD) implementation provided in this sample is for reference and demo purposes only; it uses a rudimentary approach that distinguishes voice from silence by looking at amplitude. It is not recommended for production use. For production scenarios, implement a more robust VAD module that fits your needs.
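
To make the amplitude-based idea concrete, here is a minimal sketch in Python. It is not the implementation used in the sample; it assumes each Twilio media message carries base64-encoded 8 kHz mu-law audio in its payload field, and the threshold value and function name are hypothetical.

import audioop
import base64

SILENCE_THRESHOLD = 200  # RMS level treated as silence; tune for your audio (hypothetical value)

def frame_has_voice(media_payload):
    """Return True if a single media frame appears to contain speech."""
    mulaw_bytes = base64.b64decode(media_payload)          # Twilio sends base64-encoded mu-law audio
    pcm_bytes = audioop.ulaw2lin(mulaw_bytes, 2)           # convert to 16-bit linear PCM
    return audioop.rms(pcm_bytes, 2) > SILENCE_THRESHOLD   # compare frame energy to the threshold

In the flow above, a run of frames above the threshold would trigger streaming to Amazon Lex (step 3), and a run of frames below it would stop the stream (step 4).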

The diagram below describes the steps:

The instructions for integrating an Amazon Lex Bot with the Twilio Voice Stream are provided in the following steps:

  • Step 1: Create an Amazon Lex Bot
  • Step 2: Create a Twilio Account and Set Up Programmable Voice
  • Step 3: Build and Deploy the Amazon Lex and Twilio Voice Stream Integration code to Amazon ECS/Fargate
  • Step 4: Test the deployed service
  • As an optional next step, you can build and test the service locally. For instructions, see Step 5 (Optional): Build and Test the service locally

To build and deploy the service, the following prerequisites are needed:

  1. Python (The language used to build the service)
  2. Docker (The tool used for packaging the service for deployment)
  3. AWS CLI installed and configured (for creating the required AWS services and deploying the service to AWS). For instructions, see Configuring the AWS CLI.

In addition, you need a domain name for hosting your service, and you must request an SSL certificate for the domain using AWS Certificate Manager (ACM). For instructions, see Request a Public Certificate. Record the Certificate ARN from the console.

An SSL certificate is needed to communicate securely over wss (WebSocket Secure), a persistent bidirectional communication protocol used by the Twilio voice stream. The <Stream> instruction in the templates/streams.xml file allows you to receive raw audio streams from a live phone call over WebSockets in near real-time. On successful connection, a WebSocket connection to the service is established and audio will start streaming.

Step 1: Create an Amazon Lex Bot

If you don’t already have an Amazon Lex Bot, create and deploy one. For instructions, see Create an Amazon Lex Bot Using a Blueprint (Console).

Once you’ve created the bot, deploy the bot and create an alias. For instructions, see Publish a Version and Create an Alias.

In order to call the Amazon Lex APIs from the service, you must create an IAM user with an access type “Programmatic Access” and attach the appropriate policies.

For this, in the AWS Console, go to IAM->Users->Add user

Provide a user name, select the “Programmatic access” access type, and then click “Next: Permissions”.

Using the “Attach existing policies directly” option, filter for Amazon Lex policies and select AmazonLexReadOnly and AmazonLexRunBotsOnly policies.

Click “Next: Tags”, “Next: Review”, and “Create User” in the pages that follow to create the user. Record the access key ID and the secret access key. We use these credentials during the deployment of the stack.
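
If you would rather script these console steps, a rough boto3 sketch follows; the user name is hypothetical, and the two managed policies are the ones selected above.

import boto3

iam = boto3.client('iam')
user_name = 'lex-twilio-integration'  # hypothetical user name

# Create the user and attach the Amazon Lex managed policies selected above.
iam.create_user(UserName=user_name)
for policy_arn in ('arn:aws:iam::aws:policy/AmazonLexReadOnly',
                   'arn:aws:iam::aws:policy/AmazonLexRunBotsOnly'):
    iam.attach_user_policy(UserName=user_name, PolicyArn=policy_arn)

# Create programmatic credentials; record both values for the stack deployment.
access_key = iam.create_access_key(UserName=user_name)['AccessKey']
print(access_key['AccessKeyId'], access_key['SecretAccessKey'])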

Step 2: Create a Twilio account and set up programmable voice

Sign up for a Twilio account and create a programmable voice project.

For sign-up instructions, see https://www.twilio.com/console.

Record the “AUTH TOKEN”. You can find this information on the Twilio dashboard under Settings->General->API Credentials.

You must also verify the caller ID by adding the phone number that you will use to call the Twilio phone number. You can do this from the Verify caller IDs page in the Twilio console.

Step 3: Build and deploy the Amazon Lex and Twilio Stream Integration code to Amazon ECS

In this section, we create a new service using AWS Fargate to host the integration code. AWS Fargate is a deployment option in Amazon Elastic Container Service (ECS) that allows you to deploy containers without worrying about provisioning or scaling servers. For our service, we use Python and Flask in a Docker container behind an Application Load Balancer (ALB).

Deploy the core infrastructure

As the first step in creating the infrastructure, we deploy the core infrastructure components such as VPC, Subnets, Security Groups, ALB, ECS cluster, and IAM policies using a CloudFormation Template.

Clicking on the “Launch Stack” button below takes you to the AWS CloudFormation Stack creation page. Click “Next” and fill in the parameters. Please note that you will be using the same “EnvironmentName” parameter later in the process where we will be launching the service on top of the core infrastructure. This allows us to reference the stack outputs from this deployment.

Once the stack creation is complete, from the “outputs” tab, record the value of the “ExternalUrl” key.

Package and deploy the code to AWS

In order to deploy the code to Amazon ECS, we package the code in a Docker container and upload the Docker image to the Amazon Elastic Container Registry (ECR).

The code for the service is available at the GitHub repository below. Clone the repository on your local machine.

git clone https://github.com/veerathp/lex-twiliovoice.git
cd lex-twiliovoice

Next, we update the URL for the Streams element inside templates/streams.xml to match the DNS name for your service that you configured with the SSL certificate in the pre-requisites section.

<Stream url="wss://<Your DNS>/"></Stream>

Now, run the following command to build the container image using the Dockerfile.

docker build -t lex-twiliovoice .

Next, we create the container registry using the AWS CLI by passing in the value for the repository name. Record the “repositoryUri” from the output.

aws ecr create-repository --repository-name <repository name>

In order to push the container image to the registry, we must authenticate. Run the following command:

aws ecr get-login --region us-west-2 --no-include-email

Execute the output of the above command to complete the authentication process.

Next, we tag and push the container image to ECR.

docker tag lex-twiliovoice <repositoryUri>/lex-twiliovoice:latest
docker push <repositoryUri>/lex-twiliovoice:latest

We now deploy the rest of the infrastructure using a CloudFormation template. As part of this stack, we deploy components such as ECS Service, ALB Target groups, HTTP/HTTPS Listener rules, and Fargate Task. The environment variables are injected into the container using the task definition properties.

Since we are working with WebSocket connections in our service, we enable stickiness with our load balancer using the target group attribute to allow for persistent connection with the same instance.

TargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      ….
      ….
      TargetGroupAttributes:
        - Key: stickiness.enabled
          Value: true
        …

Clicking on the “Launch Stack” button below takes you to the AWS CloudFormation Stack creation page. Click “Next” and fill in the correct values for the following parameters that are collected from the previous steps – IAMAccessKeyId, IAMSecretAccessKey, ImageUrl, LexBotName, LexBotAlias, and TwilioAuthToken. You can use default values for all the other parameters. Make sure to use the same “EnvironmentName” from the previous stack deployment since we are referring to the outputs of that deployment.

Once the deployment is complete, we can test the service. However, before we do that, make sure to point your custom DNS to the Application Load Balancer URL.

To do that, create an “A Record” under Route 53, Hosted Zones that points your custom DNS name to the ALB URL from the core infrastructure stack deployment (the “ExternalUrl” key in the stack outputs). In the “Create Record Set” screen, enter your DNS name in the name field, select “A – IPv4 address” for the type, select “Yes” for the Alias field, select the ALB URL as the Alias target, and click “Create”.

Step 4: Test the deployed service

You can verify the deployment by navigating to the Amazon ECS Console and clicking on the cluster name. You can see the AWS Fargate service under the “services” tab and the running task under the “tasks” tab.

To test the service, we will first update the Webhook url field under the “Voice & Fax” section in the Twilio console with the URL of the service that is running in AWS (http://<url>/twiml). You can now call the Twilio phone number to reach the Lex Bot. Make sure that the number you are calling from is verified using the Twilio console. Once connected, you will hear the prompt “You will be interacting with Lex bot in 3, 2, 1. Go.” that is configured in the templates/streams.xml file. You can now interact with the Amazon Lex Bot.

You can monitor the service using the “CloudWatch Log Groups” and troubleshoot any issues that may arise while the service is running.

Step 5 (Optional): Build and test the service locally

Now that the service is deployed and tested, you may be interested in building and testing the code locally. For this, navigate to the cloned GitHub repository on your local machine and install all the dependencies using the following command:

pip install -r requirements.txt

You can test the service locally by installing “ngrok”. See https://ngrok.com/download for more details. This tool provides public URLs for exposing the local web server. Using this, you can test the Twilio webhook integration.

Start the ngrok process by using the following command in another terminal window. The ngrok.io url can be used to access the web service from external applications.

ngrok http 8080

Next, configure the “Stream” element inside the templates/streams.xml file with the correct ngrok url.

<Stream url="wss://<xxxxxxxx.ngrok.io>/"></Stream>

In addition, we also need to configure the environment variables used in the code. Run the following command after providing appropriate values for the environment variables:

export AWS_REGION=us-west-2
export ACCESS_KEY_ID=<Your IAM User Access key ID from Step 1>
export SECRET_ACCESS_KEY=<Your IAM User Secret Access key from Step 1>
export LEX_BOT_NAME=<Bot name for the Lex Bot you created in Step 1>
export LEX_BOT_ALIAS=<Bot Alias for the Lex Bot you created in Step 1>
export TWILIO_AUTH_TOKEN=<Twilio AUTH TOKEN from Step 2>
export CONTAINER_PORT=8080
export URL=<http://xxxxxxxx.ngrok.io>  # update with the appropriate URL from ngrok

Once the variables are set, you can start the service using the following command:

python server.py

To test, configure the Webhook field under “Voice & Fax” in the Twilio console with the correct url (http://<url>/twiml) as shown below.

Initiate a call to the Twilio phone number from a verified phone. Once connected, you hear the prompt “You will be interacting with Lex bot in 3, 2, 1. Go.” that is configured in the templates/streams.xml file. You are now able to interact with the Amazon Lex bot that you created in Step 1.

In this blog post, we showed you how to use Amazon Lex to integrate your chatbot to your voice application. To learn how to build more with Amazon Lex, check out the developer resources.


About the Author

Praveen Veerath is a Senior AI Solutions Architect for AWS.


Harvesting success using Amazon SageMaker to power Bayer’s digital farming unit

By the year 2050, our planet will need to feed ten billion people. We can’t expand the earth to create more agricultural land, so the solution to growing more food is to make agriculture more productive and less resource-dependent. In other words, there is no room for crop losses or resource waste. Bayer is using Amazon SageMaker to help prevent those losses in fields around the world.

Households contribute to food loss by discarding food such as kitchen waste or leftover cooked meals. However, the vast majority of food loss in many countries is actually from crops that “die on the vine” in one form or another—from pests, diseases, weeds, or poor nutrition in the soil. The Climate Corporation—a Bayer subsidiary—provides digital farming offerings that help resolve these challenges.

The Climate Corporation’s solutions include automatic recording of data from tractors and satellite-enabled field-health maps. By delivering these services and others to thousands of farmers globally, The Climate Corporation enables farmers to keep their land healthy and fertile.

The team is also working on an upcoming service called FieldCatcher that enables farmers to use smartphone images to identify weeds, pests, and diseases. “By using image recognition, we provide farmers with access to a virtual agronomist that helps with the often difficult task to identify the cause of crop issues. This empowers farmers who don’t have access to advice, as well as enable all farmers to more efficiently capture and share field observations,” said Matthias Tempel, Proximal Sensing Lead at The Climate Corporation.

FieldCatcher uses image recognition models trained with Amazon SageMaker, then optimizes them for mobile phones with Amazon SageMaker Neo. With this setup, the farmers are able to use the model and get instant results even without internet access (as many fields lack connectivity). Using Amazon SageMaker helps FieldCatcher to identify the cause of the problem with confidence, which is critical to providing farmers with the right remediation guidance. In many cases, acting immediately and being certain about an issue makes a huge difference for fields’ yields and farmers’ success.

To power the FieldCatcher solution, Bayer collects images—seeking a wide variety as well as a high quantity to create training data that includes various environments, growth stages, weather conditions, and levels of daylight. Each photo is uploaded from a smartphone and eventually becomes part of the ongoing library that makes the recognition better and better. The figure below depicts the journey of each image and its metadata.

Specifically, the process starts with ingestion to Amazon Cognito, which protects uploads to the Amazon API Gateway and Amazon Simple Storage Service (Amazon S3). The serverless architecture—chosen because it is more scalable and easier to maintain than any alternative—relies on AWS Lambda to execute its steps and finally move the received data into a data lake.

Multiple AWS services work in concert to support the data lake. In addition to Amazon S3 for image storing, Amazon DynamoDB stores the metadata, as features of the image such as location and date taken are important for searchability later on. Amazon Elasticsearch Service (Amazon ES) powers the indexing and querying of this metadata.

The engineering team appreciates that this set of services does not require a data schema to be defined upfront, enabling many different possible use cases for images to be collected in the FieldCatcher application. Another benefit is that the data lake queries allow questions as different as “search for all images taken in Germany with an image resolution larger than 800×600 pixels” or “search for all images of diseases in winter wheat.”

For machine learning (ML) model development, training, and inferencing, the team relies on Amazon SageMaker. Specifically, Amazon SageMaker’s built-in Jupyter notebooks are the central workspace for developing ML models as well as the corresponding ML algorithms. Developers also use GitLab for source code management and GitLab-CI for automated tasks.

AWS Step Functions are the final piece, used to support the full roundtrip of preprocessing images from the data lake, automated training of ML models, and finally inference. Using these services, Bayer’s developers can operate with confidence in the infrastructure and can focus on the ML models.

The Bayer team members, as longstanding AWS users, are familiar with the power of ML to solve problems that would otherwise be exceedingly complex for humans to tackle. The company previously developed an AWS-based data-collection and analysis platform that leverages AWS IoT and sensors in the harvest fields to power real-time decision-making with information fed to mobile devices.

Their choice to expand their offerings to include the new FieldCatcher application was driven by the positive feedback from some of these other services. Giuseppe La Tona, Enterprise Solution Architect at The Climate Corporation described, “We used to make this type of service fully ourselves, but it was an enormous amount of work to do and maintain. We realized that, with Amazon SageMaker, the solution was infinitely easier, so we started implementing it and have never looked back.”

At the moment, FieldCatcher is used internally in over 20 countries around the world. The next step is expanding what it can offer farmers. Right now, its main use is for weed, disease, or pest detection. The Climate Corporation is exploring additional ML-powered solutions as broad as predicting harvest quality with images and drone-based crop protection on an individual plant level. 

Going forward, the team plans to use Amazon SageMaker for all their ML work, as it has been so powerful and saved them so much time. In fact, the team’s entire workflow uses only AWS for ML. Alexander Roth, Cloud Architect at Bayer, explained, “With machine learning on AWS, the huge impact we’ve seen is that the whole pipeline runs smoothly and we’re able to reduce errors.”

With these solutions in place and constantly improving (as is inherent to ML), Bayer and The Climate Corporation see themselves as pioneering the sustainable agriculture of the future. Their hope is that this effort and others it inspires will make it possible to support our growing population for years to come.

 


About the Author

Marisa Messina is on the AWS ML marketing team, where her job includes identifying the most innovative AWS-using customers and showcasing their inspiring stories. Prior to AWS, she worked on consumer-facing hardware and then university-facing cloud offerings at Microsoft. Outside of work, she enjoys exploring the Pacific Northwest hiking trails, cooking without recipes, and dancing in the rain.


Git integration now available for the Amazon SageMaker Python SDK

Git integration is now available in the Amazon SageMaker Python SDK. You no longer have to download scripts from a Git repository for training jobs and hosting models. With this new feature, you can use training scripts stored in Git repos directly when training a model in the Python SDK. You can also use hosting scripts stored in Git repos when hosting a model. The scripts can be hosted in GitHub, in another Git-based repo, or in an AWS CodeCommit repo.

This post describes in detail how to use Git integration with the Amazon SageMaker Python SDK.

Overview

When you train a model with the Amazon SageMaker Python SDK, you need a training script that does the following:

  • Loads data from the input channels
  • Configures training with hyperparameters
  • Trains a model
  • Saves the model

You specify the script as the value of the entry_point argument when you create an estimator object.
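
For context, a bare-bones entry-point script along those lines might look like the following sketch. It assumes the standard SageMaker script-mode environment variables SM_CHANNEL_TRAINING and SM_MODEL_DIR; the hyperparameter names and the model code itself are placeholders.

import argparse
import os

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    # Hyperparameters arrive as command-line arguments.
    parser.add_argument('--epochs', type=int, default=10)
    parser.add_argument('--learning-rate', type=float, default=0.01)
    # SageMaker exposes the input channel and model output locations as environment variables.
    parser.add_argument('--train', type=str, default=os.environ.get('SM_CHANNEL_TRAINING'))
    parser.add_argument('--model-dir', type=str, default=os.environ.get('SM_MODEL_DIR'))
    args = parser.parse_args()

    # 1. Load training data from args.train
    # 2. Configure training with args.epochs and args.learning_rate
    # 3. Train the model (placeholder)
    # 4. Save the trained model under args.model_dir so SageMaker can package it
    pass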

Previously, when you constructed an Estimator or Model object in the Python SDK, the training script that you provided as the entry_point value had to be a path in the local file system. This was inconvenient when your training scripts lived in Git repos, because you had to download them locally first.

If multiple developers were contributing to the Git repo, you also had to keep track of any updates to the repo, and if your local version was out of date, you had to pull the latest version before every training job. This made scheduling periodic training jobs even more challenging.

With the launch of Git integration, these issues are solved, which results in a notable improvement in convenience and productivity.

Walkthrough

Enable the Git integration feature by passing a dict parameter named git_config when you create the Estimator or Model object. The git_config parameter provides information about the location of the Git repo that contains the scripts and the authentication for accessing that repo.

Locate the Git repo

To locate the repo that contains the scripts, use the repo, branch, and commit fields in git_config. The repo field is required; the other two fields are optional. If you only provide the repo field, the latest commit in the master branch is used by default:

git_config = {'repo': 'your-git-repo-url'}

To specify a branch, use both the repo and branch fields. The latest commit in that branch is used by default:

git_config = {'repo': 'your-git-repo-url', 'branch': 'your-branch'}

To specify a commit of a specific branch in a repo, use all three fields in git_config:

git_config = {'repo': 'your-git-repo-url', 
              'branch': 'your-branch', 
              'commit': 'a-commit-sha-under-this-branch'}

If only the repo and commit fields are provided, the configuration works only when the commit is in the master branch; that commit is then used. If the commit is not in the master branch, it is not found:

git_config = {'repo': 'your-git-repo-url', 'commit': 'a-commit-sha-under-master'}

Get access to the Git repo

If the Git repo is private (all CodeCommit repos are private), you need authentication information to access it.

For CodeCommit repos, first make sure that you set up your authentication method. For more information, see Setting Up for AWS CodeCommit. That topic lists the ways in which you can authenticate.

Authentication for SSH URLs

For SSH URLs, you must configure the SSH key pair. This applies to GitHub, CodeCommit, and other Git-based repos.

Do not set an SSH key passphrase for the SSH key pairs. If you do, access to the repo fails.

After the SSH key pair is configured, Git integration works with SSH URLs without further authentication information:

# for GitHub repos
git_config = {'repo': 'git@github.com:your-git-account/your-git-repo.git'}

# for CodeCommit repos
git_config = {'repo': 'ssh://git-codecommit.us-west-2.amazonaws.com/v1/repos/your-repo/'}

Authentication for HTTPS URLs

For HTTPS URLs, there are two ways to deal with authentication:

  • Have it configured locally.
  • Configure it by providing extra fields in git_config, namely 2FA_enabled, username, password, and token. Things can be slightly different here between CodeCommit, GitHub, and other Git-based repos.

Authenticating using Git credentials

If you authenticate with Git credentials, you can do one of the following:

  1. Provide the credentials in git_config:
    git_config = {'repo': 'https://git-codecommit.us-west-2.amazonaws.com/v1/repos/your-repo/',
                  'username': 'your-username',
                  'password': 'your-password'}

  2. Have the credentials stored in local credential storage. Typically, the credentials are stored automatically after you provide them with the AWS CLI. For example, macOS stores credentials in Keychain Access.

With the Git credentials stored locally, you can specify the git_config parameter without providing the credentials, to avoid showing them in scripts:

git_config = {'repo': 'https://git-codecommit.us-west-2.amazonaws.com/v1/repos/your-repo/'}

Authenticating using AWS CLI Credential Helper

If you follow the setup documentation mentioned earlier to configure AWS CLI Credential Helper, you don’t have to provide any authentication information.

For GitHub and other Git-based repos, check whether two-factor authentication (2FA) is enabled for your account. (2FA is disabled by default and must be enabled manually.) For more information, see Securing your account with two-factor authentication (2FA).

If 2FA is enabled for your account, provide 2FA_enabled when specifying git_config and set it to True. Otherwise, set it to False. If 2FA_enabled is not provided, it is set to False by default. Usually, you can use either username+password or a personal access token to authenticate for GitHub and other Git-based repos. However, when 2FA is enabled, you can only use a personal access token.

To use username+password for authentication:

git_config = {'repo': 'https://github.com/your-account/your-private-repo.git',
              'username': 'your-username',
              'password': 'your-password'}

Again, you can store the credentials in local credential storage to avoid showing them in the script.

To use a personal access token for authentication:

git_config = {'repo': 'https://github.com/your-account/your-private-repo.git',
              'token': 'your-token'}

Create the estimator or model with Git integration

After you correctly specify git_config, pass it as a parameter when you create the estimator or model object to enable Git integration. Then, make sure that entry_point, source_dir, and dependencies are all relative paths under the Git repo.

As with local scripts, if source_dir is provided, entry_point should be a relative path from the source directory. The same is true with Git integration.

 

For example, with the following structure of the Git repo ‘amazon-sagemaker-examples’ under branch ‘training-scripts’:

amazon-sagemaker-examples 
   |
   |-------------char-rnn-tensorflow
   |                          |----------train.py
   |                          |----------utils.py
   |                          |----------other files
   |
   |-------------pytorch-rnn-scripts
   |-------------.gitignore
   |-------------README.md

You can create the estimator object as follows:

git_config = {'repo': 'https://github.com/awslabs/amazon-sagemaker-examples.git', 'branch': 'training-scripts'}

estimator = TensorFlow(entry_point='train.py',
                       source_dir='char-rnn-tensorflow',
                       git_config=git_config,
                       train_instance_type=train_instance_type,
                       train_instance_count=1,
                       role=sagemaker.get_execution_role(), # Passes to the container the AWS role that you are using on this notebook
                       framework_version='1.13',
                       py_version='py3',
                       script_mode=True)

In this example, source_dir 'char-rnn-tensorflow' is a relative path inside the Git repo, while entry_point 'train.py' is a relative path under ‘char-rnn-tensorflow’.

Git integration example

Now let’s look at a complete example of using Git integration. This example trains a multi-layer LSTM RNN model on a language modeling task, based on a PyTorch example. By default, the training script uses the Wikitext-2 dataset. We train a model on Amazon SageMaker, deploy it, and then use the deployed model to generate new text.

Run the commands in a Python script, except for those that start with a ‘!’, which are bash commands.

First let’s do the setup:

import sagemaker

sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = 'sagemaker/DEMO-pytorch-rnn-lstm'
role = sagemaker.get_execution_role()

Next get the dataset. This data is from Wikipedia and is licensed CC-BY-SA-3.0. Before you use this data for any other purpose than this example, you should understand the data license, described at https://creativecommons.org/licenses/by-sa/3.0/:

!wget http://research.metamind.io.s3.amazonaws.com/wikitext/wikitext-2-raw-v1.zip
!unzip -n wikitext-2-raw-v1.zip
!cd wikitext-2-raw && mv wiki.test.raw test && mv wiki.train.raw train && mv wiki.valid.raw valid

Upload the data to S3:

inputs = sagemaker_session.upload_data(path='wikitext-2-raw', bucket=bucket, key_prefix=prefix)

Specify git_config and create the estimator with it:

from sagemaker.pytorch import PyTorch

git_config = {'repo': 'https://github.com/awslabs/amazon-sagemaker-examples.git', 'branch': 'training-scripts'}

estimator = PyTorch(entry_point='train.py',
                     role=role,
                     framework_version='1.1.0',
                     train_instance_count=1,
                     train_instance_type='ml.c4.xlarge',
                     source_dir='pytorch-rnn-scripts',
                     git_config=git_config,
                     hyperparameters={
                         'epochs': 6,
                         'tied': True
                     })

Train the model:

estimator.fit({'training': inputs})

Next let’s host the model. We are going to provide custom implementations of the model_fn, input_fn, output_fn, and predict_fn hosting functions in a separate file, ‘generate.py’, which is in the same Git repo. The PyTorch model uses an npy serializer and deserializer by default. For this example, since we have a custom implementation of all the hosting functions and plan on using JSON instead, we need a predictor that can serialize and deserialize JSON:

from sagemaker.predictor import RealTimePredictor, json_serializer, json_deserializer

class JSONPredictor(RealTimePredictor):
    def __init__(self, endpoint_name, sagemaker_session):
        super(JSONPredictor, self).__init__(endpoint_name, sagemaker_session, json_serializer, json_deserializer)

Create the model object:

from sagemaker.pytorch import PyTorchModel

training_job_name = estimator.latest_training_job.name
desc = sagemaker_session.sagemaker_client.describe_training_job(TrainingJobName=training_job_name)
trained_model_location = desc['ModelArtifacts']['S3ModelArtifacts']
model = PyTorchModel(model_data=trained_model_location,
                      role=role,
                      framework_version='1.0.0',
                      entry_point='generate.py',
                      source_dir='pytorch-rnn-scripts',
                      git_config=git_config,
                      predictor_cls=JSONPredictor)

Create the hosting endpoint:

predictor = model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')

Now we are going to use our deployed model to generate text by providing a random seed, a temperature (higher values increase diversity), and the number of words we would like to get:

input = {
    'seed': 111,
    'temperature': 2.0,
    'words': 100
}
response = predictor.predict(input)
print(response)

You get the following results:

acids west 'igan 1232 keratinous Andrews argue cancel mauling even incorporating Jewish
centimetres Fang Andres cyclic logjams filth nullity Homarinus pilaris Emperors whoops punts
followed Reichsgau envisaged Invisible alcohols are osteoarthritis twilight Alexandre Odes Bucanero Genesis
crimson Hutchison genus Brighton 1532 0226284301 Harikatha p Assault Vaisnava plantie 1829
Totals established outcast hurricane herbs revel Lebens Metoposaurids Pajaka initialize frond discarding
walking Unusually Ľubomír Springboks reviewing leucocythemia blistered kinder Nowels arriving 1350 Weymouth
Saigon cantonments genealogy alleging Upright typists termini doodle conducts parallelisms cypresses consults
others estate cover passioned recognition channelled breathed straighter Visibly dug blanche motels
Barremian quickness constrictor reservist 

Finally, delete the endpoint after you are done using it:

sagemaker_session.delete_endpoint(predictor.endpoint)

Conclusion

In this post, I walked through how to use Git integration with the Amazon SageMaker Python SDK. With Git integration, you no longer have to download scripts from Git repos for training jobs and hosting models. Now you can use scripts in Git repos directly, simply by passing an additional parameter git_config when creating the Estimator or Model object.

If you have questions or suggestions, please leave them in the comments.


About the Authors

Yue Tu is a summer intern on the AWS SageMaker ML Frameworks team. He works on Git integration for the SageMaker Python SDK during his internship. Outside of work, he likes playing basketball; his favorite teams are the Golden State Warriors and the Duke basketball team. He also likes paying attention to nothing at all for a while.


Chuyang Deng is a software development engineer on the AWS SageMaker ML Frameworks team. She enjoys playing LEGO alone.


Using model attributes to track your training runs on Amazon SageMaker

With a few clicks in the Amazon SageMaker console or a few one-line API calls, you can now quickly search, filter, and sort your machine learning (ML) experiments using key model attributes, such as hyperparameter values and accuracy metrics, to help you more quickly identify the best models for your use case and get to production faster. The new Amazon SageMaker model tracking capability is available through both the console and AWS SDKs in all available AWS Regions, at no additional charge.

Developing an ML model requires experimenting with different combinations of data, algorithm, and parameters—all the while evaluating the impact of small, incremental changes on performance and accuracy. This iterative fine-tuning exercise often leads to data explosion, with hundreds or sometimes thousands of experiments spread across many versions of a model.

Managing these experiments can significantly slow down the discovery of a solution. It also makes it tedious to trace back the lineage of a given model version so that the exact ingredients that went into brewing it can be identified. This adds unnecessary extra work to auditing and compliance verifications. The result is that new models don’t move to production fast enough to provide better solutions to problems.

With Amazon SageMaker’s new model tracking capabilities, you can now find the best models for your use case by searching on key model attributes—such as the algorithm used, hyperparameter values, and any custom tags. Using custom tags lets you find the models trained for a specific project or created by a specific data science team, helping you meaningfully categorize and catalog your work.

You can also rank and compare your model training attempts based on their performance metrics, such as training loss and validation accuracy. Do this right in the Amazon SageMaker console to more easily pick the best models for your use case. Finally, you can use the new model tracking capability to trace the lineage of a model all the way back to the dataset used in training and validating the model.

Now, I dive into the step-by-step experience of using this new capability.

Find and evaluate model training experiments

In this example, you train a simple binary classification model on the MNIST dataset using the Amazon SageMaker Linear Learner algorithm. The model predicts whether a given image is of the digit 0 or otherwise. You tune the hyperparameters of the Linear Learner algorithm, such as mini_batch_size, while evaluating the binary_classification_accuracy metric that measures the accuracy of predictions made by the model. You can find the code for this example in the sample notebook in the amazon-sagemaker-examples GitHub repo.

Step 1: Set up the experiment tracking by choosing a unique label for tagging all of the model training runs

You can also add the tag using the Amazon SageMaker Python SDK API while you are creating a training job using the Amazon SageMaker estimator.


linear_1 = sagemaker.estimator.Estimator(
  linear_learner_container, role, 
  train_instance_count=1, train_instance_type = 'ml.c4.xlarge',
  output_path=<your model output S3 path URI>,
  tags=[{"Key":"Project", "Value":"Project_Binary_Classifier"}],
  sagemaker_session=sess)

Step 2: Perform multiple model training runs with new hyperparameter settings

For demonstration purposes, try three different batch sizes (100, 200, and 300) by varying the mini_batch_size hyperparameter. Here is some example code for a single run; a sketch that launches all three runs follows the snippet below:

linear_1.set_hyperparameters(feature_dim=784,predictor_type='binary_classifier', mini_batch_size=100)
linear_1.fit({'train': <your training dataset S3 URI>})
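
One way to launch all three tagged runs is a simple loop that only varies the batch size. This sketch reuses the container, role, and session variables from Step 1; the S3 locations are placeholders you should replace.

# Placeholders: point these at your own S3 locations.
output_s3_uri = 's3://<your-bucket>/binary-classifier-output'
train_s3_uri = 's3://<your-bucket>/mnist/train'

for batch_size in (100, 200, 300):
    estimator = sagemaker.estimator.Estimator(
        linear_learner_container, role,
        train_instance_count=1, train_instance_type='ml.c4.xlarge',
        output_path=output_s3_uri,
        tags=[{"Key": "Project", "Value": "Project_Binary_Classifier"}],  # same tag on every run
        sagemaker_session=sess)
    estimator.set_hyperparameters(feature_dim=784,
                                  predictor_type='binary_classifier',
                                  mini_batch_size=batch_size)
    estimator.fit({'train': train_s3_uri}, wait=False)  # start the job without blocking the notebook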

You are consistently tagging all three model training runs with the same unique label so you can track them under the same project. In the next step, I show how you can find and group all model training runs labeled with the “Project” tag.

Step 3: Find the relevant training runs for further evaluation

You can find the training runs on the Amazon SageMaker console.

You can search for the tag used in Steps 1 and 2.

This lists all labeled training runs in a table.

You can also use the AWS SDK API for Amazon SageMaker.

import boto3

search_params = {
   "MaxResults": 10,
   "Resource": "TrainingJob",
   "SearchExpression": { 
      "Filters": [{ 
            "Name": "Tags.Project",
            "Operator": "Equals",
            "Value": "Project_Binary_Classifier"
         }]},
  "SortBy": "Metrics.train:binary_classification_accuracy",
  "SortOrder": "Descending"
}
smclient = boto3.client(service_name='sagemaker')
results = smclient.search(**search_params)

While I have demonstrated searching by tags, you can search using any metadata for model training runs. This includes the learning algorithm used, training dataset URIs, and ranges of numerical values for hyperparameters and model training metrics.

Step 4: Sort on the objective performance metric of your choice to get the best model

The model training runs identified in Step 3 are presented to you in a table, with all of the hyperparameters and training metrics presented in sortable columns. Choose the column header to rank the training runs for the objective performance metric of your choice, in this case, binary_classification_accuracy.

You can also print the table inline in your Amazon SageMaker Jupyter notebooks. Here is some example code:

import pandas
from IPython.display import display, HTML

headers = ["Training Job Name", "Training Job Status", "Batch Size", "Binary Classification Accuracy"]
rows = []
for result in results['Results']:
    trainingJob = result['TrainingJob']
    metrics = trainingJob['FinalMetricDataList']
    metric_names = [x['MetricName'] for x in metrics]
    accuracy = metrics[metric_names.index('train:binary_classification_accuracy')]['Value']
    rows.append([trainingJob['TrainingJobName'],
                 trainingJob['TrainingJobStatus'],
                 trainingJob['HyperParameters']['mini_batch_size'],
                 accuracy])

df = pandas.DataFrame(data=rows, columns=headers)
display(HTML(df.to_html()))

As you saw in Step 3, you had already specified the sort criteria in the search() API call so that the results are returned sorted on the metric of interest:

"SortBy":  "Metrics.train:binary_classification_accuracy" 
"SortOrder": "Descending"

The previous example code parses the JSON response and presents the results in a leaderboard format, which looks like the following:

Now that you have identified the best model (batch_size = 300, with a classification accuracy of 0.99344), you can deploy this model to a live endpoint. The sample notebook has step-by-step instructions for deploying an Amazon SageMaker endpoint.
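
If you would rather deploy from code than from the console, one option is to re-attach an estimator to the winning training job and call deploy. This sketch assumes the sorted search results from Step 3 are still in the results variable; the instance type is only an example.

import sagemaker

# The search results are sorted by accuracy in descending order, so the first entry is the best run.
best_job_name = results['Results'][0]['TrainingJob']['TrainingJobName']

best_estimator = sagemaker.estimator.Estimator.attach(best_job_name)
predictor = best_estimator.deploy(initial_instance_count=1,
                                  instance_type='ml.m4.xlarge')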

Tracing a model’s lineage

Now I show an example of picking a prediction endpoint and quickly tracing back to the model training run used in creating the model in the first place.

Using a single click on the Amazon SageMaker console

In the left navigation pane of the Amazon SageMaker console, choose Endpoints, and select the relevant endpoint from the list of all your deployed endpoints. Scroll to Endpoint Configuration Settings, which lists all the model versions deployed at the endpoint. You see an additional hyperlink to the model training job that created that model in the first place.

Using the AWS SDK for Amazon SageMaker

You can also use a few simple one-line API calls to quickly trace the lineage of a model.

#first get the endpoint config for the relevant endpoint
endpoint_config = smclient.describe_endpoint_config(EndpointConfigName=endpointName)

#now get the model name for the model deployed at the endpoint. 
model_name = endpoint_config['ProductionVariants'][0]['ModelName']

#now look up the S3 URI of the model artifacts
model = smclient.describe_model(ModelName=model_name)
modelURI = model['PrimaryContainer']['ModelDataUrl']

#search for the training job that created the model artifacts at above S3 URI location
search_params={
   "MaxResults": 1,
   "Resource": "TrainingJob",
   "SearchExpression": { 
      "Filters": [ 
         { 
            "Name": "ModelArtifacts.S3ModelArtifacts",
            "Operator": "Equals",
            "Value": modelURI
         }]}
}
results = smclient.search(**search_params)

Get started with more examples and developer support

Now that you have seen examples of how to efficiently manage the ML experimentation process and trace a model’s lineage, you can try out a sample notebook in the amazon-sagemaker-examples GitHub repo. For more examples, see our developer guide, or post your questions on the Amazon SageMaker forum. Happy experimenting!


About the Author

Sumit Thakur is a Senior Product Manager for AWS Machine Learning Platforms where he loves working on products that make it easy for customers to get started with machine learning on cloud. In his spare time, he likes connecting with nature and watching sci-fi TV series.


This post was originally published November 28, 2018. Last updated August 2, 2019.

