Function deployment tutorial: Send image data from a web app to a model

This tutorial guides you through creating and deploying a Python function in IBM Watson Machine Learning that sends payload data to a model trained to recognize handwritten digits.

 

Set up

  1. Create a Deep Learning project in IBM Watson Studio.

    Creating this type of project will set up IBM Cloud Object Storage and Watson Machine Learning.

  2. Create a notebook

    You can create a notebook to work through this tutorial in two ways:

    • Option 1: Create a blank notebook in your Watson Studio project:

      1. Click Add to project, and then choose "NOTEBOOK"
      2. Specify a name for the notebook
      3. Accept the default language and runtime
      4. Click Create Notebook
    • Option 2: Add a copy of this sample notebook to your project from the Community:

      Sample notebook: function deployment (MNIST tutorial)

 

Steps:

  1. Install packages and import libraries
  2. Instantiate a Watson Machine Learning client object
  3. Get a model deployment endpoint URL
  4. Explore sample payload data
  5. Create a deployable function
  6. Store and deploy the function
  7. Test the function deployment

 

Step 1: Install packages and import libraries

Install wget (for downloading a sample model and sample canvas data) by running this code in a cell in the notebook:

!pip install --upgrade wget

Import several libraries:

  • os and wget for downloading a sample model and sample canvas data
  • json for working with payload data and model and function results
  • WatsonMachineLearningAPIClient for interacting with your Watson Machine Learning service
  • numpy and matplotlib.pyplot for viewing sample canvas data
  • requests for testing the function deployment using the Watson Machine Learning REST API

Import these libraries by running this code in a cell in the notebook:

import os, wget, json
from watson_machine_learning_client import WatsonMachineLearningAPIClient
import numpy as np
import matplotlib.pyplot as plt
import requests

 

Step 2: Instantiate a Watson Machine Learning client object

Look up your Watson Machine Learning credentials, and then run this code in a cell in the notebook:

wml_credentials = {
    "instance_id" : "",
    "password"    : "",
    "url"         : "",
    "username"    : ""
}
client = WatsonMachineLearningAPIClient( wml_credentials )

See: Looking up credentials

 

Step 3: Get a model deployment endpoint URL

The deployable function in this tutorial is designed to send data to a model deployment for analysis. You can use an existing model deployment or deploy a sample model.

 

Option 1: Use an existing model deployment

If you already built and deployed the TensorFlow model from one of the MNIST tutorials, you can use that deployment:

See: Looking up an online deployment endpoint URL
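
If you are not sure of that deployment's endpoint URL, here is a minimal sketch for looking it up with the Python client (assuming the deployment lives in the same Watson Machine Learning instance; the GUID string is a placeholder you must replace with the GUID shown in the list):

# List deployments in this Watson Machine Learning instance, then copy the
# GUID of your existing MNIST model deployment (the GUID below is a placeholder)
client.deployments.list()

existing_deployment_details = client.deployments.get_details( "PASTE-YOUR-DEPLOYMENT-GUID-HERE" )
model_endpoint_url = client.deployments.get_scoring_url( existing_deployment_details )
model_endpoint_url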

 

Option 2: Deploy a sample model

Download a sample model to the notebook working directory:

sample_saved_model_filename = 'mnist-tf-hpo-saved-model.tar.gz'
url = 'https://github.com/pmservice/wml-sample-models/raw/master/tensorflow/function-deployments-samples/' + sample_saved_model_filename
if not os.path.isfile( sample_saved_model_filename ): wget.download( url )

Store the sample model in your Watson Machine Learning repository:

metadata = {
    client.repository.ModelMetaNames.NAME              : 'Saved MNIST model',
    client.repository.ModelMetaNames.FRAMEWORK_NAME    : 'tensorflow',
    client.repository.ModelMetaNames.FRAMEWORK_VERSION : '1.5',
    client.repository.ModelMetaNames.RUNTIME_NAME      : 'python',
    client.repository.ModelMetaNames.RUNTIME_VERSION   : '3.5'
}
model_details = client.repository.store_model( sample_saved_model_filename, meta_props=metadata, training_data=None )

Deploy the stored model:

model_id = model_details["metadata"]["guid"]
model_deployment_details = client.deployments.create( artifact_uid=model_id, name="MNIST saved model deployment" )

Get the endpoint URL of the model deployment just created:

model_endpoint_url = client.deployments.get_scoring_url( model_deployment_details )
model_endpoint_url

 

Step 4: Explore sample payload data

The Node.js MNIST sample app and the Python Flask MNIST sample app both have the same web interface: an HTML canvas object in which digits can be hand drawn using a mouse.

The function deployment created in this tutorial will accept data from the canvas object in those sample apps in this format:

{ "height" : h, "data" : [ n1, n2, ..., n( h * h * 4 ) ] }

Where:

  • h is the height, in pixels, of a square bounding box containing a hand-drawn digit.

    The value of h varies, depending on how large the hand-drawn digit is on the canvas.

  • "data" is a 1 x ( h * h * 4 ) array of 8-bit unsigned integers representing red, green, blue, and alpha (RGBA) pixels values.

    For example, if h is 124 pixels, "data" would be a 1 x 61504 array (see the quick check after this list).
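
As a quick check of that relationship (the value of h here is just the one from the example above):

# For a square canvas crop of height h, the "data" array holds
# h * h * 4 entries: one value per RGBA channel per pixel
h = 124
print( h * h * 4 )   # 61504, matching the example above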

 

4.1 Download sample canvas data

In your notebook, run these cells to download the sample canvas data and read the data:

sample_canvas_data_filename = 'mnist-html-canvas-image-data.json'
url = 'https://raw.githubusercontent.com/pmservice/wml-sample-models/master/tensorflow/function-deployments-samples/' + sample_canvas_data_filename
if not os.path.isfile( sample_canvas_data_filename ): wget.download( url )
with open( sample_canvas_data_filename ) as data_file: sample_canvas_data = json.load( data_file )

 

4.2 View sample canvas data

Run these cells to visualize the sample canvas data:

print( "Height (n): " + str( sample_cavas_data["height"] ) + " pixels\n" )
print( "Num image data entries: " + str( len( sample_cavas_data["data"] ) ) + " - (n * n * 4) elements - RGBA values\n"  )
print( json.dumps( sample_cavas_data, indent=3 )[:75] + "...\n" + json.dumps( sample_cavas_data, indent=3 )[-50:] )

Example output

Canvas data JSON

rgba_arr = np.asarray( sample_canvas_data["data"] ).astype('uint8')
n = sample_canvas_data["height"]
plt.figure()
plt.imshow( rgba_arr.reshape( n, n, 4 ) )
plt.xticks([])
plt.yticks([])
plt.show()

Example output

Canvas data plotted

 

Step 5: Create a deployable function

The basics of creating and deploying functions in Watson Machine Learning are covered in the Watson Machine Learning documentation. The following substeps build up the deployable function for this tutorial incrementally:

 

5.1 Define a closure with an inner function named "score"

The inner function is what gets called when data is sent as payload to your function deployment.

def my_deployable_function():

    def score( function_payload ):

        return {}

    return score
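
Calling the outer function returns the inner score function, which is what Watson Machine Learning invokes when payload data arrives. You can confirm that this skeleton behaves as expected with a quick local call (the empty payload is purely illustrative):

score = my_deployable_function()     # The outer function returns the inner "score" function
print( score( { "values" : [] } ) )  # The skeleton currently returns {}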

 

5.2 Convert the canvas data to model deployment payload format

The canvas data that the web apps send as payload to your function deployment must be processed (reshaped and normalized) before it can be sent to the model deployment:

  • Canvas data format: The data coming from the canvas object is in this format:

      { "height" : h, "data" : [ n1, n2, ..., n( h * h * 4 ) ] }
    

    Where:

    • h is the height, in pixels, of a square bounding box containing a hand-drawn digit
    • "data" is a 1 x ( h * h * 4 ) array of 8-bit unsigned integers
  • Model deployment payload format: The sample TensorFlow model you deployed as a prerequisite for this tutorial expects input payload that is in this format:

      { "values" : [ [ n1, n2, ..., n784 ] ] }
    

    Where:

    • n1 to n784 are floating-point numbers ranging from 0 to 1

Add that preprocessing to the closure:

def my_deployable_function():

    def getRGBAArr( canvas_data ):
        import numpy as np
        dimension = canvas_data["height"]
        rgba_data = canvas_data["data"]
        rgba_arr  = np.asarray( rgba_data ).astype('uint8')
        return rgba_arr.reshape( dimension, dimension, 4 )

    def getNormAlphaList( img ):
        import numpy as np
        alpha_arr       = np.array( img.split()[-1] )
        norm_alpha_arr  = alpha_arr / 255
        norm_alpha_list = norm_alpha_arr.reshape( 1, 784 ).tolist()
        return norm_alpha_list

    def score( function_payload ):

        from PIL import Image
        canvas_data   = function_payload["values"][0]           # Read the payload received by the function
        rgba_arr      = getRGBAArr( canvas_data )               # Create an array object with the required shape
        img           = Image.fromarray( rgba_arr, 'RGBA' )     # Create an image object that can be resized
        sm_img        = img.resize( ( 28, 28 ), Image.LANCZOS ) # Resize the image to 28 x 28 pixels
        alpha_list    = getNormAlphaList( sm_img )              # Create a 1 x 784 array of values between 0 and 1
        model_payload = { "values" : alpha_list }               # Create a payload to be sent to the model

        return {}

    return score
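
If you want to see what the preprocessing produces, you can run the same steps directly in the notebook, outside the closure. This is only a local sketch, assuming the Pillow package is available in your notebook environment (run !pip install Pillow first if it is not) and that sample_canvas_data was loaded in step 4:

from PIL import Image
import numpy as np

n          = sample_canvas_data["height"]
rgba_arr   = np.asarray( sample_canvas_data["data"] ).astype('uint8').reshape( n, n, 4 )  # h x h x 4 RGBA array
sm_img     = Image.fromarray( rgba_arr, 'RGBA' ).resize( ( 28, 28 ), Image.LANCZOS )      # Resize to 28 x 28 pixels
alpha_list = ( np.array( sm_img.split()[-1] ) / 255 ).reshape( 1, 784 ).tolist()          # Normalized alpha channel
print( len( alpha_list[0] ) )   # Expect 784 values between 0 and 1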

 

5.3 Install the package required to perform the preprocessing

To use the PIL Image module in the score function, you need to install the Pillow package. Performing the install in the outer function causes the installed package to be saved as part of the deployed function's environment.

You can perform the install by using the subprocess library to run the pip install command:

def my_deployable_function():

    import subprocess
    subprocess.check_output( "pip install Pillow --user", stderr=subprocess.STDOUT, shell=True )

...

 

5.4 Add default parameters

You can store your Watson Machine Learning credentials and model deployment endpoint URL with the function by adding default parameters to the outer function:

ai_parms = { "wml_credentials" : wml_credentials, "model_endpoint_url" : model_endpoint_url }

def my_deployable_function( parms=ai_parms ):

...

 

5.5 Import the Watson Machine Learning Python client library

You can import the WatsonMachineLearningAPIClient library in the score function and then pass a payload to your model deployment (from step 3):

  • Use the credentials in the default parameters to instantiate a client object
  • Use the model deployment endpoint URL to send the processed canvas data to the model deployment
ai_parms = { "wml_credentials" : wml_credentials, "model_endpoint_url" : model_endpoint_url }

def my_deployable_function( parms=ai_parms ):

...

    def score( function_payload ):

        from PIL import Image
        canvas_data   = function_payload["values"][0]           # Read the payload received by the function
        rgba_arr      = getRGBAArr( canvas_data )               # Create an array object with the required shape
        img           = Image.fromarray( rgba_arr, 'RGBA' )     # Create an image object that can be resized
        sm_img        = img.resize( ( 28, 28 ), Image.LANCZOS ) # Resize the image to 28 x 28 pixels
        alpha_list    = getNormAlphaList( sm_img )              # Create a 1 x 784 array of values between 0 and 1
        model_payload = { "values" : alpha_list }               # Create a payload to be sent to the model

        from watson_machine_learning_client import WatsonMachineLearningAPIClient
        client       = WatsonMachineLearningAPIClient( parms["wml_credentials"] )
        model_result = client.deployments.score( parms["model_endpoint_url"], model_payload )
        return {}

    return score

 

5.6 Postprocess the result from the model deployment

The result returned from the model deployment is in this format:

{
  "fields": [
    "prediction"
  ],
  "values": [
    [
      4
    ]
  ]
}

The web app displays only the class that the digit image from the canvas object most closely matches (in this case, the number "4").

Add postprocessing in your score function to get the class from the model deployment result and then return just the class from your function deployment:

...

    def score( function_payload ):

        ...

        from watson_machine_learning_client import WatsonMachineLearningAPIClient
        client       = WatsonMachineLearningAPIClient( parms["wml_credentials"] )
        model_result = client.deployments.score( parms["model_endpoint_url"], model_payload )
        digit_class  = model_result["values"][0]
        return { "class" : digit_class }

 

5.7 Add error handling

If an error happens while installing the Pillow library, deploying the function should fail with enough details in the error message to troubleshoot the problem.

If an error happens while importing libraries, instantiating the IBM Watson Machine Learning client, preprocessing the canvas data, sending data to the model deployment, or postprocessing results from the model deployment, the function should cleanly return the error to the web app.

Add error handling in the outer function and the score function, as well as some debugging print statements to complete the function definition:

ai_parms = { "wml_credentials" : wml_credentials, "model_endpoint_url" : model_endpoint_url }

def my_deployable_function( parms=ai_parms ):

    try:

        import subprocess
        subprocess.check_output( "pip install Pillow --user", stderr=subprocess.STDOUT, shell=True )

    except subprocess.CalledProcessError as e:

        install_err = "subprocess.CalledProcessError:\n\n" + "cmd:\n" + e.cmd + "\n\noutput:\n" + e.output.decode()
        raise Exception( "Installing failed:\n" + install_err )

    def getRGBAArr( canvas_data ):
        import numpy as np
        dimension = canvas_data["height"]
        rgba_data = canvas_data["data"]
        rgba_arr  = np.asarray( rgba_data ).astype('uint8')
        return rgba_arr.reshape( dimension, dimension, 4 )

    def getNormAlphaList( img ):
        import numpy as np
        alpha_arr       = np.array( img.split()[-1] )
        norm_alpha_arr  = alpha_arr / 255
        norm_alpha_list = norm_alpha_arr.reshape( 1, 784 ).tolist()
        return norm_alpha_list

    def score( function_payload ):

        try:

            from PIL import Image
            canvas_data   = function_payload["values"][0]           # Read the payload received by the function
            rgba_arr      = getRGBAArr( canvas_data )               # Create an array object with the required shape
            img           = Image.fromarray( rgba_arr, 'RGBA' )     # Create an image object that can be resized
            sm_img        = img.resize( ( 28, 28 ), Image.LANCZOS ) # Resize the image to 28 x 28 pixels
            alpha_list    = getNormAlphaList( sm_img )              # Create a 1 x 784 array of values between 0 and 1
            model_payload = { "values" : alpha_list }               # Create a payload to be sent to the model

            print( "Payload for model:" ) # For debugging purposes
            print( model_payload )        # For debugging purposes

            from watson_machine_learning_client import WatsonMachineLearningAPIClient
            client       = WatsonMachineLearningAPIClient( parms["wml_credentials"] )
            model_result = client.deployments.score( parms["model_endpoint_url"], model_payload )
            digit_class  = model_result["values"][0]

            return { "class" : digit_class }

        except Exception as e:

            return { "error" : repr( e ) }

    return score

 

5.8 Test the function locally

Before deploying your function in Watson Machine Learning, you can test your function locally in the notebook using this syntax:

function_result = my_deployable_function()( { "values" : [ sample_canvas_data ] } )

Where sample_canvas_data is the data you downloaded and viewed in step 4.

Example output

Executing the function causes the debugging print statements to appear:

Function debugging output

You can view the function result output separately:

Function output
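
As a quick programmatic check of the local result (assuming function_result is defined from the cell above):

# If the error-handling path from step 5.7 was triggered, the result
# contains an "error" key instead of "class"
if "error" in function_result:
    print( "Local test failed: " + function_result["error"] )
else:
    print( "Predicted class: " + str( function_result["class"] ) )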

 

Step 6: Store and deploy the function

6.1: Store the function in the Watson Machine Learning repository

Run this cell to store the deployable function in your Watson Machine Learning repository:

meta_data = { client.repository.FunctionMetaNames.NAME : 'MNIST function' }
function_details = client.repository.store_function( meta_props=meta_data, function=my_deployable_function )

 

6.2: Deploy the function in Watson Machine Learning

Run this cell to deploy the stored function to Watson Machine Learning:

function_id = function_details["metadata"]["guid"]
function_deployment_details = client.deployments.create( artifact_uid=function_id, name='MNIST function deployment' )
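
As with the model deployment in step 3, you can look up the endpoint URL of the function deployment just created. This is a small sketch, assuming get_scoring_url returns the same value that step 7 reads from ["entity"]["scoring_url"]:

function_endpoint_url = client.deployments.get_scoring_url( function_deployment_details )
function_endpoint_url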

 

Step 7: Test the function deployment

You can use the Watson Machine Learning Python client or the REST API to send data to your function deployment for processing, in exactly the same way that you send data to model deployments for processing.

Run this cell to test the function deployment using the Python client:

function_endpoint_url = function_deployment_details["entity"]["scoring_url"]
payload = { "values" : [ sample_cavas_data ] }
client.deployments.score( function_endpoint_url, payload )

Run this cell to test the function deployment using the REST API:

import requests, json

# Get a bearer token
url = wml_credentials["url"] + "/v3/identity/token"
response = requests.get( url, auth=( wml_credentials["username"], wml_credentials["password"] ) )
mltoken = json.loads( response.text )["token"]

# Send sample canvas data to function deployment for processing
function_endpoint_url = function_deployment_details["entity"]["scoring_url"]
payload = { "values" : [ sample_cavas_data ] }
header = { 'Content-Type': 'application/json', 'Authorization': 'Bearer ' + mltoken }
response = requests.post( function_endpoint_url, json=payload, headers=header )
print ( response.text )
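
Optionally, parse the JSON response to pull out the class returned by the function deployment (this assumes the call above succeeded; if the function returned an error, the full result is printed instead):

function_result = json.loads( response.text )
print( function_result.get( "class", function_result ) )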

 

Next steps

Now that you have a model deployment and a function deployment, explore the sample apps that can use them: the Node.js MNIST sample app and the Python Flask MNIST sample app.