ibm-watson-machine-learning
Create and deploy a function that receives HTML canvas image data from a web app and then processes and sends that data to a model trained to recognize handwritten digits.
See: MNIST function deployment tutorial
This notebook runs on Python.
The learning goals of this notebook are:
This notebook contains the following parts:
Before you use the sample code in this notebook, you must perform the following setup tasks:
Authenticate the Watson Machine Learning service on IBM Cloud. You need to provide your platform api_key and instance location.
You can use the IBM Cloud CLI to retrieve the platform API key and instance location.
An API key can be generated as follows:
ibmcloud login
ibmcloud iam api-key-create API_KEY_NAME
From the output, copy the value of api_key.
The location of your WML instance can be retrieved as follows:
ibmcloud login --apikey API_KEY -a https://cloud.ibm.com
ibmcloud resource service-instance WML_INSTANCE_NAME
From the output, copy the value of location.
Tip: Your Cloud API key can be generated by going to the Users section of the Cloud console. From that page, click your name, scroll down to the API keys section, and click Create an IBM Cloud API key. Give your key a name, click Create, then copy the created key and paste it below. You can also get a service-specific URL from the Endpoint URLs section of the Watson Machine Learning docs, and you can check your instance location in your Watson Machine Learning (WML) service instance details.
You can also get a service-specific API key by going to the Service IDs section of the Cloud console. From that page, click Create, then copy the created key and paste it below.
Action: Enter your api_key and location in the following cell.
api_key = 'PASTE YOUR API KEY HERE'
location = 'us-south'
wml_credentials = {
    "apikey": api_key,
    "url": 'https://' + location + '.ml.cloud.ibm.com'
}
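A quick sanity check on the credentials can catch the common mistake of leaving the placeholder in place before the client is constructed. The helper below is hypothetical (it is not part of the SDK); it only assembles the same dict shape used above:

```python
def build_wml_credentials(api_key, location):
    """Assemble the credentials dict in the shape used above.

    Hypothetical helper for illustration only, not part of the
    ibm-watson-machine-learning SDK.
    """
    if not api_key or api_key.startswith('PASTE'):
        raise ValueError("Replace the placeholder with a real API key")
    return {
        "apikey": api_key,
        "url": 'https://' + location + '.ml.cloud.ibm.com'
    }
```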
!pip install -U ibm-watson-machine-learning
from ibm_watson_machine_learning import APIClient
client = APIClient(wml_credentials)
First of all, you need to create a space that will be used for your work. If you do not have a space already created, you can use the Deployment Spaces Dashboard to create one.
Then copy the space_id and paste it below. Tip: You can also use the SDK to prepare the space for your work. More information can be found here.
Action: Assign space ID below
You can use the list method to print all existing spaces.
space_id = 'PASTE YOUR SPACE ID HERE'
client.spaces.list(limit=10)
To be able to interact with all resources available in Watson Machine Learning, you need to set the space you will be using.
client.set.default_space(space_id)
The deployed function created in this notebook is designed to send payload data to a TensorFlow model created in the MNIST tutorials.
!pip install wget
import os, wget, json
import numpy as np
import matplotlib.pyplot as plt
import requests
If you already deployed a model while working through one of the following MNIST tutorials, you can use that model deployment:
Paste the model deployment ID in the following cell.
model_deployment_id = ""
Alternatively, you can deploy a sample model and get its deployment ID by running the code in the following four cells.
# Download a sample model to the notebook working directory
sample_saved_model_filename = 'mnist-tf-hpo-saved-model.tar.gz'
url = 'https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/models/tensorflow/mnist/' + sample_saved_model_filename
if not os.path.isfile(sample_saved_model_filename): wget.download(url)
# Look up software specification for the MNIST model
software_spec_uid = client.software_specifications.get_id_by_name("runtime-23.1-py3.10")
# Store the sample model in your Watson Machine Learning repository
metadata = {
    client.repository.ModelMetaNames.NAME: 'Saved MNIST model',
    client.repository.ModelMetaNames.TYPE: 'tensorflow_2.12',
    client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: software_spec_uid
}
model_details = client.repository.store_model(
    model=sample_saved_model_filename,
    meta_props=metadata
)
# Get published model ID
published_model_uid = client.repository.get_model_id(model_details)
# Deploy the stored model
metadata = {
    client.deployments.ConfigurationMetaNames.NAME: "MNIST saved model deployment",
    client.deployments.ConfigurationMetaNames.ONLINE: {}
}
model_deployment_details = client.deployments.create(published_model_uid, meta_props=metadata)
# Get the ID of the model deployment just created
model_deployment_id = client.deployments.get_uid(model_deployment_details)
print(model_deployment_id)
The deployed function created in this notebook is designed to accept RGBA image data from an HTML canvas object in one of these sample apps:
Run the following cells to download and view sample canvas data for testing the deployed function.
# Download the file containing the sample data
sample_canvas_data_filename = 'mnist-html-canvas-image-data.json'
url = 'https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/data/mnist/' + sample_canvas_data_filename
if not os.path.isfile(sample_canvas_data_filename): wget.download(url)
# Load the sample data from the file into a variable
with open(sample_canvas_data_filename) as data_file:
    sample_cavas_data = json.load(data_file)
# View the raw contents of the sample data
print("Height (n): " + str(sample_cavas_data["height"]) + " pixels\n")
print("Num image data entries: " + str(len( sample_cavas_data["data"])) + " - (n * n * 4) elements - RGBA values\n")
print(json.dumps(sample_cavas_data, indent=3)[:75] + "...\n" + json.dumps(sample_cavas_data, indent=3)[-50:])
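To make the canvas layout concrete: an HTML canvas of height n stores n * n pixels, each as four consecutive integers (R, G, B, A), so the flat data list has n * n * 4 elements. A minimal sketch with synthetic data (not the downloaded sample):

```python
import numpy as np

# Synthetic stand-in for HTML canvas ImageData: a 4 x 4 canvas,
# every pixel fully opaque black (R=0, G=0, B=0, A=255), row-major.
n = 4
flat = [0, 0, 0, 255] * (n * n)                 # n * n * 4 = 64 elements
rgba_arr = np.asarray(flat, dtype='uint8').reshape(n, n, 4)

print(len(flat))                # 64
print(rgba_arr.shape)           # (4, 4, 4)
print(int(rgba_arr[..., 3].max()))  # alpha channel: 255 everywhere
```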
# See what hand-drawn digit the sample data represents
rgba_arr = np.asarray(sample_cavas_data["data"]).astype('uint8')
n = sample_cavas_data["height"]
plt.figure()
plt.imshow( rgba_arr.reshape(n, n, 4))
plt.xticks([])
plt.yticks([])
plt.show()
The basics of creating and deploying functions in Watson Machine Learning are given here:
ai_parms = {"wml_credentials": wml_credentials, "space_id": space_id, "model_deployment_id": model_deployment_id}
def my_deployable_function(parms=ai_parms):

    try:
        import subprocess
        subprocess.check_output("pip install pillow --user", stderr=subprocess.STDOUT, shell=True)
    except subprocess.CalledProcessError as e:
        install_err = "subprocess.CalledProcessError:\n\n" + "cmd:\n" + e.cmd + "\n\noutput:\n" + e.output.decode()
        raise Exception("Installing failed:\n" + install_err)

    def getRGBAArr(canvas_data):
        import numpy as np
        dimension = canvas_data["height"]
        rgba_data = canvas_data["data"]
        rgba_arr = np.asarray(rgba_data).astype('uint8')
        return rgba_arr.reshape(dimension, dimension, 4)

    def getNormAlphaList(img):
        import numpy as np
        alpha_arr = np.array(img.split()[-1])
        norm_alpha_arr = alpha_arr / 255
        norm_alpha_list = norm_alpha_arr.reshape(1, 784).tolist()
        return norm_alpha_list

    def score(function_payload):
        try:
            from PIL import Image
            canvas_data = function_payload["input_data"][0]["values"][0]  # Read the payload received by the function
            rgba_arr = getRGBAArr(canvas_data)            # Create an array object with the required shape
            img = Image.fromarray(rgba_arr, 'RGBA')       # Create an image object that can be resized
            sm_img = img.resize((28, 28), Image.LANCZOS)  # Resize the image to 28 x 28 pixels
            alpha_list = getNormAlphaList(sm_img)         # Create a 1 x 784 array of values between 0 and 1
            model_payload = {"input_data": [{"values": alpha_list}]}  # Create a payload to be sent to the model
            #print("Payload for model:")  # For debugging purposes
            #print(model_payload)         # For debugging purposes
            from ibm_watson_machine_learning import APIClient
            client = APIClient(parms["wml_credentials"])
            client.set.default_space(parms["space_id"])
            model_result = client.deployments.score(parms["model_deployment_id"], model_payload)
            digit_class = model_result["predictions"][0]["values"][0]
            return model_result
        except Exception as e:
            return {'predictions': [{'values': [repr(e)]}]}
            #return {"error": repr(e)}

    return score
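The core preprocessing inside score can be exercised on its own: the 28 x 28 alpha channel is scaled to [0, 1] and flattened to shape (1, 784), the input shape the MNIST model expects. A standalone sketch using a synthetic alpha array (no Pillow or service connection needed):

```python
import numpy as np

# Synthetic 28 x 28 alpha channel, fully opaque everywhere (255),
# standing in for the alpha channel of the resized canvas image.
alpha_arr = np.full((28, 28), 255, dtype='uint8')

# Same normalization as getNormAlphaList: scale to [0, 1], flatten to (1, 784)
norm_alpha_list = (alpha_arr / 255).reshape(1, 784).tolist()

print(len(norm_alpha_list), len(norm_alpha_list[0]))  # 1 784
```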
You can test your function in the notebook before deploying it. To see debugging info, uncomment the print statements inside the score function.
# Pass the sample canvas data to the function as a test
func_result = my_deployable_function()({"input_data": [{"values": [sample_cavas_data]}]})
print(func_result)
Before you can deploy the function, you must store the function in your Watson Machine Learning repository.
# Look up software specification for the deployable function
software_spec_uid = client.software_specifications.get_id_by_name("runtime-23.1-py3.10")
# Store the deployable function in your Watson Machine Learning repository
meta_data = {
    client.repository.FunctionMetaNames.NAME: 'MNIST deployable function',
    client.repository.FunctionMetaNames.SOFTWARE_SPEC_UID: software_spec_uid
}
function_details = client.repository.store_function(meta_props=meta_data, function=my_deployable_function)
# Get published function ID
function_uid = client.repository.get_function_uid(function_details)
# Deploy the stored function
metadata = {
    client.deployments.ConfigurationMetaNames.NAME: "MNIST function deployment",
    client.deployments.ConfigurationMetaNames.ONLINE: {}
}
function_deployment_details = client.deployments.create(function_uid, meta_props=metadata)
You can use the Watson Machine Learning Python client or REST API to send data to your function deployment for processing in exactly the same way you send data to model deployments for processing.
# Get the endpoint URL of the function deployment just created
function_deployment_id = client.deployments.get_uid(function_deployment_details)
function_deployment_endpoint_url = client.deployments.get_scoring_href(function_deployment_details)
print(function_deployment_id)
print(function_deployment_endpoint_url)
payload = {"input_data": [{"values": [sample_cavas_data]}]}
result = client.deployments.score(function_deployment_id, payload)
if "error" in result:
    print(result["error"])
else:
    print(result)
# Get an IAM token from IBM Cloud
url = "https://iam.cloud.ibm.com/identity/token"
headers = {"Content-Type": "application/x-www-form-urlencoded"}
data = "apikey=" + wml_credentials["apikey"] + "&grant_type=urn:ibm:params:oauth:grant-type:apikey"
IBM_cloud_iam_uid = "bx"
IBM_cloud_iam_pwd = "bx"
response = requests.post(url, headers=headers, data=data, auth=(IBM_cloud_iam_uid, IBM_cloud_iam_pwd))
if 200 != response.status_code:
    print(response.status_code)
    print(response.reason)
else:
    iam_token = response.json()["access_token"]
# Send data to deployment for processing
headers = {"Content-Type": "application/json",
           "Authorization": "Bearer " + iam_token}
params = {"version": "2020-08-01"}
response = requests.post(function_deployment_endpoint_url, json=payload, params=params, headers=headers)
print(response.text)
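Whichever route you use, the scoring response nests the model output under predictions. Assuming the model returns one probability per digit class (a common MNIST output shape; the numbers below are made up for illustration), the predicted digit is the index of the largest value:

```python
# Made-up response in the assumed shape: ten class probabilities for one image
sample_response = {"predictions": [{"values": [[0.01, 0.01, 0.01, 0.01, 0.01,
                                                0.90, 0.01, 0.01, 0.02, 0.01]]}]}

probs = sample_response["predictions"][0]["values"][0]
predicted_digit = max(range(len(probs)), key=probs.__getitem__)  # index of the max probability
print(predicted_digit)  # 5
```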
If you want to clean up all created assets, follow the steps in this sample notebook.
In this notebook, you created a Python function that receives HTML canvas image data and then processes and sends that data to a model trained to recognize handwritten digits.
To learn how you can use this deployed function in a web app, see:
Sarah Packowski is a member of the IBM Data & AI Content Design team in Canada.