ibm-watson-machine-learning
This notebook contains steps to deploy a sample SPSS stream and start batch scoring new data.
Some familiarity with bash is helpful. This notebook uses Python 3.8.
You will use the Telco Customer Churn data set, which contains anonymized customer data from a telecommunication company. You will use this data to predict customer churn, which is critical to the business: it is easier to retain existing customers than to acquire new ones.
The learning goals of this notebook are:
- Working with the Watson Machine Learning service on IBM Cloud
- Creating a batch deployment for an SPSS model
- Scoring data stored in Db2 using deployment jobs

This notebook contains the following parts: setup and authentication, model upload, batch deployment creation, scoring jobs against Db2, loading the scored data, and cleanup.
Before you use the sample code in this notebook, create a Watson Machine Learning (WML) Service instance (a free plan is offered and information about how to create the instance can be found here).
Authenticate the Watson Machine Learning service on IBM Cloud. You need to provide your platform api_key and instance location.
You can use the IBM Cloud CLI to retrieve the platform API key and instance location.
An API key can be generated as follows:
ibmcloud login
ibmcloud iam api-key-create API_KEY_NAME
Copy the value of api_key from the output.
The location of your WML instance can be retrieved as follows:
ibmcloud login --apikey API_KEY -a https://cloud.ibm.com
ibmcloud resource service-instance WML_INSTANCE_NAME
Copy the value of location from the output.
Tip: You can generate your Cloud API key by going to the Users section of the Cloud console. From that page, click your name, scroll down to the API Keys section, and click Create an IBM Cloud API key. Give your key a name, click Create, then copy the created key and paste it below.
You can also get the service-specific url by going to the Endpoint URLs section of the Watson Machine Learning docs. You can check your instance location in your Watson Machine Learning (WML) Service instance details.
You can also get a service-specific apikey by going to the Service IDs section of the Cloud console. From that page, click Create, then copy the created key and paste it below.
Action: Enter your api_key and location in the following cell.
api_key = 'PASTE YOUR PLATFORM API KEY HERE'
location = 'PASTE YOUR INSTANCE LOCATION HERE'
wml_credentials = {
    "apikey": api_key,
    "url": 'https://' + location + '.ml.cloud.ibm.com'
}
!pip install -U ibm-watson-machine-learning
from ibm_watson_machine_learning import APIClient
client = APIClient(wml_credentials)
First, create a space that will be used for your work. If you do not have a space, you can use the Deployment Spaces Dashboard to create one. Copy the space_id and paste it below.

Tip: You can also use the SDK to prepare the space for your work, as shown in the sketch below. More information can be found here.
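For example, a minimal sketch of creating a space with the SDK. The space name is illustrative, and the STORAGE and COMPUTE values are placeholders for your own Cloud Object Storage CRN and WML instance details:

space_meta_props = {
    client.spaces.ConfigurationMetaNames.NAME: "spss_batch_space",  # illustrative name
    client.spaces.ConfigurationMetaNames.DESCRIPTION: "Space for SPSS batch scoring",
    # Placeholder: the CRN of your Cloud Object Storage instance.
    client.spaces.ConfigurationMetaNames.STORAGE: {"resource_crn": "PASTE YOUR COS CRN HERE"},
    # Placeholder: the name and CRN of your Watson Machine Learning instance.
    client.spaces.ConfigurationMetaNames.COMPUTE: {
        "name": "PASTE YOUR WML INSTANCE NAME HERE",
        "crn": "PASTE YOUR WML INSTANCE CRN HERE"
    }
}
space_details = client.spaces.store(meta_props=space_meta_props)
space_id = client.spaces.get_id(space_details)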
Action: Assign space ID below
space_id = 'PASTE YOUR SPACE ID HERE'
You can use the list method to print all existing spaces.
client.spaces.list(limit=10)
To be able to interact with all resources available in Watson Machine Learning, you need to set the space you will be using.
client.set.default_space(space_id)
'SUCCESS'
In this section you will learn how to upload the model to the Cloud.
Action: Download the sample SPSS model from the git project using wget.
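Tip: If wget is not installed, execute the following command: !pip install wget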
import os
from wget import download

sample_dir = 'spss_sample_model'
# Create the target directory if it does not exist yet.
if not os.path.isdir(sample_dir):
    os.mkdir(sample_dir)

# Download the sample SPSS stream only if it is not already present.
filename = os.path.join(sample_dir, 'db2-customer-satisfaction-prediction.str')
if not os.path.isfile(filename):
    filename = download('https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/models/spss/db2_customer_satisfaction/model/db2-customer-satisfaction-prediction.str',
                        out=sample_dir)
print(filename)
Store the sample SPSS model in your Watson Machine Learning instance.
client.software_specifications.list()
sw_spec_uid = client.software_specifications.get_uid_by_name("spss-modeler_18.2")
model_meta_props = {
    client.repository.ModelMetaNames.NAME: "SPSS customer satisfaction model",
    client.repository.ModelMetaNames.TYPE: "spss-modeler_18.2",
    client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_uid
}
model_details = client.repository.store_model(filename, model_meta_props)
Note: You can see that the model is successfully stored in the Watson Machine Learning service.
client.repository.list_models()
You can use the commands below to create a Db2 connection and the data assets required for batch scoring.
schema_name = 'PUT YOUR SCHEMA NAME HERE'
db_name = 'db2'
input_table_1 = 'CUSTOMER'
input_table_2 = 'CUSTOMER_2'
output_table = 'OUTPUT'
db_credentials = {
    "db": "***",
    "host": "***",
    "https_url": "***",
    "password": "***",
    "port": "***",
    "username": "***"
}
db2_data_source_type_id = client.connections.get_datasource_type_uid_by_name(db_name)
db2_conn_meta_props = {
    client.connections.ConfigurationMetaNames.NAME: "conn_db2",
    client.connections.ConfigurationMetaNames.DATASOURCE_TYPE: db2_data_source_type_id,
    client.connections.ConfigurationMetaNames.DESCRIPTION: "Connection using DB2",
    client.connections.ConfigurationMetaNames.PROPERTIES: {
        "database": db_credentials["db"],
        "port": db_credentials["port"],
        "host": db_credentials["host"],
        "password": db_credentials["password"],
        "username": db_credentials["username"]
    }
}
db2_conn_details = client.connections.create(meta_props=db2_conn_meta_props)
Creating connections... SUCCESS
db2_conn_id = client.connections.get_uid(db2_conn_details)
db2_asset_meta_props = {
    client.data_assets.ConfigurationMetaNames.NAME: "INPUT_TABLE_1",
    client.data_assets.ConfigurationMetaNames.CONNECTION_ID: db2_conn_id,
    client.data_assets.ConfigurationMetaNames.DESCRIPTION: "db2 table",
    client.data_assets.ConfigurationMetaNames.DATA_CONTENT_NAME: input_table_1
}
db2_conn_input_asset_details = client.data_assets.store(db2_asset_meta_props)
input_data_1_href = client.data_assets.get_href(db2_conn_input_asset_details)
Creating data asset... SUCCESS
db2_asset_meta_props = {
    client.data_assets.ConfigurationMetaNames.NAME: "INPUT_TABLE_2",
    client.data_assets.ConfigurationMetaNames.CONNECTION_ID: db2_conn_id,
    client.data_assets.ConfigurationMetaNames.DESCRIPTION: "db2 table",
    client.data_assets.ConfigurationMetaNames.DATA_CONTENT_NAME: input_table_2
}
db2_conn_input_asset_details = client.data_assets.store(db2_asset_meta_props)
input_data_2_href = client.data_assets.get_href(db2_conn_input_asset_details)
Creating data asset... SUCCESS
db2_asset_meta_props = {
    client.data_assets.ConfigurationMetaNames.NAME: "OUTPUT_TABLE",
    client.data_assets.ConfigurationMetaNames.CONNECTION_ID: db2_conn_id,
    client.data_assets.ConfigurationMetaNames.DESCRIPTION: "db2 table",
    client.data_assets.ConfigurationMetaNames.DATA_CONTENT_NAME: output_table
}
db2_conn_output_asset_details = client.data_assets.store(db2_asset_meta_props)
output_data_href = client.data_assets.get_href(db2_conn_output_asset_details)
Creating data asset... SUCCESS
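You can list the data assets to verify that they were created:

client.data_assets.list()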
model_uid = client.repository.get_model_uid(model_details)
deployment = client.deployments.create(
    artifact_uid=model_uid,
    meta_props={
        client.deployments.ConfigurationMetaNames.NAME: "SPSS BATCH customer satisfaction",
        client.deployments.ConfigurationMetaNames.BATCH: {},
        client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: {
            "name": "S",
            "num_nodes": 1
        }
    }
)
#######################################################################################

Synchronous deployment creation for uid: 'b5f984cb-9b46-4884-aa9b-1541efab1faf' started

#######################################################################################

ready.

------------------------------------------------------------------------------------------------
Successfully finished deployment creation, deployment_uid='7ca37c54-19ab-4632-a191-625b3da3c898'
------------------------------------------------------------------------------------------------
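You can list the deployments in the space to confirm that the new batch deployment exists:

client.deployments.list()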
Create a job using a data reference of type data_asset pointing to the Db2 tables.

job_payload_ref = {
    client.deployments.ScoringMetaNames.INPUT_DATA_REFERENCES: [
        {
            "id": "conn_db2",
            "name": "input_data_1_href",
            "type": "data_asset",
            "connection": {},
            "location": {
                "href": input_data_1_href
            }
        },
        {
            "id": "conn_db2",
            "name": "input_data_2_href",
            "type": "data_asset",
            "connection": {},
            "location": {
                "href": input_data_2_href
            }
        }
    ],
    client.deployments.ScoringMetaNames.OUTPUT_DATA_REFERENCE: {
        "type": "data_asset",
        "connection": {},
        "location": {
            "href": output_data_href
        }
    }
}
deployment_uid = client.deployments.get_uid(deployment)
job = client.deployments.create_job(deployment_uid, meta_props=job_payload_ref)
You can retrieve the job ID.
job_id = client.deployments.get_job_uid(job)
Here you can check the status of your batch scoring job. When the batch job is completed, the results will be written to the Db2 output table.
import time

# Poll the job status every 10 seconds, for at most 5 minutes.
elapsed_time = 0
while client.deployments.get_job_status(job_id).get('state') != 'completed' and elapsed_time < 300:
    print(f" Current state: {client.deployments.get_job_status(job_id).get('state')}")
    elapsed_time += 10
    time.sleep(10)
if client.deployments.get_job_status(job_id).get('state') == 'completed':
    print(f" Current state: {client.deployments.get_job_status(job_id).get('state')}")
    job_details_do = client.deployments.get_job_details(job_id)
    print(job_details_do)
else:
    print("Job hasn't completed successfully in 5 minutes.")
 Current state: queued
 Current state: running
 Current state: completed
{'entity': {'deployment': {'id': 'ac0dfe9a-6c98-4eeb-967b-120f9234b0e5'}, 'platform_job': {'job_id': 'bfcf37c5-d190-49bd-abca-fb22116323f5', 'run_id': '361b151b-c5f6-48dc-ae1d-776642c9a692'}, 'scoring': {'input_data_references': [{'connection': {}, 'id': 'conn_db2', 'location': {'href': '/v2/assets/04f74392-0b89-4ca6-baa1-157e4295bd8b?space_id=680a7515-620c-461f-9c6f-1f4c535bfc47'}, 'type': 'data_asset'}, {'connection': {}, 'id': 'conn_db2', 'location': {'href': '/v2/assets/d1e1c2eb-d0eb-430f-8707-f948c0b37ba2?space_id=680a7515-620c-461f-9c6f-1f4c535bfc47'}, 'type': 'data_asset'}], 'output_data_reference': {'connection': {}, 'location': {'href': '/v2/assets/8009a976-1218-42b7-81fc-39b77ec225cd?space_id=680a7515-620c-461f-9c6f-1f4c535bfc47'}, 'type': 'data_asset'}, 'status': {'completed_at': '2021-05-13T08:26:06.058Z', 'running_at': '2021-05-13T08:25:58.434Z', 'state': 'completed'}}}, 'metadata': {'created_at': '2021-05-13T08:25:47.071Z', 'id': 'a5d0e0b3-62c5-41f4-879b-41702e16b185', 'modified_at': '2021-05-13T08:26:06.191Z', 'name': 'name_d68b05fd-7866-4a48-af56-235eb3f03809', 'space_id': '680a7515-620c-461f-9c6f-1f4c535bfc47'}}
Create a job using a data reference of type connection_asset pointing to the Db2 tables.

job_payload_ref = {
    client.deployments.ScoringMetaNames.INPUT_DATA_REFERENCES: [
        {
            "id": "conn_db2",
            "name": "input_table_1",
            "type": "connection_asset",
            "connection": {
                "id": db2_conn_id
            },
            "location": {
                "schema_name": schema_name,
                "file_name": input_table_1
            }
        },
        {
            "id": "conn_db2",
            "name": "input_table_2",
            "type": "connection_asset",
            "connection": {
                "id": db2_conn_id
            },
            "location": {
                "schema_name": schema_name,
                "file_name": input_table_2
            }
        }
    ],
    client.deployments.ScoringMetaNames.OUTPUT_DATA_REFERENCE: {
        "id": "conn_db2",
        "name": "output_table",
        "type": "connection_asset",
        "connection": {
            "id": db2_conn_id
        },
        "location": {
            "schema_name": schema_name,
            "file_name": output_table
        }
    }
}
deployment_uid = client.deployments.get_uid(deployment)
job = client.deployments.create_job(deployment_uid, meta_props=job_payload_ref)
Retrieve the job ID.
job_id = client.deployments.get_job_uid(job)
import time

# Poll the job status every 10 seconds, for at most 5 minutes.
elapsed_time = 0
while client.deployments.get_job_status(job_id).get('state') != 'completed' and elapsed_time < 300:
    print(f" Current state: {client.deployments.get_job_status(job_id).get('state')}")
    elapsed_time += 10
    time.sleep(10)
if client.deployments.get_job_status(job_id).get('state') == 'completed':
    print(f" Current state: {client.deployments.get_job_status(job_id).get('state')}")
    job_details_do = client.deployments.get_job_details(job_id)
    print(job_details_do)
else:
    print("Job hasn't completed successfully in 5 minutes.")
 Current state: queued
 Current state: running
 Current state: completed
{'entity': {'deployment': {'id': 'ac0dfe9a-6c98-4eeb-967b-120f9234b0e5'}, 'platform_job': {'job_id': 'bfcf37c5-d190-49bd-abca-fb22116323f5', 'run_id': '4d83ddcb-88ca-45ca-8209-446698936b8d'}, 'scoring': {'input_data_references': [{'connection': {'id': 'ae2a374b-85fd-4823-b6f5-95f11662324f'}, 'id': 'conn_db2', 'location': {'file_name': 'CUSTOMER', 'schema_name': 'PUT YOUR SCHEMA NAME HERE'}, 'type': 'connection_asset'}, {'connection': {'id': 'ae2a374b-85fd-4823-b6f5-95f11662324f'}, 'id': 'conn_db2', 'location': {'file_name': 'CUSTOMER_2', 'schema_name': 'PUT YOUR SCHEMA NAME HERE'}, 'type': 'connection_asset'}], 'output_data_reference': {'connection': {'id': 'ae2a374b-85fd-4823-b6f5-95f11662324f'}, 'id': 'conn_db2', 'location': {'file_name': 'OUTPUT', 'schema_name': 'PUT YOUR SCHEMA NAME HERE'}, 'type': 'connection_asset'}, 'status': {'completed_at': '2021-05-13T08:26:43.422Z', 'running_at': '2021-05-13T08:26:35.279Z', 'state': 'completed'}}}, 'metadata': {'created_at': '2021-05-13T08:26:23.663Z', 'id': '77973aa5-fbd0-41b8-9012-a4e2016a57dc', 'modified_at': '2021-05-13T08:26:43.622Z', 'name': 'name_3dc38b68-0227-4422-b7eb-b9bdd3f72d95', 'space_id': '680a7515-620c-461f-9c6f-1f4c535bfc47'}}
In this subsection you will load the scored data from the Db2 output table.
Tip: To install requests execute the following command: !pip install requests
import requests

# Authenticate against the Db2 REST API and build the authorization header.
host = db_credentials["https_url"] + "/dbapi/v3"
url = host + "/auth/tokens"
token = requests.post(url, json={
    "userid": db_credentials["username"],
    "password": db_credentials["password"]}).json()['token']
auth_header = {
    "Authorization": f"Bearer {token}"
}
# Submit a SQL job that reads the scored rows from the output table.
sql_command = {
    "commands": "SELECT * FROM OUTPUT",
    "limit": 100,
    "separator": ",",
    "stop_on_error": "yes"
}
url = host + "/sql_jobs"
jobid = requests.post(url, headers=auth_header, json=sql_command).json()['id']
resp = requests.get(f"{url}/{jobid}", headers=auth_header)
results = resp.json()["results"][0]
columns = results["columns"]
rows = results["rows"]
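Note: The SQL job runs asynchronously, so the first fetch may return before the query has finished. A minimal retry sketch, assuming the response reports progress in a status field as in the Db2 on Cloud REST API:

import time

# Re-fetch the job until it reports completion; the 'status' field and its
# 'completed' value are assumptions based on the Db2 on Cloud REST API.
for _ in range(30):
    resp = requests.get(f"{url}/{jobid}", headers=auth_header)
    if resp.json().get("status") == "completed":
        break
    time.sleep(2)
results = resp.json()["results"][0]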
Tip: To install pandas execute the following command: !pip install pandas
import pandas as pd
pd.DataFrame(data=rows, columns=columns)
 | customerID | Churn | Predicted Churn | Probability of Churn
---|---|---|---|---
0 | 9237-HQITU | Yes | Yes | 0.8829830706957551 |
1 | 3638-WEABW | No | No | 0.0526309571556145 |
2 | 8665-UTDHZ | Yes | No | 0.17411004057470159 |
3 | 8773-HHUOZ | Yes | No | 0.48432324905415836 |
4 | 4080-IIARD | No | No | 0.0920141258229612 |
5 | 6575-SUVOI | No | No | 0.0920919392791626 |
6 | 7495-OOKFY | Yes | Yes | 0.9721495250458333 |
7 | 0731-EBJQB | No | No | 0.09059837844121355 |
8 | 1891-QRQSA | No | No | 0.09210273951918213 |
9 | 5919-TMRGD | Yes | Yes | 0.8942276923073484 |
10 | 9364-YKUVW | No | No | 0.08573913673013789 |
11 | 7563-BIUPC | Yes | Yes | 0.878469673954569 |
12 | 4183-MYFRB | No | No | 0.41029784048995666 |
13 | 5729-KLZAR | Yes | Yes | 0.9589057770843653 |
14 | 9102-OXKFY | No | No | 0.0920128511710483 |
15 | 6078-VESFR | No | No | 0.25027748128974314 |
16 | 9979-RGMZT | No | No | 0.08438119760083618 |
17 | 1965-DDBWU | Yes | No | 0.24591107604601636 |
18 | 7159-NOKYQ | No | No | 0.008693015730152798 |
19 | 6778-YSNIH | No | No | 0.04730024365943297 |
If you want to clean up all created assets, see the steps in this sample notebook.
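A minimal sketch of the individual cleanup calls, assuming the IDs created earlier in this notebook are still in scope:

# Delete the batch deployment, the stored model, a data asset, and the Db2 connection.
client.deployments.delete(deployment_uid)
client.repository.delete(model_uid)
client.data_assets.delete(client.data_assets.get_uid(db2_conn_output_asset_details))
client.connections.delete(db2_conn_id)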
You successfully completed this notebook! You learned how to use Watson Machine Learning for SPSS model deployment and scoring. Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.
Author: Jan Sołtysik, Intern in Watson Machine Learning.
Copyright © 2020, 2021, 2022 IBM. This notebook and its source code are released under the terms of the MIT License.