This notebook should be run in a Watson Studio project, using the latest Python runtime environment. It requires service credentials for the following Cloud services:
The notebook will train and deploy a German Credit Risk model.
Before you use the sample code in this notebook, you must perform the following setup tasks:
Authenticate the Watson Machine Learning service on IBM Cloud. You need to provide your platform api_key and instance location.
You can use the IBM Cloud CLI to retrieve your platform API key and instance location.
An API key can be generated as follows:
ibmcloud login
ibmcloud iam api-key-create API_KEY_NAME
From the output, copy the value of api_key.
The location of your WML instance can be retrieved as follows:
ibmcloud login --apikey API_KEY -a https://cloud.ibm.com
ibmcloud resource service-instance WML_INSTANCE_NAME
From the output, copy the value of location.
Tip: Your Cloud API key can be generated by going to the Users section of the Cloud console. From that page, click your name, scroll down to the API Keys section, and click Create an IBM Cloud API key. Give your key a name and click Create, then copy the created key and paste it below. You can also get a service-specific URL by going to the Endpoint URLs section of the Watson Machine Learning docs. You can check your instance location in your Watson Machine Learning (WML) Service instance details.
You can also get a service-specific apikey by going to the Service IDs section of the Cloud console. From that page, click Create, then copy the created key and paste it below.
Action: Enter your api_key and location in the following cell.
api_key = 'PASTE YOUR PLATFORM API KEY HERE'
location = 'PASTE YOUR INSTANCE LOCATION HERE'
wml_credentials = {
"apikey": api_key,
"url": 'https://' + location + '.ml.cloud.ibm.com'
}
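For illustration, here is how the credentials above resolve for a hypothetical location of us-south (the api_key placeholder is left unchanged):

```python
# Hypothetical example: 'us-south' stands in for your actual instance location.
api_key = 'PASTE YOUR PLATFORM API KEY HERE'
location = 'us-south'

wml_credentials = {
    "apikey": api_key,
    "url": 'https://' + location + '.ml.cloud.ibm.com'
}
print(wml_credentials["url"])  # https://us-south.ml.cloud.ibm.com
```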
!pip install -U ibm-watson-machine-learning
from ibm_watson_machine_learning import APIClient
client = APIClient(wml_credentials)
First of all, you need to create a space that will be used for your work. If you do not have a space already created, you can use the Deployment Spaces Dashboard to create one. Copy the space_id and paste it below.
Tip: You can also use the SDK to prepare the space for your work. More information can be found here.
Action: Assign space ID below
space_id = 'PASTE YOUR SPACE ID HERE'
You can use the list method to print all existing spaces.
client.spaces.list(limit=10)
To be able to interact with all resources available in Watson Machine Learning, you need to set the space which you will be using.
client.set.default_space(space_id)
In the next cells, you will need to paste some credentials for Cloud Object Storage (COS). If you haven't worked with COS yet, please visit the getting started with COS tutorial.
You can find the COS_API_KEY_ID and COS_RESOURCE_CRN variables under Service Credentials in the menu of your COS instance. The COS Service Credentials used must be created with the Role parameter set to Writer. Later, the training data file will be loaded into the bucket of your instance and used as a training reference in the subscription.
The COS_ENDPOINT variable can be found in the Endpoint field of the menu.
cos_credentials = client.spaces.get_details(space_id=space_id)['entity']['storage']['properties']
datasource_name = 'bluemixcloudobjectstorage'
bucketname = cos_credentials['bucket_name']
conn_meta_props= {
client.connections.ConfigurationMetaNames.NAME: f"Connection to Database - {datasource_name} ",
client.connections.ConfigurationMetaNames.DATASOURCE_TYPE: client.connections.get_datasource_type_uid_by_name(datasource_name),
client.connections.ConfigurationMetaNames.DESCRIPTION: "Connection to external Database",
client.connections.ConfigurationMetaNames.PROPERTIES: {
'bucket': bucketname,
'access_key': cos_credentials['credentials']['editor']['access_key_id'],
'secret_key': cos_credentials['credentials']['editor']['secret_access_key'],
'iam_url': 'https://iam.cloud.ibm.com/identity/token',
'url': cos_credentials['endpoint_url']
}
}
conn_details = client.connections.create(meta_props=conn_meta_props)
Note: The above connection can alternatively be initialized with api_key and resource_instance_id. The above cell can be replaced with:
conn_meta_props= {
client.connections.ConfigurationMetaNames.NAME: f"Connection to Database - {db_name} ",
client.connections.ConfigurationMetaNames.DATASOURCE_TYPE: client.connections.get_datasource_type_uid_by_name(db_name),
client.connections.ConfigurationMetaNames.DESCRIPTION: "Connection to external Database",
client.connections.ConfigurationMetaNames.PROPERTIES: {
'bucket': bucket_name,
'api_key': cos_credentials['apikey'],
'resource_instance_id': cos_credentials['resource_instance_id'],
'iam_url': 'https://iam.cloud.ibm.com/identity/token',
'url': 'https://s3.us.cloud-object-storage.appdomain.cloud'
}
}
conn_details = client.connections.create(meta_props=conn_meta_props)
connection_id = client.connections.get_uid(conn_details)
At this point, the notebook is ready to run. You can either run the cells one at a time, or click the Kernel option above and select Restart and Run All to run all the cells.
In this section you will learn how to train a Scikit-learn model and then deploy it as a web service using the Watson Machine Learning service.
!rm -f german_credit_data_biased_training.csv
!wget https://raw.githubusercontent.com/pmservice/ai-openscale-tutorials/master/assets/historical_data/german_credit_risk/wml/german_credit_data_biased_training.csv
import numpy as np
import pandas as pd
training_data_file_name = "german_credit_data_biased_training.csv"
data_df = pd.read_csv(training_data_file_name)
data_df.head()
print('Columns: ', list(data_df.columns))
print('Number of columns: ', len(data_df.columns))
As you can see, the data contains twenty-one fields. The Risk field is the one you would like to predict using feedback data.
print('Number of records: ', data_df.Risk.count())
target_count = data_df.groupby('Risk')['Risk'].count()
target_count
target_count.plot.pie(figsize=(8, 8));
from ibm_watson_machine_learning.helpers import DataConnection, S3Location
data_connections = []
data_connection = DataConnection(
connection_asset_id=connection_id,
location=S3Location(bucket=cos_credentials['bucket_name'],
path=training_data_file_name)
)
data_connection.set_client(client)
data_connection.write(data=training_data_file_name, remote_name=training_data_file_name)
data_connections.append(data_connection)
In this section you will learn how to train, save, and deploy the model.
MODEL_NAME = "Scikit German Risk Model WML V4"
DEPLOYMENT_NAME = "Scikit German Risk Deployment WML V4"
from sklearn.linear_model import SGDClassifier
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
from sklearn.decomposition import TruncatedSVD
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
train_data, test_data = train_test_split(data_df, test_size=0.2)
features_idx = np.s_[0:-1]
all_records_idx = np.s_[:]
first_record_idx = np.s_[0]
In this step you will encode the target column labels into numeric values. You can use inverse_transform to decode numeric predictions back into labels.
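As a minimal standalone illustration of this idea (a sketch, not part of the pipeline below), scikit-learn's LabelEncoder assigns integers to the two Risk labels in sorted class order, and inverse_transform decodes them back:

```python
from sklearn.preprocessing import LabelEncoder

# Encode the two Risk labels to integers and decode them back.
# Classes are assigned integers in sorted order: 'No Risk' -> 0, 'Risk' -> 1.
le = LabelEncoder()
encoded = le.fit_transform(['No Risk', 'Risk', 'No Risk'])
print(encoded)                        # [0 1 0]
print(le.inverse_transform(encoded))  # ['No Risk' 'Risk' 'No Risk']
```

Note that the evaluation cells further down perform this mapping manually ('No Risk' → 0, 'Risk' → 1), which matches the sorted order used here.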
string_fields = [type(fld) is str for fld in train_data.iloc[first_record_idx, features_idx]]
ct = ColumnTransformer([("ohe", OneHotEncoder(), list(np.array(train_data.columns)[features_idx][string_fields]))])
clf_linear = SGDClassifier(loss='log_loss', penalty='l2', max_iter=1000, tol=1e-5)
pipeline_linear = Pipeline([('ct', ct), ('clf_linear', clf_linear)])
risk_model = pipeline_linear.fit(train_data.drop('Risk', axis=1), train_data.Risk)
from sklearn.metrics import roc_auc_score
predictions = risk_model.predict(test_data.drop('Risk', axis=1))
indexed_preds = [0 if prediction=='No Risk' else 1 for prediction in predictions]
real_observations = test_data.Risk.replace('Risk', 1)
real_observations = real_observations.replace('No Risk', 0).values
auc = roc_auc_score(real_observations, indexed_preds)
print(auc)
In this section, the notebook uses the supplied Watson Machine Learning credentials to save the model (including the pipeline) to the WML instance. Previous versions of the model are removed so that the notebook can be run again, resetting all data for another demo.
import json
software_spec_uid = client.software_specifications.get_id_by_name("runtime-23.1-py3.10")
print("Software Specification ID: {}".format(software_spec_uid))
model_props = {
client.repository.ModelMetaNames.NAME: "{}".format(MODEL_NAME),
client.repository.ModelMetaNames.TYPE: 'scikit-learn_1.1',
client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: software_spec_uid
}
print("Storing model ...")
published_model_details = client.repository.store_model(model=risk_model, meta_props=model_props, training_data=data_df.drop(["Risk"], axis=1), training_target=data_df.Risk)
model_uid = client.repository.get_model_id(published_model_details)
print("Done")
print("Model ID: {}".format(model_uid))
The next section of the notebook deploys the model as a RESTful web service in Watson Machine Learning. The deployed model will have a scoring URL you can use to send data to the model for predictions.
print("Deploying model...")
metadata = {
client.deployments.ConfigurationMetaNames.NAME: DEPLOYMENT_NAME,
client.deployments.ConfigurationMetaNames.ONLINE: {}
}
deployment = client.deployments.create(model_uid, meta_props=metadata)
deployment_uid = client.deployments.get_uid(deployment)
print("Model id: {}".format(model_uid))
print("Deployment id: {}".format(deployment_uid))
fields = ["CheckingStatus", "LoanDuration", "CreditHistory", "LoanPurpose", "LoanAmount", "ExistingSavings",
"EmploymentDuration", "InstallmentPercent", "Sex", "OthersOnLoan", "CurrentResidenceDuration",
"OwnsProperty", "Age", "InstallmentPlans", "Housing", "ExistingCreditsCount", "Job", "Dependents",
"Telephone", "ForeignWorker"]
values = [
["no_checking", 13, "credits_paid_to_date", "car_new", 1343, "100_to_500", "1_to_4", 2, "female", "none", 3,
"savings_insurance", 46, "none", "own", 2, "skilled", 1, "none", "yes"],
["no_checking", 24, "prior_payments_delayed", "furniture", 4567, "500_to_1000", "1_to_4", 4, "male", "none",
4, "savings_insurance", 36, "none", "free", 2, "management_self-employed", 1, "none", "yes"],
]
scoring_payload = {"input_data": [{"fields": fields, "values": values}]}
predictions = client.deployments.score(deployment_uid, scoring_payload)
predictions
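The exact structure of the returned predictions depends on the model, but a typical V4 online scoring response can be parsed as sketched below. The mock_response dict is an illustrative assumption that mimics the usual 'predictions' → 'fields'/'values' shape; check it against your actual output:

```python
# Illustrative mock of a typical WML V4 scoring response (an assumption,
# not the guaranteed shape for every model type).
mock_response = {
    "predictions": [{
        "fields": ["prediction", "probability"],
        "values": [
            ["No Risk", [0.81, 0.19]],
            ["Risk", [0.35, 0.65]],
        ],
    }]
}

# Extract the predicted label for each scored record.
result = mock_response["predictions"][0]
pred_idx = result["fields"].index("prediction")
labels = [row[pred_idx] for row in result["values"]]
print(labels)  # ['No Risk', 'Risk']
```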
If you want to clean up all created assets, please follow this sample notebook.
You successfully completed this notebook!
You have finished the hands-on lab for IBM Watson Machine Learning. You created, published, and deployed a Scikit-learn German credit risk model.
Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.
You can now run the model monitoring notebook. You need to pass the deployed model ID to that notebook.
Lukasz Cmielowski, PhD, is an Automation Architect and Data Scientist at IBM with a track record of developing enterprise-level applications that substantially increase clients' ability to turn data into actionable knowledge.
Szymon Kucharczyk, Software Engineer at IBM Watson Machine Learning.
Copyright © 2020 IBM. This notebook and its source code are released under the terms of the MIT License.