Deploying using the Python client

Starting with a trained machine learning model, save the model to IBM Watson Machine Learning using the Python client, then deploy and score it.

You will also learn how to list all of the deployment jobs for a given space.

Watson Machine Learning Python client library reference

You can access a reference to all of the Python commands for Watson Machine Learning here: Watson Machine Learning Python client library

Before you can deploy, you must save the model to a deployment space on Watson Machine Learning.

Note: After you save a model to a deployment space, you can view it from the space and create a deployment in the user interface. For details, see Creating a deployment.

This topic steps you through the process of saving, then deploying, a sample model.

Save a model to the repository

  1. Add a notebook to your project by clicking Add to project and selecting Notebook.

  2. Authenticate with the Python client, following the instructions in Authentication.

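For IBM Cloud, the credentials are typically a dictionary containing the service endpoint URL and an API key. A minimal sketch, assuming the us-south public endpoint; substitute your own region and key:

```python
# Hypothetical credentials dictionary; the url shown is the us-south public
# endpoint and the apikey value is a placeholder, not a real key.
wml_credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",
    "apikey": "<YOUR_IBM_CLOUD_API_KEY>"
}
```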
  3. Initialize the client with the credentials:

     from ibm_watson_machine_learning import APIClient
     client = APIClient(wml_credentials)
    
  4. (Optional) Create a new deployment space, entering the credentials for your Cloud Object Storage instance. To use an existing deployment space instead, skip this step and enter the name of the space in step 6.

     metadata = {
         client.spaces.ConfigurationMetaNames.NAME: 'YOUR DEPLOYMENT SPACE NAME',
         client.spaces.ConfigurationMetaNames.DESCRIPTION: 'description',
         client.spaces.ConfigurationMetaNames.STORAGE: {
             "type": "bmcos_object_storage",
             "resource_crn": 'PROVIDE COS RESOURCE CRN'
         },
         client.spaces.ConfigurationMetaNames.COMPUTE: {
             "name": 'INSTANCE NAME',
             "crn": 'PROVIDE THE INSTANCE CRN'
         }
     }
    
     space_details = client.spaces.store(meta_props=metadata)
    
  5. Get the ID for the deployment space:
     def guid_from_space_name(client, space_name):
         space = client.spaces.get_details()
         return next(item for item in space['resources'] if item['entity']['name'] == space_name)['metadata']['guid']
    
  6. Enter the details for the deployment space, putting the name of your deployment space in place of 'YOUR DEPLOYMENT SPACE'.
     space_uid = guid_from_space_name(client, 'YOUR DEPLOYMENT SPACE')
     print("Space UID = " + space_uid)
    

    Out: Space UID = b8eb6ec0-dcc7-425c-8280-30a1d7a9c58a

  7. Set the default deployment space to work in:

     client.set.default_space(space_uid)
    

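If the space name is not found, the next() call in the step 5 helper raises StopIteration. A slightly safer variant (a sketch, assuming the same get_details() response shape) returns None instead:

```python
def guid_from_space_name(client, space_name):
    """Return the GUID of the space with the given name, or None if absent."""
    spaces = client.spaces.get_details()
    for item in spaces.get("resources", []):
        if item["entity"]["name"] == space_name:
            return item["metadata"]["guid"]
    return None
```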
Get the software specification

Your model requires a software specification to run.

  1. To view the list of predefined specifications:

     client.software_specifications.list()
    
  2. Find the ID of the software specification environment that the model will use:

     software_spec_id =  client.software_specifications.get_id_by_name('spss-modeler_18.1')
     print(software_spec_id)
    

Store the model

  1. Store the trained model in the repository and get the model ID. To do so, enter the absolute path of the trained model file, as well as the model name, model type, and software specification ID. Note that the model name cannot contain characters such as [ ] { } | \ " % ~ # < > that conflict with forming a valid HTTP request.

     model_details = client.repository.store_model(model="<Trained Model file>",meta_props={
     client.repository.ModelMetaNames.NAME:"<Model Name>",
     client.repository.ModelMetaNames.TYPE:"<model type>",
     client.repository.ModelMetaNames.SOFTWARE_SPEC_UID:software_spec_id }
                                              )
     model_id = client.repository.get_model_id(model_details)
    

    For example, a trained SPSS model might have metadata like this:

     model_details = client.repository.store_model(model="example.com/my_spss_model",meta_props={
     client.repository.ModelMetaNames.NAME:"my_spss_model",
     client.repository.ModelMetaNames.TYPE:"spss-modeler_18.1",
     client.repository.ModelMetaNames.SOFTWARE_SPEC_UID:software_spec_id }
                                              )
     model_id = client.repository.get_model_id(model_details)
    
  2. Print the model ID:

     print(model_id)
    

    Out: 8a8e68a6-038c-4e13-90d6-729bee9a99cd

Create an online deployment

Follow these steps to create an online deployment where you can get realtime scores for your model.

  1. To select the hardware runtime environment to deploy the model, first view the available hardware configurations:

     client.hardware_specifications.list() 
    
  2. Select a hardware configuration:

     hardware_spec_id = client.hardware_specifications.get_id_by_name('NAME OF THE HARDWARE SPECIFICATION')
    

    For example:

     hardware_spec_id = client.hardware_specifications.get_id_by_name('M')
    
  3. Create and name the online deployment for the model you persisted.

     dep_details = client.deployments.create(artifact_uid=model_id,meta_props={
         client.deployments.ConfigurationMetaNames.NAME:"<ONLINE_DEPLOYMENT_NAME>",
         client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: { "id": hardware_spec_id},
         client.deployments.ConfigurationMetaNames.ONLINE:{}})
    

    Out: Successfully finished deployment creation, deployment_uid='9a095a83-3c91-4d10-8a5c-f967a7702902'

  4. Get the deployment ID.

     dep_id = client.deployments.get_uid(dep_details)
    
  5. To score the deployed model, construct the payload following the schema of the model.

     fields = ["Age","Sex","BP","Cholesterol","Na","K"]
     values = [[23,"F","HIGH","HIGH",0.792535,0.031258]]
    
     scoring_payload = {
     client.deployments.ScoringMetaNames.INPUT_DATA: [{  
            "fields": fields, 
            "values": values
         }]
     }
    
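In the client, ScoringMetaNames.INPUT_DATA resolves to the key "input_data" (an assumption about current client versions), so the payload above is equivalent to this plain dictionary, shown here with the same sample fields:

```python
# Equivalent payload as a plain dictionary (assumes ScoringMetaNames.INPUT_DATA
# resolves to the "input_data" key, as in current client versions).
scoring_payload = {
    "input_data": [{
        "fields": ["Age", "Sex", "BP", "Cholesterol", "Na", "K"],
        "values": [[23, "F", "HIGH", "HIGH", 0.792535, 0.031258]],
    }]
}
```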
  6. Run the score function to generate the prediction.

     client.deployments.score(deployment_id=dep_id,meta_props=scoring_payload)
    
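The score call returns a dictionary of predictions mirroring the fields/values layout of the input. A sketch of extracting the first predicted value, using a hypothetical response for illustration (the actual fields and values depend on your model):

```python
# Hypothetical scoring response; "prediction", "probability", and the
# values shown are placeholders, not output from a real deployment.
response = {
    "predictions": [{
        "fields": ["prediction", "probability"],
        "values": [["drugY", [0.82, 0.18]]]
    }]
}
first_prediction = response["predictions"][0]["values"][0][0]
```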


  7. Finally, delete the deployment.

     client.deployments.delete(dep_id)
    

Create a batch deployment

Create a batch deployment to submit a data asset containing multiple payloads, and write the resulting predictions to an output file.

  1. Create and name the batch deployment.

     dep_details = client.deployments.create(artifact_uid=model_id,meta_props={
     client.deployments.ConfigurationMetaNames.NAME:"<BATCH_DEPLOYMENT_NAME>",
     client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: { "id": hardware_spec_id},
     client.deployments.ConfigurationMetaNames.BATCH:{},
     client.deployments.ConfigurationMetaNames.COMPUTE:{"name":"S","nodes":1}
                                                                          })
    

    Out: Successfully finished deployment creation, deployment_uid='e6936222-4efb-4f03-997e-e1cf0ce29991'

  2. Get the deployment ID.

     dep_id = client.deployments.get_uid(dep_details)
     print(dep_id)
    
  3. Create the data asset for the batch deployment, entering the input data asset file name and input data asset name. Make sure you use a supported data asset file format for the runtime type.

     asset_details = client.data_assets.create(name="<INPUT_DATA_ASSET_NAME>",file_path="<Input Data Asset File>")
    
  4. Get the asset ID.

     asset_id = client.data_assets.get_uid(asset_details)
    
  5. Get the Href for the input data asset.

     asset_href = client.data_assets.get_href(asset_details)
    
  6. Print the asset ID and asset Href.

     print(asset_id)
     print(asset_href)
    

    Out: 8c11502f-e047-4822-bc6e-7d6c93a24336 /v2/assets/8c11502f-e047-4822-bc6e-7d6c93a24336?space_id=62748c9a-5353-46e7-8fa2-c3ec971eccf2

Create and run the batch deployment job

  1. Construct the deployment job payload. Enter the input data asset Href and name, and the output data asset name and description.

    job_payload_ref = {
     client.deployments.ScoringMetaNames.INPUT_DATA_REFERENCES: [{
         "id": "input_data",
         "name": "<INPUT_DATA_ASSET_NAME>",
         "type": "data_asset",
         "connection": {},
         "location": {
             "href": "<INPUT_DATA_ASSET_HREF>"
         },
     }],
     client.deployments.ScoringMetaNames.OUTPUT_DATA_REFERENCE: {
             "type": "data_asset",
             "connection": {},
             "location": {
                 "name": "<OUTPUT_DATA_ASSET_NAME>_{}.csv".format(dep_id),
                 "description": "<DESCRIPTION>"
             }
         }
     }
    
  2. Create the batch job, supplying the deployment ID.

     job = client.deployments.create_job(deployment_id=dep_id,meta_props=job_payload_ref)
    
  3. Get the job ID.

     job_id = client.deployments.get_job_uid(job)
    
  4. Check the job status, repeating until the state changes to 'completed', 'failed', or 'cancelled':

     client.deployments.get_job_status(job_id)
    

    Out: {'state': 'completed', 'running_at': '2019-12-06T13:26:03.699Z', 'completed_at': '2019-12-06T13:26:18.515Z'}
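Rather than re-running the status check by hand, you can poll until a terminal state is reached. A minimal sketch, assuming get_job_status returns a dictionary with a 'state' key as in the output above; it takes a zero-argument callable so the helper stays independent of the client object:

```python
import time

def wait_for_job(get_status, poll_interval=5):
    """Poll until the job reaches a terminal state; return the final status."""
    while True:
        status = get_status()
        if status.get("state") in {"completed", "failed", "cancelled"}:
            return status
        time.sleep(poll_interval)

# Usage: wait_for_job(lambda: client.deployments.get_job_status(job_id))
```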

  5. Get the job details by ID.

     client.deployments.get_job_details(job_id)
    

    Returns the full job details as a dictionary.

  6. Download the prediction results to the output file.

     client.data_assets.download("<OUTPUT_DATA_ASSET_ID>", "<PREDICTION_RESULT_FILENAME>")
    
  7. Finally, clean up the deployment by deleting the batch deployment, the saved model, and the input data asset.

     client.deployments.delete(dep_id)
     client.repository.delete(model_id)
     client.data_assets.delete(asset_id)
    

Listing deployment jobs for a given space

To list all of the deployment jobs currently available for a space, set that space as the default (as shown earlier), then use the list_jobs method:

client.deployments.list_jobs()

Note that only deployment jobs are listed. Deleting a deployment job means it will no longer be listed for the space.