Migrating Python code for Decision Optimization with Machine Learning-v2 instances

Use the latest version of the Watson Machine Learning Python client for Decision Optimization.

Data scientists can easily deploy and integrate Decision Optimization models in Watson Machine Learning by using the Watson Machine Learning Python client with Machine Learning-v2 instances (which use the V4 APIs - see the API documentation for the V4 API).

New package

A new package called ibm-watson-machine-learning is now available that works with the new Machine Learning-v2 instances.

First uninstall your existing package, for example:

pip uninstall watson-machine-learning-client-V4

Then install the latest one:

pip install ibm-watson-machine-learning

New client class

Machine Learning-v2 instances use your user API key instead of Watson Machine Learning-specific credentials, so you must adapt the creation of the client as follows:

from ibm_watson_machine_learning import APIClient
# These are your user credentials (user API key), not Watson Machine Learning instance credentials
wml_credentials = {
      "apikey": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
      "url": "https://us-south.ml.cloud.ibm.com"
}
client = APIClient(wml_credentials)

Deployment space

As assets and deployments are now grouped in deployment spaces, set the deployment space to be used:

client.set.default_space(space_id)
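
If you already have a deployment space, you can look up its ID by listing your spaces; a minimal sketch, assuming the client's spaces listing helper:

client.spaces.list()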

The deployment space can be created from the user interface or with Python code, as follows, using the CRNs of your Cloud Object Storage instance and Watson Machine Learning instance:

cos_resource_crn = 'XXXXXXXXXXXX'  # CRN of your Cloud Object Storage instance
instance_crn = 'XXXXXXXXXXX'       # CRN of your Watson Machine Learning instance

metadata = {
    client.spaces.ConfigurationMetaNames.NAME: space_name,
    client.spaces.ConfigurationMetaNames.DESCRIPTION: space_name + ' description',
    client.spaces.ConfigurationMetaNames.STORAGE: {
        "type": "bmcos_object_storage",
        "resource_crn": cos_resource_crn
    },
    client.spaces.ConfigurationMetaNames.COMPUTE: {
        "name": "existing_instance_id",
        "crn": instance_crn
    }
}
space = client.spaces.store(meta_props=metadata)
space_id = client.spaces.get_id(space)
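
The returned space_id can then be set as the default space, as shown above:

client.set.default_space(space_id)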

Specifications

You can now set detailed configurations for software and hardware, and the payloads used to create models and deployments must be modified accordingly.

Specify the software specification in the model creation payload:

model_metadata = {
    client.repository.ModelMetaNames.NAME: model_name,
    client.repository.ModelMetaNames.DESCRIPTION: model_name,
    client.repository.ModelMetaNames.TYPE: "do-opl_12.10",
    client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: client.software_specifications.get_uid_by_name("do_12.10")
}

model_details = client.repository.store_model(model='./model.tar.gz', meta_props=model_metadata)
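
The model ID that is needed for the deployment step can be retrieved from the returned details; a minimal sketch, assuming the repository's get_model_uid helper:

model_uid = client.repository.get_model_uid(model_details)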

Specify the hardware specification in the deployment creation payload:

deployment_props = {
    client.deployments.ConfigurationMetaNames.NAME: deployment_name,
    client.deployments.ConfigurationMetaNames.DESCRIPTION: deployment_name,
    client.deployments.ConfigurationMetaNames.BATCH: {},
    client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: {'name': 'S', 'nodes': 1}
}

deployment_details = client.deployments.create(model_uid, meta_props=deployment_props)
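
Similarly, the deployment ID that is used to submit jobs can be read from the returned details; a minimal sketch, assuming the deployments get_uid helper:

deployment_uid = client.deployments.get_uid(deployment_details)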

Inline data

In addition to tabular data (fields and values), you can now provide non-tabular data (such as an OPL .dat file or an .lp file) inline in the job creation payload instead of referencing external storage assets (for example, Cloud Object Storage assets).

You can do this by providing the content as a base64-encoded string, as follows:

client.deployments.DecisionOptimizationMetaNames.INPUT_DATA: [
    {
        "id": dat_file,
        "content": getfileasdata(dat_file)
    }
],

Here is an example function to base64-encode a file:

import base64

def getfileasdata(filename):
    # Read the file and return its content as a base64-encoded string
    with open(filename, 'r') as file:
        data = file.read()

    data = data.encode("UTF-8")
    data = base64.b64encode(data)
    data = data.decode("UTF-8")

    return data
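
Putting this together, a job with inline input data can be submitted with create_job. The following is a minimal sketch: the input file name model.dat, the output pattern, and the status polling path are illustrative and follow the Decision Optimization sample notebooks rather than this documentation:

import time

solve_payload = {
    client.deployments.DecisionOptimizationMetaNames.INPUT_DATA: [
        {
            "id": "model.dat",
            "content": getfileasdata("model.dat")
        }
    ],
    client.deployments.DecisionOptimizationMetaNames.OUTPUT_DATA: [
        {
            "id": ".*\\.csv"
        }
    ]
}

job_details = client.deployments.create_job(deployment_uid, meta_props=solve_payload)
job_uid = client.deployments.get_job_uid(job_details)

# Poll until the job reaches a terminal state
while job_details['entity']['decision_optimization']['status']['state'] not in ['completed', 'failed', 'canceled']:
    time.sleep(5)
    job_details = client.deployments.get_job_details(job_uid)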

Example

The notebook that shows how to deploy and run a model has been updated in the gallery.