Migrating from Watson Machine Learning API V4 Beta

Now that the Watson Machine Learning V4 API is released, if you previously used the Watson Machine Learning V4 Beta API for Decision Optimization, read on for the main changes and improvements.

Important changes

See also the API documentation for the new V4 API.

Note that the root name space for this API is now /ml/v4/.

API keys and tokens

Authorization has been simplified. Bearer tokens are now obtained from IAM using a generic user apikey instead of a Watson Machine Learning-specific apikey. It is no longer necessary to create specific credentials on the Watson Machine Learning instance.
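As an illustration, the token exchange can be done with a plain HTTP POST to the IAM endpoint. The sketch below uses only the standard library; the endpoint and grant type are the documented IAM values, but check the IAM documentation for your region.

```python
import json
import urllib.parse
import urllib.request

IAM_TOKEN_URL = "https://iam.cloud.ibm.com/identity/token"

def build_token_request(apikey):
    # The documented IAM grant type for exchanging an API key for a bearer token
    return {
        "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
        "apikey": apikey,
    }

def get_bearer_token(apikey):
    # POST the form-encoded request and return the access token from the response
    data = urllib.parse.urlencode(build_token_request(apikey)).encode()
    req = urllib.request.Request(
        IAM_TOKEN_URL,
        data=data,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]
```

The returned token is then passed as `Authorization: bearer TOKEN-HERE` in subsequent calls.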

Deployment spaces

In the Beta V4 API, all Watson Machine Learning usage was linked to a specific Watson Machine Learning instance. In the final API release, deployment spaces are required in addition to the Watson Machine Learning instance. Deployment spaces in Watson Machine Learning are similar to projects in Watson Studio.

Spaces allow you to group together all types of assets (data, models, deployments) related to the same problem or task in one "bucket". To create a deployment space you also need to create a Cloud Object Storage (COS) instance. All of this can be done using the https://dataplatform.cloud.ibm.com user interface. You can create a deployment space, then view it and copy your Space ID from the Settings tab. You can also do all this using the APIs (see Spaces).

Then in all API calls, you must pass the Space ID instead of the Instance ID, as shown in the following examples.
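For instance, a request URL for the V4 API might be assembled as follows; this is a minimal sketch assuming the us-south region host, with the Space ID passed as the `space_id` query parameter where the Beta took an instance ID.

```python
from urllib.parse import urlencode

BASE = "https://us-south.ml.cloud.ibm.com/ml/v4"

def list_deployments_url(space_id, version="2020-08-01"):
    # space_id replaces the old instance_id in every V4 call
    return f"{BASE}/deployments?{urlencode({'version': version, 'space_id': space_id})}"
```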

Software specifications

Another important improvement is the ability to better configure the software used when running your optimization models. Previously, you could only choose among predefined and non-configurable runtimes. Now software specifications allow you to precisely define not only the CPLEX version to be used, but also include additional extensions (such as using conda .yml files or custom libraries).

You can use the default specifications by referencing their names, do_12.9 or do_12.10.

For example, the following shows the payload to create a model:
  "name": "Diet",
  "description": "Diet model",
  "type": "do-docplex_12.10",
  "software_spec": {
    "name": "do_12.10"
  "space_id": "SPACE-ID-HERE"

For more advanced usage, and to make additional software available for your model execution, you can derive from existing software specifications and add package extensions.
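As a sketch, the model-creation payload shown above could be assembled programmatically. The helper below is illustrative; the `software_spec` argument can name a default specification (as in the example) or, assuming the API's usual reference convention, point to a derived specification by id.

```python
def model_payload(name, model_type, space_id, software_spec, description=None):
    """Assemble a model-creation payload for the V4 API.

    software_spec is a dict such as {"name": "do_12.10"} for a default
    specification, or a reference to a derived specification.
    """
    payload = {
        "name": name,
        "type": model_type,
        "software_spec": software_spec,
        "space_id": space_id,
    }
    if description:
        payload["description"] = description
    return payload
```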

Hardware specifications

After the model is created and the formulation uploaded, one or more deployments are created that can be called from a production application. Deployments used to be configured with a "T-shirt size" parameter and a number-of-nodes parameter. More configuration capability is now provided through hardware specifications. As with software specifications, you can list the existing ones or simply update your code to use one of the defaults.

For example, the following shows the payload to deploy a model:
    "asset": {
        "id": "ASSET-ID-HERE"
        "num_nodes": 1

For more information, see REST API example.

Other API changes

  • When uploading a model formulation, you must now set the URL parameter content_format to native. For example:
    curl --location --request PUT \
      "https://us-south.ml.cloud.ibm.com/ml/v4/models/MODEL-ID-HERE/content?version=2020-08-01&space_id=SPACE-ID-HERE&content_format=native" \
      -H "Authorization: bearer TOKEN-HERE" \
      -H "Content-Type: application/gzip" \
      --data-binary '@diet.zip'
  • The jobs endpoint is now deployment_jobs.
  • In most queries, the version parameter must be included, for example: ?version=2020-08-07.
  • In some of the returned payloads (for example, when creating a model or a deployment), the identifier must now be accessed using the id field instead of guid as previously.
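The upload URL from the first point above can also be assembled programmatically. A minimal sketch, assuming the us-south region host used in the curl example:

```python
from urllib.parse import urlencode

def upload_content_url(model_id, space_id, version="2020-08-01"):
    # content_format=native is now required when uploading a model formulation
    query = urlencode({
        "version": version,
        "space_id": space_id,
        "content_format": "native",
    })
    return f"https://us-south.ml.cloud.ibm.com/ml/v4/models/{model_id}/content?{query}"
```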

What has not changed

After you have created your model and deployed it, the way you create jobs with input and output data has not changed. The payload is the same, except that you pass the space_id instead of the instance_id.
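A job-creation payload might therefore be sketched as follows. This is illustrative: the decision_optimization section keeps its Beta layout, the input and output entries shown in the test are hypothetical, and only space_id is new.

```python
def job_payload(deployment_id, space_id, input_data, output_data):
    # Unchanged from the Beta, except that space_id replaces instance_id
    return {
        "deployment": {"id": deployment_id},
        "space_id": space_id,
        "decision_optimization": {
            "input_data": input_data,
            "output_data": output_data,
        },
    }
```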

If you are using the Python WatsonMachineLearningClient package, you should not have to change your code; just use the new credentials for the new instance and space. See also Python client examples.

Summary of the new flow

Remember that the root name space for this API is now /ml/v4/.

  1. Create a Watson Machine Learning instance and a COS instance, and create a deployment space, using either the Watson Studio user interface or the API.
  2. Create your model in Watson Machine Learning, using software_spec and your space_id.
  3. Upload your model formulation setting content_format to native.
  4. Create your deployment in Watson Machine Learning, using hardware_spec and your space_id.

For more information, see REST API example.