Model deployment

IBM Watson Machine Learning enables you to deploy your Decision Optimization prescriptive model and associated master data once and then submit job requests to this deployment with only the related transactional data. This can be achieved using the Watson Machine Learning REST API or using the Watson Machine Learning Python client.

Overview

The steps to deploy and submit jobs for a Decision Optimization model are as follows. These steps are detailed in later sections.

  1. Create a Watson Machine Learning service instance.
  2. Authenticate your Watson Machine Learning service instance.
  3. Deploy your model with master data.
  4. Create and monitor jobs to this deployed model.

Creating a Watson Machine Learning service instance

  1. Log in to IBM Cloud. (This takes you to your IBM Cloud dashboard.)
  2. In your IBM Cloud dashboard, click Create a resource. (This takes you to the IBM Catalog.)
  3. Select Machine Learning in the AI category.
  4. Select a region/location to deploy in, for example choose Dallas.
  5. Select a pricing plan. For example, choose the Lite free version and click Create. (Your Watson Machine Learning service opens).
  6. Click the Service credentials tab.
  7. If there are no service credentials yet, click the New credential button, and then click Add in the pane that opens.
  8. Under the ACTIONS menu, click View credentials. You will need these credentials for the authentication steps that follow.

Watson Machine Learning Authentication

  1. Look up your Watson Machine Learning service instance credentials in Watson Studio (see Service Credentials from IBM Cloud) and copy your API key, instance ID, and URL.
  2. Using your API key, generate the IAM access token and use this, together with your Watson Machine Learning instance id, as headers in all subsequent API calls.
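The token exchange in step 2 can be sketched with the Python standard library. The IAM endpoint and grant type below are IBM Cloud's documented values; the ML-Instance-ID header name is an assumption based on the Watson Machine Learning v4 REST API and should be checked against the API reference:

```python
import json
import urllib.parse
import urllib.request

IAM_TOKEN_URL = "https://iam.cloud.ibm.com/identity/token"

def build_token_request(api_key):
    # Form body for exchanging an IBM Cloud API key for an IAM access token.
    return {
        "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
        "apikey": api_key,
    }

def wml_headers(access_token, instance_id):
    # Headers sent with every subsequent Watson Machine Learning REST call.
    # The ML-Instance-ID header name is an assumption (v4 API convention).
    return {
        "Authorization": "Bearer " + access_token,
        "ML-Instance-ID": instance_id,
        "Content-Type": "application/json",
    }

def get_iam_token(api_key):
    # POST the API key to the IAM token endpoint and return the access token.
    body = urllib.parse.urlencode(build_token_request(api_key)).encode()
    req = urllib.request.Request(
        IAM_TOKEN_URL,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["access_token"]
```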

Model deployment

In this phase you first package your Decision Optimization model, optionally with master data, as a tar.gz or zip file ready for deployment. This archive can include:
  • your model files
  • settings (see Solve parameters for more information)
  • master data
When registering your model in Watson Machine Learning, you target a particular Decision Optimization runtime version:
  • do_12.9 runtime currently based on CPLEX V.12.9

the model type:

  • opl (do-opl_12.9)
  • cplex (do-cplex_12.9)
  • cpo (do-cpo_12.9)
  • docplex (do-docplex_12.9) using Python V.3.6

and upload the associated model archive if needed.

This Watson Machine Learning model can then be used in one or multiple deployments.

In summary, to deploy your model:

  1. Choose your desired Decision Optimization runtime.
  2. Package your Decision Optimization model with master data (optional) ready for deployment as a tar.gz or zip file.
  3. Upload your model archive (tar.gz or zip file) on Watson Machine Learning. See Model input and output data file formats for information about input file types. You obtain a model-URL.
  4. Deploy your model using the model-URL and obtain a deployment-id.
  5. Monitor the deployment using the deployment-id. Deployment states can be: initializing, updating, ready, or failed.
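The packaging in step 2 can be done with Python's standard library; the file names below are illustrative, not required by the service:

```python
import pathlib
import tarfile

def package_model(archive_path, files):
    # Bundle model files (for example a .mod or .py model, settings,
    # and master data .csv files) into a tar.gz archive ready to
    # upload to Watson Machine Learning.
    with tarfile.open(archive_path, "w:gz") as tar:
        for f in files:
            # Store each file at the archive root under its base name.
            tar.add(f, arcname=pathlib.Path(f).name)
    return archive_path
```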

Model execution

Once your model is deployed, you can submit Decision Optimization jobs to this deployment specifying the:

  • input data: the transactional data used as input by the model. This can be inline or referenced.
  • output data: to define how the output data is generated by the model. This is returned as inline or referenced data.
  • solve parameters: to customize the behavior of the solution engine.
For more information see Model input and output data adaptation.

After submitting a job, you can use the job-id to poll the job status to collect the:

  • Job execution status or error message
  • Solve execution status, progress and log tail
  • Inline or referenced output data

Job states can be: queued, running, completed, failed, or canceled.
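A minimal polling loop over these job states might look like the following sketch; the fetch_status callable stands in for the actual REST call against the job-id, whose exact endpoint is not shown here:

```python
import time

# Terminal job states, as listed in this section.
TERMINAL_STATES = {"completed", "failed", "canceled"}

def poll_job(fetch_status, interval=2.0, timeout=300.0):
    # Poll a deployed-model job until it reaches a terminal state.
    # `fetch_status` is any callable returning the current job state
    # string (e.g. a wrapper around a GET on the job resource).
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = fetch_status()
        if state in TERMINAL_STATES:
            return state
        time.sleep(interval)
    raise TimeoutError("job did not finish within %.0f s" % timeout)
```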

Model input and output data file formats

With your Decision Optimization model you can use the following input and output data identifiers and extension combinations.

This table shows the supported file type combinations for Decision Optimization in Watson Machine Learning:
Model type: cplex
  Input file types: .lp, .mps, .sav, .feasibility, .prm
  Output file types: .xml, .json
  Comments: The output format can be specified using the API. Files of type .lp, .mps, and .sav can be compressed using gzip or bzip2, and uploaded as, for example, .lp.gz or .sav.bz2. The schemas for the CPLEX formats for solutions, conflicts, and feasibility files are available to download as a .zip archive.

Model type: cpo
  Input file type: .cpo
  Output file types: .xml, .json
  Comments: The output format can be specified using the solve parameter. The native file format for CPO models is documented in the Knowledge Center: CP Optimizer file format syntax.

Model type: opl
  Input file types: .mod, .dat, .oplproject, .xls, .json, .csv
  Output file types: .xml, .json, .txt, .csv
  Comments: The output format is consistent with the input type but can be specified using the solve parameter if needed. To take advantage of data connectors, use the .csv format.

Model type: docplex
  Input file types: .py, *.* (input data)
  Output file types: any output file type, specified in the model
  Comments: Any format can be used in your Python code, but to take advantage of data connectors, use the .csv format.

Data identifier restrictions

A file name

  • is limited to 255 characters;
  • can include only ASCII characters;
  • cannot include the characters /\?%*:|"<>, the space character, or the null character; and
  • cannot include _ as the first character.
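These restrictions can be captured in a small validation helper. This is a local sketch for pre-checking identifiers; the service performs its own validation:

```python
# Characters a data identifier must not contain, per the restrictions
# above (plus the space and null characters).
_FORBIDDEN = set('/\\?%*:|"<>') | {" ", "\x00"}

def is_valid_identifier(name):
    # Check a data identifier against the documented restrictions:
    # at most 255 characters, ASCII only, no forbidden characters,
    # and no leading underscore.
    if not name or len(name) > 255:
        return False
    if not name.isascii():
        return False
    if name[0] == "_":
        return False
    return not any(c in _FORBIDDEN for c in name)
```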

Model input and output data adaptation

When submitting your job, you can include your data inline or reference it in your request. This data is mapped to a file named with the data identifier and used by the model. The data identifier's extension defines the format of the file used.

The following adaptations are supported:
  • Inline data to embed your data in your request. For example:
    "input_data": [{
         "id":"diet_food.csv",
         "fields" : ["name","unit_cost","qmin","qmax"],
         "values" : [
    	 	["Roasted Chicken", 0.84, 0, 10]
         ]
    }]
    
    This will generate the corresponding diet_food.csv file that is used as the model input file. Only csv adaptation is currently supported.
  • Db2 referenced data allows you to reference data on an "IBM Db2 on Cloud" service instance. For example:
    "input_data_references": [{
         "id": "diet_food.csv",
         "type": "db2",
         "connection": {
              "host": "XXXXXXXXX",
              "db": "XXXXXXXXX",
              "username": "XXXXXXXXX",
              "password": "XXXXXXXXX"
         },
         "location": {
              "schemaname": "XXXXXXXXX",
              "tablename": "diet_food"
         }
    }]
    
    This will generate the corresponding diet_food.csv file that is used as the model input file. Only .csv adaptation is currently supported. You can find connection information in the service credentials section of your "IBM Db2 on Cloud" service instance details page.
  • COS/S3 referenced data allows you to reference files stored in an "IBM Cloud Object Storage" service instance. For example:
    "input_data_references": [{
         "id": "diet_food.csv",
         "type": "s3",
         "connection": {
              "endpoint_url": "XXXXXXXXX",
              "access_key_id": "XXXXXXXXX",
              "secret_access_key": "XXXXXXXXX"
         },
         "location": {
              "bucket": "XXXXXXXXX",
              "path": "diet_food.csv"
         }
    }]
    
    This will copy the corresponding diet_food.csv file that is used as the model input file. You can find connection information in the service credentials section of your "IBM Cloud Object Storage" service instance details page. Your service credential entry must be created with the inline configuration parameter {"HMAC": true}. This parameter adds the following section to the instance credentials, which is used in the connection fields:
    "cos_hmac_keys": {
         "access_key_id": "XXXXXXXXX",
         "secret_access_key": "XXXXXXXXX"
    }
    The endpoint URL is located on your bucket configuration page and corresponds to your bucket's regional endpoint (for example: https://s3-api.us-geo.objectstorage.softlayer.net).
  • URL referenced data allows you to reference files stored at a particular URL or REST data service. For example:
    "input_data_references": [{
         "id": "diet_food.csv",
         "type": "url",
         "connection": {
              "verb": "GET",
              "url": "https://myserver.com/diet_food.csv",
              "headers": {
                   "Content-Type": "application/x-www-form-urlencoded"
              }
         },
         "location": {}
    }]
    
    This will copy the corresponding diet_food.csv file that is used as the model input file.

You can combine different adaptations in the same request.
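The inline .csv adaptation described above can be reproduced locally to check what file contents the service will derive from an inline entry. This is a sketch using the example data from this section:

```python
import csv
import io

def inline_to_csv(entry):
    # Mimic the documented adaptation: an inline input_data entry
    # becomes a CSV file named after its identifier, with `fields`
    # as the header row and `values` as the data rows.
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(entry["fields"])
    writer.writerows(entry["values"])
    return entry["id"], buf.getvalue()

# Example inline entry from this section.
entry = {
    "id": "diet_food.csv",
    "fields": ["name", "unit_cost", "qmin", "qmax"],
    "values": [["Roasted Chicken", 0.84, 0, 10]],
}
```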

Output data definition

When submitting your job you can define what output data you want and how you collect it (as either inline or referenced data). For example:
  • To collect solution.csv output as inline data:
    "output_data": [{
         "id":"solution.csv"
    }]
  • A regular expression can also be used as an identifier. For example, to collect all csv output files as inline data:
    "output_data": [{
         "id": ".*\\.csv"
    }]
  • Similarly for referenced data, to collect all csv files into a job-specific folder in COS/S3, you can combine a regular expression with the ${job_id} and ${attachment_name} placeholders:
    "output_data_references": [{
         "id": ".*\\.csv",
         "type": "s3",
         "connection": {
              "endpoint_url": "XXXXXXXXX",
              "access_key_id": "XXXXXXXXX",
              "secret_access_key": "XXXXXXXXX"
         },
         "location": {
              "bucket": "XXXXXXXXX",
              "path": "${job_id}/${attachment_name}"
         }
    }]
    For example, if a job with identifier <XXXXXXXXX> generates a solution.csv file, your COS/S3 bucket will contain an XXXXXXXXX/solution.csv file.
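The placeholder expansion follows standard ${...} template substitution, which can be mimicked locally to predict the object keys a job will produce (a sketch; the service performs the substitution server-side):

```python
from string import Template

def resolve_output_path(path_template, job_id, attachment_name):
    # Expand the documented ${job_id} and ${attachment_name}
    # placeholders the same way the service builds the object key.
    return Template(path_template).substitute(
        job_id=job_id, attachment_name=attachment_name
    )
```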

Solve parameters

To control solve behavior, you can specify solve parameters in your request as named value pairs. For example:
"solve_parameters" : {
     "oaas.logAttachmentName":"log.txt",
     "oaas.logTailEnabled":"true"
}
This will allow you to collect the engine log tail during the solve and the whole engine log as output at the end of the solve.

You can use these parameters in your request.

  • oaas.timeLimit (Number): Sets a time limit, in milliseconds.
  • oaas.resultsFormat (Enum: JSON, XML, TEXT, XLSX): Specifies the format for returned results. JSON is the default format. Other formats might or might not be supported by each application type.
  • oaas.oplRunConfig (String): Specifies the name of the OPL run configuration to be executed.
  • oaas.logTailEnabled (Boolean): Includes the log tail in the solve status.
  • oaas.logAttachmentName (String): If defined, attaches the engine logs as a job output attachment.
  • oaas.engineLogLevel (Enum: OFF, SEVERE, WARNING, INFO, CONFIG, FINE, FINER, FINEST): Defines the level of detail provided by the engine log. The default value is INFO.
  • oaas.logLimit (Number): Maximum log-size limit, in number of characters.
  • oaas.dumpZipName (String; can also be used as a Boolean, see description): If defined, a job dump (inputs and outputs) zip file is provided with this name as a job output attachment. The name can contain the ${job_id} placeholder. If defined with no value, the attachment name dump_${job_id}.zip is used. If not defined, no job dump zip file is attached by default.
  • oaas.dumpZipRules (String): If defined, generates a zip file according to specific job rules (an RFC 1960-based filter). It must be used in conjunction with the oaas.dumpZipName parameter. Filters can be defined on the duration and on the following SolveState properties:
      • duration
      • solveState.executionStatus
      • solveState.interruptionStatus
      • solveState.solveStatus
      • solveState.failureInfo.type
    Example:
    (duration>=1000) or (&(duration<1000)(!(solveState.solveStatus=OPTIMAL_SOLUTION))) or (|(solveState.interruptionStatus=OUT_OF_MEMORY)(solveState.failureInfo.type=INFRASTRUCTURE))
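Putting the section's pieces together, a job request body combining inline input data, regexp-based output collection, and solve parameters can be assembled as plain JSON. The values below are illustrative, drawn from the examples in this section:

```python
import json

job_request = {
    # Inline transactional data, adapted to diet_food.csv by the service.
    "input_data": [{
        "id": "diet_food.csv",
        "fields": ["name", "unit_cost", "qmin", "qmax"],
        "values": [["Roasted Chicken", 0.84, 0, 10]],
    }],
    # Collect every generated .csv file as inline output data.
    "output_data": [{"id": ".*\\.csv"}],
    # Solve parameters: a 60-second time limit plus engine-log collection.
    "solve_parameters": {
        "oaas.timeLimit": "60000",
        "oaas.logTailEnabled": "true",
        "oaas.logAttachmentName": "log.txt",
    },
}

# Serialized body for the job-creation REST call.
payload = json.dumps(job_request)
```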