Batch deployment input details for Decision optimization models

Follow these rules when specifying input details for batch deployments of Decision optimization models.

Data type summary table:

Data            Description
Type            inline and data references
File formats    Refer to Model input and output data file formats.

Data Sources:

Input/output inline data:

  • Inline input data is converted to CSV files and used by the engine.
  • CSV output data is converted to inline output data.
  • Base64-encoded raw data is supported as input and output.
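To make the inline format concrete, the following is a minimal sketch of how an inline tabular entry and a base64-encoded raw entry might look in a job payload. The field names (`id`, `fields`, `values`, `content`) and the sample file names are assumptions for illustration; check the deployment job schema for your release.

```python
import base64
import csv
import io

# Hypothetical inline tabular input: entries of this shape are converted
# to a CSV file (named by "id") before being passed to the engine.
inline_table = {
    "id": "diet_food.csv",
    "fields": ["name", "unit_cost", "qmin", "qmax"],
    "values": [["Roasted Chicken", 0.84, 0, 10]],
}

def inline_to_csv(entry):
    """Render an inline tabular entry as the CSV text the engine would consume."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(entry["fields"])
    writer.writerows(entry["values"])
    return buf.getvalue()

# Raw (non-tabular) data travels base64-encoded; here a small LP model file.
raw_entry = {
    "id": "model.lp",
    "content": base64.b64encode(b"maximize x\nsubject to x <= 4\nend").decode("ascii"),
}
```

Output follows the same convention in reverse: CSV produced by the engine is returned as inline tabular data, and raw output files come back base64-encoded.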

Input/output data references:

  • Tabular input data is loaded from CSV, XLS, XLSX, or JSON files, or from database data sources supported by the WDP connection library, converted to CSV files, and used by the engine.
  • CSV output data is converted to tabular data and saved to CSV, XLS, XLSX, or JSON files, or to database data sources supported by the WDP connection library.
  • Raw data can be loaded from, and saved to, any file data source supported by the WDP connection library.
  • ZIP files are not supported.
  • The environment variables parameter of deployment jobs is not applicable.
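To illustrate the data-reference style, the following is a minimal sketch of a deployment-job payload that reads input from a managed data asset and writes output to a new one. The deployment ID, asset href, and output name are placeholders, and the exact `location` fields depend on the asset type (see the Data source reference types section referenced below); treat this as an assumed shape, not a definitive schema.

```python
# Hypothetical deployment-job payload using input/output data references.
job_payload = {
    "deployment": {"id": "<deployment_id>"},
    "scoring": {
        "input_data_references": [
            {
                # Reads an existing managed asset in the deployment space.
                "id": "diet_food.csv",
                "type": "data_asset",
                "location": {"href": "/v2/assets/<asset_id>?space_id=<space_id>"},
            }
        ],
        "output_data_references": [
            {
                # Creates (or updates) a managed asset to hold the solution.
                "id": "solution.csv",
                "type": "data_asset",
                "location": {"name": "solution.csv", "description": "Job output"},
            }
        ],
    },
}
```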

If you are specifying input/output data references programmatically:

  • Data source reference type depends on the asset type. Refer to the Data source reference types section in Adding data assets to a deployment space.
  • For S3 or Db2, connection details must be specified in the input_data_references.connection parameter of the deployment job's payload.
  • For S3 or Db2, location details such as the table name, bucket name, or path must be specified in the input_data_references.location.path parameter of the deployment job's payload.
  • For data_asset, a managed asset can be updated or created. When an asset is created, you can set its name and description.
  • You can use a pattern in id or connection properties. For example:

    • To collect all output CSV as inline data:

      "output_data": [ { "id":".*\\.csv"}]
      
    • To collect job output in a particular S3 folder:

      "output_data_references": [ {"id":".*", "type": "s3", "connection": {...}, "location": { "bucket": "do-wml", "path": "${job_id}/${attachment_name}" }}]
      

Note: Support for the s3 and db2 values of scoring.input_data_references.type and scoring.output_data_references.type is deprecated and will be removed in the future. Use connection_asset or data_asset instead. See the documentation for the Watson Machine Learning REST API or the Watson Machine Learning Python client library for details and examples.
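As a migration sketch, the helper below rewrites a deprecated "s3"-typed reference into the recommended connection_asset form: the credentials move out of the payload into a stored connection asset referenced by ID, while the bucket and path stay in `location`. The connection-asset ID and the exact `connection` sub-fields are assumptions; verify them against the REST API documentation.

```python
# A deprecated reference carrying raw S3 credentials in the payload.
deprecated_ref = {
    "id": ".*",
    "type": "s3",
    "connection": {"access_key_id": "...", "secret_access_key": "...", "url": "..."},
    "location": {"bucket": "do-wml", "path": "${job_id}/${attachment_name}"},
}

def to_connection_asset(ref, connection_asset_id):
    """Return an equivalent reference that points at a stored connection asset
    instead of embedding credentials in the job payload."""
    return {
        "id": ref["id"],
        "type": "connection_asset",
        "connection": {"id": connection_asset_id},
        "location": ref["location"],
    }
```

The same rewrite applies to db2-typed references; only the `location` contents (table name versus bucket/path) differ.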

For details on deploying decision optimization solutions, refer to Model input and output data adaptation.

Parent topic: Batch deployment input details by framework