Last updated: Dec 19, 2024
Follow these rules when you are specifying input details for batch deployments of Decision Optimization models.
Data type summary table:

Data | Description |
---|---|
Type | Inline and data references |
File formats | Refer to Model input and output data file formats. |
Data sources | See the following lists. |
Input/output inline data:
- Inline input data is converted to CSV files and used by the engine.
- CSV output data is converted to output inline data.
- Base64-encoded raw data is supported as input and output.
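For example, a deployment job's payload can carry inline tabular input in the input_data parameter. The following sketch uses the food table from the standard diet sample; the ID, field names, and values are illustrative only:

"input_data": [
  {
    "id": "diet_food.csv",
    "fields": ["name", "unit_cost", "qmin", "qmax"],
    "values": [
      ["Roasted Chicken", 0.84, 0, 10],
      ["Spaghetti W/ Sauce", 0.78, 0, 10]
    ]
  }
]

Each entry is converted to a CSV file (here, diet_food.csv) that the engine reads as an input table, and output tables are returned in the same inline form.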
Input/output data references:
- Tabular data is loaded from CSV, XLS, XLSX, JSON files or database data sources supported by the WDP connection library, converted to CSV files, and used by the engine.
- CSV output data is converted to tabular data and saved to CSV, XLS, XLSX, JSON files, or database data sources supported by the WDP connection library.
- Raw data can be loaded and saved from or to any file data sources that are supported by the WDP connection library.
- No support for compressed files.
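For example, an input data reference can point to a CSV data asset in the deployment space. This is a minimal sketch; depending on your release, the asset can be referenced by an href or an ID, and the asset and space identifiers below are placeholders:

"input_data_references": [
  {
    "id": "diet_food.csv",
    "type": "data_asset",
    "location": {
      "href": "/v2/assets/<asset_id>?space_id=<space_id>"
    }
  }
]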
- The environment variables parameter of deployment jobs is not applicable.
If you are specifying input/output data references programmatically:

- The data source reference `type` depends on the asset type. Refer to the Data source reference types section in Adding data assets to a deployment space.
- For S3 or Db2, connection details must be specified in the `input_data_references.connection` parameter of the deployment job's payload.
- For S3 or Db2, location details such as the table name, bucket name, or path must be specified in the `input_data_references.location.path` parameter of the deployment job's payload.
- For `data_asset`, a managed asset can be updated or created. For creation, you can set the name and description of the created asset.
- You can use a pattern in ID or connection properties. For example, see the following code snippets:
- To collect all output CSV as inline data:
"output_data": [ { "id":".*\\.csv"}]
- To collect job output in a particular S3 folder:
"output_data_references": [ {"id":".*", "type": "s3", "connection": {...}, "location": { "bucket": "do-wml", "path": "${job_id}/${attachment_name}" }}]
For more information, see Input data sources and Output data sources.
Parent topic: Batch deployment input details by framework