You can deploy a Decision Optimization model, create and monitor jobs, and get solutions by using the watsonx.ai Runtime Python client.
To deploy your model, see Deploying a Decision Optimization model.
For more information, see watsonx.ai Runtime Python client documentation.
- Install the watsonx.ai Runtime Python Client API.
- Create a client instance.
- Prepare your model archive.
- Upload your model.
- Create a deployment.
- Create and monitor a job with inline data for your deployed model.
- Display the solution.
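The following minimal sketch illustrates these steps up to the deployment with the ibm_watsonx_ai Python client (installed with pip install ibm-watsonx-ai). The endpoint URL, API key, space ID, the diet.tar.gz archive, and the do_22.1 and do-docplex_22.1 specification names are placeholder assumptions; exact metadata property names can differ between client releases, so check the watsonx.ai Runtime Python client documentation for your version.

```python
from ibm_watsonx_ai import APIClient, Credentials

# Placeholder credentials and deployment space -- replace with your own values.
client = APIClient(
    Credentials(url="https://us-south.ml.cloud.ibm.com", api_key="YOUR_API_KEY"),
    space_id="YOUR_SPACE_ID",
)

# Upload the model archive (here, a hypothetical diet.tar.gz containing a DOcplex model).
# Metadata names such as SOFTWARE_SPEC_ID may vary with the client version.
sw_spec_id = client.software_specifications.get_id_by_name("do_22.1")
model_meta = {
    client.repository.ModelMetaNames.NAME: "Diet model",
    client.repository.ModelMetaNames.TYPE: "do-docplex_22.1",
    client.repository.ModelMetaNames.SOFTWARE_SPEC_ID: sw_spec_id,
}
model_details = client.repository.store_model(model="diet.tar.gz", meta_props=model_meta)
model_id = client.repository.get_model_id(model_details)

# Create a batch deployment for the model.
deploy_meta = {
    client.deployments.ConfigurationMetaNames.NAME: "Diet deployment",
    client.deployments.ConfigurationMetaNames.BATCH: {},
    client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: {"name": "S", "num_nodes": 1},
}
deployment_details = client.deployments.create(model_id, meta_props=deploy_meta)
deployment_id = client.deployments.get_id(deployment_details)
```

A batch deployment (ConfigurationMetaNames.BATCH) is used because Decision Optimization models are solved as jobs rather than scored synchronously.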
- Deploying a DO model with WML
- RunDeployedModel
- ExtendWMLSoftwareSpec
The Deploying a DO model with WML sample shows you how to deploy a Decision Optimization model, create and monitor jobs, and get solutions by using the watsonx.ai Runtime Python client. This notebook uses the diet sample for the Decision Optimization model and takes you through the whole procedure without using the Decision Optimization experiment UI.
The RunDeployedModel sample shows you how to run jobs and get solutions from an existing deployed model. This notebook uses a model that is saved for deployment from a Decision Optimization experiment UI scenario.
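As an illustration of that pattern, the following hedged sketch creates a job with inline data on an existing deployment and displays the solution. It assumes the client and deployment_id from the previous sketch and a diet-style model that reads a diet_food.csv table and writes .csv output tables; the table IDs, columns, and values are placeholders to adapt to your own model.

```python
import time
import pandas as pd

# Illustrative inline input data; a real model typically needs all of its input tables.
diet_food = pd.DataFrame(
    [["Roasted Chicken", 0.84, 0, 10], ["Frozen Broccoli", 0.16, 0, 10]],
    columns=["name", "unit_cost", "qmin", "qmax"],
)

solve_payload = {
    client.deployments.DecisionOptimizationMetaNames.INPUT_DATA: [
        {"id": "diet_food.csv", "values": diet_food},
    ],
    client.deployments.DecisionOptimizationMetaNames.OUTPUT_DATA: [
        {"id": r".*\.csv"},
    ],
}

# Create the job on the existing deployment and poll until it finishes.
job_details = client.deployments.create_job(deployment_id, meta_props=solve_payload)
job_id = client.deployments.get_job_id(job_details)
while job_details["entity"]["decision_optimization"]["status"]["state"] not in ("completed", "failed", "canceled"):
    time.sleep(5)
    job_details = client.deployments.get_job_details(job_id)

# Display each solution table returned as inline output data.
for output in job_details["entity"]["decision_optimization"]["output_data"]:
    print(output["id"])
    print(pd.DataFrame(output["values"], columns=output["fields"]))
```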
The ExtendWMLSoftwareSpec sample shows you how to extend the Decision Optimization software specification so that your deployed model can use custom Python packages. Packages must be provided in pip format, for example:
- yourpackage-1.0.4.tar.gz
- yourpackage-1.0.4.zip
- yourproject-1.2.3-py33-none-any.whl
Thus, for a package that is named yourpackage-1.0.4.tgz, the following code shows how to create the package extension. You must use the same package name and version in the NAME field.

```python
meta_prop_pkg_ext = {
    client.package_extensions.ConfigurationMetaNames.NAME: "yourpackage-1.0.4.tgz",
    client.package_extensions.ConfigurationMetaNames.DESCRIPTION: "Pkg extension for custom lib",
    client.package_extensions.ConfigurationMetaNames.TYPE: "pip_zip"
}
```
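The package extension then needs to be stored and attached to a software specification that your deployment uses. The following sketch shows one way to do this; the do_22.1 base specification and the derived specification name are placeholder assumptions to adapt to your environment.

```python
# Store the package extension, uploading the package file itself (hypothetical file name).
pkg_ext_details = client.package_extensions.store(
    meta_props=meta_prop_pkg_ext,
    file_path="yourpackage-1.0.4.tgz",
)
pkg_ext_id = client.package_extensions.get_id(pkg_ext_details)

# Create a software specification derived from a base Decision Optimization
# specification ("do_22.1" is a placeholder; use the one that matches your model)
# and attach the package extension to it.
base_id = client.software_specifications.get_id_by_name("do_22.1")
meta_prop_sw_spec = {
    client.software_specifications.ConfigurationMetaNames.NAME: "do_22.1_yourpackage",
    client.software_specifications.ConfigurationMetaNames.BASE_SOFTWARE_SPECIFICATION: {"guid": base_id},
}
sw_spec_details = client.software_specifications.store(meta_props=meta_prop_sw_spec)
sw_spec_id = client.software_specifications.get_id(sw_spec_details)
client.software_specifications.add_package_extension(sw_spec_id, pkg_ext_id)
```

You can then pass the resulting software specification ID when you store your model, so that the deployed model can import the custom library.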
The samples also include several notebooks for deploying various types of models, for example CPLEX, DOcplex, and OPL models, with different types of data.