Select a watsonx.ai Runtime instance from the list of
AI/Machine Learning services in the IBM Cloud Resource list
view.
Copy the Name, GUID, and
CRN from the information pane for your watsonx.ai Runtime instance. (To open the information pane, click
anywhere in the row next to your watsonx.ai Runtime service
name, but not on the name itself. The information pane then opens in the same window.)
Select a deployment space from the list of Deployments.
Copy the Space GUID from the Manage >
General tab. For more information, see Deployment
spaces.
About this task
The following steps show you how to deploy a Decision
Optimization model using the
watsonx.ai Runtime REST API. The REST API example uses curl, a
command line tool and library for transferring data with URL syntax. You can download curl and read
more about it at http://curl.haxx.se. For more information about the REST APIs relevant for Decision
Optimization, see the following sections:
If you use Windows, use ^ instead of \ as the multi-line separator and use double
quotation marks (") throughout these code examples. Windows users also need to indent the header
lines by at least one character space.
For clarity, some code examples in this
procedure have been placed in a JSON file to make the commands more readable
and easier to use.
Once you have created a deployment using the REST API, you can also view it
and send jobs to it from the Deployments > Spaces page in the https://dataplatform.cloud.ibm.com user
interface.
Use the obtained token (access_token value) prepended by the word Bearer in
the Authorization header, and the watsonx.ai Runtime service GUID in the
ML-Instance-ID header, in all API
calls.
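For example, the following sketch obtains an IAM token with an IBM Cloud API key stored in the APIKEY environment variable (the variable name and the placeholder values in angle brackets are illustrative):

curl -X POST "https://iam.cloud.ibm.com/identity/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=urn:ibm:params:oauth:grant-type:apikey&apikey=$APIKEY"

# Use the access_token value from the response in subsequent calls, for example:
#   -H "Authorization: Bearer <access_token>" \
#   -H "ML-Instance-ID: <watsonx.ai Runtime GUID>"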
Optional: If you have not obtained
your SPACE-ID from the user interface as described previously, you can create a space using
the REST API as follows. Use the previously obtained token prepended by the word Bearer
in the Authorization header in all API
calls.
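A minimal sketch of such a call follows; the space name is hypothetical, the Cloud Object Storage CRN and watsonx.ai Runtime details are placeholders, and the endpoint shown assumes the public IBM Cloud platform API:

curl -X POST "https://api.dataplatform.cloud.ibm.com/v2/spaces" \
  -H "Authorization: Bearer <access_token>" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "my-do-space",
        "storage": {"resource_crn": "<Cloud Object Storage CRN>"},
        "compute": [{"name": "<watsonx.ai Runtime instance name>", "crn": "<watsonx.ai Runtime CRN>"}]
      }'

The SPACE-ID to use in later calls is the id value returned in the response metadata.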
All API requests require a version parameter that takes a date in the format
version=YYYY-MM-DD. This code example posts a model that uses the file
create_model.json. The URL will vary according to the chosen region/location for
your watsonx.ai Runtime service. See
Endpoint
URLs.
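For example, a sketch of the model creation call might look like this (the us-south endpoint and the version date are illustrative; substitute the endpoint URL for your region and a valid version date):

curl -X POST "https://us-south.ml.cloud.ibm.com/ml/v4/models?version=2020-08-01" \
  -H "Authorization: Bearer <access_token>" \
  -H "ML-Instance-ID: <watsonx.ai Runtime GUID>" \
  -H "Content-Type: application/json" \
  -d @create_model.json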
The Python version is stated explicitly here in a custom block. This is
optional. Without it, your model uses the default version, which is currently Python 3.11. As the default version will evolve over time, stating the Python version
explicitly enables you to easily change it later or to keep using an older supported version when
the default version is updated. Currently supported versions are 3.11 and 3.10
(deprecated).
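As a sketch, create_model.json might contain something like the following; the name, description, and SPACE-ID are placeholders, and the property used to pin the Python version in the custom block is illustrative, so check the Decision Optimization documentation for the exact key:

{
  "name": "Diet-Model",
  "description": "Decision Optimization Python model",
  "type": "do-docplex_22.1",
  "software_spec": {
    "name": "do_22.1"
  },
  "custom": {
    "decision_optimization": {
      "oaas.docplex.python": "3.11"
    }
  },
  "space_id": "<SPACE-ID>"
}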
If you want to be able to run jobs for this model from the user interface, instead of only
using the REST API, you must define the schema for the input and output data. If you do not
define the schema when you create the model, you can only run jobs using the REST API and not from
the user interface.
You can also use the schema specified for input and output in your optimization model:
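For example, a hypothetical schemas section for a diet problem, added to create_model.json, might look like this (the table identifiers and field names are placeholders):

"schemas": {
  "input": [
    {
      "id": "diet_food.csv",
      "fields": [
        {"name": "name", "type": "string"},
        {"name": "unit_cost", "type": "double"},
        {"name": "qmin", "type": "double"},
        {"name": "qmax", "type": "double"}
      ]
    }
  ],
  "output": [
    {
      "id": "solution.csv",
      "fields": [
        {"name": "name", "type": "string"},
        {"name": "value", "type": "double"}
      ]
    }
  ]
}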
When you post a model you provide information about its model type and the software
specification to be used.
Model types can be, for example:
do-opl_22.1 for OPL models
do-cplex_22.1 for CPLEX models
do-cpo_22.1 for CP models
do-docplex_22.1 for Python models
Version 20.1 can also be used for these model types.
For the software specification, you can use the default specifications by their names, do_22.1
or do_20.1. See also the Extend software specification notebook, which shows you how to extend the
Decision Optimization software specification (runtimes with additional Python libraries for DOcplex models).
A MODEL-ID is returned in the id field in the metadata.
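For example, the relevant part of the response might look like this sketch (the values are placeholders):

{
  "metadata": {
    "id": "<MODEL-ID>",
    "name": "Diet-Model",
    "space_id": "<SPACE-ID>"
  }
}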
You can download this example and other models from the DO-samples. Select the relevant
product and version subfolder.
Deploy your model
Create a reference to your model.
Use the SPACE-ID, the MODEL-ID obtained when you created your model ready for
deployment, and the hardware specification. For example:
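A sketch of the deployment call follows; the deployment name, the hardware specification size, and the version date are illustrative:

curl -X POST "https://us-south.ml.cloud.ibm.com/ml/v4/deployments?version=2020-08-01" \
  -H "Authorization: Bearer <access_token>" \
  -H "ML-Instance-ID: <watsonx.ai Runtime GUID>" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "diet-deployment",
        "space_id": "<SPACE-ID>",
        "asset": {"id": "<MODEL-ID>"},
        "hardware_spec": {"name": "S", "num_nodes": 1},
        "batch": {}
      }'

The DEPLOYMENT-ID to use when you submit jobs is returned in the response metadata.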
You can then submit jobs for your deployed model, defining the input data, the
output (results of the optimization solve), and the log file.
For example, the following
shows the contents of a file called myjob.json. It contains (inline) input data,
some solve parameters, and specifies that the output will be a .csv file. For more information, see
Decision Optimization batch deployment and model execution.
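As a sketch, myjob.json might look like the following; the table data, solve parameters, and identifiers are illustrative placeholders:

{
  "name": "diet-job",
  "space_id": "<SPACE-ID>",
  "deployment": {"id": "<DEPLOYMENT-ID>"},
  "decision_optimization": {
    "solve_parameters": {
      "oaas.logAttachmentName": "log.txt",
      "oaas.logTailEnabled": "true"
    },
    "input_data": [
      {
        "id": "diet_food.csv",
        "fields": ["name", "unit_cost", "qmin", "qmax"],
        "values": [["Roasted Chicken", 0.84, 0, 10], ["Spaghetti W/ Sauce", 0.78, 0, 10]]
      }
    ],
    "output_data": [
      {"id": ".*\\.csv"}
    ]
  }
}

You might then submit the job with a call such as:

curl -X POST "https://us-south.ml.cloud.ibm.com/ml/v4/deployment_jobs?version=2020-08-01" \
  -H "Authorization: Bearer <access_token>" \
  -H "ML-Instance-ID: <watsonx.ai Runtime GUID>" \
  -H "Content-Type: application/json" \
  -d @myjob.json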
If you delete a job using the API, it will still be displayed in the user
interface.
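For example, a job might be deleted with a call such as this sketch (the hard_delete query parameter shown is illustrative; check the API reference for the exact deletion behavior):

curl -X DELETE "https://us-south.ml.cloud.ibm.com/ml/v4/deployment_jobs/<JOB-ID>?version=2020-08-01&space_id=<SPACE-ID>&hard_delete=true" \
  -H "Authorization: Bearer <access_token>" \
  -H "ML-Instance-ID: <watsonx.ai Runtime GUID>"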
Optional: You can delete deployments as follows:
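For example (same illustrative endpoint and version date as before, with placeholders in angle brackets):

curl -X DELETE "https://us-south.ml.cloud.ibm.com/ml/v4/deployments/<DEPLOYMENT-ID>?version=2020-08-01&space_id=<SPACE-ID>" \
  -H "Authorization: Bearer <access_token>" \
  -H "ML-Instance-ID: <watsonx.ai Runtime GUID>"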
If you delete a deployment that contains jobs using the API, the jobs will still
be displayed in the deployment space in the user interface.
Results
Once your model has been deployed and the job has run, the solution results are provided either
inline or in the file and location that you specified, for example using an S3 reference. You can
post new jobs using the deployment-ID without having to redeploy your model.
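For example, a sketch of retrieving a job's status and solution output (endpoint, version date, and placeholders as before):

curl -X GET "https://us-south.ml.cloud.ibm.com/ml/v4/deployment_jobs/<JOB-ID>?version=2020-08-01&space_id=<SPACE-ID>" \
  -H "Authorization: Bearer <access_token>" \
  -H "ML-Instance-ID: <watsonx.ai Runtime GUID>"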