Running jobs
Last updated: Oct 09, 2024
Decision Optimization running jobs

Decision Optimization uses Watson Machine Learning asynchronous APIs to enable jobs to be run in parallel.

To solve a problem, you create a new job from the model deployment and associate data with it. See Deployment steps and the REST API example. There is no charge for deploying a model; only solving a model with data is charged, based on the running time.

To solve more than one job at a time, specify more than one node when you create your deployment. For example, in this REST API example, increase the number of nodes by changing the value of the nodes property: "nodes" : 1.
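For illustration, a fragment of a deployment request body with the nodes property set to 2 might look like the following sketch, written as a Python dict. Only the nodes value is the point here; the other field names are placeholders, and the full schema is described in the REST API example.

# Illustrative fragment only: field names other than "nodes" are placeholders,
# and the complete deployment schema is described in the REST API example.
deployment_payload = {
    "name": "decision-optimization-deployment",  # placeholder deployment name
    "asset": {"id": "<MODEL_ASSET_ID>"},         # placeholder reference to the model
    "nodes": 2,                                   # allow up to 2 jobs to run in parallel
}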

PODs (nodes)

When a job is created and submitted, the way it is executed depends on the current configuration of the Watson Machine Learning instance and on the jobs that are already running, as shown in the following diagram.

Figure: Job workflow showing the job queue, an existing POD, and a new POD.
  1. The new job is sent to the queue.
  2. If a POD is started but idle (not running a job), it immediately begins processing this job.
  3. Otherwise, if the maximum number of nodes has not been reached, a new POD is started (this can take a few seconds), and the job is assigned to be processed by this new POD.
  4. Otherwise, the job waits in the queue until one of the running PODs finishes and can pick up the waiting job (see the sketch after this list).
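The dispatch rules above can be summarized with the purely illustrative Python sketch below. It is not part of any Watson Machine Learning API; the names (Pod, dispatch) are invented for the example.

from dataclasses import dataclass
from typing import List


@dataclass
class Pod:
    """Illustrative stand-in for a running POD (node)."""
    busy: bool = False


def dispatch(pods: List[Pod], max_nodes: int) -> str:
    """Decide what happens to a newly queued job, following the rules above.

    This illustrates the documented behavior; it is not actual platform code.
    """
    # Rule 2: an idle POD picks the job up immediately.
    for pod in pods:
        if not pod.busy:
            pod.busy = True
            return "assigned to an idle POD"

    # Rule 3: start a new POD if the maximum number of nodes is not reached.
    if len(pods) < max_nodes:
        pods.append(Pod(busy=True))  # starting a POD takes a few seconds in practice
        return "assigned to a newly started POD"

    # Rule 4: otherwise the job waits in the queue.
    return "waiting in the queue"


pods: List[Pod] = []
print(dispatch(pods, max_nodes=1))  # -> assigned to a newly started POD
print(dispatch(pods, max_nodes=1))  # -> waiting in the queue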

The configuration of PODs of each size is as follows:

Table 1. T-shirt sizes for Decision Optimization

Definition          Name   Description
2 vCPU and 8 GB     S      Small
4 vCPU and 16 GB    M      Medium
8 vCPU and 32 GB    L      Large
16 vCPU and 64 GB   XL     Extra Large

For all configurations, 1 vCPU and 512 MB are reserved for internal use.

In addition to the solving time, pricing depends on the selected size through a multiplier.

In the deployment configuration, you can also set the maximum number of nodes to be used.

Idle PODs are automatically stopped after a timeout. If a new job is submitted when no PODs are up, it takes approximately 30 seconds for a POD to restart.

Running time based pricing (CUH)

Only the job solving time is charged: the idle time for PODs is not charged.

Depending on the size of the POD used, a different multiplier is applied to compute the number of capacity unit hours (CUH) consumed.
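As a rough illustration of how the charge is computed, the sketch below multiplies the solve time by a size-dependent multiplier. The multiplier values shown are placeholders for the example, not published rates.

# Illustrative only: the multiplier values below are placeholders, not the
# published rates for each T-shirt size. The point is the formula:
#     CUH consumed = solve time in hours x size multiplier
SIZE_MULTIPLIER = {"S": 1, "M": 2, "L": 4, "XL": 8}  # assumed values for the example


def cuh_consumed(solve_seconds: float, size: str) -> float:
    """Return the capacity unit hours charged for one job (idle POD time is free)."""
    solve_hours = solve_seconds / 3600.0
    return solve_hours * SIZE_MULTIPLIER[size]


# A 10-minute solve on a medium (M) POD, with the assumed multiplier of 2:
print(cuh_consumed(600, "M"))  # ~0.33 CUH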

REST API example

For the full procedure of deploying a model and links to the Swagger documentation, see REST API example.
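As a minimal sketch of submitting a job to an existing deployment with Python and the requests library, the snippet below assumes the Watson Machine Learning v4 deployment_jobs endpoint; the exact endpoint path, version parameter, and payload fields should be checked against the linked REST API example and Swagger documentation, and all values shown are placeholders.

import requests

# All values below are placeholders; see the REST API example for the exact
# endpoint, version parameter, and payload schema.
BASE_URL = "https://us-south.ml.cloud.ibm.com"       # assumed region endpoint
TOKEN = "<IAM_ACCESS_TOKEN>"                          # bearer token for your instance
SPACE_ID = "<DEPLOYMENT_SPACE_ID>"
DEPLOYMENT_ID = "<DECISION_OPTIMIZATION_DEPLOYMENT_ID>"

payload = {
    "space_id": SPACE_ID,
    "deployment": {"id": DEPLOYMENT_ID},
    # Input tables for the model; the structure depends on your model.
    "decision_optimization": {
        "input_data": [
            {"id": "diet_food.csv", "fields": ["name", "cost"], "values": [["bread", 2.0]]}
        ],
        "output_data": [{"id": ".*\\.csv"}],
    },
}

response = requests.post(
    f"{BASE_URL}/ml/v4/deployment_jobs",
    params={"version": "2020-08-01"},                 # assumed version date
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
)
print(response.status_code, response.json().get("metadata", {}).get("id"))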

Python API example

In addition to the REST APIs, Watson Machine Learning provides a Python API that you can use to create, deploy, and use a Decision Optimization model from a Python notebook.

For more information, see Python client example.

An example notebook describing and documenting all steps is available from the Samples.
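For orientation, a minimal sketch with the ibm_watson_machine_learning Python client might look like the following; the credential fields, metadata names, and job payload are assumptions to verify against the linked Python client example and the sample notebook.

# Minimal sketch, assuming the ibm_watson_machine_learning client is installed
# (pip install ibm-watson-machine-learning). Credentials, metadata names, and the
# job payload are placeholders to verify against the Python client example.
from ibm_watson_machine_learning import APIClient

wml_credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",  # assumed region endpoint
    "apikey": "<YOUR_API_KEY>",
}

client = APIClient(wml_credentials)
client.set.default_space("<DEPLOYMENT_SPACE_ID>")

# Submit a job to an existing Decision Optimization deployment.
job_payload = {
    client.deployments.DecisionOptimizationMetaNames.INPUT_DATA: [
        {"id": "diet_food.csv", "fields": ["name", "cost"], "values": [["bread", 2.0]]}
    ],
    client.deployments.DecisionOptimizationMetaNames.OUTPUT_DATA: [{"id": ".*\\.csv"}],
}
job = client.deployments.create_job("<DEPLOYMENT_ID>", meta_props=job_payload)
print(client.deployments.get_job_status(job["metadata"]["id"]))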
