Use Decision Optimization to plan your diet with ibm-watsonx-ai¶

This notebook demonstrates how to use the Decision Optimization and Watson Machine Learning services. It contains steps and code for working with the ibm-watsonx-ai library, available in the PyPI repository. It also introduces commands for getting a model and data, persisting the model, deploying it, and scoring it.

Some familiarity with Python is helpful.

Learning goals¶

The learning goals of this notebook are:

  • Load a Decision Optimization model into a Watson Machine Learning repository.
  • Prepare data for training and evaluation.
  • Create a Watson Machine Learning job.
  • Persist a Decision Optimization model in a Watson Machine Learning repository.
  • Deploy a model for batch scoring using watsonx.ai API.

Contents¶

This notebook contains the following parts:

  1. Set up the environment
  2. Download externally created Decision Optimization model and data
  3. Persist externally created Decision Optimization model
  4. Deploy in a Cloud
  5. Create job
  6. Clean up
  7. Summary and next steps

1. Set up the environment¶

Before you use the sample code in this notebook, you must:

  • create a Watson Machine Learning (WML) service instance. A free plan is offered, and information about how to create an instance can be found at https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-overview.html?context=cpdaas.

Install and then import the watsonx.ai client library.

Note: ibm-watsonx-ai documentation can be found here.

In [1]:
# Install watsonx.ai client API

!pip install ibm-watsonx-ai
Requirement already satisfied: ibm-watsonx-ai in /opt/conda/envs/Python-RT23.1/lib/python3.10/site-packages (1.0.11)
Requirement already satisfied: requests in /opt/conda/envs/Python-RT23.1/lib/python3.10/site-packages (from ibm-watsonx-ai) (2.31.0)
Requirement already satisfied: urllib3 in /opt/conda/envs/Python-RT23.1/lib/python3.10/site-packages (from ibm-watsonx-ai) (1.26.19)
Requirement already satisfied: pandas<2.2.0,>=0.24.2 in /opt/conda/envs/Python-RT23.1/lib/python3.10/site-packages (from ibm-watsonx-ai) (1.5.3)
Requirement already satisfied: certifi in /opt/conda/envs/Python-RT23.1/lib/python3.10/site-packages (from ibm-watsonx-ai) (2024.7.4)
Requirement already satisfied: lomond in /opt/conda/envs/Python-RT23.1/lib/python3.10/site-packages (from ibm-watsonx-ai) (0.3.3)
Requirement already satisfied: tabulate in /opt/conda/envs/Python-RT23.1/lib/python3.10/site-packages (from ibm-watsonx-ai) (0.8.10)
Requirement already satisfied: packaging in /opt/conda/envs/Python-RT23.1/lib/python3.10/site-packages (from ibm-watsonx-ai) (23.0)
Requirement already satisfied: ibm-cos-sdk<2.14.0,>=2.12.0 in /opt/conda/envs/Python-RT23.1/lib/python3.10/site-packages (from ibm-watsonx-ai) (2.12.0)
Requirement already satisfied: importlib-metadata in /opt/conda/envs/Python-RT23.1/lib/python3.10/site-packages (from ibm-watsonx-ai) (6.0.0)
Requirement already satisfied: ibm-cos-sdk-core==2.12.0 in /opt/conda/envs/Python-RT23.1/lib/python3.10/site-packages (from ibm-cos-sdk<2.14.0,>=2.12.0->ibm-watsonx-ai) (2.12.0)
Requirement already satisfied: ibm-cos-sdk-s3transfer==2.12.0 in /opt/conda/envs/Python-RT23.1/lib/python3.10/site-packages (from ibm-cos-sdk<2.14.0,>=2.12.0->ibm-watsonx-ai) (2.12.0)
Requirement already satisfied: jmespath<1.0.0,>=0.10.0 in /opt/conda/envs/Python-RT23.1/lib/python3.10/site-packages (from ibm-cos-sdk<2.14.0,>=2.12.0->ibm-watsonx-ai) (0.10.0)
Requirement already satisfied: python-dateutil<3.0.0,>=2.8.2 in /opt/conda/envs/Python-RT23.1/lib/python3.10/site-packages (from ibm-cos-sdk-core==2.12.0->ibm-cos-sdk<2.14.0,>=2.12.0->ibm-watsonx-ai) (2.8.2)
Requirement already satisfied: pytz>=2020.1 in /opt/conda/envs/Python-RT23.1/lib/python3.10/site-packages (from pandas<2.2.0,>=0.24.2->ibm-watsonx-ai) (2022.7)
Requirement already satisfied: numpy>=1.21.0 in /opt/conda/envs/Python-RT23.1/lib/python3.10/site-packages (from pandas<2.2.0,>=0.24.2->ibm-watsonx-ai) (1.23.5)
Requirement already satisfied: charset-normalizer<4,>=2 in /opt/conda/envs/Python-RT23.1/lib/python3.10/site-packages (from requests->ibm-watsonx-ai) (2.0.4)
Requirement already satisfied: idna<4,>=2.5 in /opt/conda/envs/Python-RT23.1/lib/python3.10/site-packages (from requests->ibm-watsonx-ai) (3.7)
Requirement already satisfied: zipp>=0.5 in /opt/conda/envs/Python-RT23.1/lib/python3.10/site-packages (from importlib-metadata->ibm-watsonx-ai) (3.11.0)
Requirement already satisfied: six>=1.10.0 in /opt/conda/envs/Python-RT23.1/lib/python3.10/site-packages (from lomond->ibm-watsonx-ai) (1.16.0)
In [2]:
!pip install -U wget
Collecting wget
  Downloading wget-3.2.zip (10 kB)
  Preparing metadata (setup.py) ... done
Building wheels for collected packages: wget
  Building wheel for wget (setup.py) ... done
  Created wheel for wget: filename=wget-3.2-py3-none-any.whl size=9656 sha256=94781e98f20ff3486dfb46693a806619de1491bbc8b0b421145dc3332bbc2fc8
  Stored in directory: /tmp/wsuser/.cache/pip/wheels/8b/f1/7f/5c94f0a7a505ca1c81cd1d9208ae2064675d97582078e6c769
Successfully built wget
Installing collected packages: wget
Successfully installed wget-3.2
In [3]:
from ibm_watsonx_ai import APIClient
from ibm_watsonx_ai import Credentials

Create a client instance¶

Use your IBM Cloud API key. You can find information on how to get your API key here and the instance URL here.

In [4]:
# Instantiate a client using credentials
credentials = Credentials(
      api_key = "<API_key>",
      url = "<instance_url>"
)

client = APIClient(credentials)

Working with spaces¶

First of all, you need to create a space that will be used for your work. If you do not already have a space, you can use the Deployment Spaces Dashboard to create one.

  • Click New deployment space
  • Create an empty space
  • Select Cloud Object Storage
  • Select Watson Machine Learning instance and press Create

Tip: You can also use the SDK to prepare the space for your work, as in the sketch below. More information can be found here.
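A minimal sketch of creating a space with the SDK; the space name, Cloud Object Storage CRN, and Watson Machine Learning instance name and CRN below are placeholders you must replace, and the exact metadata fields may vary with your plan:

# Sketch: create a deployment space with the SDK (placeholder values)
space_metadata = {
    client.spaces.ConfigurationMetaNames.NAME: "<space_name>",
    client.spaces.ConfigurationMetaNames.STORAGE: {
        "type": "bmcos_object_storage",
        "resource_crn": "<COS_resource_crn>"
    },
    client.spaces.ConfigurationMetaNames.COMPUTE: {
        "name": "<WML_instance_name>",
        "crn": "<WML_instance_crn>"
    }
}

space_details = client.spaces.store(meta_props=space_metadata)
space_id = client.spaces.get_id(space_details)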

In [5]:
# Find the space ID

space_name = "<space_name>"

space_id = [x['metadata']['id'] for x in client.spaces.get_details()['resources'] if x['entity']['name'] == space_name][0]

client = APIClient(credentials, space_id = space_id)

2. Download externally created Decision Optimization model and data¶

In this section, you will download an externally created Decision Optimization model and the data used with it.

Action: Get your Decision Optimization model.

In [6]:
import os
import wget
model_path = 'do-model.tar.gz'
if not os.path.isfile(model_path):
    wget.download("https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/models/decision_optimization/do-model.tar.gz")

3. Persist externally created Decision Optimization model¶

In this section, you will learn how to store your model in the Watson Machine Learning repository by using the watsonx.ai client.

3.1 Publish model¶

Publish the model in the Watson Machine Learning repository on Cloud.¶

Define the model name, author name, and email.

Get the software specification for the Decision Optimization model.

In [7]:
software_spec_id = client.software_specifications.get_id_by_name("do_22.1")

Output data schema for storing the model in the WML repository

In [8]:
output_data_schema = [{'id': 'stest',
                       'type': 'list',
                       'fields': [{'name': 'age', 'type': 'float'},
                                  {'name': 'sex', 'type': 'float'},
                                  {'name': 'cp', 'type': 'float'},
                                  {'name': 'restbp', 'type': 'float'},
                                  {'name': 'chol', 'type': 'float'},
                                  {'name': 'fbs', 'type': 'float'},
                                  {'name': 'restecg', 'type': 'float'},
                                  {'name': 'thalach', 'type': 'float'},
                                  {'name': 'exang', 'type': 'float'},
                                  {'name': 'oldpeak', 'type': 'float'},
                                  {'name': 'slope', 'type': 'float'},
                                  {'name': 'ca', 'type': 'float'},
                                  {'name': 'thal', 'type': 'float'}]
                      },
                      {'id': 'teste2',
                       'type': 'test',
                       'fields': [{'name': 'age', 'type': 'float'},
                                  {'name': 'sex', 'type': 'float'},
                                  {'name': 'cp', 'type': 'float'},
                                  {'name': 'restbp', 'type': 'float'},
                                  {'name': 'chol', 'type': 'float'},
                                  {'name': 'fbs', 'type': 'float'},
                                  {'name': 'restecg', 'type': 'float'},
                                  {'name': 'thalach', 'type': 'float'},
                                  {'name': 'exang', 'type': 'float'},
                                  {'name': 'oldpeak', 'type': 'float'},
                                  {'name': 'slope', 'type': 'float'},
                                  {'name': 'ca', 'type': 'float'},
                                  {'name': 'thal', 'type': 'float'}]
                      }]
In [9]:
model_meta_props = {
                        client.repository.ModelMetaNames.NAME: "LOCALLY created DO model",
                        client.repository.ModelMetaNames.TYPE: "do-docplex_22.1",
                        client.repository.ModelMetaNames.SOFTWARE_SPEC_ID: software_spec_id,
                        client.repository.ModelMetaNames.OUTPUT_DATA_SCHEMA: output_data_schema
                    }
published_model = client.repository.store_model(model=model_path, meta_props=model_meta_props)

Note: You can see that the model has been successfully stored in the Watson Machine Learning service.

3.2 Get model details¶

In [10]:
import json

published_model_uid = client.repository.get_model_id(published_model)
model_details = client.repository.get_details(published_model_uid)
print(json.dumps(model_details, indent=2))
{
  "entity": {
    "hybrid_pipeline_software_specs": [],
    "schemas": {
      "input": [],
      "output": [
        {
          "fields": [
            {
              "name": "age",
              "type": "float"
            },
            {
              "name": "sex",
              "type": "float"
            },
            {
              "name": "cp",
              "type": "float"
            },
            {
              "name": "restbp",
              "type": "float"
            },
            {
              "name": "chol",
              "type": "float"
            },
            {
              "name": "fbs",
              "type": "float"
            },
            {
              "name": "restecg",
              "type": "float"
            },
            {
              "name": "thalach",
              "type": "float"
            },
            {
              "name": "exang",
              "type": "float"
            },
            {
              "name": "oldpeak",
              "type": "float"
            },
            {
              "name": "slope",
              "type": "float"
            },
            {
              "name": "ca",
              "type": "float"
            },
            {
              "name": "thal",
              "type": "float"
            }
          ],
          "id": "stest",
          "type": "list"
        },
        {
          "fields": [
            {
              "name": "age",
              "type": "float"
            },
            {
              "name": "sex",
              "type": "float"
            },
            {
              "name": "cp",
              "type": "float"
            },
            {
              "name": "restbp",
              "type": "float"
            },
            {
              "name": "chol",
              "type": "float"
            },
            {
              "name": "fbs",
              "type": "float"
            },
            {
              "name": "restecg",
              "type": "float"
            },
            {
              "name": "thalach",
              "type": "float"
            },
            {
              "name": "exang",
              "type": "float"
            },
            {
              "name": "oldpeak",
              "type": "float"
            },
            {
              "name": "slope",
              "type": "float"
            },
            {
              "name": "ca",
              "type": "float"
            },
            {
              "name": "thal",
              "type": "float"
            }
          ],
          "id": "teste2",
          "type": "test"
        }
      ]
    },
    "software_spec": {
      "id": "e51999ba-6452-5f1f-8287-17228b88b652",
      "name": "do_22.1"
    },
    "type": "do-docplex_22.1"
  },
  "metadata": {
    "created_at": "2024-09-02T13:21:53.097Z",
    "id": "515fa384-5c14-4c30-ab74-d62893058997",
    "modified_at": "2024-09-02T13:21:55.396Z",
    "name": "LOCALLY created DO model",
    "owner": "IBMid-270006YQEG",
    "resource_key": "ebb51133-5460-4b9d-8c08-46d3dfc31c94",
    "space_id": "b7bdf976-c858-49d4-8016-294f73aec947"
  },
  "system": {
    "warnings": []
  }
}

3.3 Get all models¶

In [11]:
client.repository.list_models()
Out[11]:
ID NAME CREATED TYPE SPEC_STATE SPEC_REPLACEMENT
0 515fa384-5c14-4c30-ab74-d62893058997 LOCALLY created DO model 2024-09-02T13:21:53.002Z do-docplex_22.1 supported

4. Deploy in a Cloud¶

In this section, you will learn how to create a batch deployment and a job using the watsonx.ai client.

You can use the commands below to create a batch deployment for the stored model.

4.1 Create model deployment¶

In [12]:
meta_data = {
    client.deployments.ConfigurationMetaNames.NAME: "deployment_DO",
    client.deployments.ConfigurationMetaNames.BATCH: {},
    client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: {"name": "S", "num_nodes": 1}

}
deployment_details = client.deployments.create(published_model_uid, meta_props=meta_data)

######################################################################################

Synchronous deployment creation for id: '515fa384-5c14-4c30-ab74-d62893058997' started

######################################################################################


ready.


-----------------------------------------------------------------------------------------------
Successfully finished deployment creation, deployment_id='4b549e44-6d12-4885-b19e-78136c0058a2'
-----------------------------------------------------------------------------------------------


Note: Here we use the deployment ID saved in the deployment_details object returned during deployment creation. Below, we also show how to retrieve deployments from the Watson Machine Learning instance.

In [13]:
deployment_uid = client.deployments.get_id(deployment_details)
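As an alternative to keeping the deployment_details object, a sketch of looking up the deployment ID later by name, mirroring the space lookup in section 1 (it assumes the name "deployment_DO" used above is unique in the space):

# Sketch: retrieve the deployment ID by name from the current space
deployment_name = "deployment_DO"

deployment_uid = [
    d['metadata']['id']
    for d in client.deployments.get_details()['resources']
    if d['entity']['name'] == deployment_name
][0]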

Now, you can list all deployments.

In [14]:
client.deployments.list()
Out[14]:
ID NAME STATE CREATED ARTIFACT_TYPE SPEC_STATE SPEC_REPLACEMENT
0 4b549e44-6d12-4885-b19e-78136c0058a2 deployment_DO ready 2024-09-02T13:21:57.126Z do supported

4.2 Get deployment details¶

In [15]:
client.deployments.get_details(deployment_uid)
Out[15]:
{'entity': {'asset': {'id': '515fa384-5c14-4c30-ab74-d62893058997'},
  'batch': {},
  'custom': {},
  'deployed_asset_type': 'do',
  'hardware_spec': {'id': 'e7ed1d6c-2e89-42d7-aed5-863b972c1d2b',
   'name': 'S',
   'num_nodes': 1},
  'name': 'deployment_DO',
  'space_id': 'b7bdf976-c858-49d4-8016-294f73aec947',
  'status': {'state': 'ready'}},
 'metadata': {'created_at': '2024-09-02T13:21:57.126Z',
  'id': '4b549e44-6d12-4885-b19e-78136c0058a2',
  'modified_at': '2024-09-02T13:21:57.126Z',
  'name': 'deployment_DO',
  'owner': 'IBMid-270006YQEG',
  'space_id': 'b7bdf976-c858-49d4-8016-294f73aec947'}}

5. Create job¶

You can create a job for the deployment using the create_job method.

Prepare the test data.

In [16]:
# Import pandas library 
import pandas as pd 
  
# Initialize the input data as pandas DataFrames
diet_food = pd.DataFrame([ ["Roasted Chicken", 0.84, 0, 10],
                ["Spaghetti W/ Sauce", 0.78, 0, 10],
                ["Tomato,Red,Ripe,Raw", 0.27, 0, 10],
                ["Apple,Raw,W/Skin", 0.24, 0, 10],
                ["Grapes", 0.32, 0, 10],
                ["Chocolate Chip Cookies", 0.03, 0, 10],
                ["Lowfat Milk", 0.23, 0, 10],
                ["Raisin Brn", 0.34, 0, 10],
                ["Hotdog", 0.31, 0, 10]] , columns = ["name","unit_cost","qmin","qmax"])

diet_food_nutrients = pd.DataFrame([
                ["Spaghetti W/ Sauce", 358.2, 80.2, 2.3, 3055.2, 11.6, 58.3, 8.2],
                ["Roasted Chicken", 277.4, 21.9, 1.8, 77.4, 0, 0, 42.2],
                ["Tomato,Red,Ripe,Raw", 25.8, 6.2, 0.6, 766.3, 1.4, 5.7, 1],
                ["Apple,Raw,W/Skin", 81.4, 9.7, 0.2, 73.1, 3.7, 21, 0.3],
                ["Grapes", 15.1, 3.4, 0.1, 24, 0.2, 4.1, 0.2],
                ["Chocolate Chip Cookies", 78.1, 6.2, 0.4, 101.8, 0, 9.3, 0.9],
                ["Lowfat Milk", 121.2, 296.7, 0.1, 500.2, 0, 11.7, 8.1],
                ["Raisin Brn", 115.1, 12.9, 16.8, 1250.2, 4, 27.9, 4],
                ["Hotdog", 242.1, 23.5, 2.3, 0, 0, 18, 10.4 ]
            ] , columns = ["Food","Calories","Calcium","Iron","Vit_A","Dietary_Fiber","Carbohydrates","Protein"])

diet_nutrients = pd.DataFrame([
                ["Calories", 2000, 2500],
                ["Calcium", 800, 1600],
                ["Iron", 10, 30],
                ["Vit_A", 5000, 50000],
                ["Dietary_Fiber", 25, 100],
                ["Carbohydrates", 0, 300],
                ["Protein", 50, 100]
            ], columns = ["name","qmin","qmax"])
In [17]:
job_payload_ref = {
    client.deployments.DecisionOptimizationMetaNames.INPUT_DATA: [
        {
            "id": "diet_food.csv",
            "values": diet_food
        },
        {
            "id": "diet_food_nutrients.csv",
            "values": diet_food_nutrients
        },
        {
            "id": "diet_nutrients.csv",
            "values": diet_nutrients
        }
    ],
    client.deployments.DecisionOptimizationMetaNames.OUTPUT_DATA: [
        {
            "id": ".*.csv"
        }
    ]
}

Create a job using the watsonx.ai client

In [18]:
job = client.deployments.create_job(deployment_uid, meta_props=job_payload_ref)

Check the status of the created job and the calculated KPI.

In [19]:
import time

job_id = client.deployments.get_job_id(job)

elapsed_time = 0
while client.deployments.get_job_status(job_id).get('state') != 'completed' and elapsed_time < 300:
    elapsed_time += 10
    time.sleep(10)
if client.deployments.get_job_status(job_id).get('state') == 'completed':
    job_details_do = client.deployments.get_job_details(job_id)
    kpi = job_details_do['entity']['decision_optimization']['solve_state']['details']['KPI.Total Calories']
    print(f"KPI: {kpi}")
else:
    print("Job hasn't completed successfully in 5 minutes.")
KPI: 2000.0
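The job details also contain the solution tables requested through OUTPUT_DATA. A sketch of turning them back into pandas DataFrames, assuming the job completed and the inline output format (entries with 'id', 'fields' and 'values'):

# Sketch: rebuild the solution CSVs returned inline with the job
output_tables = {
    output['id']: pd.DataFrame(output['values'], columns=output['fields'])
    for output in job_details_do['entity']['decision_optimization'].get('output_data', [])
}

for name, table in output_tables.items():
    print(name)
    print(table.head())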

6. Clean up¶

If you want to clean up all created assets:

  • experiments
  • trainings
  • pipelines
  • model definitions
  • models
  • functions
  • deployments

follow this sample notebook, or delete them directly with the client as in the sketch below.
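For the assets created in this notebook specifically, a minimal cleanup sketch using the client directly:

# Sketch: delete the deployment and model created in this notebook
client.deployments.delete(deployment_uid)
client.repository.delete(published_model_uid)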

7. Summary and next steps¶

You've successfully completed this notebook!

You've learned how to:

  • work with the watsonx.ai client
  • upload your model on Watson Machine Learning
  • create a deployment
  • create and monitor a job with inline data for your deployed model

Check out our online documentation for more samples, tutorials, and guides:

  • IBM Cloud Pak for Data as a Service documentation
  • IBM watsonx.ai documentation

Authors¶

Wojciech Jargielo, Software Engineer


Copyright © 2020-2024 IBM. This notebook and its source code are released under the terms of the MIT License.