The interactive setup tutorial

In this tutorial, you learn how to provision the necessary IBM Cloud services, set up a project and deploy a sample model in Watson Studio, and configure monitors in Watson OpenScale.

  1. Provision IBM Cloud machine learning and storage services.
  2. Set up a Watson Studio project, and create, train, and deploy a machine learning model.
  3. Configure and explore trust, transparency, and explainability for your model.

Provision prerequisite IBM Cloud services

To complete this tutorial, you need the following accounts and services in addition to Watson OpenScale.

For best performance, create prerequisite services in the same region as Watson OpenScale. To view available locations for Watson OpenScale, see Service availability.

  1. Log in to your IBM Cloud account with your IBMid.
  2. For each of the following services, create an instance by giving the service a name, selecting the Lite (free) plan, and clicking Create: Watson Machine Learning and IBM Cloud Object Storage.

Set up a Watson Studio project

  1. Log in to your Watson Studio account and begin by creating a new project. Click Create a project.

    Watson Studio create project

  2. Click the Create an empty project tile.

    The Watson Studio Create an empty project tile is displayed

  3. Give your project a name and description, make sure that the IBM Cloud Object Storage service that you created is selected in the Storage list, and click Create.

Associate your IBM Cloud services with your Watson Studio project

  1. Open your Watson Studio project and select the Settings tab. In the Associated Services section, click Add service and then click Watson.

    Add Watson Service

  2. Click the Add link on the Machine Learning tile.
  3. On the Existing tab, from the Existing Service Instance list, click the service that you created previously.
  4. Click Select.

Add the Credit Risk model

  1. In Watson Studio, from your project, click the Add to project button, and click the Watson Machine Learning model tile.

    the credit risk tile is shown

  2. On the Import Model page, in the Select model type section, click the From sample radio button.
  3. Click the Credit Risk model tile, and then click Import.

    The credit risk tile is shown

Deploy the Credit Risk model

  1. From the Credit Risk model page, click the Deployments tab, and then click Add Deployment.
  2. Type credit-risk-deploy as the name for your deployment, and select the Web service deployment type.
  3. Click Save.

Configure Watson OpenScale

Now that the machine learning model is deployed, you can configure Watson OpenScale to ensure trust and transparency with your models.

Provision Watson OpenScale

  1. Provision a new Watson OpenScale service instance.

    Watson OpenScale

  2. Give your service a name, select the Lite plan, and click Create.

  3. Select the Manage tab of your Watson OpenScale instance, and click the Launch application button. The Welcome to Watson OpenScale demonstration page opens.
  4. For this tutorial, click No Thanks.

Select a database

Next, you need to choose a database. You have two options: the free Lite plan database, or an existing or new database of your own.

  1. For this tutorial, select the Use the free Lite plan database tile.

    The free database has important limitations: it is a hosted database to which you do not have separate access, it gives Watson OpenScale access to your database and data, and it is not GDPR-compliant. For complete details about each option, see the Specifying a database topic. An existing database can be either a PostgreSQL database or a Db2 database; a connectivity check for an existing PostgreSQL database is sketched after these steps.

    Select database

  2. Review the summary and click Save. When you are prompted to confirm, click the Continue with Configuration button.
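
If you plan to use an existing PostgreSQL database instead of the free Lite plan database, it can help to confirm that the database is reachable before you configure Watson OpenScale. The following is a minimal sketch that assumes the psycopg2 client; the host, database name, and credentials are placeholders for your own values.

    # Optional connectivity check for an existing PostgreSQL database.
    # All connection values are placeholders; substitute your own.
    import psycopg2

    conn = psycopg2.connect(
        host="your-postgres-host.example.com",  # placeholder hostname
        port=5432,
        dbname="openscale_datamart",            # placeholder database name
        user="admin",                           # placeholder user
        password="REPLACE_ME",
        sslmode="require",                      # many hosted plans require TLS
    )
    with conn.cursor() as cur:
        cur.execute("SELECT version();")
        print(cur.fetchone()[0])
    conn.close()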

Connect Watson OpenScale to your machine learning model

  1. Click the Watson Machine Learning option, and then click Save.

  2. For this tutorial, select your IBM Watson Machine Learning instance from the menu and click Next.

    You also have the option to select a different Machine Learning location. See Specifying an IBM Watson Machine Learning service instance for additional information.

    Set Machine Learning instance

You are now able to select the deployed models that you want to monitor by using Watson OpenScale.

Provide a set of sample data to your model

Before you configure your monitors, generate a scoring request against your model so that there is payload logging data for the monitors to process. To generate a scoring request, you provide sample data to Watson Studio in the form of a JSON file. Later in the tutorial, you repeat the scoring request to provide actual data to the Watson OpenScale monitors.
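
For reference, a Watson Machine Learning online scoring payload is a JSON object with a fields array and a values array of rows. The sketch below shows only the general shape; the field names and values are illustrative placeholders, and the actual payload for the Credit Risk model is the contents of the credit_payload_data.json file that you download in the next step.

    # Illustrative shape of a Watson Machine Learning scoring payload.
    # Field names and row values are placeholders; use the contents of
    # credit_payload_data.json for the actual Credit Risk request.
    sample_payload = {
        "fields": ["CheckingStatus", "LoanDuration", "LoanAmount", "Age"],
        "values": [
            ["no_checking", 28, 4053, 36],
            ["less_0", 13, 1095, 27],
        ],
    }
    print(sample_payload)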

  1. Download the credit_payload_data.json file.

  2. From the Deployments tab of your Watson Studio project, click the credit-risk-deploy link, click the Test tab, and select the JSON input icon.

    JSON test

  3. Now, open the credit_payload_data.json file that you downloaded, and copy the contents to the JSON field in the Test tab. Click the Predict button to send the payload data to your model for scoring. If you prefer to script this step, a REST-based sketch follows these steps.

    JSON predict
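
If you prefer to script the scoring request rather than use the Test tab, the following sketch sends the downloaded payload to the deployment's scoring endpoint over REST. It assumes that you copy the scoring URL from the deployment details page and that you authenticate with an IBM Cloud API key; both values here are placeholders.

    # Hedged sketch: score the sample payload against the deployed model over REST.
    # SCORING_URL and API_KEY are placeholders; copy the scoring endpoint from the
    # deployment details page in Watson Studio and supply your own API key.
    import json
    import requests

    SCORING_URL = "https://<your-deployment-scoring-endpoint>"
    API_KEY = "YOUR_IBM_CLOUD_API_KEY"

    # Exchange the API key for an IAM access token.
    token_response = requests.post(
        "https://iam.cloud.ibm.com/identity/token",
        data={
            "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
            "apikey": API_KEY,
        },
    )
    access_token = token_response.json()["access_token"]

    # Load the sample payload downloaded earlier and send it for scoring.
    with open("credit_payload_data.json") as f:
        payload = json.load(f)

    response = requests.post(
        SCORING_URL,
        headers={"Authorization": f"Bearer {access_token}"},
        json=payload,
    )
    print(response.json())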

Next steps

Continue with this tutorial by completing the following steps:

  1. Prepare monitors for deployment.

    To prepare monitors, you must select one of the deployed models and add it to the dashboard. From the Insights tab, click a deployment tile, or click the Add to dashboard button to select a deployed model and click Configure.

  2. Set up payload logging.

    In the Payload logging section, you must specify the type of input.

  3. Set up model details.

    In the Model details section, you must record the model details. For this tutorial, select Manually configure monitors.

  4. Configure quality monitoring.

    In the Quality section, you set the quality alert threshold and sample sizes.

  5. Configure Fairness monitoring.

    In the Fairness section, choose which features to monitor for fairness. For each feature that you select, Watson OpenScale monitors the deployed model's propensity for a favorable outcome for one group over another. Although features are monitored individually, debiasing corrects issues for all features together. A simple illustration of this calculation follows this list.

  6. Configure the drift detection monitor.

    In the Drift section, you set up a drift detection model.

  7. Provide a set of sample feedback data to your model.

    To enable monitoring for quality, you must provide your model with feedback data. Quality data does not appear in the dashboard until that is done. You can generate the requests all at once by adding sample feedback data to the model for scoring. For this task, download a CSV file that contains sample feedback data.

  8. Get insights.

    After you configure accuracy monitoring, the accuracy check runs after 1 hour. In a production system, the dashboard can accumulate feedback data. For the purposes of this tutorial, you’ll probably want to trigger the accuracy check manually after you add your feedback data so that you can see results in the Insights dashboard.

    To check the result immediately, from the Insights page, select a deployment, click one of the Quality metrics, and then click Check quality now.
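
To make the fairness monitoring in step 5 concrete, the following sketch computes a simple disparate impact ratio: the rate of favorable outcomes for a monitored group divided by the rate for the reference group. The feature, group labels, and records are illustrative placeholders, not Watson OpenScale output.

    # Illustrative disparate impact calculation for one monitored feature.
    # Records, group labels, and the favorable outcome label are placeholders.
    records = [
        {"Sex": "female", "prediction": "No Risk"},
        {"Sex": "female", "prediction": "Risk"},
        {"Sex": "male", "prediction": "No Risk"},
        {"Sex": "male", "prediction": "No Risk"},
    ]

    def favorable_rate(rows, group):
        """Fraction of a group's predictions that are the favorable outcome."""
        group_rows = [r for r in rows if r["Sex"] == group]
        favorable = [r for r in group_rows if r["prediction"] == "No Risk"]
        return len(favorable) / len(group_rows)

    # A ratio below 1.0 suggests the monitored group receives the favorable
    # outcome less often than the reference group.
    ratio = favorable_rate(records, "female") / favorable_rate(records, "male")
    print(f"Disparate impact ratio: {ratio:.2f}")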