The interactive setup tutorial
In this tutorial, you learn how to provision the necessary IBM Cloud services, set up a project and deploy a sample model in Watson Studio, and configure monitors in Watson OpenScale.
- Provision IBM Cloud machine learning and storage services.
- Set up a Watson Studio project, and create, train, and deploy a machine learning model.
- Configure and explore trust, transparency, and explainability for your model.
Provision prerequisite IBM Cloud services
To complete this tutorial, you need the following accounts and services in addition to Watson OpenScale.
For best performance, create prerequisite services in the same region as Watson OpenScale. To view available locations for Watson OpenScale, see Service availability.
- Log in to your IBM Cloud account with your IBMid.
- For each of the following services, create an instance by clicking the link, giving the service a name, selecting the Lite (free) plan, and clicking Create:
Set up a Watson Studio project
- Log in to your Watson Studio account. To create a new project, click Create a project.
- Click the Create an empty project tile.
- Give your project a name and description, make sure that the IBM Cloud Object Storage service that you created is selected in the Storage list, and click Create.
Associate your IBM Cloud Services with your Watson project
- Open your Watson Studio project and select the Settings tab. In the Associated Services section, click Add service, and then click Watson.
- Click the Add link on the machine learning tile.
- On the Existing tab, from the Existing Service Instance list, click the service that you created previously.
- Click Select.
Add the Credit Risk model
- In Watson Studio, from your project, click the Add to project button, and then click the Watson Machine Learning model tile.
- On the Import Model page, in the Select model type section, click the From sample radio button.
- Click the Credit Risk model tile, and then click Import.
Deploy the Credit Risk model
- From the Credit Risk model page, click the Deployments tab, and then, click Add Deployment.
- Type credit-risk-deploy as the name for your deployment, and select the Web service deployment type.
- Click Save.
Configure Watson OpenScale
Now that the machine learning model is deployed, you can configure Watson OpenScale to ensure trust and transparency with your models.
Provision Watson OpenScale
- Create a Watson OpenScale service instance: give your service a name, select the Lite plan, and click Create.
- Select the Manage tab of your Watson OpenScale instance, and click the Launch application button. The Welcome to Watson OpenScale demonstration page opens.
- For this tutorial, click No Thanks.
Select a database
Next, you need to choose a database. You have two options: the free Lite plan database, or an existing or new database of your own.
For this tutorial, select the Use the free Lite plan database tile.
The free database has some important limitations: it is a hosted database that does not give you separate access to it, it gives only Watson OpenScale access to your database and data, and it is not GDPR-compliant. An existing database can be a PostgreSQL database or a Db2 database. For complete details about each option, see the Specifying a database topic.
Review the summary data and click Save. When prompted, confirm and click the Continue with Configuration button.
Connect Watson OpenScale to your machine learning model
Click the Watson Machine Learning option, and then click Save.
For this tutorial, select your IBM Watson Machine Learning instance from the menu and click Next.
You also have the option to select a different Machine Learning location. See Specifying an IBM Watson Machine Learning service instance for additional information.
You can now select the deployed models that you want Watson OpenScale to monitor.
Provide a set of sample data to your model
Before you configure your monitors, you can generate a scoring request against your model to test the payload logging that the monitors process. To generate a scoring request, you must provide sample data to Watson Studio in the form of a JSON file. Later in the tutorial, you repeat the scoring request to provide actual data to the Watson OpenScale monitors.
Download the credit_payload_data.json file.
From the Deployments tab of your Watson Studio project, click the credit-risk-deploy link, click the Test tab, and select the JSON input icon.
Now, open the credit_payload_data.json file that you downloaded, and copy its contents into the JSON field on the Test tab. Click the Predict button to send the payload to your model for scoring.
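To make the shape of a scoring payload concrete, the following sketch builds a small payload in the fields/values form that Watson Machine Learning scoring requests commonly use. The feature names and values here are illustrative assumptions, not the actual contents of the credit_payload_data.json file.

```python
import json

# Hypothetical sketch of a scoring payload in the common Watson Machine
# Learning fields/values shape. The column names and values below are
# illustrative only; use the downloaded credit_payload_data.json file
# for the real tutorial data.
payload = {
    "fields": ["CheckingStatus", "LoanDuration", "LoanAmount", "Age"],
    "values": [
        ["no_checking", 28, 4500, 32],
        ["0_to_200", 12, 1200, 45],
    ],
}

# Serialize to the JSON text you would paste into the Test tab.
payload_json = json.dumps(payload, indent=2)
print(payload_json)
```

Each inner list in `values` is one record to score, in the same order as `fields`, so a single request can score many records at once.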
Continue with this tutorial by completing the following steps:
To prepare monitors, you must select one of the deployed models and add it to the dashboard. From the Insights tab, click a deployment tile, or click the Add to dashboard button to select a deployed model and click Configure.
In the Payload logging section, you must specify the type of input.
In the Model details section, you must record the model details. For this tutorial, select Manually configure monitors.
In the Quality section, you set the quality alert threshold and sample sizes.
In the Fairness section, choose which features to monitor for fairness. For each feature you select, Watson OpenScale will monitor the deployed model’s propensity for a favorable outcome for one group over the other. Although features are monitored individually, debiasing corrects issues for all features together.
In the Drift section, you set up a drift detection model.
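The fairness idea in the steps above can be illustrated with a simplified disparate-impact calculation: compare the rate of favorable outcomes for a monitored group against a reference group. This is a minimal sketch of the concept, not Watson OpenScale's implementation, and the records and group labels are invented for illustration.

```python
# Simplified sketch of the idea behind the Fairness monitor: compare the
# favorable-outcome rate of a monitored group against a reference group.
# The records and group labels below are illustrative only.
def favorable_rate(records, group):
    """Fraction of a group's records that received the favorable outcome."""
    group_records = [r for r in records if r["sex"] == group]
    favorable = [r for r in group_records if r["prediction"] == "No Risk"]
    return len(favorable) / len(group_records)

records = [
    {"sex": "male", "prediction": "No Risk"},
    {"sex": "male", "prediction": "No Risk"},
    {"sex": "male", "prediction": "Risk"},
    {"sex": "female", "prediction": "No Risk"},
    {"sex": "female", "prediction": "Risk"},
    {"sex": "female", "prediction": "Risk"},
]

# Disparate-impact ratio: values well below 1.0 suggest the monitored
# group receives the favorable outcome less often than the reference group.
ratio = favorable_rate(records, "female") / favorable_rate(records, "male")
print(round(ratio, 2))  # 0.5
```

In this toy data, males receive the favorable outcome 2 out of 3 times and females 1 out of 3 times, so the ratio is 0.5 and the monitor would flag a potential fairness issue for the monitored group.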
To enable quality monitoring, you must provide your model with feedback data; quality metrics do not appear in the dashboard until you do. You can generate the requests all at once by scoring sample feedback data against the model. For this task, download a CSV file that contains sample feedback data.
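Feedback data generally pairs the model's input features with the actual, ground-truth outcome. The sketch below builds a tiny feedback CSV in memory; the column names and rows are hypothetical assumptions, not the schema of the downloadable sample file.

```python
import csv
import io

# Illustrative sketch of feedback data: feature columns plus the actual
# (ground-truth) label in the final column. Column names and values are
# hypothetical; use the downloaded sample CSV for the real tutorial data.
rows = [
    ["no_checking", 28, 4500, "Risk"],
    ["0_to_200", 12, 1200, "No Risk"],
]

buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["CheckingStatus", "LoanDuration", "LoanAmount", "Risk"])
writer.writerows(rows)
feedback_csv = buffer.getvalue()
print(feedback_csv)
```

Because the quality monitor compares the model's prediction for each row against the label column, every feedback row must include the actual outcome.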
After you configure accuracy monitoring, the accuracy check runs after 1 hour. In a production system, the dashboard accumulates feedback data over time; for this tutorial, trigger the accuracy check manually after you add your feedback data so that you can see results in the Insights dashboard right away.
To check the result immediately, from the Insights page, select a deployment, click one of the Quality metrics, and then click Check quality now.