Automating payload logging

Payload logging is automatic between IBM Watson OpenScale and IBM Watson Machine Learning when they are provisioned in the same IBM Cloud account, or for IBM Watson OpenScale for IBM Cloud Pak for Data when both services run in the same cluster. You can also automate payload logging for other machine learning providers, or for an IBM Watson Machine Learning instance that is not in the same account, by using one of the following cases and options:

Automatic payload logging is enabled only for providers whose Environment type is set to Production. Pre-production deployments receive payload data only through manually uploaded test payloads.

Case 1: Keep the original format of the scoring input and output (different from the format that Watson OpenScale requires)

If your applications use an original payload format that cannot be changed, choose one of the following options:

  • Option 1: The custom machine learning engine scoring endpoint accepts both payload formats.

    The endpoint detects whether an incoming request uses the Watson OpenScale (IBM Watson Machine Learning-like) format or the user's format, and returns the output in the corresponding format. If the request is in the user's format, the endpoint converts it to the Watson OpenScale format and stores it as a payload record in the payload logging table. If the scoring input is already in the Watson OpenScale format, the endpoint does not store the payload, because that request comes from Watson OpenScale itself, not from the user. A sketch of this option follows this list.

    For more information, see Using two payload formats.

  • Option 2: If embedding such logic in a single REST API endpoint is not possible, you can define two endpoints.

    The first endpoint is used by your application; on this endpoint, you must add payload logging and convert the payload to the expected format. The second endpoint is used by Watson OpenScale to make the required calculations, such as bias and explainability. No payload logging is required for this endpoint. During Watson OpenScale configuration, point Watson OpenScale to the second endpoint, the one with the compatible format.

    For more information, see Using two endpoints.

  • Option 3: Move the payload logging module to the original endpoint or downstream application.

    If your application supports this option, only one endpoint on the custom machine learning engine needs to be developed: the one for Watson OpenScale.
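
The following sketch illustrates Option 1 as a Flask application. The route path, the user format (a "data" key holding a list of feature dictionaries), and all helper functions are assumptions made for illustration; the Watson OpenScale (IBM Watson Machine Learning-like) format is recognized by its "fields" and "values" keys.

```python
# A minimal sketch of Case 1, Option 1: one scoring endpoint that accepts
# both the user's format and the Watson OpenScale format. All helpers and
# the user format ("data" key) are hypothetical stand-ins.
from flask import Flask, jsonify, request

app = Flask(__name__)

def is_openscale_format(payload):
    # The Watson OpenScale (WML-like) format carries "fields" and "values".
    return isinstance(payload, dict) and "fields" in payload and "values" in payload

def to_openscale_format(user_payload):
    # Hypothetical conversion: the user format is assumed to be a list of
    # feature dictionaries under a "data" key.
    rows = user_payload["data"]
    fields = sorted(rows[0].keys())
    return {"fields": fields, "values": [[row[f] for f in fields] for row in rows]}

def to_user_format(os_response):
    # Hypothetical reverse conversion back to the caller's format.
    return {"data": [dict(zip(os_response["fields"], values))
                     for values in os_response["values"]]}

def score_with_xyz(os_request):
    # Placeholder for the call to the wrapped XYZ deployment.
    return {"fields": ["prediction"], "values": [[0] for _ in os_request["values"]]}

def log_payload_record(os_request, os_response):
    # Placeholder: store the pair as a record in the payload logging table.
    pass

@app.route("/v1/deployments/credit_risk/online", methods=["POST"])
def score():
    payload = request.get_json()
    if is_openscale_format(payload):
        # The request comes from Watson OpenScale itself: score, do NOT log.
        return jsonify(score_with_xyz(payload))
    # The request comes from a downstream app: convert, score, log, and
    # answer in the caller's original format.
    os_request = to_openscale_format(payload)
    os_response = score_with_xyz(os_request)
    log_payload_record(os_request, os_response)
    return jsonify(to_user_format(os_response))
```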

Case 2: Work with the payload format that Watson OpenScale requires

In this case, there is no need to convert the user's scoring input and output format to the format that Watson OpenScale requires.

Because the internal scoring requests that Watson OpenScale makes for its calculations must not be logged as user payload, you must either develop two endpoints or add extra logic to a single endpoint.

  • Option 1: Two endpoints. This option is almost the same as Option 2 in Case 1. The only difference is that no format conversion is needed because the formats are already aligned.

  • Option 2: Single endpoint. The endpoint must detect whether a scoring request comes from the user's application. You can achieve this by sending extra information (metadata) in the scoring payload. If such metadata is detected, the payload is logged. A sketch of this option follows this list.
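
The following sketch illustrates Option 2 for Case 2, again as a Flask application. The "meta" field used as the marker and the scoring and logging helpers are assumptions, not part of the Watson OpenScale payload specification; use whatever metadata convention fits your applications.

```python
# A minimal sketch of Case 2, Option 2: a single endpoint in the Watson
# OpenScale format that logs only requests carrying caller metadata.
from flask import Flask, jsonify, request

app = Flask(__name__)

def score_model(os_request):
    # Placeholder for the actual model scoring call.
    return {"fields": ["prediction"], "values": [[0] for _ in os_request["values"]]}

def log_payload_record(os_request, os_response):
    # Placeholder: store the record in the payload logging table.
    pass

@app.route("/v1/deployments/credit_risk/online", methods=["POST"])
def score():
    payload = request.get_json()
    response = score_model(payload)
    # Downstream apps add a metadata marker ("meta" is a made-up convention);
    # internal requests from Watson OpenScale carry none, so they are scored
    # but never logged.
    if payload.get("meta", {}).get("source") == "user_application":
        log_payload_record(payload, response)
    return jsonify(response)
```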

Using two payload formats

Let's say that I am using the XYZ offering to serve my models, and XYZ is not supported by Watson OpenScale at this stage.

I have many downstream applications consuming my deployments on XYZ, and I cannot change the format of the scoring payload. However, I can modify the scoring endpoint logic.

Steps

  1. Develop a custom machine learning engine that wraps the XYZ deployment.
  2. Implement the scoring endpoint on the custom machine learning engine to support the payload formats for both XYZ and Watson OpenScale.
  3. Add the logic in the scoring endpoint on your custom machine learning engine to detect the format of the payload.
  4. If the payload is coming from downstream apps, convert it to the Watson OpenScale format and log it as payload records in Watson OpenScale.
  5. Switch the downstream apps' scoring endpoints to the new one that you created.

If the URL of the scoring endpoint must remain the same, use URL redirection, which is also known as a proxy, as in the sketch that follows.
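
A minimal proxy sketch, assuming Flask and the requests library; NEW_ENDPOINT_URL and the route path are placeholders for your own URLs:

```python
# A minimal proxy sketch: the original scoring URL stays unchanged and
# forwards every request to the new scoring endpoint.
import requests
from flask import Flask, Response, request

app = Flask(__name__)

# Placeholder for the new scoring endpoint URL.
NEW_ENDPOINT_URL = "https://example.com/v1/deployments/credit_risk/online"

@app.route("/original/score", methods=["POST"])
def proxy_score():
    # Relay the unchanged request body and the upstream answer, so the
    # downstream apps need no changes at all.
    upstream = requests.post(NEW_ENDPOINT_URL, json=request.get_json(), timeout=30)
    return Response(upstream.content, status=upstream.status_code,
                    content_type=upstream.headers.get("Content-Type",
                                                      "application/json"))
```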

Using two endpoints

If the format of your payload cannot be changed, for example, if it would cause your downstream applications to break, you must use separate endpoints. This approach consists of the following components:

  • The original scoring endpoint, which uses the user-defined format for both input and output.
  • A custom machine learning engine that provides a perturbation endpoint (in the Watson OpenScale format) and a discovery endpoint. The perturbation endpoint wraps the original scoring endpoint and converts requests from the Watson OpenScale format to the user's format, and responses from the user's format back to the Watson OpenScale format. This conversion is required for Watson OpenScale to make correct scoring requests and understand the output. A sketch of these two endpoints follows this list.
  • A scoring endpoint wrapper with payload logging capability. Downstream applications consume this wrapper instead of the original scoring endpoint. The wrapper extends the original endpoint's logic with payload logging: each time a scoring request is executed, the input and output are converted to the Watson OpenScale format and logged.
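
The following sketch outlines the discovery and perturbation endpoints as a Flask application. The endpoint paths follow the custom machine learning engine conventions, but the exact discovery response shape shown here is illustrative only; check the REST API endpoints specification for the required fields. ORIGINAL_SCORING_URL and the conversion helpers are assumptions.

```python
# A sketch of the custom machine learning engine with a discovery endpoint
# and a perturbation (scoring) endpoint that wrap the original user-format
# endpoint. ORIGINAL_SCORING_URL and the conversion helpers are hypothetical;
# the discovery response shape is illustrative only.
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

ORIGINAL_SCORING_URL = "https://example.com/xyz/score"  # user-format endpoint

def to_user_format(os_request):
    # Hypothetical conversion from the Watson OpenScale request format.
    return {"data": [dict(zip(os_request["fields"], v))
                     for v in os_request["values"]]}

def to_openscale_format(user_response):
    # Hypothetical conversion of the user-format output.
    rows = user_response["data"]
    fields = sorted(rows[0].keys())
    return {"fields": fields, "values": [[r[f] for f in fields] for r in rows]}

@app.route("/v1/deployments", methods=["GET"])
def discovery():
    # Discovery endpoint: lists the deployments that Watson OpenScale can
    # monitor, including the scoring URL of the perturbation endpoint.
    return jsonify({"count": 1, "resources": [{
        "metadata": {"guid": "credit_risk"},
        "entity": {
            "name": "credit_risk",
            "scoring_url": request.host_url + "v1/deployments/credit_risk/online",
        },
    }]})

@app.route("/v1/deployments/credit_risk/online", methods=["POST"])
def perturbation_scoring():
    # Perturbation endpoint: accepts the Watson OpenScale format, calls the
    # original endpoint in the user's format, converts the output back, and
    # performs no payload logging (these requests come from Watson OpenScale).
    user_request = to_user_format(request.get_json())
    user_response = requests.post(ORIGINAL_SCORING_URL, json=user_request,
                                  timeout=30).json()
    return jsonify(to_openscale_format(user_response))
```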

The following flowchart shows the custom machine learning engine solution in which the custom machine learning engine handles the perturbation and discovery endpoints and transforms the payload to your format:

REST API endpoints specification

To generate the drift detection model, you must use scikit-learn version 0.20.2 in notebooks.
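
For example, you can pin the version in a notebook cell before you train the drift detection model (restart the kernel after the installation):

```python
# Notebook cell: pin scikit-learn to the version that the drift detection
# model generation requires, then restart the kernel.
!pip install scikit-learn==0.20.2
```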

Steps

  1. Use a notebook to create a scoring endpoint for the credit risk model (scikit-learn version 0.20.2) deployment on the Microsoft Azure Machine Learning Service. For more information, see this sample notebook.
  2. Create a custom machine learning engine and deploy it on Microsoft Azure Cloud as a Flask application. Create the perturbation and discovery endpoints.
  3. Configure Watson OpenScale to work with the custom machine learning engine.
  4. Create a scoring endpoint wrapper that automates payload logging on the Microsoft Azure Machine Learning Service, as in the sketch after this list. For more information, see this sample notebook.
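
The following sketch shows the idea behind step 4, assuming the ibm_ai_openscale Python SDK. The Azure scoring URL, the credentials, and the subscription ID are placeholders that you must replace with your own values, and the exact SDK calls can differ between SDK versions.

```python
# A sketch of the scoring endpoint wrapper (step 4), assuming the
# ibm_ai_openscale Python SDK. Credentials, URLs, and the subscription ID
# are placeholders that must be replaced with your own values.
import time

import requests
from ibm_ai_openscale import APIClient
from ibm_ai_openscale.supporting_classes import PayloadRecord

AIOS_CREDENTIALS = {
    "instance_guid": "***",   # placeholder: your Watson OpenScale instance
    "apikey": "***",          # placeholder: your IBM Cloud API key
    "url": "https://api.aiopenscale.cloud.ibm.com",
}
AZURE_SCORING_URL = "https://example.azurewebsites.net/score"  # placeholder
SUBSCRIPTION_UID = "***"      # placeholder: your subscription ID

client = APIClient(AIOS_CREDENTIALS)
subscription = client.data_mart.subscriptions.get(SUBSCRIPTION_UID)

def score(payload):
    # Score against the Azure Machine Learning Service deployment and time
    # the call so that the response time can be logged with the record.
    start = time.time()
    response = requests.post(AZURE_SCORING_URL, json=payload, timeout=30).json()
    elapsed_ms = int((time.time() - start) * 1000)
    # Log the request/response pair as a payload record in Watson OpenScale.
    subscription.payload_logging.store(records=[
        PayloadRecord(request=payload, response=response,
                      response_time=elapsed_ms)
    ])
    return response
```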

Pay special attention to the following items:

  • Credentials: To make the tutorial easier to follow, the Watson OpenScale credentials are hard-coded in the scoring endpoint wrapper code. You must update these values to your actual credentials.
  • Python SDK vs. REST API: The tutorial uses the Python SDK to log the payload. You can also use the REST API, but then you must generate and refresh the authentication token on your own.
  • IBM Cloud vs. IBM Cloud Pak for Data: If you are using IBM Watson OpenScale for IBM Cloud Pak for Data, the credentials are in a different format; see the sample notebook. The Watson OpenScale client class is also different: it uses the APIClient4ICP client class.
  • Payload logging as a separate endpoint or package: Extract the payload logging and conversion methods into a separate package or endpoint. That way, you can reuse them, for example, to inject a batch of payloads outside the scoring endpoint wrapper, as in the sketch that follows.
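
A sketch of that extraction, reusing the subscription object and the PayloadRecord class from the wrapper sketch above; the batch of (request, response) pairs is a hypothetical input.

```python
# A sketch of payload logging extracted into a reusable helper, so that a
# batch of historical request/response pairs can be injected outside the
# scoring endpoint wrapper. `subscription` is the same object as in the
# wrapper sketch above; `pairs` is any iterable of (request, response) tuples.
from ibm_ai_openscale.supporting_classes import PayloadRecord

def log_payload_batch(subscription, pairs):
    # Convert every pair into a PayloadRecord and store them in one call.
    records = [PayloadRecord(request=req, response=resp, response_time=0)
               for req, resp in pairs]
    subscription.payload_logging.store(records=records)

# Example usage (historical_pairs is hypothetical):
# log_payload_batch(subscription, historical_pairs)
```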