You can use your custom machine learning framework to complete payload logging and feedback logging, and to measure performance accuracy, runtime bias detection, explainability, drift detection, and the auto-debias function for model evaluations. The custom machine learning framework must be equivalent to IBM watsonx.ai Runtime.
The following custom machine learning frameworks support model evaluations:
Framework support details

Framework                                Problem type      Data type
Equivalent to IBM watsonx.ai Runtime     Classification    Structured
Equivalent to IBM watsonx.ai Runtime     Regression        Structured
For a model that is not equivalent to IBM watsonx.ai Runtime, you must create a wrapper for the custom model that exposes the required REST API endpoints. You must also bridge the input and output between Watson OpenScale and the actual custom machine learning engine.
When is a custom machine learning engine the best choice for me?
A custom machine learning engine is the best choice when the following situations are true:
You are not using any readily available product to serve your machine learning models. You have your own system to serve your models, and model evaluations do not directly support that function.
The serving engine that you use from a third-party supplier is not yet supported for model evaluations. In this case, consider developing a custom machine learning engine as a wrapper for your original or native deployments.
How it works
The following image shows the custom environment support:
If the input is a tensor or matrix, which is transformed from the input feature space, that model cannot be evaluated. By extension, deep learning models with text or image inputs cannot be handled for bias detection and mitigation.
Additionally, training data must be loaded to support Explainability.
For explainability on text, the full text should be one of the features. Explainability on images for a Custom model is not supported in the current release.
Output criteria for a model to support monitors
Your model must output the input feature vector alongside the prediction probabilities for the classes in that model.
In this example, "personal" and "camping" are the possible classes, and each scoring output assigns scores to both classes. If the prediction probabilities are missing, bias detection still works, but the auto-debias function does not.
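As a sketch, a scoring output of this shape might look like the following. The fields/values layout, field names, and class labels are illustrative assumptions; verify the exact schema against the REST API specification.

```python
# Illustrative scoring output for a classifier with the classes
# "personal" and "camping". The fields/values layout is an assumption
# for illustration; the REST API specification is authoritative.
scoring_output = {
    "fields": ["prediction", "probability"],
    "values": [
        ["camping", [0.93, 0.07]],    # probabilities for [camping, personal]
        ["personal", [0.12, 0.88]],
    ],
}

# Each row carries the predicted class alongside the full probability
# vector, which the auto-debias function needs.
for prediction, probability in scoring_output["values"]:
    assert abs(sum(probability) - 1.0) < 1e-9
```

If the probability column is omitted, bias detection can still run on the predictions, but auto-debias cannot.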
You can access the scoring output from a live scoring endpoint that you can call with the REST API for model evaluations. For CUSTOMML, Amazon SageMaker, and IBM watsonx.ai Runtime, Watson OpenScale directly connects to the native scoring
endpoints.
Custom machine learning engine
A custom machine learning engine provides the infrastructure and hosting capabilities for machine learning models and web applications. Custom machine learning engines that are supported for model evaluations must conform to the following requirements:
Expose two types of REST API endpoints:
discovery endpoint (GET list of deployments and details)
scoring endpoints (online and real-time scoring)
All endpoints must be compatible with the Swagger specification to be supported.
The input payload to, and the output from, the deployment must comply with the JSON format that is described in the specification.
For the endpoint specifications, see the REST API reference.
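For example, a discovery endpoint response that lists deployments might look like the following sketch. The structure (a top-level resources list with per-deployment metadata) and all names in it are illustrative assumptions; the REST API specification is authoritative.

```python
# Hypothetical response body for the discovery endpoint
# (GET list of deployments). Field names and values are illustrative.
discovery_response = {
    "count": 1,
    "resources": [
        {
            "metadata": {
                "guid": "credit-risk-1",              # deployment identifier
                "created_at": "2024-01-15T10:00:00Z",
            },
            "entity": {
                "name": "Credit risk model",
                "asset_properties": {
                    "problem_type": "binary",
                    "input_data_type": "structured",
                },
                "scoring_endpoint": {
                    "url": "https://custom-serve-engine.example.net:8443"
                           "/v1/deployments/credit-risk-1/online",
                },
            },
        }
    ],
}
```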
Adding a custom machine learning engine
You can configure model evaluations to work with a custom machine learning provider by using one of the following methods:
You can add your machine learning provider in the configuration interface, as described in the following sections.
You can also add your machine learning provider by using the Python SDK. You must use this method if you want more than one provider. For more information, see Add your custom machine learning engine.
Your first step to configure model evaluations is to specify a service instance. Your service instance is where you store your AI models and deployments.
Connect your Custom service instance
AI models and deployments are connected in a service instance for model evaluations. To connect your custom service, go to the Configure tab, add a machine learning provider, and click the Edit icon. In addition to a name, a description, and a choice of the Pre-production or Production environment type, you must provide the following information that is specific to this type of service instance:
Username
Password
API endpoint that uses the format https://host:port, such as https://custom-serve-engine.example.net:8443
If you selected the Request the list of deployments tile, enter your credentials and API Endpoint, then save your configuration.
Providing individual scoring endpoints
If you selected the Enter individual scoring endpoints tile, enter your credentials for the API Endpoint, then save your configuration.
You are now ready to select deployed models and configure your monitors. Your deployed models are listed on the Insights dashboard, where you can click Add to dashboard. Select the deployments that you want to monitor and click Configure.
Use the following ideas to set up your own custom machine learning engine.
Python and Flask
You can use Python and Flask to serve a scikit-learn model.
To generate the drift detection model, you must use scikit-learn version 0.20.2 in the notebook.
The app can be deployed locally for testing purposes and as an application on IBM Cloud.
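A minimal sketch of such a wrapper, assuming Flask is installed and using a stub in place of a loaded scikit-learn model. The endpoint paths and the fields/values payload layout are assumptions to be checked against the REST API specification.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(rows):
    # Stub standing in for a trained scikit-learn estimator; in practice
    # you would unpickle your model here and call predict/predict_proba.
    return [("camping", [0.9, 0.1]) for _ in rows]

@app.route("/v1/deployments", methods=["GET"])
def deployments():
    # Discovery endpoint: list the deployments that this wrapper serves.
    return jsonify({
        "count": 1,
        "resources": [{"metadata": {"guid": "demo"},
                       "entity": {"name": "Demo model"}}],
    })

@app.route("/v1/deployments/<deployment_id>/online", methods=["POST"])
def score(deployment_id):
    # Scoring endpoint: bridge the fields/values payload to the model
    # and return predictions with probability vectors.
    payload = request.get_json()
    results = predict(payload["values"])
    return jsonify({
        "fields": ["prediction", "probability"],
        "values": [[label, probs] for label, probs in results],
    })
```

Run the app locally with `app.run(host="0.0.0.0", port=8443)` for testing, or deploy it as an application on IBM Cloud.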
Node.js
You can also find an example of a custom machine learning engine that is written in Node.js here.
End-to-end code pattern
This code pattern shows an end-to-end example of a custom engine deployment and its integration with model evaluations.
Payload logging with the Custom machine learning engine
To configure payload logging for a non-IBM watsonx.ai Runtime or custom machine learning engine, you must bind the ML engine as custom.
Add your Custom machine learning engine
A non-watsonx.ai Runtime engine is added as custom by using metadata; no direct integration with the non-IBM watsonx.ai Runtime service exists. You can add more than one machine learning engine for model evaluations by using the wos_client.service_providers.add method.
To configure security for your custom machine learning engine, you can use IBM Cloud and IBM Cloud Pak for Data as authentication providers for your model evaluations. You can use the https://iam.cloud.ibm.com/identity/token URL
to generate an IAM token for IBM Cloud and use the https://<$hostname>/icp4d-api/v1/authorize URL to generate a token for Cloud Pak for Data.
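For example, an IAM token request for IBM Cloud can be assembled with only the standard library. The sketch below builds the request without sending it; the API key placeholder is illustrative, and the grant type shown is the one IBM Cloud IAM uses for API keys.

```python
import urllib.parse
import urllib.request

# Build (but do not send) an IAM token request for IBM Cloud.
# Replace the placeholder API key with your own credential.
body = urllib.parse.urlencode({
    "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
    "apikey": "<your-api-key>",
}).encode("utf-8")

token_request = urllib.request.Request(
    "https://iam.cloud.ibm.com/identity/token",
    data=body,
    headers={
        "Content-Type": "application/x-www-form-urlencoded",
        "Accept": "application/json",
    },
    method="POST",
)
# Sending this request with urllib.request.urlopen(token_request)
# returns a JSON body that contains an "access_token" field.
```

For Cloud Pak for Data, send your username and password (or API key) to the /icp4d-api/v1/authorize URL instead.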
You can use the POST /v1/deployments/{deployment_id}/online request to implement your scoring API in the following formats:
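The exact formats are defined in the REST API specification. As a hedged sketch, a request and response pair in the fields/values layout might look like the following; the column names, class labels, and values are illustrative.

```python
# Illustrative request body for POST /v1/deployments/{deployment_id}/online.
# Feature column names and row values are placeholders.
scoring_request = {
    "fields": ["gender", "age", "amount"],
    "values": [["male", 32, 5000], ["female", 41, 12000]],
}

# Illustrative response body: one prediction row per input row, including
# the probability vector that monitors such as auto-debias rely on.
scoring_response = {
    "fields": ["prediction", "probability"],
    "values": [["no_risk", [0.87, 0.13]], ["risk", [0.22, 0.78]]],
}

# The engine must return exactly one output row per scored input row.
assert len(scoring_request["values"]) == len(scoring_response["values"])
```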