Last updated: Nov 21, 2024
External machine learning service engines, such as Microsoft Azure ML Studio, Microsoft Azure ML Service, and Amazon SageMaker, can be integrated for model evaluations.
You can use the following methods to integrate third-party engines:
- Engine binding level

  Provides the ability to list deployments and get deployment details.
- Deployment subscription level

  You must score the subscribed deployment in a compatible format, such as the IBM watsonx.ai Runtime format, and receive the output in the same format. You must configure your model evaluations to process both the input and output formats.
- Payload logging

  Each input to and output from the deployed model that is triggered by a user's application must be stored in a payload store. The payload records follow the same format specification that is described for the deployment subscription level.
These records are used during model evaluations to calculate metrics such as fairness and explanations. Transactions that run on the user site cannot be stored automatically, which is one of the ways that proprietary information is safeguarded during model evaluations. Use the REST API or Python SDK to work with secure data.
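As a rough sketch of what such a payload record can look like, the following Python snippet pairs a scoring request and its response in the fields/values shape used by the watsonx.ai Runtime format. The field names (`age`, `income`, `prediction`, `probability`) and the `response_time` value are hypothetical, chosen only for illustration.

```python
# Illustrative scoring request in a fields/values shape: each row in
# "values" lists one value per entry in "fields". Field names are made up.
scoring_request = {
    "fields": ["age", "income"],
    "values": [[34, 52000], [51, 87000]],
}

# Illustrative scoring response in the same fields/values shape.
scoring_response = {
    "fields": ["prediction", "probability"],
    "values": [["approved", 0.82], ["denied", 0.64]],
}

# A payload record stores the request/response pair (plus any timing
# metadata) so that model evaluations can later compute metrics such as
# fairness and explanations from the logged transactions.
payload_record = {
    "request": scoring_request,
    "response": scoring_response,
    "response_time": 120,  # milliseconds, illustrative value
}
```

Because the input and output share one shape convention, the evaluation layer can align each request row with its corresponding response row when computing metrics.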
Steps to implement this solution
- Learn about custom machine learning engines.
- Set up payload logging.
- Set up a custom machine learning engine by using one of these Custom machine learning engine examples.
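To illustrate what a custom engine's scoring step must do, here is a minimal, self-contained sketch of a scoring function that accepts input and returns output in the same fields/values shape. The field names and the threshold rule are hypothetical; a real engine would call its own model here.

```python
def score(request_payload):
    """Toy scoring function for a custom engine deployment.

    Accepts a request in a fields/values shape and returns predictions
    in the same shape, so both sides can be processed by model
    evaluations. The income-threshold rule is purely illustrative.
    """
    fields = request_payload["fields"]
    rows = request_payload["values"]
    income_idx = fields.index("income")  # hypothetical feature column

    predictions = []
    for row in rows:
        label = "approved" if row[income_idx] >= 60000 else "denied"
        predictions.append([label])

    # Response mirrors the request structure: fields plus one row per input.
    return {"fields": ["prediction"], "values": predictions}


result = score({"fields": ["age", "income"],
                "values": [[34, 52000], [51, 87000]]})
```

In a real deployment, this logic would sit behind the engine's scoring endpoint, and each request/response pair would also be written to the payload store described above.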
Parent topic: Supported machine learning engines, frameworks, and models