You can use Amazon SageMaker with model evaluations to log payload and feedback data and to measure performance accuracy, bias detection, explainability, and the auto-debias function.
The following Amazon SageMaker frameworks are supported for model evaluations:
Table 1. Framework support details

| Framework | Problem type   | Data type  |
|-----------|----------------|------------|
| Native    | Classification | Structured |
| Native    | Regression¹    | Structured |

¹ Support for regression models does not include drift magnitude.
Adding Amazon SageMaker
You can configure model evaluations to work with Amazon SageMaker by using one of the following methods:
The first time that you add a machine learning provider, you can use the configuration interface. For more information, see Specifying an Amazon SageMaker instance.
You can also add your machine learning provider by using the Python SDK. You must use this method if you want to have more than one provider. For more information, see Add your Amazon SageMaker machine learning engine.
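If you use the Python SDK, adding the provider is a single service_providers.add request per instance. The following is a minimal sketch, assuming the ibm-watson-openscale package; the API key, provider name, and credential values are placeholders, and the exact class and argument names should be verified against the SDK reference for your release.

```python
# Sketch: add Amazon SageMaker as a machine learning provider with the
# ibm-watson-openscale Python SDK. Names follow the SDK's usual pattern
# but should be checked against your installed version.
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson_openscale import APIClient
from ibm_watson_openscale.supporting_classes.enums import ServiceTypes
from ibm_watson_openscale.base_classes.watson_open_scale_v2 import SageMakerCredentials

authenticator = IAMAuthenticator(apikey="<IBM_CLOUD_API_KEY>")  # placeholder
wos_client = APIClient(authenticator=authenticator)

# One provider per call; repeat with a different name to register
# more than one Amazon SageMaker instance.
added_provider = wos_client.service_providers.add(
    name="SageMaker production engine",               # placeholder name
    service_type=ServiceTypes.AMAZON_SAGEMAKER,
    credentials=SageMakerCredentials(
        access_key_id="<AWS_ACCESS_KEY_ID>",           # aws_access_key_id
        secret_access_key="<AWS_SECRET_ACCESS_KEY>",   # aws_secret_access_key
        region="us-east-1",                            # region where the keys were created
    ),
    background_mode=False,
).result
print(added_provider.metadata.id)
```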
Sample Notebooks
The following notebooks show how to work with Amazon SageMaker:
Specifying an Amazon SageMaker ML service instance
Your first step in configuring model evaluations is to specify an Amazon SageMaker service instance, which is where your AI models and deployments are stored.
Connect your Amazon SageMaker service instance
Your AI models and deployments are stored in an Amazon SageMaker service instance. To connect your service instance, go to the Configure tab, add a machine learning provider, and click the Edit icon. In addition to a name and description, and whether the environment is Pre-production or Production, you must provide the following information that is specific to this type of service instance:
Access Key ID: Your AWS access key ID, aws_access_key_id, which verifies who you are and authenticates and authorizes calls that you make to AWS.
Secret Access Key: Your AWS secret access key, aws_secret_access_key, which is required to verify who you are and to authenticate and authorize calls that you make to AWS.
Region: Enter the region where your Access Key ID was created. Keys are stored and used in the region in which they were created and cannot be transferred to another region.
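Before you save the configuration, you can optionally confirm that the key pair and region are valid from a notebook or terminal. The following sketch uses the standard AWS SDK for Python (boto3); the key values shown are placeholders.

```python
# Optional check: confirm the AWS key pair and region before entering
# them in the configuration dialog.
import boto3

session = boto3.Session(
    aws_access_key_id="<AWS_ACCESS_KEY_ID>",          # placeholder
    aws_secret_access_key="<AWS_SECRET_ACCESS_KEY>",  # placeholder
    region_name="us-east-1",                          # region where the key was created
)

# A successful call returns the account and ARN behind the key pair.
identity = session.client("sts").get_caller_identity()
print(identity["Account"], identity["Arn"])

# List the SageMaker endpoints that are visible to this key pair.
endpoints = session.client("sagemaker").list_endpoints()
for endpoint in endpoints["Endpoints"]:
    print(endpoint["EndpointName"], endpoint["EndpointStatus"])
```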
You are now ready to select deployed models and configure your monitors. Your deployed models are listed on the Insights dashboard, where you can click Add to dashboard. Select the deployments that you want to monitor and click Configure.
Payload logging with the Amazon SageMaker machine learning engine
Add your Amazon SageMaker machine learning engine
A non-IBM watsonx.ai Runtime engine, such as Amazon SageMaker, is bound as Custom by using metadata. No direct integration with the service is possible.
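With the engine bound, payload logging is explicit: you score the SageMaker endpoint and forward the request and response to the payload logging data set. The following sketch scores an endpoint with boto3 and stores one record with the Python SDK; the endpoint name, input row, and subscription lookup are illustrative, wos_client and subscription_id are assumed to exist from the earlier setup, and the PayloadRecord and data set calls follow the SDK's usual pattern and should be verified against your release.

```python
# Sketch: score a SageMaker endpoint and log the request/response pair
# for model evaluations. Endpoint name and field names are illustrative.
import json
import time
import boto3
from ibm_watson_openscale.supporting_classes.enums import DataSetTypes, TargetTypes
from ibm_watson_openscale.supporting_classes.payload_record import PayloadRecord

runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

request_body = {"fields": ["age", "income"], "values": [[42, 50000]]}  # illustrative
start = time.time()
raw_response = runtime.invoke_endpoint(
    EndpointName="my-sagemaker-endpoint",   # illustrative endpoint name
    ContentType="application/json",
    Body=json.dumps(request_body),
)
response_body = json.loads(raw_response["Body"].read())
elapsed_ms = int((time.time() - start) * 1000)

# Look up the payload logging data set for the monitored subscription,
# then store the scored record. wos_client is an authenticated APIClient
# and subscription_id identifies the SageMaker deployment subscription.
payload_data_set_id = wos_client.data_sets.list(
    type=DataSetTypes.PAYLOAD_LOGGING,
    target_target_id=subscription_id,
    target_target_type=TargetTypes.SUBSCRIPTION,
).result.data_sets[0].metadata.id

wos_client.data_sets.store_records(
    data_set_id=payload_data_set_id,
    request_body=[
        PayloadRecord(
            request=request_body,
            response=response_body,
            response_time=elapsed_ms,
        )
    ],
)
```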