For Amazon SageMaker models, you can use model evaluations to log payload and feedback data, and to measure performance accuracy, bias detection, explainability, and the auto-debias function.
The following Amazon SageMaker frameworks are supported for model evaluations:
Table 1. Framework support details

Framework | Problem type | Data type
---|---|---
Native | Classification | Structured
Native | Regression¹ | Structured

¹ Support for regression models does not include drift magnitude.
Adding Amazon SageMaker
You can configure model evaluations to work with Amazon SageMaker by using one of the following methods:
- The first time that you add a machine learning provider, you can use the configuration interface. For more information, see Specifying an Amazon SageMaker ML service instance.
- You can also add your machine learning provider by using the Python SDK. You must use this method if you want to have more than one provider. For more information, see Add your Amazon SageMaker machine learning engine. A minimal client-setup sketch follows this list.
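The SDK examples in this topic assume an authenticated ibm-watson-openscale client named `wos_client`. The following is a minimal setup sketch, not the only supported method; the API key value is a placeholder that you must replace with your own credential:

```python
# Minimal sketch: create the wos_client that the SDK examples in this topic assume.
# The API key is a placeholder, not a real credential.
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson_openscale import APIClient

authenticator = IAMAuthenticator(apikey='<your IBM Cloud API key>')
wos_client = APIClient(authenticator=authenticator)
```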
Sample notebooks
The following notebooks show how to work with Amazon SageMaker:
- Working with SageMaker machine learning Engine
Specifying an Amazon SageMaker ML service instance
Your first step in configuring model evaluations is to specify an Amazon SageMaker service instance, which is where you store your AI models and deployments.
Connect your Amazon SageMaker service instance
Model evaluations connect to the AI models and deployments in an Amazon SageMaker service instance. To connect your service, go to the Configure tab, add a machine learning provider, and click the Edit icon. In addition to a name and description, and whether the environment is Pre-production or Production, you must provide the following information that is specific to this type of service instance:
- Access Key ID: Your AWS access key ID, `aws_access_key_id`, which verifies who you are and authenticates and authorizes calls that you make to AWS.
- Secret Access Key: Your AWS secret access key, `aws_secret_access_key`, which is required to verify who you are and to authenticate and authorize calls that you make to AWS.
- Region: Enter the region where your Access Key ID was created. Keys are stored and used in the region in which they were created and cannot be transferred to another region. A quick credential check with boto3 is shown after this list.
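Before you connect the provider, you can optionally confirm that the key pair and region are valid by listing your SageMaker endpoints with boto3. This is only a sketch; the credential and region values are placeholders:

```python
# Optional sketch: verify AWS credentials and region with boto3.
# All credential values are placeholders, not real keys.
import boto3

sagemaker = boto3.client(
    'sagemaker',
    aws_access_key_id='<aws_access_key_id>',
    aws_secret_access_key='<aws_secret_access_key>',
    region_name='us-east-1',  # assumed region; use the region where your key was created
)

# Print the names of deployed endpoints that can be monitored.
for endpoint in sagemaker.list_endpoints()['Endpoints']:
    print(endpoint['EndpointName'])
```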
You are now ready to select deployed models and configure your monitors. Your deployed models are listed on the Insights dashboard, where you can click Add to dashboard. Select the deployments that you want to monitor and click Configure.
For more information, see Configure monitors.
Payload logging with the Amazon SageMaker machine learning engine
Add your Amazon SageMaker machine learning engine
A non-IBM watsonx.ai Runtime engine is bound as Custom by using metadata; no direct integration with the non-IBM watsonx.ai Runtime service is possible.
```python
# Assumed import paths for the ibm-watson-openscale SDK
from ibm_watson_openscale.supporting_classes.enums import ServiceTypes
from ibm_watson_openscale.base_classes.watson_open_scale_v2 import SageMakerCredentials

SAGEMAKER_ENGINE_CREDENTIALS = {
    'access_key_id': '',
    'secret_access_key': '',
    'region': ''
}

added_service_provider_result = wos_client.service_providers.add(
    name="AWS",
    description="AWS Service Provider",
    service_type=ServiceTypes.AMAZON_SAGEMAKER,
    credentials=SageMakerCredentials(
        access_key_id=SAGEMAKER_ENGINE_CREDENTIALS['access_key_id'],
        secret_access_key=SAGEMAKER_ENGINE_CREDENTIALS['secret_access_key'],
        region=SAGEMAKER_ENGINE_CREDENTIALS['region']
    ),
    background_mode=False
).result
```
To see your service provider, run the following code:

```python
wos_client.service_providers.list()
```
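The subscription code later in this topic uses `data_mart_id` and `service_provider_id` values that are not defined on this page. Assuming that the `add()` result was assigned to `added_service_provider_result` as shown above, and that you have a single data mart, one way to obtain both values is:

```python
# Sketch: derive the IDs that the subscription step below expects.
# Assumes one data mart and the add() result assigned above.
data_mart_id = wos_client.data_marts.list().result.data_marts[0].metadata.id
service_provider_id = added_service_provider_result.metadata.id
print(data_mart_id, service_provider_id)
```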
Add an Amazon SageMaker ML subscription
To add the subscription, run the following code:
```python
# Assumed import paths for the ibm-watson-openscale SDK
from ibm_watson_openscale.base_classes.watson_open_scale_v2 import (
    Asset, AssetDeploymentRequest, AssetPropertiesRequest, ScoringEndpointRequest
)
from ibm_watson_openscale.supporting_classes.enums import ProblemType, InputDataType

asset_deployment_details = wos_client.service_providers.list_assets(
    data_mart_id=data_mart_id,
    service_provider_id=service_provider_id
).result
asset_deployment_details

deployment_id = '684e35eee8a479470cee05983e1f9d64'

# Find the asset details that match the deployment ID
for model_asset_details in asset_deployment_details['resources']:
    if model_asset_details['metadata']['guid'] == deployment_id:
        break

aws_asset = Asset(
    asset_id=model_asset_details['entity']['asset']['asset_id'],
    name=model_asset_details['entity']['asset']['name'],
    url=model_asset_details['entity']['asset']['url'],
    asset_type=model_asset_details['entity']['asset']['asset_type'] if 'asset_type' in model_asset_details['entity']['asset'] else 'model',
    problem_type=ProblemType.BINARY_CLASSIFICATION,
    input_data_type=InputDataType.STRUCTURED,
)

# Build the deployment request from the deployment's scoring endpoint
deployment_scoring_endpoint = model_asset_details['entity']['scoring_endpoint']
scoring_endpoint = ScoringEndpointRequest(url=model_asset_details['entity']['scoring_endpoint']['url'])

deployment = AssetDeploymentRequest(
    deployment_id=model_asset_details['metadata']['guid'],
    url=model_asset_details['metadata']['url'],
    name=model_asset_details['entity']['name'],
    deployment_type=model_asset_details['entity']['type'],
    scoring_endpoint=scoring_endpoint
)
```
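The asset properties in the next step reference `training_data_reference`, `feature_columns`, and `categorical_columns`, which are not defined in this topic. The following sketch shows plausible placeholder definitions, assuming the German Credit Risk sample model that the `Risk` label column suggests; your own column lists and training data location will differ:

```python
# Sketch only: assumed column lists for a German Credit Risk style sample model.
feature_columns = ['CheckingStatus', 'LoanDuration', 'CreditHistory', 'LoanPurpose', 'LoanAmount']
categorical_columns = ['CheckingStatus', 'CreditHistory', 'LoanPurpose']

# Placeholder: point this at your real training data; see the SDK documentation
# for the training data reference classes that your storage requires.
training_data_reference = None
```

With these values defined, create the asset properties and add the subscription: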
```python
asset_properties = AssetPropertiesRequest(
    label_column="Risk",
    prediction_field='predicted_label',
    probability_fields=['score'],
    training_data_reference=training_data_reference,
    training_data_schema=None,
    input_data_schema=None,
    output_data_schema=None,
    feature_fields=feature_columns,
    categorical_fields=categorical_columns
)

subscription_details = wos_client.subscriptions.add(
    data_mart_id=data_mart_id,
    service_provider_id=service_provider_id,
    asset=aws_asset,
    deployment=deployment,
    asset_properties=asset_properties,
    background_mode=False
).result
```
To get the subscription ID and details, run the following commands:

```python
subscription_id = subscription_details.metadata.id
subscription_id

details = wos_client.subscriptions.get(subscription_id).result.to_dict()
```
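The payload logging step later in this topic uses a `payload_data_set_id` value. Assuming the enums from `ibm_watson_openscale.supporting_classes.enums`, one way to look it up for the new subscription is the following sketch:

```python
# Sketch: look up the payload logging data set for the new subscription.
from ibm_watson_openscale.supporting_classes.enums import DataSetTypes, TargetTypes

payload_data_set_id = wos_client.data_sets.list(
    type=DataSetTypes.PAYLOAD_LOGGING,
    target_target_id=subscription_id,
    target_target_type=TargetTypes.SUBSCRIPTION
).result.data_sets[0].metadata.id
print(payload_data_set_id)
```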
Enable payload logging
To enable payload logging, first format the scoring request that you want to log:

```python
request_data = {'fields': feature_columns,
                'values': [[payload_values]]}
```
Then format the scoring response that you want to log:

```python
response_data = {'fields': list(result['predictions'][0]),
                 'values': [list(x.values()) for x in result['predictions']]}
```
Scoring and payload logging
- Score your model. For a full example, see the Working with SageMaker machine learning Engine Notebook. A minimal boto3 scoring sketch follows.
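As an illustration of the scoring step, the following sketch invokes a SageMaker endpoint with boto3 and builds the `result` dictionary that the response formatting above assumes. The endpoint name, content type, and response shape are assumptions; check your own model's contract:

```python
# Sketch: score a SageMaker endpoint with boto3; names and formats are assumed.
import json
import boto3

runtime = boto3.client('sagemaker-runtime', region_name='us-east-1')  # assumed region

response = runtime.invoke_endpoint(
    EndpointName='<your-endpoint-name>',  # placeholder
    ContentType='application/json',       # assumed content type
    Body=json.dumps(request_data)
)

# Assumes the model returns JSON with a 'predictions' list, matching the
# response_data formatting shown earlier.
result = json.loads(response['Body'].read())
```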
To store the request and response in the payload logging table, run the following code:

```python
import uuid

# Assumed import path for the ibm-watson-openscale SDK
from ibm_watson_openscale.supporting_classes.payload_record import PayloadRecord

wos_client.data_sets.store_records(data_set_id=payload_data_set_id, request_body=[PayloadRecord(
    scoring_id=str(uuid.uuid4()),
    request=request_data,
    response=response_data,
    response_time=460
)])
```
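To confirm that the records arrived, you can check the record count, assuming the same client and data set ID as above:

```python
# Check how many payload records were stored.
print(wos_client.data_sets.get_records_count(data_set_id=payload_data_set_id))
```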
For languages other than Python, you can also log the payload by using a REST API.
Parent topic: Supported machine learning engines, frameworks, and models