You can use Microsoft Azure ML Service with model evaluations to perform payload logging and feedback logging, and to measure performance accuracy, runtime bias detection, explainability, and the auto-debias function.
The following Microsoft Azure Machine Learning Service frameworks are supported for model evaluations:
Table 1. Framework support details

Framework    | Problem type   | Data type
------------ | -------------- | ----------
Native       | Classification | Structured
scikit-learn | Classification | Structured
scikit-learn | Regression     | Structured
To generate the drift detection model, you must use scikit-learn version 0.20.2 in Notebooks.
Use notebooks to configure model evaluations for Azure Automated Machine Learning managed endpoints and Azure models.
Adding Microsoft Azure ML Service
You can configure model evaluations to work with Microsoft Azure ML Service by using one of the following methods:
- Use the UI to add your machine learning provider.
- Use the Python SDK to add your machine learning provider. You must use this method if you want to have more than one provider. For more information, see Add your Microsoft Azure machine learning engine.
Model evaluations call various REST endpoints that are needed to interact with Azure ML Service. To enable these calls, you must bind the Azure Machine Learning Service:
1. Create an Azure Active Directory Service Principal.
2. Specify the credential details when you add the Azure ML Service binding, either through the UI or the Python SDK.
Requirements for JSON request and response files
For model evaluations to work with Azure ML Service, the web service deployments that you create must accept JSON requests and return JSON responses, according to the following requirements.
Required web service JSON request format
The REST API request body must be a JSON document that meets the following requirements:
- It contains one JSON array of JSON objects.
- The JSON array must be named "input".
- Each JSON object can include only simple key-value pairs, where the values can be a string, a number, true, false, or null.
- The values cannot be a JSON object or array.
- Every JSON object in the array must have the same keys (and therefore the same number of keys), regardless of whether a non-null value is available for each key.
The following sample JSON file meets the preceding requirements and can be used as a template for creating your own JSON request files:
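For illustration, a minimal request of this shape might look like the sketch below. The feature names and values are hypothetical placeholders, not taken from any particular model; note that both objects share identical keys and every value is a scalar, as the requirements demand:

```json
{
  "input": [
    {
      "CheckingStatus": "0_to_200",
      "LoanDuration": 31,
      "LoanAmount": 8430,
      "Age": 39
    },
    {
      "CheckingStatus": "no_checking",
      "LoanDuration": 13,
      "LoanAmount": 1343,
      "Age": 28
    }
  ]
}
```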
Make note of the following items when you create a JSON response file:
The REST API response body must be a JSON document that meets the following requirements:
- It contains one JSON array of JSON objects.
- The JSON array must be named "output".
- Each JSON object can include only key-value pairs, where the values can be a string, a number, true, false, null, or an array that does not contain any other JSON objects or arrays.
- The values cannot be a JSON object.
- Every JSON object in the array must have the same keys (and the same number of keys), regardless of whether a non-null value is available for each key.
- For classification models, the web service must return an array of probabilities for each class, and the ordering of the probabilities must be consistent for each JSON object in the array.
Example: suppose that you have a binary classification model that predicts credit risk, where the classes are Risk and No Risk. For every result returned in the "output" array, each object must contain a key-value pair that includes the probabilities in a fixed order, in the following form:

{"output": [
  {"Scored Probabilities": [<"Risk" probability>, <"No Risk" probability>]},
  {"Scored Probabilities": [<"Risk" probability>, <"No Risk" probability>]}
]}
To be consistent with the Azure ML visual tools that are used in both Azure ML Studio and Azure ML Service, use the following key names:
- "Scored Labels" for the output key that denotes the predicted value of the model
- "Scored Probabilities" for the output key that denotes the array of probabilities for each class
The following sample JSON file meets the preceding requirements and can be used as a template for creating your own JSON response files:
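For illustration, a sketch of a response for the credit-risk example above might look like the following. The label and probability values are hypothetical, and the probabilities are listed in the fixed order [Risk, No Risk] for every object:

```json
{
  "output": [
    {
      "Scored Labels": "No Risk",
      "Scored Probabilities": [0.18, 0.82]
    },
    {
      "Scored Labels": "Risk",
      "Scored Probabilities": [0.73, 0.27]
    }
  ]
}
```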
Your first step to configure model evaluations is to specify a Microsoft Azure ML Service instance. Your Azure ML Service instance is where you store your AI models and deployments.
For model evaluations, AI models and deployments are connected in an Azure ML Service instance. To connect your service, go to the Configure tab, add a machine learning provider, and click the Edit icon. In addition to a name and description, and whether the environment is Pre-production or Production, you must provide the following information:
Client ID: The actual string value of your client ID, which verifies who you are and authenticates and authorizes calls that you make to Azure Service.
Client Secret: The actual string value of the secret, which verifies who you are and authenticates and authorizes calls that you make to Azure Service.
Tenant: Your tenant ID corresponds to your organization and is a dedicated instance of Azure AD. To find the tenant ID, hover over your account name to get the directory and tenant ID, or select Azure Active Directory > Properties >
Directory ID in the Azure portal.
Subscription ID: Subscription credentials that uniquely identify your Microsoft Azure subscription. The subscription ID forms part of the URI for every service call.
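Collected together, the binding details form a credential set like the following sketch. All values are placeholders, and the exact field labels depend on whether you enter them through the UI or the Python SDK:

```json
{
  "client_id": "<your-client-id>",
  "client_secret": "<your-client-secret>",
  "tenant": "<your-tenant-id>",
  "subscription_id": "<your-subscription-id>"
}
```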
You are now ready to select deployed models and configure your monitors. Your deployed models appear on the Insights dashboard, where you can click Add to dashboard. Select the deployments that you want to monitor and click Configure.
Payload logging with the Microsoft Azure ML Service engine
Add your Microsoft Azure ML Service engine
A non-IBM watsonx.ai Runtime engine is bound as Custom, and consists of metadata. There is no direct integration with the non-IBM watsonx.ai Runtime service.