Microsoft Azure ML Service frameworks

You can use Microsoft Azure ML Service with the IBM Watson OpenScale service to perform payload logging and feedback logging, and to measure performance accuracy, runtime bias detection, explainability, and the auto-debias function.

IBM Watson OpenScale fully supports the following Microsoft Azure Machine Learning Service frameworks:

Table 1. Framework support details

| Framework    | Problem type   | Data type  |
|--------------|----------------|------------|
| Native       | Classification | Structured |
| scikit-learn | Classification | Structured |
| scikit-learn | Regression     | Structured |

To generate the drift detection model, you must use scikit-learn version 0.20.2 in Notebooks.

Adding Microsoft Azure ML Service to Watson OpenScale

You can configure Watson OpenScale to work with Microsoft Azure ML Service as follows:

Watson OpenScale calls various REST endpoints that are needed to interact with the Azure ML Service. To do this, you must bind the Azure Machine Learning Service to Watson OpenScale:

  1. Create an Azure Active Directory Service Principal.
  2. Specify the credential details when you add the Azure ML Service service binding, either through the UI or the Watson OpenScale Python SDK.

Requirements for JSON request and response files

For Watson OpenScale to work with Azure ML Service, the web service deployments that you create must accept JSON requests and return JSON responses that meet the following requirements.

Required web service JSON request format

  • The REST API request body must be a JSON document that contains one JSON array of JSON objects.
  • The JSON array must be named "input".
  • Each JSON object can include only simple key-value pairs, where each value can be a string, a number, true, false, or null.
  • A value cannot be a JSON object or array.
  • Every JSON object in the array must have the same keys (and therefore the same number of keys), regardless of whether a non-null value is available.

The following sample JSON file meets the preceding requirements and can be used as a template for creating your own JSON request files:

{
  "input": [
    {
      "field1": "value1",
      "field2": 31,
      "field3": true,
      "field4": 2.3
    },
    {
      "field1": "value2",
      "field2": 15,
      "field3": false,
      "field4": 0.1
    },
    {
      "field1": null,
      "field2": 5,
      "field3": true,
      "field4": 6.1
    }
  ]
}
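Before deploying, you can check a request document against the rules above programmatically. The following sketch is illustrative only: `validate_request` is a hypothetical helper, not part of the Watson OpenScale SDK or the Azure ML API.

```python
import json

def validate_request(doc):
    """Check a request document against the Watson OpenScale request rules.
    Returns a list of problems found; an empty list means the document is valid."""
    problems = []
    rows = doc.get("input")
    if not isinstance(rows, list):
        return ['the request body must contain a JSON array named "input"']
    allowed = (str, int, float, bool, type(None))  # simple JSON values only
    keys = None
    for i, row in enumerate(rows):
        if not isinstance(row, dict):
            problems.append(f"element {i} is not a JSON object")
            continue
        if keys is None:
            keys = set(row)          # every later object must match these keys
        elif set(row) != keys:
            problems.append(f"element {i} does not have the same keys as element 0")
        for k, v in row.items():
            if not isinstance(v, allowed):
                problems.append(
                    f'value of "{k}" in element {i} must be a string, number, '
                    "true, false, or null")
    return problems

sample = json.loads('{"input": [{"f1": "a", "f2": 1}, {"f1": null, "f2": 2}]}')
print(validate_request(sample))  # []
```

A nested object or an array as a field value, or a row with different keys, each produce an entry in the returned problem list.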

Required web service JSON response format

Make note of the following items when you create a JSON response file:

  • The REST API response body must be a JSON document that contains one JSON array of JSON objects.

  • The JSON array must be named "output".

  • Each JSON object can include only key-value pairs, where each value can be a string, a number, true, false, null, or an array that does not contain any other JSON objects or arrays.

  • A value cannot be a JSON object.

  • Every JSON object in the array must have the same keys (and the same number of keys), regardless of whether a non-null value is available.

  • For classification models, the web service must return an array of probabilities for each class, and the ordering of the probabilities must be consistent for each JSON object in the array.

    • Example: suppose you have a binary classification model that predicts credit risk, where the classes are Risk and No Risk.
    • For every result returned in the "output" array, the objects must contain a key-value pair that includes the probabilities in fixed order, in the form:

    {
      "output": [
        {
          "Scored Probabilities": [<"Risk" probability>, <"No Risk" probability>]
        },
        {
          "Scored Probabilities": [<"Risk" probability>, <"No Risk" probability>]
        }
      ]
    }


To be consistent with the Azure ML visual tools that are used in both Azure ML Studio and Azure ML Service, use the following key names:

  • The key name "Scored Labels" for the output key that denotes the predicted value of the model.
  • The key name "Scored Probabilities" for the output key that denotes an array of probabilities for each class.

The following sample JSON file meets the preceding requirements and can be used as a template for creating your own JSON response files:

{
  "output": [
    {
      "Scored Labels": "No Risk",
      "Scored Probabilities": [
        0.8922524675865824,
        0.10774753241341757
      ]
    },
    {
      "Scored Labels": "No Risk",
      "Scored Probabilities": [
        0.8335192848546905,
        0.1664807151453095
      ]
    }
  ]
}
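For reference, this response shape is typically produced by the run() function of an Azure ML Service entry script. The sketch below is illustrative only: the FakeModel class, the field names, and the class order in CLASS_NAMES are assumptions, not part of the Watson OpenScale or Azure ML APIs.

```python
import json

# Illustrative stand-in for a trained binary classifier; in a real entry
# script you would load the model once inside init(), e.g. with joblib.
class FakeModel:
    def predict_proba(self, rows):
        return [[0.89, 0.11] for _ in rows]

model = FakeModel()
CLASS_NAMES = ["Risk", "No Risk"]  # assumed fixed class order

def run(raw_data):
    """Score an "input" array and return an "output" array in the
    format that Watson OpenScale expects."""
    rows = json.loads(raw_data)["input"]
    output = []
    for probs in model.predict_proba(rows):
        # "Scored Labels" holds the predicted class; "Scored Probabilities"
        # keeps the same class ordering for every object in the array.
        label = CLASS_NAMES[probs.index(max(probs))]
        output.append({"Scored Labels": label, "Scored Probabilities": probs})
    return {"output": output}
```

Because the probabilities are emitted in the same CLASS_NAMES order for every row, the ordering requirement above is satisfied by construction.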

Sample Notebooks

The following Notebook shows how to work with Microsoft Azure ML Service:

  • Working with Azure Machine Learning Service Engine

Specifying a Microsoft Azure ML Service instance

Your first step in the Watson OpenScale tool is to specify a Microsoft Azure ML Service instance. Your Azure ML Service instance is where you store your AI models and deployments.

Watson OpenScale connects to AI models and deployments in an Azure ML Service instance. To connect your service to Watson OpenScale, go to the Configure tab, add a machine learning provider, and click the Edit icon. In addition to a name and description and whether the environment is Pre-production or Production, you must provide the following information:

  • Client ID: The actual string value of your client ID, which verifies who you are and authenticates and authorizes calls that you make to Azure Service.
  • Client Secret: The actual string value of the secret, which verifies who you are and authenticates and authorizes calls that you make to Azure Service.
  • Tenant: Your tenant ID corresponds to your organization and is a dedicated instance of Azure AD. To find the tenant ID, hover over your account name to get the directory and tenant ID, or select Azure Active Directory > Properties > Directory ID in the Azure portal.
  • Subscription ID: Subscription credentials that uniquely identify your Microsoft Azure subscription. The subscription ID forms part of the URI for every service call.

See How to: Use the portal to create an Azure AD application and service principal that can access resources for instructions about how to get your Microsoft Azure credentials.
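Once the service principal exists, you can collect the four credential values in one place. A minimal sketch with placeholder values; the name AZURE_ENGINE_CREDENTIALS matches the dictionary that the SDK examples later in this topic read from:

```python
# Placeholder values: replace each field with the credentials of your
# Azure Active Directory service principal and Azure subscription.
AZURE_ENGINE_CREDENTIALS = {
    "client_id": "<your-service-principal-application-id>",
    "client_secret": "<your-service-principal-secret>",
    "tenant": "<your-azure-ad-tenant-id>",
    "subscription_id": "<your-azure-subscription-id>",
}
```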

You are now ready to select deployed models and configure your monitors. Watson OpenScale lists your deployed models on the Insights dashboard where you can click Add to dashboard. Select the deployments that you want to monitor and click Configure.

For more information, see Configure monitors.

Payload logging with the Microsoft Azure ML Service engine

Add your Microsoft Azure ML Service engine

A non-IBM Watson Machine Learning engine is bound as Custom and consists of metadata only; there is no direct integration with the non-IBM Watson Machine Learning service.

service_type = "azure_machine_learning_service"
added_service_provider_result = wos_client.service_providers.add(
    name=SERVICE_PROVIDER_NAME,
    description=SERVICE_PROVIDER_DESCRIPTION,
    service_type=service_type,
    credentials=AzureCredentials(
        subscription_id=AZURE_ENGINE_CREDENTIALS['subscription_id'],
        client_id=AZURE_ENGINE_CREDENTIALS['client_id'],
        client_secret=AZURE_ENGINE_CREDENTIALS['client_secret'],
        tenant=AZURE_ENGINE_CREDENTIALS['tenant']
    ),
    background_mode=False
).result

You can see your service binding with the following command:

wos_client.service_providers.list()

The sample output:

uid                                    name                         service_type                    created
410e730f-8462-45fe-8b41-a029d6d6043a   My Azure ML Service engine   azure_machine_learning_service  2019-06-10T22:10:29.398Z

Add Microsoft Azure ML Service subscription

Add subscription

asset_deployment_details = wos_client.service_providers.list_assets(data_mart_id=data_mart_id, service_provider_id=service_provider_id).result
asset_deployment_details
 
deployment_id = ''  # set to the GUID of the deployment that you want to monitor
for model_asset_details in asset_deployment_details['resources']:
    if model_asset_details['metadata']['guid'] == deployment_id:
        break
 
azure_asset = Asset(
            asset_id=model_asset_details["entity"]["asset"]["asset_id"],
            name=model_asset_details["entity"]["asset"]["name"],
            url=model_asset_details["entity"]["asset"]["url"],
            asset_type=model_asset_details['entity']['asset']['asset_type'] if 'asset_type' in model_asset_details['entity']['asset'] else 'model',
            input_data_type=InputDataType.STRUCTURED,
            problem_type=ProblemType.BINARY_CLASSIFICATION
        )
 
deployment_scoring_endpoint = model_asset_details['entity']['scoring_endpoint']
scoring_endpoint = ScoringEndpointRequest(
    url=deployment_scoring_endpoint['url'],
    request_headers=deployment_scoring_endpoint['request_headers'],
    credentials=None
)
 
deployment = AssetDeploymentRequest(
    deployment_id=model_asset_details['metadata']['guid'],
    url=model_asset_details['metadata']['url'],
    name=model_asset_details['entity']['name'],
    description=model_asset_details['entity']['description'],
    deployment_type=model_asset_details['entity']['type'],
    scoring_endpoint = scoring_endpoint
)
 
asset_properties = AssetPropertiesRequest(
        label_column="Risk",
        prediction_field='Scored Labels',
        probability_fields=['Scored Probabilities'],
        training_data_reference=training_data_reference,
        training_data_schema=None,
        input_data_schema=None,
        output_data_schema=None,
    )
 
subscription_details = wos_client.subscriptions.add(
        data_mart_id=data_mart_id,
        service_provider_id=service_provider_id,
        asset=azure_asset,
        deployment=deployment,
        asset_properties=asset_properties,
        background_mode=False
).result

Get subscription list

subscription_id = subscription_details.metadata.id
subscription_id
 
details = wos_client.subscriptions.get(subscription_id).result.to_dict()

Enable payload logging

Enable payload logging in subscription

payload_data_set_id = wos_client.data_sets.list(type=DataSetTypes.PAYLOAD_LOGGING,
                                                target_target_id=subscription_id,
                                                target_target_type=TargetTypes.SUBSCRIPTION).result.data_sets[0].metadata.id

Get logging details

subscription.payload_logging.get_details()

Scoring and payload logging

Score your model. For a full example, see the Working with Azure Machine Learning Service Engine Notebook.

Store the request and response in the payload logging table:

wos_client.data_sets.store_records(data_set_id=payload_data_set_id, request_body=[PayloadRecord(
           scoring_id=str(uuid.uuid4()),
           request=request_data,
           response=response_data,
           response_time=460
)])
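The request_data and response_data values passed to PayloadRecord are plain dictionaries in the request and response formats described earlier in this topic. A minimal illustration; the field names and values below are placeholders, not fields your model necessarily uses:

```python
# Hypothetical scoring request in the required "input" format.
request_data = {
    "input": [
        {"CheckingStatus": "no_checking", "LoanAmount": 5000}
    ]
}

# Hypothetical scoring response in the required "output" format; the
# probabilities appear in the same fixed class order for every object.
response_data = {
    "output": [
        {"Scored Labels": "No Risk",
         "Scored Probabilities": [0.11, 0.89]}
    ]
}
```

Each PayloadRecord pairs one such request with its response, so the number of objects in "input" and "output" should match.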

For languages other than Python, you can also log the payload by using a REST API.

Parent topic: Supported machine learning engines, frameworks, and models
