Fairness metrics overview

When you configure the IBM Watson OpenScale fairness monitor, you can generate a set of metrics to evaluate the fairness of your model. You can use the fairness metrics to determine whether your model produces biased outcomes.

The fairness monitor generates a set of metrics every hour by default. You can generate these metrics on demand by clicking Evaluate fairness now or by using the Python client.
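
For example, the following sketch shows how an on-demand evaluation might be triggered with the ibm-watson-openscale Python SDK; the API key and the fairness monitor instance ID are placeholders that depend on your environment:

    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
    from ibm_watson_openscale import APIClient

    # Authenticate to Watson OpenScale (the API key is a placeholder).
    wos_client = APIClient(authenticator=IAMAuthenticator(apikey="YOUR_API_KEY"))

    # Trigger an on-demand fairness evaluation for an existing monitor instance.
    # fairness_monitor_instance_id is assumed to come from your configuration.
    run = wos_client.monitor_instances.run(
        monitor_instance_id=fairness_monitor_instance_id,
        background_mode=False,  # wait for the evaluation to finish
    )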

With Watson OpenScale, you can specify the features from your model that you want the fairness monitor to evaluate for bias. To evaluate features for bias with the fairness monitor, you must also specify values for each feature that help detect bias.

Among the values that you specify, you must select a monitored group and a reference group. For example, you can set the Female value as the monitored group and the Male value as the reference group for the Sex feature when you configure the fairness monitor. You can specify any features or values when you are configuring the fairness monitor to detect bias for your model.
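
The monitored and reference groups are part of the fairness configuration that you pass to the monitor. The following sketch shows the general shape of such a configuration with the Python client; the feature name, group values, favorable classes, and thresholds are illustrative:

    # Illustrative fairness configuration; adjust the features, classes, and
    # thresholds to match your own model.
    fairness_parameters = {
        "features": [
            {
                "feature": "Sex",
                "majority": ["male"],    # reference group
                "minority": ["female"],  # monitored group
                "threshold": 0.95,       # alert when fairness drops below 95%
            }
        ],
        "favourable_class": ["No Risk"],
        "unfavourable_class": ["Risk"],
        "min_records": 100,  # minimum number of records per evaluation
    }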

You must also specify the output schema for a model or function in IBM Watson Machine Learning to enable fairness monitoring in Watson OpenScale. You can specify the output schema by using the client.repository.ModelMetaNames.OUTPUT_DATA_SCHEMA property in the metadata section of the store_model API. For more information, see the IBM Watson Machine Learning client documentation.
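
The following sketch illustrates how the output schema might be supplied when you store a model with the ibm-watson-machine-learning Python client; the field names and the remaining metadata are placeholders:

    from ibm_watson_machine_learning import APIClient

    wml_client = APIClient(wml_credentials)  # wml_credentials is a placeholder

    # Describe the model output so that Watson OpenScale can locate the
    # prediction and probability columns (field names are illustrative).
    output_data_schema = {
        "id": "output_schema",
        "fields": [
            {"name": "prediction", "type": "string"},
            {"name": "probability", "type": "array"},
        ],
    }

    model_metadata = {
        wml_client.repository.ModelMetaNames.NAME: "credit-risk-model",
        wml_client.repository.ModelMetaNames.OUTPUT_DATA_SCHEMA: output_data_schema,
        # ... other required properties, such as the model type and software spec ...
    }

    stored_model = wml_client.repository.store_model(
        model=model_object,  # the trained model object or path
        meta_props=model_metadata,
    )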

How it works

On the Insights dashboard, you can view the results of the model evaluations that you enable when you configure the fairness monitor.

When you click a model deployment tile, the Fairness section displays a summary of the metrics that describe the outcomes of the evaluation. You can click the navigation arrow to see a chart that provides metrics from the results of your model evaluation during specific time periods. For more information, see Viewing data for a deployment.

You can click a data point on the chart to view more details about how the fairness score was calculated. For each monitored group, you can view the calculations for the following types of data sets:

  • Balanced: The balanced calculation includes the scoring requests that are received for the selected hour. The calculation also includes more records from previous hours if the minimum number of records that are required for evaluation was not met, and perturbed or synthesized records that are used to test the model's response when the value of the monitored feature changes.
  • Payload: The actual scoring requests that are received by the model for the selected hour.
  • Training: The training data records that are used to train the model.
  • Debiased: The output of the debiasing algorithm after processing the runtime and perturbed data.

Do the math

The Watson OpenScale algorithm computes bias on an hourly basis by using the last N records in the payload logging table; you specify the value of N when you configure the fairness monitor. The algorithm perturbs these last N records to generate additional data.

The perturbation changes the values of the feature from the reference group to the monitored group, or vice-versa. The perturbed data is then sent to the model to evaluate its behavior. The algorithm looks at the last N records in the payload table, and the behavior of the model on the perturbed data, to decide whether the model is acting in a biased manner.
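
The following sketch illustrates the idea of perturbation on a small pandas DataFrame; it is a simplified illustration, not the internal Watson OpenScale implementation:

    import pandas as pd

    def perturb_records(records: pd.DataFrame, feature: str,
                        reference: str, monitored: str) -> pd.DataFrame:
        """Swap the monitored feature value between the reference and
        monitored groups; all other feature values stay unchanged."""
        perturbed = records.copy()
        perturbed[feature] = perturbed[feature].map(
            {reference: monitored, monitored: reference}
        )
        return perturbed

    payload = pd.DataFrame({
        "Sex": ["female", "male", "female"],
        "Age": [31, 45, 27],
    })

    # The perturbed records are sent to the model so that its behavior on the
    # original and flipped values of the feature can be compared.
    perturbed = perturb_records(payload, feature="Sex",
                                reference="male", monitored="female")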

A model is biased if the percentage of favorable outcomes for the monitored group is less than the percentage of favorable outcomes for the reference group, by some threshold value. This threshold value is specified when you configure the fairness monitor.

Fairness values can be more than 100%, which means that the monitored group received more favorable outcomes than the reference group. In addition, if no new scoring requests are sent, the fairness value remains constant.
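
As an illustration of how the reported value can be interpreted, assume that the fairness value is the ratio of favorable-outcome rates for the two groups:

    # Illustrative arithmetic, assuming the fairness value is reported as the
    # ratio of favorable-outcome rates (monitored group vs. reference group).
    favorable_rate_monitored = 0.60  # 60% favorable outcomes for the monitored group
    favorable_rate_reference = 0.75  # 75% favorable outcomes for the reference group

    fairness_value = 100 * favorable_rate_monitored / favorable_rate_reference
    print(f"Fairness: {fairness_value:.1f}%")  # 80.0%; flagged as biased if the threshold is 95%

    # A value above 100% means that the monitored group received favorable
    # outcomes at a higher rate than the reference group.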

Balanced data and perfect equality

For balanced data sets, the following concepts apply:

  • To determine the perfect equality value, reference group transactions are synthesized by changing the monitored feature value of every monitored group transaction to all reference group values. These new synthesized transactions are added to the set of reference group transactions and evaluated by the model.

    If the monitored feature is SEX and the monitored group is FEMALE, all FEMALE transactions are duplicated as MALE transactions. Other feature values remain unchanged. These new synthesized MALE transactions are added to the set of original MALE reference group transactions.

  • The percentage of favorable outcomes is determined from the new reference group. This percentage represents perfect fairness for the monitored group.

  • The monitored group transactions are also synthesized by changing the reference feature value of every reference group transaction to the monitored group value. These new synthesized transactions are added to the set of monitored group transactions and evaluated by the model.

    If the monitored feature is SEX and the monitored group is FEMALE, all MALE transactions are duplicated as FEMALE transactions. Other feature values remain unchanged. These new synthesized FEMALE transactions are added to the set of original FEMALE monitored group transactions.

The following mathematical formula is used for calculating perfect equality:

Perfect equality =   Percentage of favorable outcomes for all reference transactions, 
                     including the synthesized transactions from the monitored group

For example, if the monitored feature is SEX and the monitored group is FEMALE, the following formula shows the equation for perfect equality:

Perfect equality for `SEX` =  Percentage of favorable outcomes for `MALE` transactions, 
                                 including the synthesized transactions that were initially `FEMALE` but changed to `MALE`
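
The following sketch expresses the same calculation in Python; the score function and the favorable label are hypothetical placeholders for your model's scoring logic:

    import pandas as pd

    def perfect_equality(records: pd.DataFrame, feature: str,
                         reference: str, monitored: str,
                         score, favorable_label: str) -> float:
        """Percentage of favorable outcomes for all reference transactions,
        including transactions synthesized from the monitored group."""
        # Synthesize reference-group transactions by flipping only the
        # monitored feature value of every monitored-group transaction.
        synthesized = records[records[feature] == monitored].copy()
        synthesized[feature] = reference

        # Combine the original reference-group transactions with the
        # synthesized ones and score them.
        all_reference = pd.concat(
            [records[records[feature] == reference], synthesized],
            ignore_index=True,
        )
        outcomes = all_reference.apply(score, axis=1)
        return 100 * (outcomes == favorable_label).mean()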

Supported models

Watson OpenScale supports bias detection only for the models and Python functions that use structured data in the feature vector.

Fairness metrics are calculated based on the scoring payload data.

For proper monitoring purposes, every scoring request must be logged in Watson OpenScale. Payload data logging is automated for IBM Watson Machine Learning engines.

For other machine learning engines, the payload data can be provided either by using the Python client or the REST API.
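
For example, the following sketch shows how payload records might be logged with the ibm-watson-openscale Python SDK; the data set ID and the request and response payloads are placeholders:

    from ibm_watson_openscale.supporting_classes.payload_record import PayloadRecord

    # wos_client is an authenticated APIClient, and payload_data_set_id is the
    # ID of the payload logging data set for the monitored deployment.
    wos_client.data_sets.store_records(
        data_set_id=payload_data_set_id,
        request_body=[
            PayloadRecord(
                request={"fields": ["Sex", "Age"], "values": [["female", 31]]},
                response={"fields": ["prediction", "probability"],
                          "values": [["No Risk", [0.82, 0.18]]]},
                response_time=120,
            )
        ],
    )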

For machine learning engines other than IBM Watson Machine Learning, fairness monitoring creates additional scoring requests on the monitored deployment.

You can review the following information with the fairness monitor:

  • Metrics values over time
  • Related details, such as favorable and unfavorable outcomes
  • Detailed transactions
  • Recommended debiased scoring endpoint

Supported fairness metrics

The following fairness metrics are supported by Watson OpenScale:

Supported fairness details

The following details for fairness metrics are supported by Watson OpenScale:

  • The favorable percentages for each of the groups
  • Fairness averages for all the fairness groups
  • Distribution of the data for each of the monitored groups
  • Distribution of payload data

Parent topic: Watson OpenScale
