Disparate impact in Watson OpenScale fairness metrics

In Watson OpenScale, disparate impact is specified as the fairness scores for different groups. Disparate impact compares the percentage of favorable outcomes for a monitored group to the percentage of favorable outcomes for a reference group.

How it works

When you view the details of a model deployment, the Fairness section of the model summary provides the fairness scores for different groups, which are described as metrics. The fairness scores are calculated with the disparate impact formula.

Do the math

The following formula is used for calculating disparate impact:

                     num_positives(privileged=False) / num_instances(privileged=False)
Disparate impact =   -------------------------------------------------------------------
                     num_positives(privileged=True) / num_instances(privileged=True)

The num_positives value represents the number of individuals in the group who received a positive outcome, and the num_instances value represents the total number of individuals in the group. The privileged=False label specifies unprivileged groups and the privileged=True label specifies privileged groups. In Watson OpenScale, the positive outcomes are designated as the favorable outcomes, and the negative outcomes are designated as the unfavorable outcomes. The privileged group is designated as the reference group, and the unprivileged group is designated as the monitored group.

The calculation produces a ratio, expressed as a percentage, that compares the rate at which the unprivileged group receives the positive outcome with the rate at which the privileged group receives the positive outcome. For example, if a credit risk model assigns the “no risk” prediction to 80% of unprivileged applicants and to 100% of privileged applicants, that model has a disparate impact of 80%.
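To illustrate the formula, the following Python sketch computes disparate impact from binary outcome records for two groups. The function name and input format are illustrative only and are not part of the Watson OpenScale API; they simply restate the formula above.

def disparate_impact(monitored_outcomes, reference_outcomes):
    """Return the ratio of favorable-outcome rates: monitored / reference.

    Each argument is a list of outcomes for one group,
    where 1 = favorable outcome and 0 = unfavorable outcome.
    """
    monitored_rate = sum(monitored_outcomes) / len(monitored_outcomes)
    reference_rate = sum(reference_outcomes) / len(reference_outcomes)
    return monitored_rate / reference_rate

# Example from the text: 80% of monitored (unprivileged) applicants and
# 100% of reference (privileged) applicants receive the favorable
# "no risk" prediction.
monitored = [1] * 8 + [0] * 2   # 8 of 10 favorable
reference = [1] * 10            # 10 of 10 favorable
print(disparate_impact(monitored, reference))  # 0.8, that is, 80%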

Supported fairness details

The following details for fairness metrics are supported by Watson OpenScale:

  • The favorable percentages for each of the groups
  • Fairness averages for all the fairness groups
  • Distribution of the data for each of the monitored groups
  • Distribution of payload data

Learn more

Reviewing fairness results

Parent topic: Fairness metrics overview
