Statistical parity difference evaluation metric
Last updated: Mar 14, 2025
The statistical parity difference metric compares the percentage of favorable outcomes for monitored groups with the percentage for reference groups.
Metric details
Statistical parity difference is a fairness evaluation metric that can help determine whether your asset produces biased outcomes.
Scope
The statistical parity difference metric evaluates generative AI assets and machine learning models.
- Types of AI assets:
- Prompt templates
- Machine learning models
- Generative AI tasks: Text classification
- Machine learning problem type: Binary classification
Scores and values
The statistical parity difference metric score is the rate of favorable outcomes for the monitored group minus the rate of favorable outcomes for the reference group.
- Range of values: -1.0 to 1.0
- Best possible score: 0.0
- Values:
- Under 0: Higher benefit for the reference group
- At 0: Both groups receive equal benefit
- Over 0: Higher benefit for the monitored group
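For example, if 30 percent of the monitored group and 50 percent of the reference group receive the favorable outcome, the statistical parity difference is 0.3 - 0.5 = -0.2, which indicates higher benefit for the reference group.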
Do the math
The following formula is used for calculating statistical parity difference:

statistical parity difference = (number of favorable outcomes for the monitored group ÷ number of monitored group records) − (number of favorable outcomes for the reference group ÷ number of reference group records)
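As an illustration, here is a minimal Python sketch of this calculation. The function name and the sample data are hypothetical and only show the arithmetic; they are not part of any product API.

```python
import numpy as np

def statistical_parity_difference(favorable, monitored):
    """Compute statistical parity difference.

    favorable: boolean array, True where the model produced the favorable outcome.
    monitored: boolean array, True for records in the monitored group,
               False for records in the reference group.
    """
    favorable = np.asarray(favorable, dtype=bool)
    monitored = np.asarray(monitored, dtype=bool)

    # Rate of favorable outcomes in each group.
    monitored_rate = favorable[monitored].mean()
    reference_rate = favorable[~monitored].mean()

    # Monitored rate minus reference rate: negative values indicate higher
    # benefit for the reference group, positive values for the monitored group.
    return monitored_rate - reference_rate

# Hypothetical example: loan approvals (1 = approved) for two groups.
predictions = np.array([1, 0, 0, 1, 1, 1, 0, 1, 1, 1])
is_monitored = np.array([True, True, True, True, True,
                         False, False, False, False, False])
print(statistical_parity_difference(predictions == 1, is_monitored))
# 0.6 - 0.8 = -0.2 -> higher benefit for the reference group
```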
Parent topic: Evaluation metrics