Average odds difference metric
Last updated: Feb 13, 2025
The average odds difference metric measures the difference in false positive and false negative rates between monitored and reference groups.
Metric details
Average odds difference is a fairness evaluation metric that can help determine whether your asset produces biased outcomes.
Scope
The average odds difference metric evaluates generative AI assets and machine learning models.
- Types of AI assets:
- Prompt templates
- Machine learning models
- Generative AI tasks: Text classification
- Machine learning problem type: Binary classification
Scores and values
The average odds difference metric score indicates the difference in false positive and false negative rates for monitored and reference groups.
- Range of values: -1.0 to 1.0
- Best possible score: 0.0
- Interpretation:
- At 0: Both groups have equal odds
- Under 0: Biased outcomes for the monitored group
- Over 0: Biased outcomes for the reference group
Do the math
The following formula is used for calculating the false positive rate (FPR), where FP is the number of false positives and TN is the number of true negatives:

FPR = FP / (FP + TN)

The following formula is used for calculating the true positive rate (TPR), where TP is the number of true positives and FN is the number of false negatives:

TPR = TP / (TP + FN)

The following formula is used for calculating average odds difference, where the FPR and TPR values are computed separately for the monitored and reference groups:

average odds difference = ½ × [(FPR_monitored − FPR_reference) + (TPR_monitored − TPR_reference)]
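The calculation above can be sketched in plain Python. This is an illustrative example only; the function names and the toy group data are assumptions, not part of the product documentation.

```python
# Minimal sketch of the average odds difference calculation.
# Labels: 1 = favorable outcome, 0 = unfavorable outcome.

def rates(y_true, y_pred):
    """Return (false positive rate, true positive rate) for one group."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return fp / (fp + tn), tp / (tp + fn)

def average_odds_difference(mon_true, mon_pred, ref_true, ref_pred):
    """Average of the FPR and TPR differences (monitored minus reference)."""
    fpr_m, tpr_m = rates(mon_true, mon_pred)
    fpr_r, tpr_r = rates(ref_true, ref_pred)
    return 0.5 * ((fpr_m - fpr_r) + (tpr_m - tpr_r))

# Toy data: the monitored group receives fewer favorable predictions
# than the reference group for the same ground-truth labels.
mon_true = [1, 1, 0, 0, 1, 0]
mon_pred = [1, 0, 0, 0, 0, 0]
ref_true = [1, 1, 0, 0, 1, 0]
ref_pred = [1, 1, 0, 1, 1, 0]

score = average_odds_difference(mon_true, mon_pred, ref_true, ref_pred)
print(round(score, 3))  # negative, so outcomes are biased against the monitored group
```

A score near 0 would indicate that the two groups have roughly equal odds; here the negative score reflects the monitored group's lower true positive rate and lower false positive rate relative to the reference group.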
Parent topic: Evaluation metrics