False discovery rate difference evaluation metric
Last updated: Feb 21, 2025
The false discovery rate difference metric calculates the number of false positive transactions as a percentage of all transactions with a positive outcome, and compares that rate between the monitored and reference groups.
Metric details
False discovery rate difference is a fairness evaluation metric that can help determine whether your asset produces biased outcomes.
Scope
The false discovery rate difference metric evaluates generative AI assets and machine learning models.
- Types of AI assets:
- Prompt templates
- Machine learning models
- Generative AI tasks: Text classification
- Machine learning problem type: Binary classification
Scores and values
The false discovery rate difference metric score indicates how the pervasiveness of false positives among all positive transactions differs between the monitored and reference groups.
- Range of values: 0.0-1.0
- Best possible score: 0.0
- Ratios:
- Under 0: Fewer false positives in the monitored group
- At 0: Both groups have equal odds
- Over 0: Higher rate of false positives in the monitored group
Evaluation process
To calculate the false discovery rate difference, confusion matrices are generated for the monitored and reference groups to identify the number of false positives and true positives in each group. These values are used to calculate the false discovery rate for each group. The false discovery rate of the reference group is then subtracted from the false discovery rate of the monitored group to produce the false discovery rate difference.
Do the math
The following formula is used for calculating the false discovery rate (FDR):

\[
\text{FDR} = \frac{FP}{TP + FP}
\]
The following formula is used for calculating the false discovery rate difference:

\[
\text{FDR difference} = \text{FDR}_{\text{monitored}} - \text{FDR}_{\text{reference}}
\]
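To make the calculation concrete, the following Python sketch computes the false discovery rate for the monitored and reference groups and takes their difference. The array names, group labels, and helper functions are illustrative assumptions for this example, not part of the product API.

```python
import numpy as np

def false_discovery_rate(y_true, y_pred):
    """False discovery rate: FP / (TP + FP) among predicted positives."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    fp = np.sum((y_pred == 1) & (y_true == 0))
    tp = np.sum((y_pred == 1) & (y_true == 1))
    return fp / (tp + fp) if (tp + fp) > 0 else 0.0

def fdr_difference(y_true, y_pred, group):
    """FDR of the monitored group minus FDR of the reference group."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    monitored = group == "monitored"
    reference = group == "reference"
    return (false_discovery_rate(y_true[monitored], y_pred[monitored])
            - false_discovery_rate(y_true[reference], y_pred[reference]))

# Toy data: labels, predictions, and the fairness attribute for each transaction
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0, 1, 0])
group  = np.array(["monitored", "monitored", "monitored", "monitored",
                   "reference", "reference", "reference", "reference"])

# ≈ -0.17: a negative score means fewer false discoveries in the monitored group
print(fdr_difference(y_true, y_pred, group))
```

In this toy example the monitored group's false discovery rate (1/3) is lower than the reference group's (1/2), so the difference is negative, which matches the score interpretation listed above.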
Parent topic: Evaluation metrics