Error rate difference evaluation metric
Last updated: Feb 13, 2025

The error rate difference metric measures the difference between the percentages of transactions that are incorrectly scored by your model for the monitored and reference groups.

Metric details

Error rate difference is a fairness evaluation metric that can help determine whether your asset produces biased outcomes.

Scope

The error rate difference metric evaluates generative AI assets and machine learning models.

  • Types of AI assets:
    • Prompt templates
    • Machine learning models
  • Generative AI tasks: Text classification
  • Machine learning problem type: Binary classification

Scores and values

The error rate difference metric score indicates the difference between the error rate for the monitored group and the error rate for the reference group.

  • Range of values: 0.0-1.0
  • Values:
    • At 0: Both groups have equal error rates

Do the math

The following formula is used for calculating the error rate (ER):

ER = (false positives + false negatives) / total number of transactions

The following formula is used for calculating the error rate difference:

error rate difference = ER of monitored group − ER of reference group
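The two formulas above can be sketched in code. The following is a minimal illustration, not the product's implementation; the function and parameter names (`error_rate`, `error_rate_difference`, `monitored_mask`) are hypothetical. It assumes binary labels and predictions, plus a boolean mask marking which transactions belong to the monitored group.

```python
def error_rate(y_true, y_pred):
    """Error rate (ER): fraction of incorrectly scored transactions,
    i.e. (false positives + false negatives) / total."""
    errors = sum(1 for t, p in zip(y_true, y_pred) if t != p)
    return errors / len(y_true)

def error_rate_difference(y_true, y_pred, monitored_mask):
    """ER of the monitored group minus ER of the reference group."""
    mon_true = [t for t, m in zip(y_true, monitored_mask) if m]
    mon_pred = [p for p, m in zip(y_pred, monitored_mask) if m]
    ref_true = [t for t, m in zip(y_true, monitored_mask) if not m]
    ref_pred = [p for p, m in zip(y_pred, monitored_mask) if not m]
    return error_rate(mon_true, mon_pred) - error_rate(ref_true, ref_pred)
```

For example, if the monitored group has one error in two transactions (ER = 0.5) and the reference group has none (ER = 0.0), the error rate difference is 0.5; a score of 0 indicates that both groups have equal error rates.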

Parent topic: Evaluation metrics