Macro recall evaluation metric
Last updated: Feb 26, 2025

The macro recall metric measures the average of recall scores that are calculated separately for each class.

Metric details

Macro recall is a multi-label/multi-class metric for generative AI quality evaluations that measures how well generative AI assets perform entity extraction tasks. It computes recall separately for each class and then averages the per-class scores, so every class contributes equally regardless of how often it occurs.

Scope

The macro recall metric evaluates generative AI assets only.

  • Types of AI assets: Prompt templates
  • Generative AI tasks: Entity extraction
  • Supported languages: English

Scores and values

The macro recall metric score indicates the average of the recall scores that are calculated for each class. Per-class recall is the fraction of true instances of a class that the model correctly predicts. Higher scores indicate that predictions are more accurate.

  • Range of values: 0.0-1.0
  • Best possible score: 1.0
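
The calculation can be sketched in plain Python. This is an illustrative implementation, not the product's internal code: for each class, recall is true positives divided by the total number of true instances of that class, and macro recall is the unweighted mean of those per-class values.

```python
def macro_recall(y_true, y_pred):
    """Unweighted mean of per-class recall: TP_c / (TP_c + FN_c) for each class c."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        # True positives: instances of class c that were predicted as c.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        # False negatives: instances of class c that were predicted as something else.
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        recalls.append(tp / (tp + fn) if (tp + fn) else 0.0)
    return sum(recalls) / len(recalls)


# Hypothetical entity-extraction labels: "PER" is recalled 1 of 2 times,
# "ORG" and "LOC" are each recalled fully, so macro recall = (0.5 + 1 + 1) / 3.
score = macro_recall(
    ["PER", "PER", "ORG", "LOC"],
    ["PER", "ORG", "ORG", "LOC"],
)
```

Because each class is weighted equally, a single poorly recalled rare class lowers the score as much as a poorly recalled common one, which is why macro recall is useful when class frequencies are imbalanced.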

Settings

  • Thresholds:
    • Lower limit: 0.8
    • Upper limit: 1.0

Parent topic: Evaluation metrics