Micro recall evaluation metric
Last updated: Feb 26, 2025

The micro recall metric measures the proportion of true samples that are predicted correctly, with predictions pooled across all classes.

Metric details

Micro recall is a multi-label/multi-class metric for generative AI quality evaluations that measures how well generative AI assets perform entity extraction tasks that produce multi-label or multi-class predictions.

Scope

The micro recall metric evaluates generative AI assets only.

  • Types of AI assets: Prompt templates
  • Generative AI tasks: Entity extraction
  • Supported languages: English

Scores and values

The micro recall score is the number of correct predictions, pooled across all classes, divided by the total number of true samples: that is, the sum of true positives over all classes divided by the sum of true positives and false negatives. Higher scores indicate that predictions are more accurate. A worked sketch follows the list below.

  • Range of values: 0.0-1.0
  • Best possible score: 1.0
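
To make the calculation concrete, the following minimal sketch (an illustration only, not the service's implementation; the label sets are hypothetical) computes micro recall for multi-label predictions by pooling true positives and false negatives across all classes:

```python
def micro_recall(y_true, y_pred):
    """Compute micro recall for multi-label predictions.

    y_true, y_pred: lists of label sets, one set per sample.
    Micro recall pools counts across all classes:
        sum of true positives / (sum of true positives + false negatives)
    """
    tp = sum(len(t & p) for t, p in zip(y_true, y_pred))  # correctly predicted labels
    total_true = sum(len(t) for t in y_true)              # all true labels (TP + FN)
    return tp / total_true if total_true else 0.0

# Hypothetical entity extraction labels
y_true = [{"PERSON", "ORG"}, {"DATE"}, {"ORG"}]
y_pred = [{"PERSON"}, {"DATE", "ORG"}, {"PERSON"}]

print(micro_recall(y_true, y_pred))  # 0.5 -- 2 of 4 true labels recovered
```

If scikit-learn is available, calling recall_score with average="micro" on binarized labels produces the same value.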

Settings

  • Thresholds:
    • Lower limit: 0.8
    • Upper limit: 1.0
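
As a sketch of how these default thresholds might be applied in practice (the variable names and example score are hypothetical, not part of the service API):

```python
LOWER_LIMIT = 0.8  # default lower threshold from this page
UPPER_LIMIT = 1.0  # default upper threshold

score = 0.75  # example micro recall score (hypothetical)
if score < LOWER_LIMIT:
    print(f"Micro recall {score:.2f} is below the lower limit of {LOWER_LIMIT}.")
```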

Parent topic: Evaluation metrics