The micro recall metric measures the ratio of correct predictions to the total number of true samples, calculated globally across all classes.
Metric details
Micro recall is a generative AI quality metric that measures how well generative AI assets perform entity extraction tasks that produce multi-label or multi-class predictions.
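As a sketch of how the score is computed, with TP_c and FN_c denoting the true positive and false negative counts for class c (notation introduced here for illustration, not defined in this documentation):

$$\text{micro recall} = \frac{\sum_{c} TP_c}{\sum_{c}\left(TP_c + FN_c\right)}$$

Because the counts are pooled across classes before the ratio is taken, every sample carries equal weight regardless of how frequent its class is.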
Scope
The micro recall metric evaluates generative AI assets only.
Types of AI assets: Prompt templates
Generative AI tasks: Entity extraction
Supported languages: English
Scores and values
The micro recall metric score indicates the ratio of correct predictions to the total number of true samples, calculated globally across all classes. Scores range from 0 to 1, and higher scores indicate more accurate predictions.
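The following is a minimal sketch of computing micro recall for multi-class predictions with scikit-learn; the entity labels and values are hypothetical examples, not part of this documentation.

```python
# Illustrative sketch: micro recall for an entity extraction style task.
# The labels below are hypothetical and chosen only for demonstration.
from sklearn.metrics import recall_score

# Reference (true) entity types for each sample.
y_true = ["PERSON", "ORG", "ORG", "LOCATION", "PERSON", "DATE"]
# Entity types predicted by the generative AI asset.
y_pred = ["PERSON", "ORG", "PERSON", "LOCATION", "PERSON", "ORG"]

# average="micro" pools true positives and false negatives across all
# classes before computing the ratio, so every sample counts equally.
score = recall_score(y_true, y_pred, average="micro")
print(score)  # 4 correct predictions out of 6 true samples -> ~0.667
```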