Micro precision evaluation metric
Last updated: Feb 26, 2025

The micro precision metric measures the ratio of correct predictions, aggregated over all classes, to the total number of predictions made.
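Equivalently, in terms of per-class true positives (TP) and false positives (FP), micro precision follows the standard micro-averaging definition (stated here as an assumption, since this page gives only the verbal definition):

\[
\text{micro precision} = \frac{\sum_{c} \mathrm{TP}_c}{\sum_{c} \left( \mathrm{TP}_c + \mathrm{FP}_c \right)}
\]

where the sums run over all classes \(c\). Because counts are pooled before dividing, classes with many predictions weigh more heavily on the score than rare classes.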

Metric details

Micro precision is a generative AI quality metric that measures how well generative AI assets perform entity extraction tasks that produce multi-label/multi-class predictions.

Scope

The micro precision metric evaluates generative AI assets only.

  • Types of AI assets: Prompt templates
  • Generative AI tasks: Entity extraction
  • Supported languages: English

Scores and values

The micro precision metric indicates the ratio of correct predictions, aggregated over all classes, to the total number of predictions. Higher scores indicate more accurate predictions.

  • Range of values: 0.0-1.0
  • Best possible score: 1.0
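
As an illustrative sketch, the score can be computed by pooling counts across all examples and classes before dividing. The function and input format below are hypothetical (this page does not specify an API); each example's labels are represented as a set of (entity type, span text) pairs:

```python
def micro_precision(predictions, references):
    """Micro precision over multi-label entity-extraction output.

    predictions, references: lists of sets of (entity_type, text) pairs,
    one set per example. Hypothetical input format, for illustration only.
    """
    true_positives = 0   # correct predictions, pooled over all classes
    total_predicted = 0  # all predictions made, pooled over all classes
    for pred, ref in zip(predictions, references):
        true_positives += len(pred & ref)  # predicted labels that match a reference label
        total_predicted += len(pred)
    return true_positives / total_predicted if total_predicted else 0.0

# Example: entities extracted from two documents
preds = [{("PERSON", "Ada"), ("ORG", "IBM")}, {("DATE", "2025")}]
refs  = [{("PERSON", "Ada")},                 {("DATE", "2025"), ("ORG", "IBM")}]
print(micro_precision(preds, refs))  # 2 of 3 predicted labels are correct -> 0.666...
```

Note that micro-averaging divides pooled counts once, rather than averaging per-class precision, so frequently predicted entity types dominate the result.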

Settings

  • Thresholds:
    • Lower limit: 0.8
    • Upper limit: 1.0
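
How the thresholds are enforced is platform-specific; as a minimal sketch, a score below the lower limit might be flagged as follows (the threshold constants mirror the defaults above, and the score value is illustrative):

```python
# Hypothetical threshold check using the default limits listed above.
LOWER_LIMIT = 0.8
UPPER_LIMIT = 1.0

score = 0.667  # e.g., the micro precision computed in the earlier example

if score < LOWER_LIMIT:
    print(f"Micro precision {score:.3f} is below the lower limit of {LOWER_LIMIT}")
else:
    print(f"Micro precision {score:.3f} is within [{LOWER_LIMIT}, {UPPER_LIMIT}]")
```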

Parent topic: Evaluation metrics