Macro precision evaluation metric
Last updated: Feb 26, 2025

The macro precision metric calculates the average of the precision scores that are computed separately for each class.
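
Concretely, the score can be written as follows (the notation is illustrative: C is the number of classes, and TP_c and FP_c are the true positives and false positives for class c):

  \text{Macro precision} = \frac{1}{C} \sum_{c=1}^{C} \frac{\mathrm{TP}_c}{\mathrm{TP}_c + \mathrm{FP}_c}

Because each class contributes equally to the mean, rare classes affect the score as much as frequent ones.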

Metric details

Macro precision is a multi-label and multi-class metric for generative AI quality evaluations that measures how well generative AI assets perform entity extraction tasks that produce multi-label or multi-class predictions.

Scope

The macro precision metric evaluates generative AI assets only.

  • Types of AI assets: Prompt templates
  • Generative AI tasks: Entity extraction
  • Supported languages: English

Scores and values

The macro precision metric score is the unweighted average of the per-class precision scores. Higher scores indicate more accurate predictions.

  • Range of values: 0.0-1.0
  • Best possible score: 1.0
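
As a minimal sketch, assuming scikit-learn is available, the example below computes the score with precision_score and average="macro"; the entity labels are invented for illustration, and this is a standard computation of macro precision rather than the product's exact implementation:

  # A minimal macro precision computation with scikit-learn.
  from sklearn.metrics import precision_score

  # Illustrative gold and predicted entity labels (hypothetical data).
  y_true = ["PERSON", "ORG", "ORG", "LOC", "PERSON"]
  y_pred = ["PERSON", "ORG", "LOC", "LOC", "ORG"]

  # average="macro" computes precision per class, then takes the
  # unweighted mean; zero_division=0 scores classes with no predicted
  # samples as 0 instead of raising a warning.
  score = precision_score(y_true, y_pred, average="macro", zero_division=0)
  print(f"Macro precision: {score:.2f}")  # 0.67

In this example, PERSON precision is 1.0 while ORG and LOC precision are each 0.5, so the macro average is (1.0 + 0.5 + 0.5) / 3 ≈ 0.67.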

Settings

  • Thresholds:
    • Lower limit: 0.8
    • Upper limit: 1.0
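
Read as an acceptance band, a score below the lower limit signals degraded extraction quality. The check below is a hypothetical sketch of how such a comparison could look, not the product's actual alerting logic:

  LOWER_LIMIT = 0.8  # default lower threshold from the settings above

  def violates_threshold(score: float, lower: float = LOWER_LIMIT) -> bool:
      # Hypothetical check: flag scores that fall below the lower limit.
      return score < lower

  print(violates_threshold(0.67))  # True: 0.67 is below the 0.8 lower limit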

Parent topic: Evaluation metrics