The macro precision metric calculates the average of the precision scores that are computed separately for each class.
Metric details
Macro precision is a multi-label/multi-class metric for generative AI quality evaluations that measures how well generative AI assets perform entity extraction tasks with multi-label or multi-class predictions.
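For illustration, the following sketch shows one common way to compute macro precision with scikit-learn. The entity labels and example arrays are hypothetical and are not part of the metric definition.

```python
from sklearn.metrics import precision_score

# Hypothetical entity labels predicted for six text spans (multi-class case)
y_true = ["PERSON", "ORG", "ORG", "LOCATION", "PERSON", "LOCATION"]
y_pred = ["PERSON", "ORG", "PERSON", "LOCATION", "PERSON", "ORG"]

# Precision is computed separately for each class...
per_class = precision_score(
    y_true, y_pred, average=None, labels=["PERSON", "ORG", "LOCATION"]
)

# ...and macro precision is the unweighted mean of those per-class scores
macro = precision_score(y_true, y_pred, average="macro")

print(per_class)  # precision for PERSON, ORG, and LOCATION individually
print(macro)      # average of the per-class precision scores
```

Because each class contributes equally to the average, macro precision does not weight classes by how often they occur, so rare entity classes influence the score as much as frequent ones.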
Scope
The macro precision metric evaluates generative AI assets only.
Types of AI assets: Prompt templates
Generative AI tasks: Entity extraction
Supported languages: English
Scores and values
The macro precision metric score is the average of the precision scores that are calculated separately for each class. Higher scores indicate that predictions are more accurate.
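As a hypothetical worked example, if the per-class precision scores for three entity classes are 0.8, 0.6, and 1.0, the macro precision is (0.8 + 0.6 + 1.0) / 3 = 0.8.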