Coverage evaluation metric
Last updated: Feb 07, 2025

The coverage metric measures the extent to which the foundation model output is generated from the model input by calculating the percentage of output text that also appears in the input.

Metric details

Coverage is a content analysis metric for generative AI quality evaluations that evaluates your model output against your model input or context.

Scope

The coverage metric evaluates generative AI assets only.

  • Types of AI assets: Prompt templates
  • Generative AI tasks:
    • Retrieval Augmented Generation (RAG)
    • Text summarization
  • Supported languages: English

Scores and values

The coverage metric score indicates the extent to which the foundation model output is generated from the model input. Higher scores indicate that a higher percentage of output words appear in the input text.

Range of values: 0.0-1.0
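The score can be illustrated with a minimal sketch. The exact tokenization and matching rules of the product are not specified here, so the following assumes simple word-level tokenization: the score is the fraction of output words that also appear anywhere in the input.

```python
import re

def coverage(model_input: str, model_output: str) -> float:
    """Sketch of a coverage score: fraction of output words that
    also appear in the input text (0.0-1.0). Word-level matching
    is an assumption for illustration, not the product's exact rule."""
    tokenize = lambda text: re.findall(r"\w+", text.lower())
    input_words = set(tokenize(model_input))
    output_words = tokenize(model_output)
    if not output_words:
        return 0.0
    matched = sum(1 for word in output_words if word in input_words)
    return matched / len(output_words)
```

For example, if the output is a summary whose every word occurs in the source passage, the score is 1.0; words introduced by the model that are absent from the input lower the score toward 0.0.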

Settings

  • Thresholds:
    • Lower limit: 0
    • Upper limit: 1

Parent topic: Evaluation metrics