Compression evaluation metric
Last updated: Feb 07, 2025

The compression metric measures how much shorter the generated summary is than the input text by calculating the ratio of the number of words in the foundation model output to the number of words in the original text.
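The ratio can be sketched as follows. This is a minimal illustration that assumes simple whitespace word counting; the function name and the exact tokenization used by the Python SDK are assumptions, not the SDK's actual implementation.

```python
def compression_score(source_text: str, summary_text: str) -> float:
    """Ratio of summary word count to source word count.

    Falls in the 0.0-1.0 range whenever the summary is shorter
    than the source. Hypothetical helper for illustration only.
    """
    source_words = len(source_text.split())
    summary_words = len(summary_text.split())
    if source_words == 0:
        return 0.0  # avoid division by zero on empty input
    return summary_words / source_words


# A 20-word summary of a 100-word source yields a ratio of 0.2,
# which the scoring table below describes as highly compressed.
source = " ".join(["word"] * 100)
summary = " ".join(["word"] * 20)
print(compression_score(source, summary))  # → 0.2
```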

Metric details

Compression is a content analysis metric for generative AI quality evaluations that evaluates model output against model input or context. The metric is available only when you use the Python SDK to calculate evaluation metrics.

Scope

The compression metric evaluates generative AI assets only.

  • Types of AI assets: Prompt templates
  • Generative AI tasks: Text summarization
  • Supported languages: English

Scores and values

The compression metric score indicates how concisely a generated summary reduces the length of the original text. Lower scores indicate that the summary is more concise when compared to the original text.

Range of values: 0.0-1.0

  • Ratios:
    • 0.2: Highly compressed summary that might risk omitting key details
    • 0.5: Balanced compression
    • 0.9: Minimal compression that mostly retains original text

Settings

  • Thresholds: Lower limit: 0

Parent topic: Evaluation metrics