Length less than evaluation metric

Last updated: Feb 26, 2025
The length less than metric measures whether the length of each row in the prediction is less than a specified maximum value.

Metric details

Length less than is a content validation metric that uses string-based functions to analyze and validate generated LLM output text. The metric is available only when you use the Python SDK to calculate evaluation metrics.

Scope

The length less than metric evaluates generative AI assets only.

  • Types of AI assets: Prompt templates
  • Generative AI tasks:
    • Text summarization
    • Content generation
    • Question answering
    • Entity extraction
    • Retrieval augmented generation (RAG)
  • Supported languages: English

Scores and values

The length less than metric scores each row in the prediction individually: a row passes if its length is less than the specified maximum value.

  • Range of values: 0.0-1.0
  • Values:
    • At 0: The prediction length is not less than the specified maximum value.
    • At 1: The prediction length is less than the specified maximum value.
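The per-row scoring described above can be sketched in plain Python. This is an illustrative sketch only, not the evaluation SDK's actual API; the function name, parameters, and return shape are assumptions made for the example. Each row scores 1 if its prediction length is strictly less than the specified maximum, and 0 otherwise.

```python
# Illustrative sketch only -- NOT the SDK's API.
# Mirrors the metric logic: each prediction row scores 1 if its
# character length is strictly less than the maximum, else 0.

def length_less_than(predictions, max_length):
    """Return a per-row score (0 or 1) for each prediction string."""
    return [1 if len(row) < max_length else 0 for row in predictions]

rows = [
    "short answer",
    "a much longer generated answer that exceeds the cap",
]
print(length_less_than(rows, max_length=20))  # -> [1, 0]
```

In practice the SDK computes these scores for you when you run the evaluation; this sketch only shows what the 0 and 1 values in the score range represent.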

Settings

  • Thresholds:
    • Lower limit: 0
    • Upper limit: 1

Parent topic: Evaluation metrics