Equals to evaluation metric

Last updated: Feb 13, 2025

The equals to evaluation metric measures whether each row in the prediction output is exactly equal to a specified string.

Metric details

Equals to is a content validation metric that uses string-based functions to validate text generated by an LLM. The metric is available only when you use the Python SDK to calculate evaluation metrics.
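The comparison itself is a plain exact-match check on strings. The following minimal sketch shows the per-row logic; the `equals_to` function name and signature are hypothetical and do not reflect the SDK's actual interface:

```python
def equals_to(prediction: str, reference: str) -> int:
    """Return 1 if the prediction row exactly equals the reference string,
    otherwise 0. Illustrative sketch only, not the SDK's API."""
    return int(prediction == reference)
```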

Scope

The equals to metric evaluates generative AI assets only.

  • Types of AI assets: Prompt templates
  • Generative AI tasks:
    • Text summarization
    • Content generation
    • Question answering
    • Entity extraction
    • Retrieval augmented generation (RAG)
  • Supported languages: English

Scores and values

The equals to metric score indicates whether each row in the prediction output is equal to the specified string.

  • Range of values: 0.0-1.0 (see the aggregation sketch after this list)
  • Ratios:
    • At 0: The prediction row is not equal to the specified string.
    • At 1: The prediction row is equal to the specified string.
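Each row scores either 0 or 1, so a value between 0.0 and 1.0 arises when per-row results are aggregated across a dataset. Assuming a simple mean aggregation (an assumption; this page does not state how the SDK aggregates), the dataset-level score could be computed as follows. The `equals_to_score` helper is hypothetical:

```python
from typing import List

def equals_to_score(predictions: List[str], reference: str) -> float:
    """Fraction of prediction rows that exactly equal the reference string.
    Assumes mean aggregation of per-row 0/1 scores; illustrative only."""
    if not predictions:
        raise ValueError("predictions must not be empty")
    return sum(p == reference for p in predictions) / len(predictions)

# Example: 2 of 3 rows match "yes", so the score is about 0.67.
print(equals_to_score(["yes", "no", "yes"], "yes"))
```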

Settings

  • Thresholds:
    • Lower limit: 0
    • Upper limit: 1

Parent topic: Evaluation metrics