Readability evaluation metric
Last updated: Mar 05, 2025

The readability metric determines how difficult the model's output is to read by measuring characteristics such as sentence length and word complexity.
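
The documentation does not spell out the exact formula, but the Flesch reading ease score is a common way to combine these two characteristics, and its conventional "plain English" cutoff of 60 matches the default lower limit listed under Settings. A minimal Python sketch, assuming that formula and a rough vowel-group syllable counter:

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch reading ease:
    206.835 - 1.015 * (words per sentence) - 84.6 * (syllables per word).
    Higher values mean the text is easier to read."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

print(flesch_reading_ease("The cat sat on the mat."))  # short, simple text scores high
print(flesch_reading_ease("Multisyllabic terminology diminishes comprehensibility."))  # dense text scores much lower
```

Longer sentences and words with many syllables pull the score down; short, simple sentences push it up. Note that the raw formula can produce values outside the reported range for extreme inputs.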

Metric details

Readability is one of the generative AI quality metrics, which measure how well generative AI assets perform their tasks. It scores how easy the model's output is to read.

Scope

The readability metric evaluates generative AI assets only.

  • Types of AI assets: Prompt templates
  • Generative AI tasks:
    • Text summarization
    • Content generation
  • Supported languages: English

Scores and values

The readability score indicates how easy the model's output is to read: higher scores mean the output is easier to read.

  • Range of values: 0.0-100.0
  • Best possible score: 100.0

Settings

  • Thresholds:
    • Lower limit: 60
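
For example, evaluation results can be flagged when the score drops below this lower limit. A minimal sketch, assuming the default value of 60 shown above (the function name is illustrative, not a product API):

```python
DEFAULT_LOWER_LIMIT = 60.0  # default lower-limit threshold from Settings

def violates_readability_threshold(score: float,
                                    lower_limit: float = DEFAULT_LOWER_LIMIT) -> bool:
    """Return True when a readability score falls below the lower limit."""
    return score < lower_limit

print(violates_readability_threshold(45.2))  # True: output flagged as hard to read
print(violates_readability_threshold(72.8))  # False: output meets the threshold
```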

Parent topic: Evaluation metrics