The abstractness metric measures the proportion of n-grams in the generated text output that do not appear in the source content that was provided to the foundation model, which indicates how abstractive, rather than extractive, the generated text is.
Metric details
Abstractness is a content analysis metric for generative AI quality evaluations that compares generative AI model output against the model input or context.
Scope
The abstractness metric evaluates generative AI assets only.
- Types of AI assets: Prompt templates
- Generative AI tasks:
  - Text summarization
  - Retrieval Augmented Generation (RAG)
- Supported languages: English
Scores and values
The abstractness metric score indicates the level of abstractness that is identified in the output. Higher scores indicate higher levels of abstractness; lower scores indicate that the output reuses more n-grams directly from the source content.
- Range of values: 0.0-1.0
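For illustration, the following is a minimal sketch of how such a ratio can be computed, assuming simple whitespace tokenization and bigrams; the exact tokenization and n-gram size used by the evaluation are not specified here.

```python
def ngrams(tokens: list[str], n: int) -> set[tuple[str, ...]]:
    """Return the set of n-grams in a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}


def abstractness(source: str, generated: str, n: int = 2) -> float:
    """Fraction of n-grams in the generated text that do not appear in the source."""
    source_ngrams = ngrams(source.lower().split(), n)
    generated_ngrams = ngrams(generated.lower().split(), n)
    if not generated_ngrams:
        return 0.0
    novel = generated_ngrams - source_ngrams
    return len(novel) / len(generated_ngrams)


# A summary that copies phrases verbatim scores near 0.0;
# a paraphrased summary scores closer to 1.0.
source = "The committee approved the budget after a long debate on Tuesday."
extractive = "The committee approved the budget after a long debate."
abstractive = "Lawmakers signed off on the spending plan following lengthy discussion."
print(abstractness(source, extractive))   # low score: mostly copied n-grams
print(abstractness(source, abstractive))  # high score: mostly novel n-grams
```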
Settings
- Thresholds:
  - Lower bound: 0
  - Upper bound: 1
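The default bounds span the full score range, so every score falls within them. As a minimal sketch of how a tightened threshold might flag outputs, assuming a hypothetical user-configured lower bound (the check and values below are illustrative, not the platform's implementation):

```python
def within_threshold(score: float, lower_bound: float = 0.0, upper_bound: float = 1.0) -> bool:
    """Check whether an abstractness score falls inside the configured bounds."""
    return lower_bound <= score <= upper_bound


# Example: flag summaries that copy too heavily from the source
# by raising the lower bound (a hypothetical setting).
print(within_threshold(0.15, lower_bound=0.3))  # False: output is too extractive
print(within_threshold(0.72, lower_bound=0.3))  # True
```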
Parent topic: Evaluation metrics