Last updated: Feb 03, 2025
The answer relevance metric measures how relevant the answer in the model output is to the question in the model input.
Metric details
Answer relevance is one of the answer quality metrics for generative AI quality evaluations, which measure the quality of model answers. Answer quality metrics are calculated with LLM-as-a-judge models.
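Conceptually, an LLM-as-a-judge computation prompts a judge model to rate the generated answer against the input question and parses a numeric score from the reply. The following is a minimal sketch in Python, assuming a hypothetical `call_judge_model` helper that wraps your judge-model client; the prompt wording and helper are illustrative, not the product's actual implementation:

```python
# Minimal LLM-as-a-judge sketch for answer relevance.
# `call_judge_model` is a hypothetical stand-in for whatever client
# sends a prompt to the judge model and returns its text reply.

JUDGE_PROMPT = """\
You are grading answer relevance.
Question: {question}
Answer: {answer}
Reply with a single number between 0.0 (not relevant) and 1.0
(fully relevant) that rates how relevant the answer is to the question."""


def call_judge_model(prompt: str) -> str:
    """Hypothetical judge-model call; replace with your LLM client."""
    raise NotImplementedError


def answer_relevance(question: str, answer: str) -> float:
    """Ask the judge model to rate relevance; clamp the score to [0.0, 1.0]."""
    reply = call_judge_model(JUDGE_PROMPT.format(question=question, answer=answer))
    score = float(reply.strip())
    return max(0.0, min(1.0, score))
```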
Scope
The answer relevance metric evaluates generative AI assets only.
- Types of AI assets: Prompt templates
- Generative AI tasks: Retrieval Augmented Generation (RAG)
- Supported languages: English
Scores and values
The answer relevance metric score indicates how relevant the generated answer is to the model input. Higher scores indicate that the model's answers are more relevant to the question.
- Range of values: 0.0-1.0
- Best possible score: 1.0
- Ratios:
    - At 0: No relevance to the input query
    - Over 0: Increasing relevance with higher scores
Settings
- Thresholds:
    - Lower bound: 0
    - Upper bound: 1
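In practice, the threshold bounds can drive alerting: scores that fall below a configured lower limit flag answers that drift off-topic. The following is a minimal sketch, assuming per-record scores are already computed; the `flag_low_relevance` helper and the 0.7 alert threshold are illustrative, not product defaults:

```python
def flag_low_relevance(scores: list[float], threshold: float = 0.7) -> list[int]:
    """Return indices of records whose relevance score falls below
    an illustrative alert threshold inside the 0.0-1.0 metric range."""
    return [i for i, score in enumerate(scores) if score < threshold]


# Example: scores for three evaluated question/answer records.
print(flag_low_relevance([0.92, 0.41, 0.75]))  # -> [1]
```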
Parent topic: Evaluation metrics