The ends with evaluation metric measures whether the rows in the prediction end with the specified substring.
Metric details
Ends with is a content validation metric that uses string-based functions to analyze and validate generated LLM output text. The metric is available only when you use the Python SDK to calculate evaluation metrics.
Scope
The ends with metric evaluates generative AI assets only.
Types of AI assets: Prompt templates
Generative AI tasks:
Text summarization
Content generation
Question answering
Entity extraction
Retrieval augmented generation (RAG)
Supported languages: English
Scores and values
The ends with metric score indicates whether the rows in the prediction end with the specified substring.
Range of values: 0.0-1.0
Ratios:
At 0: None of the prediction rows end with the specified substring.
At 1: All of the prediction rows end with the specified substring.
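The scoring logic described above can be sketched as follows. This is an illustrative implementation, not the watsonx.governance Python SDK API: the function name and signature are assumptions chosen to show how the 0.0-1.0 ratio can be computed from prediction rows.

```python
# Illustrative sketch (not the SDK's actual interface): score the
# fraction of prediction rows that end with a specified substring.

def ends_with_score(predictions, substring):
    """Return the ratio of rows in `predictions` that end with `substring`."""
    if not predictions:
        raise ValueError("predictions must be a non-empty list of strings")
    matches = sum(1 for row in predictions if row.endswith(substring))
    return matches / len(predictions)

predictions = [
    "The report is attached. Regards, Support Team",
    "Thanks for reaching out. Regards, Support Team",
    "Your ticket has been closed.",
]
score = ends_with_score(predictions, "Regards, Support Team")
print(score)  # 2 of 3 rows match, so the score is roughly 0.67
```

A score of 1.0 means every generated row ends with the required substring; intermediate values report the matching ratio across the evaluated rows.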