Users evaluation metric
Last updated: Mar 14, 2025
The users metric calculates the number of users that send scoring requests to your model deployments.
Metric details
Users is a model health evaluation metric that can help you understand how efficiently your asset processes your transactions.
Scope
The users metric evaluates generative AI assets and machine learning models.
- Generative AI tasks:
- Text summarization
- Text classification
- Content generation
- Entity extraction
- Question answering
- Retrieval Augmented Generation (RAG)
- Machine learning problem types:
- Binary classification
- Multiclass classification
- Regression
- Supported languages: English
Evaluation process
To calculate the number of users, the user_id value from scoring requests is used to identify the users that send the scoring requests that your model receives.
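Conceptually, the metric amounts to a distinct count of the user_id values in the logged payload records. The following minimal sketch illustrates the idea with hypothetical records; it is not the service's actual implementation:

# Hypothetical payload records; in practice, these are logged with each scoring request.
payload_records = [
    {"scoring_id": "req-001", "user_id": "alice"},
    {"scoring_id": "req-002", "user_id": "bob"},
    {"scoring_id": "req-003", "user_id": "alice"},
]

# Count the distinct users that sent scoring requests.
unique_users = {record["user_id"] for record in payload_records if record.get("user_id")}
print(len(unique_users))  # 2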
For watsonx.ai Runtime deployments, the user_id value is automatically detected when you configure evaluations.
For external and custom deployments, you must specify the user_id value when you send scoring requests to calculate the number of users, as shown in the following example from the Python SDK:
from ibm_watson_openscale.supporting_classes.payload_record import PayloadRecord

client.data_sets.store_records(
    data_set_id=payload_data_set_id,
    request_body=[
        PayloadRecord(
            scoring_id=<uuid>,
            request=openscale_input,
            response=openscale_output,
            response_time=<response_time>,
            user_id=<user_id>  # value to be supplied by user
        )
    ]
)
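In this example, <uuid>, <response_time>, and <user_id> are placeholders for values from your own scoring request. For illustration only, a hypothetical filled-in call might look like the following sketch; the client object, payload_data_set_id, and the request and response payloads are assumed to already exist for your deployment:

import uuid

from ibm_watson_openscale.supporting_classes.payload_record import PayloadRecord

# Hypothetical request and response payloads for a single scoring call.
openscale_input = {"fields": ["age", "income"], "values": [[42, 50000]]}
openscale_output = {"fields": ["prediction"], "values": [["approved"]]}

client.data_sets.store_records(
    data_set_id=payload_data_set_id,
    request_body=[
        PayloadRecord(
            scoring_id=str(uuid.uuid4()),  # unique ID for this scoring request
            request=openscale_input,
            response=openscale_output,
            response_time=460,             # hypothetical response time
            user_id="analyst-01"           # identifies the user who sent the request
        )
    ]
)

Each distinct user_id value that is logged in this way identifies one user for the metric.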
Parent topic: Evaluation metrics