# Users evaluation metric
The users metric calculates the number of users that send scoring requests to your model deployments.
## Metric details
Users is a model health evaluation metric that can help you understand how efficiently your asset processes your transactions.
### Scope
The users metric evaluates generative AI assets and machine learning models.
- Generative AI tasks:
- Text summarization
- Text classification
- Content generation
- Entity extraction
- Question answering
- Retrieval Augmented Generation (RAG)
- Machine learning problem types:
- Binary classification
- Multiclass classification
- Regression
- Supported languages: English
### Evaluation process
To calculate the number of users, the `user_id` value from scoring requests is used to identify the users that send the requests that your model receives.

For watsonx.ai Runtime deployments, the `user_id` value is automatically detected when you configure evaluations.

For external and custom deployments, you must specify the `user_id` value when you send scoring requests, as shown in the following example from the Python SDK:
```python
from ibm_watson_openscale.supporting_classes.payload_record import PayloadRecord

client.data_sets.store_records(
    data_set_id=payload_data_set_id,
    request_body=[
        PayloadRecord(
            scoring_id=<uuid>,
            request=openscale_input,
            response=openscale_output,
            response_time=<response_time>,
            user_id=<user_id>  # value to be supplied by user
        )
    ]
)
```
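Conceptually, the metric reduces to counting the distinct `user_id` values among the payload records in an evaluation window. The following is a minimal sketch of that idea, assuming payload records are represented as dictionaries with an optional `user_id` field; the `count_users` helper and the sample records are illustrative, not part of the SDK:

```python
def count_users(payload_records):
    """Return the number of distinct users that sent scoring requests.

    Records without a user_id are ignored, since they cannot be
    attributed to a user.
    """
    user_ids = {r["user_id"] for r in payload_records if r.get("user_id")}
    return len(user_ids)


# Example: three scoring requests from two distinct users.
records = [
    {"scoring_id": "a1", "user_id": "alice"},
    {"scoring_id": "a2", "user_id": "bob"},
    {"scoring_id": "a3", "user_id": "alice"},
]
print(count_users(records))  # 2
```

This is why supplying `user_id` on every scoring request matters for external and custom deployments: requests without it cannot be attributed to a user and do not contribute to the metric.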
Parent topic: Evaluation metrics