Users evaluation metric

Last updated: Mar 14, 2025

The users metric counts the number of users who send scoring requests to your model deployments.

Metric details

Users is a model health evaluation metric that helps you understand how efficiently your asset processes scoring transactions.

Scope

The users metric evaluates generative AI assets and machine learning models.

  • Generative AI tasks:
    • Text summarization
    • Text classification
    • Content generation
    • Entity extraction
    • Question answering
    • Retrieval Augmented Generation (RAG)
  • Machine learning problem types:
    • Binary classification
    • Multiclass classification
    • Regression
  • Supported languages: English

Evaluation process

To calculate the number of users, the evaluation uses the user_id value from scoring requests to identify the distinct users that send requests to your model deployment.
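
Conceptually, the metric is the count of distinct user_id values observed in the logged payload records. The following is a minimal sketch of that calculation; the dict-based record format is a simplified assumption for illustration, not the service's internal representation:

    # Minimal sketch: count distinct users across logged scoring requests.
    # The dict-based record format is a simplified assumption.
    records = [
        {"scoring_id": "req-1", "user_id": "alice"},
        {"scoring_id": "req-2", "user_id": "bob"},
        {"scoring_id": "req-3", "user_id": "alice"},
    ]

    user_count = len({r["user_id"] for r in records if r.get("user_id")})
    print(user_count)  # 2 -- alice and bob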

For watsonx.ai Runtime deployments, the user_id value is automatically detected when you configure evaluations.

For external and custom deployments, you must specify the user_id value when you send scoring requests so that the number of users can be calculated, as shown in the following Python SDK example:

    from ibm_watson_openscale.supporting_classes.payload_record import PayloadRecord

    # client is an authenticated ibm_watson_openscale APIClient instance
    client.data_sets.store_records(
        data_set_id=payload_data_set_id,
        request_body=[
            PayloadRecord(
                scoring_id=<uuid>,
                request=openscale_input,
                response=openscale_output,
                response_time=<response_time>,
                user_id=<user_id>  # value to be supplied by user
            )
        ]
    )
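
For illustration, the placeholders might be filled in as follows. The request and response payloads, response time, and user ID in this sketch are hypothetical values, not a required format:

    import uuid

    from ibm_watson_openscale.supporting_classes.payload_record import PayloadRecord

    # Hypothetical request and response payloads for a tabular model.
    openscale_input = {"fields": ["age", "income"], "values": [[35, 50000]]}
    openscale_output = {"fields": ["prediction"], "values": [["approved"]]}

    record = PayloadRecord(
        scoring_id=str(uuid.uuid4()),   # unique ID for this scoring request
        request=openscale_input,
        response=openscale_output,
        response_time=120,              # hypothetical response time
        user_id="user_123"              # identifies the caller for the users metric
    )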

Parent topic: Evaluation metrics