Payload data contains all of your model transactions. To log these transactions in the data mart, you must send scoring requests to your deployed model. You must provide payload data to configure fairness and drift evaluations and explainability.
Logging payload data
When you send a scoring request, the transaction is scored and stored as a record in a payload logging table within the data mart, where it is processed for model evaluations.
The data that is stored in the table must contain the same feature columns as your training data along with the prediction columns that your model generates. The table can also include a prediction probability column, as well as timestamp and ID columns that identify each scoring record.
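As a sketch of this layout, a single payload record can be modeled as a dictionary in Python. The column names used here (`age`, `income`, `prediction`, `probability`, `scoring_id`, `scoring_timestamp`) are illustrative assumptions, not the exact names in your schema:

```python
from datetime import datetime, timezone

# Hypothetical payload logging record for a binary-classification model.
# Feature columns mirror the training data; prediction, probability,
# timestamp, and ID columns are added when the transaction is scored.
record = {
    "scoring_id": "txn-0001",                                     # ID column
    "scoring_timestamp": datetime.now(timezone.utc).isoformat(),  # timestamp column
    "age": 42,                                                    # feature column
    "income": 51000.0,                                            # feature column
    "prediction": "No Risk",                                      # prediction column
    "probability": [0.81, 0.19],                                  # prediction probability column
}

# Minimal schema check: every training feature must appear in the record.
training_features = ["age", "income"]
assert all(feature in record for feature in training_features)
```

The key point is that the feature columns must match the training data exactly; the remaining columns describe how and when the transaction was scored.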
You can view your payload logging table by accessing the database that you specified for the data mart or by querying the table with the Python SDK.
Sending payload data
If you are using IBM watsonx.ai Runtime as your machine learning provider, your payload data is logged automatically when your model is scored. If you are using an external machine learning provider, you must manually log payload data with a JSON file or a payload logging endpoint to configure evaluations and explainability. For more information, see Payload logging.
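For manual logging, each scoring request is paired with its response in a JSON record. The following sketch builds one such record in Python; the `fields`/`values` structure follows a common tabular scoring format, but treat the exact shape as an assumption and confirm it against the Payload logging documentation for your provider:

```python
import json

# Hypothetical payload record pairing one scoring request with its response.
# "fields" lists the column names; "values" holds one row per transaction.
payload_record = {
    "request": {
        "fields": ["age", "income"],
        "values": [[42, 51000.0]],
    },
    "response": {
        "fields": ["prediction", "probability"],
        "values": [["No Risk", [0.81, 0.19]]],
    },
}

# Serialize a batch of records to the JSON file that you upload for logging.
payload_json = json.dumps([payload_record], indent=2)
```

Batching several transactions into one file means each upload can cover many scoring requests at once.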
After you configure evaluations, you can also use a payload logging endpoint to send scoring requests to run on-demand evaluations. For more information, see Sending model transactions. For production models, you can also upload payload data with a CSV file to send scoring requests.
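For a CSV upload, the file typically mirrors the payload table layout: a header row of feature columns followed by one row per scoring request. A minimal sketch, with assumed column names:

```python
import csv
import io

# Hypothetical scoring requests to upload as a CSV file. The header row
# must match the feature columns of your training data.
rows = [
    {"age": 42, "income": 51000.0},
    {"age": 35, "income": 38000.0},
]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["age", "income"])
writer.writeheader()
writer.writerows(rows)
csv_payload = buffer.getvalue()  # contents of the CSV file to upload
```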
Parent topic: Managing data for model evaluations