You must provide feedback data in watsonx.governance to configure quality evaluations and to detect changes in your model predictions. When you provide feedback data regularly, you can continually evaluate the accuracy of your model predictions.
Feedback logging
The feedback data that you provide is stored as records in a feedback logging table.
The feedback data that you provide must have the same feature and prediction column structure as your training data, and it must include the known model outcome (the ground truth). Any columns in your feedback data that are not included in the training data are not processed.
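For example, if a binary classification model was trained on `age` and `income` feature columns with a `risk_label` outcome, each feedback record carries those same columns plus the known label. The following is an illustrative sketch; the column names are hypothetical and must match your own training schema:

```python
# Feedback records mirror the training data schema: the same feature
# columns, plus the known outcome. All names here are hypothetical.
feedback_records = [
    {"age": 34, "income": 52000.0, "risk_label": "No Risk"},
    {"age": 61, "income": 18000.0, "risk_label": "Risk"},
]

# A column that was not part of the training data is simply not processed:
feedback_records[0]["reviewer_notes"] = "collected during March review"
```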
You must log your feedback data in the feedback logging table to configure quality and generative AI quality evaluations. The feedback logging table contains the following columns when you evaluate prompt templates:
- Required columns:
    - Prompt variable(s): Contains the values for the variables that are created for prompt templates
    - `reference_output`: Contains the ground truth value
- Optional columns:
    - `_original_prediction`: Contains the output that is generated by the foundation model
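For example, feedback records for a hypothetical summarization prompt template with a single prompt variable named `text` might look like the following sketch. Only `reference_output` and `_original_prediction` are fixed column names; the variable columns depend on your template:

```python
# Feedback records for a hypothetical prompt template with one variable, "text".
# reference_output is required; _original_prediction is optional.
prompt_feedback_records = [
    {
        "text": "Long customer email describing a duplicate billing charge ...",
        "reference_output": "Customer reports a duplicate charge on the March invoice.",
        # Optional: the output that the foundation model originally generated
        "_original_prediction": "The customer was billed twice in March.",
    },
]
```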
Generative AI quality evaluations use feedback data to generate results for the following task types when you evaluate prompt templates:
- Text summarization
- Content generation
- Question answering
- Entity extraction
Quality evaluations use feedback data to generate results for text classification tasks.
Uploading feedback data
You can use a feedback logging endpoint to upload data for quality evaluations. You can also upload feedback data with a CSV file. For more information, see Sending model transactions.
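For example, the following is a minimal sketch of storing feedback records in the feedback data set with the `ibm-watson-openscale` Python SDK. The API key, subscription ID, and record contents are placeholders, and your authentication and schema details may differ:

```python
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson_openscale import APIClient
from ibm_watson_openscale.supporting_classes.enums import DataSetTypes, TargetTypes

# Placeholder credentials -- replace with your own values.
wos_client = APIClient(authenticator=IAMAuthenticator(apikey="YOUR_API_KEY"))

# Look up the feedback data set that belongs to your subscription.
feedback_data_set_id = (
    wos_client.data_sets.list(
        type=DataSetTypes.FEEDBACK,
        target_target_id="YOUR_SUBSCRIPTION_ID",  # placeholder subscription ID
        target_target_type=TargetTypes.SUBSCRIPTION,
    )
    .result.data_sets[0]
    .metadata.id
)

# Store feedback records; background_mode=False waits for the write to finish.
wos_client.data_sets.store_records(
    data_set_id=feedback_data_set_id,
    request_body=[
        {"age": 34, "income": 52000.0, "risk_label": "No Risk"},  # hypothetical schema
    ],
    background_mode=False,
)
```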
For pre-production models, you can connect to a CSV file with feedback data that is stored in Cloud Object Storage or Db2. For more information, see Reviewing evaluation results.
Parent topic: Managing data for model evaluations in Watson OpenScale