Configuring model evaluations in Watson OpenScale
Configure Watson OpenScale evaluations to generate insights about your model performance.
You can configure the following types of evaluations in Watson OpenScale:
- Quality
Evaluates how well your model predicts correct outcomes that match labeled test data.
- Fairness
Evaluates whether your model produces biased outcomes that provide favorable results for one group over another.
- Drift
Evaluates changes in your model's accuracy and data consistency by comparing recent transactions to your training data.
- Drift v2
Evaluates changes in your model output, the accuracy of your predictions, and the distribution of your input data.
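As an illustration of what configuring an evaluation involves, a fairness evaluation is typically defined by the features to monitor, the favorable and unfavorable outcome classes, and per-feature fairness thresholds. The following Python sketch builds such a parameters payload; the feature names, class labels, and threshold values are hypothetical, and the exact schema depends on your Watson OpenScale deployment:

```python
# Hypothetical fairness-evaluation parameters for a credit-risk model.
# Feature names, class labels, and threshold values are illustrative only.
fairness_parameters = {
    "features": [
        {
            "feature": "Sex",            # attribute to monitor for bias
            "majority": ["male"],        # reference group
            "minority": ["female"],      # monitored group
            "threshold": 0.95,           # minimum acceptable fairness score
        }
    ],
    "favourable_class": ["No Risk"],     # outcomes considered favorable
    "unfavourable_class": ["Risk"],      # outcomes considered unfavorable
    "min_records": 100,                  # transactions required before scoring
}

# Simple sanity check over the payload structure.
monitored = [f["feature"] for f in fairness_parameters["features"]]
print(monitored)  # → ['Sex']
```

Similar parameter payloads drive the quality and drift evaluations, with thresholds appropriate to each metric.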
For production model deployments, Watson OpenScale enables model health evaluations by default to help you determine how efficiently your model deployment processes transactions.
You can also create custom evaluations and metrics to generate a greater variety of insights about your model performance.
Each evaluation generates metrics that you can analyze to gain insights about your model performance. For more information, see Reviewing evaluation results.
Parent topic: Evaluating AI models with Watson OpenScale