When you evaluate models, you can analyze the results to gain insights into model performance.
You can use the Insights dashboard to analyze the evaluation results for your machine learning models. The dashboard provides an overview of the deployments that you're evaluating, the types of evaluations that you've configured, and any metric threshold violation alerts. Each deployment tile summarizes the results of your most recent model evaluation. You can select a deployment tile to review evaluation results that provide more insight into your model.
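The idea behind threshold violation alerts can be sketched as a comparison of the latest metric values against their configured limits. The following Python sketch is purely illustrative; the metric names, values, thresholds, and helper function are hypothetical and are not part of the product's API.

```python
# Hypothetical sketch: flag metric threshold violations similar to the
# alerts that surface on the dashboard. All names and values are
# illustrative, not the product's API.

# Latest evaluation results for a deployment (illustrative values).
latest_metrics = {
    "accuracy": 0.86,
    "fairness_disparate_impact": 0.78,
    "drift_magnitude": 0.12,
}

# Configured thresholds: (comparison, limit). A violation occurs when the
# metric value falls on the wrong side of its limit.
thresholds = {
    "accuracy": ("min", 0.90),
    "fairness_disparate_impact": ("min", 0.80),
    "drift_magnitude": ("max", 0.10),
}

def find_violations(metrics, thresholds):
    """Return the metrics that breach their configured thresholds."""
    violations = []
    for name, (kind, limit) in thresholds.items():
        value = metrics.get(name)
        if value is None:
            continue
        if (kind == "min" and value < limit) or (kind == "max" and value > limit):
            violations.append((name, value, limit))
    return violations

for name, value, limit in find_violations(latest_metrics, thresholds):
    print(f"Alert: {name}={value} breached its threshold of {limit}")
```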
When you review evaluation results, you can see how your model measured against each metric over time. You can also review individual model transactions to see which features contributed to the model's predicted outcome for each transaction and to understand what changes would result in a different outcome.
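Exploring what changes would produce a different outcome amounts to a what-if re-scoring exercise: change one feature value and check whether the prediction flips. The following sketch assumes a hypothetical scoring rule and feature names for illustration only; a real deployment would call the deployed model instead.

```python
# Hypothetical sketch: re-score a transaction after changing one feature
# to see whether the predicted outcome flips. The scoring rule, feature
# names, and values are illustrative stand-ins.

def score(transaction):
    # Stand-in scoring rule; a real deployment would call the model endpoint.
    return "approved" if transaction["income"] > 40000 and transaction["debt_ratio"] < 0.4 else "denied"

transaction = {"income": 38000, "debt_ratio": 0.35}
baseline = score(transaction)

# Try alternative values for a single feature and report any outcome change.
for candidate_income in (40000, 42000, 45000):
    what_if = {**transaction, "income": candidate_income}
    outcome = score(what_if)
    if outcome != baseline:
        print(f"Changing income to {candidate_income} flips the outcome: {baseline} -> {outcome}")
        break
```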
Parent topic: Evaluating AI models