Configuring model evaluations

Configure Watson OpenScale evaluations for fairness, quality, and drift to ensure that your models perform as expected. Configure explainability to explore what-if scenarios for your model. Set alerts to notify you when a model performs below a threshold that you set.
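Evaluations can be configured in the Watson OpenScale UI or programmatically. The following is a minimal sketch of connecting with the ibm-watson-openscale Python SDK, assuming an IBM Cloud API key and an existing data mart; the placeholder API key and variable names are illustrative assumptions, not values from your environment.

```python
# Minimal sketch: authenticate to Watson OpenScale and look up the data mart.
# The API key below is a placeholder (assumption); replace it with your own.
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson_openscale import APIClient

authenticator = IAMAuthenticator(apikey="YOUR_IBM_CLOUD_API_KEY")
client = APIClient(authenticator=authenticator)

# The data mart stores configuration and metric data for your deployments.
# This assumes at least one data mart already exists in the service instance.
data_marts = client.data_marts.list().result.data_marts
data_mart_id = data_marts[0].metadata.id
print("Using data mart:", data_mart_id)
```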

Monitors evaluate your deployments against specified metrics. Configure alerts that indicate when a metric crosses a threshold. Watson OpenScale evaluates your deployments by using the following default evaluations:

  • Quality describes the model’s ability to provide correct outcomes based on labeled test data called Feedback data.
  • Fairness describes how evenly the model delivers favorable outcomes between groups. The Fairness monitor looks for biased outcomes in your model.
  • Drift warns you of a drop in accuracy or data consistency.
  • Explainability reveals which features contributed to the model’s predicted outcome for a transaction and suggests what changes would result in a different outcome.

You can also create Custom evaluations for your deployment.
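As an illustration of enabling one of these evaluations programmatically, the following sketch configures a fairness monitor with an alert threshold for an existing subscription, reusing the client and data_mart_id from the earlier sketch. The subscription ID, monitored feature, favorable classes, and threshold values are assumptions that you would replace with values that match your own model.

```python
from ibm_watson_openscale.base_classes.watson_open_scale_v2 import Target, MetricThreshold
from ibm_watson_openscale.supporting_classes.enums import TargetTypes, MetricThresholdTypes

# Point the monitor at an existing subscription (deployment); the ID is a placeholder.
target = Target(target_type=TargetTypes.SUBSCRIPTION, target_id="YOUR_SUBSCRIPTION_ID")

# Fairness parameters (example values): the monitored feature, its reference (majority)
# and monitored (minority) groups, the favorable outcomes, and the minimum number of
# scored records required before the evaluation runs.
parameters = {
    "features": [
        {"feature": "Sex", "majority": ["male"], "minority": ["female"], "threshold": 0.95}
    ],
    "favourable_class": ["No Risk"],
    "unfavourable_class": ["Risk"],
    "min_records": 100,
}

# Alert threshold: flag the deployment if the fairness score drops below 95.
thresholds = [
    MetricThreshold(
        metric_id="fairness_value",
        type=MetricThresholdTypes.LOWER_LIMIT,
        value=95,
    )
]

fairness_monitor = client.monitor_instances.create(
    data_mart_id=data_mart_id,
    background_mode=False,
    monitor_definition_id=client.monitor_definitions.MONITORS.FAIRNESS.ID,
    target=target,
    parameters=parameters,
    thresholds=thresholds,
).result
```

The quality and drift evaluations follow the same pattern: create a monitor instance with the corresponding monitor definition ID and the parameters and thresholds for that evaluation.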

Model evaluation metrics

Each evaluation uses metrics to assess the model. Default metrics are described for each evaluation, and you can also create custom metrics.
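For custom metrics, one possible approach with the Python SDK is to register a custom monitor definition that declares the metrics and their default alert thresholds, roughly as sketched below. The monitor name, metric names, tag, and threshold values are hypothetical; computing and publishing the metric values for a deployment is a separate step that is not shown here.

```python
from ibm_watson_openscale.base_classes.watson_open_scale_v2 import (
    MonitorMetricRequest,
    MonitorTagRequest,
    MetricThreshold,
)
from ibm_watson_openscale.supporting_classes.enums import MetricThresholdTypes

# Hypothetical custom metrics, each with a default lower-limit alert threshold.
metrics = [
    MonitorMetricRequest(
        name="sensitivity",
        thresholds=[MetricThreshold(type=MetricThresholdTypes.LOWER_LIMIT, default=0.8)],
    ),
    MonitorMetricRequest(
        name="specificity",
        thresholds=[MetricThreshold(type=MetricThresholdTypes.LOWER_LIMIT, default=0.75)],
    ),
]

# Optional tag that metric records can be grouped by.
tags = [MonitorTagRequest(name="region", description="Deployment region for the scored records")]

custom_monitor = client.monitor_definitions.add(
    name="My custom performance monitor",
    metrics=metrics,
    tags=tags,
    background_mode=False,
).result
```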

Next steps

Parent topic: Evaluating AI models with Watson OpenScale