You can configure evaluations to generate insights about your model performance.
You can configure the following types of evaluations:
Quality: Evaluates how well your model predicts correct outcomes that match labeled test data (see the metric sketch after this list).
Fairness: Evaluates whether your model produces biased outcomes that favor one group over another.
Drift (machine learning models only): Evaluates changes in your model's accuracy and data consistency by comparing recent transactions to your training data.
Drift v2: Evaluates changes in your model output, the accuracy of your predictions, and the distribution of your input data.
Model health: Evaluates how efficiently your model deployment processes your transactions.
Generative AI quality (LLM models only): Measures how well your foundation model performs tasks.
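To make the first three evaluation types concrete, the following sketch shows one common way the underlying metrics can be computed on a scored data set: accuracy for quality, a disparate impact ratio for fairness, and a population stability index for drift. The column names (prediction, label, group), function names, and binning choices are assumptions for illustration only, not the product's implementation.

```python
# Illustrative metric calculations for quality, fairness, and drift evaluations.
# Column names and thresholds are placeholders, not a product schema.
import numpy as np
import pandas as pd
from sklearn.metrics import accuracy_score


def quality_accuracy(scored: pd.DataFrame) -> float:
    """Quality: fraction of predictions that match the labeled test data."""
    return accuracy_score(scored["label"], scored["prediction"])


def fairness_disparate_impact(scored: pd.DataFrame, monitored: str, reference: str) -> float:
    """Fairness: rate of favorable outcomes for the monitored group divided by
    the rate for the reference group (values well below 1.0 suggest bias)."""
    favorable = scored["prediction"] == 1
    monitored_rate = favorable[scored["group"] == monitored].mean()
    reference_rate = favorable[scored["group"] == reference].mean()
    return float(monitored_rate / reference_rate)


def drift_psi(training: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    """Drift: population stability index comparing recent transactions for one
    feature against the training data; higher values indicate a larger shift."""
    edges = np.histogram_bin_edges(training, bins=bins)
    expected, _ = np.histogram(training, bins=edges)
    actual, _ = np.histogram(recent, bins=edges)
    expected_pct = np.clip(expected / expected.sum(), 1e-6, None)
    actual_pct = np.clip(actual / actual.sum(), 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))
```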
If you're evaluating traditional machine learning models, you can also create custom evaluations and metrics to generate a greater variety of insights about your model performance.
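A custom metric reduces to a function over recent scored records that returns one number, plus a limit to compare it against. The sketch below shows one possible shape for that; the CustomMetric class, its fields, and the probability column are hypothetical and are not a product API.

```python
# Hypothetical structure of a custom metric definition: a compute function and
# a threshold. Names and fields are illustrative only.
from dataclasses import dataclass
from typing import Callable

import pandas as pd


@dataclass
class CustomMetric:
    name: str
    compute: Callable[[pd.DataFrame], float]
    lower_limit: float  # flag the metric when its value falls below this limit


def high_confidence_rate(scored: pd.DataFrame) -> float:
    """Share of recent predictions made with probability of at least 0.8."""
    return float((scored["probability"] >= 0.8).mean())


confidence_metric = CustomMetric(
    name="high_confidence_rate",
    compute=high_confidence_rate,
    lower_limit=0.7,
)
```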
Each evaluation generates metrics that you can analyze to gain insights about your model performance.
When you configure evaluations, you can choose to run them continuously at the following default scheduled intervals: