With watsonx.governance, you can evaluate prompt templates for generative AI models as well as machine learning models in projects and spaces.
You can track and measure outcomes from your AI models to help ensure that they are compliant with business processes, no matter where your models are built or run.
- Required service: watsonx.ai Runtime
- Training data format:
  - Relational: tables in relational data sources
  - Tabular: Excel files (.xls or .xlsx) and CSV files (see the loading sketch after this list)
  - Textual: in the supported relational tables or files
- Connected data: Cloud Object Storage (infrastructure), Db2
- Data size: Any
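For context, tabular training data is typically loaded into a dataframe before you prepare it for an evaluation. The following is a minimal sketch, assuming a hypothetical local file named training_data.csv with a label column named Risk; the file name and column names are illustrative, not part of the product.

```python
import pandas as pd

# Hypothetical file and column names, for illustration only.
# Supported tabular formats include CSV and Excel (.xls or .xlsx).
training_data = pd.read_csv("training_data.csv")

# Separate the feature columns from the label column that the model predicts.
features = training_data.drop(columns=["Risk"])
labels = training_data["Risk"]

print(features.shape)
print(labels.value_counts())
```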
Enterprises use model evaluations as part of AI governance strategies to ensure that models in deployment environments meet established compliance standards, regardless of the tools and frameworks that are used to build and run the models. This approach helps ensure that AI models are free from bias, can be explained and understood by business users, and remain auditable in business transactions.
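To make the bias point concrete, fairness evaluations typically compare favorable-outcome rates between a monitored group and a reference group. The following minimal sketch computes a disparate impact ratio with pandas; the column names (Sex, Prediction), group values (female, male), and favorable outcome (No Risk) are illustrative assumptions, not the product's configuration.

```python
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, monitored: str,
                     reference: str, outcome_col: str, favorable: str) -> float:
    """Ratio of favorable-outcome rates: monitored group / reference group.

    A value well below 1.0 suggests the monitored group receives the
    favorable outcome less often than the reference group.
    """
    def rate(group: str) -> float:
        # Fraction of rows in the group whose outcome is the favorable one.
        return (df.loc[df[group_col] == group, outcome_col] == favorable).mean()

    return rate(monitored) / rate(reference)

# Illustrative scored records; column names and values are assumptions.
scored = pd.DataFrame({
    "Sex": ["female", "male", "female", "male", "male", "female"],
    "Prediction": ["No Risk", "No Risk", "Risk", "No Risk", "No Risk", "Risk"],
})

print(disparate_impact(scored, "Sex", "female", "male", "Prediction", "No Risk"))
```

A ratio near 1.0 indicates similar favorable-outcome rates across the two groups; acceptable thresholds are typically set when you configure a fairness evaluation.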
Watch this short video to learn more about model evaluations.
This video provides a visual way to learn the concepts and tasks in this documentation.
Try a tutorial
The Evaluate a machine learning model tutorial provides hands-on experience with configuring evaluations to monitor fairness, quality, and explainability.
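Before you configure evaluations programmatically, you first authenticate a client against the service. The following is a minimal connectivity sketch using the ibm-watson-openscale Python SDK; the API key is a placeholder, and the available methods can vary by SDK version, so treat this as an outline rather than the definitive setup.

```python
# pip install ibm-watson-openscale
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson_openscale import APIClient

# Placeholder credential; supply your own IBM Cloud API key.
authenticator = IAMAuthenticator(apikey="YOUR_IBM_CLOUD_API_KEY")
client = APIClient(authenticator=authenticator)

# List existing data marts and subscriptions to verify connectivity.
client.data_marts.show()
client.subscriptions.show()
```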
Learn more
- Setup options for Watson OpenScale
- Glossary
- FAQs
- Supported machine learning engines, frameworks, and models
- Preparing to evaluate models
- Managing data for model evaluations
- Configuring model evaluations
- Configuring explainability
- Reviewing model insights
- Model risk management and model governance
Parent topic: Governing AI assets