Planning for AI governance
Last updated: Nov 27, 2024

Plan how to use watsonx.governance to accelerate responsible, transparent, and explainable AI workflows with an AI governance solution that provides end-to-end monitoring for machine learning and generative AI models.

AI governance capabilities differ depending on your deployment environment:

  • Watsonx.governance on IBM Cloud provides most AI governance capabilities. You can integrate the IBM OpenPages service to enable the Governance console. All solutions are available (licensing is required).
  • Watsonx.governance on Amazon Web Services (AWS) provides the Governance console with the Model Risk Governance solution.

Governance capabilities

Note:

To govern metadata from foundation models curated by IBM, you must have watsonx.ai provisioned.

Consider these watsonx.governance capabilities as you plan your governance strategy:

  • Collect metadata in factsheets about machine learning models and prompt templates for generative AI foundation models (see the sketch after this list).
  • Monitor machine learning deployments for fairness, drift, and quality to ensure that your models are meeting specified standards.
  • Monitor generative AI assets for breaches of toxic language thresholds or for the detection of personally identifiable information.
  • Evaluate prompt templates with metrics designed to measure performance and to test for the presence of prohibited content, such as hateful speech.
  • Collect model health data that includes data size, latency, and throughput to help you assess performance issues and manage resource consumption.
  • Use the AI risk atlas as a guide to the challenges of AI solutions so you can plan risk mitigation and meet your compliance and regulatory goals.
  • Assign a single risk score to tracked models to indicate the relative impact of the associated model. For example, a model that predicts sensitive information such as a credit score might be assigned a high risk score.
  • Use the automated transaction analysis tools to improve transparency and explainability for your AI assets. For example, see how a feature contributes to a prediction and test what-if scenarios to explore different outcomes.
  • Optionally integrate with IBM OpenPages Model Risk Governance to collect metadata about foundation models and machine learning models to help you achieve your governance goals. You can also use OpenPages to develop workflows that support your governance processes.
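For example, factsheet metadata collection can begin at training time. The following sketch assumes the ibm-aigov-facts-client Python library; the parameter names (such as enable_autolog) and the credential, experiment, and project values are illustrative and might differ in your SDK version and environment:

# A minimal sketch of collecting training metadata in factsheets,
# assuming the ibm-aigov-facts-client library
# (pip install ibm-aigov-facts-client). All IDs are placeholders.
from ibm_aigov_facts_client import AIGovFactsClient
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

facts_client = AIGovFactsClient(
    api_key="YOUR_IBM_CLOUD_API_KEY",     # placeholder credential
    experiment_name="credit-risk-model",  # hypothetical experiment name
    container_type="project",             # store the facts in a project
    container_id="YOUR_PROJECT_ID",       # placeholder project ID
    enable_autolog=True,                  # capture training runs automatically
)

# With autolog enabled, training a model with a supported framework
# (scikit-learn here) logs parameters and metrics to the factsheet
# without further instrumentation.
X, y = make_classification(n_samples=500, random_state=42)
LogisticRegression(max_iter=200).fit(X, y)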

Planning for governance

Consider these governance strategies:

Build a governance team

Consider the expertise that you need on your governance team. A typical governance plan might include the following roles. Sometimes, the same person might fill multiple roles. In other cases, a role might represent a team of people.

  • Model owner: The owner creates an AI use case to track a solution to a business need. The owner requests the model or prompt template, manages the approval process, and tracks the solution through the AI lifecycle.
  • Model developer or Data scientist: The developer works with the data in a data set or a large language model (LLM) and creates the machine learning model or LLM prompt template.
  • Model validator: The validator tests the solution to determine whether it meets the goals that are stated in the AI use case.
  • Risk and compliance manager: The risk manager determines the policies and compliance thresholds for the AI use case. For example, the risk manager might determine the rules to apply for testing a solution for fairness or for screening output for hateful and abusive speech.
  • MLOps engineer: The MLOps engineer moves a solution from a pre-production (test) environment to a production environment when a solution is deemed ready to be fully deployed.
  • App developer: Following deployment, the app developer runs evaluations against the deployment to monitor how the solution performs against the metric thresholds that the risk and compliance manager sets. If performance drops below the specified thresholds, the app developer works with the other stakeholders to address problems and update the model or prompt template.

Set up a governance structure

After you identify roles and assemble a team, plan your governance structure.

  1. Create an inventory for storing AI use cases. An inventory is where you store and view AI use cases and the factsheets that are associated with the governed assets. Depending on your governance requirements, store all use cases in a single inventory, or create multiple inventories for your governance efforts.
  2. Create projects for collaboration. If you are using IBM tools, create a watsonx.ai Studio project. The project can hold the data that is required to train or test the AI solution and the model or prompt template that is governed. Use access control to restrict the project to approved collaborators.
  3. Create a pre-production deployment space. Use the space to test your model or prompt template by using test data. Like a project, a space provides access control features so you can include the required collaborators.
  4. Configure test and validation evaluations. Provide the model or prompt template details and configure a set of evaluations to test the performance of your solution. For example, you might test a machine learning model for dimensions such as fairness, quality, and drift, and test a prompt template against metrics such as perplexity (how accurate the output is) or toxicity (whether the output contains hateful or abusive speech). By testing on known (labeled) data, you can evaluate the performance before you move a solution to production. A configuration sketch follows this list.
  5. Configure a production space. When the model or prompt template is ready to be deployed to a production environment, move the solution and all dependencies to a production space. A production space typically has a tighter access control list.
  6. Configure evaluations for the deployed model. Provide the model details and configure evaluations for the solution. You can now test against live data rather than test data. It is important to monitor your solution so that you are alerted if thresholds are crossed, indicating a potential problem with the deployed solution.
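As an example of steps 4 and 6, you can configure evaluations programmatically. The following sketch assumes the ibm-watson-openscale Python SDK; all IDs are placeholders, and the fairness parameters are illustrative values for a hypothetical credit-risk model:

# A minimal sketch of configuring a fairness evaluation, assuming the
# ibm-watson-openscale Python SDK (pip install ibm-watson-openscale).
# IDs are placeholders; parameter details can vary by version.
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson_openscale import APIClient
from ibm_watson_openscale.supporting_classes.enums import TargetTypes
from ibm_watson_openscale.base_classes.watson_open_scale_v2 import Target

wos_client = APIClient(authenticator=IAMAuthenticator(apikey="YOUR_API_KEY"))

fairness_instance = wos_client.monitor_instances.create(
    data_mart_id="YOUR_DATA_MART_ID",
    monitor_definition_id=wos_client.monitor_definitions.MONITORS.FAIRNESS.ID,
    target=Target(
        target_type=TargetTypes.SUBSCRIPTION,
        target_id="YOUR_SUBSCRIPTION_ID",  # the deployed model's subscription
    ),
    parameters={
        # Illustrative configuration: flag bias against a monitored group
        # if its rate of favorable outcomes falls below 95% of the
        # reference group's rate, once at least 100 records are scored.
        "features": [
            {"feature": "Sex", "majority": ["male"],
             "minority": ["female"], "threshold": 0.95}
        ],
        "favourable_class": ["No Risk"],
        "unfavourable_class": ["Risk"],
        "min_records": 100,
    },
    background_mode=False,
)

Quality and drift evaluations follow the same pattern with their own monitor definition IDs and parameters.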

Manage collaboration for governance

Watsonx.governance is built on a collaborative platform so that all approved team members can contribute to the goal of solving business problems.

To plan for collaboration, consider how to manage access to the inventories, projects, and spaces you use for governance.

Use roles along with access control features to ensure that your team has appropriate access to meet goals.

Develop a communication plan

Parts of the workflow for defining an AI use case and moving assets through the lifecycle rely on effective communication. Decide how your team will communicate and establish the details. For example:

  • Will you use email for decision-making or a messaging tool such as Slack?
  • Is there a formal process for adding comments to an asset as it moves through a workflow?

Create your communication plan and share it with your team.

Implement a simple governance solution

As you roll out your governance strategy, start with a simple implementation, then consider how to build incrementally to a more comprehensive solution. The simplest implementation requires an AI use case in an inventory, with an asset moving from request to production.

For the most straightforward implementation of AI governance, create an AI use case for tracking assets. An AI use case in an inventory consists of a set of factsheets that contain lineage, history, and other relevant information about a model's lifecycle. Add data scientists, data engineers, and other users as collaborators.

The following figure shows how AI use case owners can request and track assets:

  • Business users create AI use cases in the inventory to request machine learning models or LLM prompt templates.
  • Data scientists associate the trained asset with an AI use case to track lifecycle activities (see the sketch below).

Figure: Inventories store factsheets with metadata about governed assets.
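For example, a data scientist can associate a trained model with an AI use case from a notebook. The following sketch again assumes the ibm-aigov-facts-client library; the method names (get_model, get_ai_usecase, track) reflect common usage but might differ by SDK version, and all IDs are placeholders:

# A minimal sketch of tracking a model in an AI use case, assuming the
# ibm-aigov-facts-client library. IDs and method names are illustrative
# and might differ across SDK versions.
from ibm_aigov_facts_client import AIGovFactsClient

facts_client = AIGovFactsClient(
    api_key="YOUR_IBM_CLOUD_API_KEY",
    container_type="project",
    container_id="YOUR_PROJECT_ID",
    experiment_name="credit-risk-model",
)

model_asset = facts_client.assets.get_model(model_id="YOUR_MODEL_ID")
usecase = facts_client.assets.get_ai_usecase(
    ai_usecase_id="YOUR_USECASE_ID",
    catalog_id="YOUR_INVENTORY_ID",  # the inventory that holds the use case
)

# Tracking links the model's factsheet to the use case so that lifecycle
# activities (deployments, evaluations) appear in the use case view.
model_asset.track(
    usecase=usecase,
    approach=usecase.get_approaches()[0],  # default approach, if one exists
    version_number="minor",
)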

AI factsheets accumulate information about a model or prompt template in the following ways:

  • All actions that are associated with the tracked asset are automatically saved, including deployments and evaluations.
  • All changes to input data assets are automatically saved.
  • Data scientists can add tags, supporting documentation, and other information.
  • Data scientists can associate challenger models with the AI use cases to compare model performance.

Validators and other stakeholders review AI factsheets to ensure compliance and certify asset progress from development to production. They can also generate reports from the factsheets to print, share, or archive details.

Plan for more complex solutions

You can extend your AI governance implementation at any time. Consider these options to extend governance:

  • MLOps engineers can extend model tracking to include external models that are created with third-party machine learning tools.
  • MLOps engineers can add custom properties to factsheets to track more information.
  • Compliance analysts can customize the default report templates to generate tailored reports for the organization.
  • Integrate AI Factsheets with the Governance console to support an enterprise view of governance activity and to develop workflows to support governance processes.

Governing assets that are created locally or externally

Watsonx.governance provides the tools for you to govern assets that you create by using IBM tools, such as machine learning models that are created by using AutoAI or foundation model prompt templates that are created in a watsonx project. You can also govern machine learning models that are created on non-IBM platforms, such as Microsoft Azure or Amazon Web Services. As you develop your governance plan, consider these differences:

  • IBM assets developed with tools such as watsonx.ai Studio are available for governance earlier in the lifecycle. You can track the factsheet for a local asset from the Development phase, and have visibility into details such as the training data and creation details from an earlier stage.
  • An inventory owner or administrator must enable governance for external models.
  • When governance is enabled for external models, you can add them to an AI use case explicitly. If you track an external model in the develop phase, lifecycle activities for the validate and operate phases are tracked automatically.

For a list of supported machine learning model providers, see Supported machine learning providers.
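For example, a model that is trained outside IBM tools can be registered for governance from a notebook. The following sketch assumes the ibm-aigov-facts-client library initialized in external-model mode; the save_external_model_asset call and its parameters reflect common usage but might differ by SDK version:

# A minimal sketch of registering an external (non-IBM) model for
# governance, assuming the ibm-aigov-facts-client library. Identifiers
# and parameter names are illustrative and might vary by version.
from ibm_aigov_facts_client import AIGovFactsClient

facts_client = AIGovFactsClient(
    api_key="YOUR_IBM_CLOUD_API_KEY",
    experiment_name="external-churn-model",  # hypothetical experiment name
    external_model=True,                     # enable external-model factsheets
)

facts_client.external_model_facts.save_external_model_asset(
    model_identifier="churn-model-001",  # your own stable model identifier
    name="Customer churn (Azure ML)",
    description="Gradient-boosted churn classifier trained in Azure ML",
)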

Next steps

To begin governance, follow the steps in Setting up watsonx.governance to start evaluating and tracking models.

Parent topic: Watsonx.governance overview
