Review the details that are captured for each tracked asset in a factsheet associated with an AI use case. You can also export a report to share or archive the factsheet details.
What is captured in a factsheet?
From the point where you start tracking an asset in an AI use case, facts are collected in a factsheet for the asset. As the asset moves from one phase of the lifecycle to the next, the facts are added to the appropriate section.
For example, a factsheet for a prompt template collects information for these categories:
Governance: Basic details for governance, including the name of the use case, the version number, and the approach
Foundation model: Name and provider of the foundation model
Prompt template: Prompt template name, description, input, and variables
Prompt parameters: Options used to create the prompt template, such as the decoding method
Evaluation: Results of the most recent evaluation
Attachments: Attached files and supporting documents
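To make the categories more concrete, the following sketch shows one way these facts might be represented in a simple Python structure. Every field name and value is a made-up example for illustration only; it is not the factsheet's actual schema.

```python
# Illustrative only: example facts grouped by the categories described above.
prompt_template_factsheet = {
    "governance": {
        "use_case": "Claim summarization",    # AI use case the asset is tracked in
        "version": "1.0.0",                   # version number
        "approach": "Default approach",       # approach information
    },
    "foundation_model": {
        "name": "granite-13b-chat-v2",        # example foundation model
        "provider": "IBM",
    },
    "prompt_template": {
        "name": "claim-summary-prompt",
        "description": "Summarizes an insurance claim report",
        "input": "Summarize the following claim: {claim_text}",
        "variables": ["claim_text"],
    },
    "prompt_parameters": {
        "decoding_method": "greedy",          # options used to create the template
        "max_new_tokens": 200,
    },
    "evaluation": {},                         # results of the most recent evaluation
    "attachments": [],                        # attached files and supporting documents
}
```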
The following image shows the factsheet for a prompt template that is being tracked in an AI use case.
Important: The factsheet records the most recent activity in each category. For example, if you evaluate a deployed prompt template in a preproduction space and then evaluate it in a production space, the details from the production evaluation are captured in the factsheet, overwriting the previous data. In this way, the factsheet always reflects the current state of the asset.
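As a rough illustration of that behavior, the snippet below models the Evaluation section as a single entry that each new evaluation replaces. The field names and values are invented for this example only.

```python
# Illustrative only: the Evaluation section holds just the latest results.
factsheet = {"evaluation": {"space": "preproduction", "quality_score": 0.91}}

# A later evaluation in a production space replaces the earlier entry
# instead of being stored alongside it.
factsheet["evaluation"] = {"space": "production", "quality_score": 0.88}

print(factsheet["evaluation"])  # only the production evaluation remains
```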
Tracking details
When tracking is enabled, the recording of facts is triggered by actions on the tracked asset. Similarly, changes in the AI lifecycle control where the asset displays in an AI use case, to show the progression from development to operation.
For example, when an asset is saved in a project and tracking is enabled, the asset displays in the Develop phase. When an asset is evaluated, whether in a project or space, the use case is updated to show the asset and all evaluation data
in the Validate phase. Deployments in production display in the Operate phase.
Model location and the corresponding phase:
Project: Develop
Preproduction space (deployed, not yet evaluated): Develop
Preproduction space (evaluated): Validate
Production space: Operate
Prompt template location and the corresponding phase:
Development project: Develop
Validation project: Validate
Space: Operate
Note: If you created AI use cases in a previous version, factsheet data that was listed in the Test phase is now listed in the Validate phase. This change emphasizes the asset lifecycle rather than whether an evaluation was conducted in a project or a space.
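As a rough summary of the rules above, the following sketch expresses them as a small helper function. The function is hypothetical and written only for this example; it is not part of any product API.

```python
# Hypothetical helper, for illustration only. It encodes the rule described
# above: a tracked asset starts in Develop, an evaluation moves it to
# Validate, and a deployment in production places it in Operate.
def lifecycle_phase(location: str, evaluated: bool = False) -> str:
    if location == "production space":
        return "Operate"
    # Projects and preproduction spaces: an evaluation promotes the asset.
    return "Validate" if evaluated else "Develop"

print(lifecycle_phase("project"))                              # Develop
print(lifecycle_phase("preproduction space", evaluated=True))  # Validate
```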
Viewing a factsheet for a prompt template
The following image shows the lifecycle information for the prompt template.
Click a category to view details for a particular aspect of the prompt template. For example, click Parameters to view the settings that were used to create the prompt template.
Viewing a factsheet for a machine learning model
The model factsheet displays all of the details for a machine learning model.
The details in the factsheet depend on the type of model. For example, a model that is trained by using AutoAI displays:
General model details such as name, model ID, type of model and software specification used. The input schema shows the details for the model structure.
Training information including the associated project, training data source, and hybrid pipeline name.
General deployment details, such as deployment name, deployment ID, and software specification used.
Evaluation details for your model, including:
Payload logging and Feedback data information
Quality metrics after evaluating the deployment
Fairness metrics that test for bias in monitored groups
Drift and drift v2 details following an evaluation
Global explainability details
Metrics information from custom monitors
Model health details about scoring requests to online deployments, including payload size, records, scoring requests, throughput and latency, and users
Note: If integration with IBM OpenPages is enabled, these facts are shared with OpenPages.
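If you work with these evaluation details outside the UI, for example after exporting a report, it can help to think of them as one group of facts per monitor. The sketch below uses invented field names and values purely to illustrate that grouping; it is not the factsheet's actual schema.

```python
# Illustrative only: made-up evaluation facts grouped by monitor type.
model_evaluation_facts = {
    "quality":         {"accuracy": 0.92, "f1": 0.89},
    "fairness":        {"disparate_impact": 0.97},     # bias in monitored groups
    "drift":           {"drift_magnitude": 0.05},
    "drift_v2":        {"output_drift": 0.02},
    "explainability":  {"top_feature": "age"},         # global explainability
    "custom_monitors": {},                             # metrics from custom monitors
    "model_health":    {"scoring_requests": 1250, "median_latency_ms": 43},
}

# Example: list the monitors that have recorded results.
evaluated = [name for name, facts in model_evaluation_facts.items() if facts]
print(evaluated)
```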
Next steps
Click Export report to save a report of the factsheet.