Global explanation stability in Watson OpenScale explainability metrics

Global explanation stability measures the degree of consistency between global explanations over time in Watson OpenScale.

How it works

Watson OpenScale generates a baseline global explanation from the baseline data that you provide when you configure explainability evaluations. Global explanations identify the features that have the most impact on the behavior of your model. Each time Watson OpenScale generates a new global explanation, it compares that explanation to the baseline global explanation to calculate global explanation stability. Global explanation stability uses the normalized discounted cumulative gain (NDCG) formula to determine the similarity between each new global explanation and the baseline global explanation.

Global explanation stability at a glance

  • Description: Higher values indicate greater similarity to the baseline explanation

    • At 0: The explanations are very different.
    • At 1: The explanations are very similar.

Do the math

The following formula is used for calculating global explanation stability:

nDCGₚ = DCGₚ / IDCGₚ

where DCGₚ is the discounted cumulative gain of the new explanation's feature ranking, and IDCGₚ is the ideal discounted cumulative gain, obtained when the features are ranked exactly as in the baseline explanation.
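The nDCG comparison can be illustrated with a small sketch. The function names and the toy feature-importance values below are hypothetical for illustration; they are not part of the Watson OpenScale API. Each feature's baseline importance serves as its relevance score, so a new explanation that ranks features in the same order as the baseline scores 1.0, and reordered rankings score lower:

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: items ranked earlier contribute more."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg_stability(baseline_importance, new_ranking):
    """nDCG similarity between a new feature ranking and the baseline.

    baseline_importance: dict of feature -> baseline importance score
    new_ranking: features ordered by importance in the new explanation
    """
    # Relevance of each feature in the new ranking = its baseline importance.
    gains = [baseline_importance[f] for f in new_ranking]
    # Ideal ordering: features ranked by descending baseline importance.
    ideal = sorted(baseline_importance.values(), reverse=True)
    return dcg(gains) / dcg(ideal)

# Hypothetical baseline importances from the initial global explanation.
baseline = {"age": 0.5, "income": 0.3, "tenure": 0.2}

# Same ranking as the baseline -> stability of 1.0.
print(round(ndcg_stability(baseline, ["age", "income", "tenure"]), 3))

# Top two features swapped -> lower stability.
print(round(ndcg_stability(baseline, ["income", "age", "tenure"]), 3))
```

A ranking identical to the baseline yields 1.0, while swapping the two most important features drops the score below 1, matching the 0-to-1 interpretation described above.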

Learn more

Explaining model transactions

Parent topic: Configuring explainability in Watson OpenScale
