Configuring explainability in Watson OpenScale

You can configure explainability in Watson OpenScale to reveal which features contribute to the model's predicted outcome for a transaction and predict what changes would result in a different outcome.

In the Explainability section of your model configuration page, configure explainability to analyze the factors that influence your model outcomes. You can choose to configure local explanations to analyze the impact of factors for specific model transactions and configure global explanations to analyze general factors that impact model outcomes.
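If you prefer to work programmatically, the Watson OpenScale Python SDK (ibm-watson-openscale) exposes the same explainability configuration. The following minimal sketch connects a client on IBM Cloud; the API key is a placeholder, and the authentication details differ for Cloud Pak for Data installations:

```python
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson_openscale import APIClient

# Authenticate against IBM Cloud; replace with your own API key.
authenticator = IAMAuthenticator(apikey="YOUR_CLOUD_API_KEY")
wos_client = APIClient(authenticator=authenticator)

# Confirm connectivity by printing the client version.
print(wos_client.version)
```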

Configure general settings

On the General settings tab, you can configure explainability settings manually or you can run a custom notebook to generate an explainability archive. You can upload the archive to specify the settings for your evaluation.

If you configure the settings manually, you must specify the explanation methods that you want to use. The methods that you select determine the type of insights that Watson OpenScale provides for explainability. If you enable Global explanation, you can choose either SHAP (Shapley Additive Explanations) or LIME (Local Interpretable Model-Agnostic Explanations) as the global explanation method. For more information, see Explaining transactions. If you do not provide training data, you must upload an explainability archive.
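For example, the following sketch enables the explainability monitor for a subscription with the Python client from the previous example. The data_mart_id and subscription_id values are placeholders for identifiers from your own environment:

```python
from ibm_watson_openscale.base_classes.watson_open_scale_v2 import Target
from ibm_watson_openscale.supporting_classes.enums import TargetTypes

# Point the monitor at an existing model subscription.
target = Target(
    target_type=TargetTypes.SUBSCRIPTION,
    target_id=subscription_id,  # placeholder: your subscription ID
)

# Create the explainability monitor instance for that subscription.
explainability_details = wos_client.monitor_instances.create(
    data_mart_id=data_mart_id,  # placeholder: your data mart ID
    background_mode=False,
    monitor_definition_id=wos_client.monitor_definitions.MONITORS.EXPLAINABILITY.ID,
    target=target,
    parameters={"enabled": True},
).result
```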

You can also choose to specify controllable features and enable language support. Controllable features are features that can be changed and have a significant impact on your model outcomes. Watson OpenScale analyzes the controllable features that you specify to identify changes that might produce different outcomes.

If you enable language support, Watson OpenScale can analyze languages that are not space-delimited to determine explainability. You can configure Watson OpenScale to automatically detect supported languages or you can manually specify any supported languages that you want analyzed. You can't configure language support for structured and image models.
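Controllable features and language support are part of the same explainability configuration. The following sketch only illustrates the general shape of such a parameters object; the controllable_features and language_detection keys are illustrative assumptions, not documented parameter names:

```python
# Illustrative only: the key names below are assumptions, not documented parameters.
explainability_parameters = {
    "enabled": True,
    # Hypothetical key: limit contrastive analysis to features that can change.
    "controllable_features": ["balance", "loan_duration"],
    # Hypothetical key: automatically detect non-space-delimited languages.
    "language_detection": True,
}
```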

Configure SHAP explanation

If you use SHAP as the local explanation method or enable SHAP global explanation, you must specify settings on the SHAP tab that determine how SHAP explanations are calculated. To configure common settings, you must specify the number of perturbations that the model generates for each local explanation and select an option for using background data. Watson OpenScale uses background data to determine the influence of features on outcomes for global and local explanations.
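As a sketch, the common SHAP settings might look like the following configuration object; the key names and values are assumptions for illustration, not documented parameters:

```python
# Illustrative only: the key names below are assumptions, not documented parameters.
shap_settings = {
    # Number of perturbed samples generated for each local explanation.
    "perturbations_count": 1000,
    # Source of the background data used to estimate feature influence.
    "background_data": "training_data",
}
```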

If you enable SHAP global explanation, you must also configure settings for global explanation. You must specify the sample size of model transactions that is used to generate ongoing explanations and a schedule that determines when the explanations are generated. You must also specify a global explanation stability threshold and select an option that specifies how Watson OpenScale generates a baseline global explanation. These settings are used to calculate global explanation stability.
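A similar sketch of the SHAP global explanation settings follows; again, the key names and values are illustrative assumptions rather than documented parameters:

```python
# Illustrative only: the key names below are assumptions, not documented parameters.
shap_global_settings = {
    # Number of transactions sampled for each ongoing global explanation.
    "sample_size": 500,
    # How often ongoing global explanations are generated.
    "schedule": "weekly",
    # Stability threshold below which the global explanation is flagged.
    "stability_threshold": 80,
    # How the baseline global explanation is produced.
    "baseline": "training_data",
}
```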

Limitations

  • When you configure settings for SHAP global explanations, Watson OpenScale has the following limitations:
    • The sample size that you use to configure explanations can affect the number of explanations that Watson OpenScale can generate during specific time periods. If you attempt to generate multiple explanations for large sample sizes, Watson OpenScale might fail to process your transactions.
    • If you configure explanations for multiple Watson OpenScale subscriptions, you must use the default values for the sample size and number of perturbations settings when your deployment contains 20 or fewer features.
  • Watson OpenScale does not support equal signs (=) in column names in your data. The equal sign might cause an error.
  • Explainability is not supported for SPSS multiclass models that return only the winning class probability.
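After explainability is configured, you can also request local explanations for specific transactions through the Python SDK, which is useful for staying within the limits described above. In this sketch, the scoring IDs are placeholders for transaction IDs from your payload log, and exact method signatures can vary between SDK releases:

```python
# Request LIME and contrastive explanations for specific payload transactions.
result = wos_client.monitor_instances.explanation_tasks(
    scoring_ids=["scoring_id_1", "scoring_id_2"],  # placeholders
    explanation_types=["lime", "contrastive"],
).result

# Explanation tasks run asynchronously; poll each task until it finishes.
task_id = result.metadata.explanation_task_ids[0]
explanation = wos_client.monitor_instances.get_explanation_tasks(
    explanation_task_id=task_id
).result
```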

Parent topic: Evaluating AI models with Watson OpenScale
