Configuring the explainability monitor
In IBM Watson OpenScale, you use the explainability monitor to set whether each feature in a classification model is controllable.
Regression, unstructured text, and image classification models do not support controllable features.
Requirements for controllable features
A controllable feature is one that can be changed and that has a meaningful impact on the outcome. For example, a loan amount is a controllable feature that might affect whether an applicant is approved. An uncontrollable feature is an inherent characteristic, such as sex or age, that is beyond a person's ability to adjust.
To start the configuration process, from the Explainability tab, in the Controllable features box, click the Edit icon.
For each feature, select whether it is controllable.
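The controllable-feature settings described above can be pictured as a simple configuration payload. This is a minimal sketch only: the field names (`name`, `controllable`) and feature names are illustrative, not the actual OpenScale API schema.

```python
# Hypothetical sketch of controllable-feature settings for a credit model.
# Field names and feature names are illustrative assumptions, not the
# OpenScale API schema.
features = [
    {"name": "LoanAmount", "controllable": True},    # applicant can change
    {"name": "LoanDuration", "controllable": True},  # applicant can change
    {"name": "Age", "controllable": False},          # inherent attribute
    {"name": "Sex", "controllable": False},          # inherent attribute
]

def controllable_features(features):
    """Return the names of the features that are marked as controllable."""
    return [f["name"] for f in features if f["controllable"]]

print(controllable_features(features))  # ['LoanAmount', 'LoanDuration']
```

Marking inherent attributes as uncontrollable keeps explanations from suggesting changes, such as a different age, that an applicant could never act on.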
Enabling non-space-delimited language support
Explainability, including word highlighting, is supported even for languages that are not space-delimited, such as Japanese, Chinese, and Korean. This feature is off by default; you must enable it manually. Optionally, you can have the system automatically detect the language. With this feature enabled, explanations that are generated for languages without delimiters between words properly indicate which characters influence the model's prediction.
- From the Configure window, click Explainability.
- In the Language support panel, click the Edit icon, and then set the Word segmentation to On.
- After you enable word segmentation, the Language drop-down list becomes available, with the “Automatically detect” option selected by default. To set the language manually, click the drop-down list and select a language.
- Click the Save button.
After you save your changes, the tile in the Explainability configuration reflects the changed state.
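The “Automatically detect” idea above can be sketched as a heuristic that decides whether text is space-delimited. This is an illustrative assumption only, not the detector that OpenScale uses: it flags text containing hiragana, katakana, CJK ideographs, or hangul as needing word segmentation.

```python
def needs_segmentation(text: str) -> bool:
    """Heuristic sketch: treat text containing Japanese, Chinese, or
    Korean characters as non-space-delimited, so it would need word
    segmentation before word highlighting. Not the OpenScale detector."""
    cjk_ranges = [
        (0x3040, 0x30FF),  # hiragana and katakana
        (0x4E00, 0x9FFF),  # CJK unified ideographs
        (0xAC00, 0xD7AF),  # hangul syllables
    ]
    return any(lo <= ord(ch) <= hi for ch in text for lo, hi in cjk_ranges)

print(needs_segmentation("ローンは承認されました"))  # True (Japanese)
print(needs_segmentation("The loan was approved"))    # False (English)
```

In practice, character-range checks like this only route text to a segmenter; the segmentation itself requires a language-specific tokenizer.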