Configuring explainability evaluation
Watson OpenScale's explainability evaluation reveals which features contributed to the model's predicted outcome for a transaction and suggests which changes would result in a different outcome. Explainability evaluation works differently for online processing and batch processing. For online models, you configure explainability by specifying whether each feature in a classification model is controllable. For batch models, you configure explainability by connecting to the explanation data.
Note: Regression, unstructured text, and image classification models do not support controllable features.
Before you begin
Before configuring the evaluation monitor, you must upload the training data for the model.
To upload the training data and set the Model details for the explainability evaluation:
- Click Upload training data and upload a file with the labeled data.
For details, see Formatting and uploading feedback data for model evaluation.
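The training data file is typically a CSV in which each row is one labeled transaction: feature columns plus a label column containing the known outcome. As a minimal sketch, using Python's standard `csv` module and hypothetical loan-approval column names (not prescribed by OpenScale):

```python
import csv

# Hypothetical loan-approval training data: feature columns plus a label column.
rows = [
    {"loan_amount": 5000, "age": 34, "sex": "F", "approved": "yes"},
    {"loan_amount": 25000, "age": 51, "sex": "M", "approved": "no"},
]

with open("training_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["loan_amount", "age", "sex", "approved"])
    writer.writeheader()    # first row holds the column names
    writer.writerows(rows)  # one labeled transaction per row
```

A file in this shape, with a header row and a labeled outcome column, is what you would select when you click Upload training data.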
Configuring the evaluation
To start the configuration process, from the Explainability tab, in the Controllable features box, click the Edit icon.
For each of the features, select whether it is controllable.
Online model requirements for controllable features
A controllable feature is one that a person can change and that has a meaningful impact on the outcome. For example, a loan amount is a controllable feature that might affect whether an applicant is approved. An uncontrollable feature is an inherent attribute, such as sex or age, that is beyond a person's ability to change.
Note: All features, controllable and uncontrollable, are analyzed to determine which features were most important in determining the model outcome.
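The distinction can be pictured as a per-feature flag: the configuration records which features an applicant could realistically change. A plain-Python illustration of the idea (the feature names and the mapping are hypothetical, and this is not an OpenScale API):

```python
# Illustrative controllability flags for a loan-approval model.
features = {
    "loan_amount": True,       # applicant can request a different amount
    "employment_years": True,  # can change over time through the applicant's actions
    "age": False,              # inherent attribute, not controllable
    "sex": False,              # inherent attribute, not controllable
}

# All features are analyzed for importance in the outcome...
analyzed = sorted(features)
# ...but only controllable ones are candidates when suggesting
# what changes would result in a different outcome.
controllable = sorted(name for name, flag in features.items() if flag)
```

In the configuration UI, the per-feature choice you make in the Controllable features box plays the role of these boolean flags.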
Online model support for non-space-delimited languages
Explainability, including highlighting, is supported even for languages that are not space-delimited, such as Japanese, Chinese, and Korean. You can turn this feature on or off, but you must enable it manually; optionally, you can have the system auto-detect the language. With this feature enabled, explanations that are generated for languages without delimiters between words correctly indicate which characters influence the model's prediction.
- From the Configure window, click Explainability.
- In the Language support panel, click the Edit icon, and then set the Word segmentation to On.
- After you enable word segmentation, the Language drop-down field is enabled, and the "Automatically detect" option is selected by default. To manually set the language, click the drop-down box and select the language from the list.
- Click Save.
After you save your changes, the tile in the Explainability configuration reflects the changed state.
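The reason word segmentation needs to be enabled separately can be seen by comparing naive whitespace tokenization across languages: a space-delimited sentence splits into words, while a Japanese sentence does not. A plain-Python illustration (unrelated to the segmenter OpenScale actually uses):

```python
english = "the loan was approved"
japanese = "ローンは承認されました"  # roughly "the loan was approved" in Japanese

# Whitespace tokenization works for space-delimited languages...
english_tokens = english.split()   # four word tokens
# ...but yields a single unsplit token for Japanese, so per-word
# highlighting would be impossible without a language-aware segmenter.
japanese_tokens = japanese.split()
```

Enabling Word segmentation tells OpenScale to apply such a language-aware segmenter so that explanations can highlight the characters that influenced the prediction.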
Next steps
Configure another monitor:
Review evaluation results:
Parent topic: Configuring model monitors