Visualizing evaluation data

You can view a visualization chart for your fairness evaluation that shows data points for a monitored feature at a selected hour.

From your insights dashboard, select your deployed model to view details about your configured monitors.

To see the details behind a particular fairness statistic, select Fairness and then choose a specific time from the fairness chart. To review details for a different feature or time, use the Monitored attribute, Date, and Time filters.

[Image: Visualization for evaluation data]

Interpreting the chart

The chart provides a visual representation of the following information:

  • The population that experiences bias (for example, customers in the 18 - 23 age range) and the percentage of expected outcomes for that population.

  • The percentage of expected outcomes for the reference population, which is the average of expected outcomes across all reference populations.

  • Whether bias is present, shown as the ratio of the percentage of expected outcomes for the monitored population to the percentage of expected outcomes for the reference population, and whether that ratio crosses the configured threshold (a minimal sketch of this calculation follows this list).

  • The distribution of the reference and monitored values for each distinct value of the fairness attribute in the payload data that was analyzed to identify bias (reference values are shown as well). You can use this information to correlate the bias with the amount of data that the model receives.

  • The percentage of the population with expected outcomes. The data in this group is the source of the bias: it skewed the results and increased the percentage of expected outcomes for the reference class. You can use this information to identify parts of the data to under-sample when you retrain the model.

  • The name of the table that contains the data that is identified for manual labeling. Whenever the algorithm detects bias in a model, it also identifies the data points that can be sent to humans for manual labeling. This manually labeled data can then be used, together with the original training data, to retrain the model, and the retrained model is less likely to exhibit the bias. The manual labeling table is stored in the database that is associated with the Watson OpenScale instance.
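
To make the ratio and distribution calculations concrete, the following sketch computes both for a hypothetical payload table by using pandas. The column names (age, prediction), the 18 - 23 monitored range, the favorable outcome label, and the 0.8 threshold are all illustrative assumptions, not Watson OpenScale APIs or defaults; the service performs these calculations internally.

    # Minimal sketch, not the Watson OpenScale implementation.
    # All names (columns, labels, threshold) are hypothetical.
    import pandas as pd

    # One row per scored transaction from a hypothetical payload table.
    payload = pd.DataFrame({
        "age":        [19, 22, 45, 30, 21, 50, 35, 20],
        "prediction": ["denied", "denied", "approved", "approved",
                       "denied", "approved", "approved", "approved"],
    })

    FAVORABLE = "approved"   # hypothetical expected (favorable) outcome
    THRESHOLD = 0.8          # hypothetical fairness threshold

    monitored = payload["age"].between(18, 23)   # monitored group: ages 18-23
    reference = ~monitored                       # all other records

    def favorable_rate(group: pd.DataFrame) -> float:
        # Fraction of records in the group that received the favorable outcome.
        return (group["prediction"] == FAVORABLE).mean()

    # Ratio of expected outcomes: monitored population vs. reference population.
    ratio = favorable_rate(payload[monitored]) / favorable_rate(payload[reference])
    print(f"ratio: {ratio:.2f}")
    print("bias detected" if ratio < THRESHOLD else "within threshold")

    # Distribution of the payload data for each distinct attribute value,
    # used to correlate bias with the amount of data the model receives.
    print(payload["age"].value_counts().sort_index())

In this sketch, a ratio below the threshold flags bias, which follows the common disparate-impact convention; the threshold that you configure in the fairness monitor plays the same role.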

Next steps

Viewing fairness results for indirect bias

Parent topic: Getting insights with Watson OpenScale
