Reviewing results from a Fairness evaluation
Last updated: Apr 28, 2023

When you evaluate deployments for fairness, Watson OpenScale detects both direct and indirect bias in the transactions from the payload logging table.

Passive debiasing

Passive debiasing is the work that Watson OpenScale does by itself, automatically, every hour; it is considered passive because it happens without user intervention. When Watson OpenScale checks for bias, it also debiases the data: it analyzes the behavior of the model and identifies the data points where the model acts in a biased manner.

Watson OpenScale then builds a machine learning model to predict whether the deployed model is likely to act in a biased manner on a given new data point. On an hourly basis, Watson OpenScale analyzes the data that the model receives and finds the data points that cause bias. For each such data point, the fairness attribute is perturbed from the minority group to the majority group, or from the majority group to the minority group, and the perturbed record is sent to the original model for prediction. The prediction for the perturbed record, together with the original prediction, is used to calculate the bias.
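The following Python sketch illustrates the idea behind this perturbation check. It is a conceptual illustration only, not the Watson OpenScale implementation; the record layout, group values, and score function are hypothetical.

```python
# Conceptual sketch of the perturbation check used to flag biased records.
# The fairness attribute, group values, and score() call are hypothetical.

def score(record):
    """Placeholder for a call to the deployed model's scoring endpoint."""
    raise NotImplementedError

def acts_biased_on(record, fairness_attribute, minority_group, majority_group):
    """Return True if flipping only the fairness attribute changes the prediction."""
    original_prediction = score(record)

    # Perturb the record: swap the minority and majority group values.
    perturbed = dict(record)
    if perturbed[fairness_attribute] == minority_group:
        perturbed[fairness_attribute] = majority_group
    else:
        perturbed[fairness_attribute] = minority_group

    perturbed_prediction = score(perturbed)

    # A prediction that changes when only the fairness attribute changes
    # indicates that the model treats this record in a biased manner.
    return perturbed_prediction != original_prediction
```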

Watson OpenScale debiases the biased records that the model received in the past hour. It also computes the fairness of the debiased output and displays it on the Debiased model tab.
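As a rough illustration of how a fairness score can be computed over the debiased output, the sketch below uses the common disparate impact ratio (the favorable-outcome rate of the monitored group divided by that of the reference group). The field names, group labels, and choice of metric are assumptions made for this example, not the product schema.

```python
# Sketch: a disparate impact ratio computed over debiased predictions.
# Field names ("group", "debiased_prediction"), group labels, and the
# favorable label are assumptions made for this example.

def fairness_score(records, favorable_label, monitored_group, reference_group):
    """Ratio of favorable-outcome rates: monitored group vs. reference group."""
    def favorable_rate(group):
        group_records = [r for r in records if r["group"] == group]
        if not group_records:
            return 0.0
        favorable = sum(
            1 for r in group_records if r["debiased_prediction"] == favorable_label
        )
        return favorable / len(group_records)

    reference_rate = favorable_rate(reference_group)
    if reference_rate == 0.0:
        return 0.0
    return favorable_rate(monitored_group) / reference_rate
```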

Active debiasing

Active debiasing is a way for you to request debiased results and bring them into your application through a REST API endpoint. You actively invoke Watson OpenScale to get the debiased prediction of your model so that your application can run without bias. In active debiasing, you call a debiasing REST API endpoint from your application; this endpoint internally calls your model and checks its behavior.

If Watson OpenScale detects that the model is acting in a biased manner, it perturbs the data and sends it back to the original model. After internal analysis of the perturbed data point, if Watson OpenScale confirms that the model behaves in a biased manner on that data point, the output of the original model on the perturbed data is returned as the debiased prediction.

If Watson OpenScale determines that the original model is not acting in a biased manner, then Watson OpenScale returns the original model's prediction as the debiased prediction. Thus, by using this REST API endpoint, you can ensure that your application does not base decisions on biased output.
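A minimal sketch of calling such a debiasing endpoint from an application is shown below. The URL, authentication token, field names, and payload layout are placeholders rather than the actual Watson OpenScale contract; consult the REST API reference for your deployment for the exact request format.

```python
# Minimal sketch of invoking a debiased scoring endpoint from an application.
# The URL, token, field names, and payload layout are placeholders, not the
# actual Watson OpenScale API contract.
import requests

DEBIASED_SCORING_URL = "https://<openscale-host>/<debiased-scoring-path>"  # placeholder
ACCESS_TOKEN = "<bearer token>"  # placeholder

payload = {
    "fields": ["age", "income", "gender"],  # example feature names
    "values": [[35, 52000, "female"]],      # one record to score
}

response = requests.post(
    DEBIASED_SCORING_URL,
    json=payload,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
response.raise_for_status()

# The response carries the original model output plus the debiased fields.
print(response.json())
```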

Reviewing data for debiased values

When the fairness evaluation runs, Watson OpenScale stores the debiased values in the payload logging table of the model deployment. You can access the debiased scoring endpoint just as you would the normal scoring endpoint for your deployed model, and all scoring transactions that are done through this endpoint are automatically debiased, as applicable. In addition to returning the response of your deployed model, the endpoint also returns the debiased_prediction and debiased_probability columns (see the sketch after the following list).

  • The debiased_prediction column contains the debiased prediction value.

  • The debiased_probability column represents the probability of the debiased prediction. This array of double values gives the probability that the debiased prediction belongs to each of the prediction classes.
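As a rough illustration, the following snippet pulls the two debiased columns out of a scoring response. The response structure assumed here (parallel fields and values arrays) is an assumption for the example; adjust the parsing to match the actual response of your deployment.

```python
# Sketch: extracting the debiased columns from a scoring response.
# The fields/values layout assumed below is illustrative only; adapt it to the
# actual response returned by your deployment.

def extract_debiased_output(result):
    fields = result["fields"]   # e.g. [..., "debiased_prediction", "debiased_probability"]
    values = result["values"]   # one row per scored record

    pred_idx = fields.index("debiased_prediction")
    prob_idx = fields.index("debiased_probability")

    for row in values:
        debiased_prediction = row[pred_idx]
        debiased_probability = row[prob_idx]  # array of doubles, one per class
        print(debiased_prediction, debiased_probability)
```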


Parent topic: Reviewing model insights with Watson OpenScale
