Reviewing debiased transactions

The Watson OpenScale service can detect direct and indirect bias and uses two types of debiasing: active and passive. Passive debiasing reveals bias, while active debiasing prevents bias by changing the model in real time.

The algorithm applies a method called perturbation to evaluate differences in expected outcomes in the data. For more information on how bias is computed, see Calculating fairness.

When you evaluate deployments for fairness, Watson OpenScale detects both direct and indirect bias in the transactions from the payload logging table.

Passive debiasing

Passive debiasing is the work that Watson OpenScale does by itself, automatically, every hour. It is considered passive because it happens without user intervention. When Watson OpenScale checks for bias, it also debiases the data. It analyzes the behavior of the model, and identifies the data where the model acts in a biased manner.

Watson OpenScale then builds a machine learning model to predict whether the original model is likely to act in a biased manner on a given new data point. On an hourly basis, Watson OpenScale analyzes the data that the model receives and finds the data points that cause bias. For such data points, the fairness attribute is perturbed from minority to majority and from majority to minority, and the perturbed data is sent to the original model for prediction. The prediction for the perturbed record, along with the original prediction, is used to calculate the bias.
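
The perturbation step described above can be sketched as follows. This is a minimal illustration with a toy model and a binary "sex" fairness attribute; all names and the scoring logic are assumptions for illustration, not the Watson OpenScale internals.

```python
# Toy model that applies a stricter income cutoff to the "female" group,
# so perturbing the fairness attribute can flip its outcome.
def score(record):
    cutoff = 30000 if record["sex"] == "male" else 50000
    return "approved" if record["income"] > cutoff else "denied"

def perturb(record, attribute="sex", values=("male", "female")):
    """Flip the fairness attribute to the opposite group."""
    flipped = dict(record)
    flipped[attribute] = values[0] if record[attribute] == values[1] else values[1]
    return flipped

def find_biased(records):
    """Return the records whose outcome changes when the attribute is perturbed."""
    return [r for r in records if score(perturb(r)) != score(r)]

payload = [
    {"sex": "female", "income": 40000},  # outcome flips when perturbed -> biased
    {"sex": "male", "income": 60000},    # outcome is stable -> not biased
]
biased = find_biased(payload)
print(biased)  # [{'sex': 'female', 'income': 40000}]
```

In the real service this comparison runs hourly over the payload logging table rather than over an in-memory list.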

Watson OpenScale debiases the identified biased records that the model received in the past hour. It also computes the fairness for the debiased output and displays it in the Debiased model tab.

Active debiasing

Active debiasing is a way for you to request debiased results and bring them into your application through a REST API endpoint. You can actively invoke Watson OpenScale to get the debiased prediction of your model so that your application can run without bias. In active debiasing, you call a debiasing REST API endpoint from your application. This REST API endpoint internally calls your model and checks its behavior.

If Watson OpenScale detects that the model is acting in a biased manner, it perturbs the data, and sends it back to the original model. After internal analysis on the perturbed data point, if Watson OpenScale detects that the model is behaving in a biased manner on the data point, then the output of the original model on the perturbed data is returned as the debiased prediction.

If Watson OpenScale determines that the original model is not acting in a biased manner, then Watson OpenScale returns the original model's prediction as the debiased prediction. Thus, by using this REST API endpoint, you can ensure that your application does not base decisions on biased output.
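
The decision flow described in the two paragraphs above can be sketched as follows, using toy stand-ins for the model and the bias check; the function names are illustrative, not the Watson OpenScale API.

```python
# Toy model: a stricter income cutoff for the "female" group.
def score(record):
    cutoff = 30000 if record["sex"] == "male" else 50000
    return "approved" if record["income"] > cutoff else "denied"

def perturb(record):
    """Flip the fairness attribute between the majority and minority group."""
    flipped = dict(record)
    flipped["sex"] = "male" if record["sex"] == "female" else "female"
    return flipped

def debiased_prediction(record):
    """Return the perturbed prediction when the model acts biased on this
    record; otherwise return the original model's prediction unchanged."""
    original = score(record)
    perturbed = score(perturb(record))
    return perturbed if perturbed != original else original

print(debiased_prediction({"sex": "female", "income": 40000}))  # approved
print(debiased_prediction({"sex": "male", "income": 60000}))    # approved
```

The first call illustrates the biased case (the original model would have returned "denied"); the second shows the unbiased case, where the original prediction passes through untouched.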

Reviewing data for debiased values

When the fairness evaluation runs, Watson OpenScale stores the debiased values in the payload logging table of the model deployment. You can access the debiased scoring endpoint just as you would the normal scoring endpoint for your deployed model; all scoring transactions done through this endpoint are automatically debiased, as applicable. In addition to returning the response of your deployed model, it also returns the debiased_prediction and debiased_probability columns.

  • The debiased_prediction column contains the debiased prediction value.

  • The debiased_probability column represents the probability of the debiased prediction. This array of double values contains the probabilities of the debiased prediction belonging to each of the prediction classes.
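
A short sketch of reading these columns from a scoring response. The fields/values response shape used here is an assumption modeled on typical scoring responses; the debiased_prediction and debiased_probability column names match the columns described above.

```python
# Example scoring response with both the original and the debiased columns.
response = {
    "fields": ["prediction", "probability",
               "debiased_prediction", "debiased_probability"],
    "values": [["denied", [0.35, 0.65], "approved", [0.70, 0.30]]],
}

# Map column names to their positions so rows can be read by name.
cols = {name: i for i, name in enumerate(response["fields"])}
for row in response["values"]:
    print(row[cols["debiased_prediction"]],   # approved
          row[cols["debiased_probability"]])  # [0.7, 0.3]
```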

Enabling the debiasing parameter

Debiasing is disabled by default when you configure new deployments in Watson OpenScale. To enable debiasing, set the perform_debias parameter to true in the parameters section of the Python SDK or specify the PERFORM_DEBIASING pod-level environment label, as shown in the following example:

wos_client.monitor_instances.update(
    monitor_instance_id=<FAIRNESS_MONITOR_INSTANCE_ID>,
    patch_document=[JsonPatchOperation(
        op=OperationTypes.ADD,
        path='/parameters/perform_debias',
        value=True
    )],
    update_metadata_only=True
)

When you patch the monitor instance, the fairness monitor runs debiasing during the next evaluation.

For more information, see the Watson OpenScale Python SDK documentation.

Reviewing debiased transactions

You can use the debiased transactions endpoint to review debiased transactions for fairness evaluations. For more information, see Sending model transactions in Watson OpenScale.

Note: Ideally, you would directly call the debias endpoint from your production application instead of calling the scoring endpoint from the machine learning provider.

Because the debias endpoint deals with runtime bias, it continues to run background checks on the scoring data from the payload logging table. It also keeps updating the bias mitigation model, which debiases the scoring requests. In this way, Watson OpenScale stays up to date with the incoming data and with the model's behavior, to detect and mitigate bias.

You can configure a fairness threshold in Watson OpenScale to indicate when data is acceptable and unbiased.

Mitigate bias with a new version of the model:

  • You must build a new version of the model that fixes the problem. Watson OpenScale stores biased records in the manual labeling table. These biased records must be manually labeled, and then the model is retrained with the additional data to build a new, unbiased version of the model.

Extract a list of the individual biased records:

  • Connect to the manual labeling table and read the records by using standard SQL queries.
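
A standard SQL query like the one in the step above can be sketched with the sqlite3 module. The table and column names here are assumptions for illustration; in practice, you connect to the database that backs your Watson OpenScale data mart and use its actual schema.

```python
import sqlite3

# In-memory stand-in for the manual labeling table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE manual_labeling "
    "(scoring_id TEXT, prediction TEXT, manual_label TEXT)"
)
conn.execute("INSERT INTO manual_labeling VALUES ('tx-001', 'denied', NULL)")
conn.execute("INSERT INTO manual_labeling VALUES ('tx-002', 'approved', 'denied')")

# Extract the biased records that still need a manual label.
unlabeled = conn.execute(
    "SELECT scoring_id, prediction FROM manual_labeling "
    "WHERE manual_label IS NULL"
).fetchall()
print(unlabeled)  # [('tx-001', 'denied')]
```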

Parent topic: Reviewing model transactions
