Reviewing debiased transactions
Last updated: Oct 25, 2024

You can detect direct and indirect bias with active and passive debiasing. Passive debiasing reveals bias, while active debiasing prevents bias by returning a debiased prediction at run time.

The algorithm applies a method called perturbation to evaluate differences in expected outcomes in the data. For more information on how bias is computed, see Calculating fairness.

When you evaluate deployments for fairness, direct and indirect bias is detected in the transactions from the payload logging table.

Passive debiasing

Passive debiasing happens automatically every hour. It is considered passive because it requires no user intervention. When bias is analyzed, the data is also debiased: the analysis examines the behavior of the model and identifies the data points on which the model acts in a biased manner.

A machine learning model is built to predict whether the original model is likely to act in a biased manner on a given new data point. The data that the original model receives is analyzed hourly to find the data points that cause bias. For such data points, the fairness attribute is perturbed from minority to majority and from majority to minority, and the perturbed data is sent to the original model for prediction. The prediction for the perturbed record, together with the original prediction, is used to calculate the bias.
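
The following sketch illustrates the idea behind this perturbation check; it is not the product's internal implementation. It assumes a hypothetical score function that returns the model's prediction for a single record, and a single categorical fairness attribute with one monitored (minority) and one reference (majority) value:

def perturb(record, fairness_attribute, monitored_value, reference_value):
    # Flip the fairness attribute: monitored -> reference, reference -> monitored
    flipped = dict(record)
    if flipped[fairness_attribute] == monitored_value:
        flipped[fairness_attribute] = reference_value
    else:
        flipped[fairness_attribute] = monitored_value
    return flipped

def acts_biased(record, score, fairness_attribute, monitored_value, reference_value):
    # A record is flagged when flipping only the fairness attribute
    # changes the model's prediction
    original_prediction = score(record)
    perturbed_prediction = score(
        perturb(record, fairness_attribute, monitored_value, reference_value))
    return original_prediction != perturbed_prediction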

The biased records that the model received in the past hour are identified and debiased. The fairness for the debiased output is also computed and displayed on the Debiased model tab.

Active debiasing

Active debiasing is a way for you to request and bring debiased results into your application through a REST API endpoint. You can actively invoke model evaluations to get the debiased prediction of your model so that your application can run without bias. In active debiasing, you call a debiasing REST API endpoint from your application. This endpoint internally calls your model and checks its behavior.

If the model is acting in a biased manner, the data is perturbed and sent back to the original model. If internal analysis of the perturbed data point shows that the model behaves in a biased manner on it, the output of the original model on the perturbed data is returned as the debiased prediction.

If the original model is not acting in a biased manner, then the original model's prediction is returned as the debiased prediction. Thus, by using this REST API endpoint, you can ensure that your application does not base decisions on biased output.
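
As a minimal sketch, an application might call the debiasing endpoint as follows. The URL, token, and payload shape are placeholders and assumptions: take the actual debiased scoring endpoint and credentials from your deployment details.

import requests

DEBIAS_ENDPOINT = "https://<host>/<debiased_scoring_path>"  # placeholder URL
ACCESS_TOKEN = "<access-token>"                             # placeholder credential

payload = {
    "fields": ["Age", "Sex", "Income"],  # example feature columns
    "values": [[41, "female", 52000]],   # one scoring record
}

response = requests.post(
    DEBIAS_ENDPOINT,
    json=payload,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
result = response.json()  # contains the debiased prediction alongside the original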

Reviewing data for debiased values

When the fairness evaluation runs, the debiased values are stored in the payload logging table of the model deployment. You can access the debiased scoring endpoint just as you would the normal scoring endpoint for your deployed model; all scoring transactions that are sent through this endpoint are automatically debiased, as applicable. In addition to returning the response of your deployed model, the endpoint also returns the debiased_prediction and debiased_probability columns:

  • The debiased_prediction column contains the debiased prediction value.

  • The debiased_probability column represents the probability of the debiased prediction. This array of double values gives the probability of the debiased prediction for each of the prediction classes.
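
As a sketch of how these columns might be read, assuming the response uses the common fields/values layout (adjust to the exact shape that your deployment returns):

# Example response body; the values are illustrative only
result = {
    "fields": ["prediction", "probability",
               "debiased_prediction", "debiased_probability"],
    "values": [["No Risk", [0.71, 0.29], "No Risk", [0.68, 0.32]]],
}

pred_idx = result["fields"].index("debiased_prediction")
prob_idx = result["fields"].index("debiased_probability")

for row in result["values"]:
    print("debiased prediction:", row[pred_idx])  # debiased class label
    print("class probabilities:", row[prob_idx])  # array of double values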

Enabling the debiasing parameter

Debiasing is disabled by default when you configure new deployments. To enable debiasing, set the perform_debias parameter to true in the parameters section of the Python SDK, or specify the PERFORM_DEBIASING pod-level environment label, as shown in the following example:

wos_client.monitor_instances.update(
    monitor_instance_id=<FAIRNESS_MONITOR_INSTANCE_ID>,
    patch_document=[JsonPatchOperation(
        op=OperationTypes.ADD,
        path='/parameters/perform_debias',
        value=True
    )],
    update_metadata_only=True
)

When you patch the monitor instance, the fairness monitor runs debiasing during the next evaluation.
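
To avoid waiting for the next scheduled evaluation, you can also trigger a run on demand. The following sketch assumes the same wos_client from the previous example; the exact arguments can vary by SDK version:

# Trigger a fairness evaluation immediately; background_mode=False
# blocks until the run completes
wos_client.monitor_instances.run(
    monitor_instance_id=<FAIRNESS_MONITOR_INSTANCE_ID>,
    background_mode=False
)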

For more information, see the Python SDK documentation.

Reviewing debiased transactions

You can use the debiased transactions endpoint to review debiased transactions for fairness evaluations. For more information, see Sending model transactions.

Note: Ideally, you would directly call the debias endpoint from your production application instead of calling the scoring endpoint from the machine learning provider.

Because the debias endpoint deals with runtime bias, it continues to run background checks for the scoring data from the payload logging table. It also keeps updating the bias mitigation model, which debiases the scoring requests.

You can configure a fairness threshold to indicate when data is acceptable and unbiased.
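
As a sketch of what the threshold means, assuming the fairness score is computed as the favorable-outcome rate of the monitored group relative to the reference group (see Calculating fairness); the 80 percent threshold below is an example value, not a default:

def fairness_score(favorable_rate_monitored, favorable_rate_reference):
    # Favorable-outcome rate of the monitored group as a percentage
    # of the reference group's rate
    return 100.0 * favorable_rate_monitored / favorable_rate_reference

FAIRNESS_THRESHOLD = 80.0  # example threshold, in percent

score = fairness_score(0.52, 0.70)  # for example, 52% vs. 70% favorable outcomes
if score < FAIRNESS_THRESHOLD:
    print(f"Fairness score {score:.1f}% is below the threshold")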

Mitigate bias with a new version of the model:

  • You must build a new version of the model that fixes the problem. Biased records are stored in the manual labeling table. These records must be manually labeled, and the model is then retrained with the additional data to build a new, unbiased version of the model.

Extract a list of the individual biased records:

  • Connect to the manual labeling table and read the records by using standard SQL queries.
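
For example, the following sketch reads the records over a standard DB-API connection. The driver, connection details, and table name are placeholders; substitute the ones from your own data mart:

import sqlite3  # stand-in for your database driver (Db2, PostgreSQL, and so on)

conn = sqlite3.connect("datamart.db")                  # placeholder connection
cursor = conn.cursor()
cursor.execute("SELECT * FROM manual_labeling_table")  # placeholder table name
for record in cursor.fetchall():
    print(record)  # review each biased record and assign a corrected label
conn.close()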

Parent topic: Reviewing model transactions
