Indirect bias

Indirect bias occurs when one feature in a model acts as a stand-in, or proxy, for another feature that is a protected attribute. For example, it is illegal to discriminate based on race. Because race can sometimes track closely with postal code, postal code might be a cause of indirect bias. Similarly, with access to a person’s music tastes, you might be able to determine that person’s age; with access to purchase history, you might determine a person’s sex. Even if your predictive model contains none of the protected attributes, such as race, age, or sex, your model might still produce biased results by using proxies.

Watson OpenScale analyzes indirect bias when the following conditions are met:

  • To find correlations, the data set must be sufficiently large (more than 4000 records).
  • The training data must include the meta fields. Meta fields are additional fields that are not used to train the model but are reserved for determining indirect bias. Include the meta fields in the training data, but train the model on only the other data fields.
  • Payload logging must contain the meta fields and must be run before the fairness monitor is configured. Payload logging is how you upload the meta fields to the Watson OpenScale service. Payload logging for indirect bias requires two types of input: 1) training features with values and 2) meta fields with values.
  • When you configure the fairness monitor, select the additional fields to monitor.
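The conditions above can be sanity-checked before configuration. The following sketch uses a hypothetical helper (not part of the Watson OpenScale SDK) to verify that a training CSV is large enough, contains the meta columns, and keeps the meta columns disjoint from the training features:

```python
import csv
import io

# OpenScale needs more than 4000 records to find correlations
MIN_RECORDS = 4000

def check_indirect_bias_ready(csv_text, feature_cols, meta_cols):
    """Return a dict of simple readiness checks for indirect bias analysis."""
    reader = csv.DictReader(io.StringIO(csv_text))
    header = reader.fieldnames or []
    n_records = sum(1 for _ in reader)
    return {
        "enough_records": n_records > MIN_RECORDS,
        "has_meta_fields": all(c in header for c in meta_cols),
        # Meta fields must not also be used as training features
        "meta_disjoint_from_features": not set(meta_cols) & set(feature_cols),
    }

# Toy example: only 2 records, so the size check fails
sample = "AGE,SEX,BP,race\n28,F,LOW,White\n61,M,HIGH,Black\n"
checks = check_indirect_bias_ready(sample, ["AGE", "SEX", "BP"], ["race"])
print(checks)
```

The column names here are illustrative; substitute the feature and meta columns of your own training data.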

Typical workflow for indirect bias

You can determine indirect bias for both preproduction and production models; however, the two require different columns. The test data that is used to evaluate preproduction models and the feedback data that is used to evaluate either preproduction or production models differ in their use of meta columns. Meta columns are required in the test data for preproduction models, but they cannot be included in the feedback data that is used for preproduction or production models. A typical workflow might include the following steps:

  1. Create training data that contains both feature columns and meta columns. The meta columns contain data that is not used to train the model.
  2. In Watson OpenScale, configure the fairness monitor with the meta columns.
  3. During preproduction, upload test data that contains both the feature columns and the meta columns. This test data must be uploaded by using the Import test data CSV option.
  4. During preproduction, you might iterate on different versions of the model, using the indirect bias measures to ensure that your final model is free of bias.
  5. After you send the model to production, the feedback data should not have any of the meta columns, only the feature columns that were used to train the model.
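The column handling in the steps above can be sketched as follows. The column names are illustrative; only the split between feature columns and meta columns mirrors what the workflow requires:

```python
FEATURE_COLS = ["AGE", "SEX", "BP", "CHOLESTEROL", "NA", "K"]
META_COLS = ["race"]  # protected attribute kept out of model training

def select_columns(rows, columns):
    """Project a list of dict records onto the given columns."""
    return [{c: row[c] for c in columns} for row in rows]

training_rows = [
    {"AGE": 28, "SEX": "F", "BP": "LOW", "CHOLESTEROL": "HIGH",
     "NA": 0.61, "K": 0.026, "race": "White"},
]

# Steps 1 and 3: training and preproduction test data keep both
# the feature columns and the meta columns
test_data = select_columns(training_rows, FEATURE_COLS + META_COLS)

# Step 5: production feedback data keeps only the feature columns
feedback_data = select_columns(training_rows, FEATURE_COLS)

print(sorted(test_data[0].keys()))      # includes "race"
print(sorted(feedback_data[0].keys()))  # feature columns only
```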

Sample JSON payload file with meta fields

The following sample shows a JSON payload with the fields and values that are used to train the model, along with the meta fields and values that are used for the indirect bias analysis. The meta fields are not used to train the model; instead, they are reserved for a different kind of analysis that attempts to correlate them to bias in the model. Although the meta fields can be any type of data, they are usually protected attributes, such as sex, race, or age.

# The import path can vary by SDK version; this path is from the
# ibm_ai_openscale Python client.
from ibm_ai_openscale.supporting_classes import PayloadRecord

request_data = {
    "fields": ["AGE", "SEX", "BP", "CHOLESTEROL", "NA", "K"],
    "values": [[28, "F", "LOW", "HIGH", 0.61, 0.026]],
    "meta": {
        "fields": ["age", "race", "sex"],
        "values": [[32, "Black", "Male"]]
    }
}

response_data = {
    "fields": ["AGE", "SEX", "BP", "CHOLESTEROL", "NA", "K", "probability", "prediction", "DRUG"],
    "values": [[28, "F", "LOW", "HIGH", 0.61, 0.026, [0.82, 0.07, 0.0, 0.05, 0.03], 0.0, "drugY"]]
}

# Replace request_data and response_data with your own records as needed.

records = [PayloadRecord(request=request_data, response=response_data, response_time=18),
           PayloadRecord(request=request_data, response=response_data, response_time=12)]

subscription.payload_logging.store(records=records)

Meta values must be in the format of an array of arrays:

"meta": {
"fields": ["age", "race", "sex"],
"values": [
[32, "Black", "Male"]
]
}
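A common mistake is to pass the meta values as a flat list rather than an array of arrays. The following sketch uses a hypothetical validation helper (not part of the OpenScale SDK) to check the shape before logging:

```python
# Hypothetical helper: confirms that a "meta" block uses the required
# array-of-arrays format for its values.
def is_valid_meta(meta):
    fields = meta.get("fields")
    values = meta.get("values")
    if not isinstance(fields, list) or not isinstance(values, list):
        return False
    # Every entry in "values" must itself be a list with one value per field
    return all(isinstance(row, list) and len(row) == len(fields) for row in values)

good = {"fields": ["age", "race", "sex"], "values": [[32, "Black", "Male"]]}
bad = {"fields": ["age", "race", "sex"], "values": [32, "Black", "Male"]}  # flat list

print(is_valid_meta(good))  # True
print(is_valid_meta(bad))   # False
```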

Configuring the Watson OpenScale service for indirect bias

When you set up the fairness monitor, select the fields to monitor. Include both training features and fields that are excluded from model training. If you select a field that is excluded from model training, Watson OpenScale finds correlations between values in that field and values in the training features. The correlated features are used as proxies for the fields that were excluded from model training.
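The correlation search can be illustrated with a toy computation. This is not OpenScale's actual algorithm, only a minimal sketch of the idea: for a categorical training feature, measure how well its values predict a meta field by averaging the majority-class share within each feature value group.

```python
from collections import Counter, defaultdict

def proxy_strength(rows, feature, meta_field):
    """Fraction of records explained by the most common meta value
    within each group of the feature (1.0 = perfect proxy)."""
    groups = defaultdict(Counter)
    for row in rows:
        groups[row[feature]][row[meta_field]] += 1
    total = len(rows)
    return sum(max(c.values()) for c in groups.values()) / total

rows = [
    {"postal_code": "10001", "race": "A"},
    {"postal_code": "10001", "race": "A"},
    {"postal_code": "20002", "race": "B"},
    {"postal_code": "20002", "race": "B"},
]
# postal_code perfectly predicts race in this toy data, so it is a strong proxy
print(proxy_strength(rows, "postal_code", "race"))  # 1.0
```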

Indirect bias displays

Some of the selected fields are training features; the fields that are not training features are identified as meta fields. For the selected meta fields, Watson OpenScale checks for indirect bias.

Next steps