Metrics computation with the Python SDK

The Python SDK is a Python library that you can use to work programmatically with the Watson OpenScale service. You can use the Python SDK to configure a logging database, bind your machine learning engine, and select and monitor deployments.

Starting with Watson OpenScale Python SDK version 3.0.14, Watson OpenScale supports the computation of the following fairness metrics and explanation algorithms:

Note: These metrics and algorithms can be computed in a notebook runtime environment or offloaded as Spark jobs against IBM Analytics Engine.

  • You can use the FairScore transformer as a post-processing bias mitigation technique. This technique transforms the probability estimates or scores of probabilistic binary classification models with respect to fairness goals. To use the FairScore transformer in Watson OpenScale, you must first train it.

  • The individual fairness post-processor is a post-processing algorithm that transforms individual scores to achieve individual fairness. You can use it with the Python SDK to support multi-class text classification. You must train this algorithm before you can use it to transform model outputs.

  • You can use the input reduction algorithm to calculate the minimum set of features that you must specify to keep model predictions consistent. The algorithm excludes the features that do not affect model predictions.

  • Likelihood compensation (LC) is a framework for explaining deviations of a black box model's predictions from the ground truth. Given test data and the predict function of a black box model, LC can identify the anomalies in the test data and explain what caused each sample to become an anomaly. The LC explanation is provided as deltas that, when added to the original test data (the anomaly), move the model's prediction toward the ground truth. LC provides local explanations and is supported only for regression models.

  • LIME (Local Interpretable Model-agnostic Explanations) identifies which features are most important for a specific data point by analyzing up to 5000 other close-by data points. In an ideal setting, the features with high importance in LIME are the features that are most important for that specific data point. For Cloud Pak for Data 4.8.3 or later, you can generate LIME explanations for models with multimodal data that contain features with structured and unstructured data. Structured data can contain numeric and categorical data; unstructured data can contain one text column. A minimal LIME sketch appears after this list.

  • You can use the mean individual disparity to verify whether your model generates similar predictions or scores for similar samples. This metric calculates the difference in probability estimates of multi-class classification models for similar samples. An illustrative sketch appears after this list.

  • You can use the multidimensional subset scanning algorithm as a general bias scan method. This method detects and identifies which subgroups of features have statistically significant predictive bias for a probabilistic binary classifier. This algorithm helps you decide which features are the protected attributes and which values of these features are the privileged group for monitor evaluations.

  • You can use the following performance measure metrics to evaluate models. Each is derived from a confusion matrix that is calculated with ground truth data and model predictions from sample data (a worked sketch for two of these metrics follows this list):

    • average_odds_difference
    • average_abs_odds_difference
    • error_rate_difference
    • error_rate_ratio
    • false_negative_rate_difference
    • false_negative_rate_ratio
    • false_positive_rate_difference
    • false_positive_rate_ratio
    • false_discovery_rate_difference
    • false_discovery_rate_ratio
    • false_omission_rate_difference
    • false_omission_rate_ratio
  • The protected attribute extraction algorithm transforms text data sets to structured data sets. The algorithm tokenizes the text data, compares the data to patterns that you specify, and extracts the protected attribute from the text to create structured data. You can use this structured data to detect bias against the protected attribute with a Watson OpenScale bias detection algorithm. The protected attribute extraction algorithm only supports gender as a protected attribute.

  • The protected attribute perturbation algorithm generates counterfactual statements by identifying protected attribute patterns in text data sets. It also tokenizes the text and perturbs the keywords in the text data to generate statements. You can use the original and perturbed data sets to detect bias against the protected attribute with a Watson OpenScale bias detection algorithm. The protected attribute perturbation algorithm only supports gender as a protected attribute.

  • The protodash explainer identifies data points in the training data that best represent input data from a reference set that needs explanation. This method minimizes the maximum mean discrepancy (MMD) between the reference data points and the instances that are selected from the training data. Because the selected training instances mimic the distribution of the reference data points, they help you better understand your model predictions.

    Note: The protodash explainer is supported only for structured classification models.
  • SHAP is a game-theoretic approach that explains the output of machine learning models. It connects optimal credit allocation with local explanations by using Shapley values and their related extensions.

    SHAP assigns each model feature an importance value for a particular prediction, which is called a Shapley value. The Shapley value is the average marginal contribution of a feature value across all possible groups of features. The SHAP values of the input features sum to the difference between the baseline (expected) model output and the current model output for the prediction that is being explained. The baseline model output can be based on a summary of the training data or on any subset of data for which explanations must be generated.

    The Shapley values of a set of transactions can be combined to get global explanations that provide an overview of which features of a model are most important. For Cloud Pak for Data 4.8.3 or later, you can generate SHAP explanations for unstructured text models to understand how outcomes are predicted. A short sketch with the open source shap package follows this list.

  • Smoothed empirical differential (SED) is a fairness metric that you can use to describe fairness for your model predictions. SED quantifies the differential in the probability of favorable and unfavorable outcomes between intersecting groups that are divided by features. All intersecting groups are treated as equal, so there are no unprivileged or privileged groups. This calculation produces a SED value that is the minimum ratio of Dirichlet smoothed probability for favorable and unfavorable outcomes between intersecting groups in the data set. The value is in the range 0-1, excluding 0 and 1, and a larger value indicates a better outcome.

  • Statistical parity difference is a fairness metric that you can use to describe fairness for your model predictions. It is the difference in the rate of favorable outcomes between the unprivileged and privileged groups. This metric can be computed from either the input data set or the classifier output (the predicted data set). A value of 0 implies that both groups receive equal benefit, a value less than 0 implies higher benefit for the privileged group, and a value greater than 0 implies higher benefit for the unprivileged group. A minimal sketch follows this list.
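
The sketches that follow are minimal illustrations of some of the metrics and algorithms above, not the Watson OpenScale implementations. First, a LIME explanation for a single prediction, assuming the open source lime package and an invented scikit-learn model:

```python
# Hypothetical example: explaining one prediction with the open source
# "lime" package. The model, feature names, and data are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=["f0", "f1", "f2", "f3"],
    class_names=["unfavorable", "favorable"],
    mode="classification",
)

# LIME samples close-by points (here 5000, matching the description
# above) and fits a local surrogate model to rank feature importance.
explanation = explainer.explain_instance(
    X_train[0], model.predict_proba, num_features=4, num_samples=5000
)
print(explanation.as_list())
```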
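
For mean individual disparity, the sketch below assumes you already have probability estimates for pairs of similar samples; the pairing strategy and the L1 distance are illustrative assumptions, not the SDK's internal definition:

```python
# Illustrative only: compare a multi-class model's probability
# estimates for paired similar samples. How Watson OpenScale pairs
# samples internally is not shown here.
import numpy as np

def mean_individual_disparity(probs_a, probs_b):
    """Average L1 gap between probability estimates of paired samples."""
    return np.abs(probs_a - probs_b).sum(axis=1).mean()

# Probability estimates for two sets of paired, similar samples.
probs_a = np.array([[0.7, 0.2, 0.1], [0.5, 0.3, 0.2]])
probs_b = np.array([[0.6, 0.3, 0.1], [0.5, 0.2, 0.3]])
print(mean_individual_disparity(probs_a, probs_b))  # 0.2
```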
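
The confusion-matrix metrics listed above reduce to per-group rates combined as differences or ratios. A hand-rolled sketch for two of them, with invented labels and group assignments:

```python
# Hypothetical sketch: false_positive_rate_difference and
# error_rate_difference computed by hand with scikit-learn.
import numpy as np
from sklearn.metrics import confusion_matrix

def rates(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    fpr = fp / (fp + tn)                   # false positive rate
    err = (fp + fn) / (tn + fp + fn + tp)  # error rate
    return fpr, err

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 1, 0, 1, 0, 1, 1, 0])
group = np.array(["unpriv", "priv", "unpriv", "priv",
                  "priv", "unpriv", "priv", "unpriv"])

fpr_u, err_u = rates(y_true[group == "unpriv"], y_pred[group == "unpriv"])
fpr_p, err_p = rates(y_true[group == "priv"], y_pred[group == "priv"])

print("false_positive_rate_difference:", fpr_u - fpr_p)
print("error_rate_difference:", err_u - err_p)
```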
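
For SHAP, the sketch below uses the open source shap package with a sampled summary of the training data as the baseline; the model and data are placeholders:

```python
# Hypothetical example with the open source "shap" package. The
# baseline (background) set determines the expected model output that
# the Shapley values are measured against.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X[:, 0] * 2 + X[:, 1]

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Sample the training data to serve as the baseline.
explainer = shap.Explainer(model.predict, shap.sample(X, 50))
shap_values = explainer(X[:5])

# Local explanation: per-feature contributions for one prediction.
print(shap_values.values[0])
# Global view: mean absolute contribution of each feature.
print(np.abs(shap_values.values).mean(axis=0))
```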
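
Statistical parity difference itself is just a difference of two favorable-outcome rates, as in this minimal pandas sketch with invented column names:

```python
# Minimal sketch of statistical parity difference on a labeled
# DataFrame; the "group" and "outcome" columns are invented.
import pandas as pd

df = pd.DataFrame({
    "group": ["priv", "priv", "unpriv", "unpriv", "priv", "unpriv"],
    "outcome": [1, 1, 0, 1, 0, 0],  # 1 = favorable outcome
})

p_unpriv = df.loc[df["group"] == "unpriv", "outcome"].mean()
p_priv = df.loc[df["group"] == "priv", "outcome"].mean()

# 0 means equal benefit; < 0 favors the privileged group,
# > 0 favors the unprivileged group.
print("statistical_parity_difference:", p_unpriv - p_priv)
```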

You can compute these metrics and algorithms with Watson OpenScale Python SDK version 3.0.14 or later. For more information, see the Watson OpenScale Python SDK documentation.
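
As a starting point, the sketch below shows one common way to instantiate the SDK client before computing metrics, assuming authentication with an IBM Cloud API key; Cloud Pak for Data deployments use a different authenticator:

```python
# Hedged sketch: creating a Watson OpenScale Python SDK client
# (package "ibm-watson-openscale"). The API key is a placeholder.
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson_openscale import APIClient

authenticator = IAMAuthenticator(apikey="<your-api-key>")
client = APIClient(authenticator=authenticator)
print(client.version)  # 3.0.14 or later supports these metrics
```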

You can also use sample notebooks to compute fairness metrics and explainability.

Parent topic: APIs, SDKs, and tutorials
