IBM machine learning frameworks
Last updated: Nov 21, 2024

You can use IBM watsonx.ai Runtime to perform payload logging and feedback logging, and to measure performance accuracy, runtime bias detection, drift detection, explainability, and the auto-debias function when you evaluate machine learning models.

The following IBM watsonx.ai Runtime frameworks are supported for evaluating machine learning models:

Table 1. Framework support details

Framework | Problem type | Data type
AutoAI [1] | Classification (binary and multiclass) | Structured (data, text)
AutoAI | Regression | Structured or unstructured [2] (text only)
Apache Spark MLlib | Classification | Structured or unstructured [2] (text only)
Apache Spark MLlib | Regression | Structured or unstructured [2] (text only)
Keras with TensorFlow [3] [4] | Classification | Unstructured [2] (image, text)
Keras with TensorFlow [3] [4] | Regression | Unstructured [2] (image, text)
Python function | Classification | Structured (data, text)
Python function | Regression | Structured (data, text)
scikit-learn [5] | Classification | Structured (data, text)
scikit-learn | Regression | Structured (data, text)
XGBoost [6] | Classification | Structured (data, text)
XGBoost | Regression | Structured (data, text)

[1] To learn more about AutoAI, see AutoAI implementation details. For models where the training data is in Cloud Object Storage, fairness attributes of type Boolean are not supported. However, if the training data is in Db2, model evaluations support fairness attributes of type Boolean. When you use the AutoAI option, if the model prediction is a binary data type, model evaluations are not supported. You must change such models so that the data type of their prediction is a string data type.

[2] Fairness and drift metrics are not supported for unstructured (image or text) data types.

[3] Keras support does not include fairness.

[4] Explainability is supported if your model or framework outputs prediction probabilities.

[5] To generate the drift detection model, you must use scikit-learn version 1.3.2 in notebooks.

[6] For XGBoost binary and multiclass models, you must update the model to return the prediction probability as a numerical value for binary models and as a list of per-class probabilities for multiclass models. Support for the XGBoost framework has the following limitations for classification problems:

  • For binary classification, the binary:logistic objective (logistic regression), with the probability of True as output, is supported for model evaluations.
  • For multiclass classification, the multi:softprob objective, where the result contains the predicted probability of each data point belonging to each class, is supported for model evaluations.
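For reference, here is a minimal sketch of XGBoost classifiers trained with these objectives through the scikit-learn wrapper. The dataset, shapes, and variable names are illustrative only; the point is that predict_proba returns the probability formats described above.

```python
# Minimal sketch: synthetic data; dataset and feature names are illustrative only.
import numpy as np
from xgboost import XGBClassifier

X = np.random.rand(100, 4)
y_binary = np.random.randint(0, 2, 100)
y_multi = np.random.randint(0, 3, 100)

# Binary classification: binary:logistic yields the probability of the
# positive (True) class for each row.
binary_model = XGBClassifier(objective="binary:logistic")
binary_model.fit(X, y_binary)
positive_probability = binary_model.predict_proba(X)[:, 1]  # one value per row

# Multiclass classification: multi:softprob yields one probability per class
# for each row.
multi_model = XGBClassifier(objective="multi:softprob")
multi_model.fit(X, y_multi)
class_probabilities = multi_model.predict_proba(X)  # shape (n_rows, n_classes)
```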

AutoAI models and training data

AutoAI automatically prepares data, applies algorithms (estimators), and builds model pipelines that are best suited for your data and use case. Access to the training data is required to analyze the model for evaluations.

Because the training data location cannot be detected for an AutoAI model evaluation as it can be for regular models, you must explicitly provide the details that are needed to access the training data location:

  • For the online path, where you manually configure monitors, you must provide the details of the database from which the training data can be accessed.
  • For the custom notebook path, where you upload the training data distribution, you can use the JSON file that is produced by running the notebook.

For more information, see Provide model details.
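If you use the online path, the details that you provide typically resemble the following sketch of a Db2 training data reference. This is illustrative only: the field names and nesting (type, connection, location, and their keys) are assumptions, and the exact schema is documented in Provide model details.

```python
# Illustrative sketch only: field names are assumptions, values are placeholders.
training_data_reference = {
    "type": "db2",                          # training data stored in Db2
    "connection": {
        "hostname": "db2.example.com",      # placeholder connection details
        "username": "db2_user",
        "password": "********",
        "database_name": "TRAININGDB",
    },
    "location": {
        "schema_name": "MODEL_DATA",        # schema that holds the training table
        "table_name": "LOAN_TRAINING",      # table with the training records
    },
}
```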

Specifying an IBM watsonx.ai Runtime service instance

Your first step for configuring model evaluations is to specify an IBM watsonx.ai Runtime instance. Your watsonx.ai Runtime instance is where you store your AI models and deployments.

Prerequisites

You must have provisioned an IBM watsonx.ai Runtime instance in the same account or cluster where the service instance for model evaluations is present. If you provisioned an IBM watsonx.ai Runtime instance in a different account or cluster, you cannot configure that instance with automatic payload logging for model evaluations.

Connect your watsonx.ai Runtime service instance

You can connect to AI models and deployments in an IBM watsonx.ai Runtime instance for model evaluations. To connect your service, go to the Configure tab, add a machine learning provider, and click the Edit icon. In addition to a name, a description, and whether the environment type is Pre-production or Production, you must provide the following information that is specific to this type of service instance:

  • If you have an instance of IBM watsonx.ai Runtime, the instance is detected, along with the configuration information.
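If you script this step instead of using the Configure tab, the ibm-watson-openscale Python SDK provides a service-provider registration call. The following is a minimal sketch only: the class and parameter names shown (APIClient, service_providers.add, service_type, operational_space_id) and the placeholder API key are assumptions that can differ by SDK release, so verify them against the SDK documentation for your version.

```python
# Minimal sketch, assuming the ibm-watson-openscale Python SDK;
# method and parameter names may differ in your release.
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson_openscale import APIClient

# Authenticate against the account that hosts the model evaluation service.
client = APIClient(authenticator=IAMAuthenticator(apikey="YOUR_API_KEY"))

# Register the detected watsonx.ai Runtime instance as a machine learning provider.
# operational_space_id distinguishes Pre-production ("pre_production") from
# Production ("production") environments.
client.service_providers.add(
    name="watsonx.ai Runtime",
    description="Production deployments for model evaluations",
    service_type="watson_machine_learning",
    operational_space_id="production",
    background_mode=False,
)
```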

Parent topic: Supported machine learning engines, frameworks, and models
