Import machine learning models trained outside of IBM Watson Machine Learning so that you can deploy and test the models. Review the model frameworks that are available for importing models.
Here, importing a trained model means:
- Storing the trained model in your Watson Machine Learning repository
- Optional: Deploying the stored model in your Watson Machine Learning service
In this context, repository means a Cloud Object Storage (COS) bucket. For more information on space storage, refer to Creating deployment spaces.
You can import a model in these ways:
- Directly through the UI
- By using a path to a file
- By using a path to a directory
- By using a model object
For information on the available ways to import models, refer to Importing models by ML framework.
For additional information on importing specific model types, refer to Things to consider when importing models.
For an example of how to add a model programmatically by using the Python client, refer to this notebook:
For an example of how to add a model programmatically by using the REST API, refer to this notebook:
Available ways to import models, per framework type
This table lists the available ways to import models to Watson Machine Learning, per framework type.
| Import option | Spark MLlib | Scikit-learn | XGBoost | TensorFlow | PyTorch |
|---|---|---|---|---|---|
| Importing a model object | ✓ | ✓ | ✓ | | |
| Importing a model by using a path to a file | | ✓ | ✓ | ✓ | ✓ |
| Importing a model by using a path to a directory | | ✓ | ✓ | ✓ | ✓ |
Importing a model by using the UI
To import a model by using the UI:
- From the Assets tab of your space in Watson Machine Learning, click Import assets.
- Select Local file, and then select Model.
- Select the model file that you want to import and click Import.
The importing mechanism automatically selects a matching model type and software specification based on the version string in the .xml file.
Importing a model object
To import a model object:
- If your model is located in a remote location, follow Downloading a model stored in a remote location.
- Store the model object in your Watson Machine Learning repository. For details, refer to Storing a model in your Watson Machine Learning repository.
Importing a model by using a path to a file
To import a model by using a path to a file:
- If your model is located in a remote location, follow Downloading a model stored in a remote location to download it.
- If your model is located locally, place it in a specific directory:

```bash
!cp <saved model> <target directory>
!cd <target directory>
```

- For scikit-learn, XGBoost, TensorFlow, and PyTorch models, if the downloaded file is not a .tar.gz archive, create one:

```bash
!tar -zcvf <saved model>.tar.gz <saved model>
```

The model file must be at the top level of the directory, for example:

```
assets/
<saved model>
variables/
variables/variables.data-00000-of-00001
variables/variables.index
```

- Use the path to the saved file to store the model file in your Watson Machine Learning repository. For details, refer to Storing a model in your Watson Machine Learning repository.
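The archiving step above can also be done in Python instead of with the shell `tar` command. The following is a minimal sketch using only the standard library; the file name `model.pkl` and the placeholder bytes are hypothetical stand-ins for a real serialized model:

```python
import os
import tarfile
import tempfile

# Hypothetical setup: write a placeholder "model" file to a temp directory.
# In practice this would be your actual serialized model file.
work_dir = tempfile.mkdtemp()
model_path = os.path.join(work_dir, "model.pkl")
with open(model_path, "wb") as f:
    f.write(b"fake model bytes")

# Create a .tar.gz archive; arcname keeps the model file
# at the top level of the archive, as required for import.
archive_path = os.path.join(work_dir, "model.pkl.tar.gz")
with tarfile.open(archive_path, "w:gz") as tar:
    tar.add(model_path, arcname="model.pkl")

with tarfile.open(archive_path, "r:gz") as tar:
    print(tar.getnames())  # ['model.pkl']
```

The resulting `archive_path` is what you would pass as the model path when storing the model in the repository.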
Importing a model by using a path to a directory
To import a model by using a path to a directory:
- If your model is located in a remote location, refer to Downloading a model stored in a remote location.
- If your model is located locally, place it in a specific directory:

```bash
!cp <saved model> <target directory>
!cd <target directory>
```

For scikit-learn, XGBoost, TensorFlow, and PyTorch models, the model file must be at the top level of the directory, for example:

```
assets/
<saved model>
variables/
variables/variables.data-00000-of-00001
variables/variables.index
```

- Use the directory path to store the model file in your Watson Machine Learning repository. For details, refer to Storing a model in your Watson Machine Learning repository.
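Before storing, it can help to confirm that the model file really sits at the top level of the target directory. This is a minimal sketch with a hypothetical directory layout (`saved_model.pb` and `variables/` mimic the example above), not part of the Watson Machine Learning client:

```python
import os
import tempfile

# Hypothetical layout: a top-level model file plus a variables/ subdirectory,
# mirroring the example directory structure in this section.
target_dir = tempfile.mkdtemp()
open(os.path.join(target_dir, "saved_model.pb"), "wb").close()
os.makedirs(os.path.join(target_dir, "variables"))

def model_at_top_level(directory, model_filename):
    """Return True if model_filename is a direct child of directory."""
    return model_filename in os.listdir(directory)

print(model_at_top_level(target_dir, "saved_model.pb"))  # True
```

If the check fails, move the model file up out of any nested subdirectory before passing the directory path to the repository.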
Downloading a model stored in a remote location
Follow this sample code to download your model from a remote location:

```python
import os
from wget import download

target_dir = '<target directory name>'
if not os.path.isdir(target_dir):
    os.mkdir(target_dir)
filename = os.path.join(target_dir, '<model name>')
if not os.path.isfile(filename):
    filename = download('<url to model>', out=target_dir)
```
Things to consider when importing models
Refer to these sections for additional information on importing specific model types:
- Models saved in PMML format
- Spark MLlib models
- Scikit-learn models
- XGBoost models
- TensorFlow models
- PyTorch models
For more information on frameworks that you can use with Watson Machine Learning, refer to Supported frameworks.
Models saved in PMML format
- The only available deployment type for models that are imported from PMML is online deployment.
- The PMML file must have the .xml file extension.
- PMML models cannot be used in an SPSS stream flow.
- The PMML file must not contain a prolog. Depending on the library that you are using when you save your model, a prolog might be added to the beginning of the file by default, like in this example:

```
::::::::::::::
spark-mllib-lr-model-pmml.xml
::::::::::::::
```

You must remove that prolog before you can import the PMML file to Watson Machine Learning.
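Stripping such a prolog can be automated. The following is a hypothetical helper, not part of the Watson Machine Learning tooling: it drops any leading lines until the first line that starts with `<` (the beginning of the XML content):

```python
# Hypothetical helper: remove a non-XML prolog from PMML text by
# discarding lines before the first line that starts with '<'.
def strip_prolog(pmml_text):
    lines = pmml_text.splitlines(keepends=True)
    for i, line in enumerate(lines):
        if line.lstrip().startswith("<"):
            return "".join(lines[i:])
    return pmml_text  # no XML content found; return unchanged

# Example input mimicking the prolog shown above, with minimal XML after it.
raw = ('::::::::::::::\n'
      'spark-mllib-lr-model-pmml.xml\n'
      '::::::::::::::\n'
      '<?xml version="1.0"?>\n'
      '<PMML/>\n')
print(strip_prolog(raw))  # prints only the XML, starting at <?xml ...
```

Write the returned text back to the .xml file before importing it to the deployment space.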
Spark MLlib models
- Only classification and regression models are available.
- Custom transformers, user-defined functions, and classes are not available.
Scikit-learn models
- .pkl and .pickle are the available import formats.
- To serialize/pickle the model, use the joblib package.
- Only classification and regression models are available.
- Pandas DataFrame input type for the predict() API is not available.
- The only available deployment type for scikit-learn models is online deployment.
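Serializing to a .pkl file works the same way with joblib's `dump()`/`load()` as with the standard-library `pickle` shown in this sketch. The dictionary below is a hypothetical stand-in for a trained scikit-learn estimator, so the example stays self-contained:

```python
import os
import pickle
import tempfile

# Hypothetical stand-in for a trained scikit-learn model object.
# The docs recommend the joblib package; joblib.dump()/joblib.load()
# are called the same way as pickle.dump()/pickle.load() here.
model = {"coef": [0.5, -1.2], "intercept": 0.1}

path = os.path.join(tempfile.mkdtemp(), "model.pkl")
with open(path, "wb") as f:
    pickle.dump(model, f)
with open(path, "rb") as f:
    restored = pickle.load(f)

print(restored == model)  # True
```

The resulting .pkl file (or a .tar.gz archive of it) is what you pass as the model path when storing the model in the repository.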
XGBoost models
- .pkl and .pickle are the available import formats.
- To serialize/pickle the model, use the joblib package.
- Only classification and regression models are available.
- Pandas DataFrame input type for the predict() API is not available.
- The only available deployment type for XGBoost models is online deployment.
TensorFlow models
- .pb, .h5, and .hdf5 are the available import formats.
- To save/serialize a TensorFlow model, use the tf.saved_model.save() method.
- tf.estimator is not available.
- The only available deployment types for TensorFlow models are online deployment and batch deployment.
PyTorch models
- The only available deployment type for PyTorch models is online deployment.
- For a PyTorch model to be importable to Watson Machine Learning, it must first be exported to .onnx format. Refer to this code:

```python
torch.onnx.export(<model object>, <prediction/training input data>, "<serialized model>.onnx",
                  verbose=True, input_names=<input tensor names>, output_names=<output tensor names>)
```
Storing a model in your Watson Machine Learning repository
Use this code to store your model in your Watson Machine Learning repository:
```python
from ibm_watson_machine_learning import APIClient

client = APIClient(<your credentials>)
sw_spec_uid = client.software_specifications.get_uid_by_name("<software specification name>")
meta_props = {
    client.repository.ModelMetaNames.NAME: "<your model name>",
    client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_uid,
    client.repository.ModelMetaNames.TYPE: "<model type>"
}
client.repository.store_model(model=<your model>, meta_props=meta_props)
```
Notes:
- Depending on the model framework used, <your model> can be the actual model object, the full path to a saved model file, or the path to a directory where the model file is located. For details, refer to Available ways to import models, per framework type.
- For a list of available software specifications to use as <software specification name>, use the client.software_specifications.list() method.
- For a list of available model types to use as <model type>, refer to Software specifications and hardware specifications for deployments.
- When you export a PyTorch model to the .onnx format, specify the keep_initializers_as_inputs=True flag and set opset_version to 9 (Watson Machine Learning deployments use the caffe2 ONNX runtime, which doesn't support opset versions higher than 9):

```python
torch.onnx.export(net, x, 'lin_reg1.onnx', verbose=True,
                  keep_initializers_as_inputs=True, opset_version=9)
```

- For information on how to create the <your credentials> dictionary, refer to Watson Machine Learning authentication.
Learn more
- For details on adding data assets to a space, refer to Adding data assets to a deployment space.
- For details on promoting data assets to a space, refer to Promoting assets to a deployment space.
Parent topic: Assets in deployment spaces