Importing trained Keras models into Watson Machine Learning
If you have a Keras model that you trained outside of IBM Watson Machine Learning, this topic describes how to import that model into your Watson Machine Learning service.
Restrictions and requirements
- Use the Keras model.save() API to save the model in HDF5 file format.
- The file name passed to .save() must have the extension .h5 or .hdf5.
- The saved (serialized) model file must be at the top level of the .tar.gz file that is uploaded to the Watson Machine Learning repository using the client.repository.store_model() API.
- The only supported deployment types for Keras models are web service and batch.
- See also: Supported frameworks
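The save-and-package requirements above can be sketched with the Python standard library. This is a minimal helper, not part of the Watson Machine Learning client; the file names are illustrative.

```python
import os
import tarfile

def package_model(h5_path, tgz_path):
    """Package a saved Keras .h5 model at the top level of a .tar.gz
    archive, as the Watson Machine Learning repository requires."""
    name = os.path.basename(h5_path)
    if not name.endswith((".h5", ".hdf5")):
        raise ValueError("model file must have a .h5 or .hdf5 extension")
    # arcname strips any directory prefix so the file sits at the archive root
    with tarfile.open(tgz_path, "w:gz") as tar:
        tar.add(h5_path, arcname=name)

# Usage, after model.save("message-classification-model.h5"):
# package_model("message-classification-model.h5",
#               "message-classification-model.tgz")
```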
Example
The following notebook demonstrates importing a Keras model:
Interface options
Step 0 for interface options 1 and 2: Build, train, and save a model
The following Python code snippet demonstrates:
- Building and training a text classifier,
model
- Saving the model in a file called “message-classification-model.h5”
- Saving the .h5 file in a .tgz file called “message-classification-model.tgz”
from keras.models import Sequential
from keras import layers

model = Sequential()
model.add( layers.Embedding( input_dim = vocab_size, output_dim = 50, input_length = max_len, trainable = True ) )
model.add( layers.Flatten() )
model.add( layers.Dense( 2, activation='sigmoid' ) )
model.add( layers.Activation( "softmax" ) )
model.compile( optimizer = "adam", loss = "binary_crossentropy", metrics = [ "accuracy" ] )
model.fit( X_train, y_train, batch_size = 10, epochs = 15, verbose = False, validation_split = 0.1 )
model.save( "message-classification-model.h5" )
!tar -zcvf message-classification-model.tgz message-classification-model.h5
Where:
- vocab_size is the number of words in the tokenizer dictionary
- max_len is the length of the tokenized, padded training input strings
- X_train is the tokenized, padded training input strings
- y_train is the binary-encoded labels
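As a rough, library-free illustration of how those variables relate (the sample notebook itself uses Keras's Tokenizer and pad_sequences; the toy messages and labels below are assumptions, not from the sample):

```python
texts = ["free prize inside", "meeting at noon"]  # toy messages (assumed)
labels = [[1, 0], [0, 1]]                         # binary-encoded classes

# Build a word index; index 0 is reserved for padding.
word_index = {}
for text in texts:
    for word in text.split():
        word_index.setdefault(word, len(word_index) + 1)

vocab_size = len(word_index) + 1                  # +1 for the padding index
max_len = max(len(t.split()) for t in texts)

# Tokenize each message and right-pad it to max_len.
X_train = [
    [word_index[w] for w in t.split()] + [0] * (max_len - len(t.split()))
    for t in texts
]
y_train = labels
```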
For the full code example, see: the sample notebook
Interface option 1: Watson Machine Learning Python client
Step 1: Store the model in your Watson Machine Learning repository
You can store the model in your Watson Machine Learning repository using the Watson Machine Learning Python client store_model method.
Note that to deploy a Keras model, you must pass FRAMEWORK_LIBRARIES along with the other meta properties.
Example Python code
from watson_machine_learning_client import WatsonMachineLearningAPIClient
client = WatsonMachineLearningAPIClient( <your-credentials> )
metadata = {
client.repository.ModelMetaNames.NAME: "keras model",
client.repository.ModelMetaNames.FRAMEWORK_NAME: "tensorflow",
    client.repository.ModelMetaNames.FRAMEWORK_VERSION: "1.13",
client.repository.ModelMetaNames.FRAMEWORK_LIBRARIES: [{'name':'keras', 'version': '2.1.6'}]
}
model_details = client.repository.store_model( model="message-classification-model.tgz", meta_props=metadata )
Where:
- <your-credentials> contains credentials for your Watson Machine Learning service (see: Looking up credentials)
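As a sketch, an API-key-based credentials dictionary has roughly this shape; all values here are placeholders, and the exact set of fields depends on how your service instance was provisioned (see the credentials page of your instance for the real values):

```python
# Placeholder credentials; copy the real values from your service instance.
wml_credentials = {
    "apikey": "***",
    "instance_id": "***",
    "url": "https://us-south.ml.cloud.ibm.com",
}
# client = WatsonMachineLearningAPIClient(wml_credentials)
```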
Step 2: Deploy the stored model in your Watson Machine Learning service
The following example demonstrates deploying the stored model as a web service, which is the default deployment type:
model_id = model_details["metadata"]["guid"]
model_deployment_details = client.deployments.create( artifact_uid=model_id, name="My Keras model deployment" )
See: Deployments.create
Interface option 2: Watson Machine Learning CLI
Prerequisite: Set up the CLI environment.
Step 1: Store the model in your Watson Machine Learning repository
Example command and corresponding output
>ibmcloud ml store <model-filename> <manifest-filename>
Starting to store ...
OK
Model store successful. Model-ID is '145bca56-134f-7e89-3c12-0d3a7859d21f'.
Where:
- <model-filename> is the path and name of the .tgz file
- <manifest-filename> is the path and name of a manifest file containing metadata about the model being stored. Note that the Keras library name and version must be specified under the framework property in the manifest file used to store the model.
Sample manifest file contents
name: My Keras model
framework:
name: tensorflow
version: '1.13'
libraries: [{"name": "keras", "version": "2.1.6"}]
See: store CLI command
Step 2: Deploy the stored model in your Watson Machine Learning service
The following example demonstrates deploying the stored model as a web service, which is the default deployment type:
Example command and corresponding output
>ibmcloud ml deploy <model-id> "My Keras model deployment"
Deploying the model with MODEL-ID '145bca56-134f-7e89-3c12-0d3a7859d21f'...
DeploymentId 316a89e2-1234-6472-1390-c5432d16bf73
Scoring endpoint https://us-south.ml.cloud.ibm.com/v3/wml_instances/5da31...
Name My Keras model deployment
Type tensorflow-1.13
Runtime None Provided
Status DEPLOY_SUCCESS
Created at 2019-01-14T19:47:51.735Z
OK
Deploy model successful
Where:
- <model-id> was returned in the output from the store command
See: deploy CLI command