Importing trained Keras models into Watson Machine Learning

If you have a Keras model that you trained outside of IBM Watson Machine Learning, this topic describes how to import that model into your Watson Machine Learning service.

 

Restrictions

  • The only supported deployment types for Keras models are: web service and batch
  • Only the Keras model.save() API can be used to serialize the model
  • The file name passed to .save() must have the extension .h5 or .hdf5
  • See also: Supported frameworks

 

Example

The sample notebook linked in Step 0 below demonstrates importing a Keras model.

 

Interface options

  • Interface option 1: Watson Machine Learning Python client
  • Interface option 2: Watson Machine Learning CLI

 

Step 0 for interface options 1 and 2: Build, train, and save a model

The following Python code snippet demonstrates:

  • Building and training a text classifier model
  • Saving the model in a file called “message-classification-model.h5”
  • Saving the .h5 file in a .tgz file called “message-classification-model.tgz”

    from keras.models import Sequential
    from keras import layers

    # Build a simple text classifier: an embedding layer feeding a dense output layer
    model = Sequential()
    model.add( layers.Embedding( input_dim = vocab_size, output_dim = 50, input_length = max_len, trainable = True ) )
    model.add( layers.Flatten() )
    model.add( layers.Dense( 2, activation='sigmoid' ) )
    model.add( layers.Activation( "softmax" ) )
    model.compile( optimizer = "adam", loss = "binary_crossentropy", metrics = [ "accuracy" ] )

    # Train the classifier on the tokenized, padded training data
    model.fit( X_train, y_train, batch_size = 10, epochs = 15, verbose = False, validation_split = 0.1 )

    # Save the model with the Keras .save() API; the file name must end in .h5 or .hdf5
    model.save( "message-classification-model.h5" )

    # Package the saved .h5 file in a .tgz file for storing in Watson Machine Learning
    !tar -zcvf message-classification-model.tgz message-classification-model.h5

Where:

  • vocab_size is the number of words in the tokenizer dictionary (see the sketch after this list)
  • max_len is the length of the tokenized, padded training input strings
  • X_train contains the tokenized, padded training input strings
  • y_train contains the binary-encoded labels
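
These variables come from tokenizing and padding the training text before the model is built. The following snippet is a minimal sketch of one way to produce them with the Keras preprocessing utilities; the variables train_texts (a list of message strings) and train_labels (a list of integer class labels) are illustrative and are not taken from the sample notebook.

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical

# Fit a tokenizer on the raw training messages (train_texts is an illustrative name)
tokenizer = Tokenizer()
tokenizer.fit_on_texts( train_texts )
sequences = tokenizer.texts_to_sequences( train_texts )

vocab_size = len( tokenizer.word_index ) + 1     # words in the tokenizer dictionary (index 0 is reserved for padding)
max_len = max( len( seq ) for seq in sequences ) # length to pad every tokenized string to

X_train = pad_sequences( sequences, maxlen = max_len )      # tokenized, padded training input strings
y_train = to_categorical( train_labels, num_classes = 2 )   # binary-encoded labels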

For the full code example, see: the sample notebook external link

 

Interface option 1: Watson Machine Learning Python client

Step 1: Store the model in your Watson Machine Learning repository

You can store the model in your Watson Machine Learning repository using the Watson Machine Learning Python client store_model method external link.

Note that to deploy a Keras model, you must pass FRAMEWORK_LIBRARIES (specifying Keras and its version) along with the other meta properties.

Example Python code

from watson_machine_learning_client import WatsonMachineLearningAPIClient
client = WatsonMachineLearningAPIClient( <your-credentials> )
metadata = {
    client.repository.ModelMetaNames.NAME: "keras model",
    client.repository.ModelMetaNames.FRAMEWORK_NAME: "tensorflow",
    client.repository.ModelMetaNames.FRAMEWORK_VERSION: "1.5",
    client.repository.ModelMetaNames.FRAMEWORK_LIBRARIES: [{'name':'keras', 'version': '2.1.3'}]
}
model_details = client.repository.store_model( model="message-classification-model.tgz", meta_props=metadata )

Where:

  • <your-credentials> contains credentials for your Watson Machine Learning service (see: Looking up credentials)
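
For illustration only, the credentials argument is a Python dictionary copied from your instance's service credentials. A typical shape is sketched below; the exact keys (for example, apikey versus username and password) depend on how your service instance was created, so copy the values from your own service credentials rather than from this example.

wml_credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",  # region-specific service endpoint
    "apikey": "***",                             # or "username" and "password", depending on your credentials
    "instance_id": "***"
}
client = WatsonMachineLearningAPIClient( wml_credentials )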

Step 2: Deploy the stored model in your Watson Machine Learning service

The following example demonstrates deploying the stored model as a web service, which is the default deployment type:

model_id = model_details["metadata"]["guid"]
model_deployment_details = client.deployments.create( artifact_uid=model_id, name="My Keras model deployment" )

See: Deployments.create external link
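
After the deployment succeeds, you might verify the web service by sending a scoring request from the same notebook. The following is a minimal sketch, assuming padded_message is a single input already tokenized and padded to max_len (an illustrative variable, not part of the example above); the exact payload shape depends on your model's input.

scoring_url = client.deployments.get_scoring_url( model_deployment_details )

# One tokenized, padded message as a list of integers (padded_message is illustrative)
scoring_payload = { "values": [ padded_message.tolist() ] }

predictions = client.deployments.score( scoring_url, scoring_payload )
print( predictions )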

 

Interface option 2: Watson Machine Learning CLI

Prerequisite: Set up the CLI environment.
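
If the CLI is not yet configured, setup might look roughly like the following sketch; the plugin name machine-learning is an assumption here, and the linked setup instructions are authoritative (including any environment variables the ml commands require).

>ibmcloud login
>ibmcloud plugin install machine-learning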

Step 1: Store the model in your Watson Machine Learning repository

Example command and corresponding output

>ibmcloud ml store <model-filename> <manifest-filename>
Starting to store ...
OK
Model store successful. Model-ID is '145bca56-134f-7e89-3c12-0d3a7859d21f'.

Where:

  • <model-filename> is the path and name of the .tgz file
  • <manifest-filename> is the path and name of a manifest file containing metadata about the model being stored. Note that the metadata specifying Keras and its version must be provided under the framework property in the manifest file used to store the model.

Sample manifest file contents

name: My Keras model
framework:
  name: tensorflow
  version: '1.5'
  libraries: [{"name": "keras", "version": "2.1.3"}]

See: store CLI command external link

Step 2: Deploy the stored model in your Watson Machine Learning service

The following example demonstrates deploying the stored model as a web service, which is the default deployment type:

Example command and corresponding output

>ibmcloud ml deploy <model-id> "My Keras model deployment"
Deploying the model with MODEL-ID '145bca56-134f-7e89-3c12-0d3a7859d21f'...
DeploymentId       316a89e2-1234-6472-1390-c5432d16bf73
Scoring endpoint   https://us-south.ml.cloud.ibm.com/v3/wml_instances/5da31...
Name               My Keras model deployment
Type               tensorflow-1.5
Runtime            None Provided
Status             DEPLOY_SUCCESS
Created at         2019-01-14T19:47:51.735Z
OK
Deploy model successful

Where:

  • <model-id> was returned in the output from the store command

See: deploy CLI command external link