Importing trained TensorFlow models into Watson Machine Learning

If you have a TensorFlow model that you trained outside of IBM Watson Machine Learning, this topic describes how to import that model into your Watson Machine Learning service.

 

Restrictions and requirements

  • You must use tf.saved_model.builder.SavedModelBuilder to save (serialize) a TensorFlow model.
  • The saved (serialized) model file must be at the top level of the .tar.gz file that you upload to the Watson Machine Learning repository using the client.repository.store_model() API (see the archive check in the example below).
  • tf.estimator is not supported.
  • The only supported deployment types for TensorFlow models are: web service and batch.
  • See also: Supported frameworks

 

Example

For a notebook that demonstrates importing a TensorFlow model, see: sample notebook external link

 

Interface options

  • Interface option 1: Watson Machine Learning Python client
  • Interface option 2: Watson Machine Learning CLI

 

Step 0 for interface options 1 and 2: Build, train, and save a model

The following Python code snippet demonstrates:

  • Building and training a text classifier
  • Saving the trained model in a directory called "message-classification-model-dir"
  • Saving the model files in a tar.gz file called "message-classification-model.tar.gz"

import tensorflow as tf

# Hyperparameters such as num_inputs, num_layer1_nodes, num_output_classes,
# learning_rate, and num_epochs are defined in the full sample notebook

# Build the graph
# Input: X, labels: y
X = tf.placeholder( tf.float32, shape = ( None, num_inputs ) )
y = tf.placeholder( tf.float32, shape = ( None, num_output_classes ) )
# Layer 1
w1 = tf.Variable( tf.truncated_normal( shape=[ num_inputs, num_layer1_nodes ] ) )
b1 = tf.Variable( tf.zeros( shape=[ num_layer1_nodes ] ) )
layer1_output = tf.nn.relu( tf.matmul( X, w1 ) + b1 )
# Output layer (keep the raw logits for the loss, softmax for predictions)
w2 = tf.Variable( tf.truncated_normal( shape=[ num_layer1_nodes, num_output_classes ] ) )
b2 = tf.Variable( tf.zeros( shape=[ num_output_classes ] ) )
logits = tf.matmul( layer1_output, w2 ) + b2
output = tf.nn.softmax( logits )

# Train the model
# Note: softmax_cross_entropy expects raw logits, not softmax output
loss = tf.losses.softmax_cross_entropy( onehot_labels=y, logits=logits )
optimizer = tf.train.AdamOptimizer( learning_rate ).minimize( loss )
session = tf.Session()
session.run( tf.global_variables_initializer() )
for epoch in range( num_epochs ):
    session.run( optimizer, feed_dict={ X : X_train, y : y_train } )

# Define the serving signature that maps the model's input and output tensors
classification_signature = tf.saved_model.signature_def_utils.predict_signature_def(
    inputs={ "input" : X }, outputs={ "prediction" : output } )

# Save the trained model
builder = tf.saved_model.builder.SavedModelBuilder( "message-classification-model-dir" )
builder.add_meta_graph_and_variables(
      session, [ tf.saved_model.tag_constants.SERVING ],
      signature_def_map={ "classify_message" : classification_signature },
      main_op=tf.tables_initializer() )
builder.save()

# Package the saved model files at the top level of a tar.gz archive
!tar -zcvf message-classification-model.tar.gz -C message-classification-model-dir .
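
As noted in the restrictions above, the serialized model file must sit at the top level of the .tar.gz archive. A quick way to confirm the layout before uploading (tar -t lists the archive contents without extracting):

!tar -tzvf message-classification-model.tar.gz

The listing should show saved_model.pb at the root of the archive, alongside the variables directory that SavedModelBuilder creates.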

Where:

  • vocab_size is the number of words in the dictionary (defined in the full sample notebook)
  • X_train contains the tokenized, padded training input strings
  • y_train contains the binary-encoded (one-hot) labels
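
For illustration only, with hypothetical values num_inputs = 4 and num_output_classes = 2, the training arrays might look like this:

import numpy as np

# Two tokenized messages, zero-padded to a fixed length of num_inputs = 4
X_train = np.array( [ [ 12,  7, 33, 0 ],
                      [  5, 19,  0, 0 ] ], dtype=np.float32 )

# One-hot (binary-encoded) labels for num_output_classes = 2
y_train = np.array( [ [ 1, 0 ],
                      [ 0, 1 ] ], dtype=np.float32 )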

For the full code example, see: sample notebook external link

 

Interface option 1: Watson Machine Learning Python client

Step 1: Store the model in your Watson Machine Learning repository

You can store the model in your Watson Machine Learning repository using the Watson Machine Learning Python client store_model method external link.
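
Both format options below assume a Python client object that has already been authenticated with your service credentials, along the lines of this sketch:

from watson_machine_learning_client import WatsonMachineLearningAPIClient

# <your-credentials> is the credentials dictionary for your
# Watson Machine Learning service (see: Looking up credentials)
client = WatsonMachineLearningAPIClient( <your-credentials> )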

Format options:

  • Trained model saved in a directory:

    metadata = {
        client.repository.ModelMetaNames.NAME: "TensorFlow model (directory)",
        client.repository.ModelMetaNames.FRAMEWORK_NAME: "tensorflow",
        client.repository.ModelMetaNames.FRAMEWORK_VERSION: "1.13"
    }
    model_details_dir = client.repository.store_model( model="message-classification-model-dir", meta_props=metadata )
    
  • Trained model saved in a tar.gz file:

    metadata = {
        client.repository.ModelMetaNames.NAME: "TensorFlow model (tar.gz)",
        client.repository.ModelMetaNames.FRAMEWORK_NAME: "tensorflow",
        client.repository.ModelMetaNames.FRAMEWORK_VERSION: "1.13"
    }
    model_details_targz = client.repository.store_model( model="message-classification-model.tar.gz", meta_props=metadata )
    

Where:

  • <your-credentials> contains credentials for your Watson Machine Learning service (see: Looking up credentials)
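
Either call returns details about the stored model. To confirm that the model was stored, you can list the contents of your repository with the same client:

# Print a table of the models currently stored in the repository
client.repository.list_models()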

Step 2: Deploy the stored model in your Watson Machine Learning service

The following example demonstrates deploying the stored model as a web service, which is the default deployment type:

model_id_dir = model_details_dir["metadata"]["guid"]
deployment_details_dir = client.deployments.create( artifact_uid=model_id_dir, name="TensorFlow deployment (directory)" )

See: Deployments.create external link
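
After the deployment succeeds, you can look up its scoring endpoint and send a test payload. The payload below is hypothetical; message_tokens stands for a tokenized, padded input row whose shape must match the model's input signature:

scoring_url = client.deployments.get_scoring_url( deployment_details_dir )

# message_tokens is a hypothetical tokenized, padded input row
payload = { "values" : [ message_tokens ] }
result = client.deployments.score( scoring_url, payload )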

 

Interface option 2: Watson Machine Learning CLI

Prerequisite: Set up the CLI environment.
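
As a rough sketch (the exact steps and variable names are described in the CLI setup documentation, so treat the following as an assumption to verify there), setup involves installing the Machine Learning plug-in for the IBM Cloud CLI and exporting your service credentials:

ibmcloud plugin install machine-learning
export ML_ENV=<url>
export ML_INSTANCE=<instance-id>
export ML_USERNAME=<username>
export ML_PASSWORD=<password>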

Step 1: Store the model in your Watson Machine Learning repository

Example command and corresponding output

>ibmcloud ml store <model-filename> <manifest-filename>
Starting to store ...
OK
Model store successful. Model-ID is '145bca56-134f-7e89-3c12-0d3a7859d21f'.

Where:

  • <model-filename> is the path and name of the tar.gz file
  • <manifest-filename> is the path and name of a manifest file containing metadata about the model being stored

Sample manifest file contents

name: My TensorFlow model
framework:
  name: tensorflow
  version: '1.13'
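
For example, using the archive from Step 0 and the manifest above saved in a hypothetical file named manifest.yml:

>ibmcloud ml store message-classification-model.tar.gz manifest.yml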

See: store CLI command external link

Step 2: Deploy the stored model in your Watson Machine Learning service

The following example demonstrates deploying the stored model as a web service, which is the default deployment type:

Example command and corresponding output

>ibmcloud ml deploy <model-id> "My TensorFlow model deployment"
Deploying the model with MODEL-ID '145bca56-134f-7e89-3c12-0d3a7859d21f'...
DeploymentId       316a89e2-1234-6472-1390-c5432d16bf73
Scoring endpoint   https://us-south.ml.cloud.ibm.com/v3/wml_instances/5da31...
Name               My TensorFlow model deployment
Type               tensorflow-1.13
Runtime            None Provided
Status             DEPLOY_SUCCESS
Created at         2019-01-14T19:47:51.735Z
OK
Deploy model successful

Where:

  • <model-id> was returned in the output from the store command

See: deploy CLI command external link