Using NeuNetS models deployed to Watson Machine Learning

The NeuNetS tool in IBM Watson Studio synthesizes a neural network and trains it on your training data without you having to design or build anything by hand. This topic describes how to deploy your NeuNetS model from the NeuNetS tool to your IBM Watson Machine Learning service, test the deployment, and use the deployed model in your apps or processes. NeuNetS is currently in beta.

 

Step 1: Deploy the model

After training is complete, in the NeuNetS tool interface, click Deploy model to Watson Machine Learning.

When deployment is complete, a message appears with a link to the deployment details page.

Note: You can navigate to your deployment details page from your project at any time:

  1. From your project in Watson Studio, click the Deployments tab
  2. Click on the name of the deployment to open the deployment details page

See also: Manual deployment

 

Step 2: [Optional] Test the deployment in Watson Studio

From the deployment details page, click the Test tab.

Then, in the input data box, paste valid JSON-formatted payload data.

See:

Example

Testing the CIFAR-10 sample model
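As a rough sketch, scoring payloads for Watson Machine Learning deployments take the form of a JSON object with a "values" array, where each entry is one input record. The exact shape of each record depends on your model (a string for a text model, pixel data for an image model such as the CIFAR-10 sample); the example below uses a hypothetical text record only to illustrate the structure:

```python
import json

# Sketch of a scoring payload: "values" holds one entry per input record.
# For a text classifier a record is a string; for an image model such as
# the CIFAR-10 sample, a record would instead be the image's pixel data.
payload = {"values": ["Haha awesome, be there in a minute"]}

# The Test tab expects JSON text, so serialize the payload before pasting it
payload_json = json.dumps(payload)
print(payload_json)
```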

 

Step 3: Use the deployment in your apps or processes

You can use your deployed NeuNetS model the same way you would use any model you deploy to Watson Machine Learning as a web service.

See:

Example

This example demonstrates sending a text message to a deployment of the UCI: SMS Spam Collection sample NeuNetS model for classification, using the Watson Machine Learning Python client.

# Score the deployment using the Watson Machine Learning Python client
from watson_machine_learning_client import WatsonMachineLearningAPIClient
client  = WatsonMachineLearningAPIClient( wml_credentials )

# One input record: the SMS text to classify
payload = { "values" : [ "Haha awesome, be there in a minute" ] }

# Send the payload to the deployment's scoring endpoint
result  = client.deployments.score( deployment_endpoint_url, payload )
result

Output:

SMS model output
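To work with the scoring output programmatically, you can pair up the response fields with their values. This is only a sketch: it assumes the common Watson Machine Learning response shape of a dict with "fields" (column names) and "values" (one row per input record), and the hard-coded result below is illustrative, not actual model output:

```python
# Hypothetical scoring response, assuming the "fields"/"values" shape
result = {
    "fields": ["prediction", "probability"],
    "values": [["ham", [0.98, 0.02]]],
}

# Pair each field name with its value for the first (and only) input record
first_row = dict(zip(result["fields"], result["values"][0]))
print(first_row["prediction"])  # "ham" for a non-spam message
```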

 

Manual deployment

If you download the model built by the NeuNetS tool to work with it locally, you can also deploy the model to Watson Machine Learning manually.

Example

This example demonstrates using the Watson Machine Learning Python client to store a NeuNetS model in the Watson Machine Learning repository and then deploy the model as a web service. In this example, the model has been downloaded to the local working directory in a file called neunets-model.tar.gz. This sample Python code can be run in a notebook in Watson Studio.


# Look up credentials for your Watson Machine Learning service
wml_credentials = {
    "apikey"      : "",
    "instance_id" : "",
    "password"    : "",
    "url"         : "",
    "username"    : ""
}
# Use the Watson Machine Learning Python client to store the model
# in the repository and then deploy it as a web service
from watson_machine_learning_client import WatsonMachineLearningAPIClient
client = WatsonMachineLearningAPIClient( wml_credentials )
metadata = {
   client.repository.ModelMetaNames.NAME              : "my-neunets-model", # You can choose your model name
   client.repository.ModelMetaNames.DESCRIPTION       : "nnets",
   client.repository.ModelMetaNames.AUTHOR_NAME       : "neunets",
   client.repository.ModelMetaNames.FRAMEWORK_NAME    : "tensorflow",
   client.repository.ModelMetaNames.FRAMEWORK_VERSION : "1.5",
   client.repository.ModelMetaNames.RUNTIME_NAME      : "python",
   client.repository.ModelMetaNames.RUNTIME_VERSION   : "3.5",
   'frameworkLibraries': [ { "name": "keras", "version": "2.1.5" } ]
}

# Store the model in the Watson Machine Learning repository
model_details = client.repository.store_model( model="neunets-model.tar.gz", meta_props=metadata, training_data=None )

# Deploy the stored model as a web service (online deployment)
deployment_details = client.deployments.create(
    model_details["metadata"]["guid"],
    "my-neunets-deployment",
    "Online deployment of NeuNetS model."
)

Restriction: Deploying NeuNetS models using the Watson Machine Learning CLI is not supported.