Deploying Python functions

This topic describes how to deploy Python functions.

You can deploy Python functions in Watson Machine Learning the same way that you can deploy models. Your tools and apps can use the Watson Machine Learning Python client or REST API to send data to your deployed functions the same way that they send data to deployed models. Deploying functions gives you the ability to hide details (such as credentials), preprocess data before passing it to models, perform error handling, and include calls to multiple models, all within the deployed function instead of in your application.
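For example, a deployable function can hide a credential, preprocess the incoming rows, and handle errors, all inside the closure. The sketch below is illustrative only: `call_model` is a hypothetical stand-in for whatever model-scoring call your application makes, and `api_key` is a placeholder, not a real Watson Machine Learning API.

```python
def make_deployable_function():
    # Values defined here are captured by the closure, so callers of the
    # deployed function never see them.
    api_key = "SECRET-API-KEY"  # placeholder credential, hidden inside the deployment

    def call_model(features):
        # Hypothetical stand-in for a real model-scoring call
        return [sum(features)]

    def score(payload):
        try:
            values = payload["input_data"][0]["values"]
            # Preprocess: coerce every input value to float before scoring
            predictions = [call_model([float(v) for v in row]) for row in values]
            return {"predictions": [{"fields": ["prediction"],
                                     "values": predictions}]}
        except (KeyError, IndexError, TypeError, ValueError) as err:
            # Error handling happens inside the function, not in the app
            return {"error": str(err)}

    return score
```

Because the preprocessing and error handling live inside `score()`, the calling application only ever sends the standard scoring payload.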

Deploying a function from a deployment space

This topic describes how to deploy a function using the Python client, but you can also deploy a function from a deployment space via the user interface. For details on creating and deploying from a deployment space, see Deployment spaces.

Watson Machine Learning Python client library reference


There are six basic steps for creating and deploying functions in Watson Machine Learning:

  1. Define the function
  2. Authenticate and define a space
  3. Get the software specification
  4. Store the function in the repository
  5. Deploy the stored function
  6. Send data to the function for processing

Step 1: Define the function

To define a function, create a Python closure with a nested function named “score”.

Example Python code

def my_deployable_function():
    def score( payload ):
        message_from_input_payload = payload.get("input_data")[0].get("values")[0][0]
        response_message = "Received message - {0}".format(message_from_input_payload)
        # Score using the pre-defined model
        score_response = {
            'predictions': [{'fields': ['Response_message_field'],
                             'values': [[response_message]]
                            }]
        }
        return score_response
    return score

You could test your function like this:

input_data = { "input_data": [{ "fields": [ "message" ],
                                "values": [[ "Hello world!" ]]
                              }]
             }
function_result = my_deployable_function()( input_data )
print( function_result )

It prints the score response, which contains the message “Received message - Hello world!”.

Python closures

To learn more about closures, see the Python documentation on nested functions and closures.
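As a minimal illustration of how a closure captures state from its enclosing scope (the same mechanism `my_deployable_function` relies on):

```python
def make_greeter(greeting):
    # 'greeting' is captured from the enclosing scope by the nested function
    def greet(name):
        return "{0}, {1}!".format(greeting, name)
    return greet

hello = make_greeter("Hello")
print(hello("world"))  # Hello, world!
```

When the outer function is saved to the repository, captured values like `greeting` travel with it, which is how credentials and configuration stay inside the deployment.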

Requirements for the nested, “score” function

The following are requirements and usage notes for the nested function for online deployments:

  • score() must accept a single, JSON-serializable input parameter.
  • The scoring input payload is passed as the value of the score() input parameter, so the payload must be handled accordingly inside score().
  • The scoring input payload must match the input requirements of the Python function.
  • Additionally, the scoring input payload must include an array with the name values, as shown in this example schema. Note that the input_data parameter is mandatory in the payload.
      { "input_data": [{
          "values": [[ "Hello world!" ]]
      }]}
  • The output payload returned by score() must follow the schema of the “score_response” variable described in the “Step 1: Define the function” section for status code 200. Note that the predictions parameter, which has an array of JSON objects as its value, is mandatory in the score() output.
  • The score() function must return a JSON-serializable object (for example, a dictionary or a list).
  • When a Python function is saved with the Python client by passing a reference to the outer function, only the code in the scope of the outer function (including its nested functions) is saved. Code outside the outer function’s scope is not saved and therefore is not available when you deploy the function.
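Putting these requirements together, a score() function can validate the incoming payload before using it. The sketch below is illustrative (the error messages and the `echo` field name are made up); the checks mirror the rules listed above:

```python
def my_validating_function():
    def score(payload):
        # Enforce the mandatory "input_data" key
        if not isinstance(payload, dict) or "input_data" not in payload:
            return {"error": "payload must be a dict containing 'input_data'"}
        first = payload["input_data"][0]
        # Enforce the mandatory "values" array
        if "values" not in first:
            return {"error": "'input_data' entries must contain a 'values' array"}
        # Return the mandatory "predictions" key: an array of JSON objects
        return {"predictions": [{"fields": ["echo"],
                                 "values": first["values"]}]}
    return score
```

Returning an error payload (rather than raising) keeps the response JSON-serializable even for malformed input.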

Step 2: Authenticate with the Python client

  1. Add a notebook to your project by clicking Add to project and selecting Notebook.

  2. Authenticate with the Python client, following the instructions in Authentication.

  3. Initialize the client with the credentials:

     from ibm_watson_machine_learning import APIClient
     client = APIClient(wml_credentials)
  4. (Optional) Create a new deployment space. To use an existing deployment space, skip this step and enter the name of the space in the next step, entering the credentials for your Cloud Object Storage.

     metadata = {
         client.spaces.ConfigurationMetaNames.NAME: 'YOUR DEPLOYMENT SPACE NAME',
         client.spaces.ConfigurationMetaNames.DESCRIPTION: 'description',
         client.spaces.ConfigurationMetaNames.STORAGE: {
             "type": "bmcos_object_storage",
             "resource_crn": 'PROVIDE COS RESOURCE CRN'
         },
         client.spaces.ConfigurationMetaNames.COMPUTE: {
             "name": 'INSTANCE NAME',
             "crn": 'PROVIDE THE INSTANCE CRN'
         }
     }
     space_details = client.spaces.store(meta_props=metadata)
  5. Get the ID for the deployment space:
     def guid_from_space_name(client, space_name):
         space = client.spaces.get_details()
         return next(item for item in space['resources'] if item['entity']["name"] == space_name)['metadata']['guid']
  6. Enter the details for the deployment space, putting the name of your deployment space in place of ‘YOUR DEPLOYMENT SPACE’.
     space_uid = guid_from_space_name(client, 'YOUR DEPLOYMENT SPACE')
     print("Space UID = " + space_uid)

    Out: Space UID = b8eb6ec0-dcc7-425c-8280-30a1d7a9c58a

  7. Set the default deployment space to work in:

     client.set.default_space(space_uid)


Step 3: Get the software specification

Your function requires a software specification to run.

  1. To view the list of predefined specifications:

     client.software_specifications.list()

  2. Find the ID of the software specification environment that the function will use:

     software_spec_id =  client.software_specifications.get_id_by_name('ai-function_0.1-py3.6')

Step 4: Store the function in the repository

  1. Create the function metadata.
     function_meta_props = {
         client.repository.FunctionMetaNames.NAME: 'sample_function_with_sw',
         client.repository.FunctionMetaNames.SOFTWARE_SPEC_ID: software_spec_id
     }
  2. Store the function and extract the function UID from the details.
     function_artifact = client.repository.store_function(meta_props=function_meta_props, function=my_deployable_function)
     function_uid = client.repository.get_function_id(function_artifact)
     print("Function UID = " + function_uid)

    Function UID = 0f263463-21ec-4d2f-a277-2a7525f64b4e

  3. Get the saved function metadata from Watson Machine Learning using the function UID.
     function_details = client.repository.get_details(function_uid)
     from pprint import pprint
     pprint(function_details)
  4. To confirm the function was saved, list all of the stored functions using the list_functions method.

     client.repository.list_functions()

Step 5: Deploy the stored function

  1. To select the hardware runtime environment to deploy the function, first view the available hardware configurations:

     client.hardware_specifications.list()

  2. Select a hardware configuration:

     hardware_spec_id = client.hardware_specifications.get_id_by_name('NAME OF THE HARDWARE SPECIFICATION')

    For example:

     hardware_spec_id = client.hardware_specifications.get_id_by_name('M')
  3. Deploy the Python function to the deployment space by creating deployment metadata and using the function UID obtained in the previous section.
     deploy_meta = {
         client.deployments.ConfigurationMetaNames.NAME: "Web scraping python function deployment",
         client.deployments.ConfigurationMetaNames.ONLINE: {},
         client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: { "id": hardware_spec_id }
     }
  4. Create the deployment.
     deployment_details = client.deployments.create(function_uid, meta_props=deploy_meta)


  5. View the deployment details.

     deployment_uid = client.deployments.get_uid(deployment_details)
     client.deployments.get_details(deployment_uid)

  6. To confirm that the deployment was created successfully, list all deployments.

     client.deployments.list()

Step 6: Send data to the function for processing

Follow these steps to score the function and return a prediction.

  1. Find the deployment you plan to score and get its UID.
     deployment_uid = client.deployments.get_uid(deployment_details)
  2. Prepare the scoring payload, matching the schema of the function.
     job_payload = {
         client.deployments.ScoringMetaNames.INPUT_DATA: [{
             'fields': ['url'],
             'values': ['']
         }]
     }

    {'input_data': [{'fields': ['url'], 'values': ['']}]}

  3. Generate the prediction and display the results:
     job_details = client.deployments.score(deployment_uid, job_payload)
     print(job_details)

    ['02', '2018', '2019', '459', '49', '575', 'about', 'accelerate', 'accelerates', 'accelerator']

Increasing scalability for a function

When you deploy a function from a deployment space or programmatically, a single copy of the function is deployed by default. To increase scalability, you can increase the number of replicas by editing the configuration of the deployment. Additional replicas allow for a larger volume of scoring requests.

The following example uses the Python client API to set the number of replicas to 3.

change_meta = {
                client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: {
                    "name": "NAME OF THE HARDWARE SPECIFICATION",
                    "num_nodes": 3
                }
              }
client.deployments.update(<deployment_id>, change_meta)