Use a Keras experiment to transfer style with Watson Machine Learning

This notebook contains the steps and code required to demonstrate the style transfer technique using the Watson Machine Learning service. It introduces commands for getting data, persisting a training definition to the Watson Machine Learning repository, and training a model.
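For background, the neural style-transfer technique that such experiments train compares images through the Gram matrices of convolutional feature maps. The following is a minimal NumPy sketch of that style loss, using random arrays as stand-ins for real VGG19 activations (the shapes and the `style_loss` helper here are illustrative, not part of the notebook's training code):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of an (height, width, channels) feature map."""
    h, w, c = features.shape
    flat = features.reshape(h * w, c)
    return flat.T @ flat  # (channels, channels) channel correlations

def style_loss(style_features, generated_features):
    """Normalized squared difference between the two Gram matrices."""
    h, w, c = style_features.shape
    s = gram_matrix(style_features)
    g = gram_matrix(generated_features)
    return np.sum((s - g) ** 2) / (4.0 * (c ** 2) * ((h * w) ** 2))

# Random stand-ins for feature maps of a style image and a generated image
rng = np.random.default_rng(0)
style = rng.standard_normal((7, 7, 8))
generated = rng.standard_normal((7, 7, 8))
print(style_loss(style, generated))  # positive scalar
print(style_loss(style, style))      # 0.0 for identical features
```

Minimizing this quantity over several VGG layers is what pushes the generated image toward the style image's texture statistics.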

Some familiarity with Python is helpful. This notebook uses Python 3.6 and Watson Studio environments.

Learning goals

In this notebook you learn to work with Watson Machine Learning experiments to train Deep Learning models (Keras).


  1. Set up
  2. Create the training definition
  3. Define the experiment
  4. Run the experiment
  5. Results
  6. Summary

1. Set up

Before you use the sample code in this notebook, you must perform the following setup tasks:

  • Create a Watson Machine Learning (WML) Service instance (a free plan is offered and information about how to create the instance is here).
  • Create a Cloud Object Storage (COS) instance (a lite plan is offered and information about how to order storage is here).
    Note: When using Watson Studio, you already have a COS instance associated with the project you are running the notebook in.
  • Create new credentials with HMAC:

    • Go to your COS dashboard.
    • In the Service credentials tab, click New Credential+.
    • Add the inline configuration parameter: {"HMAC":true}, click Add. (For more information, see HMAC.)

      This configuration parameter adds the following section to the instance credentials, (for use later in this notebook):

      "cos_hmac_keys": {
            "access_key_id": "-------",
            "secret_access_key": "-------"

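Once the credential is created, it is worth checking that the HMAC section actually made it into the credentials dictionary before going further. A minimal sketch, using a redacted stand-in for the credentials above:

```python
# Redacted stand-in for the credentials dictionary shown above
cos_credentials = {
    "apikey": "***",
    "cos_hmac_keys": {
        "access_key_id": "-------",
        "secret_access_key": "-------",
    },
}

# Fail early if the credential was created without {"HMAC": true}
hmac_keys = cos_credentials.get("cos_hmac_keys")
assert hmac_keys is not None, 'Re-create the credential with {"HMAC": true}'
print("HMAC keys found:", sorted(hmac_keys))
# → HMAC keys found: ['access_key_id', 'secret_access_key']
```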
In this section:

1.1 Work with Cloud Object Storage (COS)

Import the Boto library, which allows Python developers to manage COS.

In [1]:
# Import the boto library
import ibm_boto3
from ibm_botocore.client import Config
import os
import json
import warnings
import urllib
import time

Authenticate to COS and define the endpoint you will use.

  1. Enter your COS credentials in the following cell. You can find these credentials in your COS instance dashboard under the Service credentials tab as described in the set up section.

  2. Go to the Endpoints tab in the COS instance's dashboard to get the endpoint information.

In [2]:
# Enter your COS credentials.
cos_credentials = {
  "apikey": "***",
  "cos_hmac_keys": {
    "access_key_id": "***",
    "secret_access_key": "***"
  },
  "endpoints": "",
  "iam_apikey_description": "***",
  "iam_apikey_name": "***",
  "iam_role_crn": "crn:v1:bluemix:public:iam::::serviceRole:Writer",
  "iam_serviceid_crn": "***",
  "resource_instance_id": "***"
}
In [4]:
api_key = cos_credentials['apikey']
service_instance_id = cos_credentials['resource_instance_id']
auth_endpoint = ''
# Enter your Endpoint information.
service_endpoint = ''

Create the Boto resource by providing type, endpoint_url and credentials.

In [5]:
cos = ibm_boto3.resource('s3',
                         ibm_api_key_id=api_key,
                         ibm_service_instance_id=service_instance_id,
                         ibm_auth_endpoint=auth_endpoint,
                         config=Config(signature_version='oauth'),
                         endpoint_url=service_endpoint)

Create the buckets you will use to store training data and training results.

Note: Bucket names must be unique.

In [6]:
# Create two buckets, style-data-example and style-results-example
from uuid import uuid4

bucket_uid = str(uuid4())

buckets = ['style-data-example-' + bucket_uid, 'style-results-example-' + bucket_uid]
for bucket in buckets:
    if cos.Bucket(bucket) not in cos.buckets.all():
        print('Creating bucket "{}"...'.format(bucket))
        try:
            cos.create_bucket(Bucket=bucket)
        except ibm_boto3.exceptions.ibm_botocore.client.ClientError as e:
            print('Error: {}.'.format(e.response['Error']['Message']))
Creating bucket "style-data-example-34a650af-8427-4830-a449-b501b21e7555"...
Creating bucket "style-results-example-34a650af-8427-4830-a449-b501b21e7555"...

You have now created two new buckets, each suffixed with a unique identifier:

  • style-data-example-<bucket_uid>
  • style-results-example-<bucket_uid>

Display a list of buckets for your COS instance to verify that the buckets were created.

In [ ]:
# Display the buckets
for bucket in cos.buckets.all():
    print(bucket.name)

1.2 Download training data and upload it to COS buckets

Download the training data and upload it to the training data bucket (the style-data-example bucket created above). Then, create a list of links for the training data set.

The following code snippet creates the STYLE_DATA folder and downloads the files from the links to the folder.

Tip: First, use the !pip install wget command to install the wget library:

In [ ]:
!pip install wget
In [9]:
import wget, os

# Create folder
data_dir = 'STYLE_DATA'
if not os.path.isdir(data_dir):
    os.mkdir(data_dir)

# Enter the links to the training data files
links = ['']

# Download the links to the folder
for i in range(len(links)):
    if 'Gogh' in links[i]: 
        filepath = os.path.join(data_dir, 'van_gogh.jpg')
    elif 'Krak' in links[i]: 
        filepath = os.path.join(data_dir, 'krakow.jpg')
    elif 'Kandinsky' in links[i]:
        filepath = os.path.join(data_dir, 'kandinsky.jpg')
    else:
        filepath = os.path.join(data_dir, links[i].split('/')[-1])

    if not os.path.isfile(filepath):
        urllib.request.urlretrieve(links[i], filepath)

# List the files in the STYLE_DATA folder
!ls $data_dir
kandinsky.jpg  van_gogh.jpg
krakow.jpg     vgg19_weights_tf_dim_ordering_tf_kernels_notop.h5
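The files downloaded above still need to reach COS. A minimal sketch of the upload step, assuming the `cos` resource and `buckets` list from the earlier cells; the `collect_uploads` helper is introduced here purely for illustration:

```python
import os
import tempfile

def collect_uploads(data_dir):
    """Pair each file in data_dir with the object key to store it under."""
    return [(os.path.join(data_dir, name), name)
            for name in sorted(os.listdir(data_dir))]

# With the `cos` resource and `buckets` list defined earlier, the pairs
# can be uploaded to the training data bucket like this:
#   for path, key in collect_uploads('STYLE_DATA'):
#       cos.Bucket(buckets[0]).upload_file(path, key)

# Demonstrate the pairing on a temporary directory with empty stand-in files
with tempfile.TemporaryDirectory() as tmp:
    for name in ('krakow.jpg', 'van_gogh.jpg'):
        open(os.path.join(tmp, name), 'wb').close()
    print([key for _, key in collect_uploads(tmp)])
    # → ['krakow.jpg', 'van_gogh.jpg']
```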

Base image: Cracow - main market square

In [10]:
from IPython.display import Image
Image(filename=os.path.join(data_dir, 'krakow.jpg'), width=1000)