You can add connections to a broad array of services (data sources) in projects and catalogs. Source connections can only be used to read data; target connections can only be used to load (save) data. When you create a target connection, be sure to use credentials that have Write permission or you won’t be able to save data to the target.
Not all connections are available for all Watson Studio and Watson Knowledge Catalog plans.
From a project, you must create a connection to a data source before you can read data from it or load data to it.
If you are connected to Watson Machine Learning Server, you can promote the connection asset from a project to a deployment space.
From a catalog, you must add a connection asset before you can add data assets for the tables you can access from the connection.
Note: This topic does not apply to streams flows. See Creating a connection in a streams flow.
- Analytics Engine HDFS (formerly known as “BigInsights HDFS”)
- Cloud Object Storage
- Cloud Object Storage (infrastructure)
- Cloudant

When you create your Cloudant service on IBM Cloud, you must choose “Use both legacy credentials and IAM” for Available authentication methods.
- Cognos Analytics (supports source connections only)
- Compose for MySQL
- Databases for PostgreSQL
- Db2 Big SQL
- Db2 Event Store
- Db2 for i
- Db2 for z/OS
- Db2 Hosted
- Db2 on Cloud
- Db2 Warehouse
- Netezza (PureData System for Analytics)
- Planning Analytics (formerly known as “IBM TM1”)
- Amazon RDS for MySQL
- Amazon RDS for PostgreSQL
- Amazon Redshift
- Amazon S3
- Apache Cassandra
- Apache Derby
- Apache HDFS (formerly known as “Hortonworks HDFS”)
- Apache Hive (supports source connections only)
- Cloudera Impala (supports source connections only)
- Dropbox

To obtain the application token that’s needed to configure a Dropbox connection, follow the instructions at the Dropbox OAuth guide.
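The application token comes out of Dropbox’s standard OAuth 2 flow: the user first visits an authorization URL built from the app key. As a minimal, hedged sketch (the function name and parameters here are illustrative, not part of any product API; only the `https://www.dropbox.com/oauth2/authorize` endpoint and its `client_id`/`response_type`/`redirect_uri` query parameters come from Dropbox’s OAuth guide):

```python
from urllib.parse import urlencode

def dropbox_authorize_url(app_key: str, redirect_uri: str) -> str:
    """Build the URL a user visits to authorize the app (OAuth 2 code flow).

    This only constructs the URL; no request is sent.
    """
    params = {
        "client_id": app_key,          # the App key from the Dropbox App Console
        "response_type": "code",       # request an authorization code
        "redirect_uri": redirect_uri,  # must match a URI registered for the app
    }
    return "https://www.dropbox.com/oauth2/authorize?" + urlencode(params)
```

After the user approves access, Dropbox redirects back with a code that is exchanged for the token used in the connection form.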
- FTP (Remote file system transfer)
- Google BigQuery

For Connection Details, enter either Credentials (the contents of the Google service account key JSON file) or Credentials path (the path to the Google service account key file).
The connection to Google BigQuery requires the following BigQuery permissions:
The predefined BigQuery Cloud IAM role `bigquery.admin` includes these permissions. Otherwise, a combination of two roles is needed: one role from each column in the following table.

| First role | Second role |
| ---------- | ----------- |
For information about Google BigQuery’s permissions and roles, see Predefined roles and permissions.
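When pasting key contents into the Credentials field, a common failure mode is a truncated or wrong-type key file. A minimal sketch of a sanity check, assuming the usual fields of a Google service account key file (`type`, `project_id`, `private_key`, `client_email`); the function and field list are illustrative, not an exhaustive specification:

```python
import json

# Fields commonly present in a Google service account key file
# (an assumption for illustration, not a complete list).
REQUIRED_FIELDS = {"type", "project_id", "private_key", "client_email"}

def check_service_account_key(raw_json: str) -> list:
    """Return the names of any required fields missing from the key JSON."""
    key = json.loads(raw_json)
    missing = sorted(REQUIRED_FIELDS - key.keys())
    # A service account key should declare itself as such.
    if key.get("type") != "service_account":
        missing.append("type=service_account")
    return missing
```

An empty return value means the pasted JSON at least has the expected shape; it does not prove the key is valid or authorized.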
- Google Cloud Storage
- HTTP (supports source connections only)
- Looker (supports source connections only)
Before you configure the connection, you’ll need to set up API3 credentials for your Looker instance. See instructions at Looker API Authentication.
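API3 credentials are a client ID/secret pair that the Looker API trades for a short-lived access token via a login call. A minimal sketch of preparing that request with the standard library (the function name is hypothetical; the `POST /api/3.1/login` endpoint with form-encoded `client_id` and `client_secret` follows Looker’s API3 authentication documentation):

```python
from urllib.parse import urlencode
from urllib.request import Request

def looker_login_request(base_url: str, client_id: str, client_secret: str) -> Request:
    """Prepare (but do not send) the API3 login request that exchanges the
    client_id/client_secret pair for a short-lived access token."""
    body = urlencode({"client_id": client_id, "client_secret": client_secret})
    return Request(
        url=base_url.rstrip("/") + "/api/3.1/login",
        data=body.encode("ascii"),
        method="POST",
    )
```

The connection form only needs the ID and secret themselves; the token exchange happens behind the scenes.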
- Microsoft Azure Blob Storage
- Microsoft Azure Cosmos DB
- Microsoft Azure Data Lake Store
Before you configure the connection, you must create an Azure Active Directory (Azure AD) web application and get an application ID, an authentication key, and a tenant ID. Then you must assign the Azure AD application to the Azure Data Lake Store account file or folder. Follow Steps 1, 2, and 3 at Service-to-service authentication with Data Lake Store using Azure Active Directory.
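The three values gathered in those steps feed an OAuth 2.0 client-credentials token request. As a hedged sketch of how they fit together (the function name is illustrative; the `login.microsoftonline.com` token endpoint, `client_credentials` grant, and `https://datalake.azure.net/` resource come from Azure AD’s service-to-service documentation):

```python
from urllib.parse import urlencode

def adls_token_request(tenant_id: str, app_id: str, auth_key: str):
    """Build (but do not send) the OAuth 2.0 client-credentials request that
    exchanges the Azure AD application ID and key for a Data Lake Store token."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": app_id,        # application ID from the Azure AD app
        "client_secret": auth_key,  # authentication key from the Azure AD app
        "resource": "https://datalake.azure.net/",
    })
    return url, body
```

The connection form asks for the tenant ID, application ID, and key directly; this exchange is what the connector performs with them.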
- Microsoft Azure File Storage
- Microsoft Azure SQL Database
- Microsoft SQL Server
- MongoDB (supports source connections only)
- OData (supports source connections only)
- Pivotal Greenplum
- Salesforce.com (supports source connections only)
- SAP OData (supports source connections only)
- Sybase (supports source connections only)
- Sybase IQ (supports source connections only)
- Tableau (supports source connections only)
- Teradata

Teradata JDBC Driver 15.10 Copyright (C) 2015 - 2017 by Teradata. All rights reserved. IBM provides embedded usage of the Teradata JDBC Driver under license from Teradata solely for use as part of the IBM Watson service offering.