Migrating a JDBC connection (DataStage)
The JDBC connector does not have an equivalent connector in modern DataStage®.
When you migrate from traditional DataStage to modern DataStage, a new connector is created based on the properties that are in the JDBC connector. For example, if you use the JDBC connector to connect to a Db2® data source in traditional DataStage, a Db2 connector is created in modern DataStage.
Placeholder parameters for partitioned reads in SQL statements are not supported in modern DataStage. Example placeholder parameters are [[node-number]], [[node-number-base-one]], and [[node-count]]. If you have an SQL statement that includes placeholder parameters, you need to remove those parameters from the WHERE clause.
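A script can help find and remove such predicates before migration. The following sketch is illustrative only (it is not part of DataStage): it assumes the partitioning predicates are AND-joined terms of a single WHERE clause and simply drops any term that contains a [[...]] placeholder. It does not handle subqueries or AND inside string literals.

```python
import re

# Placeholder tokens used for partitioned reads in traditional DataStage
PLACEHOLDER = re.compile(r"\[\[(node-number|node-number-base-one|node-count)\]\]")

def strip_partition_predicates(sql: str) -> str:
    """Remove AND-joined WHERE predicates that reference a placeholder.
    Simplified sketch: assumes one WHERE clause and no nested queries."""
    match = re.search(r"\bWHERE\b", sql, flags=re.IGNORECASE)
    if not match:
        return sql
    head, clause = sql[:match.start()], sql[match.end():]
    predicates = [p.strip() for p in re.split(r"\bAND\b", clause, flags=re.IGNORECASE)]
    kept = [p for p in predicates if not PLACEHOLDER.search(p)]
    if not kept:
        # Every predicate referenced a placeholder; drop the WHERE clause entirely
        return head.rstrip()
    return head + "WHERE " + " AND ".join(kept)
```

For example, the statement SELECT id FROM orders WHERE status = 'OPEN' AND MOD(id, [[node-count]]) = [[node-number]] becomes SELECT id FROM orders WHERE status = 'OPEN'.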
The target write modes for the Append, Insert, and Truncate actions work differently in the JDBC connector in traditional DataStage as compared to the associated connector in modern DataStage. In traditional DataStage, if the target table does not exist, the job fails. In modern DataStage, the migrated connector creates the table if it does not exist and then performs the action. For example, for an Append action, if the table does not exist, the job creates a new table and appends the rows to it.
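The modern behavior amounts to a create-if-missing step before the write. The following sketch illustrates that semantics only; it uses SQLite as a stand-in for the JDBC target, and the table name and column layout are invented for the example:

```python
import sqlite3

def append_rows(conn, table, rows):
    """Mimic the modern DataStage Append behavior (illustrative sketch):
    create the target table if it does not exist, then insert the rows.
    In traditional DataStage, the same job would fail on a missing table."""
    # Hypothetical fixed schema for the example target table
    conn.execute(f"CREATE TABLE IF NOT EXISTS {table} (id INTEGER, name TEXT)")
    conn.executemany(f"INSERT INTO {table} (id, name) VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
# The table does not exist yet; the append still succeeds
append_rows(conn, "orders", [(1, "widget"), (2, "gadget")])
count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
```

If this auto-create behavior is not what you want after migration, verify that the target tables exist before the job runs.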
The mapping of columns in user-defined SQL statements works differently in modern DataStage. In traditional DataStage, when you run a flow for the JDBC connector, the columns in a user-defined SQL statement are mapped by the column names that you specify in the Columns tab. In modern DataStage, columns are mapped by the column order. When flows with the JDBC connector are migrated to the corresponding connectors in modern DataStage, you need to modify any user-defined SQL statements so that the order of the columns matches the order of the columns that are defined in the Columns tab.
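The difference matters because positional mapping silently puts values into the wrong columns when the SELECT order does not match. The following sketch is illustrative only (SQLite stands in for the data source, and the table and column names are invented): it shows a result row being zipped against the defined column order, first with a mismatched SELECT and then with a corrected one.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.execute("INSERT INTO customers VALUES (101, 'Ada')")

# Column order as defined in the (hypothetical) Columns tab: ID first, NAME second
defined_columns = ["ID", "NAME"]

# Positional mapping, as in modern DataStage: the first result column feeds
# the first defined column, regardless of its name in the SQL statement.
row = conn.execute("SELECT name, id FROM customers").fetchone()
mismatched = dict(zip(defined_columns, row))   # ID receives 'Ada' - wrong

# After fixing the SELECT order to match the Columns tab, values land correctly
row = conn.execute("SELECT id, name FROM customers").fetchone()
corrected = dict(zip(defined_columns, row))
```

In traditional DataStage the first query would still have mapped correctly by name; after migration, the SELECT order itself must match the defined column order.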