Last updated: Dec 09, 2024
You can migrate DataStage jobs by creating and importing ISX files that contain the job information. Complete other post-migration tasks where applicable.
Procedure
Complete the following tasks to migrate DataStage®.
After you import the ISX file, other tasks might apply, depending on the connections, stages, and other components of your migrated jobs.
Create and import the ISX file
Create and export an ISX file by using one of the methods that are listed in the following table:
Option | Instructions |
---|---|
ISTOOL | Use ISTOOL to create an ISX file and export the file. For instructions, see Export command for InfoSphere DataStage and QualityStage assets and How to use ISTOOL for EXPORT IMPORT Information Server Components. |
MettleCI | Use MettleCI, which is a third-party service, to convert a server job design into an equivalent parallel job design, then create an ISX file and export the file to your system. For more information, see the MettleCI documentation. |
InfoSphere Information Server Manager GUI client | Use the Information Server Manager GUI client to export the ISX file. For detailed instructions, see Exporting assets. |
Note: Make sure that the ISX file export includes any dependencies, such as parameter sets and table
definitions. If folder support is enabled, folder structures will be re-created on import.
Note: It is recommended that you scale up your services, such as DataStage, Orchestration Pipelines, and IBM Cloud® Object Storage, to the Large instance size before you import your .ISX file. After migration, fewer resources are required and you can scale down. If you experience issues even with a Large instance size, you might need to customize your configuration. For more information, see troubleshooting.html#reference_mwl_byg_wpb__largeisx.
Complete the following steps to import the ISX file:
- Open an existing project or create a new one.
- From the Assets tab of the project, click .
- Click the Local file tab, then upload the ISX file from your local computer. Then, click Create.

Note: The ISX file must exist on your desktop or network drive. Do not drag the file as an attachment from another application.
The asset import report contains status information and error messages that you can use to troubleshoot your ISX import. For information on viewing and using the report to troubleshoot, see Asset import report (DataStage).
Migrate connections
If your migrated jobs contain connections, see Migrating connections in DataStage for information.
Migrate stages
Stages | Considerations |
---|---|
Stored procedure | Stored procedures are migrated to the corresponding platform connector. All stored procedures on Db2® type connectors are migrated to the standard Db2 connector, including stored procedures for connectors like Db2 for i and Db2 for z/OS®. Manually replace the Db2 connector with the correct connector type and copy over the stored procedure call. If the input and output parameters of a stored procedure cannot be detected, the stored procedure is left as is and must be updated after migration to match the new syntax. For more information, see Using stored procedures in DataStage. |
Review the parameter sets and PROJDEF values
Review your parameter sets and verify that their default values are correct after migration.
PROJDEF parameter sets are created and updated by migration. If you migrate a job with a PROJDEF parameter set, review the PROJDEF parameter set and specify default values for it. Then, within flows and job runs, any parameter value that is $PROJDEF uses the value from the PROJDEF parameter set.
If PROJDEF parameter values have been defined in the DSParams file, use the cpdctl dsjob create-dsparams command to transfer those values into your project's runtime environment. For more information, see DSParams.
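As a sketch, the invocation might look like the following. The flag names here are assumptions for illustration only; check `cpdctl dsjob create-dsparams --help` for the options in your cpdctl release:

```shell
# Build the command as a string first so it can be reviewed before running.
# --project and --file-name are assumed flag names; verify them with --help.
CMD="cpdctl dsjob create-dsparams --project myproject --file-name DSParams"
echo "$CMD"
# When the flags are confirmed, execute with: eval "$CMD"
```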
Update scripts that use the dsjob command line interface
If you have scripts that use dsjob to run jobs, update the script call to dsjob by completing the
following steps:
- Download cpdctl: https://github.com/IBM/cpdctl/releases/
- Create a source shell script (source.sh) to configure cpdctl, and create a text file (key.txt) for your encryption key. See the following example:

```shell
#!/bin/bash
export CPDCTL_ENCRYPTION_KEY_PATH=~/key.txt
export DSJOB_URL=https://example.com
export DSJOB_ZEN_URL=https://example.com
export CPDCTL_ENABLE_DSJOB=true
export CPDCTL_ENABLE_DATASTAGE=true
export DSJOB_USER=admin
export DSJOB_PWD=<Password>
cpdctl config user set dscpserver-user --username $DSJOB_USER --password $DSJOB_PWD
cpdctl config profile set dscpserver-profile --url $DSJOB_URL
cpdctl config context set dscpserver-context --user dscpserver-user --profile dscpserver-profile
cpdctl config context use dscpserver-context
cpdctl dsjob list-projects
```
- Change any references to dsjob to cpdctl dsjob. You might need to adjust the command-line options to fit the DataStage command-line style. See DataStage command-line tools.
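For example, a legacy invocation such as `dsjob -run -jobstatus myproject myjob` maps roughly to a `cpdctl dsjob run` call with named options. The helper below only builds the new command string for review; the option names are assumptions to verify against your cpdctl release:

```shell
#!/bin/bash
# Hypothetical helper: given the project/job pair from a legacy
# "dsjob -run project job" call, print the assumed cpdctl equivalent.
# --project, --job, and --wait are assumed option names; verify with
# `cpdctl dsjob run --help` before using in scripts.
to_cpdctl_run() {
  local project="$1" job="$2"
  echo "cpdctl dsjob run --project ${project} --job ${job} --wait 300"
}
to_cpdctl_run myproject myjob
```

A wrapper like this lets you update many scripts mechanically, then review the generated commands before switching them over.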
Migrate sequence jobs
You can import an ISX file to migrate a sequence job to a pipeline flow. Rewrite expressions in CEL and manually reselect values for some pipeline nodes. See the following topics for more considerations: Run flows in sequence with Orchestration Pipelines and Migrating and constructing pipeline flows for DataStage. See Migrating BASIC routines in DataStage for information on rewriting BASIC routines as scripts.
Rewrite the routine code for before-job and after-job subroutines
When you migrate before-job and after-job subroutines, the routine code is stored in a .sh script under /ds-storage/projects/<projectName>/scripts/DSU.<RoutineName>.sh. Rewrite the routine code in the same way as a BASIC routine, following the steps in Migrating BASIC routines in DataStage to retrieve the output arguments, but include an exit statement for the before/after-job subroutine. See the following example:

```shell
# TODO: Update the following json string and print it as the last line of the standard output.
ErrorCode=0
echo "{\"ErrorCode\":\"$ErrorCode\"}"
exit $ErrorCode
```
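As an illustration, a rewritten before-job subroutine that verifies an input file exists might look like the following. The file check is hypothetical, and the logic is wrapped in a function only for readability; a real DSU.<RoutineName>.sh script would run this logic at the top level and end with an exit statement:

```shell
#!/bin/bash
# Hypothetical before-job subroutine: set ErrorCode to nonzero if the
# expected input file is missing. In the real DSU.<RoutineName>.sh the
# script body ends with: exit $ErrorCode
before_job() {
  local input="$1"
  local ErrorCode=0
  [ -f "$input" ] || ErrorCode=1
  # The JSON status must be the last line written to standard output.
  echo "{\"ErrorCode\":\"$ErrorCode\"}"
  return "$ErrorCode"
}
before_job /etc/hosts
```

On a typical system where /etc/hosts exists, this prints {"ErrorCode":"0"}; for a missing file it prints {"ErrorCode":"1"} and returns a nonzero status, which signals the failure to the job.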