Running masking flow jobs

In masking flow jobs, data users define the target destination for masked copies of data. You can schedule jobs and, when a job completes successfully, view its report summary.

There are two ways to create masking flow jobs:

  • After creating a masking flow, click Configure job.
  • Click the Options menu on an individual data asset to configure a masking job for that asset directly, without creating a masking flow.
Note: A masking flow job can fail when there isn't enough memory to support it. To avoid such errors, keep the data size at or below 12 GB.
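One way to respect the size limit above is to estimate the source data size before submitting a job. This is a minimal, hypothetical sketch (the helper name and the byte-based estimate are illustrative, not part of the product):

```python
# Hypothetical pre-check: compare an estimated source data size against the
# 12 GB limit noted in the documentation before submitting a masking job.
MAX_JOB_BYTES = 12 * 1024**3  # 12 GB

def within_masking_limit(estimated_bytes: int) -> bool:
    """Return True if the estimated source data size fits the job limit."""
    return estimated_bytes <= MAX_JOB_BYTES

# Example: a 10 GB table fits; a 15 GB table should be split or filtered first.
print(within_masking_limit(10 * 1024**3))  # True
print(within_masking_limit(15 * 1024**3))  # False
```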

Working with jobs

To configure a job:

  1. Enter a name for the job and, optionally, a description.
  2. Add the target connection where you want to insert the masked data copy. The source connection is used to read the data.
  3. Click + to add a new connection. The schema maps source tables to target tables; table definitions must already exist in the target schema.
Tip: When the source asset is Apache Hive, use Apache HDFS as the target connection.
  4. (Optional) Schedule the job, either one-time or recurring.
  5. Review and run the job.
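The settings collected by the steps above can be pictured as a single configuration to review before running. This is purely illustrative (the field names, connection names, and cron-style schedule are assumptions, not a product API):

```python
# Illustrative sketch of the job settings gathered in steps 1-4.
job_config = {
    "name": "mask-customer-data",            # step 1: job name
    "description": "Mask PII in CUSTOMERS",  # step 1: optional description
    "target_connection": "db2-target",       # step 2: where masked copies go
    "table_mapping": {                       # step 3: source -> target tables
        "SRC_SCHEMA.CUSTOMERS": "TGT_SCHEMA.CUSTOMERS",
    },
    "schedule": {"recurring": True, "cron": "0 2 * * *"},  # step 4 (optional)
}

# Step 5: review the settings before running the job.
for key, value in job_config.items():
    print(f"{key}: {value}")
```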

Parent topic: Masking data with Masking flow
