Creating jobs in deployment spaces
From a deployment space, you can create, schedule, run, and manage jobs that process data for batch deployments, Python functions, and scripts.
Creating a batch deployment job
Follow these steps to create a batch deployment job:
- From the Deployments tab, click your deployment.
- Click New job.
- Enter a job name and description.
- Select a hardware specification.
- (Optional) If you are deploying a Python script, you can enter environment variables to pass parameters to the job.
- (Optional) Configure retention options.
To avoid consuming resources by retaining all job metadata, set a threshold for the number of job runs and associated logs to save, or set a time threshold for retaining artifacts for a specified number of days.
- (Optional) Schedule your job.
If you don't specify a schedule, the job runs immediately.
- (Optional) Set notifications.
- In the Input pane, from the Data asset menu, select your input data type:
- To enter the payload in JSON format, select Inline data.
- To specify an input data source, select Data asset, click Select data source and then specify your asset.
- In the Output pane, from the Data asset menu, select your output data type:
- To write your job results to a new output file, select Create new and then provide a name and optional description.
- To write your job results to a connected data asset, select Data asset, click Select data source and then specify your asset.
- Click Create.
- Scheduled jobs are displayed on the Jobs tab of the deployment space. Results of each run are written to the specified output file and saved as a space asset.
- A data asset can be a data source file that you promoted to the space, a connection to a data source, or tables from databases and files from file-based data sources.
- If you exclude certain weekdays in your job schedule, the job might not run as you expect. This can happen because of a discrepancy between the time zone of the user who created the schedule and the time zone of the master node where the job runs.
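If you choose Inline data in the Input pane, the job expects a JSON payload. The sketch below builds a payload in the common Watson Machine Learning scoring shape (an "input_data" list of objects with "fields" and "values"); the helper name `build_inline_payload` and the field names are illustrative, not part of the product API.

```python
import json


def build_inline_payload(fields, rows):
    """Assemble an inline (JSON) scoring payload for a batch deployment job.

    `fields` names the input columns; each entry in `rows` is one record.
    """
    return {
        "input_data": [
            {
                "fields": list(fields),
                "values": [list(row) for row in rows],
            }
        ]
    }


# Hypothetical model inputs, for illustration only.
payload = build_inline_payload(
    ["age", "income"],
    [(34, 52000), (29, 48000)],
)
print(json.dumps(payload, indent=2))
```

You would paste (or submit) the resulting JSON as the job's inline input; for larger inputs, a data asset is usually the better choice.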
Queuing and concurrent job executions
The deployment service internally manages the maximum number of concurrent jobs for each deployment. For batch deployments, a maximum of two jobs can run concurrently. Any job request for a batch deployment that already has two jobs in the running state is placed in a queue and executed later. When a running job completes, the next job in the queue is picked up for execution. There is no upper limit on the queue size.
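The queuing behavior described above can be modeled as a small sketch. This is illustrative only, not the deployment service's implementation: at most two jobs run at once, additional requests wait in an unbounded FIFO queue, and completing a job admits the next queued one.

```python
from collections import deque

MAX_CONCURRENT = 2  # per-deployment limit for batch deployment jobs


class DeploymentJobQueue:
    """Illustrative model of per-deployment job queuing."""

    def __init__(self):
        self.running = set()
        self.queued = deque()  # no upper limit on queue size

    def submit(self, job_id):
        """Run the job if capacity allows; otherwise queue it."""
        if len(self.running) < MAX_CONCURRENT:
            self.running.add(job_id)
            return "running"
        self.queued.append(job_id)
        return "queued"

    def complete(self, job_id):
        """Finish a running job and promote the next queued job, if any."""
        self.running.discard(job_id)
        if self.queued and len(self.running) < MAX_CONCURRENT:
            self.running.add(self.queued.popleft())


q = DeploymentJobQueue()
print(q.submit("job-1"))  # running
print(q.submit("job-2"))  # running
print(q.submit("job-3"))  # queued (two jobs already running)
q.complete("job-1")
print("job-3" in q.running)  # True: queued job picked up
```

The FIFO `deque` captures the "next job in the queue" rule; because the queue is unbounded, submissions never fail, they only wait.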
Retention of deployment job metadata
Job-related metadata is persisted and can be accessed for as long as the job and its deployment are not deleted.
Parent topic: Deploying assets