Scheduling a notebook
When your notebook is complete and runs without errors, you can run it as a scheduled job. You can schedule a notebook job if the notebook runs in one of the following runtime environments:
- All Anaconda-based environments, except for the free default environment
- All Spark environments
- An Apache Spark service
Notebook scheduling is not supported if you run your notebook in an IBM Analytics Engine service.
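The eligibility rules above can be summarized in a short sketch. This is a hypothetical helper, not a product API; the environment-kind strings are illustrative labels, not actual runtime identifiers.

```python
# Hypothetical encoding of the scheduling eligibility rules described above.
# The kind labels are illustrative, not real product identifiers.

SCHEDULABLE_KINDS = {"anaconda", "spark_environment", "apache_spark_service"}

def is_schedulable(kind: str, is_free_default: bool = False) -> bool:
    """Return True if a notebook in this runtime can be scheduled as a job."""
    if kind == "analytics_engine":
        # Notebooks running in an IBM Analytics Engine service can't be scheduled.
        return False
    if kind == "anaconda" and is_free_default:
        # The free default Anaconda-based environment is excluded.
        return False
    return kind in SCHEDULABLE_KINDS
```

For example, `is_schedulable("anaconda", is_free_default=True)` returns `False`, while any Spark environment returns `True`.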
You can schedule a notebook version by clicking the Schedule icon on an opened notebook's menu bar. Ensure that you define meaningful date and time ranges. All code cells in the scheduled version are run, and all output cells are updated.
If you don’t select a version, the most recently saved version of the notebook is scheduled by default. If no version of the notebook exists, a version is created for you and scheduled.
You can schedule only one job per notebook at a time. To run another version or change the scheduled time, reschedule the job. Alternatively, delete the job and schedule a new one.
To view details about the notebook's schedule while editing the notebook, click the Schedule icon and then choose View job details.
If your notebook is associated with an Anaconda-based or Spark runtime environment, the notebook kernel always starts in a dedicated runtime, which consumes tracked capacity unit hours (CUHs). The runtime is stopped when the job finishes.
If your notebook is associated with an Apache Spark instance, the scheduled job will not start if 10 notebook kernels are already active in your Spark instance. To ensure that your scheduled jobs always run, plan the number of active notebooks and schedules in your Spark service accordingly.
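As a minimal sketch of this constraint, the check below assumes you can obtain the current count of active kernels in your Spark instance; the function name and the counting mechanism are hypothetical, only the limit of 10 comes from the behavior described above.

```python
# Hypothetical capacity check for scheduled jobs on an Apache Spark instance.
# Only the limit of 10 active kernels reflects the documented behavior;
# how you'd obtain the active count is up to your own tooling.

MAX_ACTIVE_KERNELS = 10

def can_start_scheduled_job(active_kernels: int) -> bool:
    """A scheduled notebook job starts only if a kernel slot is free."""
    return active_kernels < MAX_ACTIVE_KERNELS
```

For instance, with 10 kernels already active, `can_start_scheduled_job(10)` returns `False` and the scheduled run is skipped, so leave headroom for your schedules when planning interactive notebook use.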