You should stop all active runtimes when you no longer need them to avoid consuming extra capacity unit hours (CUHs).

Jupyter notebook runtimes are started per user, not per notebook. Stopping a notebook kernel doesn't stop the environment runtime in which the kernel was started, because you might have started other notebooks in the same environment. Stop a notebook runtime only if you are sure that no other notebook kernels are active.
Only runtimes that are started for jobs are automatically shut down after the scheduled job completes. For example, if you schedule a notebook to run once a day for two months, the runtime instance is activated every day for the duration of the scheduled job and deactivated again after the job finishes.
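To estimate what a schedule like this consumes, multiply the CUH rate of the environment by the runtime duration of each run and the number of runs. The rate and per-run duration in this sketch are illustrative assumptions; actual CUH rates depend on the hardware size of the environment.

```python
# Rough CUH estimate for a scheduled notebook job.
# The CUH rate and per-run duration below are illustrative
# assumptions, not fixed values.
cuh_rate_per_hour = 1.5    # assumed CUH rate for the environment size
run_duration_hours = 0.25  # assumed: each scheduled run takes 15 minutes
runs_per_day = 1
days = 60                  # roughly two months

total_cuh = cuh_rate_per_hour * run_duration_hours * runs_per_day * days
print(f"Estimated consumption: {total_cuh} CUH")  # 22.5 CUH
```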
Project users with the Admin role can stop all runtimes in the project. Users added to the project with the Editor role can stop the runtimes they started, but can't stop other project users' runtimes. Users added to the project with the Viewer role can't see the runtimes in the project.
You can stop runtimes from:

- The Environment Runtimes page, which lists all active runtimes across all projects for your account. Click Administration > Environment runtimes from the watsonx.ai Studio navigation menu.
- Under Tool runtimes on the Environments page on the Manage tab of your project, which lists the active runtimes for a specific project.
- The Environments page that opens when you click the Notebook Info icon from the notebook toolbar in the notebook editor. You can stop the runtime under Runtime status.
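If you prefer to script this cleanup rather than use these pages, the general pattern is to list your active runtimes and stop the ones you own. The sketch below uses Python's requests library; the host, endpoint paths, and response fields are assumptions for illustration only and are not documented here, so consult the Watson Data API reference for the actual calls.

```python
# Hypothetical sketch of stopping runtimes by script.
# The host, endpoint paths, and response fields are assumptions
# for illustration only -- check the Watson Data API reference
# for the real calls before using anything like this.
import requests

API_HOST = "https://api.example.cloud.ibm.com"  # assumed host
TOKEN = "..."                                   # your IAM bearer token

headers = {"Authorization": f"Bearer {TOKEN}"}

# Assumed endpoint that lists your active environment runtimes.
resp = requests.get(f"{API_HOST}/v2/environment_runtimes", headers=headers)
resp.raise_for_status()

for runtime in resp.json().get("resources", []):
    runtime_id = runtime["id"]  # assumed response field
    # Assumed endpoint that stops a single runtime.
    requests.delete(
        f"{API_HOST}/v2/environment_runtimes/{runtime_id}",
        headers=headers,
    ).raise_for_status()
    print(f"Stopped runtime {runtime_id}")
```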
Spark idle timeout
All Spark runtimes, for example for notebooks and Data Refinery, are stopped after 3 hours of inactivity. The Default Data Refinery XS runtime that is used when you refine data in Data Refinery is stopped after 1 hour of inactivity.
Spark runtimes that are started when a job is started, for example to run a Data Refinery flow or a notebook, are stopped when the job finishes.
GPU idle timeout
All GPU runtimes are automatically stopped after 3 hours of inactivity for Enterprise plan users and after 1 hour of inactivity for other paid plan users.
RStudio idle timeout
An RStudio runtime is stopped for you after 2 hours of idle time. During this idle time, you continue to consume CUHs, for which you are billed. Long compute-intensive jobs are hard stopped after 24 hours.
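Because billing continues until the idle timeout shuts the runtime down, the timeouts in this section put an upper bound on the CUHs a forgotten runtime can waste. A minimal sketch, using the timeouts documented on this page and assumed CUH rates (actual rates depend on your plan and the hardware size of the environment):

```python
# Upper bound on CUHs wasted if a runtime is left idle until its
# automatic timeout. Timeouts come from this page; the CUH rates
# are illustrative assumptions only.
idle_timeout_hours = {
    "Spark": 3,
    "Default Data Refinery XS": 1,
    "GPU (Enterprise plan)": 3,
    "GPU (other paid plans)": 1,
    "RStudio": 2,
}
assumed_cuh_rate = {
    "Spark": 1.5,
    "Default Data Refinery XS": 0.5,
    "GPU (Enterprise plan)": 6.0,
    "GPU (other paid plans)": 6.0,
    "RStudio": 1.0,
}

for name, hours in idle_timeout_hours.items():
    wasted = hours * assumed_cuh_rate[name]
    print(f"{name}: up to {wasted} CUH per forgotten session")
```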