The Jupyter notebook environment
Jupyter notebooks run on Jupyter kernels in Anaconda-based environments or, if a notebook uses Spark APIs, in a Spark environment or Spark service.
You can learn to use Spark in IBM Watson Studio by opening any of several sample notebooks.
When you open a notebook in edit mode, exactly one interactive session connects to a Jupyter kernel for the notebook language and the compute runtime that you select. The kernel executes the code that you send and returns the results. To change the notebook language, switch the kernel.
If necessary, you can restart the kernel or reconnect to it. When you restart a kernel, it is stopped and then started again in the same session, but all execution results are lost. When you reconnect to a kernel after losing the connection, the notebook is connected to the same kernel session, and all previous execution results that were saved are still available.
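The difference between restarting and reconnecting can be sketched as a small state model. This is a hypothetical illustration, not Watson Studio code: the class name `KernelSession` and its methods are invented here to mirror the behavior described above (restart keeps the session but clears results; reconnect preserves only the saved results).

```python
# Hypothetical model of the restart vs. reconnect semantics described above.
class KernelSession:
    def __init__(self):
        self.session_id = 1   # the same session survives a restart
        self.results = {}     # cell id -> output, as seen in the notebook
        self.saved = {}       # outputs persisted by auto-save

    def execute(self, cell, output):
        self.results[cell] = output

    def save(self):
        # auto-save snapshots the outputs that exist right now
        self.saved = dict(self.results)

    def restart(self):
        # kernel is stopped and started again: same session, results lost
        self.results.clear()

    def reconnect(self):
        # after a lost connection: same kernel session, saved results remain
        self.results = dict(self.saved)


session = KernelSession()
session.execute("cell-1", "42")
session.save()                     # cell-1's output is saved
session.execute("cell-2", "99")    # ran after the last save

session.reconnect()
print(sorted(session.results))     # prints ['cell-1']
```

After a reconnect, only the output saved before the connection was lost is visible; after a restart, no output survives even though the session identity is unchanged.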
The kernel remains active even if you leave the notebook or close the web browser window. When you reopen the notebook, it is connected to the same kernel. However, only the output cells that were saved (auto-save runs every 2 minutes) before you left the notebook or closed the window are visible; you will not see output from cells that ran in the background after you left. To see all of the output cells, rerun the notebook.
If you run notebooks in an Apache Spark service instance, only 10 notebook kernels can be active at the same time. When you are not using a notebook, stop its kernel to free a slot for other project collaborators who work with notebooks in the same Apache Spark instance. The Spark service automatically shuts down all kernels 12 hours after the last activity in any of its kernels; activity in any one kernel keeps all active kernels running.
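The two Spark-service rules above (a 10-kernel cap per instance, and a shared 12-hour idle shutdown driven by the last activity in any kernel) can be modeled as follows. The class `SparkServiceKernels` and its methods are hypothetical names for illustration; the constants reflect the limits stated in the text.

```python
# Hypothetical model of the kernel limit and shared idle-shutdown rules above.
import time

MAX_KERNELS = 10                # per Apache Spark service instance
IDLE_LIMIT = 12 * 60 * 60       # 12 hours, in seconds

class SparkServiceKernels:
    def __init__(self):
        self.active = set()
        self.last_activity = time.monotonic()

    def start_kernel(self, kernel_id):
        if len(self.active) >= MAX_KERNELS:
            raise RuntimeError("kernel limit reached; stop an unused kernel first")
        self.active.add(kernel_id)
        self.last_activity = time.monotonic()

    def stop_kernel(self, kernel_id):
        # stopping an unused kernel frees a slot for other collaborators
        self.active.discard(kernel_id)

    def record_activity(self):
        # activity in ANY kernel keeps ALL active kernels alive
        self.last_activity = time.monotonic()

    def reap_if_idle(self, now=None):
        now = time.monotonic() if now is None else now
        if self.active and now - self.last_activity >= IDLE_LIMIT:
            self.active.clear()  # the service shuts down every kernel
```

For example, an eleventh `start_kernel` call raises until some collaborator stops a kernel, and a single `record_activity` call resets the 12-hour clock for every kernel in the instance.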
There is no Jupyter kernel limit for environment runtimes. Kernels are shut down when the runtime is stopped.