The Jupyter and notebook environment

Jupyter notebooks run on Jupyter kernels in Anaconda-based environments. If a notebook uses Spark APIs, its kernel runs in a Spark environment or Spark service instead.
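For example, in a Spark-backed kernel you can obtain a SparkSession and run a trivial job to confirm that the kernel can reach Spark. This is a minimal sketch, assuming pyspark is installed in the kernel's environment; the application name `kernel-check` is illustrative. Many Spark notebook services pre-create a session (often bound to the name `spark`), in which case `getOrCreate()` simply returns it.

```python
# Minimal check that the kernel can reach Spark.
# Assumes pyspark is available in the notebook's environment.
from pyspark.sql import SparkSession

# getOrCreate() returns the pre-created session if the service
# already started one, or builds a new session otherwise.
spark = SparkSession.builder.appName("kernel-check").getOrCreate()

print(spark.version)            # the Spark version the kernel is using
print(spark.range(5).count())   # trivial job; prints 5
```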

You can learn to use Spark in IBM Watson Studio by opening any of the provided sample notebooks.

Jupyter kernels

When you open a notebook in edit mode, exactly one interactive session connects to a Jupyter kernel for the notebook language and the compute runtime that you select. This kernel executes code that you send and returns the computational results. You can switch the kernel to change the notebook language.
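Under the hood, the notebook front end talks to the kernel over the Jupyter messaging protocol: code requests go out on the shell channel, and execution results come back on the IOPub channel. The sketch below reproduces that round trip directly with the `jupyter_client` library, outside of any notebook UI; the kernel name `python3` assumes a standard local Python kernel is installed.

```python
from jupyter_client.manager import KernelManager

# Start a kernel directly -- the same mechanism a notebook session uses.
km = KernelManager(kernel_name="python3")
km.start_kernel()

kc = km.client()
kc.start_channels()
kc.wait_for_ready(timeout=60)

# Send code on the shell channel; results arrive on the IOPub channel.
msg_id = kc.execute("6 * 7")
while True:
    msg = kc.get_iopub_msg(timeout=10)
    if msg["parent_header"].get("msg_id") != msg_id:
        continue  # ignore traffic from other requests
    if msg["msg_type"] == "execute_result":
        print(msg["content"]["data"]["text/plain"])  # -> 42
    elif msg["msg_type"] == "status" and msg["content"]["execution_state"] == "idle":
        break  # the kernel finished this request

kc.stop_channels()
km.shutdown_kernel()
```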

If necessary, you can restart or reconnect to the kernel. When you restart a kernel, the kernel is stopped and then started again in the same session, but all of its execution results are lost. When you reconnect to a kernel after losing a connection, the notebook is connected to the same kernel session, and all previous execution results that were saved are still available.
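A standalone `jupyter_client` sketch, in the same style as above, shows why a restart loses results: the kernel process is replaced, so any variables it held are gone. This is an illustration, not Watson Studio's restart implementation, and it assumes the blocking client transparently reconnects over the same connection ports after the restart.

```python
from jupyter_client.manager import KernelManager

km = KernelManager(kernel_name="python3")
km.start_kernel()
kc = km.client()
kc.start_channels()
kc.wait_for_ready(timeout=60)

kc.execute("x = 10")
kc.get_shell_msg(timeout=10)        # wait for the execute_reply

km.restart_kernel(now=True)         # stop the kernel, start a fresh process
kc.wait_for_ready(timeout=60)       # client reconnects over the same ports

kc.execute("x")                     # x lived only in the old kernel process
reply = kc.get_shell_msg(timeout=10)
print(reply["content"]["status"])   # 'error' -- a NameError; state was lost

kc.stop_channels()
km.shutdown_kernel()
```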

The kernel remains active even if you leave the notebook or close the web browser window. When you reopen the notebook, it connects to the same kernel. However, only the output cells that were saved before you left (auto-save runs every 2 minutes) are visible; output from cells that ran in the background after you left the notebook or closed the window is not shown. To see all of the output cells, rerun the notebook.

Learn more