The parts of a notebook

On the Assets page of a project, you can see some information about a notebook before you open it. When you open a notebook in edit mode, you can do much more with the notebook: use the menu options, toolbars, and information pane, and edit and run the notebook cells.

You can view the following information about a notebook by clicking the Notebooks asset type in the Assets page of your project:

  • The name of the notebook
  • The date when the notebook was last modified and the person who made the change
  • The programming language of the notebook
  • Whether the notebook is currently locked

When you open a notebook in edit mode, the notebook editor includes the following features:

Menu and toolbar

Use the menus to select features that affect how the notebook functions, and use the toolbar icons to perform the most common operations in the notebook.

Notebook action bar

You can select features that enhance notebook collaboration. From the action bar, you can:

  • Publish your notebook as a gist or on GitHub.
  • Create a permanent URL so that anyone with the link can view your notebook.
  • Create jobs in which to run your notebook. See Schedule a notebook.
  • Download your notebook.
  • Add a project token so that code can access the project resources. See Add code to set the project token.
  • Generate code snippets to add data from a data asset or a connection to a notebook cell. A sketch of a token cell and a generated snippet follows this list.
  • View your notebook information. You can:
    • Change the name of your notebook by editing it in the Name field.
    • Edit the description of your notebook in the Description field.
    • View the date when the notebook was created.
    • View the environment details and runtime status; you can change the notebook runtime from here. See Notebook environments.
  • Save versions of your notebook.
  • Upload assets to the project.
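
For illustration, here is a rough sketch of what a project token cell and a generated data-access snippet can look like in a Python notebook. It assumes the project_lib library; the placeholder IDs, the token, and the file name customers.csv are examples only, not values from your project.

```python
# @hidden_cell
# Hypothetical project token cell: the real project ID and access token are
# generated for you when you insert the project token into the notebook.
from project_lib import Project

project = Project(project_id="<project-id>",
                  project_access_token="<project-access-token>")

# A generated snippet for a data asset typically reads the file through the
# project object. pandas and the file name "customers.csv" are assumed here.
import pandas as pd

df = pd.read_csv(project.get_file("customers.csv"))
df.head()
```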

The cells in a Jupyter notebook

A Jupyter notebook consists of a sequence of cells. The flow of a notebook is sequential. You enter code into an input cell, and when you run the cell, the notebook executes the code and prints the output of the computation to an output cell.

You can change the code in an input cell and re-run the cell as often as you like. In this way, the notebook follows a read-evaluate-print loop paradigm. You can choose to use tags to describe cells in a notebook.
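
For example, a code cell like the following is evaluated when you run it, and the value of its last expression is printed to the output cell. The variable name is arbitrary.

```python
# Input cell: runs when you execute the cell
total = sum(range(1, 11))
total  # the result, 55, appears in the output cell
```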

The behavior of a cell is determined by its type. The cell types are:

Jupyter code cells

Where you can edit and write new code.

Jupyter markdown cells

Where you can document the computational process. You can add headings to structure your notebook hierarchically.

You can also add and edit image files as attachments to the notebook. The markdown code and images are rendered when the cell is run.
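
Markdown cells contain markdown source directly, so no code is needed to render them. If you want to produce the same kind of rendered output from a Python code cell, one option is IPython's display tools, as in this small sketch; the heading and text are arbitrary examples.

```python
# Render a markdown string from a code cell (produces the same kind of output
# that a markdown cell renders when it is run).
from IPython.display import Markdown, display

display(Markdown("# Data preparation\nThis section **cleans** the raw data."))
```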

See Markdown for Jupyter notebooks cheatsheet.

Raw Jupyter NBConvert cells

Where you can write output directly or put code that you don’t want to run. Raw cells are not evaluated by the notebook.

Spark job progress bar

When you run code in a notebook that triggers Spark jobs, it is often challenging to determine why your code is not running efficiently.

To help you understand what your code is doing and to assist with debugging, you can monitor the execution of the Spark jobs for a code cell.

To enable Spark monitoring for a cell in a notebook:

  • Select the code cell you want to monitor.
  • Click the Enable Spark Monitoring icon on the notebook toolbar.

The progress bars display the real-time progress of your jobs on the Spark cluster. Each Spark job runs on the cluster in one or more stages, where each stage is a list of tasks that can be run in parallel. The monitoring pane can become very large if the Spark job has many stages.
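
For instance, a cell like the following triggers a Spark job with more than one stage, because the groupBy operation forces a shuffle. This is a minimal sketch that assumes a PySpark runtime; the data and column names are arbitrary.

```python
# Sketch of a code cell that triggers a multi-stage Spark job.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.range(0, 1_000_000)  # one million rows with a single "id" column
buckets = df.groupBy((F.col("id") % 10).alias("bucket")).count()

# With Spark monitoring enabled for this cell, progress bars for the job and
# its stages appear below the cell while show() runs.
buckets.show()
```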

The job monitoring pane also displays the duration of each job and the status of the job stages. A stage can have one of the following statuses:

  • Running: The stage is active and has started.
  • Completed: The stage has completed.
  • Skipped: The results of this stage were cached from an earlier operation, so the tasks don't have to run again.
  • Pending: The stage hasn't started yet.

Click the icon again to disable monitoring in a cell.

Note: Spark monitoring is currently only supported in notebooks that run on Python.

Parent topic: Creating notebooks
