Prompt Lab
Last updated: Oct 09, 2024

In the Prompt Lab in IBM watsonx.ai, you can experiment with prompting different foundation models, explore sample prompts, and save and share your best prompts.

 


 

Requirements

If you signed up for watsonx.ai, specified the Dallas or Frankfurt regions, and you have a sandbox project, all requirements are met and you're ready to use the Prompt Lab.

You must meet these requirements to use the Prompt Lab:

  • You must have a project.
  • You must have the Editor or Admin role in the project.
  • The project must have an associated Watson Machine Learning service instance. Otherwise, you are prompted to associate the service when you start Prompt Lab.
  • Your Watson Studio and Watson Machine Learning services must be provisioned in the Dallas or Frankfurt regions.

 

Programmatic alternative to the Prompt Lab

The Prompt Lab graphical interface is a great place to experiment and iterate with your prompts. However, you can also prompt foundation models in watsonx.ai programmatically using the Python library. See: Foundation models Python library
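As a sketch of the programmatic route, the request below assembles the same pieces Prompt Lab manages for you: a model ID, the prompt text, a project ID, and generation parameters. The endpoint URL, field names, and the `ibm/granite-13b-chat-v2` model ID are illustrative assumptions based on the REST call that Prompt Lab's View code button reveals; for production use, prefer the foundation models Python library referenced above.

```python
import json
import urllib.request

# Assumed endpoint shape; confirm the URL and API version for your region
# against the View code output in Prompt Lab.
GENERATION_URL = "https://us-south.ml.cloud.ibm.com/ml/v1/text/generation?version=2023-05-29"

def build_generation_request(model_id, prompt, project_id, parameters=None):
    """Assemble the JSON body that a text-generation call sends to the model."""
    return {
        "model_id": model_id,
        "input": prompt,
        "project_id": project_id,
        "parameters": parameters or {"decoding_method": "greedy", "max_new_tokens": 200},
    }

def generate_text(iam_token, body):
    """Submit the prompt; requires a valid IBM Cloud IAM token."""
    request = urllib.request.Request(
        GENERATION_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {iam_token}",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# Example body for a summarization prompt (placeholder IDs):
body = build_generation_request(
    model_id="ibm/granite-13b-chat-v2",
    prompt="Summarize the following article.\n\n<article text>",
    project_id="<your-project-id>",
)
```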

 

Opening the Prompt Lab

You work with the Prompt Lab in the context of a project.

To open the Prompt Lab, from the watsonx.ai home page, choose a project, and then click Experiment with foundation models and build prompts.

Prompt editor

You type your prompt in the prompt editor. The prompt editor has two modes: freeform and structured.

Freeform mode

For a plain text editing mode, click Freeform. When you click Generate in freeform mode, the prompt text is sent to the model exactly as you typed it.

Structured mode

To enter different parts of your prompt in separate text areas, click Structured.

  • Instruction: In the Set Up section, you can specify an instruction, if it makes sense for your use case. An instruction is an imperative statement, such as "Summarize the following article."
  • Examples: Also in the Set Up section, you can specify one or more pairs of example input and the corresponding desired output. If you need a specific prefix to the input or the output, you can replace the default labels, "Input:" or "Output:", with your desired labels. (Providing a few example input-output pairs in your prompt is called few-shot prompting.)
  • Test input: In the Try section, you can enter the final input of your prompt.

When you click Generate in structured mode, the text from the fields is sent to the model in a template format.
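To make the template idea concrete, here is a hypothetical sketch of how the structured-mode fields (instruction, labeled example pairs, and test input) could be combined into the single prompt string that reaches the model. The exact template Prompt Lab applies may differ; the function and labels are assumptions for illustration.

```python
def assemble_structured_prompt(instruction, examples, test_input,
                               input_label="Input:", output_label="Output:"):
    """Combine structured-mode fields into one prompt string (illustrative only)."""
    parts = []
    if instruction:
        parts.append(instruction)
    # Few-shot examples: each pair becomes a labeled input/output block.
    for example_input, example_output in examples:
        parts.append(f"{input_label} {example_input}")
        parts.append(f"{output_label} {example_output}")
    # The test input, followed by a bare output label for the model to complete.
    parts.append(f"{input_label} {test_input}")
    parts.append(output_label)
    return "\n\n".join(parts)

prompt = assemble_structured_prompt(
    instruction="Classify the sentiment of the message.",
    examples=[("The food was great!", "Positive")],
    test_input="I waited an hour for my order.",
)
```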

If content is classified as potentially containing harmful language, that content is replaced in the prompt editor with a generic message saying potentially harmful content has been removed.

 

Model and parameter menus

In addition to your prompt text, you must specify which model to prompt as well as parameters that control the generated result.

Model

In the Prompt Lab, you can submit your prompt to any of the models supported by watsonx. You can choose recently used models from the drop-down list. Or you can click View all foundation models to view all the supported models, filter them by task, and read high-level information about the models.

Parameters

To control how the model generates output in response to your prompt, you can specify decoding parameters and stopping criteria. For details, see: Model parameters
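As an illustration, a parameter set for sampling-based decoding might look like the dictionary below. The field names match those shown in Prompt Lab's View code output, but the specific values are example assumptions; confirm supported ranges in the Model parameters documentation.

```python
# Illustrative decoding parameters and stopping criteria (values are examples).
sampling_parameters = {
    "decoding_method": "sample",   # or "greedy" for deterministic output
    "temperature": 0.7,            # higher values produce more varied wording
    "top_k": 50,                   # sample from the 50 most likely tokens
    "top_p": 1.0,                  # nucleus-sampling probability mass
    "repetition_penalty": 1.1,     # discourage repeated phrases
    "min_new_tokens": 1,           # stopping criterion: minimum output length
    "max_new_tokens": 200,         # stopping criterion: maximum output length
    "stop_sequences": ["\n\n"],    # stopping criterion: cut off at this text
}
```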

View code

When you click the View code button, a curl command is displayed that you can call outside the Prompt Lab to submit the current prompt and parameters to the selected model and get a generated response. In the command, there is a placeholder for an IBM Cloud IAM token. For information about generating that access token, see: Generating an IBM Cloud IAM token
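The IAM token exchange that fills that placeholder can be sketched as follows: you POST your IBM Cloud API key to the IAM identity endpoint and read the `access_token` from the JSON response. The endpoint and grant type shown here follow IBM Cloud's documented token flow, but treat this as a sketch and verify the details in the linked documentation.

```python
import json
import urllib.parse
import urllib.request

IAM_URL = "https://iam.cloud.ibm.com/identity/token"

def build_token_request(api_key):
    """Form-encoded body for exchanging an IBM Cloud API key for an IAM token."""
    return urllib.parse.urlencode({
        "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
        "apikey": api_key,
    })

def fetch_iam_token(api_key):
    """POST the API key to the IAM endpoint and return the bearer token."""
    request = urllib.request.Request(
        IAM_URL,
        data=build_token_request(api_key).encode("utf-8"),
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["access_token"]

token_body = build_token_request("<your-api-key>")
```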

AI guardrails

When you toggle AI guardrails on, harmful language is automatically removed from the input prompt text and from the output generated by the model. Specifically, any sentence in the input or output that contains harmful language is replaced with a message saying potentially harmful text has been removed.
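The sentence-level replacement behavior can be mimicked with a toy filter. This is purely illustrative: the real AI guardrails use a harmful-language classifier, not a keyword list, and the removal message below is a stand-in, not the product's exact wording.

```python
import re

REMOVAL_NOTICE = "[Potentially harmful text removed]"

def redact_sentences(text, flagged_terms):
    """Replace any sentence containing a flagged term with a removal notice.

    Toy stand-in for a harmful-language classifier, for illustration only.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text)
    cleaned = [
        REMOVAL_NOTICE if any(term in s.lower() for term in flagged_terms) else s
        for s in sentences
    ]
    return " ".join(cleaned)

cleaned_text = redact_sentences(
    "The weather is nice. I despise everyone!", {"despise"}
)
```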

 

Side menu

In a menu on the left-hand side, you can access sample prompts, your session history, and saved prompts.

Samples

A collection of sample prompts is available in the Prompt Lab. The samples demonstrate effective prompt text and model parameters for different tasks, including classification, extraction, content generation, question answering, and summarization. When you click a sample, the prompt text loads in the editor, an appropriate model is selected, and optimal parameters are configured automatically.

Session history

As you experiment with different prompt text, model choices, and parameters, the details are captured in the session history each time you submit your prompt. To load a previous prompt, click the entry in the history and then click Restore.

Saved prompts

From the saved prompts menu, you can load any prompts that you saved to the current watsonx.ai project using the Save work button.

 

Saving your work as a project asset

When you click the Save work button, you can save your work as an asset in the current watsonx.ai project in three formats:

  • Prompt
  • Prompt session, complete with the history and data from the current session
  • Python notebook

Saving your work as a project asset makes your work available to collaborators in the current project. For more information, see Saving your prompts and prompt sessions.

 

Learn more

Parent topic: Foundation models
