Prompt Lab
Last updated: Dec 03, 2024

In the Prompt Lab in IBM watsonx.ai, you can experiment with prompting different foundation models, explore sample prompts, and save and share your best prompts.

You use the Prompt Lab to engineer effective prompts that you submit to deployed foundation models for inferencing. You do not use the Prompt Lab to create new foundation models.


Requirements

If you signed up for watsonx.ai and you have a sandbox project, all requirements are met and you're ready to use the Prompt Lab.

You must meet these requirements to use the Prompt Lab:

  • You must have a project.
  • You must have the Editor or Admin role in the project.
  • The project must have an associated watsonx.ai Runtime service instance. Otherwise, you might be prompted to associate the service when you open the Prompt Lab.

Creating and running a prompt

To create and run a new prompt, complete the following steps:

  1. From the watsonx.ai home page, choose a project, and then click the New asset > Chat and build prompts with foundation models tile.

  2. Optional: Choose a different edit mode to work in, such as Freeform.

  3. Select a foundation model.

    Tip: To see all of the available foundation models, remove any search filters that are applied.
  4. Optional: Update model parameters or add prompt variables.

  5. Enter a prompt.

  6. Click the Send icon.

    In Structured or Freeform mode, click Generate.

  7. You can cancel an inference request at any time by clicking the Stop icon.

    Tokens in your input are counted as tokens used. Any tokens that were generated by the model as output before the request was canceled are also counted.

  8. To preserve your work so that you can reuse or share a prompt with collaborators in the current project, save the prompt as a project asset. For more information, see Saving prompts.

To run a sample prompt, complete the following steps:

  1. From the Sample prompts menu in the Prompt Lab, select a sample prompt.

    The prompt is opened in the editor and an appropriate model is selected.

  2. Click Generate.

Prompt editing options

You type your prompt in the prompt editor. The prompt editor has the following edit modes:

Chat mode

You can chat with the foundation model to see how the model handles dialog or question-answering tasks.

Start the chat by submitting a query or request for the foundation model to answer. Alternatively, you can click a quick start sample to submit it to the model. Quick start samples are sent to the Llama foundation model; if you want to work with a different foundation model, add your own prompt text instead.

Each subsequent turn in the conversation builds on information that was exchanged previously.

Note: You cannot make changes while a chat is in progress. Click the Clear chat icon to stop and make changes.

Before you start a chat, review and adjust the model choice and parameter settings. To support long dialog exchanges, the Max tokens parameter is set to a high default value. For example, you might want to add a stop sequence to prevent the model from generating wordy output.

Chat templates

Predefined text called a system prompt is included at the start of the chat to establish ground rules for the conversation. To review and customize the text, click the Edit system prompt icon.

Some foundation models recommend specific templates that identify different segments of the prompt, such as the prompt instruction and user input. Chat mode adjusts the syntax of your prompt input to conform to each foundation model's recommended format. You can click the View full prompt text icon to see the full prompt text that will be submitted to the foundation model.
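
For illustration, the following Python sketch assembles a single chat turn in the style of the publicly documented Llama 2 chat template. The tags and layout are shown only as an example; the exact template that Chat mode applies depends on the selected foundation model, so use View full prompt text to see the real format.

  # Illustrative only: a Llama-2-style chat template. The actual template
  # that Chat mode applies depends on the selected foundation model.
  system_prompt = "You are a helpful, honest assistant."
  user_message = "What is few-shot prompting?"

  full_prompt = (
      "<s>[INST] <<SYS>>\n"
      f"{system_prompt}\n"
      "<</SYS>>\n\n"
      f"{user_message} [/INST]"
  )
  print(full_prompt)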

Grounding prompts in facts

To help the foundation model to return factual output, add documents with relevant information to the prompt. Click the Upload documents icon, and then choose Add documents. For more information, see Chatting with documents and images.

You can also add relevant data from a third-party vector store. Click the Grounding with documents icon and select the vector index. For more information, see Adding vectorized documents for grounding foundation model prompts.

Features omitted from chat mode

The following features are omitted from chat mode:

  • The token usage count is not shown in chat mode.

    Keep in mind that the chat history is sent with each new prompt that you submit, which contributes to the overall token count.

    You can check the token count yourself by using the API. Click the View full prompt text icon to open and copy the full prompt text, and then use the Text tokenization method to count the tokens, as shown in the sketch after this list.

  • You cannot define prompt variables in chat mode. As a consequence, you cannot govern saved chat prompt templates.
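
As a sketch of that token count check, the following Python uses the ibm-watsonx-ai library. The class and method names (Credentials, ModelInference, tokenize) reflect the library at the time of writing, but the response shape and all placeholder values are assumptions; verify them against the current API reference.

  from ibm_watsonx_ai import Credentials
  from ibm_watsonx_ai.foundation_models import ModelInference

  # Placeholder values: replace with your own endpoint, API key,
  # project ID, and the model that you are chatting with.
  credentials = Credentials(
      url="https://us-south.ml.cloud.ibm.com",
      api_key="YOUR_IBM_CLOUD_API_KEY",
  )
  model = ModelInference(
      model_id="meta-llama/llama-3-8b-instruct",  # illustrative model ID
      credentials=credentials,
      project_id="YOUR_PROJECT_ID",
  )

  # Paste the text that you copied from the View full prompt text window.
  full_prompt = "..."

  # The response shape below is an assumption; print the result to confirm.
  result = model.tokenize(prompt=full_prompt, return_tokens=False)
  print(result["result"]["token_count"])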


Structured mode

Structured mode is designed to help new users create effective prompts. Text from the fields is sent to the model in a template format, as the sketch after the following list illustrates.

You add parts of your prompt into the appropriate fields:

  • Instruction: Add an instruction if it makes sense for your use case. An instruction is an imperative statement, such as Summarize the following article.

  • Examples: Add one or more pairs of examples that contain the input and the corresponding output that you want. Providing a few example input-and-output pairs in your prompt is called few-shot prompting.

    If you need a specific prefix to the input or the output, you can replace the default labels, "Input:" or "Output:", with the labels you want to use. For example, you might replace the default labels with custom labels that were used in training data when a foundation model was prompt-tuned.

    A space is added between the example label and the example text.

  • Test your input: In the Try area, enter the final input of your prompt.
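
The Prompt Lab does not show the literal template that it builds from these fields, but conceptually they are concatenated in order. The following Python sketch mimics that assembly under assumed formatting (a label, a space, then the text, with examples before the final input); it illustrates the idea, not the exact template.

  # Illustrative assembly of structured-mode fields into one prompt string.
  # The exact spacing and ordering that the Prompt Lab uses is an assumption.
  instruction = "Summarize the following article."
  examples = [
      ("The quarterly report showed steady growth across ...",
       "Revenue grew modestly, led by services."),
  ]
  final_input = "Scientists announced today that ..."

  parts = [instruction]
  for example_input, example_output in examples:
      parts.append(f"Input: {example_input}")
      parts.append(f"Output: {example_output}")
  parts.append(f"Input: {final_input}")
  parts.append("Output:")

  prompt_text = "\n\n".join(parts)
  print(prompt_text)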

Freeform mode

You add your prompt in plain text. Your prompt text is sent to the model exactly as you typed it.

Freeform mode is a good choice when you want to submit structured input and know how to format the prompt.

Model and prompt configuration options

You must specify which model to prompt and can optionally set parameters that control the generated result.

Model choices

In the Prompt Lab, you can submit your prompt to any of the models that are supported by watsonx.ai. You can choose recently used models from the drop-down list, or you can click View all foundation models to view all the supported models, filter them by task, and read high-level information about the models.

If you tuned a foundation model by using the Tuning Studio and deployed it, or if you deployed a custom foundation model, that tuned or custom model is also available for prompting from the Prompt Lab.

Model parameters

To control how the model generates output in response to your prompt, you can specify decoding parameters and stopping criteria. For more information, see Model parameters for prompting.
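
For reference, the following is a minimal sketch of a parameters object that you might pass when you call a model programmatically. The parameter names match the watsonx.ai text generation options described in Model parameters for prompting; the values are arbitrary examples.

  # Decoding parameters and stopping criteria for a text generation call.
  # Names follow the watsonx.ai text generation API; values are examples.
  params = {
      "decoding_method": "sample",   # or "greedy" for deterministic output
      "temperature": 0.7,            # higher values increase randomness
      "top_k": 50,
      "top_p": 1.0,
      "max_new_tokens": 200,         # upper bound on generated tokens
      "min_new_tokens": 1,
      "stop_sequences": ["\n\n"],    # stop when a blank line is generated
      "repetition_penalty": 1.1,
  }

You can pass a dictionary like this as the params argument in the Python library example later in this topic.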

Prompt variables

To add flexibility to your prompts, you can define prompt variables. A prompt variable is a placeholder keyword that you include in the static text of your prompt at creation time and replace with text dynamically at run time. For more information, see Building reusable prompts.
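
Conceptually, a prompt variable behaves like a named placeholder in a template string. The following Python sketch mimics the substitution outside the Prompt Lab; the curly-brace syntax is used here only for illustration.

  # Illustrative: a prompt with placeholders that are resolved at run time.
  # The Prompt Lab performs the equivalent substitution for prompt templates.
  prompt_template = "Summarize the following {document_type}:\n\n{document}"

  prompt_text = prompt_template.format(
      document_type="meeting transcript",
      document="Alice: Let's start with the Q3 roadmap review ...",
  )
  print(prompt_text)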

View full prompt text

You might want to see the full prompt text that will be submitted to the foundation model in the following situations:

  • When prompt variables are in use, to see resolved variable values in context.
  • In chat mode, where the recommended prompt formats for different foundation models are applied automatically.
  • In structured mode, where you add parts of the prompt into separate fields.

AI guardrails

When you set the AI guardrails switcher to On, harmful language is automatically removed from the input prompt text and from the output that is generated by the model. Specifically, any sentence in the input or output that contains harmful language is replaced with a message that says that potentially harmful text was removed.

Note: This feature is supported for English-language models only. If you're working with a non-English foundation model, disable AI guardrails.

For more information, see Removing harmful content.

Prompt code

If you want to run the prompt programmatically, you can view and copy the prompt code or use the Python library.

View code

When you click the View code icon, a cURL command is displayed that you can run from outside the Prompt Lab to submit the current prompt and parameters to the selected model and get a generated response.

The command includes a placeholder for an IBM Cloud IAM token. For information about generating the access token, see Generating an IBM Cloud IAM token.
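
As a minimal Python sketch, the standard IBM Cloud IAM token exchange looks like the following; see the linked topic for the authoritative steps and options.

  import requests

  # Exchange an IBM Cloud API key for an IAM bearer token. The endpoint
  # and grant type are the standard IBM Cloud IAM values.
  response = requests.post(
      "https://iam.cloud.ibm.com/identity/token",
      headers={"Content-Type": "application/x-www-form-urlencoded"},
      data={
          "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
          "apikey": "YOUR_IBM_CLOUD_API_KEY",  # placeholder
      },
  )
  response.raise_for_status()
  iam_token = response.json()["access_token"]

  # Substitute iam_token for the token placeholder in the copied cURL command.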

Programmatic alternative to the Prompt Lab

The Prompt Lab graphical interface is a great place to experiment and iterate with your prompts. However, you can also prompt foundation models in watsonx.ai programmatically by using the Python library or REST API. For details, see Coding generative AI solutions.
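
As a minimal sketch, assuming the ibm-watsonx-ai Python library and placeholder credentials (class and method names reflect the library at the time of writing; see Coding generative AI solutions for current usage):

  from ibm_watsonx_ai import Credentials
  from ibm_watsonx_ai.foundation_models import ModelInference

  # Placeholder values: replace with your own endpoint, API key,
  # project ID, and preferred foundation model.
  credentials = Credentials(
      url="https://us-south.ml.cloud.ibm.com",
      api_key="YOUR_IBM_CLOUD_API_KEY",
  )
  model = ModelInference(
      model_id="ibm/granite-13b-instruct-v2",  # illustrative model ID
      credentials=credentials,
      project_id="YOUR_PROJECT_ID",
  )

  generated = model.generate_text(
      prompt="Summarize the following article:\n\n...",
      params={"decoding_method": "greedy", "max_new_tokens": 200},
  )
  print(generated)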

Available prompts

In the side panel, you can access sample prompts, your session history, and saved prompts.

Samples

A collection of sample prompts is available in the Prompt Lab. The samples demonstrate effective prompt text and model parameters for different tasks, including classification, extraction, content generation, question answering, and summarization.

When you click a sample, the prompt text loads in the editor, an appropriate model is selected, and optimal parameters are configured automatically.

History

As you experiment with different prompt text, model choices, and parameters, the details are captured in the session history each time you submit your prompt. To load a previous prompt, click the entry in the history and then click Restore.

Saved

From the Saved prompt templates menu, you can load any prompts that you saved to the current project as a prompt template asset.

When watsonx.governance is provisioned, if your prompt template includes at least one prompt variable, you can evaluate the effectiveness of model responses. For more information, see Evaluating prompt templates in projects.

Parent topic: Developing generative AI solutions
