Quick start: Prompt a foundation model using Prompt Lab

Take this tutorial to learn how to use the Prompt Lab in watsonx.ai. There are usually multiple ways to prompt a foundation model for a successful result. In the Prompt Lab, you can experiment with prompting different foundation models, explore sample prompts, as well as save and share your best prompts. See Prompt tips to help you successfully prompt most text-generating foundation models.

Required services
Watson Studio
Watson Machine Learning

Your basic workflow includes these tasks:

  1. Open a project. Projects are where you can collaborate with others to work with data.
  2. Open the Prompt Lab. The Prompt Lab lets you experiment with prompting different foundation models, explore sample prompts, as well as save and share your best prompts.
  3. Type your prompt in the prompt editor. You can type prompts in either Freeform or Structured mode.
  4. Select the model to use. You can submit your prompt to any of the models supported by watsonx.ai.
  5. Save your work as a project asset. Saving your work as a project asset makes it available to collaborators in the current project.

Read about prompting a foundation model

Foundation models are very large AI models. They have billions of parameters and are trained on terabytes of data. Foundation models can perform a variety of tasks, including text, code, or image generation, classification, conversation, and more. Large language models are a subset of foundation models used for text- and code-related tasks. In IBM watsonx.ai, there is a collection of deployed large language models that you can use, as well as tools for experimenting with prompts.
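
This tutorial uses the Prompt Lab user interface, but you can also prompt the same deployed models programmatically with the watsonx.ai Python library (see Additional resources). The following minimal sketch assumes that the ibm-watsonx-ai package is installed; the endpoint URL, API key, project ID, and model ID shown are placeholders that you replace with your own values.

    # Minimal sketch: prompt a deployed foundation model with the watsonx.ai Python library.
    # The endpoint URL, API key, project ID, and model ID are placeholders for illustration.
    from ibm_watsonx_ai import Credentials
    from ibm_watsonx_ai.foundation_models import ModelInference

    credentials = Credentials(
        url="https://us-south.ml.cloud.ibm.com",  # placeholder regional endpoint
        api_key="YOUR_API_KEY",                   # placeholder API key
    )

    model = ModelInference(
        model_id="google/flan-t5-xxl",            # placeholder model ID
        credentials=credentials,
        project_id="YOUR_PROJECT_ID",             # placeholder project ID
    )

    # Send a plain-text prompt and print the generated text.
    print(model.generate_text(prompt="What is a foundation model?"))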

Read more about Prompt Lab

Watch a video about prompting a foundation model

Watch this video to preview the steps in this tutorial. There might be slight differences in the user interface shown in the video. The video is intended to be a companion to the written tutorial.

This video provides a visual method to learn the concepts and tasks in this documentation.


Try a tutorial to prompt a foundation model

In this tutorial, you will complete these tasks:

  • Task 1: Open a project
  • Task 2: Use the Prompt Lab in Freeform mode
  • Task 3: Use the Prompt Lab in Structured mode
  • Task 4: Use the sample prompts
  • Task 5: Choose a foundation model
  • Task 6: Adjust model parameters
  • Task 7: Save your work

Tips for completing this tutorial
Here are some tips for successfully completing this tutorial.

Use the video picture-in-picture

Tip: Start the video, and then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. Picture-in-picture mode lets you follow the video while you complete the tasks in this tutorial. Click the timestamps for each task to follow along.

The following animated image shows how to use the video picture-in-picture and table of contents features:

How to use picture-in-picture and chapters

Get help in the community

If you need help with this tutorial, you can ask a question or find an answer in the watsonx Community discussion forum.

Set up your browser windows

For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along.

Side-by-side tutorial and UI

Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later.



Task 1: Open a project

You need a project to store Prompt Lab assets.

Watch a video to see how to create a sandbox project and associate a service. Then follow the steps to verify that you have an existing project or create a sandbox project.

This video provides a visual method to learn the concepts and tasks in this documentation.

  1. From the watsonx home screen, scroll to the Projects section. If you see any projects listed, then skip to Task 2. If you don't see any projects, then follow these steps to create a project.

  2. Click Create a sandbox project. When the project is created, you will see the sandbox project in the Projects section.

For more information or to watch a video, see Creating a project.

Check your progress

The following image shows the home screen with the sandbox listed in the Projects section. You are now ready to open the Prompt Lab.

Home screen with sandbox project listed.




Task 2: Use the Prompt Lab in Freeform mode

To preview this task, watch the video beginning at 00:03.

You can type your prompt text in a freeform, plain text editor and then click Generate to send your prompt to the model. Follow these steps to use the Prompt Lab in Freeform mode:

  1. From the home screen, click the Open in Prompt Lab tile.

  2. Select each checkbox to accept the acknowledgements, and then click Skip tour.

  3. Click the Freeform tab to prompt a foundation model in Freeform mode.

  4. Click Switch mode.

  5. Copy and paste the following text into the text field, and then click Generate to see the output Problem.


Classify this customer message into one of two classes: question, problem.
Class name: Question
Description: The customer is asking a technical question or a how-to question about our products or services.
Class name: Problem
Description: The customer is describing a problem they are having. They might say they are trying something, but it's not working. They might say they are getting an error or unexpected results.
Message: I'm having trouble registering for a new account.
Class name:
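
Freeform mode sends your prompt text to the model exactly as you type it. If you want to reproduce this step with the Python library, the following sketch reuses the model object from the earlier sketch and submits the same classification prompt as a single string; the expected class name is Problem.

    # Sketch: submit the same zero-shot classification prompt as a single text string.
    # Reuses the `model` object constructed in the earlier sketch (an assumption).
    prompt = "\n".join([
        "Classify this customer message into one of two classes: question, problem.",
        "Class name: Question",
        "Description: The customer is asking a technical question or a how-to question about our products or services.",
        "Class name: Problem",
        "Description: The customer is describing a problem they are having. They might say they are trying something, but it's not working. They might say they are getting an error or unexpected results.",
        "Message: I'm having trouble registering for a new account.",
        "Class name:",
    ])

    print(model.generate_text(prompt=prompt))  # expected class name: Problem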

Check your progress

The following image shows the generated output for the prompt in Freeform mode. Now you are ready to prompt a foundation model in Structured mode.

Generated output for the prompt in Freeform mode.




Task 3: Use the Prompt Lab in Structured mode

To preview this task, watch the video beginning at 00:19.

You can type your prompt in a structured format. The structured format is helpful for few-shot prompting, when your prompt has multiple examples. Follow these steps to use the Prompt Lab in Structured mode:

  1. Click the Structured tab.

  2. Click Switch mode.

  3. In the Instruction field, copy and paste the following text: Given a message submitted to a customer-support chatbot for a cloud software company, classify the customer's message as either a question or a problem description so the chat can be routed to the correct support team.

  4. In the Setup field, copy and paste the following text in each column:

    Values for Setup field
    Input Output
    When I try to log in, I get an error. Problem
    Where can I find the plan prices? Question
    What is the difference between trial and paygo? Question
    The registration page crashed, and now I can't create a new account. Problem
    What regions are supported? Question
    I can't remember my password. Problem

  5. In the Try field, copy and paste the following text: I'm having trouble registering for a new account.

  6. Click Generate to see the output Problem.
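
Behind the scenes, the instruction, the example pairs from the Setup field, and the Try input are effectively combined into a single few-shot prompt that is sent to the model. The following sketch illustrates that assembly in plain Python; the Input and Output labels are assumptions for illustration and might not match the exact template that the Prompt Lab uses.

    # Rough sketch of how the pieces of a structured, few-shot prompt can be assembled.
    # The "Input:"/"Output:" labels are assumptions for illustration.
    instruction = (
        "Given a message submitted to a customer-support chatbot for a cloud software company, "
        "classify the customer's message as either a question or a problem description "
        "so the chat can be routed to the correct support team."
    )

    examples = [
        ("When I try to log in, I get an error.", "Problem"),
        ("Where can I find the plan prices?", "Question"),
        ("What is the difference between trial and paygo?", "Question"),
        ("The registration page crashed, and now I can't create a new account.", "Problem"),
        ("What regions are supported?", "Question"),
        ("I can't remember my password.", "Problem"),
    ]

    try_input = "I'm having trouble registering for a new account."

    prompt = instruction + "\n\n"
    for text, label in examples:
        prompt += f"Input: {text}\nOutput: {label}\n\n"
    prompt += f"Input: {try_input}\nOutput:"

    # Send the assembled prompt to a model, for example: model.generate_text(prompt=prompt)
    print(prompt)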

Check your progress

The following image shows the generated output for the prompt in Structured mode. Now you are ready to try the sample prompts.

Generated output for the prompt in Structured mode




Task 4: Use the sample prompts

To preview this task, watch the video beginning at 00:33.

If you're not sure how to begin, sample prompts can get you started. Follow these steps to use the sample prompts:

  1. Click the Sample prompts icon to display the list of sample prompts.
  2. Scroll through the list, and click the Marketing email generation sample prompt.
  3. View the selected model. When you load a sample prompt, an appropriate model is selected for you.
  4. Open the Model parameters panel. The appropriate decoding and stopping criteria parameters are also set automatically.
  5. Click Generate to submit the sample prompt to the model, and see the sample email output.

Check your progress

The following image shows the generated output from a sample prompt. Now you are ready to customize the sample prompt output by selecting a different model and parameters.

Generated output from a sample prompt




Task 5: Choose a foundation model

To preview this task, watch the video beginning at 01:04.

You can submit the same prompt to a different model. If you need output in a specific structured format, such as JSON, HTML, or XML, you can choose a model that is fine-tuned for generating structured output, use few-shot examples in your prompt (as sketched after these steps), or tune a model. Follow these steps to choose a different foundation model:

  1. Click Model > View all foundation models.
  2. Click a model to learn more about it, including details such as the model architecture, pretraining data, fine-tuning information, and performance against benchmarks.
  3. Click Back to return to the list of models.
  4. Select either the flan-t5-xxl-11b or mt0-xxl-13b foundation model, and click Select model.
  5. Hover over the model output column and click the X icon to delete the previous output.
  6. Click the same sample prompt, Marketing email generation, from the list.
  7. Click Generate to generate output using the new model.
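
If the model you choose is not fine-tuned for structured output, few-shot examples in the prompt are often enough to get output in a format such as JSON. The following sketch shows one hypothetical way to do that with the Python library and parse the reply; the prompt wording is illustrative, the model object is reused from the earlier sketch, and the model is not guaranteed to return valid JSON.

    # Sketch: coax JSON output with few-shot examples, then parse the reply.
    # Reuses the `model` object from the earlier sketch; the prompt wording is illustrative.
    import json

    prompt = "\n".join([
        "Extract the product and the sentiment from the review as JSON.",
        'Review: The new dashboard is fantastic. JSON: {"product": "dashboard", "sentiment": "positive"}',
        'Review: Billing keeps overcharging me. JSON: {"product": "billing", "sentiment": "negative"}',
        "Review: The mobile app crashes on startup. JSON:",
    ])

    reply = model.generate_text(prompt=prompt)
    try:
        record = json.loads(reply)
        print(record["product"], record["sentiment"])
    except json.JSONDecodeError:
        print("The model did not return valid JSON:", reply)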

Check your progress

The following image shows generated output using a different model. You are now ready to adjust the model parameters.

Generated output using a different model




Task 6: Adjust model parameters

To preview this task, watch the video beginning at 01:28.

You can experiment with changing decoding or stopping criteria parameters. Follow these steps to adjust model parameters.

Note: The model parameters vary based on the currently selected model.

The following table defines the model parameters available for the flan-t5-xxl-11b foundation model.

Model parameters
Parameter Meaning
Decoding Set decoding to Greedy to always select words with the highest probability. Set decoding to Sampling to customize the variability of word selection.
Temperature Control the creativity of generated text. Higher values lead to more randomly generated output.
Top P (nucleus sampling) Set to < 1.0 to use only the smallest set of most probable tokens with probabilities that add up to top_p or higher.
Top K Set the number of highest probability vocabulary tokens to keep for top-k filtering. Lower values make it less likely that the model will go off topic.
Random seed Control the random sampling of the generated tokens when sampling is enabled. Setting the random seed to the same number for each generation ensures experimental repeatability.
Repetition penalty Set a repetition penalty to counteract the model's tendency to repeat prompt text verbatim or get stuck in a loop. 1.00 indicates no penalty.
Stop sequences Set stop sequences to one or more strings to cause the text generation to stop if or when they are produced as part of the output.
Min tokens Define the minimum number of tokens to generate. Stop sequences encountered before the minimum number of tokens is generated are ignored.
Max tokens Define the maximum number of tokens to generate.

  1. Change the Top K parameter to 10 to make it less likely the model will go off topic.
  2. Click X to delete the previous model output.
  3. Click the same sample prompt from the list.
  4. Click Generate to generate output using the new model parameters.
  5. Click the Session history icon Session history after submitting multiple prompts to view your session history.
  6. Click any entry to work with a previous prompt, model specification, and parameter settings, and then click Restore.
  7. Edit the prompt, change the model, or adjust decoding and stopping criteria parameters.
  8. Click Generate to generate output using the updated information.
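
You can make the same adjustments programmatically by passing a parameters dictionary when you generate text with the Python library. In the following sketch, the parameter names are assumed to match the watsonx.ai text generation API fields described in the table above, and the model object is reused from the earlier sketch; verify the names against the Python library documentation before relying on them.

    # Sketch: adjust decoding and stopping criteria parameters programmatically.
    # Parameter names are assumed to match the watsonx.ai text generation API fields.
    # Reuses the `model` object from the earlier sketch.
    params = {
        "decoding_method": "sample",  # greedy decoding would be "greedy"
        "temperature": 0.7,           # higher values produce more varied output
        "top_p": 1.0,                 # nucleus sampling threshold
        "top_k": 10,                  # keep only the 10 most probable tokens
        "random_seed": 42,            # fixed seed for repeatable sampling
        "repetition_penalty": 1.0,    # 1.0 means no penalty
        "stop_sequences": ["\n\n"],   # stop when a blank line is generated
        "min_new_tokens": 1,
        "max_new_tokens": 200,
    }

    prompt = "Write a short marketing email announcing a new cloud backup feature."
    print(model.generate_text(prompt=prompt, params=params))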

Check your progress

The following image shows generated output using different model parameters. You are now ready to save your work.

Generated output using different model parameters




Task 7: Save your work

To preview this task, watch the video beginning at 02:15.

You can save your work in three formats:

Asset types
Asset type Description
Prompt template Save the current prompt only, without its history.
Prompt session Save history and data from the current session.
Notebook Save the current prompt as a notebook.

Follow these steps to save your work:

  1. Click Save work > Save as.
  2. Select Prompt template.
  3. For the name, type Sample prompts.
  4. Select the View in project after saving option.
  5. Click Save.
  6. On the project's Assets tab, click the Sample prompts asset to load that prompt in the Prompt Lab and get right back to work.
  7. Click the Saved prompts icon to see the saved prompts from your sandbox project.

Check your progress

The following image shows the project's Assets tab with the prompt template asset:

Project's Assets tab

The following image shows the saved prompt in the Prompt Lab:

Saved prompt in Prompt Lab



Next steps

Watch this video showing Chat mode in the Prompt Lab.

This video provides a visual method to learn the concepts and tasks in this documentation.

You are now ready to:

Additional resources

  • Saving your work

  • Python library

  • View more videos.

  • Find sample data sets, projects, models, prompts, and notebooks in the Resource hub to gain hands-on experience:

    Notebooks that you can add to your project to get started analyzing data and building models.

    Projects that you can import containing notebooks, data sets, prompts, and other assets.

    Data sets that you can add to your project to refine, analyze, and build models.

    Prompts that you can use in the Prompt Lab to prompt a foundation model.

    Foundation models that you can use in the Prompt Lab.

Parent topic: Quick start tutorials
