Sample foundation model prompts for common tasks
Last updated: Oct 09, 2024

Foundation models can generate text to perform many different tasks. Common tasks include: classification, extracting details, generating text, question answering, and summarization. This topic contains sample prompts for all of these tasks.

 

How to use this topic

Explore the sample prompts in this topic:

  • Copy and paste the prompt text and input parameter values into the Prompt Lab in IBM watsonx.ai
  • See what text is generated
  • See how different models generate different output
  • Change the prompt text and parameters to see how results vary

There is no single right way to prompt foundation models, but patterns that work reliably have emerged in academia and industry. Use the samples in this topic to build your skills and your intuition about prompt engineering through experimentation.

 

The following video provides a visual way to learn the concepts and tasks in this documentation.


Video chapters
[ 0:11 ] Introduction to prompts and Prompt Lab
[ 0:33 ] Key concept: Everything is text completion
[ 1:34 ] Useful prompt pattern: Few-shot prompt
[ 1:58 ] Stopping criteria: Max tokens, stop sequences
[ 3:32 ] Key concept: Fine-tuning
[ 4:32 ] Useful prompt pattern: Zero-shot prompt
[ 5:32 ] Key concept: Be flexible, try different prompts
[ 6:14 ] Next steps: Experiment with sample prompts

 

Table 1. Sample prompts in this topic
Sample 1: Classify a message
  • Editor: Freeform; Format: Zero-shot; Decoding: Greedy
  • Models: mt0-xxl-13b, flan-t5-xxl-11b, flan-ul2-20b
  • Notes: Uses the class names as stop sequences to stop the model after printing the class name

Sample 2: Classify a message
  • Editor: Freeform; Format: Few-shot; Decoding: Greedy
  • Models: gpt-neox-20b, mpt-7b-instruct
  • Notes: Uses the class names as stop sequences

Sample 3: Classify a message
  • Editor: Structured; Format: Few-shot; Decoding: Greedy
  • Models: gpt-neox-20b, mpt-7b-instruct
  • Notes: Uses the class names as stop sequences

Sample 4: Extract details from a complaint
  • Editor: Freeform; Format: Zero-shot; Decoding: Greedy
  • Models: flan-ul2-20b

Sample 5: Generate a numbered list on a given theme
  • Editor: Freeform; Format: Few-shot; Decoding: Sampling
  • Models: gpt-neox-20b
  • Notes: Generates formatted output; uses two newline characters as a stop sequence to stop the model after one list

Sample 6: Generate a numbered list on a given theme
  • Editor: Structured; Format: Few-shot; Decoding: Sampling
  • Models: gpt-neox-20b
  • Notes: Generates formatted output; uses two newline characters as a stop sequence

Sample 7: Answer a question based on an article
  • Editor: Freeform; Format: Zero-shot; Decoding: Greedy
  • Models: mt0-xxl-13b, flan-t5-xxl-11b, flan-ul2-20b
  • Notes: Uses a period "." as a stop sequence so the model returns only a single sentence

Sample 8: Answer a question based on an article
  • Editor: Structured; Format: Zero-shot; Decoding: Greedy
  • Models: mt0-xxl-13b, flan-t5-xxl-11b, flan-ul2-20b
  • Notes: Uses a period "." as a stop sequence; generates results for multiple inputs at once

Sample 9: Summarize a meeting transcript
  • Editor: Freeform; Format: Zero-shot; Decoding: Greedy
  • Models: flan-t5-xxl-11b, flan-ul2-20b, mpt-7b-instruct2

Sample 10: Summarize a meeting transcript
  • Editor: Freeform; Format: Few-shot; Decoding: Greedy
  • Models: gpt-neox-20b

Sample 11: Summarize a meeting transcript
  • Editor: Structured; Format: Few-shot; Decoding: Greedy
  • Models: gpt-neox-20b
  • Notes: Generates formatted output; uses two newline characters as a stop sequence to stop the model after one list

Sample 12: Generate programmatic code from instructions
  • Editor: Freeform; Format: Few-shot; Decoding: Greedy
  • Models: starcoder-15.5b
  • Notes: Generates programmatic code as output; uses <end of code> as a stop sequence

Sample 13: Convert code from one programming language to another
  • Editor: Freeform; Format: Few-shot; Decoding: Greedy
  • Models: starcoder-15.5b
  • Notes: Generates programmatic code as output; uses <end of code> as a stop sequence

Sample 14: Converse in a dialogue
  • Editor: Freeform; Format: Custom structure; Decoding: Greedy
  • Models: llama-2-70b-chat
  • Notes: Model generates dialogue output like a chatbot; uses a model-specific prompt format

 

Classification

Sample 1

Scenario: Given a message submitted to a customer-support chatbot for a cloud software company, classify the customer's message as either a question or a problem description so the chat can be routed to the correct support team.

Model choice
Models that have been instruction-tuned can generally perform this task with this sample prompt. Suggestions: mt0-xxl-13b, flan-t5-xxl-11b, or flan-ul2-20b

Decoding
Greedy. The model must return only one of the class names; it must not be creative and make up new classes.

Stopping criteria

  • Specify two stop sequences: "Question" and "Problem". After the model generates either of those words, it should stop.
  • With such short output, the Max tokens parameter can be set to 5.

Prompt text
Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Classify this customer message into one of two classes: Question, Problem.

Class name: Question
Description: The customer is asking a technical question or a how-to question 
about our products or services.

Class name: Problem
Description: The customer is describing a problem they are having.  They might 
say they are trying something, but it's not working.  They might say they are 
getting an error or unexpected results.

Message: I'm having trouble registering for a new account.
Class name: 
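
Stop sequences can also be applied client-side when you call a model programmatically. The helper below is an illustrative sketch of what a stop sequence does, not a watsonx.ai API: it truncates the raw completion at the earliest occurrence of any stop string, keeping the matched stop sequence in the output.

```python
def apply_stop_sequences(generated: str, stop_sequences: list[str]) -> str:
    """Truncate generated text at the earliest stop sequence,
    keeping the matched stop string itself (one common convention)."""
    earliest = len(generated)
    matched = ""
    for stop in stop_sequences:
        idx = generated.find(stop)
        if idx != -1 and idx < earliest:
            earliest = idx
            matched = stop
    return generated[: earliest + len(matched)]

# A raw completion might continue past the class name; the stop
# sequences cut it down to just the label.
raw = "Problem\n\nMessage: My dashboard will not load."
print(apply_stop_sequences(raw, ["Question", "Problem"]))  # → Problem
```

This is why "Question" and "Problem" work as stop sequences here: the model stops as soon as it has printed either class name.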

 

Sample 2

Scenario: Given a message submitted to a customer-support chatbot for a cloud software company, classify the customer's message as either a question or a problem description so the chat can be routed to the correct support team.

Model choice
With few-shot examples of both classes, most models can perform this task well, including: gpt-neox-20b and mpt-7b-instruct.

Decoding
Greedy. The model must return only one of the class names; it must not be creative and make up new classes.

Stopping criteria

  • Specify two stop sequences: "Question" and "Problem". After the model generates either of those words, it should stop.
  • With such short output, the Max tokens parameter can be set to 5.

Prompt text
Paste this few-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Message: When I try to log in, I get an error.
Class name: Problem

Message: Where can I find the plan prices?
Class name: Question

Message: What is the difference between trial and paygo?
Class name: Question

Message: The registration page crashed, and now I can't create a new account.
Class name: Problem

Message: What regions are supported?
Class name: Question

Message: I can't remember my password.
Class name: Problem

Message: I'm having trouble registering for a new account.
Class name: 
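
If you build few-shot prompts like this in code, a small helper keeps the example format consistent. The function below is a hypothetical sketch, not a watsonx.ai API: it assembles the Message/Class name pattern from labeled examples and ends with an open "Class name:" cue for the model to complete.

```python
# Hypothetical helper that assembles a few-shot classification prompt
# from (message, label) pairs, following the pattern shown above.
def build_few_shot_prompt(examples, message):
    blocks = [
        f"Message: {text}\nClass name: {label}"
        for text, label in examples
    ]
    # End with the new message and an open cue for the model to complete.
    blocks.append(f"Message: {message}\nClass name:")
    return "\n\n".join(blocks)

examples = [
    ("When I try to log in, I get an error.", "Problem"),
    ("Where can I find the plan prices?", "Question"),
]
prompt = build_few_shot_prompt(
    examples, "I'm having trouble registering for a new account."
)
print(prompt)
```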

 

Sample 3

Scenario: Given a message submitted to a customer-support chatbot for a cloud software company, classify the customer's message as either a question or a problem description so the chat can be routed to the correct support team.

Model choice
With few-shot examples of both classes, most models can perform this task well, including: gpt-neox-20b and mpt-7b-instruct.

Decoding
Greedy. The model must return only one of the class names; it must not be creative and make up new classes.

Stopping criteria

  • Specify two stop sequences: "Question" and "Problem". After the model generates either of those words, it should stop.
  • With such short output, the Max tokens parameter can be set to 5.

Set up section
Paste these headers and examples into the Examples area of the Set up section:

Table 2. Classification few-shot examples
Message:                                                               Class name:
When I try to log in, I get an error.                                  Problem
Where can I find the plan prices?                                      Question
What is the difference between trial and paygo?                        Question
The registration page crashed, and now I can't create a new account.   Problem
What regions are supported?                                            Question
I can't remember my password.                                          Problem


Try section
Paste this message in the Try section:

I'm having trouble registering for a new account.

Select the model and set parameters, then click Generate to see the result.

 

Extracting details

Sample 4

Scenario: Given a complaint from a customer who had trouble booking a flight on a reservation website, identify the factors that contributed to this customer's unsatisfactory experience.

Model choice
flan-ul2-20b

Decoding
Greedy. We need the model to return words that are in the given input, not be creative and make up new words.

Stopping criteria
The list of extracted factors will not be very long, so set the Max tokens parameter to 50.

Prompt text
Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

From the following customer complaint, extract all the factors that 
caused the customer to be unhappy.

Customer complaint:
I just tried to book a flight on your incredibly slow website.  All 
the times and prices were confusing.  I liked being able to compare 
the amenities in economy with business class side by side.  But I 
never got to reserve a seat because I didn't understand the seat map.  
Next time, I'll use a travel agent!

Numbered list of all the factors that caused the customer to be unhappy:
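
If your application consumes the extracted factors downstream, you can parse the model's numbered list into a Python list. This is an illustrative post-processing sketch; the function name and the sample output string are hypothetical.

```python
import re

# Hypothetical post-processing: turn the model's numbered-list text
# into a Python list of factor strings.
def parse_numbered_list(text: str) -> list[str]:
    items = []
    for line in text.splitlines():
        # Match lines like "1. Slow website" or "2) Confusing prices".
        match = re.match(r"\s*\d+[.)]\s*(.+)", line)
        if match:
            items.append(match.group(1).strip())
    return items

output = """1. Slow website
2. Confusing times and prices
3. Did not understand the seat map"""
print(parse_numbered_list(output))
```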

 

Generating natural language

Sample 5

Scenario: Generate a numbered list on a given theme.

Model choice
gpt-neox-20b was trained to recognize and handle special characters, such as the newline character. This model is a good choice when you want your generated text to be formatted in a specific way with special characters.

Decoding
Sampling. This is a creative task. Set the following parameters:

  • Temperature: 0.7
  • Top P: 1
  • Top K: 50
  • Random seed: 9045 (To get different output each time you click Generate, specify a different value for the Random seed parameter or clear the parameter.)
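
The sketch below is an illustrative Python model of how these parameters interact, not the watsonx.ai implementation: temperature rescales the logits, Top K and Top P trim the candidate pool, and the random seed makes the draw repeatable. The token names and logit values are made up.

```python
import math
import random

def sample_token(logits, temperature=0.7, top_k=50, top_p=1.0, seed=9045):
    # Temperature rescales logits: lower values sharpen the distribution.
    scaled = {tok: logit / temperature for tok, logit in logits.items()}
    # Softmax over the scaled logits (subtract the max for stability).
    max_l = max(scaled.values())
    exp = {tok: math.exp(l - max_l) for tok, l in scaled.items()}
    total = sum(exp.values())
    probs = sorted(
        ((tok, e / total) for tok, e in exp.items()),
        key=lambda kv: kv[1], reverse=True,
    )
    probs = probs[:top_k]          # Top K: keep the k most likely tokens.
    kept, cumulative = [], 0.0
    for tok, p in probs:           # Top P: keep the smallest nucleus whose
        kept.append((tok, p))      # cumulative probability reaches top_p.
        cumulative += p
        if cumulative >= top_p:
            break
    rng = random.Random(seed)      # Fixed seed makes the draw repeatable.
    tokens, weights = zip(*kept)
    return rng.choices(tokens, weights=weights, k=1)[0]

logits = {"Cheese": 2.0, "Pepperoni": 1.5, "Anchovies": 0.1}
print(sample_token(logits))
```

With a fixed seed the same token is drawn every time, which is why clearing or changing the Random seed parameter produces different output on each click of Generate.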

Stopping criteria

  • To make sure the model stops generating text after one list, specify a stop sequence of two newline characters. To do that, click the Stop sequence text box, press the Enter key twice, then click Add sequence.
  • The list will not be very long, so set the Max tokens parameter to 50.

Prompt text
Paste this few-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

What are 4 types of dog breed?
1. Poodle
2. Dalmatian
3. Golden retriever
4. Bulldog

What are 3 ways to incorporate exercise into your day?
1. Go for a walk at lunch
2. Take the stairs instead of the elevator
3. Park farther away from your destination

What are 4 kinds of vegetable?
1. Spinach
2. Carrots
3. Broccoli
4. Cauliflower

What are the 3 primary colors?
1. Red
2. Green
3. Blue

What are 3 ingredients that are good on pizza?

 

Sample 6

Scenario: Generate a numbered list on a given theme.

Model choice
gpt-neox-20b was trained to recognize and handle special characters, such as the newline character. This model is a good choice when you want your generated text to be formatted in a specific way with special characters.

Decoding
Sampling. This is a creative task. Set the following parameters:

  • Temperature: 0.7
  • Top P: 1
  • Top K: 50
  • Random seed: 9045 (To generate different results, specify a different value for the Random seed parameter or clear the parameter.)

Stopping criteria

  • To make sure the model stops generating text after one list, specify a stop sequence of two newline characters. To do that, click in the Stop sequence text box, press the Enter key twice, then click Add sequence.
  • The list will not be very long, so set the Max tokens parameter to 50.

Set up section
Paste these headers and examples into the Examples area of the Set up section:

Table 3. Generation few-shot examples
Input: What are 4 types of dog breed?
Output:
1. Poodle
2. Dalmatian
3. Golden retriever
4. Bulldog

Input: What are 3 ways to incorporate exercise into your day?
Output:
1. Go for a walk at lunch
2. Take the stairs instead of the elevator
3. Park farther away from your destination

Input: What are 4 kinds of vegetable?
Output:
1. Spinach
2. Carrots
3. Broccoli
4. Cauliflower

Input: What are the 3 primary colors?
Output:
1. Red
2. Green
3. Blue


Try section
Paste this input in the Try section:

What are 3 ingredients that are good on pizza?

Select the model and set parameters, then click Generate to see the result.

 

Question answering

Sample 7

Scenario: The website for an online seed catalog has many articles to help customers plan their garden and ultimately select which seeds to purchase. A new widget is being added to the website to answer customer questions based on the contents of the article the customer is viewing. Given a question related to a given article, answer the question based on the article.

Model choice
Models that have been instruction-tuned, such as mt0-xxl-13b, flan-t5-xxl-11b, or flan-ul2-20b, can generally perform this task with this sample prompt.

Decoding
Greedy. The answers must be grounded in the facts in the article, and if there is no good answer in the article, the model should not be creative and make up an answer.

Stopping criteria
To cause the model to return a one-sentence answer, specify a period "." as a stop sequence. The Max tokens parameter can be set to 50.

Prompt text
Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Article:
###
Tomatoes are one of the most popular plants for vegetable gardens. 
Tip for success: If you select varieties that are resistant to 
disease and pests, growing tomatoes can be quite easy.  For 
experienced gardeners looking for a challenge, there are endless 
heirloom and specialty varieties to cultivate.  Tomato plants come 
in a range of sizes.  There are varieties that stay very small, less 
than 12 inches, and grow well in a pot or hanging basket on a balcony 
or patio.  Some grow into bushes that are a few feet high and wide, 
and can be grown in larger containers.  Other varieties grow into 
huge bushes that are several feet wide and high in a planter or 
garden bed.  Still other varieties grow as long vines, six feet or 
more, and love to climb trellises.  Tomato plants do best in full 
sun.  You need to water tomatoes deeply and often.  Using mulch 
prevents soil-borne disease from splashing up onto the fruit when you 
water.  Pruning suckers and even pinching the tips will encourage the 
plant to put all its energy into producing fruit.
###

Answer the following question using only information from the article. 
Answer in a complete sentence, with proper capitalization and punctuation. 
If there is no good answer in the article, say "I don't know".

Question: Why should you use mulch when growing tomatoes?
Answer: 

You can experiment with asking other questions too, such as:

  • "How large do tomato plants get?"
  • "Do tomato plants prefer shade or sun?"
  • "Is it easy to grow tomatoes?"

Try out-of-scope questions too, such as:

  • "How do you grow cucumbers?"

 

Sample 8

Scenario: The website for an online seed catalog has many articles to help customers plan their garden and ultimately select which seeds to purchase. A new widget is being added to the website to answer customer questions based on the contents of the article the customer is viewing. Given a question related to a given article, answer the question based on the article.

Model choice
Models that have been instruction-tuned, such as mt0-xxl-13b, flan-t5-xxl-11b, or flan-ul2-20b, can generally perform this task with this sample prompt.

Decoding
Greedy. The answers must be grounded in the facts in the article, and if there is no good answer in the article, the model should not be creative and make up an answer.

Stopping criteria
To cause the model to return a one-sentence answer, specify a period "." as a stop sequence. The Max tokens parameter can be set to 50.

Set up section
Paste this text into the Instruction area of the Set up section:

Article:
###
Tomatoes are one of the most popular plants for vegetable gardens. 
Tip for success: If you select varieties that are resistant to 
disease and pests, growing tomatoes can be quite easy.  For 
experienced gardeners looking for a challenge, there are endless 
heirloom and specialty varieties to cultivate.  Tomato plants come 
in a range of sizes.  There are varieties that stay very small, less 
than 12 inches, and grow well in a pot or hanging basket on a balcony 
or patio.  Some grow into bushes that are a few feet high and wide, 
and can be grown in larger containers.  Other varieties grow into 
huge bushes that are several feet wide and high in a planter or 
garden bed.  Still other varieties grow as long vines, six feet or 
more, and love to climb trellises.  Tomato plants do best in full 
sun.  You need to water tomatoes deeply and often.  Using mulch 
prevents soil-borne disease from splashing up onto the fruit when you 
water.  Pruning suckers and even pinching the tips will encourage the 
plant to put all its energy into producing fruit.
###

Answer the following question using only information from the article. 
Answer in a complete sentence, with proper capitalization and punctuation. 
If there is no good answer in the article, say "I don't know".


Try section
In the Try section, add an extra test row so you can paste each of these two questions in a separate row:

Why should you use mulch when growing tomatoes?

How do you grow cucumbers?

Select the model and set parameters, then click Generate to see two results.

 

Summarization

Sample 9

Scenario: Given a meeting transcript, summarize the main points as meeting notes so those notes can be shared with teammates who could not attend the meeting.

Model choice
Models that have been instruction-tuned can generally perform this task with this sample prompt. Suggestions: flan-t5-xxl-11b, flan-ul2-20b, or mpt-7b-instruct2.

Decoding
Greedy. The model must return the most predictable content based on what's in the prompt, not be too creative.

Stopping criteria
The summary might run several sentences, so set the Max tokens parameter to 60.

Prompt text
Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Summarize the following transcript.
Transcript:
00:00   [alex]  Let's plan the team party!
00:10   [ali]   How about we go out for lunch at the restaurant?
00:21   [sam]   Good idea.
00:47   [sam]   Can we go to a movie too?
01:04   [alex]  Maybe golf?
01:15   [sam]   We could give people an option to do one or the other.
01:29   [alex]  I like this plan.  Let's have a party!
Summary:
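
If you assemble transcripts like this programmatically before prompting, a small helper can reproduce the fixed-width layout the samples rely on. This is an illustrative sketch; the function and the tuple format are hypothetical.

```python
# Hypothetical helper that renders structured meeting turns into the
# "timestamp   [speaker]  text" transcript layout used by these samples.
def format_transcript(turns):
    return "\n".join(
        # Pad "[speaker]" to 8 characters so the text column lines up.
        f"{time}   {('[' + speaker + ']').ljust(8)}{text}"
        for time, speaker, text in turns
    )

turns = [
    ("00:00", "alex", "Let's plan the team party!"),
    ("00:10", "ali", "How about we go out for lunch at the restaurant?"),
]
print(format_transcript(turns))
```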

 

Sample 10

Scenario: Given a meeting transcript, summarize the main points as meeting notes so those notes can be shared with teammates who could not attend the meeting.

Model choice
With few-shot examples, most models can perform this task well. Try: gpt-neox-20b.

Decoding
Greedy. The model must return the most predictable content based on what's in the prompt, not be too creative.

Stopping criteria

  • To make sure the model stops generating text after the summary, specify a stop sequence of two newline characters. To do that, click in the Stop sequence text box, press the Enter key twice, then click Add sequence.
  • Set the Max tokens parameter to 60.

Prompt text
Paste this few-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Transcript:
00:00   [sam]   I wanted to share an update on project X today.
00:15   [sam]   Project X will be completed at the end of the week.
00:30   [erin]  That's great!
00:35   [erin]  I heard from customer Y today, and they agreed to buy our product.
00:45   [alex]  Customer Z said they will too.
01:05   [sam]   Great news, all around.
Summary:
Sam shared an update that project X will be complete at the end of the week.  
Erin said customer Y will buy our product.  And Alex said customer Z will buy 
our product too.

Transcript:
00:00   [ali]   The goal today is to agree on a design solution.
00:12   [alex]  I think we should consider choice 1.
00:25   [ali]   I agree
00:40   [erin]  Choice 2 has the advantage that it will take less time.
01:03   [alex]  Actually, that's a good point.
01:30   [ali]   So, what should we do?
01:55   [alex]  I'm good with choice 2.
02:20   [erin]  Me too.
02:45   [ali]   Done!
Summary:
Alex suggested considering choice 1.  Erin pointed out choice two will take 
less time.  The team agreed with choice 2 for the design solution.

Transcript:
00:00   [alex]  Let's plan the team party!
00:10   [ali]   How about we go out for lunch at the restaurant?
00:21   [sam]   Good idea.
00:47   [sam]   Can we go to a movie too?
01:04   [alex]  Maybe golf?
01:15   [sam]   We could give people an option to do one or the other.
01:29   [alex]  I like this plan.  Let's have a party!
Summary:

 

Sample 11

Scenario: Given a meeting transcript, summarize the main points in a bulleted list so that list can be shared with teammates who could not attend the meeting.

Model choice
gpt-neox-20b was trained to recognize and handle special characters, such as the newline character. This model is a good choice when you want your generated text to be formatted in a specific way with special characters.

Decoding
Greedy. The model must return the most predictable content based on what's in the prompt, not be too creative.

Stopping criteria

  • To make sure the model stops generating text after one list, specify a stop sequence of two newline characters. To do that, click in the Stop sequence text box, press the Enter key twice, then click Add sequence.
  • Set the Max tokens parameter to 60.

Set up section
Paste these headers and examples into the Examples area of the Set up section:

Table 4. Summarization few-shot examples
Transcript:
00:00   [sam]   I wanted to share an update on project X today.
00:15   [sam]   Project X will be completed at the end of the week.
00:30   [erin]  That's great!
00:35   [erin]  I heard from customer Y today, and they agreed to buy our product.
00:45   [alex]  Customer Z said they will too.
01:05   [sam]   Great news, all around.
Summary:
- Sam shared an update that project X will be complete at the end of the week
- Erin said customer Y will buy our product
- And Alex said customer Z will buy our product too

Transcript:
00:00   [ali]   The goal today is to agree on a design solution.
00:12   [alex]  I think we should consider choice 1.
00:25   [ali]   I agree
00:40   [erin]  Choice 2 has the advantage that it will take less time.
01:03   [alex]  Actually, that's a good point.
01:30   [ali]   So, what should we do?
01:55   [alex]  I'm good with choice 2.
02:20   [erin]  Me too.
02:45   [ali]   Done!
Summary:
- Alex suggested considering choice 1
- Erin pointed out choice two will take less time
- The team agreed with choice 2 for the design solution


Try section
Paste this message in the Try section:

00:00   [alex]  Let's plan the team party!
00:10   [ali]   How about we go out for lunch at the restaurant?
00:21   [sam]   Good idea.
00:47   [sam]   Can we go to a movie too?
01:04   [alex]  Maybe golf?
01:15   [sam]   We could give people an option to do one or the other.
01:29   [alex]  I like this plan.  Let's have a party!

Select the model and set parameters, then click Generate to see the result.

 

Code

Sample 12

Scenario: You want to generate code from instructions. Namely, you want to write a function in the Python programming language that returns a sequence of prime numbers lower than the number passed to the function as a variable.

Model choice
Models that are capable of code generation, such as starcoder-15.5b, can generally perform this task when given a sample prompt.

Decoding
Greedy. The answer should be a valid code snippet. The model should not be creative and make up an answer.

Stopping criteria
To stop the model after it returns a single code snippet, specify <end of code> as the stop sequence. The Max tokens parameter can be set to 1,000.

Prompt text
Paste this code snippet into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Using the directions below, generate Python code for the given task.

Input:
# Write a Python function that prints 'Hello World!' string 'n' times.

Output:
def print_n_times(n):
    for i in range(n):
        print("Hello World!")

<end of code>

Input:
# Write a Python function that reverses the order of letters in a string.
# The function named 'reversed' takes the argument 'my_string', which is a string. It returns the string in reverse order.

Output:

The output should contain Python code similar to the following snippet:

def reversed(my_string):
    return my_string[::-1]

Be sure to test the generated code to verify that it works as you expect.

For example, if you run reversed("good morning"), the result is 'gninrom doog'.
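
One caveat worth catching when you test: the instruction names the function reversed, so the generated definition shadows Python's built-in reversed() in any module that defines it. A quick check like the following (hypothetical test code, not model output) verifies the behavior:

```python
def reversed(my_string):
    return my_string[::-1]

# Note: defining a function named 'reversed' shadows Python's built-in
# reversed() in this module; prefer a different name (for example,
# reverse_string) in production code.
assert reversed("good morning") == "gninrom doog"
print(reversed("good morning"))  # → gninrom doog
```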

Note: The StarCoder model might generate code that is taken directly from its training data. As a result, generated code might require attribution. You are responsible for ensuring that any generated code that you use is properly attributed, if necessary.

 

Sample 13

Scenario: You want to convert code from one programming language to another. Namely, you want to translate a code snippet from C++ to Python.

Model choice
Models that are capable of code conversion, such as starcoder-15.5b, can generally perform this task when given a sample prompt.

Decoding
Greedy. The answer should be a valid code snippet. The model should not be creative and make up an answer.

Stopping criteria
To stop the model after it returns a single code snippet, specify <end of code> as the stop sequence. The Max tokens parameter can be set to 300.

Prompt text
Paste this code snippet into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

This prompt includes an example input and output pair. The input is C++ code and the output is the equivalent function in Python code.

The C++ code snippet to be converted is included next. It is a function that counts the number of arithmetic progressions with the sum S and common difference of D, where S and D are integer values that are passed as parameters.

The final part of the prompt identifies the language that you want the C++ code snippet to be converted into.

Translate the following code from C++ to Python.

C++:
#include "bits/stdc++.h"
using namespace std;
bool isPerfectSquare(long double x) {
  long double sr = sqrt(x);
  return ((sr - floor(sr)) == 0);
}
void checkSunnyNumber(int N) {
  if (isPerfectSquare(N + 1)) {
    cout << "Yes\n";
  } else {
    cout << "No\n";
  }
}
int main() {
  int N = 8;
  checkSunnyNumber(N);
  return 0;
}

Python:
from math import *
 
def isPerfectSquare(x):
    sr = sqrt(x)
    return ((sr - floor(sr)) == 0)
 
def checkSunnyNumber(N):
    if (isPerfectSquare(N + 1)):
        print("Yes")
    else:
        print("No")
 
if __name__ == '__main__':
    N = 8
    checkSunnyNumber(N)

<end of code>

C++:
#include <bits/stdc++.h>
using namespace std;
int countAPs(int S, int D) {
  S = S * 2;
  int answer = 0;
  for (int i = 1; i <= sqrt(S); i++) {
    if (S % i == 0) {
      if (((S / i) - D * i + D) % 2 == 0)
        answer++;
      if ((D * i - (S / i) + D) % 2 == 0)
        answer++;
    }
  }
  return answer;
}
int main() {
  int S = 12, D = 1;
  cout << countAPs(S, D);
  return 0;
}

Python:

The output should contain Python code similar to the following snippet:

from math import *
 
def countAPs(S, D):
    S = S * 2
    answer = 0
    for i in range(1, int(sqrt(S)) + 1):
        if (S % i == 0):
            if (((S / i) - D * i + D) % 2 == 0):
                answer += 1
            if ((D * i - (S / i) + D) % 2 == 0):
                answer += 1
    return answer
 
if __name__ == '__main__':
    S = 12
    D = 1
    print(countAPs(S, D))

The generated Python code functions the same as the C++ function included in the prompt.

Test the generated Python code to verify that it works as you expect.

Remember, the StarCoder model might generate code that is taken directly from its training data. As a result, generated code might require attribution. You are responsible for ensuring that any generated code that you use is properly attributed, if necessary.

 

Dialogue

Sample 14

Scenario: Generate dialogue output like a chatbot.

Model choice
Like other foundation models, llama-2-70b-chat can be used for multiple tasks. But llama-2-70b-chat is unusual in that it has been optimized for dialogue use cases, and it is the only model in watsonx.ai fine-tuned for the [INST]<<SYS>><</SYS>>[/INST] prompt format. For more information about this prompt format, see: How to prompt Llama 2

Decoding
Greedy. This sample answers general-knowledge, factual questions, so we don't want creative output.

Stopping criteria

  • To prevent the model from hallucinating its own dialogue with both user messages and assistant messages, specify a stop sequence of one newline character. To do that, click the Stop sequence text box, press the Enter key once, then click Add sequence.
  • Set the Max tokens parameter to 100.

Prompt text
Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

[INST] <<SYS>>
You are a helpful assistant. Begin each response with the phrase "Dear user, ".  If you are unsure about an answer, truthfully say "I don't know"<</SYS>>

I have a question about the Earth.[/INST]
Dear user, go on.
[INST]

How many moons are there?
[/INST]

After the initial output is generated, continue the dialogue to ask about more planets, wrapping your user messages in the same [INST][/INST] structure. The output will look something like this:

[INST] <<SYS>>
You are a helpful assistant. Begin each response with the phrase "Dear user, ".  If you are unsure about an answer, truthfully say "I don't know"<</SYS>>

I have a question about the Earth.[/INST]
Dear user, go on.
[INST]

How many moons are there?
[/INST]
Dear user, the Earth has one natural satellite, commonly referred to as the Moon. 
[INST]

What about Saturn?
[/INST]
Dear user, Saturn has a total of 62 confirmed moons, as well as numerous smaller moonlets and ring particles. The largest moon of Saturn is Titan, which is the second-largest moon in the solar system, after Jupiter's moon Ganymede. 
[INST]

And Mars?
[/INST]
Dear user, Mars has two small natural satellites, Phobos and Deimos. Both moons are thought to be captured asteroids, and are irregularly shaped and very small, with diameters of about 22 kilometers and 12 kilometers respectively.

Notice a few important points about the output the model generates as you go:

  • The so-called system instruction, in the <<SYS>><</SYS>> tags, continues to influence the output at each dialogue turn without having to repeat the system instruction. In other words, the assistant responses continue to be prepended with "Dear user, ".
  • In true dialogue fashion, the assistant response to the user input "How many moons are there?" takes into account the previous user input "I have a question about the Earth." and returns the number of moons orbiting Earth.
  • Also in proper dialogue form, the assistant responses continue to follow the topic of the conversation, which is the number of moons. (Otherwise, the generated output to the vague user message "And Mars?" could wander off in any direction.)
  • Caution: Newline (carriage-return) characters especially, and spaces to a lesser extent, in the prompt text can have a dramatic impact on the output generated. Pay close attention to the formatting of your prompt text with this sample.
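
Because the formatting is so sensitive, assembling the prompt in code can be safer than editing it by hand. The helper below is an illustrative sketch, not a watsonx.ai API: it builds the [INST]<<SYS>><</SYS>>[/INST] structure from a system instruction and a list of dialogue turns, so each new user message can be appended programmatically.

```python
# Hypothetical helper that assembles a Llama 2 chat prompt in the
# [INST]<<SYS>><</SYS>>[/INST] format used by this sample.
def build_llama2_prompt(system, turns):
    """turns: list of (user_message, assistant_reply_or_None) pairs."""
    prompt = f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n"
    first = True
    for user, assistant in turns:
        if first:
            # The first user message shares the [INST] block with the
            # system instruction.
            prompt += user
            first = False
        else:
            prompt += "[INST]\n\n" + user + "\n"
        prompt += "[/INST]\n"
        if assistant is not None:
            prompt += assistant + "\n"
    return prompt

turns = [
    ("I have a question about the Earth.", "Dear user, go on."),
    ("How many moons are there?", None),  # open turn for the model
]
prompt = build_llama2_prompt("You are a helpful assistant.", turns)
print(prompt)
```

Ending the prompt just after [/INST] leaves the next assistant turn open for the model to generate, matching the dialogue flow shown above.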

 

Parent topic: Prompt tips
