Sample foundation model prompts for common tasks

Try these samples to learn how different prompts can guide foundation models to do common tasks.

How to use this topic

Explore the sample prompts in this topic:

  • Copy and paste the prompt text and input parameter values into the Prompt Lab in IBM watsonx.ai.
  • See what text is generated.
  • See how different models generate different output.
  • Change the prompt text and parameters to see how results vary.

There is no one right way to prompt foundation models. But patterns have been found, in academia and industry, that work fairly reliably. Use the samples in this topic to build your skills and your intuition about prompt engineering through experimentation.


This video provides a visual method to learn the concepts and tasks in this documentation.


Video chapters
[ 0:11 ] Introduction to prompts and Prompt Lab
[ 0:33 ] Key concept: Everything is text completion
[ 1:34 ] Useful prompt pattern: Few-shot prompt
[ 1:58 ] Stopping criteria: Max tokens, stop sequences
[ 3:32 ] Key concept: Fine-tuning
[ 4:32 ] Useful prompt pattern: Zero-shot prompt
[ 5:32 ] Key concept: Be flexible, try different prompts
[ 6:14 ] Next steps: Experiment with sample prompts


Samples overview

You can find samples that prompt foundation models to generate output that supports the following tasks:

  • Classification
  • Extracting details
  • Generating natural language
  • Question answering
  • Summarization
  • Code generation and conversion
  • Dialog
  • Translation

The following table shows the foundation models that are used in task-specific samples and the tasks that each model demonstrates. The task assignments follow the sample list in Table 2.

Table 1. Models used in samples for certain tasks
Model | Tasks with samples
granite-13b-chat-v2 | Dialog
granite-13b-instruct-v2 | Generation, QA
granite-7b-lab | Generation
granite-8b-japanese | QA, Dialog, Translation
granite-20b-multilingual | Translation
codellama-34b-instruct-hf | Coding
elyza-japanese-llama-2-7b-instruct | Classification, Translation
flan-t5-xxl-11b | Classification, QA, Summarization
flan-ul2-20b | Classification, Extraction, QA, Summarization
jais-13b-chat | Dialog
llama-3-8b-instruct | Dialog
llama-3-70b-instruct | Dialog
llama-2 chat | Dialog
llama2-13b-dpo-v7 | Summarization, Dialog
merlinite-7b | Generation
mixtral-8x7b-instruct-v01 | Classification, Generation, QA, Summarization, Coding, Translation
mt0-xxl-13b | Classification, QA

The following table summarizes the available sample prompts.

Table 2. List of sample prompts
Scenario | Prompt editor | Prompt format | Model | Decoding | Notes
Sample with a zero-shot prompt: Classify a message | Freeform | Zero-shot | mt0-xxl-13b, flan-t5-xxl-11b, flan-ul2-20b, mixtral-8x7b-instruct-v01 | Greedy | Uses the class names as stop sequences to stop the model after it prints the class name
Sample with a few-shot prompt: Classify a message in freeform mode | Freeform | Few-shot | mixtral-8x7b-instruct-v01 | Greedy | Uses the class names as stop sequences
Sample with a few-shot prompt: Classify a message in structured mode | Structured | Few-shot | mixtral-8x7b-instruct-v01 | Greedy | Uses the class names as stop sequences
Sample: Classify a Japanese message | Freeform | Few-shot | elyza-japanese-llama-2-7b-instruct | Greedy | Uses the class names as stop sequences
Sample: Extract details from a complaint | Freeform | Zero-shot | flan-ul2-20b | Greedy |
Sample: Generate a numbered list on a theme in freeform mode | Freeform | Few-shot | mixtral-8x7b-instruct-v01 | Sampling | Generates formatted output; uses two newline characters as a stop sequence to stop the model after one list
Sample: Generate a numbered list on a theme in structured mode | Structured | Few-shot | mixtral-8x7b-instruct-v01 | Sampling | Generates formatted output; uses two newline characters as a stop sequence
Sample: Generate a numbered list on a particular theme with Granite | Freeform | Zero-shot | granite-13b-instruct-v2 | Greedy | Generates formatted output
Sample: Answer a question based on an article in freeform mode | Freeform | Zero-shot | mt0-xxl-13b, flan-t5-xxl-11b, flan-ul2-20b, mixtral-8x7b-instruct-v01 | Greedy | Uses a period "." as a stop sequence to cause the model to return only a single sentence
Sample: Answer a question based on an article in structured mode | Structured | Zero-shot | mt0-xxl-13b, flan-t5-xxl-11b, flan-ul2-20b, mixtral-8x7b-instruct-v01 | Greedy | Uses a period "." as a stop sequence; generates results for multiple inputs at once
Sample: Answer a question based on a document with Granite | Freeform | Zero-shot | granite-13b-instruct-v2 | Greedy |
Sample: Answer general knowledge questions in Japanese | Freeform | Zero-shot | granite-8b-japanese | Greedy |
Sample: Generate a title for a passage | Freeform | One-shot | granite-7b-lab | Greedy | Uses a special token that is named <|endoftext|> as a stop sequence
Sample: Complete a sequence in a pattern | Freeform | One-shot | merlinite-7b | Greedy | Uses a special token that is named <|endoftext|> as a stop sequence
Sample zero-shot prompt: Summarize a meeting transcript | Freeform | Zero-shot | flan-t5-xxl-11b, flan-ul2-20b, mixtral-8x7b-instruct-v01 | Greedy |
Sample few-shot prompt: Summarize a meeting transcript in freeform mode | Freeform | Few-shot | mixtral-8x7b-instruct-v01 | Greedy |
Sample few-shot prompt: Summarize a meeting transcript in structured mode | Structured | Few-shot | mixtral-8x7b-instruct-v01 | Greedy | Generates formatted output; uses two newline characters as a stop sequence to stop the model after one list
Sample: Summarize content in Korean | Freeform | Zero-shot | llama2-13b-dpo-v7 | Greedy | Generates output in Korean
Sample: Generate programmatic code from instructions | Freeform | Few-shot | mixtral-8x7b-instruct-v01, codellama-34b-instruct-hf | Greedy | Generates programmatic code as output; uses <end of code> as a stop sequence
Sample: Convert code from one programming language to another | Freeform | Few-shot | mixtral-8x7b-instruct-v01, codellama-34b-instruct-hf | Greedy | Generates programmatic code as output; uses <end of code> as a stop sequence
Sample: Converse with Llama 3 | Freeform | Custom structure | llama-3-8b-instruct, llama-3-70b-instruct | Greedy | Generates dialog output like a chatbot; uses a model-specific prompt format
Sample: Converse with Llama 2 Chat | Freeform | Custom structure | llama-2 chat | Greedy | Generates dialog output like a chatbot; uses a model-specific prompt format
Sample: Converse with granite-13b-chat-v2 | Freeform | Custom structure | granite-13b-chat-v2 | Greedy | Generates dialog output like a chatbot; uses a system prompt to establish guardrails for the dialog
Sample: Converse in Japanese with granite-8b-japanese | Freeform | Custom structure | granite-8b-japanese | Greedy | Generates Japanese dialog output like a chatbot; uses a model-specific prompt format
Sample: Converse in Arabic with jais-13b-chat | Freeform | Custom structure | jais-13b-chat | Greedy | Generates English or Arabic dialog output like a chatbot; uses a model-specific prompt format
Sample: Converse in Korean with llama2-13b-dpo-v7 | Freeform | Custom structure | llama2-13b-dpo-v7 | Greedy | Generates Korean dialog output like a chatbot; uses a model-specific prompt format; uses a period as a stop sequence
Sample: Translate text from Japanese to English | Freeform | Zero-shot | elyza-japanese-llama-2-7b-instruct | Greedy | Translates text from Japanese to English
Sample: Translate text from Spanish to English | Freeform | Few-shot | mixtral-8x7b-instruct-v01 | Greedy | Translates text from Spanish to English
Sample: Translate text from English to Japanese | Freeform | Zero-shot | granite-8b-japanese | Greedy | Translates text from English to Japanese
Sample: Translate text from French to English | Freeform | Few-shot | granite-20b-multilingual | Greedy | Translates text from French to English

Classification

Classification is useful for predicting data in distinct categories. Classifications can be binary, with two classes of data, or multi-class. A classification task is useful for categorizing information, such as customer feedback, so that you can manage or act on the information more efficiently.

Sample with a zero-shot prompt: Classify a message

Scenario: Given a message that is submitted to a customer-support chatbot for a cloud software company, classify the customer's message as either a question or a problem. Depending on the class assignment, the chat is routed to the correct support team for the issue type.

Model choice
Models that are instruction-tuned can generally complete this task with this sample prompt.

Suggestions: mt0-xxl-13b, flan-t5-xxl-11b, flan-ul2-20b, or mixtral-8x7b-instruct-v01

Decoding
Greedy. The model must return one of the specified class names; it cannot be creative and make up new classes.

Stopping criteria

  • Specify two stop sequences: "Question" and "Problem". After the model generates either of those words, it should stop.
  • With such short output, the Max tokens parameter can be set to 5.

Prompt text
Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Classify this customer message into one of two classes: Question, Problem.

Class name: Question
Description: The customer is asking a technical question or a how-to question 
about our products or services.

Class name: Problem
Description: The customer is describing a problem they are having. They might 
say they are trying something, but it's not working. They might say they are 
getting an error or unexpected results.

Message: I'm having trouble registering for a new account.
Class name: 
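
You can also run this sample outside of Prompt Lab. The following is a minimal sketch that assumes the ibm-watsonx-ai Python SDK; the endpoint URL, API key, project ID, and exact model ID are placeholders that you must replace with values from your own watsonx.ai instance.

# Minimal sketch, assuming the ibm-watsonx-ai Python SDK (pip install ibm-watsonx-ai).
# The URL, API key, project ID, and model ID are placeholders.
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

model = ModelInference(
    model_id="mistralai/mixtral-8x7b-instruct-v01",  # or another suggested model
    credentials=Credentials(
        url="https://us-south.ml.cloud.ibm.com",
        api_key="YOUR_API_KEY",
    ),
    project_id="YOUR_PROJECT_ID",
    params={
        "decoding_method": "greedy",
        "max_new_tokens": 5,
        "stop_sequences": ["Question", "Problem"],  # stop after either class name
    },
)

prompt = """Classify this customer message into one of two classes: Question, Problem.

Class name: Question
Description: The customer is asking a technical question or a how-to question
about our products or services.

Class name: Problem
Description: The customer is describing a problem they are having. They might
say they are trying something, but it's not working. They might say they are
getting an error or unexpected results.

Message: I'm having trouble registering for a new account.
Class name: """

# The message describes a problem, so the expected output is "Problem".
print(model.generate_text(prompt=prompt))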

Sample with a few-shot prompt: Classify a message in freeform mode

Scenario: Given a message that is submitted to a customer-support chatbot for a cloud software company, classify the customer's message as either a question or a problem description. Based on the class type, the chat can be routed to the correct support team.

Model choice
With few-shot examples of both classes, models such as mixtral-8x7b-instruct-v01 can complete this task well.

Decoding
Greedy. The model must return one of the specified class names; it cannot be creative and make up new classes.

Stopping criteria

  • Specify two stop sequences: "Question" and "Problem". After the model generates either of those words, it should stop.
  • With such short output, the Max tokens parameter can be set to 5.

Prompt text
Paste this few-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Message: When I try to log in, I get an error.
Class name: Problem

Message: Where can I find the plan prices?
Class name: Question

Message: What is the difference between trial and paygo?
Class name: Question

Message: The registration page crashed, and now I can't create a new account.
Class name: Problem

Message: What regions are supported?
Class name: Question

Message: I can't remember my password.
Class name: Problem

Message: I'm having trouble registering for a new account.
Class name: 

Sample with a few-shot prompt: Classify a message in structured mode

Scenario: Given a message that is submitted to a customer-support chatbot for a cloud software company, classify the customer's message as either a question or a problem description. Based on the class type, the chat can be routed to the correct support team.

Model choice
With few-shot examples of both classes, models such as mixtral-8x7b-instruct-v01 can complete this task well.

Decoding
Greedy. The model must return one of the specified class names, not be creative and make up new classes.

Stopping criteria

  • Specify two stop sequences: "Question" and "Problem". After the model generates either of those words, it should stop.
  • With such short output, the Max tokens parameter can be set to 5.

Set up section
Paste these headers and examples into the Examples area of the Set up section:

Table 3. Classification few-shot examples
Message: | Class name:
When I try to log in, I get an error. | Problem
Where can I find the plan prices? | Question
What is the difference between trial and paygo? | Question
The registration page crashed, and now I can't create a new account. | Problem
What regions are supported? | Question
I can't remember my password. | Problem


Try section
Paste this message in the Try section:

I'm having trouble registering for a new account.

Select the model and set parameters, then click Generate to see the result.

Sample: Classify a Japanese message

Scenario: Given a message that is submitted to a customer-support chatbot for a Japanese cloud software company, classify the customer's message as either a question or a problem description. Based on the class type, the chat can be routed to the correct support team.

Model choice
The elyza-japanese-llama-2-7b-instruct model can classify prompt input text that is written in Japanese.

AI guardrails
Disable the AI guardrails feature. The feature supports English text only, so it might incorrectly flag Japanese content as inappropriate.

Decoding
Greedy. The model must return one of the specified class names; it cannot be creative and make up new classes.

Stopping criteria

  • Specify two stop sequences: 問題 for problem and 質問 for question. After the model generates either of those words, it should stop.
  • If you want to lower the value in the Max tokens parameter, do not lower the value below 7 tokens. Japanese characters use more tokens than the same words in English.

Prompt text
Paste this few-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. The sample prompt text is a Japanese translation of the English few-shot classification prompt shown earlier.

次のメッセージを問題または質問に分類します。

メッセージ: ログインしようとすると、エラーが発生します。
クラス名: 問題

メッセージ: プランの価格はどこで確認できますか?
クラス名: 質問

メッセージ: トライアルとペイゴーの違いは何ですか?
クラス名: 質問

メッセージ: 登録ページがクラッシュしたため、新しいアカウントを作成できません。
クラス名: 問題

メッセージ: どの地域がサポートされていますか?
クラス名: 質問

メッセージ: パスワードを思い出せません。
クラス名: 問題

メッセージ: 新しいアカウントの登録で問題が発生しました。
クラス名:


Extracting details

Extraction tasks can help you to find key terms or mentions in data based on the semantic meaning of words rather than simple text matches.

Sample: Extract details from a complaint

Scenario: Given a complaint from a customer who had trouble booking a flight on a reservation website, identify the factors that contributed to this customer's unsatisfactory experience.

Model choice
flan-ul2-20b

Decoding
Greedy. We need the model to return words that are in the input; the model cannot be creative and make up new words.

Stopping criteria
The list of extracted factors will not be long, so set the Max tokens parameter to 50.

Prompt text
Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

From the following customer complaint, extract all the factors that 
caused the customer to be unhappy.

Customer complaint:
I just tried to book a flight on your incredibly slow website. All 
the times and prices were confusing. I liked being able to compare 
the amenities in economy with business class side by side. But I 
never got to reserve a seat because I didn't understand the seat map. 
Next time, I'll use a travel agent!

Numbered list of all the factors that caused the customer to be unhappy:
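
To reuse this prompt with other complaints, you can wrap it in a small template function. This is an illustrative sketch, not part of the sample; the function name is arbitrary.

# Sketch: wrap any customer complaint in the zero-shot extraction prompt above.
def build_extraction_prompt(complaint: str) -> str:
    return (
        "From the following customer complaint, extract all the factors that "
        "caused the customer to be unhappy.\n\n"
        "Customer complaint:\n"
        f"{complaint}\n\n"
        "Numbered list of all the factors that caused the customer to be unhappy:\n"
    )

# Example: substitute any complaint text before sending the prompt to a model.
print(build_extraction_prompt("The website was slow and the seat map was confusing."))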

Generating natural language

Generation tasks are what large language models do best. Your prompts can help guide the model to generate useful language.

Sample with a few-shot prompt: Generate a numbered list on a theme in freeform mode

Scenario: Generate a numbered list on a particular theme.

Model choice
The mixtral-8x7b-instruct-v01 foundation model was trained to recognize and handle special characters, such as the newline character, well. This model is a good choice when you want your generated text to be formatted a specific way with special characters.

Decoding
Sampling. This is a creative task. Set the following parameters:

  • Temperature: 0.7
  • Top P: 1
  • Top K: 50
  • Random seed: 9045 (To get different output each time you click Generate, specify a different value for the Random seed parameter or clear the parameter.)

Stopping criteria

  • To make sure the model stops generating text after one list, specify a stop sequence of two newline characters. To do that, click the Stop sequence text box, press the Enter key twice, and then click Add sequence.
  • The list will not be very long, so set the Max tokens parameter to 50.

Prompt text
Paste this few-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

What are 4 types of dog breed?
1. Poodle
2. Dalmatian
3. Golden retriever
4. Bulldog

What are 3 ways to incorporate exercise into your day?
1. Go for a walk at lunch
2. Take the stairs instead of the elevator
3. Park farther away from your destination

What are 4 kinds of vegetable?
1. Spinach
2. Carrots
3. Broccoli
4. Cauliflower

What are the 3 primary colors?
1. Red
2. Green
3. Blue

What are 3 ingredients that are good on pizza?
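
If you submit this prompt through the watsonx.ai API instead of Prompt Lab, the decoding settings above map onto text-generation parameters. The following sketch shows the parameter names that the API uses; note that the two-newline stop sequence is the literal string "\n\n".

# Sketch: sampling parameters for this list-generation sample,
# expressed as watsonx.ai text-generation parameters.
params = {
    "decoding_method": "sample",
    "temperature": 0.7,
    "top_p": 1,
    "top_k": 50,
    "random_seed": 9045,  # change or remove this value to vary the output
    "max_new_tokens": 50,
    "stop_sequences": ["\n\n"],  # two newline characters: stop after one list
}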

Sample with a few-shot prompt: Generate a numbered list on a theme in structured mode

Scenario: Generate a numbered list on a particular theme.

Model choice
The mixtral-8x7b-instruct-v01 foundation model was trained to recognize and handle special characters, such as the newline character, well. This model is a good choice when you want your generated text to be formatted a specific way with special characters.

Decoding
Sampling. This scenario is a creative one. Set the following parameters:

  • Temperature: 0.7
  • Top P: 1
  • Top K: 50
  • Random seed: 9045 (To generate different results, specify a different value for the Random seed parameter or clear the parameter.)

Stopping criteria

  • To make sure that the model stops generating text after one list, specify a stop sequence of two newline characters. To do that, click in the Stop sequence text box, press the Enter key twice, and then click Add sequence.
  • The list will not be long, so set the Max tokens parameter to 50.

Set up section
Paste these headers and examples into the Examples area of the Set up section:

Table 4. Generation few-shot examples

Input: What are 4 types of dog breed?
Output:
1. Poodle
2. Dalmatian
3. Golden retriever
4. Bulldog

Input: What are 3 ways to incorporate exercise into your day?
Output:
1. Go for a walk at lunch
2. Take the stairs instead of the elevator
3. Park farther away from your destination

Input: What are 4 kinds of vegetable?
Output:
1. Spinach
2. Carrots
3. Broccoli
4. Cauliflower

Input: What are the 3 primary colors?
Output:
1. Red
2. Green
3. Blue


Try section
Paste this input in the Try section:

What are 3 ingredients that are good on pizza?

Select the model and set parameters, then click Generate to see the result.

Sample with a zero-shot prompt: Generate a numbered list on a particular theme

Scenario: Ask the model to play devil's advocate. Describe a potential action and ask the model to list possible downsides or risks that are associated with the action.

Model choice
The granite-13b-instruct-v2 model was trained to recognize and handle special characters, such as the newline character, well. It is a good choice when you want your generated text to be formatted in a specific way with special characters.

Decoding
Greedy. The model must return the most predictable content based on what's in the prompt; the model cannot be too creative.

Stopping criteria
The generated list might run several sentences, so set the Max tokens parameter to 60.

Prompt text
Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

You are playing the role of devil's advocate. Argue against the proposed plans. List 3 detailed, unique, compelling reasons why moving forward with the plan would be a bad choice. Consider all types of risks.

Plan we are considering:
Extend our store hours.
Three problems with this plan are:
1. We'll have to pay more for staffing.
2. Risk of theft increases late at night.
3. Clerks might not want to work later hours.

Plan we are considering:
Open a second location for our business.
Three problems with this plan are:
1. Managing two locations will be more than twice as time-consuming as managing just one.
2. Creating a new location doesn't guarantee twice as many customers.
3. A new location means added real estate, utility, and personnel expenses.

Plan we are considering:
Refreshing our brand image by creating a new logo.
Three problems with this plan are:

Question answering

Question-answering tasks are useful in help systems and other scenarios where frequently asked or more nuanced questions can be answered from existing content.

To help the model return factual answers, implement the retrieval-augmented generation pattern. For more information, see Retrieval-augmented generation.

Sample: Answer a question based on an article in freeform mode

Scenario: The website for an online seed catalog has many articles to help customers plan their garden and ultimately select which seeds to purchase. A new widget is being added to the website to answer customer questions based on the contents of the article the customer is viewing. Given a question that is related to an article, answer the question based on the article.

Model choice
Models that are instruction-tuned, such as flan-t5-xxl-11b, flan-ul2-20b, mixtral-8x7b-instruct-v01, or mt0-xxl-13b, can generally complete this task with this sample prompt.

Decoding
Greedy. The answers must be grounded in the facts in the article, and if there is no good answer in the article, the model should not be creative and make up an answer.

Stopping criteria
To cause the model to return a one-sentence answer, specify a period "." as a stop sequence. The Max tokens parameter can be set to 50.

Prompt text
Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Article:
###
Tomatoes are one of the most popular plants for vegetable gardens. 
Tip for success: If you select varieties that are resistant to 
disease and pests, growing tomatoes can be quite easy. For 
experienced gardeners looking for a challenge, there are endless 
heirloom and specialty varieties to cultivate. Tomato plants come 
in a range of sizes. There are varieties that stay very small, less 
than 12 inches, and grow well in a pot or hanging basket on a balcony 
or patio. Some grow into bushes that are a few feet high and wide, 
and can be grown in larger containers. Other varieties grow into 
huge bushes that are several feet wide and high in a planter or 
garden bed. Still other varieties grow as long vines, six feet or 
more, and love to climb trellises. Tomato plants do best in full 
sun. You need to water tomatoes deeply and often. Using mulch 
prevents soil-borne disease from splashing up onto the fruit when you 
water. Pruning suckers and even pinching the tips will encourage the 
plant to put all its energy into producing fruit.
###

Answer the following question using only information from the article. 
Answer in a complete sentence, with proper capitalization and punctuation. 
If there is no good answer in the article, say "I don't know".

Question: Why should you use mulch when growing tomatoes?
Answer: 

You can experiment with asking other questions too, such as:

  • "How large do tomato plants get?"
  • "Do tomato plants prefer shade or sun?"
  • "Is it easy to grow tomatoes?"

Try out-of-scope questions too, such as:

  • "How do you grow cucumbers?"

Sample: Answer a question based on an article in structured mode

Scenario: The website for an online seed catalog has many articles to help customers plan their garden and ultimately select which seeds to purchase. A new widget is being added to the website to answer customer questions based on the contents of the article the customer is viewing. Given a question related to a particular article, answer the question based on the article.

Model choice
Models that are instruction-tuned, such as flan-t5-xxl-11b, flan-ul2-20b, mixtral-8x7b-instruct-v01, or mt0-xxl-13b, can generally complete this task with this sample prompt.

Decoding
Greedy. The answers must be grounded in the facts in the article, and if there is no good answer in the article, the model should not be creative and make up an answer.

Stopping criteria
To cause the model to return a one-sentence answer, specify a period "." as a stop sequence. The Max tokens parameter can be set to 50.

Set up section
Paste this text into the Instruction area of the Set up section:

Article:
###
Tomatoes are one of the most popular plants for vegetable gardens. 
Tip for success: If you select varieties that are resistant to 
disease and pests, growing tomatoes can be quite easy. For 
experienced gardeners looking for a challenge, there are endless 
heirloom and specialty varieties to cultivate. Tomato plants come 
in a range of sizes. There are varieties that stay very small, less 
than 12 inches, and grow well in a pot or hanging basket on a balcony 
or patio. Some grow into bushes that are a few feet high and wide, 
and can be grown in larger containers. Other varieties grow into 
huge bushes that are several feet wide and high in a planter or 
garden bed. Still other varieties grow as long vines, six feet or 
more, and love to climb trellises. Tomato plants do best in full 
sun. You need to water tomatoes deeply and often. Using mulch 
prevents soil-borne disease from splashing up onto the fruit when you 
water. Pruning suckers and even pinching the tips will encourage the 
plant to put all its energy into producing fruit.
###

Answer the following question using only information from the article. 
Answer in a complete sentence, with proper capitalization and punctuation. 
If there is no good answer in the article, say "I don't know".


Try section
In the Try section, add an extra test row so you can paste each of these two questions in a separate row:

Why should you use mulch when growing tomatoes?

How do you grow cucumbers?

Select the model and set parameters, then click Generate to see two results.

Sample: Answer a question based on a document with Granite

Scenario: You are creating a chatbot that can answer user questions. When a user asks a question, you want the agent to answer the question with information from a specific document.

Model choice
Models that are instruction-tuned, such as granite-13b-instruct-v2, can complete the task with this sample prompt.

Decoding
Greedy. The answers must be grounded in the facts in the document, and if there is no good answer in the document, the model should not be creative and make up an answer.

Stopping criteria
Use a Max tokens parameter of 50.

Prompt text
Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Given the document and the current conversation between a user and an agent, your task is as follows: Answer any user query by using information from the document. The response should be detailed.

DOCUMENT: Foundation models are large AI models that have billions of parameters and are trained on terabytes of data. Foundation models can do various tasks, including text, code, or image generation, classification, conversation, and more. Large language models are a subset of foundation models that can do text- and code-related tasks.
DIALOG: USER: What are foundation models?
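
If the document or the user query changes at run time, you can assemble the prompt with a small helper. This sketch is illustrative; the function name is not part of the sample.

# Sketch: assemble the document-grounded QA prompt from run-time values.
def build_document_qa_prompt(document: str, user_query: str) -> str:
    return (
        "Given the document and the current conversation between a user and an "
        "agent, your task is as follows: Answer any user query by using "
        "information from the document. The response should be detailed.\n\n"
        f"DOCUMENT: {document}\n"
        f"DIALOG: USER: {user_query}\n"
    )

print(build_document_qa_prompt(
    "Foundation models are large AI models that have billions of parameters...",
    "What are foundation models?",
))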

Sample: Answer general knowledge questions in Japanese

Scenario: Answer general questions about finance in Japanese.

Model choice
The granite-8b-japanese model can be used for multiple tasks, including text generation, summarization, question answering, classification, and extraction.

Decoding
Greedy. This sample is answering questions, so we don't want creative output.

Stopping criteria

  • Set the Max tokens parameter to 500 to allow for a detailed answer.
  • Add a stop sequence of two newline characters to prevent the foundation model from returning overly long responses. To do that, click in the Stop sequence text box, press the Enter key twice, and then click Add sequence.

Prompt text
The model was tuned for question-answering with examples in the following format:

以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。

### 指示:
与えられた質問に対して、文脈がある場合はそれも利用し、回答してください。

### 入力:
{your-input}

### 応答:

In English, the template reads as follows:

Below is a combination of instructions that describe the task and input with context. Write a response that appropriately meets the request.

### Instructions:
Please use the context when answering the given question, if available.

### Input:
{your-input}

### Response:

Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, replace {your-input} with your query or request, and then click Generate to see the result.

For example, this prompt asks about interest rates.

以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。

### 指示:
与えられた質問に対して、文脈がある場合はそれも利用し、回答してください。

### 入力:
金利について教えてください。

### 応答:
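
If you fill in the template programmatically, a plain string replacement is the simplest approach; str.format would trip over the literal braces in the {your-input} placeholder. A sketch:

# Sketch: fill the granite-8b-japanese question-answering template.
TEMPLATE = """以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。

### 指示:
与えられた質問に対して、文脈がある場合はそれも利用し、回答してください。

### 入力:
{your-input}

### 応答:
"""

# str.replace avoids the brace-escaping issues that str.format would have
# with the literal "{your-input}" placeholder.
prompt = TEMPLATE.replace("{your-input}", "金利について教えてください。")
print(prompt)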

Sample: Generate a title for a passage

Scenario: Given a passage, generate a suitable title for the content.

Model choice
Use granite-7b-lab, which can do many types of general purpose tasks.

Decoding
Greedy. The model must generate a title that is based on what's in the prompt, not be too creative.

Stopping criteria

  • Add <|endoftext|> as the stop sequence.

    A helpful feature of the granite-7b-lab foundation model is that it appends a special token that is named <|endoftext|> to the end of each response. When some generative models return a response in fewer tokens than the maximum number allowed, they can repeat patterns from the input. The <|endoftext|> token gives you a reliable stop sequence that prevents such repetition.

Prompt text
Include at least one example of how you want the model to respond.

Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

The content that is provided as context in the prompt is taken from Tokens and tokenization in the product documentation.

<|system|>
You are an AI language model developed by IBM Research. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior.

<|user|>
Generate a title from the given context.
Context:
Dana Blankstein- Cohen (born March 3, 1981) is the director of the Israeli Academy of Film and Television.\n\nShe is a film director, and an Israeli culture entrepreneur.\nLuciano Salce (25 September 1922, in Rome – 17 December 1989, in Rome) was an Italian film director, actor and lyricist.\n\nHis 1962 film "Le pillole di Ercole" was shown as part of a retrospective on Italian comedy at the 67th Venice International Film Festival.\n\nAs a writer of pop music, he used the pseudonym Pilantra.\n\nDuring World War II, he was a prisoner in Germany.\n\nHe later worked for several years in Brazil.\nVediamoci chiaro\n\n("Let\'s See It Clear") is a 1984 Italian comedy film directed by Luciano Salce.\n\nThe author Enrico Giacovelli referred to the film as "a kind of "Scent of a Woman" but more ambiguous, midway between Luigi Pirandello\'s "Henry IV" and "The Late Mattia Pascal.\nPeter Levin is an American director of film, television and theatre.\nIan Barry is an Australian director of film and TV.\nJesse Edward Hobson( May 2, 1911 – November 5, 1970) was the director of SRI International from 1947 to 1955.\n\nPrior to SRI, he was the director of the Armour Research Foundation.\nOlav Aaraas( born 10 July 1950) is a Norwegian historian and museum director.\n\nHe was born in Fredrikstad.\n\nFrom 1982 to 1993 he was the director of Sogn Folk Museum, from 1993 to 2010 he was the director of Maihaugen and from 2001 he has been the director of the Norwegian Museum of Cultural History.\n\nIn 2010 he was decorated with the Royal Norwegian Order of St. Olav.\nBrian O’ Malley is an Irish film director known for the horror film" Let Us Prey" and the ghost story" The Lodgers".\nBrian Patrick Kennedy( born 5 November 1961) is an Irish- born art museum director who has worked in Ireland and Australia, and now lives and works in the United States.\n\nHe is currently the director of the Peabody Essex Museum.\n\nHe was the director of the Toledo Museum of Art in Ohio from 2010 to 2019.\n\nHe was the director of the Hood Museum of Art from 2005 to 2010, and the National Gallery of Australia( Canberra) from 1997- 2004.

<|assistant|>
Directors Across Borders

<|user|>
Generate a title from the given context.
Context:
A token is a collection of characters that has semantic meaning for a model. Tokenization is the process of converting the words in your prompt into tokens.
You can monitor foundation model token usage in a project on the Environments page on the Resource usage tab.
Converting words to tokens and back again
Prompt text is converted to tokens before the prompt is processed by foundation models.
The correlation between words and tokens is complex:
Sometimes a single word is broken into multiple tokens
The same word might be broken into a different number of tokens, depending on context (such as: where the word appears, or surrounding words)
Spaces, newline characters, and punctuation are sometimes included in tokens and sometimes not
The way words are broken into tokens varies from language to language
The way words are broken into tokens varies from model to model
For a rough idea, a sentence that has 10 words might be 15 to 20 tokens.
The raw output from a model is also tokens. In the Prompt Lab in IBM watsonx.ai, the output tokens from the model are converted to words to be displayed in the prompt editor.

<|assistant|>
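
When you run this prompt through the API instead of Prompt Lab, the stop sequence is passed as a parameter. A sketch of the relevant settings, using the watsonx.ai text-generation parameter names:

# Sketch: greedy decoding with the <|endoftext|> stop sequence.
params = {
    "decoding_method": "greedy",
    # granite-7b-lab appends <|endoftext|> to each response, which makes
    # the token a reliable stop sequence.
    "stop_sequences": ["<|endoftext|>"],
}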

Sample: Complete a sequence in a pattern

Scenario: Given a pattern, ask the model to generate the next expected value in the sequence.

Model choice
Use merlinite-7b, which can do many types of general purpose tasks.

Decoding
Greedy. The model must generate an answer that is based on what's in the prompt, not be too creative.

Stopping criteria

  • Add <|endoftext|> as the stop sequence.

    A helpful feature of the merlinite-7b foundation model is that it appends a special token that is named <|endoftext|> to the end of each response. When some generative models return a response in fewer tokens than the maximum number allowed, they can repeat patterns from the input. The <|endoftext|> token gives you a reliable stop sequence that prevents such repetition.

Prompt text
Include at least one example of how you want the model to respond.

Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

<|system|>
You are an AI language model developed by IBM Research. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior.

<|user|>
Follow the pattern: A, C, E, G, ... What is the next letter?
<|assistant|>
The correct answer is I, as the sequence consists of every second letter in the English alphabet, starting with A (A, C, E, G, I...).

<|user|>
Follow the pattern: 5, 10, 15, 20, ... What comes next?
<|assistant|>

Summarization

Summarization tasks save you time by condensing large amounts of text into a few key pieces of information.

Sample with a zero-shot prompt: Summarize a meeting transcript

Scenario: Given a meeting transcript, summarize the main points as meeting notes so those notes can be shared with teammates who did not attend the meeting.

Model choice
Models that are instruction-tuned can generally complete this task with this sample prompt. Suggestions: flan-t5-xxl-11b, flan-ul2-20b, or mixtral-8x7b-instruct-v01.

Decoding
Greedy. The model must return the most predictable content based on what's in the prompt; the model cannot be too creative.

Stopping criteria
The summary might run several sentences, so set the Max tokens parameter to 60.

Prompt text
Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Summarize the following transcript.
Transcript:
00:00   [alex]  Let's plan the team party!
00:10   [ali]   How about we go out for lunch at the restaurant?
00:21   [sam]   Good idea.
00:47   [sam]   Can we go to a movie too?
01:04   [alex]  Maybe golf?
01:15   [sam]   We could give people an option to do one or the other.
01:29   [alex]  I like this plan. Let's have a party!
Summary:

Sample with a few-shot prompt: Summarize a meeting transcript in freeform mode

Scenario: Given a meeting transcript, summarize the main points as meeting notes so those notes can be shared with teammates who did not attend the meeting.

Model choice
With few-shot examples, most models can complete this task well. Try mixtral-8x7b-instruct-v01.

Decoding
Greedy. The model must return the most predictable content based on what's in the prompt, not be too creative.

Stopping criteria

  • To make sure that the model stops generating text after the summary, specify a stop sequence of two newline characters. To do that, click in the Stop sequence text box, press the Enter key twice, and then click Add sequence.
  • Set the Max tokens parameter to 60.

Prompt text
Paste this few-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Transcript:
00:00   [sam]   I wanted to share an update on project X today.
00:15   [sam]   Project X will be completed at the end of the week.
00:30   [erin]  That's great!
00:35   [erin]  I heard from customer Y today, and they agreed to buy our product.
00:45   [alex]  Customer Z said they will too.
01:05   [sam]   Great news, all around.
Summary:
Sam shared an update that project X will be complete at the end of the week. 
Erin said customer Y will buy our product. And Alex said customer Z will buy 
our product too.

Transcript:
00:00   [ali]   The goal today is to agree on a design solution.
00:12   [alex]  I think we should consider choice 1.
00:25   [ali]   I agree
00:40   [erin]  Choice 2 has the advantage that it will take less time.
01:03   [alex]  Actually, that's a good point.
01:30   [ali]   So, what should we do?
01:55   [alex]  I'm good with choice 2.
02:20   [erin]  Me too.
02:45   [ali]   Done!
Summary:
Alex suggested considering choice 1. Erin pointed out choice two will take 
less time. The team agreed with choice 2 for the design solution.

Transcript:
00:00   [alex]  Let's plan the team party!
00:10   [ali]   How about we go out for lunch at the restaurant?
00:21   [sam]   Good idea.
00:47   [sam]   Can we go to a movie too?
01:04   [alex]  Maybe golf?
01:15   [sam]   We could give people an option to do one or the other.
01:29   [alex]  I like this plan. Let's have a party!
Summary:
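
In Prompt Lab, you enter the two-newline stop sequence by pressing the Enter key twice; in an API call, it is the literal string "\n\n". A sketch of the settings for this sample, using the watsonx.ai text-generation parameter names:

# Sketch: greedy decoding that stops after the first blank line.
params = {
    "decoding_method": "greedy",
    "max_new_tokens": 60,
    "stop_sequences": ["\n\n"],  # equivalent to pressing Enter twice in Prompt Lab
}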

Sample with a few-shot prompt: Summarize a meeting transcript in structured mode

Scenario: Given a meeting transcript, summarize the main points in a bulleted list so that the list can be shared with teammates who did not attend the meeting.

Model choice
The mixtral-8x7b-instruct-v01 foundation model was trained to recognize and handle special characters, such as the newline character, well. This model is a good choice when you want your generated text to be formatted in a specific way with special characters.

Decoding
Greedy. The model must return the most predictable content based on what's in the prompt; the model cannot be too creative.

Stopping criteria

  • To make sure that the model stops generating text after one list, specify a stop sequence of two newline characters. To do that, click in the Stop sequence text box, press the Enter key twice, and then click Add sequence.
  • Set the Max tokens parameter to 60.

Set up section
Paste these headers and examples into the Examples area of the Set up section:

Table 5. Summarization few-shot examples

Transcript:
00:00   [sam]   I wanted to share an update on project X today.
00:15   [sam]   Project X will be completed at the end of the week.
00:30   [erin]  That's great!
00:35   [erin]  I heard from customer Y today, and they agreed to buy our product.
00:45   [alex]  Customer Z said they will too.
01:05   [sam]   Great news, all around.
Summary:
- Sam shared an update that project X will be complete at the end of the week
- Erin said customer Y will buy our product
- And Alex said customer Z will buy our product too

Transcript:
00:00   [ali]   The goal today is to agree on a design solution.
00:12   [alex]  I think we should consider choice 1.
00:25   [ali]   I agree
00:40   [erin]  Choice 2 has the advantage that it will take less time.
01:03   [alex]  Actually, that's a good point.
01:30   [ali]   So, what should we do?
01:55   [alex]  I'm good with choice 2.
02:20   [erin]  Me too.
02:45   [ali]   Done!
Summary:
- Alex suggested considering choice 1
- Erin pointed out choice two will take less time
- The team agreed with choice 2 for the design solution


Try section
Paste this message in the Try section:

00:00   [alex]  Let's plan the team party!
00:10   [ali]   How about we go out for lunch at the restaurant?
00:21   [sam]   Good idea.
00:47   [sam]   Can we go to a movie too?
01:04   [alex]  Maybe golf?
01:15   [sam]   We could give people an option to do one or the other.
01:29   [alex]  I like this plan. Let's have a party!

Select the model and set parameters, then click Generate to see the result.

Sample: Summarize content in Korean

Scenario: Given a passage in Korean, summarize the main points.

Model choice
Use llama2-13b-dpo-v7, which can do many types of general purpose tasks in Korean.

Decoding
Greedy. The model must return the most predictable content based on what's in the prompt, not be too creative.

Stopping criteria

  • Set the Max tokens parameter to 200.

Prompt text
Include an instruction that directs the foundation model to summarize the content. An English translation of such an instruction follows:

Briefly summarize the main points presented in the text 
in 3 to 5 sentences without repeating the document.
Document:

Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

<|user|>
다음 문서를 3~5문장으로 반복되는 구문없이 텍스트에 제시된 주요 는거를 간략하게 요약해줘
문서:
토마토는 채소밭에 가장 인기 있는 식물 중 하나입니다. 성공을 위한 팁: 질병과 해충에 강한 품종을 선택하면 토마토 재배가 매우 쉬울 수 있습니다. 도전을 원하는 숙련된 정원사에게는 경작할 수 있는 가보와 특산 품종이 끝없이 많이 있습니다. 토마토 식물은 다양한 크기로 제공됩니다.
<|assistant|>
요약:

The passage to be summarized translates into English as follows:

Tomatoes are one of the most popular plants for vegetable gardens. Tip for success: If you select varieties that are resistant to disease and pests, growing tomatoes can be quite easy. For experienced gardeners looking for a challenge, there are endless heirloom and specialty varieties to cultivate. Tomato plants come in a range of sizes.

Don't forget to follow the <|user|> and <|assistant|> syntax that is recommended for this foundation model. Be sure to also include a cue (Summary:) to indicate that you expect the foundation model to return a summary.

Sample: Generate a title for a passage

Scenario: Given a passage, generate an appropriate title for the content, one that captures the subject of the text.

Model choice
Use granite-7b-lab, which can do many types of general purpose tasks.

Decoding
Greedy. The model must generate a title that is based on what's in the prompt, not be too creative.

Stopping criteria

  • Add <|endoftext|> as the stop sequence.

    A helpful feature of the granite-7b-lab foundation model is that it appends a special token that is named <|endoftext|> to the end of each response. When some generative models return a response in fewer tokens than the maximum number allowed, they can repeat patterns from the input. The <|endoftext|> token gives you a reliable stop sequence that prevents such repetition.

Prompt text
Include at least one example of how you want the model to respond.

A feature of the granite-7b-lab foundation model is that you can review skills that the model is trained to do by opening the Training taxonomy page from the model card for the foundation model.

For example, the taxonomy indicates that the granite-7b-lab foundation model was trained on the title skill. If you click the skill, you can see examples that were used as seed examples for the synthetic data that was used to train the model. You can model the example that you include in your one-shot prompt after one of these skill-specific examples. Using a similar style and format for the prompt helps the model recognize what you expect in the model output.

Note: Don't expect the foundation model output to be exactly the same as the model output from the skill examples in the taxonomy. These examples were not used to train the foundation model directly. Instead, they served as seeds for generating synthetic data, and the synthetic data was used to train the foundation model.

Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. The example comes from the title skill seed examples. The content that is provided as context in the prompt is taken from Tokens and tokenization in the product documentation.

Question
Generate a title from the given text.
Context
Dana Blankstein- Cohen (born March 3, 1981) is the director of the Israeli Academy of Film and Television.\n\nShe is a film director, and an Israeli culture entrepreneur.\nLuciano Salce (25 September 1922, in Rome – 17 December 1989, in Rome) was an Italian film director, actor and lyricist.\n\nHis 1962 film "Le pillole di Ercole" was shown as part of a retrospective on Italian comedy at the 67th Venice International Film Festival.\n\nAs a writer of pop music, he used the pseudonym Pilantra.\n\nDuring World War II, he was a prisoner in Germany.\n\nHe later worked for several years in Brazil.\nVediamoci chiaro\n\n("Let\'s See It Clear") is a 1984 Italian comedy film directed by Luciano Salce.\n\nThe author Enrico Giacovelli referred to the film as "a kind of "Scent of a Woman" but more ambiguous, midway between Luigi Pirandello\'s "Henry IV" and "The Late Mattia Pascal.\nPeter Levin is an American director of film, television and theatre.\nIan Barry is an Australian director of film and TV.\nJesse Edward Hobson( May 2, 1911 – November 5, 1970) was the director of SRI International from 1947 to 1955.\n\nPrior to SRI, he was the director of the Armour Research Foundation.\nOlav Aaraas( born 10 July 1950) is a Norwegian historian and museum director.\n\nHe was born in Fredrikstad.\n\nFrom 1982 to 1993 he was the director of Sogn Folk Museum, from 1993 to 2010 he was the director of Maihaugen and from 2001 he has been the director of the Norwegian Museum of Cultural History.\n\nIn 2010 he was decorated with the Royal Norwegian Order of St. Olav.\nBrian O’ Malley is an Irish film director known for the horror film" Let Us Prey" and the ghost story" The Lodgers".\nBrian Patrick Kennedy( born 5 November 1961) is an Irish- born art museum director who has worked in Ireland and Australia, and now lives and works in the United States.\n\nHe is currently the director of the Peabody Essex Museum.\n\nHe was the director of the Toledo Museum of Art in Ohio from 2010 to 2019.\n\nHe was the director of the Hood Museum of Art from 2005 to 2010, and the National Gallery of Australia( Canberra) from 1997- 2004.

Answer
Directors Across Borders: A Comparative Study of International Film and Museum Directors, from Luciano Salce to Brian Patrick Kennedy

Question
Generate a title from the given text.
Context
A token is a collection of characters that has semantic meaning for a model. Tokenization is the process of converting the words in your prompt into tokens.
You can monitor foundation model token usage in a project on the Environments page on the Resource usage tab.
Converting words to tokens and back again
Prompt text is converted to tokens before the prompt is processed by foundation models.
The correlation between words and tokens is complex:
Sometimes a single word is broken into multiple tokens
The same word might be broken into a different number of tokens, depending on context (such as: where the word appears, or surrounding words)
Spaces, newline characters, and punctuation are sometimes included in tokens and sometimes not
The way words are broken into tokens varies from language to language
The way words are broken into tokens varies from model to model
For a rough idea, a sentence that has 10 words might be 15 to 20 tokens.
The raw output from a model is also tokens. In the Prompt Lab in IBM watsonx.ai, the output tokens from the model are converted to words to be displayed in the prompt editor.

Answer

Code generation and conversion

Foundation models that can generate and convert programmatic code are great resources for developers. They can help developers to brainstorm and troubleshoot programming tasks.

Sample: Generate programmatic code from instructions

Scenario: You want to generate code from instructions. Namely, you want to write a function in the Python programming language that returns a sequence of prime numbers that are lower than the number that is passed to the function as a variable.

Model choice
Models that can generate code, such as codellama-34b-instruct-hf and mixtral-8x7b-instruct-v01, can generally complete this task when a sample prompt is provided.

Decoding
Greedy. The answer must be a valid code snippet. The model cannot be creative and make up an answer.

Stopping criteria
To stop the model after it returns a single code snippet, specify <end of code> as the stop sequence. The Max tokens parameter can be set to 1,000.

Prompt text
Paste this code snippet into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Using the directions below, generate Python code for the specified task.

Input:
# Write a Python function that prints 'Hello World!' string 'n' times.

Output:
def print_n_times(n):
    for i in range(n):
        print("Hello World!")

<end of code>

Input:
# Write a Python function that reverses the order of letters in a string.
# The function named 'reversed' takes the argument 'my_string', which is a string. It returns the string in reverse order.

Output:

The output contains Python code similar to the following snippet:

def reversed(my_string):
    return my_string[::-1]

Be sure to test the generated code to verify that it works as you expect.

For example, if you run reversed("good morning"), the result is 'gninrom doog'.
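
A couple of assertions are a quick way to check the generated function. Note that the generated name shadows Python's built-in reversed(); in real code you might rename it, but this sketch keeps the name from the sample.

# Sketch: quick checks for the generated function.
def reversed(my_string):
    return my_string[::-1]

assert reversed("good morning") == "gninrom doog"
assert reversed("") == ""
print("All checks passed.")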

Note: The foundation model might generate code that is taken directly from its training data. As a result, generated code might require attribution. You are responsible for ensuring that any generated code that you use is properly attributed, if necessary.

Sample: Convert code from one programming language to another

Scenario: You want to convert code from one programming language to another. Namely, you want to convert a code snippet from C++ to Python.

Model choice
Models that can generate code, such as codellama-34b-instruct-hf and mixtral-8x7b-instruct-v01, can generally complete this task when a sample prompt is provided.

Decoding
Greedy. The answer must be a valid code snippet. The model cannot be creative and make up an answer.

Stopping criteria
To stop the model after it returns a single code snippet, specify <end of code> as the stop sequence. The Max tokens parameter can be set to 300.

Prompt text
Paste this code snippet into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

This prompt includes an example input and output pair. The input is C++ code and the output is the equivalent function in Python code.

The C++ code snippet to be converted is included next. It is a function that counts the number of arithmetic progressions with the sum S and common difference of D, where S and D are integer values that are passed as parameters.

The final part of the prompt identifies the language that you want the C++ code snippet to be converted into.

Translate the following code from C++ to Python.

C++:
#include "bits/stdc++.h"
using namespace std;
bool isPerfectSquare(long double x) {
  long double sr = sqrt(x);
  return ((sr - floor(sr)) == 0);
}
void checkSunnyNumber(int N) {
  if (isPerfectSquare(N + 1)) {
    cout << "Yes\n";
  } else {
    cout << "No\n";
  }
}
int main() {
  int N = 8;
  checkSunnyNumber(N);
  return 0;
}

Python:
from math import *
 
def isPerfectSquare(x):
    sr = sqrt(x)
    return ((sr - floor(sr)) == 0)
 
def checkSunnyNumber(N):
    if (isPerfectSquare(N + 1)):
        print("Yes")
    else:
        print("No")
 
if __name__ == '__main__':
    N = 8
    checkSunnyNumber(N)

<end of code>

C++:
#include <bits/stdc++.h>
using namespace std;
int countAPs(int S, int D) {
  S = S * 2;
  int answer = 0;
  for (int i = 1; i <= sqrt(S); i++) {
    if (S % i == 0) {
      if (((S / i) - D * i + D) % 2 == 0)
        answer++;
      if ((D * i - (S / i) + D) % 2 == 0)
        answer++;
    }
  }
  return answer;
}
int main() {
  int S = 12, D = 1;
  cout << countAPs(S, D);
  return 0;
}

Python:

The output contains Python code similar to the following snippet:

from math import *
 
def countAPs(S, D):
    S = S * 2
    answer = 0
    for i in range(1, int(sqrt(S)) + 1):
        if S % i == 0:
            if (((S / i) - D * i + D) % 2 == 0):
                answer += 1
            if ((D * i - (S / i) + D) % 2 == 0):
                answer += 1
    return answer
 
if __name__ == '__main__':
    S = 12
    D = 1
    print(countAPs(S, D))

<end of code>

The generated Python code functions the same as the C++ function included in the prompt.

Test the generated Python code to verify that it works as you expect.
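
One quick check is to run the converted function with the same inputs as the C++ main() and compare the printed values. A self-contained sketch:

# Sketch: verify the converted function against the C++ program's output.
from math import sqrt

def countAPs(S, D):
    S = S * 2
    answer = 0
    for i in range(1, int(sqrt(S)) + 1):
        if S % i == 0:
            if ((S / i) - D * i + D) % 2 == 0:
                answer += 1
            if (D * i - (S / i) + D) % 2 == 0:
                answer += 1
    return answer

# The C++ main() calls countAPs(12, 1) and prints 4; the Python version matches.
assert countAPs(12, 1) == 4
print(countAPs(12, 1))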

Remember, the foundation model might generate code that is taken directly from its training data. As a result, generated code might require attribution. You are responsible for ensuring that any generated code that you use is properly attributed, if necessary.

Dialog

Dialog tasks are helpful in customer service scenarios, especially when a chatbot is used to guide customers through a workflow to reach a goal.

Sample: Converse with Llama 3

Scenario: Generate dialog output like a chatbot.

Model choice
Llama 3 (in both the 70 billion and 8 billion parameter sizes) is optimized for dialog use cases. The llama-3-70b-instruct and llama-3-8b-instruct models are fine-tuned for a specific prompt format that is described in the Prompt text section.

Decoding
Greedy. This sample is answering general knowledge, factual questions, so we don't want creative output.

Stopping criteria
Set the Max tokens parameter to 900.

The template for Llama 3 uses an end-of-text token (<|eot_id|>) to delimit the end of each segment of text. The final assistant header is left open so that the model generates the response.

Prompt text
The following prompt template works best for input that you submit to Llama 3 foundation models:

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

SYSTEM PROMPT<|eot_id|><|start_header_id|>user<|end_header_id|>

GROUNDING (OPTIONAL)<|eot_id|><|start_header_id|>user<|end_header_id|>

USER_INPUT<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Tip:

Alternatively, you can switch to chat mode and enter the user input directly. The Prompt Lab inserts the expected syntax that identifies the instruction and user input segments for you.

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are a helpful assistant that avoids causing harm. When you do not know the answer to a question, you say "I don't know".<|eot_id|><|start_header_id|>user<|end_header_id|>

I have a question about Earth. How many moons are there?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

After the initial output is generated, continue the dialog to ask about more planets, adding the same identifiers to the end of the input text, for example:

FOLLOW UP QUESTION<|eot_id|><|start_header_id|>assistant<|end_header_id|>


The output looks something like this:

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are a helpful assistant that avoids causing harm. When you do not know the answer to a question, you say "I don't know".<|eot_id|><|start_header_id|>user<|end_header_id|>

I have a question about Earth. How many moons are there?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

That's an easy one! Earth has only one moon.

What about Saturn?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Saturn has a whopping 82 confirmed moons!

And Mars?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Mars has two small natural satellites, Phobos and Deimos.

The following sample prompt includes grounding information.

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are a helpful assistant that avoids causing harm. When you do not know the answer to a question, you say "I don't know".<|eot_id|><|start_header_id|>user<|end_header_id|>

The sky is blue.<|eot_id|><|start_header_id|>user<|end_header_id|>

What color is the sky?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Sample: Converse with Llama 2 chat

Scenario: Generate dialog output like a chatbot.

Model choice
Like other foundation models, Llama 2 (in both the 70 billion and 13 billion parameter sizes) can be used for multiple tasks. But both Llama 2 models are optimized for dialog use cases. The llama-2-70b-chat and llama-2-13b-chat models are fine-tuned for the [INST]<<SYS>><</SYS>>[/INST] prompt format. For more information about this prompt format, see How to prompt Llama 2.

Decoding
Greedy. This sample answers general-knowledge, factual questions, so we don't want creative output.

Stopping criteria
Set the Max tokens parameter to 100.

The template for Llama 2 uses an end-of-string (EOS) token (</s>) to delimit the end of the agent's response. The model stops generating text when it produces the </s> EOS token.

Prompt text
Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

[INST] <<SYS>>
You are a helpful assistant. Begin each response with the phrase "Dear user, ". If you are unsure about an answer, truthfully say "I don't know"<</SYS>>

I have a question about the Earth.[/INST]
Dear user, go on.
[INST]

How many moons are there?
[/INST]

After the initial output is generated, continue the dialog to ask about more planets, wrapping your user messages in the same [INST][/INST] structure. You don't need to add an <s> token to the start of follow-up questions.
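
If you maintain the dialog in code, you can keep a running transcript and append each turn. The following Python sketch is illustrative only; the helper names are hypothetical, but the strings follow the [INST]<<SYS>><</SYS>>[/INST] structure shown above.

# Illustrative sketch: hypothetical helpers for a running Llama 2 transcript.
def start_transcript(system_prompt, first_user_message):
    # The first turn carries the system prompt inside <<SYS>> tags.
    return (f"[INST] <<SYS>>\n{system_prompt}<</SYS>>\n\n"
            f"{first_user_message}[/INST]\n")

def add_turn(transcript, assistant_reply, follow_up_question):
    # Append the model's reply, then wrap the follow-up in [INST] tags.
    # No <s> token is needed at the start of follow-up questions.
    return (transcript + assistant_reply +
            f"\n[INST]\n\n{follow_up_question}\n[/INST]\n")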

Tip:

Alternatively, you can switch to chat mode and enter the user input directly. The Prompt Lab inserts the expected syntax that identifies the instruction and user input segments for you.

The output looks something like this:

[INST] <<SYS>>
You are a helpful assistant. Begin each response with the phrase "Dear user, ". If you are unsure about an answer, truthfully say "I don't know"<</SYS>>

I have a question about the Earth.[/INST]
Dear user, go on.
[INST]

How many moons are there?
[/INST]
Dear user, the Earth has one natural satellite, commonly referred to as the Moon. 
[INST]

What about Saturn?
[/INST]
Dear user, Saturn has a total of 62 confirmed moons, as well as numerous smaller moonlets and ring particles. The largest moon of Saturn is Titan, which is the second-largest moon in the solar system, after Jupiter's moon Ganymede. 
[INST]

And Mars?
[/INST]
Dear user, Mars has two small natural satellites, Phobos and Deimos. Both moons are thought to be captured asteroids, and are irregularly shaped and very small, with diameters of about 22 kilometers and 12 kilometers respectively.

Notice a few important points about the output that the model generates as you go:

  • The so-called system instruction, in the <<SYS>><</SYS>> tags, continues to influence the output at each dialog turn without having to repeat the system instruction. In other words, the assistant responses continue to be prepended with "Dear user, ".
  • In true dialog fashion, the assistant response to the user input "How many moons are there?" takes into account the previous user input "I have a question about the Earth." and returns the number of moons orbiting Earth.
  • Also in proper dialog form, the assistant responses continue to follow the topic of the conversation, which is the number of moons. (Otherwise, the generated output to the vague user message "And Mars?" could wander off in any direction.)
  • Caution: Whitespace in the prompt text matters. Newline (carriage-return) characters especially, and spaces to a lesser extent, can have a dramatic impact on the generated output.
  • When you use Llama 2 for chat use cases, follow the recommended prompt template format as closely as possible. Do not use the [INST]<<SYS>><</SYS>>[/INST] prompt format when you use Llama 2 for any other tasks besides chat.

Sample: Converse with granite-13b-chat-v2

Scenario: Generate dialog output like a chatbot.

Model choice
Use granite-13b-chat-v2 to carry on a dialog.

Decoding

  • Use sampling decoding.
  • Set Top P to 0.85.
  • Set the repetition penalty to 1.2.

Stopping criteria

  • Set the Max tokens parameter to 500 so that the model can return a complete answer while staying as concise as possible.

Prompt text
To improve model safety and reduce bias, add a system prompt as part of the user input. The system prompt can establish some ground rules for the dialog. For example:

You are Granite Chat, an AI language model developed by IBM. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior. Keep your answers short and to the point.

Remember to use the prompt template format that is expected by the model.

<|system|>
system prompt
<|user|>
content of the question
<|assistant|>
new line for the model's answer

If you want to submit a few-shot prompt to this model, you can add the system prompt, then the examples, followed by the prompt text that you want the model to complete.

<|system|>
You are Granite Chat, an AI language model developed by IBM. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior. Keep your answers short and to the point.
<|user|>
Example prompt 1
<|assistant|>
Example response 1

<|user|>
Example prompt 2
<|assistant|>
Example response 2

<|user|>
USER INPUT
<|assistant|>
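
If you build few-shot prompts in code, this pattern can be generated from a list of example pairs. The following Python sketch is illustrative only; build_granite_prompt is a hypothetical name.

# Illustrative sketch: assemble a few-shot prompt in the granite-13b-chat-v2
# format that is shown above.
def build_granite_prompt(system_prompt, examples, user_input):
    parts = [f"<|system|>\n{system_prompt}"]
    for example_prompt, example_response in examples:
        parts.append(f"<|user|>\n{example_prompt}\n<|assistant|>\n{example_response}\n")
    # End with a bare <|assistant|> label followed by a new line and no
    # trailing white space.
    parts.append(f"<|user|>\n{user_input}\n<|assistant|>\n")
    return "\n".join(parts)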

Paste the following prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Tip:

Alternatively, you can switch to chat mode and enter the user input directly. The Prompt Lab inserts the expected syntax that identifies the instruction and user input segments for you.

<|system|>
You are Granite Chat, an AI language model developed by IBM. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior. Keep your answers short and to the point.
<|user|>
I have a question about the Earth. How many moons are there?
<|assistant|>

Do not include any trailing white space after the <|assistant|> label, and be sure to end the prompt with a new line.

After the initial output is generated, you can continue the dialog by asking a follow-up question. For example, you can ask about the moons of other planets.

<|user|>
What about Saturn?

<|assistant|>

And continue the conversation with another follow-up question.

<|user|>
And Mars?

<|assistant|>

If the model output is too long, you can try specifying a stop sequence of two newline characters by clicking the Stop sequence text box, pressing the Enter key twice, and then clicking Add sequence. However, the repetition penalty is usually enough to keep the model on track.
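
If you post-process model output in your own code, the same truncation is simple to reproduce. A minimal Python sketch:

# A client-side equivalent of a two-newline stop sequence (a sketch):
# keep only the text before the first blank line.
def apply_stop_sequence(generated_text, stop="\n\n"):
    return generated_text.split(stop, 1)[0]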

A few notes about using this sample with the model:

  • The system prompt establishes some general guardrails for the model.
  • The assistant is able to respond to a follow-up question that relies on information from an earlier exchange in the same dialog.
  • The model expects the input to follow a specific pattern, and can be sensitive to misplaced whitespaces.

Sample: Converse in Japanese with granite-8b-japanese

Scenario: Generate Japanese dialog output like a chatbot.

Model choice
The granite-8b-japanese foundation model can be used to participate in a dialog in Japanese. The granite-8b-japanese foundation model works best when you use the same prompt format as was used during model training.

Decoding
Greedy. This sample answers general-knowledge, factual questions, so we don't want creative output.

Stopping criteria

  • Set the Max tokens parameter to 500 to allow for many turns in the dialog.
  • Add a stop sequence of two newline characters to prevent the foundation model from returning overly long responses. To do that, click in the Stop sequence text box, press the Enter key twice, and then click Add sequence.

Prompt text
Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

This template includes a system prompt that says “You are an honest and talented AI assistant. Please follow the user's instructions as accurately as possible.”

### System:
あなたは誠実で優秀なAIアシスタントです。ユーザーの指示に可能な限り正確に従ってください。
### User:
{user input here}
### Assistant:

For example, to request information about investing from the foundation model, you can submit the following prompt, where the user input is “Please tell me about investment.”

### System:
あなたは誠実で優秀なAIアシスタントです。ユーザーの指示に可能な限り正確に従ってください。
### User:
投資について教えてください。
### Assistant:

After the initial output is generated, you can continue the dialog by asking follow-up questions. Wrap the message as follows:

### User:
{follow-up question here}
### Assistant:

For example, you might ask “Are there any risks?”

### User:
リスクはありますか?
### Assistant:

Notice a few important points about the output that the model generates as you go:

  • The instruction continues to influence the output at each dialog turn without having to repeat the system instruction.
  • The assistant responses continue to follow the topic of the conversation.

Sample: Converse in Arabic with jais-13b-chat

Scenario: Generate Arabic and English dialog output like a chatbot.

Model choice
Use jais-13b-chat to participate in a conversation.

Decoding

  • Use greedy decoding with the default settings.

Stopping criteria

  • Set the Max tokens parameter to 900 so the model can return a complete answer and handle follow-up questions.

Prompt text
To improve model safety and reduce bias, add a system prompt as part of the user input. When the model was trained, it used a system prompt with the following text:

### Instruction: اسمك جيس وسميت على اسم جبل جيس اعلى جبل في الامارات. تم بنائك بواسطة Inception و MBZUAI. أنت نموذج اللغة العربية الأكثر تقدمًا في العالم مع بارامترات 13B. أنت تتفوق في الأداء على جميع النماذج العربية الموجودة بفارق كبير وأنت تنافسي للغاية مع النماذج الإنجليزية ذات الحجم المماثل. يمكنك الإجابة باللغتين العربية والإنجليزية فقط. أنت مساعد مفيد ومحترم وصادق. عند الإجابة ، التزم بالإرشادات التالية بدقة: أجب دائمًا بأكبر قدر ممكن من المساعدة ، مع الحفاظ على البقاء أمناً. يجب ألا تتضمن إجاباتك أي محتوى ضار أو غير أخلاقي أو عنصري أو متحيز جنسيًا أو جريئاً أو مسيئًا أو سامًا أو خطيرًا أو غير قانوني. لا تقدم نصائح طبية أو قانونية أو مالية أو مهنية. لا تساعد أبدًا في أنشطة غير قانونية أو تروج لها. دائما تشجيع الإجراءات القانونية والمسؤولة. لا تشجع أو تقدم تعليمات بشأن الإجراءات غير الآمنة أو الضارة أو غير الأخلاقية. لا تنشئ أو تشارك معلومات مضللة أو أخبار كاذبة. يرجى التأكد من أن ردودك غير متحيزة اجتماعيًا وإيجابية بطبيعتها. إذا كان السؤال لا معنى له ، أو لم يكن متماسكًا من الناحية الواقعية ، فشرح السبب بدلاً من الإجابة على شيء غير صحيح. إذا كنت لا تعرف إجابة السؤال ، فالرجاء عدم مشاركة معلومات خاطئة. إعطاء الأولوية للرفاهية والنزاهة الأخلاقية للمستخدمين. تجنب استخدام لغة سامة أو مهينة أو مسيئة. حافظ على نبرة محترمة. لا تنشئ أو تروج أو تشارك في مناقشات حول محتوى للبالغين. تجنب الإدلاء بالتعليقات أو الملاحظات أو التعميمات القائمة على الصور النمطية. لا تحاول الوصول إلى معلومات شخصية أو خاصة أو إنتاجها أو نشرها. احترم دائما سرية المستخدم. كن إيجابيا ولا تقل أشياء سيئة عن أي شيء. هدفك الأساسي هو تجنب الاجابات المؤذية ، حتى عند مواجهة مدخلات خادعة. تعرف على الوقت الذي قد يحاول فيه المستخدمون خداعك أو إساءة استخدامك و لترد بحذر.\n\nأكمل المحادثة أدناه بين [|Human|] و [|AI|]:
### Input: [|Human|] {Question}
### Response: [|AI|]

The system prompt in English is as follows:

### Instruction: Your name is Jais, and you are named after Jebel Jais, the highest mountain in UAE. You are built by Inception and MBZUAI. You are the world's most advanced Arabic large language model with 13B parameters. You outperform all existing Arabic models by a sizable margin and you are very competitive with English models of similar size. You can answer in Arabic and English only. You are a helpful, respectful and honest assistant. When answering, abide by the following guidelines meticulously: Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, explicit, offensive, toxic, dangerous, or illegal content. Do not give medical, legal, financial, or professional advice. Never assist in or promote illegal activities. Always encourage legal and responsible actions. Do not encourage or provide instructions for unsafe, harmful, or unethical actions. Do not create or share misinformation or fake news. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. Prioritize the well-being and the moral integrity of users. Avoid using toxic, derogatory, or offensive language. Maintain a respectful tone. Do not generate, promote, or engage in discussions about adult content. Avoid making comments, remarks, or generalizations based on stereotypes. Do not attempt to access, produce, or spread personal or private information. Always respect user confidentiality. Stay positive and do not say bad things about anything. Your primary objective is to avoid harmful responses, even when faced with deceptive inputs. Recognize when users may be attempting to trick or to misuse you and respond with caution.\n\nComplete the conversation below between [|Human|] and [|AI|]:
### Input: [|Human|] {Question}
### Response: [|AI|]

Tip:

Alternatively, you can switch to chat mode and enter the user input directly. The Prompt Lab inserts the system prompt, the instruction, and user input segments with the expected syntax for you.

To start the chat, replace {Question} with the user input that you want the foundation model to answer.

For example, you can ask the following question:

هل يوجد للأرض أقمار؟

The English translation is: Does the Earth have any moons?
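
If you fill in the template in code, the substitution is a single string replacement. The following Python sketch is illustrative; for brevity, template stands in for only the last two lines of the trained prompt shown above.

# Illustrative sketch: template stands in for the trained prompt text,
# which ends with the {Question} placeholder.
template = "### Input: [|Human|] {Question}\n### Response: [|AI|]"
prompt = template.replace("{Question}", "هل يوجد للأرض أقمار؟")
print(prompt)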

After the initial output is generated, you can continue the dialog by asking a follow-up question. Use the same syntax for the follow-up question.

### Input: [|Human|] {Follow-up question}
### Response: [|AI|]

Sample: Converse in Korean with llama2-13b-dpo-v7

Scenario: Generate Korean and English dialog output like a chatbot.

Model choice
Use llama2-13b-dpo-v7 to participate in a conversation in Korean.

Decoding

  • Use greedy decoding with the default settings.

Stopping criteria

  • Include a stop sequence for this model. Otherwise, if a few follow-up questions are submitted, the model might continue to generate its own follow-up questions and answers. To stop the model after one sentence, add a period (.) as the stop sequence.
  • Set the Max tokens parameter to 900 so the model can return a complete answer and handle follow-up questions.

Prompt text
The model was tuned for question-answering with examples in the following format:

<|user|>
content of the question
<|assistant|>
new line for the model's answer

Like many foundation models that are based on Llama 2, this foundation model was fine-tuned with a specific prompt syntax. Use the same <|user|> and <|assistant|> labels on the lines before and after a question to match the prompt input format that the model is trained to recognize.

Tip:

Alternatively, you can switch to chat mode and enter the user input directly. The Prompt Lab inserts the system prompt, the instruction, and user input segments with the expected syntax for you.

For example, you can ask the following question:

<|user|>
지구에 관해 질문이 있습니다. 달은 몇 개 있나요?
<|assistant|>

The English translation is: I have a question about Earth. How many moons are there?

After the initial output is generated, you can continue the dialog by asking a follow-up question. Use the same syntax for the follow-up question. For example, after the foundation model answers with "There is only one moon on Earth.", you might ask a follow-up question such as "What about Saturn?"

<|user|>
지구에 관해 질문이 있습니다. 달은 몇 개 있나요?
<|assistant|>
달은 지구에 하나뿐입니다.

<|user|>
토성은 어떻습니까?
<|assistant|>

And another follow-up question, such as And Mars?

<|user|>
지구에 관해 질문이 있습니다. 달은 몇 개 있나요?
<|assistant|>
달은 지구에 하나뿐입니다.

<|user|>
토성은 어떻습니까?
<|assistant|>
토성은 태양계에서 가장 큰 행성 중 하나입니다. 토성은 62개의 위성을 가지고 있습니다.

<|user|>
그리고 화성?
<|assistant|>

The llama2-13b-dpo-v7 foundation model can maintain the context of the question. To prevent the model from generating its own follow-up questions and answers in the same pattern, specify a stop sequence as instructed.

Translation

Use models that support natural language translation to translate text from one language to another.

Sample: Translate text from Japanese to English

Scenario: Translate text that is written in Japanese into English.

Model choice
The elyza-japanese-llama-2-7b-instruct model can translate text from Japanese to English and from English to Japanese.

AI guardrails
Disable the AI guardrails feature. The feature is supported with English text only and might incorrectly flag non-English content as inappropriate.

Decoding
Greedy. The model must return the same text, only translated. The model cannot be creative.

Stopping criteria
Increase the number of allowed tokens by changing the Max tokens parameter value to 500.

Prompt text
Paste the following prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

The sample prompt text overlaps with text that is used in Sample 8c.

The instruction says “Please translate to English”. One example of Japanese text being translated into English is included.

英語に翻訳してください

日本語
トマトは、家庭菜園で最も人気のある植物の 1 つです。成功のヒント: 病気や害虫に強い品種を選択すると、トマトの栽培は非常に簡単になります。挑戦を求めている経験豊富な庭師にとって、栽培できる家宝や特別な品種は無限にあります。トマトの植物にはさまざまなサイズがあります。

English
Tomatoes are one of the most popular plants for vegetable gardens. Tip for success: If you select varieties that are resistant to disease and pests, growing tomatoes can be quite easy. For experienced gardeners looking for a challenge, there are endless heirloom and specialty varieties to cultivate. Tomato plants come in a range of sizes.

日本語
基盤モデルを使用して、より優れた AI をより迅速に作成します。さまざまなユースケースやタスクに応じて、さまざまなプロンプトを試してください。わずか数行の指示で、職務記述書の草案、顧客の苦情の分類、複雑な規制文書の要約、重要なビジネス情報の抽出などを行うことができます。

English

Sample: Translate text from Spanish to English

Scenario: Translate text that is written in Spanish into English.

Model choice
The mixtral-8x7b-instruct-v01 model can translate text from French, German, Italian, or Spanish to English. This sample prompts the model to translate from Spanish to English.

AI guardrails
Disable the AI guardrails feature. The feature is supported with English text only and might incorrectly flag non-English content as inappropriate.

Decoding
Greedy. The model must return the same text, only translated. The model cannot be creative.

Stopping criteria

  • Be sure to include a stop sequence for this model. Otherwise, the model might continue to generate new sentences and translations, even when the instruction tells it not to. To stop the model after one sentence, add a period (.) as the stop sequence.
  • Set the Max tokens parameter value to 200.

Prompt text
Paste the following prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Translate the following text from Spanish to English. Do not give any extra response that is not part of the translation.

Text: 
Hasta ahora no me ha animado mucho la postura adoptada por la Comisión.

Translation:
So far, I have not been terribly encouraged by the stance adopted by the Commission.

Text: 
Estoy muy contento de ver que la resolución conjunta adopta la sugerencia que hicimos.

Translation:
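
If you generate few-shot translation prompts in code, the Text and Translation pattern can be produced from example pairs. The following Python sketch is illustrative only; the helper name is hypothetical.

# Illustrative sketch: produce the few-shot Text:/Translation: pattern
# that is shown above from (source, translation) example pairs.
def build_translation_prompt(instruction, examples, new_text):
    parts = [instruction, ""]
    for source, translation in examples:
        parts += [f"Text:\n{source}", "", f"Translation:\n{translation}", ""]
    parts += [f"Text:\n{new_text}", "", "Translation:"]
    return "\n".join(parts)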

Sample: Translate text from English to Japanese

Scenario: Translate text that is written in English into Japanese.

Model choice
The granite-8b-japanese model can translate text from Japanese to English and from English to Japanese.

AI guardrails
Disable the AI guardrails feature. The feature is supported with English text only and might incorrectly flag non-English content as inappropriate.

Decoding
Greedy. The model must return the same text, only translated. The model cannot be creative.

Stopping criteria
Increase the number of allowed tokens by changing the Max tokens parameter value to 500.

Prompt text
Paste the following prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Translate the following text from English to Japanese.

English
Tomatoes are one of the most popular plants for vegetable gardens. Tip for success: If you select varieties that are resistant to disease and pests, growing tomatoes can be quite easy. For experienced gardeners looking for a challenge, there are endless heirloom and specialty varieties to cultivate. Tomato plants come in a range of sizes.

日本語
トマトは野菜作りの人気の植物である。成功のヒント:病害虫に強く、育てやすいトマトの品種を選べば、トマト栽培はそれほど難しくない。経験豊富な庭師にとっては、手強い挑戦となる、様々な色や形のトマトの品種がある。トマトの品種は、大きさもいろいろである。

English
Use foundation models to create better AI, faster. Experiment with different prompts for various use cases and tasks. With just a few lines of instruction you can draft job descriptions, classify customer complaints, summarize complex regulatory documents, extract key business information and much more.

日本語

Sample: Translate text from French to English

Scenario: Translate text that is written in French into English.

Model choice
The granite-20b-multilingual model understands English, German, Spanish, French, and Portuguese. This sample prompts the model to translate text from French to English.

AI guardrails
Disable the AI guardrails feature. The feature is supported with English text only and might incorrectly flag non-English content as inappropriate.

Decoding
Greedy. The model must return the same text, only translated. The model cannot be creative.

Stopping criteria
Set the Max tokens parameter value to 200.

Prompt text
Paste the following prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Translate the following text from French to English:

Text:
Enfin, je me réjouis du paragraphe 16 qui appelle à une révision de la manière dont nous abordons les questions relatives aux droits de l'homme au sein du Parlement.

Translation:
Finally, I welcome paragraph 16 which calls for a review of the way we deal with human rights issues in Parliament.

Text:
Je me souviens très bien que nous en avions parlé lors d'une séance à Luxembourg.

Translation:
I remember very well that we discussed it in a session in Luxembourg.

Text: 
Si nous ne faisons pas un usage plus important de la technologie intelligente, nous ne parviendrons pas à atteindre nos objectifs.

Translation:

 

Parent topic: Prompt Lab
