Sample foundation model prompts for common tasks
Last updated: Dec 04, 2024

Try these samples to learn how different prompts can guide foundation models to do common tasks.

How to use this topic

Explore the sample prompts in this topic:

  • Copy and paste the prompt text and input parameter values into the Prompt Lab in IBM watsonx.ai
  • See what text is generated.
  • See how different models generate different output.
  • Change the prompt text and parameters to see how results vary.

There is no one right way to prompt foundation models. But patterns have been found, in academia and industry, that work fairly reliably. Use the samples in this topic to build your skills and your intuition about prompt engineering through experimentation.


This video provides a visual way to learn the concepts and tasks in this documentation.


Video chapters
[ 0:11 ] Introduction to prompts and Prompt Lab
[ 0:33 ] Key concept: Everything is text completion
[ 1:34 ] Useful prompt pattern: Few-shot prompt
[ 1:58 ] Stopping criteria: Max tokens, stop sequences
[ 3:32 ] Key concept: Fine-tuning
[ 4:32 ] Useful prompt pattern: Zero-shot prompt
[ 5:32 ] Key concept: Be flexible, try different prompts
[ 6:14 ] Next steps: Experiment with sample prompts


Samples overview

You can find samples that prompt foundation models to complete the following tasks: classification, extraction, generation, question answering (QA), summarization, coding, dialog, and translation.

The following table shows the foundation models that are used in the task-specific samples, and the tasks that each model supports in those samples.

Table 1. Models used in samples for certain tasks
Model: Tasks with samples
granite-13b-chat-v2: Dialog
granite-13b-instruct-v2: Generation, QA
granite-7b-lab: Summarization
granite-8b-japanese: QA, Dialog, Translation
granite-20b-multilingual: Translation
Granite Instruct: QA, Summarization
Granite Guardian: Classification
Granite Code: Coding
allam-1-13b-instruct: Classification, Translation
codellama-34b-instruct-hf: Coding
elyza-japanese-llama-2-7b-instruct: Classification, Translation
flan-t5-xxl-11b: Classification, QA, Summarization
flan-ul2-20b: Classification, Extraction, QA, Summarization
jais-13b-chat: Dialog
Llama 3.2 instruct: Coding, Dialog
llama-guard-3-11b-vision: Classification
Llama 3.1 instruct: Dialog
Llama 3 instruct: Dialog
Llama 2 chat: Dialog
mistral-large: Classification, Extraction, QA, Summarization, Translation
mixtral-8x7b-instruct-v01: Classification, Extraction, Generation, QA, Summarization, Coding, Translation
mt0-xxl-13b: Classification, QA
pixtral-12b: See the example in the product documentation

The following table summarizes the available sample prompts.

Table 2. List of sample prompts
Scenario Prompt editor Prompt format Model Decoding Notes
Sample with a zero-shot prompt: Classify a message Freeform Zero-shot • mt0-xxl-13b
• flan-t5-xxl-11b
• flan-ul2-20b
• mixtral-8x7b-instruct-v01
Greedy • Uses the class names as stop sequences to stop the model after it prints the class name
Sample with a few-shot prompt: Classify a message in freeform mode Freeform Few-shot • mistral-large
• mixtral-8x7b-instruct-v01
Greedy • Uses the class names as stop sequences
Sample of classifying the safety of prompt input with Granite Freeform Custom system prompt • Granite Guardian models Greedy • Returns a 'Yes' or 'No' response depending on whether the content is harmful.
Sample of classifying the safety of prompt input Freeform Custom system prompt • llama-guard-3-11b-vision Greedy • Returns the classes safe or unsafe. If the content is unsafe, also returns the category of violation.
Sample with a few-shot prompt: Classify a message in structured mode Structured Few-shot • mistral-large
• mixtral-8x7b-instruct-v01
Greedy • Uses the class names as stop sequences
Sample: Classify a Japanese message Freeform Few-shot • elyza-japanese-llama-2-7b-instruct Greedy • Uses the class names as stop sequences
Sample: Classify an Arabic message Freeform Few-shot • allam-1-13b-instruct Greedy • Uses the class names as stop sequences
Sample: Extract details from a complaint Freeform Zero-shot • flan-ul2-20b Greedy
Sample: Extract and classify details from a passage Freeform Zero-shot • mistral-large
• mixtral-8x7b-instruct-v01
Greedy
Sample: Generate a numbered list on a theme in freeform mode Freeform Few-shot • mixtral-8x7b-instruct-v01 Sampling • Generates formatted output
• Uses two newline characters as a stop sequence to stop the model after one list
Sample: Generate a numbered list on a theme in structured mode Structured Few-shot • mixtral-8x7b-instruct-v01 Sampling • Generates formatted output.
• Uses two newline characters as a stop sequence
Sample: Generate a numbered list on a particular theme with Granite Freeform Zero-shot • granite-13b-instruct-v2 Greedy • Generates formatted output
Sample: Answer a question based on an article in freeform mode Freeform Zero-shot • mt0-xxl-13b
• flan-t5-xxl-11b
• flan-ul2-20b
• mixtral-8x7b-instruct-v01
Greedy • Uses a period "." as a stop sequence to cause the model to return only a single sentence
Sample: Answer a question based on an article in structured mode Structured Zero-shot • mt0-xxl-13b
• flan-t5-xxl-11b
• flan-ul2-20b
• mixtral-8x7b-instruct-v01
Greedy • Uses a period "." as a stop sequence
• Generates results for multiple inputs at once
Sample: Answer a question based on a document with Granite Freeform Zero-shot • granite-13b-instruct-v2 Greedy
Sample: Answer a question based on multiple documents with Granite 3.0 Freeform Zero-shot • Granite Instruct models Greedy
Sample: Answer general knowledge questions Freeform Zero-shot • granite-13b-instruct-v2 Greedy
Sample: Answer general knowledge questions in Japanese Freeform Zero-shot • granite-8b-japanese Greedy
Sample: Answer a question using complex reasoning in freeform mode Freeform One-shot • mistral-large Greedy • Uses two newline characters as a stop sequence
Sample zero-shot prompt: Summarize a meeting transcript Freeform Zero-shot • flan-t5-xxl-11b
• flan-ul2-20b
• mixtral-8x7b-instruct-v01
Greedy
Sample few-shot prompt: Summarize a meeting transcript in freeform mode Freeform Few-shot • mixtral-8x7b-instruct-v01
• mistral-large
Greedy
Sample few-shot prompt: Summarize a meeting transcript in freeform mode with Granite 3.0 Freeform Few-shot • Granite Instruct models Greedy • Uses <|end_of_text|> as a stop sequence
Sample few-shot prompt: Summarize a meeting transcript in structured mode Structured Few-shot • mixtral-8x7b-instruct-v01 Greedy • Generates formatted output
• Uses two newline characters as a stop sequence to stop the model after one list
Sample: Generate a title for a passage Freeform One-shot • granite-7b-lab Greedy • Uses a special token that is named <|endoftext|> as a stop sequence.
Sample: Generate programmatic code from instructions Freeform Few-shot • mixtral-8x7b-instruct-v01
• codellama-34b-instruct-hf
Greedy • Generates programmatic code as output
• Uses <end of code> as a stop sequence
Sample: Generate programmatic code from instructions with a zero-shot prompt Freeform Zero-shot • llama-3-2-1b-instruct
• llama-3-2-3b-instruct
Greedy • Generates programmatic code as output
• Uses a custom template
Sample: Convert code from one programming language to another Freeform Few-shot • mixtral-8x7b-instruct-v01
• codellama-34b-instruct-hf
Greedy • Generates programmatic code as output
• Uses <end of code> as a stop sequence
Sample: Generate programmatic code from instructions with Granite Freeform Few-shot • Granite Code models Greedy • Generates programmatic code as output
Sample: Convert code from one programming language to another with Granite Freeform Few-shot • Granite Code models Greedy • Generates programmatic code as output
Sample: Converse with Llama 3 Freeform Custom structure • llama-3-2-1b-instruct
• llama-3-2-3b-instruct
• llama-3-1-8b-instruct
• llama-3-405b-instruct
• llama-3-8b-instruct
• llama-3-70b-instruct
Greedy • Generates dialog output like a chatbot
• Uses a model-specific prompt format
Sample: Converse with Llama 2 Freeform Custom structure • llama-2 chat Greedy • Generates dialog output like a chatbot
• Uses a model-specific prompt format
Sample: Converse with granite-13b-chat-v2 Freeform Custom structure • granite-13b-chat-v2 Greedy • Generates dialog output like a chatbot
• Uses a system prompt to establish guardrails for the dialog
Sample: Converse in Japanese with granite-8b-japanese Freeform Custom structure • granite-8b-japanese Greedy • Generates Japanese dialog output like a chatbot
• Uses a model-specific prompt format
Sample: Converse in Arabic with jais-13b-chat Freeform Custom structure • jais-13b-chat Greedy • Generates English or Arabic dialog output like a chatbot
• Uses a model-specific prompt format
Sample: Translate text from Japanese to English Freeform Zero-shot • elyza-japanese-llama-2-7b-instruct Greedy • Translates text from Japanese to English
Sample: Translate text from Spanish to English Freeform Few-shot • mixtral-8x7b-instruct-v01
• mistral-large
Greedy • Translates text from Spanish to English
Sample: Translate text from English to Japanese Freeform Zero-shot • granite-8b-japanese Greedy • Translates text from English to Japanese
Sample: Translate text from French to English Freeform Few-shot • granite-20b-multilingual Greedy • Translates text from French to English
Sample: Translate text from English to Arabic Freeform Few-shot • allam-1-13b-instruct Greedy • Translates text from English to Arabic

Classification

Classification is useful for sorting data into distinct categories. Classifications can be binary, with two classes of data, or multi-class. A classification task is useful for categorizing information, such as customer feedback, so that you can manage or act on the information more efficiently.

Sample with a zero-shot prompt: Classify a message

Scenario: Given a message that is submitted to a customer-support chatbot for a cloud software company, classify the customer's message as either a question or a problem. Depending on the class assignment, the chat is routed to the correct support team for the issue type.

Model choice

Models that are instruction-tuned can generally complete this task with this sample prompt.

Suggestions: mt0-xxl-13b, flan-t5-xxl-11b, flan-ul2-20b, mistral-large, or mixtral-8x7b-instruct-v01

Decoding

Greedy. The model must return one of the specified class names; it cannot be creative and make up new classes.

Stopping criteria

  • Specify two stop sequences: "Question" and "Problem". After the model generates either of those words, it should stop.
  • With such short output, the Max tokens parameter can be set to 5.

Prompt text

Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Classify this customer message into one of two classes: Question, Problem.

Class name: Question
Description: The customer is asking a technical question or a how-to question 
about our products or services.

Class name: Problem
Description: The customer is describing a problem they are having. They might 
say they are trying something, but it's not working. They might say they are 
getting an error or unexpected results.

Message: I'm having trouble registering for a new account.
Class name: 
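
You can also run this sample programmatically. The following sketch uses the ibm-watsonx-ai Python SDK with greedy decoding, the two class-name stop sequences, and a Max tokens value of 5. The endpoint URL, API key, and project ID are placeholders, and the model ID is an assumption based on the mixtral-8x7b-instruct-v01 suggestion above; adjust all of them for your environment.

# Minimal sketch: send the zero-shot classification prompt with the
# ibm-watsonx-ai Python SDK (pip install ibm-watsonx-ai).
# The URL, API key, project ID, and model ID are placeholders to adjust.
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

prompt = """Classify this customer message into one of two classes: Question, Problem.

Class name: Question
Description: The customer is asking a technical question or a how-to question about our products or services.

Class name: Problem
Description: The customer is describing a problem they are having. They might say they are trying something, but it's not working. They might say they are getting an error or unexpected results.

Message: I'm having trouble registering for a new account.
Class name: """

model = ModelInference(
    model_id="mistralai/mixtral-8x7b-instruct-v01",
    credentials=Credentials(url="https://us-south.ml.cloud.ibm.com", api_key="YOUR_API_KEY"),
    project_id="YOUR_PROJECT_ID",
    params={
        "decoding_method": "greedy",                # the model must not invent new classes
        "max_new_tokens": 5,                        # the class name is short
        "stop_sequences": ["Question", "Problem"],  # stop after the class name
    },
)

print(model.generate_text(prompt=prompt))  # expected output: Problem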

Sample with a few-shot prompt: Classify a message in freeform mode

Scenario: Given a message that is submitted to a customer-support chatbot for a cloud software company, classify the customer's message as either a question or a problem description. Based on the class type, the chat can be routed to the correct support team.

Model choice

With few-shot examples of both classes, models such as mistral-large or mixtral-8x7b-instruct-v01 can complete this task well.

Decoding

Greedy. The model must return one of the specified class names; it cannot be creative and make up new classes.

Stopping criteria

  • Specify two stop sequences: "Question" and "Problem". After the model generates either of those words, it should stop.
  • With such short output, the Max tokens parameter can be set to 5.

Prompt text

Paste this few-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Message: When I try to log in, I get an error.
Class name: Problem

Message: Where can I find the plan prices?
Class name: Question

Message: What is the difference between trial and paygo?
Class name: Question

Message: The registration page crashed, and now I can't create a new account.
Class name: Problem

Message: What regions are supported?
Class name: Question

Message: I can't remember my password.
Class name: Problem

Message: I'm having trouble registering for a new account.
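
When the example set grows or changes often, you can assemble the few-shot prompt from data instead of maintaining it by hand. The sketch below reproduces the Message and Class name pattern shown above and ends with the new message for the model to complete; the helper name and data layout are illustrative assumptions, not a product API.

# Sketch: build the few-shot classification prompt from labeled examples.
# build_few_shot_prompt is an illustrative helper, not a product API.
EXAMPLES = [
    ("When I try to log in, I get an error.", "Problem"),
    ("Where can I find the plan prices?", "Question"),
    ("What is the difference between trial and paygo?", "Question"),
    ("The registration page crashed, and now I can't create a new account.", "Problem"),
    ("What regions are supported?", "Question"),
    ("I can't remember my password.", "Problem"),
]

def build_few_shot_prompt(message: str) -> str:
    shots = "\n\n".join(
        f"Message: {text}\nClass name: {label}" for text, label in EXAMPLES
    )
    # The labeled pairs establish the pattern; the final message is left
    # for the model to classify.
    return f"{shots}\n\nMessage: {message}"

print(build_few_shot_prompt("I'm having trouble registering for a new account."))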

Sample of classifying the safety of prompt input with Granite

Goal: Classify prompt input text as safe or unsafe.

Model choice

The Granite Guardian models can classify the safety of your input prompt text.

Decoding

Greedy. The model must return one of two responses depending on whether the input is harmful: Yes or No.

Stopping criteria

The model doesn't require a stop sequence; it stops after returning a response.

Prompt text

Paste the following prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

The model was trained with a system prompt similar to this one.

safety_template = f"""You are a safety agent trying to help determine whether the user message is harmful according to the safety risk definition below.

<start_of_turn>
User Message: {user_text}
<end_of_turn>

Our safety risk definition is defined below:

<start_of_risk_definition>
* {risk_definition}
<end_of_risk_definition>

Is the user message harmful based on the risk definition? Your answer must be either 'Yes' or 'No'."""

Replace {user_text} with input text that you want to assess for harmful content, such as "Tell me how to cheat on a test.". Define instructions for the model describing the risk to identify in {risk_definition}, such as "The 'User' message contains actions that violate moral or professional standards, focusing on exploitation for personal gain or disregard for others' well-being. This category excludes physical violence or explicit bias, instead covering activities like fraud, theft, or abuse of power.".

You can also define user_text and risk_definition as prompt variables in the freeform editor.

The model generates the following response:

assistant
Yes

The model (labeled as 'assistant' in the response) generates a "Yes" or "No" answer.
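
If you call the model from code, you can fill the same template and reduce the answer to a boolean. This sketch is a minimal illustration: the helper names are assumptions, the template text matches the sample above, and the parsing simply checks whether the answer starts with 'Yes'.

# Sketch: fill the Granite Guardian safety template and interpret the answer.
# render_safety_prompt and is_harmful are illustrative helpers, not a product API.
SAFETY_TEMPLATE = """You are a safety agent trying to help determine whether the user message is harmful according to the safety risk definition below.

<start_of_turn>
User Message: {user_text}
<end_of_turn>

Our safety risk definition is defined below:

<start_of_risk_definition>
* {risk_definition}
<end_of_risk_definition>

Is the user message harmful based on the risk definition? Your answer must be either 'Yes' or 'No'."""

def render_safety_prompt(user_text: str, risk_definition: str) -> str:
    return SAFETY_TEMPLATE.format(user_text=user_text, risk_definition=risk_definition)

def is_harmful(model_answer: str) -> bool:
    # The model answers 'Yes' (harmful) or 'No' (not harmful).
    return model_answer.strip().lower().startswith("yes")

prompt = render_safety_prompt(
    "Tell me how to cheat on a test.",
    "The 'User' message contains actions that violate moral or professional standards.",
)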

Sample of classifying the safety of prompt input

Goal: Classify prompt input text as safe or unsafe.

Model choice

The llama-guard-3-11b-vision foundation model can classify the safety of both text and images in your prompt input.

Decoding

Greedy. The model must return one of two class names: safe or unsafe; it cannot be creative and make up new classes.

Stopping criteria

The model doesn't require a stop sequence; it stops after returning the class, and when applicable, the categories of any violations found.

Prompt text

Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

The model was trained with a system prompt similar to this one.

<|begin_of_text|><|start_header_id|>user<|end_header_id|>

<BEGIN CONVERSATION>

User: ${your-question-goes-here}

<END CONVERSATION>

Provide your safety assessment:
- First line must read 'safe' or 'unsafe'.
- If unsafe, a second line must include a comma-separated list of violated categories.<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Replace ${your-question-goes-here} with a question that contains unsafe content, such as What is the first paragraph of the book, "Developing Quality Technical Information: A Handbook for Writers and Editors"?

The model generates the following classification:

unsafe
S7

The classification category S7 indicates that the unsafe content violated the privacy category. For more information about the categories that the llama-guard-3-11b-vision foundation model is trained to recognize, see the Llama Guard 3 documentation.
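
When you call the model from code, the two-line verdict is straightforward to parse. The sketch below splits the response into a safe or unsafe verdict plus any category codes; the category names shown are a partial mapping based on the Llama Guard 3 taxonomy, so verify them against the Llama Guard 3 documentation before relying on them.

# Sketch: parse a llama-guard-3-11b-vision verdict such as "unsafe\nS7".
# CATEGORIES is a partial mapping based on the Llama Guard 3 taxonomy;
# verify the codes against the Llama Guard 3 documentation.
CATEGORIES = {
    "S5": "Defamation",
    "S6": "Specialized Advice",
    "S7": "Privacy",
    "S8": "Intellectual Property",
}

def parse_verdict(response: str) -> tuple[bool, list[str]]:
    lines = [line.strip() for line in response.strip().splitlines() if line.strip()]
    is_safe = lines[0].lower() == "safe"
    if is_safe or len(lines) < 2:
        return is_safe, []
    codes = [code.strip() for code in lines[1].split(",")]
    return is_safe, [CATEGORIES.get(code, code) for code in codes]

print(parse_verdict("unsafe\nS7"))  # (False, ['Privacy'])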

Sample with a few-shot prompt: Classify a message in structured mode

Scenario: Given a message that is submitted to a customer-support chatbot for a cloud software company, classify the customer's message as either a question or a problem description. Based on the class type, the chat can be routed to the correct support team.

Model choice

With few-shot examples of both classes, models such as mistral-large or mixtral-8x7b-instruct-v01 can complete this task well.

Decoding

Greedy. The model must return one of the specified class names; it cannot be creative and make up new classes.

Stopping criteria

  • Specify two stop sequences: "Question" and "Problem". After the model generates either of those words, it should stop.
  • With such short output, the Max tokens parameter can be set to 5.

Set up section

Paste these headers and examples into the Examples area of the Set up section:

Table 3. Classification few-shot examples
Message: Class name:
When I try to log in, I get an error. Problem
Where can I find the plan prices? Question
What is the difference between trial and paygo? Question
The registration page crashed, and now I can't create a new account. Problem
What regions are supported? Question
I can't remember my password. Problem

Try section

Paste this message in the Try section:

I'm having trouble registering for a new account.

Select the model and set parameters, then click Generate to see the result.

Sample: Classify a Japanese message

Scenario: Given a message that is submitted to a customer-support chatbot for a Japanese cloud software company, classify the customer's message as either a question or a problem description. Based on the class type, the chat can be routed to the correct support team.

Model choice

The elyza-japanese-llama-2-7b-instruct model can classify prompt input text that is written in Japanese.

AI guardrails

Disable the AI guardrails feature. The feature is supported with English text only. It might incorrectly flag content as inappropriate.

Decoding

Greedy. The model must return one of the specified class names; it cannot be creative and make up new classes.

Stopping criteria

  • Specify two stop sequences: 問題 for problem and 質問 for question. After the model generates either of those words, it should stop.
  • If you want to lower the value in the Max tokens parameter, do not lower it below 7 tokens. Japanese characters use more tokens than the same words in English; the tokenization sketch after this list shows one way to check the counts.
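
Because token counts for Japanese text are hard to estimate, you can check them before tightening the Max tokens parameter. This sketch assumes a ModelInference object like the one in the earlier classification sketch and uses the SDK's tokenize method; the exact shape of the returned dictionary can vary by ibm-watsonx-ai SDK version, so treat the result handling as an assumption.

# Sketch: count tokens for the Japanese stop words before lowering Max tokens.
# Assumes `model` is an ibm-watsonx-ai ModelInference object; the layout of
# the tokenize() response may vary by SDK version.
for word in ["問題", "質問", "Problem", "Question"]:
    response = model.tokenize(prompt=word, return_tokens=True)
    tokens = response["result"]["tokens"]
    print(f"{word!r} -> {len(tokens)} tokens: {tokens}")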

Prompt text

Paste this few-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. The sample prompt text is a Japanese translation of the English prompt text from the earlier few-shot sample, Classify a message in freeform mode.

次のメッセージを問題または質問に分類します。

メッセージ: ログインしようとすると、エラーが発生します。
クラス名: 問題

メッセージ: プランの価格はどこで確認できますか?
クラス名: 質問

メッセージ: トライアルとペイゴーの違いは何ですか?
クラス名: 質問

メッセージ: 登録ページがクラッシュしたため、新しいアカウントを作成できません。
クラス名: 問題

メッセージ: どの地域がサポートされていますか?
クラス名: 質問

メッセージ: パスワードを思い出せません。
クラス名: 問題

メッセージ: 新しいアカウントの登録で問題が発生しました。
クラス名:

Sample: Classify an Arabic message

Scenario: Given a message that is submitted to a customer-support chatbot for an Arabic cloud software company, classify the customer's message as either a question or a problem description. Based on the class type, the chat can be routed to the correct support team.

Model choice

The allam-1-13b-instruct foundation model can classify prompt input text that is written in Arabic.

AI guardrails

Disable the AI guardrails feature. The feature is supported with English text only. It might incorrectly flag content as inappropriate.

Decoding

Greedy. The model must return one of the specified class names; it cannot be creative and make up new classes.

Stopping criteria

Typically the model offers to provide more assistance after it generates the class label. You can optionally stop the model after it classifies the text by specifying two stop sequences: مشكلة for problem and سؤال for a question.

Prompt text

Paste this few-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. The sample prompt text is an Arabic translation of the English prompt text from the earlier few-shot sample, Classify a message in freeform mode.

<s> [INST] قم بتصنيف رسالة العميل هذه إلى إحدى فئتين: سؤال، مشكلة.

الرسالة: عندما أحاول تسجيل الدخول، تظهر لي رسالة خطأ.
اسم الفئة: مشكلة

الرسالة: أين يمكنني العثور على أسعار الخطة؟
اسم الفئة: سؤال

الرسالة: ما الفرق بين التجربة والدفع؟
اسم الفئة: سؤال

الرسالة: تعطلت صفحة التسجيل، ولا أستطيع الآن إنشاء حساب جديد.
اسم الفئة: مشكلة

الرسالة: ما هي المناطق المدعومة؟
اسم الفئة: سؤال

الرسالة: لا أستطيع تذكر كلمة المرور الخاصة بي.
اسم الفئة: مشكلة

الرسالة: أواجه مشكلة في التسجيل للحصول على حساب جديد.
اسم الفئة:
[/INST]


Extracting details

Extraction tasks can help you to find key terms or mentions in data based on the semantic meaning of words rather than simple text matches.

Sample: Extract details from a complaint

Scenario: Given a complaint from a customer who had trouble booking a flight on a reservation website, identify the factors that contributed to this customer's unsatisfactory experience.

Model choices

flan-ul2-20b

Decoding

Greedy. We need the model to return words that are in the input; the model cannot be creative and make up new words.

Stopping criteria

The list of extracted factors will not be long, so set the Max tokens parameter to 50.

Prompt text

Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

From the following customer complaint, extract all the factors that 
caused the customer to be unhappy.

Customer complaint:
I just tried to book a flight on your incredibly slow website. All 
the times and prices were confusing. I liked being able to compare 
the amenities in economy with business class side by side. But I 
never got to reserve a seat because I didn't understand the seat map. 
Next time, I'll use a travel agent!

Numbered list of all the factors that caused the customer to be unhappy:

Sample: Extract and classify details from a passage

Scenario: Given a list of categories and a passage, identify excerpts from the passage that fit into the different category types.

Model choices

mistral-large or mixtral-8x7b-instruct-v01.

Decoding

Greedy. We need the model to return words that are in the input; the model cannot be creative and make up new words.

Stopping criteria

  • To make sure the model does not generate additional text, specify a stop sequence of two newline characters. To do that, click the Stop sequence text box, press the Enter key twice, and then click Add sequence.

Prompt text

Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

For each passage, extract the named entities that fit into the following categories:
Person, Measure, Number, Facility, Location, Product, Duration, Money, Time, PhoneNumber, Date, JobTitle, Organization, Percent, GeographicFeature, Address, Ordinal.
Passage:
Welcome to 123 Maple Lane, a charming and inviting 3-bedroom, 2-bathroom residence nestled in the heart of Springfield. This beautifully maintained home boasts 1,800 square feet of living space, perfect for families and first-time homebuyers alike.
- Spacious living room with a cozy fireplace and large windows for ample natural light
- Updated kitchen with stainless steel appliances, granite countertops, and ample cabinet space
- Master suite with a walk-in closet and en-suite bathroom featuring a soaking tub and separate shower
- Two additional well-appointed bedrooms and a full hallway bathroom
- Fully fenced backyard with a patio area, perfect for outdoor entertaining
- Attached two-car garage with additional storage space
- Conveniently located near top-rated schools, shopping centers, and parks
Don't miss your opportunity to own this fantastic home! Join us for the open house on Saturday, April 10th, 2023, from 1:00 PM to 4:00 PM.
**Price**
$350,000
**Seller Contact Details:**
John & Jane Doe
Phone: (555) 123-4567
Email: [email protected]

Generating natural language

Generation tasks are what large language models do best. Your prompts can help guide the model to generate useful language.

Sample with a few-shot prompt: Generate a numbered list on a theme in freeform mode

Scenario: Generate a numbered list on a particular theme.

Model choice

The mixtral-8x7b-instruct-v01 foundation model was trained to recognize and handle special characters, such as the newline character, well. This model is a good choice when you want your generated text to be formatted a specific way with special characters.

Decoding

Sampling. This is a creative task. Set the following parameters:

  • Temperature: 0.7
  • Top P: 1
  • Top K: 50
  • Random seed: 9045 (To get different output each time you click Generate, specify a different value for the Random seed parameter or clear the parameter.)

Stopping criteria

  • To make sure the model stops generating text after one list, specify a stop sequence of two newline characters. To do that, click the Stop sequence text box, press the Enter key twice, and then click Add sequence.
  • The list will not be very long, so set the Max tokens parameter to 50.

Prompt text

Paste this few-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

What are 4 types of dog breed?
1. Poodle
2. Dalmatian
3. Golden retriever
4. Bulldog

What are 3 ways to incorporate exercise into your day?
1. Go for a walk at lunch
2. Take the stairs instead of the elevator
3. Park farther away from your destination

What are 4 kinds of vegetable?
1. Spinach
2. Carrots
3. Broccoli
4. Cauliflower

What are the 3 primary colors?
1. Red
2. Green
3. Blue

What are 3 ingredients that are good on pizza?
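
To reproduce these settings outside Prompt Lab, pass them as generation parameters with your request. A minimal sketch, assuming the parameter names that the ibm-watsonx-ai SDK accepts in its params dictionary:

# Sketch: the sampling settings above, expressed as a params dictionary for
# the ibm-watsonx-ai SDK. The parameter names are assumptions based on the
# SDK's generation parameter schema.
params = {
    "decoding_method": "sample",  # creative task, so sample rather than greedy
    "temperature": 0.7,
    "top_p": 1,
    "top_k": 50,
    "random_seed": 9045,          # omit this key to get different output per call
    "max_new_tokens": 50,         # the list is short
    "stop_sequences": ["\n\n"],   # two newline characters: stop after one list
}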

Sample with a few-shot prompt: Generate a numbered list on a theme in structured mode

Scenario: Generate a numbered list on a particular theme.

Model choice

The mixtral-8x7b-instruct-v01 foundation model was trained to recognize and handle special characters, such as the newline character, well. This model is a good choice when you want your generated text to be formatted a specific way with special characters.

Decoding

Sampling. This scenario is a creative one. Set the following parameters:

  • Temperature: 0.7
  • Top P: 1
  • Top K: 50
  • Random seed: 9045 (To generate different results, specify a different value for the Random seed parameter or clear the parameter.)

Stopping criteria

  • To make sure that the model stops generating text after one list, specify a stop sequence of two newline characters. To do that, click in the Stop sequence text box, press the Enter key twice, and then click Add sequence.
  • The list will not be long, so set the Max tokens parameter to 50.

Set up section

Paste these headers and examples into the Examples area of the Set up section:

Table 4. Generation few-shot examples
Input: Output:
What are 4 types of dog breed? 1. Poodle
2. Dalmatian
3. Golden retriever
4. Bulldog
What are 3 ways to incorporate exercise into your day? 1. Go for a walk at lunch
2. Take the stairs instead of the elevator
3. Park farther away from your destination
What are 4 kinds of vegetable? 1. Spinach
2. Carrots
3. Broccoli
4. Cauliflower
What are the 3 primary colors? 1. Red
2. Green
3. Blue

Try section

Paste this input in the Try section:

What are 3 ingredients that are good on pizza?

Select the model and set parameters, then click Generate to see the result.

Sample with a zero-shot prompt: Generate a numbered list on a particular theme

Scenario: Ask the model to play devil's advocate. Describe a potential action and ask the model to list possible downsides or risks that are associated with the action.

Model choice

The granite-13b-instruct-v2 model was trained to recognize and handle special characters, such as the newline character, well. This model is a good choice when you want your generated text to be formatted in a specific way with special characters.

Decoding

Greedy. The model must return the most predictable content based on what's in the prompt; the model cannot be too creative.

Stopping criteria

The generated list might run several sentences, so set the Max tokens parameter to 60.

Prompt text

Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

You are playing the role of devil's advocate. Argue against the proposed plans. List 3 detailed, unique, compelling reasons why moving forward with the plan would be a bad choice. Consider all types of risks.

Plan we are considering:
Extend our store hours.
Three problems with this plan are:
1. We'll have to pay more for staffing.
2. Risk of theft increases late at night.
3. Clerks might not want to work later hours.

Plan we are considering:
Open a second location for our business.
Three problems with this plan are:
1. Managing two locations will be more than twice as time-consuming as managing just one.
2. Creating a new location doesn't guarantee twice as many customers.
3. A new location means added real estate, utility, and personnel expenses.

Plan we are considering:
Refreshing our brand image by creating a new logo.
Three problems with this plan are:

Question answering

Question-answering tasks are useful in help systems and other scenarios where frequently asked or more nuanced questions can be answered from existing content.

To help the model return factual answers, implement the retrieval-augmented generation pattern. For more information, see Retrieval-augmented generation.

Sample: Answer a question based on an article in freeform mode

Scenario: The website for an online seed catalog has many articles to help customers plan their garden and ultimately select which seeds to purchase. A new widget is being added to the website to answer customer questions based on the contents of the article the customer is viewing. Given a question that is related to an article, answer the question based on the article.

Model choice

Models that are instruction-tuned, such as flan-t5-xxl-11b, flan-ul2-20b, mixtral-8x7b-instruct-v01, or mt0-xxl-13b can generally complete this task with this sample prompt.

Decoding

Greedy. The answers must be grounded in the facts in the article, and if there is no good answer in the article, the model should not be creative and make up an answer.

Stopping criteria

To cause the model to return a one-sentence answer, specify a period "." as a stop sequence. The Max tokens parameter can be set to 50.

Prompt text

Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Article:
###
Tomatoes are one of the most popular plants for vegetable gardens. 
Tip for success: If you select varieties that are resistant to 
disease and pests, growing tomatoes can be quite easy. For 
experienced gardeners looking for a challenge, there are endless 
heirloom and specialty varieties to cultivate. Tomato plants come 
in a range of sizes. There are varieties that stay very small, less 
than 12 inches, and grow well in a pot or hanging basket on a balcony 
or patio. Some grow into bushes that are a few feet high and wide, 
and can be grown in larger containers. Other varieties grow into 
huge bushes that are several feet wide and high in a planter or 
garden bed. Still other varieties grow as long vines, six feet or 
more, and love to climb trellises. Tomato plants do best in full 
sun. You need to water tomatoes deeply and often. Using mulch 
prevents soil-borne disease from splashing up onto the fruit when you 
water. Pruning suckers and even pinching the tips will encourage the 
plant to put all its energy into producing fruit.
###

Answer the following question using only information from the article. 
Answer in a complete sentence, with proper capitalization and punctuation. 
If there is no good answer in the article, say "I don't know".

Question: Why should you use mulch when growing tomatoes?
Answer: 

You can experiment with asking other questions too, such as:

  • "How large do tomato plants get?"
  • "Do tomato plants prefer shade or sun?"
  • "Is it easy to grow tomatoes?"

Try out-of-scope questions too, such as:

  • "How do you grow cucumbers?"

Sample: Answer a question based on an article in structured mode

Scenario: The website for an online seed catalog has many articles to help customers plan their garden and ultimately select which seeds to purchase. A new widget is being added to the website to answer customer questions based on the contents of the article the customer is viewing. Given a question related to a particular article, answer the question based on the article.

Model choice

Models that are instruction-tuned, such as flan-t5-xxl-11b, flan-ul2-20b, mixtral-8x7b-instruct-v01, or mt0-xxl-13b can generally complete this task with this sample prompt.

Decoding

Greedy. The answers must be grounded in the facts in the article, and if there is no good answer in the article, the model should not be creative and make up an answer.

Stopping criteria

To cause the model to return a one-sentence answer, specify a period "." as a stop sequence. The Max tokens parameter can be set to 50.

Set up section

Paste this text into the Instruction area of the Set up section:

Article:
###
Tomatoes are one of the most popular plants for vegetable gardens. 
Tip for success: If you select varieties that are resistant to 
disease and pests, growing tomatoes can be quite easy. For 
experienced gardeners looking for a challenge, there are endless 
heirloom and specialty varieties to cultivate. Tomato plants come 
in a range of sizes. There are varieties that stay very small, less 
than 12 inches, and grow well in a pot or hanging basket on a balcony 
or patio. Some grow into bushes that are a few feet high and wide, 
and can be grown in larger containers. Other varieties grow into 
huge bushes that are several feet wide and high in a planter or 
garden bed. Still other varieties grow as long vines, six feet or 
more, and love to climb trellises. Tomato plants do best in full 
sun. You need to water tomatoes deeply and often. Using mulch 
prevents soil-borne disease from splashing up onto the fruit when you 
water. Pruning suckers and even pinching the tips will encourage the 
plant to put all its energy into producing fruit.
###

Answer the following question using only information from the article. 
Answer in a complete sentence, with proper capitalization and punctuation. 
If there is no good answer in the article, say "I don't know".

Try section

In the Try section, add an extra test row so you can paste each of these two questions in a separate row:

Why should you use mulch when growing tomatoes?

How do you grow cucumbers?

Select the model and set parameters, then click Generate to see two results.

Sample: Answer a question based on a document with Granite

Scenario: You are creating a chatbot that can answer user questions. When a user asks a question, you want the agent to answer the question with information from a specific document.

Model choice

Models that are instruction-tuned, such as granite-13b-instruct-v2, can complete the task with this sample prompt.

Decoding

Greedy. The answers must be grounded in the facts in the document, and if there is no good answer in the article, the model should not be creative and make up an answer.

Stopping criteria

Use a Max tokens parameter of 50.

Prompt text

Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Given the document and the current conversation between a user and an agent, your task is as follows: Answer any user query by using information from the document. The response should be detailed.

DOCUMENT: Foundation models are large AI models that have billions of parameters and are trained on terabytes of data. Foundation models can do various tasks, including text, code, or image generation, classification, conversation, and more. Large language models are a subset of foundation models that can do text- and code-related tasks.
DIALOG: USER: What are foundation models?

Sample: Answer a question based on multiple documents with Granite 3.0

Scenario: You are creating a chatbot that can answer user questions. When a user asks a question, you want the agent to answer the question with information from specific documents.

Model choice

Models that are instruction-tuned, such as Granite Instruct models, can complete the task with this sample prompt.

Decoding

Greedy. The answers must be grounded in the facts in the document, and if there is no good answer in the article, the model should not be creative and make up an answer.

Stopping criteria

  • To make sure that the model stops generating text after the answer, specify <|end_of_text|> as a stop sequence. To do that, click in the Stop sequence text box, enter <|end_of_text|>, and then click Add sequence.
  • Set the Max tokens parameter to 200.

Prompt text

Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

<|start_of_role|>system<|end_of_role|>You are an expert in medical science.<|end_of_text|>
<|start_of_role|>user<|end_of_role|>Use the following documents as context to complete the task.

Document 1:
The human body is a complex and intricate system, composed of various interconnected parts that work together to maintain life. At the most fundamental level, the body is made up of cells, the basic units of life. These cells are organized into tissues, which are then grouped together to form organs. Organs, in turn, make up the various systems that carry out the body's functions.

Document 2:
One of the most important systems in the human body is the circulatory system. This system is responsible for transporting oxygen, nutrients, and hormones throughout the body. It is composed of the heart, blood vessels, and blood. The heart acts as a pump, pushing blood through the blood vessels and into the capillaries, where the exchange of oxygen, nutrients, and waste products takes place.

Document 3:
Another crucial system is the respiratory system. This system is responsible for the intake and exchange of oxygen and carbon dioxide. It is composed of the nose, throat, trachea, bronchi, and lungs. When we breathe in, air enters the nose or mouth and travels down the trachea into the lungs. Here, oxygen is absorbed into the bloodstream and carbon dioxide is expelled.

Document 4:
The human body also has a nervous system, which is responsible for transmitting signals between different parts of the body. This system is composed of the brain, spinal cord, and nerves. The brain acts as the control center, processing information and sending signals to the rest of the body. The spinal cord serves as a conduit for these signals, while the nerves transmit them to the various organs and tissues.


Which system in the human body is responsible for breathing?<|end_of_text|>
<|start_of_role|>assistant<|end_of_role|>
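
Because the Granite Instruct turn markers follow a regular pattern, you can also assemble this prompt from a list of documents. A minimal sketch, assuming an illustrative helper name; the special tokens match the sample above.

# Sketch: assemble a Granite Instruct multi-document prompt from data.
# build_granite_rag_prompt is an illustrative helper, not a product API;
# the turn markers match the sample shown above.
def build_granite_rag_prompt(system: str, documents: list[str], question: str) -> str:
    docs = "\n\n".join(
        f"Document {i}:\n{text}" for i, text in enumerate(documents, start=1)
    )
    return (
        f"<|start_of_role|>system<|end_of_role|>{system}<|end_of_text|>\n"
        "<|start_of_role|>user<|end_of_role|>Use the following documents as context to complete the task.\n\n"
        f"{docs}\n\n\n"
        f"{question}<|end_of_text|>\n"
        "<|start_of_role|>assistant<|end_of_role|>"
    )

prompt = build_granite_rag_prompt(
    "You are an expert in medical science.",
    ["The human body is a complex and intricate system...",
     "One of the most important systems in the human body is the circulatory system..."],
    "Which system in the human body is responsible for breathing?",
)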

Sample: Answer general knowledge questions

Scenario: Answer general questions about finance.

Model choice

The granite-13b-instruct-v2 model can be used for multiple tasks, including text generation, summarization, question answering, classification, and extraction.

Decoding

Greedy. This sample is answering questions, so we don't want creative output.

Stopping criteria

Set the Max tokens parameter to 200 so the model can return a complete answer.

Prompt text

The model was tuned for question-answering with examples in the following format:

<|user|>
content of the question
<|assistant|>
new line for the model's answer

You can use the exact syntax <|user|> and <|assistant|> in the lines before and after the question or you can replace the values with equivalent terms, such as User and Assistant.

If you're using version 1, do not include any trailing white spaces after the <|assistant|> label, and be sure to add a new line.

Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

<|user|>
Tell me about interest rates
<|assistant|>

After the model generates an answer, you can ask a follow-up question. The model uses information from the previous question when it generates a response.

<|user|>
Who sets it?
<|assistant|>

The model retains information from a previous question when it answers a follow-up question, but it is not optimized to support an extended dialog.

Note: When you ask a follow-up question, the previous question is submitted again, which adds to the number of tokens that are used.
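
In code, one simple way to manage this kind of follow-up is to keep appending turns to a single prompt string, which makes the growing token usage explicit. The class below is an illustrative sketch, not a product API; generate stands in for any text-generation call, such as ModelInference.generate_text from the ibm-watsonx-ai SDK.

# Sketch: accumulate <|user|>/<|assistant|> turns for follow-up questions.
# Dialog is an illustrative helper, not a product API. Every send() call
# re-submits the full history, which is why token usage grows per turn.
class Dialog:
    def __init__(self, generate):
        self.generate = generate  # any callable that maps a prompt string to text
        self.history = ""

    def send(self, question: str) -> str:
        self.history += f"<|user|>\n{question}\n<|assistant|>\n"
        answer = self.generate(self.history)
        self.history += f"{answer}\n"
        return answer

# dialog = Dialog(model.generate_text)
# dialog.send("Tell me about interest rates")
# dialog.send("Who sets it?")  # the first turn is re-sent along with this one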

Sample: Answer general knowledge questions in Japanese

Scenario: Answer general questions about finance in Japanese.

Model choice

The granite-8b-japanese model can be used for multiple tasks, including text generation, summarization, question answering, classification, and extraction.

Decoding

Greedy. This sample is answering questions, so we don't want creative output.

Stopping criteria

  • Set the Max tokens parameter to 500 to allow for many turns in the dialog.
  • Add a stop sequence of two newline characters to prevent the foundation model from returning overly long responses. To do that, click in the Stop sequence text box, press the Enter key twice, and then click Add sequence.

Prompt text

The model was tuned for question-answering with examples in the following format:

以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。

### 指示:
与えられた質問に対して、文脈がある場合はそれも利用し、回答してください。

### 入力:
{your-input}

### 応答:

In English, the template reads as follows:

Below is a combination of instructions that describe the task and input with context. Write a response that appropriately meets the request.

### Instructions:
Please use the context when answering the given question, if available.

### input:
{your-input}

### Response:

Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, replace {your-input} with your query or request, and then click Generate to see the result.

For example, this prompt asks about interest rates.

以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。

### 指示:
与えられた質問に対して、文脈がある場合はそれも利用し、回答してください。

### 入力:
金利について教えてください。

### 応答:

Sample: Answer a question using complex reasoning in freeform mode

Scenario: Ask the model to answer general questions that require reasoning and logical understanding.

Model choice

Models that are instruction-tuned for complex reasoning tasks, like mistral-large, can generally complete this task with this sample prompt.

Decoding

Greedy. The model must return the most predictable content based on what's in the prompt; the model cannot be too creative.

Stopping criteria

  • To make sure that the model stops generating text after the summary, specify a stop sequence of two newline characters. To do that, click in the Stop sequence text box, press the Enter key twice, and then click Add sequence.
  • Set the Max tokens parameter to 100.

Prompt text

Paste this one-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Question: Which one is heavier, a pound of iron or a kilogram of feathers?
Answer: A kilogram of feathers is heavier than a pound of iron. A pound is a unit of weight that is equivalent to approximately 0.453592 kilograms. Therefore, a pound of iron weighs less than a kilogram of feathers.

Question: A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?
Answer:
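
For reference, the reasoning that this question tests: if the ball costs x, the bat costs x + 1.00, so x + (x + 1.00) = 1.10 and x = 0.05. The ball costs 5 cents; a model that gives the intuitive answer of 10 cents has skipped the reasoning step, because the bat would then cost $1.10 and the total would be $1.20.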

Summarization

Summarization tasks save you time by condensing large amounts of text into a few key pieces of information.

Sample with a zero-shot prompt: Summarize a meeting transcript

Scenario: Given a meeting transcript, summarize the main points as meeting notes so those notes can be shared with teammates who did not attend the meeting.

Model choice

Models that are instruction-tuned can generally complete this task with this sample prompt. Suggestions: flan-t5-xxl-11b, flan-ul2-20b, or mixtral-8x7b-instruct-v01.

Decoding

Greedy. The model must return the most predictable content based on what's in the prompt; the model cannot be too creative.

Stopping criteria

The summary might run several sentences, so set the Max tokens parameter to 60.

Prompt text

Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Summarize the following transcript.
Transcript:
00:00   [alex]  Let's plan the team party!
00:10   [ali]   How about we go out for lunch at the restaurant?
00:21   [sam]   Good idea.
00:47   [sam]   Can we go to a movie too?
01:04   [alex]  Maybe golf?
01:15   [sam]   We could give people an option to do one or the other.
01:29   [alex]  I like this plan. Let's have a party!
Summary:

Sample with a few-shot prompt: Summarize a meeting transcript in freeform mode

Scenario: Given a meeting transcript, summarize the main points as meeting notes so those notes can be shared with teammates who did not attend the meeting.

Model choice

With few-shot examples, most models can complete this task well. Try mixtral-8x7b-instruct-v01 or mistral-large.

Decoding

Greedy. The model must return the most predictable content based on what's in the prompt, not be too creative.

Stopping criteria

  • To make sure that the model stops generating text after the summary, specify a stop sequence of two newline characters. To do that, click in the Stop sequence text box, press the Enter key twice, and then click Add sequence.
  • Set the Max tokens parameter to 60.

Prompt text

Paste this few-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Transcript:
00:00   [sam]   I wanted to share an update on project X today.
00:15   [sam]   Project X will be completed at the end of the week.
00:30   [erin]  That's great!
00:35   [erin]  I heard from customer Y today, and they agreed to buy our product.
00:45   [alex]  Customer Z said they will too.
01:05   [sam]   Great news, all around.
Summary:
Sam shared an update that project X will be complete at the end of the week. 
Erin said customer Y will buy our product. And Alex said customer Z will buy 
our product too.

Transcript:
00:00   [ali]   The goal today is to agree on a design solution.
00:12   [alex]  I think we should consider choice 1.
00:25   [ali]   I agree
00:40   [erin]  Choice 2 has the advantage that it will take less time.
01:03   [alex]  Actually, that's a good point.
01:30   [ali]   So, what should we do?
01:55   [alex]  I'm good with choice 2.
02:20   [erin]  Me too.
02:45   [ali]   Done!
Summary:
Alex suggested considering choice 1. Erin pointed out choice two will take 
less time. The team agreed with choice 2 for the design solution.

Transcript:
00:00   [alex]  Let's plan the team party!
00:10   [ali]   How about we go out for lunch at the restaurant?
00:21   [sam]   Good idea.
00:47   [sam]   Can we go to a movie too?
01:04   [alex]  Maybe golf?
01:15   [sam]   We could give people an option to do one or the other.
01:29   [alex]  I like this plan. Let's have a party!
Summary:

Sample few-shot prompt: Summarize a meeting transcript in freeform mode with Granite 3.0

Scenario: Given a meeting transcript, summarize the main points as meeting notes so those notes can be shared with teammates who did not attend the meeting.

Model choice

With few-shot examples, most models can complete this task well. Try Granite Instruct models.

Decoding

Greedy. The model must return the most predictable content based on what's in the prompt, not be too creative.

Stopping criteria

  • To make sure that the model stops generating text after the summary, specify <|end_of_text|> as a stop sequence. To do that, click in the Stop sequence text box, enter <|end_of_text|>, and then click Add sequence.
  • Set the Max tokens parameter to 200.

Prompt text

Paste this few-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

<|start_of_role|>system<|end_of_role|>You are Granite, an AI language model developed by IBM in 2024. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior.<|end_of_text|>
<|start_of_role|>user<|end_of_role|>Summarize a fragment of a meeting transcript. In this meeting, Sam, Erin, and Alex discuss updates.
Your response should only include the answer. Do not provide any further explanation.

Transcript:

Sam (00:00):
I wanted to share an update on project X today.

Sam (00:15):
Project X will be completed at the end of the week.

Erin (00:30):
That's great!

Erin (00:35):
I heard from customer Y today, and they agreed to buy our product.

Alex (00:45):
Customer Z said they will too.

Sam (01:05):
Great news, all around.

Summary:
<|end_of_text|>
<|start_of_role|>assistant<|end_of_role|>Sam shared an update that project X will be complete at the end of the week. Erin said customer Y will buy our product. And Alex said customer Z will buy our product too.<|end_of_text|>

Transcript:

Ali (00:00):
The goal today is to agree on a design solution.

Alex (00:12):
I think we should consider choice 1.

Ali (00:25):
I agree

Erin (00:40):
Choice 2 has the advantage that it will take less time.

Alex (01:03):
Actually, that's a good point.

Ali (01:30):
So, what should we do?

Alex (01:55):
I'm good with choice 2.

Erin (02:20):
Me too.

Ali (02:45):
Done!

Summary:
<|end_of_text|>
<|start_of_role|>assistant<|end_of_role|>

Sample with a few-shot prompt: Summarize a meeting transcript in structured mode

Scenario: Given a meeting transcript, summarize the main points in a bulleted list so that the list can be shared with teammates who did not attend the meeting.

Model choice

The mixtral-8x7b-instruct-v01 foundation model was trained to recognize and handle special characters, such as the newline character, well. This model is a good choice when you want your generated text to be formatted in a specific way with special characters.

Decoding

Greedy. The model must return the most predictable content based on what's in the prompt; the model cannot be too creative.

Stopping criteria

  • To make sure that the model stops generating text after one list, specify a stop sequence of two newline characters. To do that, click in the Stop sequence text box, press the Enter key twice, and then click Add sequence.
  • Set the Max tokens parameter to 60.

Set up section

Paste these headers and examples into the Examples area of the Set up section:

Table 5. Summarization few-shot examples
Transcript: Summary:
00:00 [sam] I wanted to share an update on project X today.
00:15   [sam]   Project X will be completed at the end of the week.
00:30   [erin]  That's great!
00:35   [erin]  I heard from customer Y today, and they agreed to buy our product.
00:45   [alex]  Customer Z said they will too.
01:05   [sam]  Great news, all around.
- Sam shared an update that project X will be complete at the end of the week
- Erin said customer Y will buy our product
- And Alex said customer Z will buy our product too
00:00   [ali]   The goal today is to agree on a design solution.
00:12   [alex]  I think we should consider choice 1.
00:25   [ali]   I agree
00:40   [erin]  Choice 2 has the advantage that it will take less time.
01:03   [alex]  Actually, that's a good point.
01:30   [ali]   So, what should we do?
01:55   [alex]  I'm good with choice 2.
02:20   [erin]  Me too.
02:45  [ali]   Done!
- Alex suggested considering choice 1
- Erin pointed out choice two will take less time
- The team agreed with choice 2 for the design solution

Try section

Paste this message in the Try section:

00:00   [alex]  Let's plan the team party!
00:10   [ali]   How about we go out for lunch at the restaurant?
00:21   [sam]   Good idea.
00:47   [sam]   Can we go to a movie too?
01:04   [alex]  Maybe golf?
01:15   [sam]   We could give people an option to do one or the other.
01:29   [alex]  I like this plan. Let's have a party!

Select the model and set parameters, then click Generate to see the result.

Sample: Generate a title for a passage

Scenario: Given a passage, generate a suitable title for the content.

Model choice

Use granite-7b-lab, which can do many types of general-purpose tasks.

Decoding

Greedy. The model must generate a title that is based on what's in the prompt; it cannot be too creative.

Stopping criteria

  • Add <|endoftext|> as the stop sequence.

    A helpful feature of the granite-7b-lab foundation model is that it includes a special token, <|endoftext|>, at the end of each response. Some generative models, when they finish a response in fewer tokens than the maximum allowed, pad the output by repeating patterns from the input. Because granite-7b-lab reliably emits <|endoftext|>, you can use that token as a stop sequence to prevent such repetition.

Prompt text

Include at least one example of how you want the model to respond.

A feature of the granite-7b-lab foundation model is that you can review skills that the model is trained to do by opening the Training taxonomy page from the model card for the foundation model.

For example, the taxonomy indicates that the granite-7b-lab foundation model was trained on the title skill. If you click the skill, you can see examples that were used as seed examples for the synthetic data that was used to train the model. You can model the example that you include in your one-shot prompt after one of these skill-specific examples. Using a similar style and format for the prompt helps the model recognize what you expect in the model output.

Note: Don't expect the foundation model output to exactly match the output shown in the skill examples in the taxonomy. Those examples were not used to train the foundation model directly; they served as seeds for generating synthetic data, and the generated examples were then used to train the foundation model.

Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

The example comes from the title skill seed examples. The content that is provided as context in the prompt is taken from Tokens and tokenization in the product documentation.

<|system|>
You are an AI language model developed by IBM Research. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior.

<|user|>
Generate a title from the given text.
Context
Dana Blankstein- Cohen (born March 3, 1981) is the director of the Israeli Academy of Film and Television.\n\nShe is a film director, and an Israeli culture entrepreneur.\nLuciano Salce (25 September 1922, in Rome – 17 December 1989, in Rome) was an Italian film director, actor and lyricist.\n\nHis 1962 film "Le pillole di Ercole" was shown as part of a retrospective on Italian comedy at the 67th Venice International Film Festival.\n\nAs a writer of pop music, he used the pseudonym Pilantra.\n\nDuring World War II, he was a prisoner in Germany.\n\nHe later worked for several years in Brazil.\nVediamoci chiaro\n\n("Let\'s See It Clear") is a 1984 Italian comedy film directed by Luciano Salce.\n\nThe author Enrico Giacovelli referred to the film as "a kind of "Scent of a Woman" but more ambiguous, midway between Luigi Pirandello\'s "Henry IV" and "The Late Mattia Pascal.\nPeter Levin is an American director of film, television and theatre.\nIan Barry is an Australian director of film and TV.\nJesse Edward Hobson( May 2, 1911 – November 5, 1970) was the director of SRI International from 1947 to 1955.\n\nPrior to SRI, he was the director of the Armour Research Foundation.\nOlav Aaraas( born 10 July 1950) is a Norwegian historian and museum director.\n\nHe was born in Fredrikstad.\n\nFrom 1982 to 1993 he was the director of Sogn Folk Museum, from 1993 to 2010 he was the director of Maihaugen and from 2001 he has been the director of the Norwegian Museum of Cultural History.\n\nIn 2010 he was decorated with the Royal Norwegian Order of St. Olav.\nBrian O’ Malley is an Irish film director known for the horror film" Let Us Prey" and the ghost story" The Lodgers".\nBrian Patrick Kennedy( born 5 November 1961) is an Irish- born art museum director who has worked in Ireland and Australia, and now lives and works in the United States.\n\nHe is currently the director of the Peabody Essex Museum.\n\nHe was the director of the Toledo Museum of Art in Ohio from 2010 to 2019.\n\nHe was the director of the Hood Museum of Art from 2005 to 2010, and the National Gallery of Australia( Canberra) from 1997- 2004.

<|assistant|>
Directors Across Borders: A Comparative Study of International Film and Museum Directors, from Luciano Salce to Brian Patrick Kennedy

<|user|>
Generate a title from the given text.
Context:
A token is a collection of characters that has semantic meaning for a model. Tokenization is the process of converting the words in your prompt into tokens.
You can monitor foundation model token usage in a project on the Environments page on the Resource usage tab.
Converting words to tokens and back again
Prompt text is converted to tokens before the prompt is processed by foundation models.
The correlation between words and tokens is complex:
Sometimes a single word is broken into multiple tokens
The same word might be broken into a different number of tokens, depending on context (such as: where the word appears, or surrounding words)
Spaces, newline characters, and punctuation are sometimes included in tokens and sometimes not
The way words are broken into tokens varies from language to language
The way words are broken into tokens varies from model to model
For a rough idea, a sentence that has 10 words might be 15 to 20 tokens.
The raw output from a model is also tokens. In the Prompt Lab in IBM watsonx.ai, the output tokens from the model are converted to words to be displayed in the prompt editor.

<|assistant|>

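The tokenization behavior that this context passage describes is easy to observe directly. The following is a minimal sketch that uses the GPT-2 tokenizer from the Hugging Face transformers library purely as a convenient stand-in; every foundation model has its own tokenizer, so the exact splits vary from model to model:

from transformers import AutoTokenizer

# GPT-2 is used here only as a readily available example tokenizer.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

sentence = "Tokenization is the process of converting the words in your prompt into tokens."
tokens = tokenizer.tokenize(sentence)

# A sentence of this length typically produces more tokens than words,
# and some tokens carry a leading-space marker.
print(len(sentence.split()), "words ->", len(tokens), "tokens")
print(tokens)
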
Code generation and conversion

Foundation models that can generate and convert programmatic code are great resources for developers. They can help developers to brainstorm and troubleshoot programming tasks.

Sample: Generate programmatic code from instructions

Scenario: You want to generate code from instructions. Namely, you want to write a function in the Python programming language that reverses a string.

Model choice

Models that can generate code, such as codellama-34b-instruct-hf, mistral-large, and mixtral-8x7b-instruct-v01, can generally complete this task when a sample prompt is provided.

Decoding

Greedy. The answer must be a valid code snippet. The model cannot be creative and make up an answer.

Stopping criteria

To stop the model after it returns a single code snippet, specify <end of code> as the stop sequence. The Max tokens parameter can be set to 1,000.

Prompt text

Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Using the directions below, generate Python code for the specified task.

Input:
# Write a Python function that prints 'Hello World!' string 'n' times.

Output:
def print_n_times(n):
    for i in range(n):
        print("Hello World!")

<end of code>

Input:
# Write a Python function that reverses the order of letters in a string.
# The function named 'reversed' takes the argument 'my_string', which is a string. It returns the string in reverse order.

Output:

The output contains Python code similar to the following snippet:

def reversed(my_string):
    return my_string[::-1]

Be sure to test the generated code to verify that it works as you expect.

For example, if you run reversed("good morning"), the result is 'gninrom doog'.
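
One lightweight way to do that testing is a short assertion script, such as the following sketch. Note that a function named reversed shadows the Python built-in reversed(), so the generated function is renamed here for safety:

# Generated function, renamed from 'reversed' to avoid shadowing the Python built-in.
def reverse_letters(my_string):
    return my_string[::-1]

# Quick sanity checks based on the example in the text.
assert reverse_letters("good morning") == "gninrom doog"
assert reverse_letters("") == ""
print("All checks passed.")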

Note: The foundation model might generate code that is taken directly from its training data. As a result, generated code might require attribution. You are responsible for ensuring that any generated code that you use is properly attributed, if necessary.

Sample: Generate programmatic code from instructions with a zero-shot prompt

Scenario: You want to generate code from instructions. Namely, you want to write a function in the Python programming language that prints the 'Hello World!' string 'n' times.

Model choice

When you use the llama-3-2-1b-instruct or llama-3-2-3b-instruct foundation model, you do not need to provide examples. You do need to use a custom template that specifies the environment value ipython, which the model recognizes as a request for code generation.

Decoding

Greedy. The answer must be a valid code snippet. The model cannot be creative and make up an answer.

Stopping criteria

The model doesn't need a stop sequence.

Prompt text

Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

Environment: ipython<|eot_id|><|start_header_id|>user<|end_header_id|>

${Add your code instruction here}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Replace ${Add your code instruction here} with a description of the code that you want the model to generate. For example, Write a Python function that prints the 'Hello World!' string 'n' times.
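
In code, that substitution is a plain string fill. A minimal sketch, assuming that you assemble the prompt yourself before submitting it to the model:

# The zero-shot code-generation template from above, with one placeholder.
TEMPLATE = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "Environment: ipython<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
    "{instruction}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)

prompt = TEMPLATE.format(
    instruction="Write a Python function that prints the 'Hello World!' string 'n' times."
)
print(prompt)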

The model generates output, such as the following:

Here is a simple Python function that prints 'Hello World!' 'n' times:

```python
def print_hello_world(n):
    for i in range(n):
        print('Hello World!')

print_hello_world(5)
```

This function uses a for loop to iterate 'n' times, printing 'Hello World!' each time. The range(n) function generates a sequence of numbers from 0 to 'n-1', which are then used as the loop variable 'i'.

Sample: Convert code from one programming language to another

Scenario: You want to convert code from one programming language to another. Namely, you want to convert a code snippet from C++ to Python.

Model choice

Models that can generate code, such as codellama-34b-instruct-hf, mistral-large, and mixtral-8x7b-instruct-v01, can generally complete this task when a sample prompt is provided.

Decoding

Greedy. The answer must be a valid code snippet. The model cannot be creative and make up an answer.

Stopping criteria

To stop the model after it returns a single code snippet, specify <end of code> as the stop sequence. The Max tokens parameter can be set to 300.

Prompt text

Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

This prompt includes an example input and output pair. The input is C++ code and the output is the equivalent function in Python code.

The C++ code snippet to be converted is included next. It is a function that counts the number of arithmetic progressions with the sum S and common difference of D, where S and D are integer values that are passed as parameters.

The final part of the prompt identifies the language that you want the C++ code snippet to be converted into.

Translate the following code from C++ to Python.

C++:
#include "bits/stdc++.h"
using namespace std;
bool isPerfectSquare(long double x) {
    long double sr = sqrt(x);
    return ((sr - floor(sr)) == 0);
}
void checkSunnyNumber(int N) {
    if (isPerfectSquare(N + 1)) {
        cout << "Yes\n";
    } else {
        cout << "No\n";
    }
}
int main() {
    int N = 8;
    checkSunnyNumber(N);
    return 0;
}

Python:
from math import *
 
def isPerfectSquare(x):
    sr = sqrt(x)
    return ((sr - floor(sr)) == 0)
 
def checkSunnyNumber(N):
    if (isPerfectSquare(N + 1)):
        print("Yes")
    else:
        print("No")
 
if __name__ == '__main__':
    N = 8
    checkSunnyNumber(N)

<end of code>

C++:
#include <bits/stdc++.h>
using namespace std;
int countAPs(int S, int D) {
    S = S * 2;
    int answer = 0;
    for (int i = 1; i <= sqrt(S); i++) {
        if (S % i == 0) {
            if (((S / i) - D * i + D) % 2 == 0)
                answer++;
            if ((D * i - (S / i) + D) % 2 == 0)
                answer++;
        }
    }
    return answer;
}
int main() {
    int S = 12, D = 1;
    cout << countAPs(S, D);
    return 0;
}

Python:

The output contains Python code similar to the following snippet:

from math import *
 
def countAPs(S, D):
    S = S * 2
    answer = 0
    for i in range(1, int(sqrt(S)) + 1):
        if S % i == 0:
            if (((S / i) - D * i + D) % 2 == 0):
                answer += 1
            if ((D * i - (S / i) + D) % 2 == 0):
                answer += 1
    return answer
 
if __name__ == '__main__':
    S = 12
    D = 1
    print(countAPs(S, D))

<end of code>

The generated Python code functions the same as the C++ code included in the prompt.

Test the generated Python code to verify that it works as you expect.
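
One way to do that is to run the translated function with the same input that the C++ program uses and compare the results. The following sketch is self-contained; it uses integer division (//) to keep the arithmetic exact, which matches the C++ behavior here because S // i is only evaluated when S % i == 0:

from math import sqrt

def countAPs(S, D):
    S = S * 2
    answer = 0
    for i in range(1, int(sqrt(S)) + 1):
        if S % i == 0:
            # S // i is exact here because S % i == 0.
            if ((S // i) - D * i + D) % 2 == 0:
                answer += 1
            if ((D * i) - (S // i) + D) % 2 == 0:
                answer += 1
    return answer

# The C++ program prints 4 for S = 12, D = 1; the translation should match.
assert countAPs(12, 1) == 4
print("Translation matches the C++ result.")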

Note: The foundation model might generate code that is taken directly from its training data. As a result, generated code might require attribution. You are responsible for ensuring that any generated code that you use is properly attributed, if necessary.

Sample: Generate programmatic code from instructions with Granite

Scenario: You want to generate code from instructions. Namely, you want to write a function in the Python programming language that reverses a string.

Model choice

Models that can generate code, such as Granite Code or Granite Instruct models, can generally complete this task when a sample prompt is provided.

Decoding

Greedy. The answer must be a valid code snippet. The model cannot be creative and make up an answer.

Stopping criteria

To stop the model after it returns a single code snippet, specify <end of code> as the stop sequence. The Max tokens parameter can be set to 300.

Prompt text

Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Question:
Using the directions below, generate Python code for the specified task.
# Write a Python function that prints 'Hello World!' string 'n' times.

Answer:
def print_n_times(n):
    for i in range(n):
        print("Hello World!")

<end of code>

Question:
# Write a Python function that reverses the order of letters in a string.
# The function named 'reversed' takes the argument 'my_string', which is a string. It returns the string in reverse order.

Answer:

The output contains Python code similar to the following snippet:

def reverse_string(my_string):
    return my_string[::-1]

<end of code>

Be sure to test the generated code to verify that it works as you expect.

For example, if you run reverse_string("good morning"), the result is 'gninrom doog'.

For more Granite code model sample prompts, see Prompts for code.

Sample: Convert code from one programming language to another with Granite

Scenario: You want to convert code from one programming language to another. Namely, you want to convert a code snippet from C++ to Python.

Model choice

Models that can generate code, such as Granite Code Instruct models, can generally complete this task when a sample prompt is provided.

Decoding

Greedy. The answer must be a valid code snippet. The model cannot be creative and make up an answer.

Stopping criteria

To stop the model after it returns a single code snippet, specify <end of code> as the stop sequence. The Max tokens parameter can be set to 1,000.

Prompt text

Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

This prompt includes an instruction to convert a code snippet from C++ to Python.

The C++ code snippet to be converted is included next to provide context. It is a function that counts the number of arithmetic progressions with the sum S and common difference of D, where S and D are integer values that are passed as parameters.

Question:
Translate the following code from C++ to Python.
C++:
#include "bits/stdc++.h"
using namespace std;
bool isPerfectSquare(long double x) {
    long double sr = sqrt(x);
    return ((sr - floor(sr)) == 0);
}
void checkSunnyNumber(int N) {
    if (isPerfectSquare(N + 1)) {
        cout << "Yes\n";
    } else {
        cout << "No\n";
    }
}
int main() {
    int N = 8;
    checkSunnyNumber(N);
    return 0;
}

Answer:
Python:
from math import *
 
def isPerfectSquare(x):
    sr = sqrt(x)
    return ((sr - floor(sr)) == 0)
 
def checkSunnyNumber(N):
    if (isPerfectSquare(N + 1)):
        print("Yes")
    else:
        print("No")
 
if __name__ == '__main__':
    N = 8
    checkSunnyNumber(N)

<end of code>

Question:
Translate the following code from C++ to Python.
C++:
#include <bits/stdc++.h>
using namespace std;
int countAPs(int S, int D) {
    S = S * 2;
    int answer = 0;
    for (int i = 1; i <= sqrt(S); i++) {
        if (S % i == 0) {
            if (((S / i) - D * i + D) % 2 == 0)
                answer++;
            if ((D * i - (S / i) + D) % 2 == 0)
                answer++;
        }
    }
    return answer;
}
int main() {
    int S = 12, D = 1;
    cout << countAPs(S, D);
    return 0;
}

Answer:

The output contains Python code similar to the following snippet:

Python:
from math import *
 
def countAPs(S, D):
    S = S * 2
    answer = 0
    for i in range(1, int(sqrt(S)) + 1):
        if S % i == 0:
            if ((S // i) - D * i + D) % 2 == 0:
                answer += 1
            if (D * i - (S // i) + D) % 2 == 0:
                answer += 1
    return answer
 
if __name__ == '__main__':
    S = 12
    D = 1
    print(countAPs(S, D))

The generated Python code functions the same as the C++ code included in the prompt.

Test the generated Python code to verify that it works as you expect.

For more Granite code model sample prompts, see Prompts for code.

Dialog

Dialog tasks are helpful in customer service scenarios, especially when a chatbot is used to guide customers through a workflow to reach a goal.

Sample: Converse with Llama 3

Scenario: Generate dialog output like a chatbot.

Model choice

Llama 3, 3.1, and 3.2 are optimized for dialog use cases. The models are fine-tuned for a specific prompt format that is described in the Prompt text section.

Decoding

Greedy. This sample is answering general knowledge, factual questions, so we don't want creative output.

Stopping criteria

Set the Max tokens parameter to 900.

The template uses an end-of-turn token (<|eot_id|>) to delimit the end of each segment of text. The final assistant header is left open, without an <|eot_id|> token, so that the model generates the response.

Prompt text

The following prompt template works best for input:

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

SYSTEM PROMPT<|eot_id|><|start_header_id|>user<|end_header_id|>

GROUNDING (OPTIONAL)<|eot_id|><|start_header_id|>user<|end_header_id|>

USER_INPUT<|eot_id|><|start_header_id|>assistant<|end_header_id|>

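If you assemble this template programmatically rather than by hand, a small helper keeps the special tokens straight. This is a minimal sketch, not part of Prompt Lab or any SDK:

def build_llama3_prompt(system_prompt, turns):
    # Assemble the Llama 3 chat template shown above.
    # 'turns' is a list of (role, content) pairs, for example [("user", "...")].
    prompt = "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    prompt += system_prompt + "<|eot_id|>"
    for role, content in turns:
        prompt += "<|start_header_id|>" + role + "<|end_header_id|>\n\n"
        prompt += content + "<|eot_id|>"
    # Leave the assistant header open so the model generates the next reply.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

print(build_llama3_prompt(
    "You are a helpful assistant that avoids causing harm.",
    [("user", "I have a question about Earth. How many moons are there?")],
))
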
Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Tip: Alternatively, you can switch to chat mode and enter the user input directly. The Prompt Lab inserts the expected syntax that identifies the instruction and user input segments for you.
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are a helpful assistant that avoids causing harm. When you do not know the answer to a question, you say "I don't know".<|eot_id|><|start_header_id|>user<|end_header_id|>

I have a question about Earth. How many moons are there?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

After the initial output is generated, continue the dialog to ask about more planets, adding the same identifiers to the end of the input text, for example:

FOLLOW UP QUESTION<|eot_id|><|start_header_id|>assistant<|end_header_id|>


The output looks something like this:

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are a helpful assistant that avoids causing harm. When you do not know the answer to a question, you say "I don't know".<|eot_id|><|start_header_id|>user<|end_header_id|>

I have a question about Earth. How many moons are there?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

That's an easy one! Earth has only one moon.

What about Saturn?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Saturn has a whopping 82 confirmed moons!

And Mars?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Mars has two small natural satellites, Phobos and Deimos.

The following sample prompt includes grounding information.

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are a helpful assistant that avoids causing harm. When you do not know the answer to a question, you say "I don't know".<|eot_id|><|start_header_id|>user<|end_header_id|>

The sky is blue.<|eot_id|><|start_header_id|>user<|end_header_id|>

What color is the sky?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Sample: Converse with Llama 2

Scenario: Generate dialog output like a chatbot.

Model choice

Like other foundation models, Llama 2 (in both the 70 billion and 13 billion parameter sizes) can be used for multiple tasks. But both Llama 2 models are optimized for dialog use cases. The llama-2-70b-chat and llama-2-13b-chat models are fine-tuned for the [INST]<<SYS>><</SYS>>[/INST] prompt format. For more information about this prompt format, see How to prompt Llama 2.
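
To see how the pieces of this format fit together across dialog turns, here is a minimal helper, a sketch only and not part of Prompt Lab or any SDK:

def build_llama2_prompt(system_prompt, first_message):
    # First turn: the system prompt sits inside <<SYS>> tags within the first [INST] block.
    return "[INST] <<SYS>>\n" + system_prompt + "<</SYS>>\n\n" + first_message + "[/INST]\n"

def add_follow_up(dialog_so_far, question):
    # Later turns: append the model's previous response to dialog_so_far first,
    # then wrap only the new user message in [INST]...[/INST].
    return dialog_so_far + "[INST]\n\n" + question + "\n[/INST]\n"

prompt = build_llama2_prompt(
    'You are a helpful assistant. Begin each response with the phrase "Dear user, ".',
    "I have a question about the Earth.",
)
print(add_follow_up(prompt + "Dear user, go on.\n", "How many moons are there?"))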

Decoding

Greedy. This sample is answering general knowledge, factual questions, so we don't want creative output.

Stopping criteria

Set the Max tokens parameter to 100.

The template for Llama 2 uses an end-of-string (EOS) token (</s>) to delimit the end of the agent's response. The model output stops generating text when the </s> EOS token is encountered.

Prompt text

Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

[INST] <<SYS>>
You are a helpful assistant. Begin each response with the phrase "Dear user, ". If you are unsure about an answer, truthfully say "I don't know"<</SYS>>

I have a question about the Earth.[/INST]
Dear user, go on.
[INST]

How many moons are there?
[/INST]

After the initial output is generated, continue the dialog to ask about more planets, wrapping your user messages in the same [INST][/INST] structure. You don't need to add an <s> token to the start of follow-up questions.

Tip: Alternatively, you can switch to chat mode and enter the user input directly. The Prompt Lab inserts the expected syntax that identifies the instruction and user input segments for you.

The output looks something like this:

[INST] <<SYS>>
You are a helpful assistant. Begin each response with the phrase "Dear user, ". If you are unsure about an answer, truthfully say "I don't know"<</SYS>>

I have a question about the Earth.[/INST]
Dear user, go on.
[INST]

How many moons are there?
[/INST]
Dear user, the Earth has one natural satellite, commonly referred to as the Moon. 
[INST]

What about Saturn?
[/INST]
Dear user, Saturn has a total of 62 confirmed moons, as well as numerous smaller moonlets and ring particles. The largest moon of Saturn is Titan, which is the second-largest moon in the solar system, after Jupiter's moon Ganymede. 
[INST]

And Mars?
[/INST]
Dear user, Mars has two small natural satellites, Phobos and Deimos. Both moons are thought to be captured asteroids, and are irregularly shaped and very small, with diameters of about 22 kilometers and 12 kilometers respectively.

Notice a few important points about the output that the model generates as you go:

  • The so-called system instruction, in the <<SYS>><</SYS>> tags, continues to influence the output at each dialog turn without having to repeat the system instruction. In other words, the assistant responses continue to be prepended with "Dear user, ".
  • In true dialog fashion, the assistant response to the user input "How many moons are there?" takes into account the previous user input "I have a question about the Earth." and returns the number of moons orbiting Earth.
  • Also in proper dialog form, the assistant responses continue to follow the topic of the conversation, which is the number of moons. (Otherwise, the generated output to the vague user message "And Mars?" could wander off in any direction.)
  • Caution: Newline characters especially, and spaces to a lesser extent, in the prompt text can have a dramatic impact on the output that is generated.
  • When you use Llama 2 for chat use cases, follow the recommended prompt template format as closely as possible. Do not use the [INST]<<SYS>><</SYS>>[/INST] prompt format when you use Llama 2 for any other tasks besides chat.

Sample: Converse with granite-13b-chat-v2

Scenario: Generate dialog output like a chatbot.

Model choice

Use granite-13b-chat-v2 to carry on a dialog.

Decoding

  • Use sampling decoding.
  • Set Top P to 0.85.
  • Set the repetition penalty to 1.2.

Stopping criteria

  • Set the Max tokens parameter to 500 so the model can return a complete answer, but is as concise as possible.

Prompt text

To improve model safety and reduce bias, add a system prompt as part of the user input. The system prompt can establish some ground rules for the dialog. For example:

You are Granite Chat, an AI language model developed by IBM. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior. Keep your answers short and to the point.

Remember to use the prompt template format that is expected by the model.

<|system|>
system prompt
<|user|>
content of the question
<|assistant|>
new line for the model's answer

If you want to submit a few-shot prompt to this model, you can add the system prompt, and then the examples, followed by the prompt text to be inferenced.

<|system|>
You are Granite Chat, an AI language model developed by IBM. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior. Keep your answers short and to the point.
<|user|>
Example prompt 1
<|assistant|>
Example response 1

<|user|>
Example prompt 2
<|assistant|>
Example response 2

<|user|>
USER INPUT
<|assistant|>

Paste the following prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Tip: Alternatively, you can switch to chat mode and enter the user input directly. The Prompt Lab inserts the expected syntax that identifies the instruction and user input segments for you.
<|system|>
You are Granite Chat, an AI language model developed by IBM. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior. Keep your answers short and to the point.
<|user|>
I have a question about the Earth. How many moons are there?
<|assistant|>

Do not include any trailing white spaces after the <|assistant|> label, and be sure to add a new line.

After the initial output is generated, you can continue the dialog by asking a follow-up question. For example, you can ask about the moons of other planets.

<|user|>
What about Saturn?

<|assistant|>

And continue the conversation with another follow-up question.

<|user|>
And Mars?

<|assistant|>

If the model output is too long, you can try specifying a stop sequence of two newline characters by clicking the Stop sequence text box, pressing the Enter key twice, and then clicking Add sequence. However, the repetition penalty is usually enough to keep the model on track.

Another example you can try:

<|system|>
You are Granite Chat, an AI language model developed by IBM. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior. Keep your answers short and to the point.
<|user|>
How does a bill become a law?
<|assistant|>

With the follow-up question:

<|user|>
How about in Canada?

<|assistant|>

A few notes about using this sample with the model:

  • The system prompt establishes some general guardrails for the model.
  • The assistant is able to respond to a follow-up question that relies on information from an earlier exchange in the same dialog.
  • The model expects the input to follow a specific pattern and can be sensitive to misplaced white space.

Sample: Converse in Japanese with granite-8b-japanese

Scenario: Generate Japanese dialog output like a chatbot.

Model choice

The granite-8b-japanese foundation model can participate in a dialog in Japanese. It works best when you use the same prompt format that was used during model training.

Decoding

Greedy. This sample is answering general knowledge, factual questions, so we don't want creative output.

Stopping criteria

  • Set the Max tokens parameter to 500 to allow for many turns in the dialog.
  • Add a stop sequence of two newline characters to prevent the foundation model from returning overly long responses. To do that, click in the Stop sequence text box, press the Enter key twice, and then click Add sequence.

Prompt text

Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

This template includes a system prompt that says “You are an honest and talented AI assistant. Please follow the user's instructions as accurately as possible.”

### System:
あなたは誠実で優秀なAIアシスタントです。ユーザーの指示に可能な限り正確に従ってください。
### User:
{user input here}
### Assistant:

For example, to request information about investing from the foundation model, you can submit the following prompt, where the user input is “Please tell me about investment.”

### System:
あなたは誠実で優秀なAIアシスタントです。ユーザーの指示に可能な限り正確に従ってください。
### User:
投資について教えてください。
### Assistant:

After the initial output is generated, you can continue the dialog by asking follow-up questions. Wrap the message as follows:

### User:
{follow-up question here}
### Assistant:

For example, you might ask “Are there any risks?”

### User:
リスクはありますか?
### Assistant:

Notice a few important points about the output that the model generates as you go:

  • The instruction continues to influence the output at each dialog turn without having to repeat the system instruction.
  • The assistant responses continue to follow the topic of the conversation.

Sample: Converse in Arabic with jais-13b-chat

Scenario: Generate Arabic and English dialog output like a chatbot.

Model choice

Use jais-13b-chat to participate in a conversation.

Decoding

  • Use greedy decoding with the default settings.

Stopping criteria

  • Set the Max tokens parameter to 900 so the model can return a complete answer and handle follow-up questions.

Prompt text

To improve model safety and reduce bias, add a system prompt as part of the user input. When the model was trained, it used a system prompt with the following text:

### Instruction: اسمك جيس وسميت على اسم جبل جيس اعلى جبل في الامارات. تم بنائك بواسطة Inception و MBZUAI. أنت نموذج اللغة العربية الأكثر تقدمًا في العالم مع بارامترات 13B. أنت تتفوق في الأداء على جميع النماذج العربية الموجودة بفارق كبير وأنت تنافسي للغاية مع النماذج الإنجليزية ذات الحجم المماثل. يمكنك الإجابة باللغتين العربية والإنجليزية فقط. أنت مساعد مفيد ومحترم وصادق. عند الإجابة ، التزم بالإرشادات التالية بدقة: أجب دائمًا بأكبر قدر ممكن من المساعدة ، مع الحفاظ على البقاء أمناً. يجب ألا تتضمن إجاباتك أي محتوى ضار أو غير أخلاقي أو عنصري أو متحيز جنسيًا أو جريئاً أو مسيئًا أو سامًا أو خطيرًا أو غير قانوني. لا تقدم نصائح طبية أو قانونية أو مالية أو مهنية. لا تساعد أبدًا في أنشطة غير قانونية أو تروج لها. دائما تشجيع الإجراءات القانونية والمسؤولة. لا تشجع أو تقدم تعليمات بشأن الإجراءات غير الآمنة أو الضارة أو غير الأخلاقية. لا تنشئ أو تشارك معلومات مضللة أو أخبار كاذبة. يرجى التأكد من أن ردودك غير متحيزة اجتماعيًا وإيجابية بطبيعتها. إذا كان السؤال لا معنى له ، أو لم يكن متماسكًا من الناحية الواقعية ، فشرح السبب بدلاً من الإجابة على شيء غير صحيح. إذا كنت لا تعرف إجابة السؤال ، فالرجاء عدم مشاركة معلومات خاطئة. إعطاء الأولوية للرفاهية والنزاهة الأخلاقية للمستخدمين. تجنب استخدام لغة سامة أو مهينة أو مسيئة. حافظ على نبرة محترمة. لا تنشئ أو تروج أو تشارك في مناقشات حول محتوى للبالغين. تجنب الإدلاء بالتعليقات أو الملاحظات أو التعميمات القائمة على الصور النمطية. لا تحاول الوصول إلى معلومات شخصية أو خاصة أو إنتاجها أو نشرها. احترم دائما سرية المستخدم. كن إيجابيا ولا تقل أشياء سيئة عن أي شيء. هدفك الأساسي هو تجنب الاجابات المؤذية ، حتى عند مواجهة مدخلات خادعة. تعرف على الوقت الذي قد يحاول فيه المستخدمون خداعك أو إساءة استخدامك و لترد بحذر.\n\nأكمل المحادثة أدناه بين [|Human|] و [|AI|]:
### Input: [|Human|] {Question}
### Response: [|AI|]

The system prompt in English is as follows:

### Instruction: Your name is Jais, and you are named after Jebel Jais, the highest mountain in UAE. You are built by Inception and MBZUAI. You are the world's most advanced Arabic large language model with 13B parameters. You outperform all existing Arabic models by a sizable margin and you are very competitive with English models of similar size. You can answer in Arabic and English only. You are a helpful, respectful and honest assistant. When answering, abide by the following guidelines meticulously: Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, explicit, offensive, toxic, dangerous, or illegal content. Do not give medical, legal, financial, or professional advice. Never assist in or promote illegal activities. Always encourage legal and responsible actions. Do not encourage or provide instructions for unsafe, harmful, or unethical actions. Do not create or share misinformation or fake news. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. Prioritize the well-being and the moral integrity of users. Avoid using toxic, derogatory, or offensive language. Maintain a respectful tone. Do not generate, promote, or engage in discussions about adult content. Avoid making comments, remarks, or generalizations based on stereotypes. Do not attempt to access, produce, or spread personal or private information. Always respect user confidentiality. Stay positive and do not say bad things about anything. Your primary objective is to avoid harmful responses, even when faced with deceptive inputs. Recognize when users may be attempting to trick or to misuse you and respond with caution.\n\nComplete the conversation below between [|Human|] and [|AI|]:
### Input: [|Human|] {Question}
### Response: [|AI|]

Tip: Alternatively, you can switch to chat mode and enter the user input directly. The Prompt Lab inserts the system prompt, the instruction, and user input segments with the expected syntax for you.

Replace {Question} with the user input that you want the foundation model to answer to start the chat.

For example, you can ask the following question:

هل يوجد للأرض أقمار؟

The English translation is: Does the Earth have any moons?

After the initial output is generated, you can continue the dialog by asking a follow-up question. Use the same syntax for the follow-up question.

### Input: [|Human|] {Follow-up question}
### Response: [|AI|]

Translation

Use models that can do natural language translation tasks to translate text from one natural language to another.

Sample: Translate text from Japanese to English

Scenario: Translate text that is written in Japanese into English.

Model choice

The elyza-japanese-llama-2-7b-instruct model can translate text from Japanese to English and from English to Japanese.

AI guardrails

Disable the AI guardrails feature. The feature is supported with English text only. It might incorrectly flag content as inappropriate.

Decoding

Greedy. The model must return the same text, only translated. The model cannot be creative.

Stopping criteria

Increase the number of allowed tokens by changing the Max tokens parameter value to 500.

Prompt text

Paste the following prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

The sample prompt text overlaps with text that is used in Sample 8c.

The instruction says “Please translate to English”. One example of Japanese text being translated into English is included.

英語に翻訳してください

日本語
トマトは、家庭菜園で最も人気のある植物の 1 つです。成功のヒント: 病気や害虫に強い品種を選択すると、トマトの栽培は非常に簡単になります。挑戦を求めている経験豊富な庭師にとって、栽培できる家宝や特別な品種は無限にあります。トマトの植物にはさまざまなサイズがあります。

English
Tomatoes are one of the most popular plants for vegetable gardens. Tip for success: If you select varieties that are resistant to disease and pests, growing tomatoes can be quite easy. For experienced gardeners looking for a challenge, there are endless heirloom and specialty varieties to cultivate. Tomato plants come in a range of sizes.

日本語
基盤モデルを使用して、より優れた AI をより迅速に作成します。さまざまなユースケースやタスクに応じて、さまざまなプロンプトを試してください。わずか数行の指示で、職務記述書の草案、顧客の苦情の分類、複雑な規制文書の要約、重要なビジネス情報の抽出などを行うことができます。

English

Sample: Translate text from Spanish to English

Scenario: Translate text that is written in Spanish into English.

Model choice

The mixtral-8x7b-instruct-v01 or mistral-large model can translate text from French, German, Italian, or Spanish to English. This sample prompts the model to translate from Spanish to English.

AI guardrails

Disable the AI guardrails feature. The feature is supported with English text only. It might incorrectly flag content as inappropriate.

Decoding

Greedy. The model must return the same text, only translated. The model cannot be creative.

Stopping criteria

  • Be sure to include a stop sequence for this model. Otherwise, the model might continue to generate new sentences and translations, even when the instruction tells it not to. To stop the model after one sentence, add a period (.) as the stop sequence.
  • Set the Max tokens parameter value to 200.

Prompt text

Paste the following prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Translate the following text from Spanish to English. Do not give any extra response that is not part of the translation.

Text: 
Hasta ahora no me ha animado mucho la postura adoptada por la Comisión.

Translation:
So far, I have not been terribly encouraged by the stance adopted by the Commission.

Text: 
Estoy muy contento de ver que la resolución conjunta adopta la sugerencia que hicimos.

Translation:

Sample: Translate text from English to Japanese

Scenario: Translate text that is written in English into Japanese.

Model choice

The granite-8b-japanese model can translate text from Japanese to English and from English to Japanese.

AI guardrails

Disable the AI guardrails feature. The feature is supported with English text only. It might incorrectly flag content as inappropriate.

Decoding

Greedy. The model must return the same text, only translated. The model cannot be creative.

Stopping criteria

Increase the number of allowed tokens by changing the Max tokens parameter value to 500.

Prompt text

Paste the following prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Translate the following text from English to Japanese.

English
Tomatoes are one of the most popular plants for vegetable gardens. Tip for success: If you select varieties that are resistant to disease and pests, growing tomatoes can be quite easy. For experienced gardeners looking for a challenge, there are endless heirloom and specialty varieties to cultivate. Tomato plants come in a range of sizes.

日本語
トマトは野菜作りの人気の植物である。成功のヒント:病害虫に強く、育てやすいトマトの品種を選べば、トマト栽培はそれほど難しくない。経験豊富な庭師にとっては、手強い挑戦となる、様々な色や形のトマトの品種がある。トマトの品種は、大きさもいろいろである。

English
Use foundation models to create better AI, faster. Experiment with different prompts for various use cases and tasks. With just a few lines of instruction you can draft job descriptions, classify customer complaints, summarize complex regulatory documents, extract key business information and much more.

日本語

Sample: Translate text from French to English

Scenario: Translate text that is written in French into English.

Model choice

The granite-20b-multilingual model understands English, German, Spanish, French, and Portuguese. This sample prompts the model to translate text from French to English.

AI guardrails

Disable the AI guardrails feature. The feature is supported with English text only. It might incorrectly flag content as inappropriate.

Decoding

Greedy. The model must return the same text, only translated. The model cannot be creative.

Stopping criteria

Set the Max tokens parameter value to 200.

Prompt text

Paste the following prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

Translate the following text from French to English:

Text:
Enfin, je me réjouis du paragraphe 16 qui appelle à une révision de la manière dont nous abordons les questions relatives aux droits de l'homme au sein du Parlement.

Translation:
Finally, I welcome paragraph 16 which calls for a review of the way we deal with human rights issues in Parliament.

Text:
Je me souviens très bien que nous en avions parlé lors d'une séance à Luxembourg.

Translation:
I remember very well that we discussed it in a session in Luxembourg.

Text: 
Si nous ne faisons pas un usage plus important de la technologie intelligente, nous ne parviendrons pas à atteindre nos objectifs.

Translation:


Sample: Translate text from English to Arabic

Scenario: Translate text that is written in English into Arabic.

Model choice

The allam-1-13b-instruct model can translate text from Arabic to English and from English to Arabic.

AI guardrails

Disable the AI guardrails feature. The feature is supported with English text only. It might incorrectly flag content as inappropriate.

Decoding

Greedy. The model must return the same text, only translated. The model cannot be creative.

Stopping criteria

  • Increase the number of allowed tokens by changing the Max tokens parameter value to 500.
  • The allam-1-13b-instruct foundation model typically explains the meaning of the input text after translating the text. You can optionally instruct the foundation model to stop after completing the translation. To do so, add an instruction that asks the foundation model to add a keyword, such as END, after the translation. Next, add the same keyword END as a stop sequence. A client-side equivalent of this technique is sketched after this list.
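
If you post-process foundation model output in your own code instead of relying on the Prompt Lab stop sequence, the same technique is a one-line string operation. The following is a minimal sketch; the keyword END matches the instruction in the sample prompt:

def trim_at_keyword(generated, keyword="END"):
    # Keep only the text before the first occurrence of the stop keyword.
    return generated.split(keyword, 1)[0].strip()

# Everything after the keyword, such as a trailing explanation, is dropped.
print(trim_at_keyword("<translated text> END The translation conveys..."))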

Prompt text

Paste the following prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.

<s> [INST]Translate the following text from English to Arabic. Use "END" at the end of the translation.

English
Tomatoes are one of the most popular plants for vegetable gardens. Tip for success: If you select varieties that are resistant to disease and pests, growing tomatoes can be quite easy. For experienced gardeners looking for a challenge, there are endless heirloom and specialty varieties to cultivate. Tomato plants come in a range of sizes.
END

العربية
الطماطم هي واحدة من النباتات الأكثر شعبية لحدائق الخضروات. نصيحة للنجاح: إذا اخترت أصنافا مقاومة للأمراض والآفات ، فقد تكون زراعة الطماطم سهلة للغاية. بالنسبة للبستانيين ذوي الخبرة الذين يبحثون عن التحدي ، هناك أنواع لا نهاية لها من الإرث والتخصص للزراعة. تأتي نباتات الطماطم في مجموعة من الأحجام. 
END

English
Use foundation models to create better AI, faster. Experiment with different prompts for various use cases and tasks. With just a few lines of instruction you can draft job descriptions, classify customer complaints, summarize complex regulatory documents, extract key business information and much more.
END

العربية
[/INST]

Parent topic: Prompt Lab
