Try these samples to learn how different prompts can guide foundation models to do common tasks.
How to use this topic
Explore the sample prompts in this topic:
Copy and paste the prompt text and input parameter values into the Prompt Lab in IBM watsonx.ai
See what text is generated.
See how different models generate different output.
Change the prompt text and parameters to see how results vary.
There is no one right way to prompt foundation models, but patterns that work fairly reliably have emerged in academia and industry. Use the samples in this topic to build your skills and your intuition about prompt engineering through experimentation.
This video provides a visual method to learn the concepts and tasks in this documentation.
Video chapters:
[0:11] Introduction to prompts and Prompt Lab
[0:33] Key concept: Everything is text completion
[1:34] Useful prompt pattern: Few-shot prompt
[1:58] Stopping criteria: Max tokens, stop sequences
[3:32] Key concept: Fine-tuning
[4:32] Useful prompt pattern: Zero-shot prompt
[5:32] Key concept: Be flexible, try different prompts
[6:14] Next steps: Experiment with sample prompts
Samples overview
You can find samples that prompt foundation models to generate output that supports the following tasks: classification, extracting details, generating natural language, question answering, summarization, and code generation and conversion.
Classification
Classification is useful for predicting data in distinct categories. Classifications can be binary, with two classes of data, or multi-class. A classification task is useful for categorizing information, such as customer feedback, so that you can manage or act on the information more efficiently.
Sample with a zero-shot prompt: Classify a message
Scenario: Given a message that is submitted to a customer-support chatbot for a cloud software company, classify the customer's message as either a question or a problem. Depending on the class assignment, the chat is routed to the correct support team for the issue type.
Model choice
Models that are instruction-tuned can generally complete this task with this sample prompt.
Suggestions: mt0-xxl-13b, flan-t5-xxl-11b, flan-ul2-20b, mistral-large, or mixtral-8x7b-instruct-v01
Model parameters
Decoding: Set to Greedy. The model must return one of the specified class names. The model cannot be creative and make up new classes.
Stopping criteria: Specify two stop sequences: "Question" and "Problem". After the model generates either of those words, it should stop. With such short output, the Max tokens parameter can be set to 5.
Prompt text
Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
Classify this customer message into one of two classes: Question, Problem.
Class name: Question
Description: The customer is asking a technical question or a how-to question
about our products or services.
Class name: Problem
Description: The customer is describing a problem they are having. They might
say they are trying something, but it's not working. They might say they are
getting an error or unexpected results.
Message: I'm having trouble registering for a new account.
Class name:
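You can also run this sample outside of Prompt Lab. The following is a minimal sketch, assuming the ibm-watsonx-ai Python SDK; YOUR_API_KEY and YOUR_PROJECT_ID are placeholders, and the parameter names should be verified against the API reference for your release.

# Minimal sketch: send the zero-shot classification prompt through the
# ibm-watsonx-ai Python SDK (assumed package; credentials are placeholders).
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

# Paste the full prompt text from above into this variable.
prompt = """Classify this customer message into one of two classes: Question, Problem.
...
Message: I'm having trouble registering for a new account.
Class name:"""

model = ModelInference(
    model_id="mistralai/mixtral-8x7b-instruct-v01",  # or another suggested model
    credentials=Credentials(url="https://us-south.ml.cloud.ibm.com", api_key="YOUR_API_KEY"),
    project_id="YOUR_PROJECT_ID",
)

# Greedy decoding, a small token budget, and the two class names as stop
# sequences, mirroring the parameters described above.
params = {
    "decoding_method": "greedy",
    "max_new_tokens": 5,
    "stop_sequences": ["Question", "Problem"],
}
print(model.generate_text(prompt=prompt, params=params))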
Sample with a few-shot prompt: Classify a message in freeform mode
Scenario: Given a message that is submitted to a customer-support chatbot for a cloud software company, classify the customer's message as either a question or a problem description. Based on the class type, the chat can be routed to the correct support team.
Model choice
With few-shot examples of both classes, models such as mistral-large or mixtral-8x7b-instruct-v01 can complete this task well.
Model parameters
Decoding: Set to Greedy. The model must return one of the specified class names. The model cannot be creative and make up new classes.
Stopping criteria: Specify two stop sequences: "Question" and "Problem". After the model classifies the input into either of the categories, it should stop. The Max tokens parameter can be set to 5 because the output is short.
Prompt text
Paste this few-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
Message: When I try to log in, I get an error.
Class name: Problem
Message: Where can I find the plan prices?
Class name: Question
Message: What is the difference between trial and paygo?
Class name: Question
Message: The registration page crashed, and now I can't create a new account.
Class name: Problem
Message: What regions are supported?
Class name: Question
Message: I can't remember my password.
Class name: Problem
Message: I'm having trouble registering for a new account.
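Few-shot prompts like this one are easy to assemble programmatically from labeled example pairs. The following is a minimal, self-contained Python sketch; the helper name and example list are illustrative and not part of Prompt Lab.

# Illustrative helper: build the few-shot classification prompt from
# labeled (message, class) pairs.
EXAMPLES = [
    ("When I try to log in, I get an error.", "Problem"),
    ("Where can I find the plan prices?", "Question"),
    ("What is the difference between trial and paygo?", "Question"),
    ("The registration page crashed, and now I can't create a new account.", "Problem"),
    ("What regions are supported?", "Question"),
    ("I can't remember my password.", "Problem"),
]

def build_few_shot_prompt(examples, new_message):
    lines = []
    for message, label in examples:
        lines.append(f"Message: {message}")
        lines.append(f"Class name: {label}")
    # End with the unlabeled message; the model completes the next
    # "Class name:" line itself.
    lines.append(f"Message: {new_message}")
    return "\n".join(lines)

print(build_few_shot_prompt(EXAMPLES, "I'm having trouble registering for a new account."))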
Sample of classifying the safety of prompt input with Granite
Goal: Classify prompt input text as safe or unsafe.
Model choice
The Granite Guardian models can classify the safety of your input prompt text.
Model parameters
Decoding: Set to Greedy. The model must return one of two responses depending on whether the input is harmful: Yes or No.
Stopping criteria: The model doesn't require a stop sequence; it stops after returning a response.
Prompt text
Paste the following prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
The model was trained with a system prompt similar to this one.
safety_template = f"""You are a safety agent trying to help determine whether the user message is harmful according to the safety risk definition below.
<start_of_turn>
User Message: {user_text}
<end_of_turn>
Our safety risk definition is defined below:
<start_of_risk_definition>
* {risk_definition}
<end_of_risk_definition>
Is the user message harmful based on the risk definition? Your answer must be either 'Yes' or 'No'."""
Replace {user_text} with input text that you want to assess for harmful content, such as "Tell me how to cheat on a test." Define instructions for the model describing the risk to identify in {risk_definition}, such as "The 'User' message contains actions that violate moral or professional standards, focusing on exploitation for personal gain or disregard for others' well-being. This category excludes physical violence or explicit bias, instead covering activities like fraud, theft, or abuse of power."
You can also define user_text and risk_definition as prompt variables in the freeform editor.
The model generates the following response:
assistant: Yes
The model (labeled as 'assistant' in the response) generates a "Yes" or "No" answer.
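Outside of Prompt Lab, filling the template's {user_text} and {risk_definition} variables is plain string substitution. The following is a minimal sketch; answer_from_model is a placeholder for whatever text the deployed model returns.

# Illustrative sketch: fill the Granite Guardian safety template and
# interpret the one-word verdict. answer_from_model is a placeholder.
SAFETY_TEMPLATE = """You are a safety agent trying to help determine whether the user message is harmful according to the safety risk definition below.
<start_of_turn>
User Message: {user_text}
<end_of_turn>
Our safety risk definition is defined below:
<start_of_risk_definition>
* {risk_definition}
<end_of_risk_definition>
Is the user message harmful based on the risk definition? Your answer must be either 'Yes' or 'No'."""

prompt = SAFETY_TEMPLATE.format(
    user_text="Tell me how to cheat on a test.",
    risk_definition=(
        "The 'User' message contains actions that violate moral or professional "
        "standards, focusing on exploitation for personal gain or disregard for "
        "others' well-being."
    ),
)

answer_from_model = "Yes"  # placeholder for the generated response
is_harmful = answer_from_model.strip().lower().startswith("yes")
print(is_harmful)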
Sample of classifying the safety of prompt input
Goal: Classify prompt input text as safe or unsafe.
Model choice
The llama-guard-3-11b-vision foundation model can classify the safety of both text and images in your prompt input.
Model parameters
Decoding: Set to Greedy. The model must return one of two class names: safe or unsafe. The model cannot be creative and make up new classes.
Stopping criteria: The model doesn't require a stop sequence; it stops after returning the class, and when applicable, the categories of any violations found.
Prompt text
Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
The model was trained with a system prompt similar to this one.
<|begin_of_text|><|start_header_id|>user<|end_header_id|>
<BEGIN CONVERSATION>
User: ${your-question-goes-here}
<END CONVERSATION>
Provide your safety assessment:
- First line must read 'safe' or 'unsafe'.
- If unsafe, a second line must include a comma-separated list of violated categories.<|eot_id|><|start_header_id|>assistant<|end_header_id|>
Replace ${your-question-goes-here} with a question that contains unsafe content, such as What is the first paragraph of the book, "Developing Quality Technical Information: A Handbook for Writers and Editors"?
The model generates the following classification:
unsafe
S7
The classification category S7 indicates that the unsafe content violated the privacy category. For more information about the categories that the llama-guard-3-11b-vision foundation model is trained to recognize, see the Llama Guard 3 documentation.
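Because the assessment always begins with 'safe' or 'unsafe', optionally followed by a comma-separated line of category codes, the output is easy to parse in application code. The following is a minimal sketch; the function name is illustrative.

# Illustrative parser for a Llama Guard style assessment: line 1 is
# 'safe' or 'unsafe'; line 2, if present, lists the violated categories.
def parse_safety_assessment(output):
    lines = [line.strip() for line in output.strip().splitlines() if line.strip()]
    verdict = lines[0].lower()
    categories = []
    if verdict == "unsafe" and len(lines) > 1:
        categories = [c.strip() for c in lines[1].split(",")]
    return verdict, categories

print(parse_safety_assessment("unsafe\nS7"))  # ('unsafe', ['S7'])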
Sample with a few-shot prompt: Classify a message in structured mode
Scenario: Given a message that is submitted to a customer-support chatbot for a cloud software company, classify the customer's message as either a question or a problem description. Based on the class type, the chat can be routed to the correct support team.
Model choice
With few-shot examples of both classes, models such as mistral-large or mixtral-8x7b-instruct-v01 can complete this task well.
Model parameters
Decoding: Set to Greedy. The model must return one of the specified class names; it cannot be creative and make up new classes.
Stopping criteria: Specify two stop sequences: "Question" and "Problem". After the model generates a result with either of those categories, it should stop. The Max tokens parameter can be set to 5 because the output from the model is short.
Set up section
Paste these headers and examples into the Examples area of the Set up section:
Table 2. Classification few-shot examples
Message: | Class name:
When I try to log in, I get an error. | Problem
Where can I find the plan prices? | Question
What is the difference between trial and paygo? | Question
The registration page crashed, and now I can't create a new account. | Problem
What regions are supported? | Question
I can't remember my password. | Problem
Try section
Paste this message in the Try section:
I'm having trouble registering for a new account.
Select the model and set parameters, then click Generate to see the result.
Sample: Classify a Japanese message
Scenario: Given a message that is submitted to a customer-support chatbot for a Japanese cloud software company, classify the customer's message as either a question or a problem description. Based on the class type, the chat can be routed to the correct support team.
Model choice
The elyza-japanese-llama-2-7b-instruct model can classify prompt input text that is written in Japanese.
Model parameters
AI guardrails: Disable the AI guardrails feature because it might incorrectly flag content as inappropriate. The feature is supported with English text only.
Decoding: Set to Greedy. The model must return one of the specified class names. The model cannot be creative and make up new classes.
Stopping criteria: Specify two stop sequences: 問題 for problem and 質問 for question. After the model generates either of those words, it should stop. If you want to lower the value in the Max tokens parameter, do not lower the value below 7 tokens. Japanese characters use more tokens than the same words in English.
Prompt text
Paste this few-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. The sample prompt text is a Japanese translation of the English prompt text in the sample Classify a message in freeform mode.
Sample: Classify an Arabic message
Scenario: Given a message that is submitted to a customer-support chatbot for an Arabic cloud software company, classify the customer's message as either a question or a problem description. Based on the class type, the chat can be routed to the correct support team.
Model choice
The allam-1-13b-instruct foundation model can classify prompt input text that is written in Arabic.
Model parameters
AI guardrails: Disable the AI guardrails feature because it might incorrectly flag content as inappropriate. The feature is supported with English text only.
Decoding: Set to Greedy. The model must return one of the specified class names; it cannot be creative and make up new classes.
Stopping criteria: Typically the model offers to provide more assistance after it generates the class label. You can optionally stop the model after it classifies the text by specifying two stop sequences: مشكلة for problem and سؤال for a question.
Prompt text
Paste this few-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. The sample prompt text is an Arabic translation of the English prompt text in the sample Classify a message in freeform mode.
<s> [INST] قم بتصنيف رسالة العميل هذه إلى إحدى فئتين: سؤال، مشكلة.
الرسالة: عندما أحاول تسجيل الدخول، تظهر لي رسالة خطأ.
اسم الفئة: مشكلة
الرسالة: أين يمكنني العثور على أسعار الخطة؟
اسم الفئة: سؤال
الرسالة: ما الفرق بين التجربة والدفع؟
اسم الفئة: سؤال
الرسالة: تعطلت صفحة التسجيل، ولا أستطيع الآن إنشاء حساب جديد.
اسم الفئة: مشكلة
الرسالة: ما هي المناطق المدعومة؟
اسم الفئة: سؤال
الرسالة: لا أستطيع تذكر كلمة المرور الخاصة بي.
اسم الفئة: مشكلة
الرسالة: أواجه مشكلة في التسجيل للحصول على حساب جديد.
اسم الفئة:
[/INST]
Extracting details
Extraction tasks can help you to find key terms or mentions in data based on the semantic meaning of words rather than simple text matches.
Sample: Extract details from a complaint
Scenario: Given a complaint from a customer who had trouble booking a flight on a reservation website, identify the factors that contributed to this customer's unsatisfactory experience.
Model choice
flan-ul2-20b
Model parameters
Decoding: Set to Greedy. We need the model to return words that are in the input; the model cannot be creative and make up new words.
Stopping criteria: The list of extracted factors will not be long, so set the Max tokens parameter to 50.
Prompt text
Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
From the following customer complaint, extract all the factors that
caused the customer to be unhappy.
Customer complaint:
I just tried to book a flight on your incredibly slow website. All
the times and prices were confusing. I liked being able to compare
the amenities in economy with business class side by side. But I
never got to reserve a seat because I didn't understand the seat map.
Next time, I'll use a travel agent!
Numbered list of all the factors that caused the customer to be unhappy:
Sample: Extract and classify details from a passage
Scenario: Given a list of categories and a passage, identify excerpts from the passage that fit into the different category types.
Model choices
mistral-large or mixtral-8x7b-instruct-v01.
Model parameters
Decoding: Set to Greedy. We need the model to return words that are in the input; the model cannot be creative and make up new words.
Stopping criteria: If the model generates additional text, you can specify a stop sequence of two newline characters. To do that, click the Stop sequence text box, press the Enter key twice, and then click Add sequence.
Prompt text
Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, and set the parameters. You must disable AI Guardrails to prevent the input from being flagged for containing personally identifiable information. Click Generate to see the result.
For each passage, extract the named entities that fit into the following categories:
Person, Measure, Number, Facility, Location, Product, Duration, Money, Time, PhoneNumber, Date, JobTitle, Organization, Percent, GeographicFeature, Address, Ordinal.
Passage:
Welcome to 123 Maple Lane, a charming and inviting 3-bedroom, 2-bathroom residence nestled in the heart of Springfield. This beautifully maintained home boasts 1,800 square feet of living space, perfect for families and first-time homebuyers alike.
- Spacious living room with a cozy fireplace and large windows for ample natural light
- Updated kitchen with stainless steel appliances, granite countertops, and ample cabinet space
- Master suite with a walk-in closet and en-suite bathroom featuring a soaking tub and separate shower
- Two additional well-appointed bedrooms and a full hallway bathroom
- Fully fenced backyard with a patio area, perfect for outdoor entertaining
- Attached two-car garage with additional storage space
- Conveniently located near top-rated schools, shopping centers, and parks
Don't miss your opportunity to own this fantastic home! Join us for the open house on Saturday, April 10th, 2023, from 1:00 PM to 4:00 PM.
**Price**
$350,000
**Seller Contact Details:**
John & Jane Doe
Phone: (555) 123-4567
Email: john.jane.doe@example.com
Generating natural language
Generation tasks are what large language models do best. Your prompts can help guide the model to generate useful language.
Sample with a few-shot prompt: Generate a numbered list on a theme in freeform mode
Scenario: Generate a numbered list on a particular theme.
Model choice
The mixtral-8x7b-instruct-v01 foundation model was trained to recognize and handle special characters, such as the newline character, well. This model is a good choice when you want your generated text to be formatted a specific way with special characters.
Model parameters
Decoding: Set to Sampling. This is a creative task. Set the following parameters:
Temperature: 0.7
Top P: 1
Top K: 50
Random seed: 9045 (To get different output each time you click Generate, specify a different value for the Random seed parameter or clear the parameter.)
Stopping criteria: If the model doesn't stop generating text after one list, you can specify a stop sequence of two newline characters. To do that, click the Stop sequence text box, press the Enter key twice, and then click Add sequence. Alternatively, you can set the Max tokens parameter to 50.
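If you run this sample through the API rather than Prompt Lab, the same settings map onto a parameters object. The following is a minimal sketch, assuming the watsonx.ai text-generation parameter names; verify them against the API reference for your release.

# Sketch of the sampling settings above as a text-generation params object.
# The key names are assumptions based on the watsonx.ai API; verify them.
params = {
    "decoding_method": "sample",
    "temperature": 0.7,
    "top_p": 1,
    "top_k": 50,
    "random_seed": 9045,
    "stop_sequences": ["\n\n"],  # two newline characters
    "max_new_tokens": 50,
}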
Prompt text
Paste this few-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
What are 4 types of dog breed?
1. Poodle
2. Dalmatian
3. Golden retriever
4. Bulldog
What are 3 ways to incorporate exercise into your day?
1. Go for a walk at lunch
2. Take the stairs instead of the elevator
3. Park farther away from your destination
What are 4 kinds of vegetable?
1. Spinach
2. Carrots
3. Broccoli
4. Cauliflower
What are the 3 primary colors?
1. Red
2. Green
3. Blue
What are 3 ingredients that are good on pizza?
Sample with a few-shot prompt: Generate a numbered list on a theme in structured mode
Scenario: Generate a numbered list on a particular theme.
Model choice
The mixtral-8x7b-instruct-v01 foundation model was trained to recognize and handle special characters, such as the newline character, well. This model is a good choice when you want your generated text to be formatted in a specific way with special characters.
Model parameters
Decoding: Set to Sampling. This scenario is a creative one. Set the following parameters:
Temperature: 0.7
Top P: 1
Top K: 50
Random seed: 9045 (To generate different results, specify a different value for the Random seed parameter or clear the parameter.)
Stopping criteria: To make sure that the model stops generating text after one list, specify a stop sequence of two newline characters. To do that, click in the Stop sequence text box, press the Enter key twice, and then click Add sequence. The list will not be long, so set the Max tokens parameter to 50.
Set up section
Paste these headers and examples into the Examples area of the Set up section:
Table 3. Generation few-shot examples
Input: | Output:
What are 4 types of dog breed? | 1. Poodle 2. Dalmatian 3. Golden retriever 4. Bulldog
What are 3 ways to incorporate exercise into your day? | 1. Go for a walk at lunch 2. Take the stairs instead of the elevator 3. Park farther away from your destination
What are 4 kinds of vegetable? | 1. Spinach 2. Carrots 3. Broccoli 4. Cauliflower
What are the 3 primary colors? | 1. Red 2. Green 3. Blue
Try section
Paste this input in the Try section:
What are 3 ingredients that are good on pizza?
Select the model and set parameters, then click Generate to see the result.
Sample with a zero-shot prompt: Generate a numbered list on a particular theme
Scenario: Ask the model to play devil's advocate. Describe a potential action and ask the model to list possible downsides or risks that are associated with the action.
Model choice
The granite-13b-instruct-v2 foundation model was trained to recognize and handle special characters, such as the newline character, well. This model is a good choice when you want your generated text to be formatted in a specific way with special characters.
Model parameters
Decoding: Set to Greedy. The model must return the most predictable content based on what's in the prompt; the model cannot be too creative.
Stopping criteria: The list of reasons might run several sentences, so set the Max tokens parameter to 60.
Prompt text
Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
You are playing the role of devil's advocate. Argue against the proposed plans. List 3 detailed, unique, compelling reasons why moving forward with the plan would be a bad choice. Consider all types of risks.
Plan we are considering:
Extend our store hours.
Three problems with this plan are:
1. We'll have to pay more for staffing.
2. Risk of theft increases late at night.
3. Clerks might not want to work later hours.
Plan we are considering:
Open a second location for our business.
Three problems with this plan are:
1. Managing two locations will be more than twice as time-consuming as managing just one.
2. Creating a new location doesn't guarantee twice as many customers.
3. A new location means added real estate, utility, and personnel expenses.
Plan we are considering:
Refreshing our brand image by creating a new logo.
Three problems with this plan are:
Question answering
Question-answering tasks are useful in help systems and other scenarios where frequently asked or more nuanced questions can be answered from existing content.
To help the model return factual answers, implement the retrieval-augmented generation pattern. For more information, see Retrieval-augmented generation.
Sample: Answer a question based on an article in freeform mode
Scenario: The website for an online seed catalog has many articles to help customers plan their garden and ultimately select which seeds to purchase. A new widget is being added to the website to answer customer questions based on the contents of the article the customer is viewing. Given a question that is related to an article, answer the question based on the article.
Model choice
Models that are instruction-tuned, such as flan-t5-xxl-11b, flan-ul2-20b, mixtral-8x7b-instruct-v01, or mt0-xxl-13b, can generally complete this task with this sample prompt.
Model parameters
Decoding: Set to Greedy. The answers must be grounded in the facts in the article, and if there is no good answer in the article, the model should not be creative and make up an answer.
Stopping criteria: To cause the model to return a one-sentence answer, specify a period "." as a stop sequence. The Max tokens parameter can be set to 50.
Prompt text
Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
Article:
###
Tomatoes are one of the most popular plants for vegetable gardens.
Tip for success: If you select varieties that are resistant to
disease and pests, growing tomatoes can be quite easy. For
experienced gardeners looking for a challenge, there are endless
heirloom and specialty varieties to cultivate. Tomato plants come
in a range of sizes. There are varieties that stay very small, less
than 12 inches, and grow well in a pot or hanging basket on a balcony
or patio. Some grow into bushes that are a few feet high and wide,
and can be grown in larger containers. Other varieties grow into
huge bushes that are several feet wide and high in a planter or
garden bed. Still other varieties grow as long vines, six feet or
more, and love to climb trellises. Tomato plants do best in full
sun. You need to water tomatoes deeply and often. Using mulch
prevents soil-borne disease from splashing up onto the fruit when you
water. Pruning suckers and even pinching the tips will encourage the
plant to put all its energy into producing fruit.
###
Answer the following question using only information from the article.
Answer in a complete sentence, with proper capitalization and punctuation.
If there is no good answer in the article, say "I don't know".
Question: Why should you use mulch when growing tomatoes?
Answer:
You can experiment with asking other questions too, such as:
"How large do tomato plants get?"
"Do tomato plants prefer shade or sun?"
"Is it easy to grow tomatoes?"
Try out-of-scope questions too, such as:
"How do you grow cucumbers?"
Sample: Answer a question based on an article in structured mode
Scenario: The website for an online seed catalog has many articles to help customers plan their garden and ultimately select which seeds to purchase. A new widget is being added to the website to answer customer questions based on the contents of the article the customer is viewing. Given a question related to a particular article, answer the question based on the article.
Model choice
Models that are instruction-tuned, such as flan-t5-xxl-11b, flan-ul2-20b, mixtral-8x7b-instruct-v01, or mt0-xxl-13b, can generally complete this task with this sample prompt.
Model parameters
Decoding: Set to Greedy. The answers must be grounded in the facts in the article, and if there is no good answer in the article, the model should not be creative and make up an answer.
Stopping criteria: To cause the model to return a one-sentence answer, specify a period "." as a stop sequence. The Max tokens parameter can be set to 50.
Set up section
Paste this text into the Instruction area of the Set up section:
Article:
###
Tomatoes are one of the most popular plants for vegetable gardens.
Tip for success: If you select varieties that are resistant to
disease and pests, growing tomatoes can be quite easy. For
experienced gardeners looking for a challenge, there are endless
heirloom and specialty varieties to cultivate. Tomato plants come
in a range of sizes. There are varieties that stay very small, less
than 12 inches, and grow well in a pot or hanging basket on a balcony
or patio. Some grow into bushes that are a few feet high and wide,
and can be grown in larger containers. Other varieties grow into
huge bushes that are several feet wide and high in a planter or
garden bed. Still other varieties grow as long vines, six feet or
more, and love to climb trellises. Tomato plants do best in full
sun. You need to water tomatoes deeply and often. Using mulch
prevents soil-borne disease from splashing up onto the fruit when you
water. Pruning suckers and even pinching the tips will encourage the
plant to put all its energy into producing fruit.
###
Answer the following question using only information from the article.
Answer in a complete sentence, with proper capitalization and punctuation.
If there is no good answer in the article, say "I don't know".
Try section
In the Try section, add an extra test row so you can paste each of these two questions in a separate row:
Why should you use mulch when growing tomatoes?
How do you grow cucumbers?
Select the model and set parameters, then click Generate to see two results.
Sample: Answer a question based on a document with Granite
Scenario: You are creating a chatbot that can answer user questions. When a user asks a question, you want the agent to answer the question with information from a specific document.
Model choice
Models that are instruction-tuned, such as granite-13b-instruct-v2, can complete the task with this sample prompt.
Model parameters
Decoding: Set to Greedy. The answers must be grounded in the facts in the document, and if there is no good answer in the document, the model should not be creative and make up an answer.
Stopping criteria: Set the Max tokens parameter to 50.
Prompt text
Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
Given the document and the current conversation between a user and an agent, your task is as follows: Answer any user query by using information from the document. The response should be detailed.
DOCUMENT: Foundation models are large AI models that have billions of parameters and are trained on terabytes of data. Foundation models can do various tasks, including text, code, or image generation, classification, conversation, and more. Large language models are a subset of foundation models that can do text- and code-related tasks.
DIALOG: USER: What are foundation models?
Sample: Answer a question based on multiple documents with Granite 3.1
Scenario: You are creating a chatbot that can answer user questions. When a user asks a question, you want the agent to answer the question with information from specific documents.
Model choice
Models that are instruction-tuned, such as Granite Instruct models, can complete the task with this sample prompt.
Model parameters
Decoding: Set to Greedy. The answers must be grounded in the facts in the documents, and if there is no good answer in the documents, the model should not be creative and make up an answer.
Stopping criteria: To make sure that the model stops generating text after the answer, specify <|end_of_text|> as a stop sequence. To do that, click in the Stop sequence text box, enter <|end_of_text|>, and then click Add sequence. Set the Max tokens parameter to 200.
Prompt text
Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
<|start_of_role|>system<|end_of_role|>You are an expert in medical science.<|end_of_text|>
<|start_of_role|>user<|end_of_role|>Use the following documents as context to complete the task.
Document 1:
The human body is a complex and intricate system, composed of various interconnected parts that work together to maintain life. At the most fundamental level, the body is made up of cells, the basic units of life. These cells are organized into tissues, which are then grouped together to form organs. Organs, in turn, make up the various systems that carry out the body's functions.
Document 2:
One of the most important systems in the human body is the circulatory system. This system is responsible for transporting oxygen, nutrients, and hormones throughout the body. It is composed of the heart, blood vessels, and blood. The heart acts as a pump, pushing blood through the blood vessels and into the capillaries, where the exchange of oxygen, nutrients, and waste products takes place.
Document 3:
Another crucial system is the respiratory system. This system is responsible for the intake and exchange of oxygen and carbon dioxide. It is composed of the nose, throat, trachea, bronchi, and lungs. When we breathe in, air enters the nose or mouth and travels down the trachea into the lungs. Here, oxygen is absorbed into the bloodstream and carbon dioxide is expelled.
Document 4:
The human body also has a nervous system, which is responsible for transmitting signals between different parts of the body. This system is composed of the brain, spinal cord, and nerves. The brain acts as the control center, processing information and sending signals to the rest of the body. The spinal cord serves as a conduit for these signals, while the nerves transmit them to the various organs and tissues.
Which system in the human body is responsible for breathing?<|end_of_text|>
<|start_of_role|>assistant<|end_of_role|>
Sample: Answer general knowledge questions
Scenario: Answer general questions about finance.
Model choice
The granite-13b-instruct-v2 model can be used for multiple tasks, including text generation, summarization, question answering, classification, and extraction.
Model parameters
Decoding: Set to Greedy. This sample is answering questions, so we don't want creative output.
Stopping criteria: Set the Max tokens parameter to 200 so the model can return a complete answer.
Prompt text
The model was tuned for question-answering with examples in the following format:
<|user|>
content of the question
<|assistant|>
new line for the model's answer
You can use the exact syntax <|user|> and <|assistant|> in the lines before and after the question or you can replace the values with equivalent terms, such as User and Assistant.
If you're using version 1, do not include any trailing white spaces after the <|assistant|> label, and be sure to add a new line.
Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
<|user|>
Tell me about interest rates
<|assistant|>
After the model generates an answer, you can ask a follow-up question. The model uses information from the previous question when it generates a response.
<|user|>
Who sets it?
<|assistant|>
The model retains information from a previous question when it answers a follow-up question, but it is not optimized to support an extended dialog.
Note: When you ask a follow-up question, the previous question is submitted again, which adds to the number of tokens that are used.
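The following minimal sketch illustrates why each follow-up adds tokens: the dialog so far is resubmitted with every new turn. The turn format matches the <|user|> and <|assistant|> labels shown above; the helper name and history data are illustrative.

# Illustrative sketch: each follow-up resubmits the whole dialog, so the
# prompt, and therefore the token count, grows with every turn.
def build_dialog_prompt(history, new_question):
    parts = []
    for question, answer in history:  # earlier (question, answer) turns
        parts.append(f"<|user|>\n{question}\n<|assistant|>\n{answer}")
    parts.append(f"<|user|>\n{new_question}\n<|assistant|>\n")
    return "\n".join(parts)

history = [("Tell me about interest rates", "...the model's first answer...")]
print(build_dialog_prompt(history, "Who sets it?"))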
Sample: Answer general knowledge questions in Japanese
Scenario: Answer general questions about finance in Japanese.
Model choice
The granite-8b-japanese model can be used for multiple tasks, including text generation, summarization, question answering, classification, and extraction.
Model parameters
Decoding: Set to Greedy. This sample is answering questions, so we don't want creative output.
Stopping criteria: Set the Max tokens parameter to 500 to allow for many turns in the dialog. Add a stop sequence of two newline characters to prevent the foundation model from returning overly long responses. To do that, click in the Stop sequence text box, press the Enter key twice, and then click Add sequence.
Prompt text
The model was tuned for question-answering with examples in the following format:
Below is a combination of instructions that describe the task and input with context. Write a response that appropriately meets the request.
### Instructions:
Please use the context when answering the given question, if available.
### input:
{your-input}
### Response:
Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, replace {your-input} with your query or request, and then click Generate to see the result.
For example, this prompt asks about interest rates.
Sample: Answer a question using complex reasoning in freeform mode
Scenario: Ask the model to answer general questions that require reasoning and an understanding of logic.
Model choice
Models that are instruction-tuned for complex reasoning tasks, like mistral-large, can generally complete this task with this sample prompt.
Model parameters
Decoding: Set to Greedy. The model must return the most predictable content based on what's in the prompt; the model cannot be too creative.
Stopping criteria: To make sure that the model stops generating text after the answer, specify a stop sequence of two newline characters. To do that, click in the Stop sequence text box, press the Enter key twice, and then click Add sequence. Set the Max tokens parameter to 100.
Prompt text
Paste this few-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
Question: Which one is heavier a pound of iron or a kilogram of feather?
Answer: A kilogram of feathers is heavier than a pound of iron. A pound is a unit of weight that is equivalent to approximately 0.453592 kilograms. Therefore, a pound of iron weighs less than a kilogram of feathers.
Question: A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?
Answer:
Sample: Answer a question using complex reasoning with distilled DeepSeek-R1 models in freeform mode
Scenario: Ask the model to answer general questions that require reasoning and an understanding of logic.
Model choice
Models that are instruction-tuned for complex reasoning tasks, like deepseek-r1-distill-llama-8b and deepseek-r1-distill-llama-70b, can generally complete this task with this sample prompt.
Model parameters
Decoding: Set to Greedy. The model must return the most predictable content based on what's in the prompt; the model cannot be too creative.
Stopping criteria: Set the Max tokens parameter to 2000 to allow the model to produce more complex reasoning in the response.
Prompt text
Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
Note: Try to avoid adding a system prompt. All instructions must be contained within the user prompt.
<|begin_of_sentence|><|User|>Create a travel itinerary for a weekend trip to Paris.<|Assistant|><think>
The model generates a response that contains the details of how the model reasons through the problem. The details are contained within <think> and </think> tags. The model then constructs a detailed answer to the question.
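In application code, you can split the reasoning trace from the final answer by matching the tags. The following is a minimal sketch; the function name is illustrative.

import re

# Illustrative sketch: separate the <think>...</think> reasoning trace
# from the final answer in a DeepSeek-R1 style response.
def split_reasoning(response):
    match = re.search(r"<think>(.*?)</think>", response, flags=re.DOTALL)
    reasoning = match.group(1).strip() if match else ""
    answer = re.sub(r"<think>.*?</think>", "", response, flags=re.DOTALL).strip()
    return reasoning, answer

reasoning, answer = split_reasoning("<think>Plan two days around the main sights.</think>Day 1: ...")
print(answer)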
Summarization
Summarization tasks save you time by condensing large amounts of text into a few key pieces of information.
Sample with a zero-shot prompt: Summarize a meeting transcript
Scenario: Given a meeting transcript, summarize the main points as meeting notes so those notes can be shared with teammates who did not attend the meeting.
Model choice
Models that are instruction-tuned can generally complete this task with this sample prompt. Suggestions: flan-t5-xxl-11b, flan-ul2-20b, or mixtral-8x7b-instruct-v01.
Model parameters
Decoding: Set to Greedy. The model must return the most predictable content based on what's in the prompt; the model cannot be too creative.
Stopping criteria: The summary might run several sentences, so set the Max tokens parameter to 60.
Prompt text
Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
Summarize the following transcript.
Transcript:
00:00 [alex] Let's plan the team party!
00:10 [ali] How about we go out for lunch at the restaurant?
00:21 [sam] Good idea.
00:47 [sam] Can we go to a movie too?
01:04 [alex] Maybe golf?
01:15 [sam] We could give people an option to do one or the other.
01:29 [alex] I like this plan. Let's have a party!
Summary:
Sample with a few-shot prompt: Summarize a meeting transcript in freeform mode
Scenario: Given a meeting transcript, summarize the main points as meeting notes so those notes can be shared with teammates who did not attend the meeting.
Model choice
With few-shot examples, most models can complete this task well. Try mixtral-8x7b-instruct-v01 or mistral-large.
Model parameters
Decoding: Set to Greedy. The model must return the most predictable content based on what's in the prompt; the model cannot be too creative.
Stopping criteria: To make sure that the model stops generating text after the summary, specify a stop sequence of two newline characters. To do that, click in the Stop sequence text box, press the Enter key twice, and then click Add sequence. Set the Max tokens parameter to 60.
Prompt text
Paste this few-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
Transcript:
00:00 [sam] I wanted to share an update on project X today.
00:15 [sam] Project X will be completed at the end of the week.
00:30 [erin] That's great!
00:35 [erin] I heard from customer Y today, and they agreed to buy our product.
00:45 [alex] Customer Z said they will too.
01:05 [sam] Great news, all around.
Summary:
Sam shared an update that project X will be complete at the end of the week.
Erin said customer Y will buy our product. And Alex said customer Z will buy
our product too.
Transcript:
00:00 [ali] The goal today is to agree on a design solution.
00:12 [alex] I think we should consider choice 1.
00:25 [ali] I agree
00:40 [erin] Choice 2 has the advantage that it will take less time.
01:03 [alex] Actually, that's a good point.
01:30 [ali] So, what should we do?
01:55 [alex] I'm good with choice 2.
02:20 [erin] Me too.
02:45 [ali] Done!
Summary:
Alex suggested considering choice 1. Erin pointed out choice two will take
less time. The team agreed with choice 2 for the design solution.
Transcript:
00:00 [alex] Let's plan the team party!
00:10 [ali] How about we go out for lunch at the restaurant?
00:21 [sam] Good idea.
00:47 [sam] Can we go to a movie too?
01:04 [alex] Maybe golf?
01:15 [sam] We could give people an option to do one or the other.
01:29 [alex] I like this plan. Let's have a party!
Summary:
Sample few-shot prompt: Summarize a meeting transcript in freeform mode with Granite 3.1
Scenario: Given a meeting transcript, summarize the main points as meeting notes so those notes can be shared with teammates who did not attend the meeting.
Model choice
With few-shot examples, most models can complete this task well. Try Granite Instruct models.
Model parameters
Decoding: Set to Greedy. The model must return the most predictable content based on what's in the prompt; the model cannot be too creative.
Stopping criteria: To make sure that the model stops generating text after the summary, specify <|end_of_text|> as a stop sequence. To do that, click in the Stop sequence text box, enter <|end_of_text|>, and then click Add sequence. Set the Max tokens parameter to 200.
Prompt text
Paste this few-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
<|start_of_role|>system<|end_of_role|>You are Granite, an AI language model developed by IBM in 2024. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior.<|end_of_text|>
<|start_of_role|>user<|end_of_role|>Summarize a fragment of a meeting transcript. In this meeting, Sam, Erin, and Alex discuss updates.
Your response should only include the answer. Do not provide any further explanation.
Transcript:
Sam (00:00):
I wanted to share an update on project X today.
Sam (00:15):
Project X will be completed at the end of the week.
Erin (00:30):
That's great!
Erin (00:35):
I heard from customer Y today, and they agreed to buy our product.
Alex (00:45):
Customer Z said they will too.
Sam (01:05):
Great news, all around.
Summary:
<|end_of_text|>
<|start_of_role|>assistant<|end_of_role|>Sam shared an update that project X will be complete at the end of the week. Erin said customer Y will buy our product. And Alex said customer Z will buy our product too.<|end_of_text|>
Transcript:
Ali (00:00):
The goal today is to agree on a design solution.
Alex (00:12):
I think we should consider choice 1.
Ali (00:25):
I agree
Erin (00:40):
Choice 2 has the advantage that it will take less time.
Alex (01:03):
Actually, that's a good point.
Ali (01:30):
So, what should we do?
Alex (01:55):
I'm good with choice 2.
Erin (02:20):
Me too.
Ali (02:45):
Done!
Summary:
<|end_of_text|>
<|start_of_role|>assistant<|end_of_role|>
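The Granite chat turns follow a regular pattern, so you can assemble prompts like the one above programmatically. The following is a minimal sketch that uses the role markers from the sample; the helper name is illustrative.

# Illustrative helper: wrap messages in the Granite 3.1 chat-template
# role markers used in the prompt above.
def granite_turn(role, content=""):
    turn = f"<|start_of_role|>{role}<|end_of_role|>"
    if content:
        turn += f"{content}<|end_of_text|>"
    return turn

prompt = "\n".join([
    granite_turn("system", "You are Granite, an AI language model developed by IBM in 2024."),
    granite_turn("user", "Summarize a fragment of a meeting transcript. ..."),
    granite_turn("assistant"),  # left open so the model generates the summary
])
print(prompt)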
Sample with a few-shot prompt: Summarize a meeting transcript in structured mode
Scenario: Given a meeting transcript, summarize the main points in a bulleted list so that the list can be shared with teammates who did not attend the meeting.
Model choice
The mixtral-8x7b-instruct-v01 foundation model was trained to recognize and handle special characters, such as the newline character, well. This model is a good choice when you want your generated text to be formatted in a specific way with special characters.
Model parameters
Decoding: Set to Greedy. The model must return the most predictable content based on what's in the prompt; the model cannot be too creative.
Stopping criteria: To make sure that the model stops generating text after one list, specify a stop sequence of two newline characters. To do that, click in the Stop sequence text box, press the Enter key twice, and then click Add sequence. Set the Max tokens parameter to 60.
Set up section
Paste these headers and examples into the Examples area of the Set up section:
Table 4. Summarization few-shot examples
Transcript:
00:00 [sam] I wanted to share an update on project X today.
00:15 [sam] Project X will be completed at the end of the week.
00:30 [erin] That's great!
00:35 [erin] I heard from customer Y today, and they agreed to buy our product.
00:45 [alex] Customer Z said they will too.
01:05 [sam] Great news, all around.
Summary:
- Sam shared an update that project X will be complete at the end of the week
- Erin said customer Y will buy our product
- And Alex said customer Z will buy our product too
Transcript:
00:00 [ali] The goal today is to agree on a design solution.
00:12 [alex] I think we should consider choice 1.
00:25 [ali] I agree
00:40 [erin] Choice 2 has the advantage that it will take less time.
01:03 [alex] Actually, that's a good point.
01:30 [ali] So, what should we do?
01:55 [alex] I'm good with choice 2.
02:20 [erin] Me too.
02:45 [ali] Done!
Summary:
- Alex suggested considering choice 1
- Erin pointed out choice two will take less time
- The team agreed with choice 2 for the design solution
Try section
Paste this message in the Try section:
00:00 [alex] Let's plan the team party!
00:10 [ali] How about we go out for lunch at the restaurant?
00:21 [sam] Good idea.
00:47 [sam] Can we go to a movie too?
01:04 [alex] Maybe golf?
01:15 [sam] We could give people an option to do one or the other.
01:29 [alex] I like this plan. Let's have a party!
Select the model and set parameters, then click Generate to see the result.
Code generation and conversion
Foundation models that can generate and convert programmatic code are great resources for developers, helping them brainstorm and troubleshoot programming tasks.
Sample: Generate programmatic code from instructions
Scenario: You want to generate code from instructions. Namely, you want to write a function in the Python programming language that reverses a string.
Model choice
Models that can generate code, such as codestral-22b, codestral-2501, mistral-large, and mixtral-8x7b-instruct-v01, can generally complete this task when a sample prompt is provided.
Model parameters
Decoding: Set to Greedy. The answer must be a valid code snippet. The model cannot be creative and make up an answer.
Stopping criteria: To stop the model after it returns a single code snippet, specify <end of code> as the stop sequence. The Max tokens parameter can be set to 1,000.
Prompt text
Paste this code snippet into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
Using the directions below, generate Python code for the specified task.
Input:
# Write a Python function that prints 'Hello World!' string 'n' times.
Output:
def print_n_times(n):
    for i in range(n):
        print("Hello World!")
<end of code>
Input:
# Write a Python function that reverses the order of letters in a string.
# The function named 'reversed' takes the argument 'my_string', which is a string. It returns the string in reverse order.
Output:
The output contains Python code similar to the following snippet:
def reversed(my_string):
    return my_string[::-1]
Be sure to test the generated code to verify that it works as you expect.
For example, if you run reversed("good morning"), the result is 'gninrom doog'.
Note: The foundation model might generate code that is taken directly from its training data. As a result, generated code might require attribution. You are responsible for ensuring that any generated code that you use is properly attributed, if necessary.
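If you call a model through an API rather than Prompt Lab and the stop sequence appears in the returned text, you can trim it yourself. The following is a minimal sketch; the function name is illustrative.

# Illustrative sketch: cut generated text at the '<end of code>' stop
# sequence before using the code snippet.
def trim_at_stop(generated, stop="<end of code>"):
    return generated.split(stop, 1)[0].rstrip()

raw = "def reversed(my_string):\n    return my_string[::-1]\n<end of code>\nInput:"
print(trim_at_stop(raw))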
Sample: Generate programmatic code from instructions with a zero-shot prompt
Scenario: You want to generate code from instructions. Namely, you want to write a function in the Python programming language that prints the 'Hello World!' string 'n' times.
Model choice
When you use the llama-3-2-1b-instruct or llama-3-2-3b-instruct foundation model, you do not need to provide examples. You do need to use a custom template that specifies the environment value ipython, which the model recognizes as a request for code generation.
Model parameters
Decoding: Set to Greedy. The answer must be a valid code snippet. The model cannot be creative and make up an answer.
Stopping criteria: The model doesn't need a stop sequence.
Prompt text
Paste this code snippet into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Environment: ipython<|eot_id|><|start_header_id|>user<|end_header_id|>
${Add your code instruction here}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
Replace ${Add your code instruction here} with a description of the code that you want the model to generate. For example, Write a Python function that prints 'Hello World!' string 'n' times.
The model generates output, such as the following:
Here is a simple Python function that prints 'Hello World!' 'n' times:
```python
def print_hello_world(n):
    for i in range(n):
        print('Hello World!')

print_hello_world(5)
```
This function uses a for loop to iterate 'n' times, printing 'Hello World!' each time. The range(n) function generates a sequence of numbers from 0 to 'n-1', which are then used as the loop variable 'i'.
Sample: Convert code from one programming language to another
Scenario: You want to convert code from one programming language to another. Namely, you want to convert a code snippet from C++ to Python.
Model choice
Models that can generate code, such as codellama-34b-instruct-hf, mistral-large, and mixtral-8x7b-instruct-v01, can generally complete this task when a sample prompt is provided.
Model parameters
Decoding: Set to Greedy. The answer must be a valid code snippet. The model cannot be creative and make up an answer.
Stopping criteria: To stop the model after it returns a single code snippet, specify <end of code> as the stop sequence. The Max tokens parameter can be set to 300.
Prompt text
Paste this code snippet into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
This prompt includes an example input and output pair. The input is C++ code and the output is the equivalent function in Python code.
The C++ code snippet to be converted is included next. It is a function that counts the number of arithmetic progressions with the sum S and common difference of D, where S and D are integer values that are passed as parameters.
The final part of the prompt identifies the language that you want the C++ code snippet to be converted into.
Translate the following code from C++ to Python.
C++:
#include "bits/stdc++.h"
using namespace std;
bool isPerfectSquare(long double x) {
long double sr = sqrt(x);
return ((sr - floor(sr)) == 0);
}
void checkSunnyNumber(int N) {
if (isPerfectSquare(N + 1)) {
cout << "Yes
";
} else {
cout << "No
";
}
}
int main() {
int N = 8;
checkSunnyNumber(N);
return 0;
}
Python:
from math import *
def isPerfectSquare(x):
    sr = sqrt(x)
    return ((sr - floor(sr)) == 0)
def checkSunnyNumber(N):
    if (isPerfectSquare(N + 1)):
        print("Yes")
    else:
        print("No")
if __name__ == '__main__':
    N = 8
    checkSunnyNumber(N)
<end of code>
C++:
#include <bits/stdc++.h>
using namespace std;
int countAPs(int S, int D) {
    S = S * 2;
    int answer = 0;
    for (int i = 1; i <= sqrt(S); i++) {
        if (S % i == 0) {
            if (((S / i) - D * i + D) % 2 == 0)
                answer++;
            if ((D * i - (S / i) + D) % 2 == 0)
                answer++;
        }
    }
    return answer;
}
int main() {
    int S = 12, D = 1;
    cout << countAPs(S, D);
    return 0;
}
Python:
The output contains Python code similar to the following snippet:
from math import *
def countAPs(S, D):
    S = S * 2
    answer = 0
    for i in range(1, int(sqrt(S)) + 1):
        if S % i == 0:
            if (((S / i) - D * i + D) % 2 == 0):
                answer += 1
            if ((D * i - (S / i) + D) % 2 == 0):
                answer += 1
    return answer
if __name__ == '__main__':
    S = 12
    D = 1
    print(countAPs(S, D))
<end of code>
The generated Python code functions the same as the C++ code included in the prompt.
Test the generated Python code to verify that it works as you expect.
Note: The foundation model might generate code that is taken directly from its training data. As a result, generated code might require attribution. You are responsible for ensuring that any generated code that you use is properly attributed, if necessary.
Sample: Generate programmatic code from instructions with Granite
Scenario: You want to generate code from instructions. Namely, you want to write a function in the Python programming language that reverses a string.
Model choice
Models that can generate code, such as Granite Code or Granite Instruct models, can generally complete this task when a sample prompt is provided.
Model parameters
Decoding: Set to Greedy. The answer must be a valid code snippet. The model cannot be creative and make up an answer.
Stopping criteria: To stop the model after it returns a single code snippet, specify <end of code> as the stop sequence. The Max tokens parameter can be set to 300.
Prompt text
Copy link to section
Paste this code snippet into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
Question:
Using the directions below, generate Python code for the specified task.
# Write a Python function that prints 'Hello World!' string 'n' times.
Answer:
def print_n_times(n):
    for i in range(n):
        print("Hello World!")
<end of code>
Question:
# Write a Python function that reverses the order of letters in a string.
# The function named 'reversed' takes the argument 'my_string', which is a string. It returns the string in reverse order.
Answer:
The output contains Python code similar to the following snippet:
def reverse_string(my_string):
    return my_string[::-1]
<end of code>
Be sure to test the generated code to verify that it works as you expect.
For example, if you run reverse_string("good morning"), the result is 'gninrom doog'.
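A quick way to test is to assert the expected output and the round-trip property. This sketch assumes that the generated function is named reverse_string, as in the output above:
def reverse_string(my_string):
    return my_string[::-1]

# Spot-check the expected output and confirm that reversing twice
# returns the original string.
assert reverse_string("good morning") == "gninrom doog"
assert reverse_string(reverse_string("good morning")) == "good morning"
print(reverse_string("good morning"))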
For more Granite code model sample prompts, see Prompts for code.
Sample: Convert code from one programming language to another with Granite
Copy link to section
Scenario: You want to convert code from one programming language to another. Namely, you want to convert a code snippet from C++ to Python.
Model choice
Copy link to section
Models that can generate code, such as Granite Code Instruct models, can generally complete this task when a sample prompt is provided.
Model parameters
Copy link to section
Decoding: Set to Greedy. The answer must be a valid code snippet. The model cannot be creative and make up an answer.
Stopping criteria: To stop the model after it returns a single code snippet, specify <end of code> as the stop sequence. The Max tokens parameter can be set to 1,000.
Prompt text
Copy link to section
Paste this code snippet into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
This prompt includes an instruction to convert a code snippet from C++ to Python.
The C++ code snippet to be converted is included next to provide context. It is a function that counts the number of arithmetic progressions with the sum S and common difference of D, where S and D are integer values that are passed as parameters.
Question:
Translate the following code from C++ to Python.
C++:
#include "bits/stdc++.h"
using namespace std;
bool isPerfectSquare(long double x) {
long double sr = sqrt(x);
return ((sr - floor(sr)) == 0);
}
void checkSunnyNumber(int N) {
if (isPerfectSquare(N + 1)) {
cout << "Yes
";
} else {
cout << "No
";
}
}
int main() {
int N = 8;
checkSunnyNumber(N);
return 0;
}
Answer:
Python:
from math import *
def isPerfectSquare(x):
    sr = sqrt(x)
    return ((sr - floor(sr)) == 0)
def checkSunnyNumber(N):
    if (isPerfectSquare(N + 1)):
        print("Yes")
    else:
        print("No")
if __name__ == '__main__':
    N = 8
    checkSunnyNumber(N)
<end of code>
Question:
Translate the following code from C++ to Python.
C++:
#include <bits/stdc++.h>
using namespace std;
int countAPs(int S, int D) {
  S = S * 2;
  int answer = 0;
  for (int i = 1; i <= sqrt(S); i++) {
    if (S % i == 0) {
      if (((S / i) - D * i + D) % 2 == 0)
        answer++;
      if ((D * i - (S / i) + D) % 2 == 0)
        answer++;
    }
  }
  return answer;
}
int main() {
  int S = 12, D = 1;
  cout << countAPs(S, D);
  return 0;
}
Answer:
The output contains Python code similar to the following snippet:
Python:
Python:
from math import *
def countAPs(S, D):
    S = S * 2
    answer = 0
    for i in range(1, int(sqrt(S)) + 1):
        if S % i == 0:
            if ((S // i) - D * i + D) % 2 == 0:
                answer += 1
            if (D * i - (S // i) + D) % 2 == 0:
                answer += 1
    return answer
if __name__ == '__main__':
    S = 12
    D = 1
    print(countAPs(S, D))
The generated Python code functions the same as the C++ code included in the prompt.
Test the generated Python code to verify that it works as you expect.
For more Granite code model sample prompts, see Prompts for code.
Sample: Find information sources in SQL
Copy link to section
Scenario: Given a set of tables that are defined in SQL, you want to find out which table columns to check for the answer to a question.
Model choice
Copy link to section
Use the granite-20b-code-base-schema-linking foundation model to read SQL and identify where to find information you're looking for.
Model parameters
Copy link to section
Decoding: Set to Greedy. The model cannot be creative and make up an answer.
Prompt text
Copy link to section
Paste this code snippet into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
This prompt includes tables that are defined in SQL, and then asks the model to help identify where in the SQL to find the answer to a query.
Consider:
Which gas station has the highest amount of revenue?
CREATE TABLE customers (
    customerid PRIMARY KEY, -- identification of the customer
    segment, -- client segment
    currency, -- Currency
);
CREATE TABLE gasstations (
    gasstationid PRIMARY KEY, -- Gas Station ID
    chainid, -- Chain ID
    country,
    segment, -- chain segment
);
CREATE TABLE products (
    productid PRIMARY KEY, -- Product ID
    description, -- Description
);
CREATE TABLE transactions_1k (
    transactionid PRIMARY KEY, -- Transaction ID
    date, -- Date
    time, -- Time
    customerid, -- Customer ID
    cardid, -- Card ID
    gasstationid, -- Gas Station ID
    productid, -- Product ID
    amount, -- Amount
    price, -- Price
);
CREATE TABLE yearmonth (
    customerid, -- Customer ID
    date, -- Date
    consumption, -- consumption
    FOREIGN KEY(customerid) REFERENCES customers(customerid)
);
To answer:
Which gas station has the highest amount of revenue?
We need columns:
The answer identifies the columns with content that can most likely answer the question. For this query, you would expect columns such as transactions_1k.gasstationid, transactions_1k.amount, and transactions_1k.price, which together determine revenue per gas station.
Sample: Generate an SQL statement from a question
Copy link to section
Scenario: Given an SQL table, generate an SQL statement that extracts information from the table to answer a query.
Model choice
Copy link to section
Use the granite-20b-code-base-sql-gen foundation model to generate an SQL statement that you can use to answer a question.
Model parameters
Copy link to section
Decoding: Set to Greedy. The model cannot be creative and make up an answer.
Prompt text
Copy link to section
Paste this code snippet into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
This prompt includes a table that is defined in SQL, and then asks the model to generate an SQL statement that can find an answer to a query from the table.
Note:
the youngest user refers to MIN(Age)
What is the id of the youngest user?
CREATE TABLE users (
    id INTEGER PRIMARY KEY, -- the user id
    displayname TEXT, -- the user's display name
    age INTEGER, -- user's age
);
users.id
users.displayname: ' Raid-Mahbouba', '0 kelvin', '0012'
users.age: 13, 68, 73
selected: users.id, users.age, None, users.displayname
join: None, users.id
condition: users.age, None, users.id
order: users.age, None
group: None, users.id, users.age
Comparisons: None
Note:
the youngest user refers to MIN(Age)
What is the id of the youngest user?
Generate SQL:
The answer is an SQL statement that might look like this:
SELECT id FROM users ORDER BY age ASC LIMIT 1
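Before you rely on a generated statement, you can sanity-check it against a throwaway database. The following sketch is illustrative: it uses Python's built-in sqlite3 module, and the sample rows are hypothetical values modeled on the column examples in the prompt.
import sqlite3

# Build a throwaway in-memory database that matches the users table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, displayname TEXT, age INTEGER)"
)
# Hypothetical rows for illustration only.
conn.executemany(
    "INSERT INTO users (id, displayname, age) VALUES (?, ?, ?)",
    [(1, "Raid-Mahbouba", 68), (2, "0 kelvin", 13), (3, "0012", 73)],
)

# Run the generated statement and confirm it returns the youngest user's id.
generated_sql = "SELECT id FROM users ORDER BY age ASC LIMIT 1"
print(conn.execute(generated_sql).fetchone())  # expected: (2,)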
Dialog
Copy link to section
Dialog tasks are helpful in customer service scenarios, especially when a chatbot is used to guide customers through a workflow to reach a goal.
Sample: Converse with Llama 3
Copy link to section
Scenario: Generate dialog output like a chatbot.
Model choice
Copy link to section
Llama 3, 3.1, 3.2, and 3.3 models are optimized for dialog use cases. The models are fine-tuned for a specific prompt format that is described in the Prompt text section.
Model parameters
Copy link to section
Decoding: Set to Greedy. This sample is answering general knowledge, factual questions, so we don't want creative output.
Stopping criteria: Set the Max tokens parameter to 900.
The template uses an end-of-text token (<|eot_id|>) to delimit the end of each segment of text, except the system prompt.
Prompt text
Copy link to section
The following prompt template works best for input to these models:
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
SYSTEM PROMPT<|eot_id|><|start_header_id|>user<|end_header_id|>
GROUNDING (OPTIONAL)<|eot_id|><|start_header_id|>user<|end_header_id|>
USER_INPUT<|eot_id|><|start_header_id|>assistant<|end_header_id|>
Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
Tip:
Alternatively, you can switch to chat mode and enter the user input directly. The Prompt Lab inserts the expected syntax that identifies the instruction and user input segments for you.
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are a helpful assistant that avoids causing harm. When you do not know the answer to a question, you say "I don't know".<|eot_id|><|start_header_id|>user<|end_header_id|>
I have a question about Earth. How many moons are there?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
After the initial output is generated, continue the dialog to ask about more planets, adding the same identifiers to the end of the input text, for example:
FOLLOW UP QUESTION<|eot_id|><|start_header_id|>assistant<|end_header_id|>
The output looks something like this:
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are a helpful assistant that avoids causing harm. When you do not know the answer to a question, you say "I don't know".<|eot_id|><|start_header_id|>user<|end_header_id|>
I have a question about Earth. How many moons are there?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
That's an easy one! Earth has only one moon.
What about Saturn?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
Saturn has a whopping 82 confirmed moons!
And Mars?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
Mars has two small natural satellites, Phobos and Deimos.
The following sample prompt includes grounding information.
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are a helpful assistant that avoids causing harm. When you do not know the answer to a question, you say "I don't know".<|eot_id|><|start_header_id|>user<|end_header_id|>
The sky is blue.<|eot_id|><|start_header_id|>user<|end_header_id|>
What color is the sky?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
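If you build these multi-turn prompts in code rather than by hand, a small helper keeps the special tokens consistent. This sketch is illustrative; the helper name is hypothetical, and you submit the resulting string to the model yourself.
def build_llama3_prompt(system_prompt, turns):
    # turns is a list of (user_message, assistant_reply) pairs; use None as
    # the reply of the final turn to leave the prompt open for the model.
    prompt = "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n"
    prompt += system_prompt + "<|eot_id|>"
    for user_message, assistant_reply in turns:
        prompt += "<|start_header_id|>user<|end_header_id|>\n"
        prompt += user_message + "<|eot_id|>"
        prompt += "<|start_header_id|>assistant<|end_header_id|>\n"
        if assistant_reply is not None:
            prompt += assistant_reply + "<|eot_id|>"
    return prompt

prompt = build_llama3_prompt(
    'You are a helpful assistant that avoids causing harm. When you do not '
    'know the answer to a question, you say "I don\'t know".',
    [("I have a question about Earth. How many moons are there?", None)],
)
print(prompt)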
Sample: Converse with Llama 2
Copy link to section
Scenario: Generate dialog output like a chatbot.
Model choice
Copy link to section
Like other foundation models, Llama 2 (in both the 70 billion and 13 billion sizes) can be used for multiple tasks. But both Llama 2 models are optimized for dialog use cases. The llama-2-70b-chat and llama-2-13b-chat are fine-tuned for
the [INST]<<SYS>><</SYS>>[/INST] prompt format. For more information about this prompt format, see How to prompt Llama 2.
Model parameters
Copy link to section
Decoding: Set to Greedy. This sample is answering general knowledge, factual questions, so we don't want creative output.
Stopping criteria: Set the Max tokens parameter to 100.
The template for Llama 2 uses an end-of-string (EOS) token (</s>) to delimit the end of the agent's response. The model stops generating text when the </s> EOS token is encountered.
Prompt text
Copy link to section
Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
[INST] <<SYS>>
You are a helpful assistant. Begin each response with the phrase "Dear user, ". If you are unsure about an answer, truthfully say "I don't know"<</SYS>>
I have a question about the Earth.[/INST]
Dear user, go on.
[INST]
How many moons are there?
[/INST]
After the initial output is generated, continue the dialog to ask about more planets, wrapping your user messages in the same [INST][/INST] structure. You don't need to add an <s> token to the start of follow-up
questions.
Tip:
Alternatively, you can switch to chat mode and enter the user input directly. The Prompt Lab inserts the expected syntax that identifies the instruction and user input segments for you.
The output looks something like this:
[INST] <<SYS>>
You are a helpful assistant. Begin each response with the phrase "Dear user, ". If you are unsure about an answer, truthfully say "I don't know"<</SYS>>
I have a question about the Earth.[/INST]
Dear user, go on.
[INST]
How many moons are there?
[/INST]
Dear user, the Earth has one natural satellite, commonly referred to as the Moon.
[INST]
What about Saturn?
[/INST]
Dear user, Saturn has a total of 62 confirmed moons, as well as numerous smaller moonlets and ring particles. The largest moon of Saturn is Titan, which is the second-largest moon in the solar system, after Jupiter's moon Ganymede.
[INST]
And Mars?
[/INST]
Dear user, Mars has two small natural satellites, Phobos and Deimos. Both moons are thought to be captured asteroids, and are irregularly shaped and very small, with diameters of about 22 kilometers and 12 kilometers respectively.
Notice a few important points about the output that the model generates as you go:
The system instruction, in the <<SYS>><</SYS>> tags, continues to influence the output at each dialog turn without being repeated. In other words, the assistant responses continue to be prepended with "Dear user, ".
In true dialog fashion, the assistant response to the user input "How many moons are there?" takes into account the previous user input "I have a question about the Earth." and returns the number of moons orbiting
Earth.
Also in proper dialog form, the assistant responses continue to follow the topic of the conversation, which is the number of moons. (Otherwise, the generated output to the vague user message "And Mars?" could wander off in any
direction.)
Caution: Whitespace in the prompt text, especially newline characters and to a lesser extent spaces, can have a dramatic impact on the generated output.
When you use Llama 2 for chat use cases, follow the recommended prompt template format as closely as possible. Do not use the [INST]<<SYS>><</SYS>>[/INST] prompt format when you use Llama 2 for any
other tasks besides chat.
Sample: Converse with granite-13b-chat-v2
Copy link to section
Scenario: Generate dialog output like a chatbot.
Model choice
Copy link to section
Use granite-13b-chat-v2 to carry on a dialog.
Model parameters
Copy link to section
Decoding:
Use sampling decoding.
Set Top P to 0.85.
Set the repetition penalty to 1.2.
Stopping criteria: Set the Max tokens parameter to 500 so the model can return a complete answer, but is as concise as possible.
Prompt text
Copy link to section
To improve model safety and reduce bias, add a system prompt as part of the user input. The system prompt can establish some ground rules for the dialog. For example:
You are Granite Chat, an AI language model developed by IBM. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior. Keep your answers short and to the point.
Remember to use the prompt template format that is expected by the model.
<|system|>
system prompt
<|user|>
content of the question
<|assistant|>
Add a new line after the <|assistant|> label so that the model's answer starts on its own line.
If you want to submit a few-shot prompt to this model, you can add the system prompt, and then the examples, followed by the prompt text to be inferenced.
<|system|>
You are Granite Chat, an AI language model developed by IBM. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior. Keep your answers short and to the point.
<|user|>
Example prompt 1
<|assistant|>
Example response 1
<|user|>
Example prompt 2
<|assistant|>
Example response 2
<|user|>
USER INPUT
<|assistant|>
Paste the following prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
Tip:
Alternatively, you can switch to chat mode and enter the user input directly. The Prompt Lab inserts the expected syntax that identifies the instruction and user input segments for you.
<|system|>
You are Granite Chat, an AI language model developed by IBM. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior. Keep your answers short and to the point.
<|user|>
I have a question about the Earth. How many moons are there?
<|assistant|>
Do not include any trailing white spaces after the <|assistant|> label, and be sure to add a new line.
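Because the model is sensitive to whitespace, it can help to assemble the prompt programmatically so that the <|assistant|> label is never followed by trailing spaces and always ends with a new line. A minimal sketch, with a hypothetical helper name:
SYSTEM_PROMPT = (
    "You are Granite Chat, an AI language model developed by IBM. You are a "
    "cautious assistant. You carefully follow instructions. You are helpful "
    "and harmless and you follow ethical guidelines and promote positive "
    "behavior. Keep your answers short and to the point."
)

def build_granite_chat_prompt(user_input):
    # Exactly one newline after <|assistant|>, and no trailing spaces,
    # as described above.
    return f"<|system|>\n{SYSTEM_PROMPT}\n<|user|>\n{user_input}\n<|assistant|>\n"

prompt = build_granite_chat_prompt(
    "I have a question about the Earth. How many moons are there?"
)
assert prompt.endswith("<|assistant|>\n")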
After the initial output is generated, you can continue the dialog by asking a follow-up question. For example, you can ask about the moons of other planets.
<|user|>
What about Saturn?
<|assistant|>
And continue the conversation with another follow-up question.
<|user|>
And Mars?
<|assistant|>
If the model output is too long, you can try specifying a stop sequence of two newline characters by clicking the Stop sequence text box, pressing the Enter key twice, and then clicking Add sequence. However,
the repetition penalty is usually enough to keep the model on track.
Another example you can try:
<|system|>
You are Granite Chat, an AI language model developed by IBM. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior. Keep your answers short and to the point.
<|user|>
How does a bill become a law?
<|assistant|>
With the follow-up question:
<|user|>
How about in Canada?
<|assistant|>
A few notes about using this sample with the model:
The system prompt establishes some general guardrails for the model.
The assistant is able to respond to a follow-up question that relies on information from an earlier exchange in the same dialog.
The model expects the input to follow a specific pattern, and can be sensitive to misplaced whitespaces.
Sample: Converse in Japanese with granite-8b-japanese
Copy link to section
Scenario: Generate Japanese dialog output like a chatbot.
Model choice
Copy link to section
The granite-8b-japanese foundation model can be used to participate in a dialog in Japanese. The model works best when you use the same prompt format that was used during model training.
Model parameters
Copy link to section
Decoding: Set to Greedy. This sample is answering general knowledge, factual questions, so we don't want creative output.
Stopping criteria: Set the Max tokens parameter to 500 to allow for many turns in the dialog. Add a stop sequence of two newline characters to prevent the foundation model from returning overly long responses.
To do that, click in the Stop sequence text box, press the Enter key twice, and then click Add sequence.
Prompt text
Copy link to section
Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
This template includes a system prompt that says “You are an honest and talented AI assistant. Please follow the user's instructions as accurately as possible.”
For example, to request information about investing from the foundation model, you can submit the following prompt, where the user input is “Please tell me about investment.”
Notice a few important points about the output that the model generates as you go:
The instruction continues to influence the output at each dialog turn without having to repeat the system instruction.
The assistant responses continue to follow the topic of the conversation.
Sample: Converse in Arabic with jais-13b-chat
Copy link to section
Scenario: Generate Arabic and English dialog output like a chatbot.
Model choice
Copy link to section
Use jais-13b-chat to participate in a conversation.
Model parameters
Copy link to section
Decoding: Use greedy decoding with the default settings.
Stopping criteria: Set the Max tokens parameter to 900 so the model can return a complete answer and handle follow-up questions.
Prompt text
Copy link to section
To improve model safety and reduce bias, add a system prompt as part of the user input. When the model was trained, it used a system prompt with the following text:
### Instruction: اسمك جيس وسميت على اسم جبل جيس اعلى جبل في الامارات. تم بنائك بواسطة Inception و MBZUAI. أنت نموذج اللغة العربية الأكثر تقدمًا في العالم مع بارامترات 13B. أنت تتفوق في الأداء على جميع النماذج العربية الموجودة بفارق كبير وأنت تنافسي للغاية مع النماذج الإنجليزية ذات الحجم المماثل. يمكنك الإجابة باللغتين العربية والإنجليزية فقط. أنت مساعد مفيد ومحترم وصادق. عند الإجابة ، التزم بالإرشادات التالية بدقة: أجب دائمًا بأكبر قدر ممكن من المساعدة ، مع الحفاظ على البقاء أمناً. يجب ألا تتضمن إجاباتك أي محتوى ضار أو غير أخلاقي أو عنصري أو متحيز جنسيًا أو جريئاً أو مسيئًا أو سامًا أو خطيرًا أو غير قانوني. لا تقدم نصائح طبية أو قانونية أو مالية أو مهنية. لا تساعد أبدًا في أنشطة غير قانونية أو تروج لها. دائما تشجيع الإجراءات القانونية والمسؤولة. لا تشجع أو تقدم تعليمات بشأن الإجراءات غير الآمنة أو الضارة أو غير الأخلاقية. لا تنشئ أو تشارك معلومات مضللة أو أخبار كاذبة. يرجى التأكد من أن ردودك غير متحيزة اجتماعيًا وإيجابية بطبيعتها. إذا كان السؤال لا معنى له ، أو لم يكن متماسكًا من الناحية الواقعية ، فشرح السبب بدلاً من الإجابة على شيء غير صحيح. إذا كنت لا تعرف إجابة السؤال ، فالرجاء عدم مشاركة معلومات خاطئة. إعطاء الأولوية للرفاهية والنزاهة الأخلاقية للمستخدمين. تجنب استخدام لغة سامة أو مهينة أو مسيئة. حافظ على نبرة محترمة. لا تنشئ أو تروج أو تشارك في مناقشات حول محتوى للبالغين. تجنب الإدلاء بالتعليقات أو الملاحظات أو التعميمات القائمة على الصور النمطية. لا تحاول الوصول إلى معلومات شخصية أو خاصة أو إنتاجها أو نشرها. احترم دائما سرية المستخدم. كن إيجابيا ولا تقل أشياء سيئة عن أي شيء. هدفك الأساسي هو تجنب الاجابات المؤذية ، حتى عند مواجهة مدخلات خادعة. تعرف على الوقت الذي قد يحاول فيه المستخدمون خداعك أو إساءة استخدامك و لترد بحذر.\n\nأكمل المحادثة أدناه بين [|Human|] و [|AI|]:
### Input: [|Human|] {Question}
### Response: [|AI|]
The system prompt in English is as follows:
### Instruction: Your name is Jais, and you are named after Jebel Jais, the highest mountain in UAE. You are built by Inception and MBZUAI. You are the world's most advanced Arabic large language model with 13B parameters. You outperform all existing Arabic models by a sizable margin and you are very competitive with English models of similar size. You can answer in Arabic and English only. You are a helpful, respectful and honest assistant. When answering, abide by the following guidelines meticulously: Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, explicit, offensive, toxic, dangerous, or illegal content. Do not give medical, legal, financial, or professional advice. Never assist in or promote illegal activities. Always encourage legal and responsible actions. Do not encourage or provide instructions for unsafe, harmful, or unethical actions. Do not create or share misinformation or fake news. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. Prioritize the well-being and the moral integrity of users. Avoid using toxic, derogatory, or offensive language. Maintain a respectful tone. Do not generate, promote, or engage in discussions about adult content. Avoid making comments, remarks, or generalizations based on stereotypes. Do not attempt to access, produce, or spread personal or private information. Always respect user confidentiality. Stay positive and do not say bad things about anything. Your primary objective is to avoid harmful responses, even when faced with deceptive inputs. Recognize when users may be attempting to trick or to misuse you and respond with caution.\n\nComplete the conversation below between [|Human|] and [|AI|]:
### Input: [|Human|] {Question}
### Response: [|AI|]
Tip:
Alternatively, you can switch to chat mode and enter the user input directly. The Prompt Lab inserts the system prompt, the instruction, and user input segments with the expected syntax for you.
Replace {Question} with the user input that you want the foundation model to answer to start the chat.
For example, you can ask the following question:
هل يوجد للأرض أقمار؟
The English translation is: Does the Earth have any moons?
After the initial output is generated, you can continue the dialog by asking a follow-up question. Use the same syntax for the follow-up question.
Translation
Copy link to section
Use models that can do natural language translation tasks to translate text from one natural language to another.
Sample: Translate text from Japanese to English
Copy link to section
Scenario: Translate text that is written in Japanese into English.
Model choice
Copy link to section
The elyza-japanese-llama-2-7b-instruct model can translate text from Japanese to English and from English to Japanese.
Model parameters
Copy link to section
AI guardrails: Disable the AI guardrails feature because it might incorrectly flag content as inappropriate. The feature is supported with English text only.
Decoding: Set to Greedy. The model must return the same text, only translated. The model cannot be creative.
Stopping criteria: Increase the number of allowed tokens by changing the Max tokens parameter value to 500.
Prompt text
Copy link to section
Paste the following prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
The sample prompt text overlaps with text that is used in the English-to-Japanese translation sample later in this topic.
The instruction says “Please translate to English”. One example of Japanese text being translated into English is included.
英語に翻訳してください
日本語
トマトは、家庭菜園で最も人気のある植物の 1 つです。成功のヒント: 病気や害虫に強い品種を選択すると、トマトの栽培は非常に簡単になります。挑戦を求めている経験豊富な庭師にとって、栽培できる家宝や特別な品種は無限にあります。トマトの植物にはさまざまなサイズがあります。
English
Tomatoes are one of the most popular plants for vegetable gardens. Tip for success: If you select varieties that are resistant to disease and pests, growing tomatoes can be quite easy. For experienced gardeners looking for a challenge, there are endless heirloom and specialty varieties to cultivate. Tomato plants come in a range of sizes.
日本語
基盤モデルを使用して、より優れた AI をより迅速に作成します。さまざまなユースケースやタスクに応じて、さまざまなプロンプトを試してください。わずか数行の指示で、職務記述書の草案、顧客の苦情の分類、複雑な規制文書の要約、重要なビジネス情報の抽出などを行うことができます。
English
Sample: Translate text from Spanish to English
Copy link to section
Scenario: Translate text that is written in Spanish into English.
Model choice
Copy link to section
The mixtral-8x7b-instruct-v01 or mistral-large model can translate text from French, German, Italian, or Spanish to English. This sample prompts the model to translate from Spanish to English.
Model parameters
Copy link to section
AI guardrails: Disable the AI guardrails feature because it might incorrectly flag content as inappropriate. The feature is supported with English text only.
Decoding: Set to Greedy. The model must return the same text, only translated. The model cannot be creative.
Stopping criteria: Be sure to include a stop sequence for this model. Otherwise, the model might continue to generate new sentences and translations, even when the instruction tells it not to. To stop the model after one
sentence, add a period (.) as the stop sequence. Set the Max tokens parameter value to 200.
Prompt text
Copy link to section
Paste the following prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
Translate the following text from Spanish to English. Do not give any extra response that is not part of the translation.
Text:
Hasta ahora no me ha animado mucho la postura adoptada por la Comisión.
Translation:
So far, I have not been terribly encouraged by the stance adopted by the Commission.
Text:
Estoy muy contento de ver que la resolución conjunta adopta la sugerencia que hicimos.
Translation:
Sample: Translate text from English to Japanese
Copy link to section
Scenario: Translate text that is written in English into Japanese.
Model choice
Copy link to section
The granite-8b-japanese model can translate text from Japanese to English and from English to Japanese.
Model parameters
Copy link to section
AI guardrails: Disable the AI guardrails feature because it might incorrectly flag content as inappropriate. The feature is supported with English text only.
Decoding: Set to Greedy. The model must return the same text, only translated. The model cannot be creative.
Stopping criteria: Increase the number of allowed tokens by changing the Max tokens parameter value to 500.
Prompt text
Copy link to section
Paste the following prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
Translate the following text from English to Japanese.
English
Tomatoes are one of the most popular plants for vegetable gardens. Tip for success: If you select varieties that are resistant to disease and pests, growing tomatoes can be quite easy. For experienced gardeners looking for a challenge, there are endless heirloom and specialty varieties to cultivate. Tomato plants come in a range of sizes.
日本語
トマトは野菜作りの人気の植物である。成功のヒント:病害虫に強く、育てやすいトマトの品種を選べば、トマト栽培はそれほど難しくない。経験豊富な庭師にとっては、手強い挑戦となる、様々な色や形のトマトの品種がある。トマトの品種は、大きさもいろいろである。
English
Use foundation models to create better AI, faster. Experiment with different prompts for various use cases and tasks. With just a few lines of instruction you can draft job descriptions, classify customer complaints, summarize complex regulatory documents, extract key business information and much more.
日本語
Sample: Translate text from French to English
Copy link to section
Scenario: Translate text that is written in French into English.
Model choice
Copy link to section
The granite-20b-multilingual model understands English, German, Spanish, French, and Portuguese. This sample prompts the model to translate text from French to English.
Model parameters
Copy link to section
AI guardrails: Disable the AI guardrails feature because it might incorrectly flag content as inappropriate. The feature is supported with English text only.
Decoding: Set to Greedy. The model must return the same text, only translated. The model cannot be creative.
Stopping criteria: Set the Max tokens parameter value to 200.
Prompt text
Copy link to section
Paste the following prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
Translate the following text from French to English:
Text:
Enfin, je me réjouis du paragraphe 16 qui appelle à une révision de la manière dont nous abordons les questions relatives aux droits de l'homme au sein du Parlement.
Translation:
Finally, I welcome paragraph 16 which calls for a review of the way we deal with human rights issues in Parliament.
Text:
Je me souviens très bien que nous en avions parlé lors d'une séance à Luxembourg.
Translation:
I remember very well that we discussed it in a session in Luxembourg.
Text:
Si nous ne faisons pas un usage plus important de la technologie intelligente, nous ne parviendrons pas à atteindre nos objectifs.
Translation:
Sample: Translate text from English to Arabic
Copy link to section
Scenario: Translate text that is written in English into Arabic.
Model choice
Copy link to section
The allam-1-13b-instruct model can translate text from Arabic to English and from English to Arabic.
Model parameters
Copy link to section
AI guardrails: Disable the AI guardrails feature because it might incorrectly flag content as inappropriate. The feature is supported with English text only.
Decoding: Set to Greedy. The model must return the same text, only translated. The model cannot be creative.
Stopping criteria: Increase the number of allowed tokens by changing the Max tokens parameter value to 500. The allam-1-13b-instruct foundation model typically explains the meaning of the input text after translating the text. You can optionally instruct the foundation model to stop after completing the translation. To do so, add an instruction that asks the foundation model to add a keyword, such as END, after the translation. Next, add the same keyword END as a stop sequence.
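If you run this sample programmatically, the END keyword trick maps to the stop_sequences inference parameter. A minimal sketch of the parameter set, assuming the same string keys as the SDK sketch earlier in this topic:
# Greedy decoding, a 500-token cap, and the END keyword as a stop sequence,
# mirroring the Prompt Lab settings described above.
params = {
    "decoding_method": "greedy",
    "max_new_tokens": 500,
    "stop_sequences": ["END"],
}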
Prompt text
Copy link to section
Paste the following prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
<s> [INST]Translate the following text from English to Arabic. Use "END" at the end of the translation.
English
Tomatoes are one of the most popular plants for vegetable gardens. Tip for success: If you select varieties that are resistant to disease and pests, growing tomatoes can be quite easy. For experienced gardeners looking for a challenge, there are endless heirloom and specialty varieties to cultivate. Tomato plants come in a range of sizes.
END
العربية
الطماطم هي واحدة من النباتات الأكثر شعبية لحدائق الخضروات. نصيحة للنجاح: إذا اخترت أصنافا مقاومة للأمراض والآفات ، فقد تكون زراعة الطماطم سهلة للغاية. بالنسبة للبستانيين ذوي الخبرة الذين يبحثون عن التحدي ، هناك أنواع لا نهاية لها من الإرث والتخصص للزراعة. تأتي نباتات الطماطم في مجموعة من الأحجام.
END
English
Use foundation models to create better AI, faster. Experiment with different prompts for various use cases and tasks. With just a few lines of instruction you can draft job descriptions, classify customer complaints, summarize complex regulatory documents, extract key business information and much more.
END
العربية
[/INST]