There are many factors to consider when you choose a foundation model to use for inferencing in a generative AI project.
Determine which factors are most important for you and your organization.
- Tasks the model can do
- Multimodal foundation models
- Languages supported
- Tuning options for customizing the model
- License and IP indemnity terms
- Model attributes, such as size, architecture, and context window length
After you have a short list of models that best fit your needs, you can test the models to see which ones consistently return the results you want.
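One way to run those tests is to send the same prompt to each short-listed model programmatically and compare the results. The following minimal sketch assumes the ibm-watsonx-ai Python SDK; the credentials, project ID, and prompt are placeholders, and model IDs should be copied from the model details for the models on your list.

```python
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

# Placeholder credentials and project ID: replace with your own values.
credentials = Credentials(
    url="https://us-south.ml.cloud.ibm.com",
    api_key="YOUR_IBM_CLOUD_API_KEY",
)

prompt = "Summarize the following meeting notes in two sentences:\n..."

# Send the same prompt to each short-listed model and compare the output.
for model_id in ["ibm/granite-13b-instruct-v2", "mistralai/mixtral-8x7b-instruct-v01"]:
    model = ModelInference(
        model_id=model_id,
        credentials=credentials,
        project_id="YOUR_PROJECT_ID",
    )
    output = model.generate_text(prompt=prompt, params={"max_new_tokens": 200})
    print(f"--- {model_id} ---\n{output}\n")
```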
Foundation models that support your use case
To get started, find foundation models that can do the type of task that you want to complete.
The following table shows the types of tasks that the foundation models in IBM watsonx.ai support. A checkmark (✓) indicates that the task that is named in the column header is supported by the foundation model. For some of the tasks, you can click a link to go to a sample prompt for the task. Alternatively, see Sample prompts to review various prompt samples that are grouped by task type.
Model | Conversation | Code generation and conversion | Tool interaction from Chat API | Generation (Classification, Extraction, Q&A, Summarization) | Retrieval-augmented generation (RAG) | Translation |
---|---|---|---|---|---|---|
granite-13b-instruct-v2 | ✓ Chat from Prompt Lab | | | ✓ Generation sample | ✓ RAG from Prompt Lab | |
granite-7b-lab | ✓ Chat from Prompt Lab | | | ✓ Summarization sample | ✓ • RAG from Prompt Lab • RAG from AutoAI | |
granite-8b-japanese | | | | ✓ Q&A sample | | ✓ Translation sample |
granite-20b-multilingual | ✓ Chat from Prompt Lab | | | ✓ | ✓ RAG from Prompt Lab | ✓ Translation sample |
granite-3-2b-instruct | ✓ Samples: • Chat from Prompt Lab • From Chat API: Sample | ✓ Code sample | ✓ | ✓ | | |
granite-3-8b-instruct | ✓ Samples: • Chat from Prompt Lab • From Chat API: Sample | ✓ Code sample | ✓ Tool-calling sample | ✓ | ✓ | |
granite-guardian-3-2b | ✓ Chat from Prompt Lab | | | ✓ | ✓ RAG from Prompt Lab | |
granite-guardian-3-8b | ✓ Chat from Prompt Lab | | | ✓ | ✓ RAG from Prompt Lab | |
granite-3b-code-instruct | ✓ Chat from Prompt Lab | ✓ Code sample | | | | |
granite-8b-code-instruct | ✓ Chat from Prompt Lab | ✓ Code sample | | | | |
granite-20b-code-instruct | ✓ Samples: • Chat from Prompt Lab • From Chat API: Sample | ✓ Code sample | | | | |
granite-20b-code-base-schema-linking | | ✓ Text-to-SQL code | | | | |
granite-20b-code-base-sql-gen | | ✓ Text-to-SQL code | | | | |
granite-34b-code-instruct | ✓ Samples: • Chat from Prompt Lab • From Chat API: Sample | ✓ Code sample | | | | |
allam-1-13b-instruct | ✓ Chat from Prompt Lab | | | ✓ Classification sample | | ✓ Translation sample |
codellama-34b-instruct-hf | | ✓ Code sample | | | | |
elyza-japanese-llama-2-7b-instruct | | | | ✓ Classification sample | | ✓ Translation sample |
flan-t5-xl-3b | | | | ✓ | ✓ RAG from Prompt Lab | |
flan-t5-xxl-11b | | | | ✓ Samples: • Q&A • Classification • Summarization | ✓ RAG from Prompt Lab | ✓ |
flan-ul2-20b | | | | ✓ Samples: • Q&A • Classification • Extraction • Summarization | ✓ • RAG from Prompt Lab • RAG from AutoAI | |
jais-13b-chat | ✓ Chat from Prompt Lab: Sample chat | | | ✓ | ✓ | |
llama-3-3-70b-instruct | ✓ Samples: • Chat from Prompt Lab: Sample chat • From Chat API: Sample | | ✓ Tool-calling sample | ✓ | ✓ RAG from Prompt Lab | |
llama-3-2-1b-instruct | ✓ • Chat from Prompt Lab: Sample chat • From Chat API: Sample | ✓ Code sample | ✓ Tool-calling sample | ✓ | ✓ RAG from Prompt Lab | |
llama-3-2-3b-instruct | ✓ • Chat from Prompt Lab: Sample chat • From Chat API: Sample | ✓ Code sample | | ✓ | ✓ RAG from Prompt Lab | |
llama-3-2-11b-vision-instruct | ✓ Samples: • Chat from Prompt Lab: Chat with image example • From Chat API: Sample | | ✓ Tool-calling sample | ✓ | ✓ RAG from Prompt Lab | |
llama-3-2-90b-vision-instruct | ✓ Samples: • Chat from Prompt Lab: Chat with image example • From Chat API: Sample | | ✓ Tool-calling sample | | ✓ RAG from Prompt Lab | |
llama-3-1-8b | ✓ Chat from Prompt Lab: Sample chat | | ✓ | ✓ | ✓ Samples: • RAG from Prompt Lab | |
llama-3-1-8b-instruct | ✓ Chat from Prompt Lab: Sample chat | | ✓ Tool-calling sample | ✓ | ✓ Samples: • RAG from Prompt Lab • RAG from AutoAI | |
llama-3-1-70b-instruct | ✓ Samples: • Chat from Prompt Lab: Sample chat • From Chat API: Sample | | ✓ Tool-calling sample | ✓ | ✓ • RAG from Prompt Lab • RAG from AutoAI | |
llama-3-405b-instruct | ✓ • Chat from Prompt Lab: Sample chat • From Chat API: Sample | | ✓ Tool-calling sample | ✓ | ✓ RAG from Prompt Lab | |
llama-3-8b-instruct | ✓ Samples: • Chat from Prompt Lab: Sample chat • From Chat API: Sample | | | | ✓ RAG from Prompt Lab | |
llama-3-70b-instruct | ✓ Samples: • Chat from Prompt Lab: Sample chat • From Chat API: Sample | | | ✓ | ✓ • RAG from Prompt Lab • RAG from AutoAI | |
llama-2-13b-chat | ✓ Chat from Prompt Lab: Sample chat | | | ✓ | ✓ RAG from Prompt Lab | |
llama-guard-3-11b-vision | ✓ Samples: • Chat from Prompt Lab: Chat with image example • From Chat API: Sample | | | ✓ Classification sample | ✓ RAG from Prompt Lab | |
mistral-large | ✓ Samples: • Chat from Prompt Lab • From Chat API: Sample | ✓ Code sample | ✓ Tool-calling sample | ✓ Samples: • Classification • Extraction • Summarization | ✓ • RAG from Prompt Lab • RAG from AutoAI | ✓ Translation |
mixtral-8x7b-base | ✓ Chat from Prompt Lab | ✓ Code sample | | ✓ Samples: • Classification • Extraction • Generation • Summarization | ✓ • RAG from Prompt Lab | ✓ Translation sample |
mixtral-8x7b-instruct-v01 | ✓ Chat from Prompt Lab | ✓ Code sample | | ✓ Samples: • Classification • Extraction • Generation • Summarization | ✓ • RAG from Prompt Lab • RAG from AutoAI | ✓ Translation sample |
mistral-nemo-instruct-2407 | ✓ Chat from Prompt Lab | ✓ Code sample | | ✓ Samples: • Classification • Extraction • Generation • Summarization | ✓ • RAG from Prompt Lab | ✓ Translation sample |
mt0-xxl-13b | | | | ✓ Samples: • Classification • Q&A | ✓ RAG from Prompt Lab | |
pixtral-12b | ✓ Chat from Prompt Lab: Chat with image example | ✓ | | ✓ Samples: • Classification • Extraction • Summarization | ✓ RAG from Prompt Lab | |
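The Tool interaction from Chat API column indicates models that can decide when to call a tool that you describe in a chat request. The following minimal sketch assumes the chat method of the ibm-watsonx-ai Python SDK and the tool schema that is described in the watsonx.ai chat API reference; the get_stock_price function, credentials, and IDs are hypothetical placeholders.

```python
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

model = ModelInference(
    model_id="mistralai/mistral-large",
    credentials=Credentials(url="https://us-south.ml.cloud.ibm.com", api_key="YOUR_IBM_CLOUD_API_KEY"),
    project_id="YOUR_PROJECT_ID",
)

# Describe a hypothetical tool that the model can choose to call.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_stock_price",
            "description": "Look up the current price of a stock by ticker symbol.",
            "parameters": {
                "type": "object",
                "properties": {"ticker": {"type": "string"}},
                "required": ["ticker"],
            },
        },
    }
]

messages = [{"role": "user", "content": "What is IBM trading at right now?"}]
response = model.chat(messages=messages, tools=tools)

# If the model decides to use the tool, the response message contains a tool
# call with arguments that your application passes to the real function.
print(response["choices"][0]["message"])
```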
Multimodal foundation models
Multimodal foundation models are capable of processing and integrating information from many modalities or types of data. These modalities can include text, images, audio, video, and other forms of sensory input.
The multimodal foundation models that are available from watsonx.ai can do the following types of tasks:
- Image-to-text generation: useful for visual question answering, interpretation of charts and graphs, captioning of images, and more (see the sketch after the following table).
The following table lists the available foundation models that support modalities other than text-in and text-out.
Model | Input modalities | Output modalities |
---|---|---|
llama-3-2-11b-vision-instruct | image, text | text |
llama-3-2-90b-vision-instruct | image, text | text |
llama-guard-3-11b-vision | image, text | text |
pixtral-12b | image, text | text |
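For example, an image-to-text request sends the image alongside the text question in a single chat message. The sketch below assumes the chat method of the ibm-watsonx-ai Python SDK and the message format that is described in the watsonx.ai chat API reference; the file name, credentials, and IDs are placeholders.

```python
import base64
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

model = ModelInference(
    model_id="meta-llama/llama-3-2-11b-vision-instruct",
    credentials=Credentials(url="https://us-south.ml.cloud.ibm.com", api_key="YOUR_IBM_CLOUD_API_KEY"),
    project_id="YOUR_PROJECT_ID",
)

# Encode a local chart image so that it can be embedded in the request.
with open("quarterly-revenue-chart.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What trend does this chart show?"},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }
]

response = model.chat(messages=messages)
print(response["choices"][0]["message"]["content"])
```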
Foundation models that support your language
Many foundation models work well only in English. However, some model creators include multiple languages in their pretraining data sets, fine-tune their models on tasks in different languages, and test their models' performance in multiple languages. If you plan to build a solution for a global audience or a solution that does translation tasks, look for models that were created with multilingual support in mind.
The following table lists natural languages that are supported in addition to English by foundation models in watsonx.ai. For more information about the languages that are supported for multilingual foundation models, see the model card for the foundation model.
Model | Languages other than English |
---|---|
granite-8b-japanese | Japanese |
granite-20b-multilingual | German, Spanish, French, and Portuguese |
Granite Instruct 3.1 (granite-3-2b-instruct, granite-3-8b-instruct) | English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese (Simplified) |
Granite 3 (granite-3-8b-base) | English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, Chinese (Simplified) |
allam-1-13b-instruct | Arabic |
elyza-japanese-llama-2-7b-instruct | Japanese |
flan-t5-xl-3b | Multilingual (See model card) |
flan-t5-xxl-11b | French, German |
jais-13b-chat | Arabic |
Llama 3.3 (llama-3-3-70b-instruct, llama-3-3-70b-instruct-hf) | English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai |
Llama 3.2 (llama-3-2-1b-instruct and llama-3-2-3b-instruct; also llama-3-2-11b-vision-instruct, llama-3-2-90b-vision-instruct, and llama-guard-3-11b-vision with text-only inputs) | English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai |
Llama 3.1 (llama-3-1-8b-instruct, llama-3-1-70b-instruct, llama-3-405b-instruct) | English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai |
mistral-large | Multilingual (See model card) |
mixtral-8x7b-base, mixtral-8x7b-instruct-v01 | French, German, Italian, Spanish |
mistral-nemo-instruct-2407 | Multiple languages, especially English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, Korean, Arabic, and Hindi. |
mt0-xxl-13b | Multilingual (See model card) |
Foundation models that you can tune
Some of the foundation models that are available in watsonx.ai can be tuned to better suit your needs.
The following tuning method is supported:
- Prompt tuning: Runs tuning experiments that adjust a prompt vector that is added to the foundation model input. After several runs, the experiment finds the prompt vector that best guides the foundation model to return output that suits your task. (For a programmatic sketch, see the example at the end of this section.)
The following table shows the methods for tuning foundation models that are available in IBM watsonx.ai. A checkmark (✓) indicates that the tuning method that is named in the column header is supported by the foundation model.
Model name | Prompt tuning |
---|---|
flan-t5-xl-3b | ✓ |
granite-13b-instruct-v2 | ✓ |
For more information, see Tuning Studio.
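In addition to the Tuning Studio, you can start a tuning experiment programmatically. The following is a minimal sketch that assumes the TuneExperiment and DataConnection interfaces of the ibm-watsonx-ai Python SDK; the experiment name, data asset ID, credentials, and project ID are placeholders, and parameter names should be checked against the SDK reference.

```python
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.experiment import TuneExperiment
from ibm_watsonx_ai.helpers import DataConnection

credentials = Credentials(url="https://us-south.ml.cloud.ibm.com", api_key="YOUR_IBM_CLOUD_API_KEY")
experiment = TuneExperiment(credentials, project_id="YOUR_PROJECT_ID")

# Configure a prompt-tuning experiment for one of the tunable foundation models.
prompt_tuner = experiment.prompt_tuner(
    name="classification prompt tuning",
    task_id="classification",
    base_model="ibm/granite-13b-instruct-v2",
    num_epochs=10,
)

# training_data_references points to your labeled training examples,
# for example a data asset in the project (placeholder asset ID).
tuning_details = prompt_tuner.run(
    training_data_references=[DataConnection(data_asset_id="YOUR_TRAINING_DATA_ASSET_ID")],
    background_mode=False,
)
print(tuning_details)
```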
Model types and IP indemnification
Review the intellectual property indemnification policy for the foundation model that you want to use. Some third-party foundation model providers require you to exempt them from liability for any IP infringement that might result from the use of their AI models.
IBM-developed foundation models that are available from watsonx.ai have standard intellectual property protection, similar to what IBM provides for hardware and software products.
IBM extends its standard intellectual property indemnification to the output that is generated by covered models. Covered Models include IBM-developed and some third-party foundation models that are available from watsonx.ai. Third-Party Covered Models are identified in the following table.
The following table describes the different foundation model types and their indemnification policies. See the reference materials for full details.
Foundation model type | Indemnification policy | Foundation models | Details | Reference materials |
---|---|---|---|---|
IBM Covered Model | Uncapped IBM indemnification | • IBM Granite • IBM Slate | IBM-developed foundation models that are available from watsonx.ai. To retain the IBM IP indemnification coverage for the model output, you must take the following measures: • Apply AI guardrails to inference requests • Use watsonx.governance, which is offered as a separate service, to log and monitor foundation model output | Service description |
Third-Party Covered Model | Capped IBM indemnification | Mistral Large | Third-party covered models that are available from watsonx.ai. To retain the IBM IP indemnification coverage for the model output, you must take the following measures: • Apply AI guardrails to inference requests • Use watsonx.governance, which is offered as a separate service, to log and monitor foundation model output | Service description |
Non-IBM Product | No IBM indemnification | Various | Third-party models that are available from watsonx.ai and are subject to their respective license terms, including associated obligations and restrictions. | See model information. |
Custom Model | No IBM indemnification | Various | Foundation models that you import to use in watsonx.ai are Client content. | Client is solely responsible for the selection and use of the model and output and compliance with third-party license terms, obligations, and restrictions. |
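As the table notes, retaining IBM IP indemnification for covered models requires that AI guardrails are applied to inference requests. In the ibm-watsonx-ai Python SDK, this corresponds to the guardrails option on generation calls, shown in the following minimal sketch; the prompt, credentials, and IDs are placeholders.

```python
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

model = ModelInference(
    model_id="ibm/granite-13b-instruct-v2",
    credentials=Credentials(url="https://us-south.ml.cloud.ibm.com", api_key="YOUR_IBM_CLOUD_API_KEY"),
    project_id="YOUR_PROJECT_ID",
)

# guardrails=True applies the AI guardrails (HAP) filter to both the
# input prompt and the generated output for this inference request.
output = model.generate_text(
    prompt="Write a short, friendly product announcement for our new toaster.",
    params={"max_new_tokens": 150},
    guardrails=True,
)
print(output)
```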
For more information, read the following topics:
- AI Guardrails
- Overview of watsonx.governance
- Supported foundation models (includes links to third-party model license terms)
More considerations for choosing a model
Model attribute | Considerations |
---|---|
Context length | Sometimes called context window length, context window, or maximum sequence length, context length is the maximum allowed value for the number of tokens in the input prompt plus the number of tokens in the generated output. When you generate output with models in watsonx.ai, the number of tokens in the generated output is limited by the Max tokens parameter (see the sketch after this table). |
Cost | The cost of using foundation models is measured in resource units. The price of a resource unit is based on the rate of the pricing tier for the foundation model. |
Fine-tuned | After a foundation model is pretrained, many foundation models are fine-tuned for specific tasks, such as classification, information extraction, summarization, responding to instructions, answering questions, or participating in a back-and-forth dialog chat. A model that is fine-tuned on tasks similar to your planned use typically does better with zero-shot prompts than a model that is not fine-tuned in a way that fits your use case. One way to improve results for a fine-tuned model is to structure your prompt in the same format as the prompts in the data sets that were used to fine-tune that model. |
Instruction-tuned | Instruction-tuned means that the model was fine tuned with prompts that include an instruction. When a model is instruction tuned, it typically responds well to prompts that have an instruction even if those prompts don't have examples. |
IP indemnity | In addition to license terms, review the intellectual property indemnification policy for the model. For more information, see Model types and IP indemnification. |
License | In general, each foundation model comes with a different license that limits how the model can be used. Review model licenses to make sure that you can use a model for your planned solution. |
Model architecture | The architecture of the model influences how the model behaves. A transformer-based model typically has one of the following architectures: • Encoder-only: Understands input text at the sentence level by transforming input sequences into representational vectors called embeddings. Common tasks for encoder-only models include classification and entity extraction. • Decoder-only: Generates output text word-by-word by inference from the input sequence. Common tasks for decoder-only models include generating text and answering questions. • Encoder-decoder: Both understands input text and generates output text based on the input text. Common tasks for encoder-decoder models include translation and summarization. |
Regional availability | You can work with models that are available in the same IBM Cloud regional data center as your watsonx services. |
Supported programming languages | Not all foundation models work well for programming use cases. If you are planning to create a solution that summarizes, converts, generates, or otherwise processes code, review which programming languages were included in a model's pretraining data sets and fine-tuning activities to determine whether that model is a fit for your use case. |
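For the Context length consideration, the sketch below shows one way to check that a prompt plus the requested output fits within a model's context window. It assumes the tokenize method and the Max tokens (max_new_tokens) parameter of the ibm-watsonx-ai Python SDK; the context length value, credentials, IDs, and response structure (which mirrors the watsonx.ai tokenization API) are assumptions to verify against the model card and SDK reference.

```python
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

model = ModelInference(
    model_id="ibm/granite-13b-instruct-v2",
    credentials=Credentials(url="https://us-south.ml.cloud.ibm.com", api_key="YOUR_IBM_CLOUD_API_KEY"),
    project_id="YOUR_PROJECT_ID",
)

prompt = "Summarize the following contract:\n..."
context_length = 8192   # Take this value from the model card for your model.
max_new_tokens = 512    # Maximum number of tokens to generate.

# Count the prompt tokens so that prompt + output stays within the context window.
token_count = model.tokenize(prompt=prompt)["result"]["token_count"]
if token_count + max_new_tokens > context_length:
    raise ValueError(f"Prompt uses {token_count} tokens; shorten the prompt or reduce max_new_tokens.")

print(model.generate_text(prompt=prompt, params={"max_new_tokens": max_new_tokens}))
```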
Learn more
- Tokens and tokenization
- Model parameters for prompting
- Prompt tips
- Supported encoder models
- Billing details for generative AI assets
- Regional availability for foundation models
Parent topic: Supported foundation models