Supported foundation models available with watsonx.ai

A collection of open-source and IBM foundation models is deployed in IBM watsonx.ai. You can prompt the deployed foundation models in the Prompt Lab or programmatically.

The tables that follow list the foundation models that are available in watsonx.ai.

To understand how the model provider, instruction tuning, token limits, and other factors can affect which model you choose, see Choosing a model.
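
For programmatic prompting, the following minimal sketch uses the ibm-watson-machine-learning Python library. The endpoint URL, API key, project ID, and parameter values are placeholders, and the call pattern is an assumption about your environment; the model_id can be any identifier from the tables in this topic.

```python
# Minimal sketch (not an official sample): prompting a deployed foundation
# model programmatically. Replace the placeholder credentials and project ID.
from ibm_watson_machine_learning.foundation_models import Model

credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",  # your region's endpoint
    "apikey": "YOUR_IBM_CLOUD_API_KEY",          # placeholder
}

model = Model(
    model_id="ibm/granite-13b-instruct-v2",      # any supported model ID
    credentials=credentials,
    project_id="YOUR_PROJECT_ID",                # placeholder
    params={
        "decoding_method": "greedy",
        "max_new_tokens": 200,                   # keep input + output within the context window
    },
)

response = model.generate_text(prompt="Summarize the following meeting notes:\n...")
print(response)
```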

IBM foundation models

The following table lists the supported foundation models that IBM provides for inferencing. All IBM models are instruction-tuned. For more information about contractual protections that are related to IBM indemnification, see the IBM Client Relationship Agreement and IBM watsonx.ai service description.

Table 1. IBM foundation models in watsonx.ai

Model name | IBM indemnification | Billing class | Maximum tokens: context (input + output) | Supported tasks | More information
granite-13b-chat-v2 | Yes | Class 1 | 8192 | classification, extraction, generation, question answering, summarization | Model card, Website, Research paper
granite-13b-instruct-v2 | Yes | Class 1 | 8192 | classification, extraction, generation, question answering, summarization | Can be tuned in Tuning Studio. Model card, Website, Research paper
granite-7b-lab | Yes | Class 1 | 8192 | classification, extraction, generation, question answering, retrieval-augmented generation, summarization | Model card, Research paper (LAB)
granite-8b-japanese | Yes | Class 1 | 8192 | classification, extraction, generation, question answering, summarization | Model card, Website, Research paper
granite-20b-multilingual | Yes | Class 1 | 8192 | classification, extraction, generation, question answering, summarization | Model card, Website, Research paper

 

For more information about the supported foundation models that IBM provides for embedding text, see Supported embedding models.

Third-party foundation models

The following table lists the supported foundation models that third parties provide through Hugging Face. All third-party models are instruction-tuned. IBM indemnification does not apply to any third-party models.

Table 2. Supported third-party foundation models in watsonx.ai

Model name | Provider | Billing class | Maximum tokens: context (input + output) | Supported tasks | More information
codellama-34b-instruct | Code Llama | Class 2 | 16,384 | code | Model card, Meta AI Blog
elyza-japanese-llama-2-7b-instruct | ELYZA, Inc | Class 2 | 4096 | classification, extraction, generation, question answering, retrieval-augmented generation, summarization, translation | Model card, Blog on note.com
flan-t5-xl-3b | Google | Class 1 | 4096 | classification, extraction, generation, question answering, retrieval-augmented generation, summarization | Can be tuned in Tuning Studio. Model card, Research paper
flan-t5-xxl-11b | Google | Class 2 | 4096 | classification, extraction, generation, question answering, retrieval-augmented generation, summarization | Model card, Research paper
flan-ul2-20b | Google | Class 3 | 4096 | classification, extraction, generation, question answering, retrieval-augmented generation, summarization | Model card, UL2 research paper, Flan research paper
jais-13b-chat | Inception, Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), and Cerebras Systems | Class 2 | 2048 | classification, extraction, generation, question answering, retrieval-augmented generation, summarization, translation | Model card, Research paper
llama-3-8b-instruct | Meta | Class 1 | 8192 | classification, code, extraction, generation, question answering, retrieval-augmented generation, summarization | Model card, Meta AI website
llama-3-70b-instruct | Meta | Class 2 | 8192 | classification, code, extraction, generation, question answering, retrieval-augmented generation, summarization | Model card, Meta AI website
llama-2-13b-chat | Meta | Class 1 | 4096 | classification, code, extraction, generation, question answering, retrieval-augmented generation, summarization | Can be tuned in Tuning Studio. Model card, Research paper
llama-2-70b-chat | Meta | Class 2 | 4096 | classification, code, extraction, generation, question answering, retrieval-augmented generation, summarization | Model card, Research paper
llama2-13b-dpo-v7 | Minds & Company | Class 2 | 4096 | classification, code, extraction, generation, question answering, retrieval-augmented generation, summarization | Model card, Research paper (DPO)
merlinite-7b | Mistral AI and IBM | Class 1 | 32,768 | classification, extraction, generation, retrieval-augmented generation, summarization | Model card, Research paper (LAB)
mixtral-8x7b-instruct-v01 | Mistral AI | Class 1 | 32,768 | classification, code, extraction, generation, retrieval-augmented generation, summarization, translation | Model card, Research paper
mixtral-8x7b-instruct-v01-q (Deprecated) | Mistral AI and IBM | Class 1 | 32,768 | classification, code, extraction, generation, retrieval-augmented generation, summarization, translation | Model card, Research paper
mt0-xxl-13b | BigScience | Class 2 | 4096 | classification, generation, question answering, summarization | Model card, Research paper

 

Foundation model details

The available foundation models support a range of use cases for both natural languages and programming languages. To see the types of tasks that these models can do, review and try the sample prompts.

codellama-34b-instruct

A programmatic code generation model that is based on Llama 2 from Meta. Code Llama is fine-tuned for generating and discussing code.

When you inference this model from the Prompt Lab, disable AI guardrails.

Usage: Use Code Llama to create prompts that generate code from natural language inputs, explain code, or complete and debug code.

Cost: Class 2. For pricing details, see Watson Machine Learning plans.

Size: 34 billion parameters

Token limits

  • Context window length (input + output): 16,384
  • Note: The maximum number of new tokens, that is, tokens that are generated by the foundation model, is limited to 8192.

Supported natural languages: English

Supported programming languages: The codellama-34b-instruct-hf foundation model supports many programming languages, including Python, C++, Java, PHP, TypeScript (JavaScript), C#, Bash, and more.

Instruction tuning information: The instruction fine-tuned version was fed natural language instruction input and the expected output to guide the model to generate helpful and safe answers in natural language.

Model architecture: Decoder

License: License

Learn more
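
As an illustration of the usage note above, the sketch below sends a natural-language request for code. The model_id string and SDK setup are assumptions; confirm the identifier against the model card, and keep generated output within the 8192 new-token limit.

```python
# Hedged sketch: generating code from a natural-language prompt with
# codellama-34b-instruct. The model_id value is an assumption.
from ibm_watson_machine_learning.foundation_models import Model

model = Model(
    model_id="codellama/codellama-34b-instruct-hf",  # confirm against the model card
    credentials={"url": "https://us-south.ml.cloud.ibm.com", "apikey": "YOUR_API_KEY"},
    project_id="YOUR_PROJECT_ID",
    params={"decoding_method": "greedy", "max_new_tokens": 300},  # well under the 8192 cap
)

prompt = "Write a Python function that returns True when a string is a palindrome."
print(model.generate_text(prompt=prompt))
```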

elyza-japanese-llama-2-7b-instruct

The elyza-japanese-llama-2-7b-instruct model is provided by ELYZA, Inc on Hugging Face. The elyza-japanese-llama-2-7b-instruct foundation model is a version of the Llama 2 model from Meta that is trained to understand and generate Japanese text. The model is fine-tuned for solving various tasks that follow user instructions and for participating in a dialog.

Note: This foundation model is available only in the Tokyo data center. When you inference this model from the Prompt Lab, disable AI guardrails.

Usage: General use with zero- or few-shot prompts. Works well for classification and extraction in Japanese and for translation between English and Japanese. Performs best when prompted in Japanese.

Cost: Class 2. For pricing details, see Watson Machine Learning plans.

Try it out

Size: 7 billion parameters

Token limits: Context window length (input + output): 4096

Supported natural languages: Japanese, English

Instruction tuning information: For Japanese language training, Japanese text from many sources was used, including Wikipedia and the Open Super-large Crawled ALMAnaCH coRpus (a multilingual corpus that is generated by classifying and filtering language in the Common Crawl corpus). The model was fine-tuned on a dataset that was created by ELYZA. The ELYZA Tasks 100 dataset contains 100 diverse and complex tasks that were created manually and evaluated by humans. The ELYZA Tasks 100 dataset is publicly available from Hugging Face.

Model architecture: Decoder

License: License

Learn more

flan-t5-xl-3b

The flan-t5-xl-3b model is provided by Google on Hugging Face. This model is based on the pretrained text-to-text transfer transformer (T5) model and uses instruction fine-tuning methods to achieve better zero- and few-shot performance. The model is also fine-tuned with chain-of-thought data to improve its ability to perform reasoning tasks.

Note: This foundation model can be tuned by using the Tuning Studio.

Usage: General use with zero- or few-shot prompts.

Cost: Class 1. For pricing details, see Watson Machine Learning plans.

Try it out: Sample prompts

Size: 3 billion parameters

Token limits

  • Context window length (input + output): 4096

    Note: Lite plan output is limited to 700 tokens

Supported natural languages: Multilingual

Instruction tuning information: The model was fine-tuned on tasks that involve multiple-step reasoning from chain-of-thought data in addition to traditional natural language processing tasks. Details about the training data sets used are published.

Model architecture: Encoder-decoder

License: Apache 2.0 license

Learn more

flan-t5-xxl-11b

The flan-t5-xxl-11b model is provided by Google on Hugging Face. This model is based on the pretrained text-to-text transfer transformer (T5) model and uses instruction fine-tuning methods to achieve better zero- and few-shot performance. The model is also fine-tuned with chain-of-thought data to improve its ability to perform reasoning tasks.

Usage: General use with zero- or few-shot prompts.

Cost: Class 2. For pricing details, see Watson Machine Learning plans.

Try it out

Size: 11 billion parameters

Token limits

  • Context window length (input + output): 4096

    Note: Lite plan output is limited to 700 tokens

Supported natural languages: English, German, French

Instruction tuning information: The model was fine-tuned on tasks that involve multiple-step reasoning from chain-of-thought data in addition to traditional natural language processing tasks. Details about the training data sets used are published.

Model architecture: Encoder-decoder

License: Apache 2.0 license

Learn more

flan-ul2-20b

The flan-ul2-20b model is provided by Google on Hugging Face. This model was trained by using the Unifying Language Learning Paradigms (UL2) method. The model is optimized for language generation, language understanding, text classification, question answering, common sense reasoning, long text reasoning, structured-knowledge grounding, and information retrieval. It also supports in-context learning, zero-shot prompting, and one-shot prompting.

Usage: General use with zero- or few-shot prompts.

Cost: Class 3. For pricing details, see Watson Machine Learning plans.

Try it out

Size: 20 billion parameters

Token limits

  • Context window length (input + output): 4096

    Note: Lite plan output is limited to 700 tokens

Supported natural languages: English

Instruction tuning information: The flan-ul2-20b model is pretrained on the colossal, cleaned version of Common Crawl's web crawl corpus. The model is fine-tuned with multiple pretraining objectives to optimize it for various natural language processing tasks. Details about the training data sets used are published.

Model architecture: Encoder-decoder

License: Apache 2.0 license

Learn more
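
Because the model is optimized for zero- and one-shot prompting, a small one-shot prompt is often enough. The example below is illustrative only; the labels and review text are not taken from this documentation.

```python
# Hedged sketch: a one-shot classification prompt for flan-ul2-20b.
prompt = (
    "Classify the sentiment of the review as Positive or Negative.\n\n"
    "Review: The battery died after two days.\n"
    "Sentiment: Negative\n\n"
    "Review: Setup was quick and the screen is gorgeous.\n"
    "Sentiment:"
)
# Send the prompt with the SDK call shown earlier, for example with
# model_id="google/flan-ul2" (confirm the identifier in your instance).
```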

granite-13b-chat-v2

The granite-13b-chat-v2 model is provided by IBM. This model is optimized for dialog use cases and works well with virtual agent and chat applications.

Usage: Generates dialog output like a chatbot. Uses a model-specific prompt format. Includes a keyword in its output that can be used as a stop sequence to produce succinct answers.

Cost: Class 1. For pricing details, see Watson Machine Learning plans.

Try it out: Sample prompt

Size: 13 billion parameters

Token limits: Context window length (input + output): 8192

Supported natural languages: English

Instruction tuning information: The Granite family of models is trained on enterprise-relevant data sets from five domains: internet, academic, code, legal, and finance. Data used to train the models first undergoes IBM data governance reviews and is filtered of text that is flagged for hate, abuse, or profanity by the IBM-developed HAP filter. IBM shares information about the training methods and data sets used.

Model architecture: Decoder

License

Learn more
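
The usage note above mentions that the model emits a keyword that can serve as a stop sequence. The sketch below shows how such a keyword could be passed through the stop_sequences parameter; the SDK usage and the placeholder keyword are assumptions, so substitute the keyword that the model actually produces.

```python
# Hedged sketch: using a stop sequence to keep granite-13b-chat-v2 answers
# succinct. "<END>" is a placeholder, not the model's actual keyword.
from ibm_watson_machine_learning.foundation_models import Model

model = Model(
    model_id="ibm/granite-13b-chat-v2",
    credentials={"url": "https://us-south.ml.cloud.ibm.com", "apikey": "YOUR_API_KEY"},
    project_id="YOUR_PROJECT_ID",
    params={
        "decoding_method": "greedy",
        "max_new_tokens": 300,
        "stop_sequences": ["<END>"],  # replace with the keyword the model emits
    },
)

print(model.generate_text(prompt="What is retrieval-augmented generation?"))
```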

granite-13b-instruct-v2

The granite-13b-instruct-v2 model is provided by IBM. This model was trained with high-quality finance data, and is a top-performing model on finance tasks. Financial tasks evaluated include: providing sentiment scores for stock and earnings call transcripts, classifying news headlines, extracting credit risk assessments, summarizing financial long-form text, and answering financial or insurance-related questions.

Note: This foundation model can be tuned by using the Tuning Studio.

Usage: Supports extraction, summarization, and classification tasks. Generates useful output for finance-related tasks. Uses a model-specific prompt format. Accepts special characters, which can be used for generating structured output.

Cost: Class 1. For pricing details, see Watson Machine Learning plans.

Try it out

Size: 13 billion parameters

Token limits: Context window length (input + output): 8192

Supported natural languages: English

Instruction tuning information: The Granite family of models is trained on enterprise-relevant data sets from five domains: internet, academic, code, legal, and finance. Data used to train the models first undergoes IBM data governance reviews and is filtered of text that is flagged for hate, abuse, or profanity by the IBM-developed HAP filter. IBM shares information about the training methods and data sets used.

Model architecture: Decoder

License

Learn more

granite-7b-lab

The granite-7b-lab foundation model is provided by IBM. It uses a novel alignment tuning method from IBM Research. Large-scale Alignment for chatBots (LAB) is a method for adding new skills to existing foundation models by generating synthetic data for the skills and then using that data to tune the foundation model.

Usage: Supports general purpose tasks, including extraction, summarization, classification, and more.

Cost: Class 1. For pricing details, see Watson Machine Learning plans.

Size: 7 billion parameters

Token limits:

  • Context window length (input + output): 8192
  • Note: The maximum number of new tokens, that is, tokens that are generated by the foundation model, is limited to 4096.

Supported natural languages: English

Instruction tuning information: The granite-7b-lab foundation model is trained iteratively by using the large-scale alignment for chatbots (LAB) methodology.

Model architecture: Decoder

License

Learn more

granite-8b-japanese

The granite-8b-japanese model is provided by IBM. The granite-8b-japanese foundation model is based on the IBM Granite Instruct foundation model and is trained to understand and generate Japanese text.

Note: This foundation model is available only in the Tokyo data center. When you inference this model from the Prompt Lab, disable AI guardrails.

Usage: Useful for general purpose tasks in the Japanese language, such as classification, extraction, and question answering, and for language translation between Japanese and English.

Cost: Class 1. For pricing details, see Watson Machine Learning plans.

Try it out

Size: 8 billion parameters

Token limits: Context window length (input + output): 8192

Supported natural languages: English, Japanese

Instruction tuning information: The Granite family of models is trained on enterprise-relevant data sets from five domains: internet, academic, code, legal, and finance. The granite-8b-japanese model was pretrained on 1 trillion tokens of English and 0.5 trillion tokens of Japanese text.

Model architecture: Decoder

License

Learn more

granite-20b-multilingual

A foundation model from the IBM Granite family. The granite-20b-multilingual foundation model is based on the IBM Granite Instruct foundation model and is trained to understand and generate text in English, German, Spanish, French, and Portuguese.

Usage: Closed-domain question answering, summarization, generation, extraction, and classification in English, German, Spanish, French, and Portuguese.

Cost: Class 1. For pricing details, see Watson Machine Learning plans.

Size: 20 billion parameters

Token limits: Context window length (input + output): 8192

Supported natural languages: English, German, Spanish, French, and Portuguese

Instruction tuning information: The Granite family of models is trained on enterprise-relevant data sets from five domains: internet, academic, code, legal, and finance. Data used to train the models first undergoes IBM data governance reviews and is filtered of text that is flagged for hate, abuse, or profanity by the IBM-developed HAP filter. IBM shares information about the training methods and data sets used.

Model architecture: Decoder

License: Terms of use

Learn more

jais-13b-chat

The jais-13b-chat foundation model is a bilingual large language model for Arabic and English that is fine-tuned to support conversational tasks.

Note: This foundation model is available only in the Frankfurt data center. When you inference this model from the Prompt Lab, disable AI guardrails.

Usage: Supports Q&A, summarization, classification, generation, extraction, and translation in Arabic.

Cost: Class 2. For pricing details, see Watson Machine Learning plans.

Try it out

Size: 13 billion parameters

Token limits: Context window length (input + output): 2048

Supported natural languages: Arabic (Modern Standard Arabic) and English

Instruction tuning information: Jais-13b-chat is based on the Jais-13b model, which is a foundation model that is trained on 116 billion Arabic tokens and 279 billion English tokens. Jais-13b-chat is fine-tuned with a curated set of 4 million Arabic and 6 million English prompt-and-response pairs.

Model architecture: Decoder

License: Apache 2.0

Learn more

Llama 3 Chat

Meta Llama 3 foundation models are accessible, open large language models that are built with Meta Llama 3 and provided by Meta on Hugging Face. The Llama 3 foundation models are instruction fine-tuned language models that can support various use cases.

Usage: Generates dialog output like a chatbot.

Cost: Class 1 for the 8b version and Class 2 for the 70b version. For pricing details, see Watson Machine Learning plans.

Try it out

Available sizes

  • 8 billion parameters
  • 70 billion parameters

Token limits

  • Context window length (input + output): 8192
  • Note: The maximum number of new tokens, that is, tokens that are generated by the foundation model, is limited to 4096.

Supported natural languages: English

Instruction tuning information: Llama 3 features improvements in post-training procedures that reduce false refusal rates, improve alignment, and increase diversity in the foundation model output. The result is better reasoning, code generation, and instruction-following capabilities. Llama 3 was also trained on more tokens (15T), which results in better language comprehension.

Model architecture: Decoder-only

License: META LLAMA 3 Community License

Learn more

Llama 2 Chat

The Llama 2 Chat model is provided by Meta on Hugging Face. The fine-tuned model is useful for chat generation. The model is pretrained with publicly available online data and fine-tuned using reinforcement learning from human feedback.

You can choose to use the 13 billion parameter or 70 billion parameter version of the model.

Note: The 13 billion parameter version of this foundation model can be tuned by using the Tuning Studio.

Usage: Generates dialog output like a chatbot. Uses a model-specific prompt format.

Cost: Class 1 for the 13b version and Class 2 for the 70b version. For pricing details, see Watson Machine Learning plans.

Try it out

Available sizes

  • 13 billion parameters
  • 70 billion parameters

Token limits

  • Context window length (input + output): 4096

  • Lite plan output is limited as follows:

    • 70b version: 900
    • 13b version: 2048

Supported natural languages: English

Instruction tuning information: Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction data sets and more than one million new examples that were annotated by humans.

Model architecture: Decoder-only

License: License

Learn more
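
The usage note for Llama 2 Chat mentions a model-specific prompt format. The [INST] / <<SYS>> template below is the format that Meta publishes for Llama 2 chat models; verify it against the model card before relying on it.

```python
# Hedged sketch: building a Llama 2 chat-style prompt string.
system_prompt = "You are a helpful, concise assistant."
user_message = "Explain what a context window is in one sentence."

prompt = (
    "[INST] <<SYS>>\n"
    f"{system_prompt}\n"
    "<</SYS>>\n\n"
    f"{user_message} [/INST]"
)

# Send the prompt with, for example,
# Model(model_id="meta-llama/llama-2-70b-chat", ...).generate_text(prompt=prompt)
```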

llama2-13b-dpo-v7

The llama2-13b-dpo-v7 foundation model is provided by Minds & Company. The llama2-13b-dpo-v7 foundation model is a version of the llama2-13b foundation model from Meta that is instruction-tuned and fine-tuned by using the direct preference optimization method to handle Korean.

Note: This foundation model is available only in the Tokyo data center. When you inference this model from the Prompt Lab, disable AI guardrails.

Usage: Suitable for many tasks, including classification, extraction, summarization, code creation and conversion, question-answering, generation, and retrieval-augmented generation in Korean.

Cost: Class 2. For pricing details, see Watson Machine Learning plans.

Try it out

Size: 13.2 billion parameters

Token limits: Context window length (input + output): 4096

Supported natural languages: English, Korean

Instruction tuning information: Direct preference optimization (DPO) is an alternative to reinforcement learning from human feedback. With reinforcement learning from human feedback, responses must be sampled from a language model and an intermediate step of training a reward model is required. Direct preference optimization instead uses a binary method of reinforcement learning in which the model chooses the better of two answers based on preference data.

Model architecture: Decoder-only

License: License

Learn more

merlinite-7b

The merlinite-7b foundation model is provided by Mistral AI and tuned by IBM. The merlinite-7b foundation model is a derivative of the Mistral-7B-v0.1 model that is tuned with a novel alignment tuning method from IBM Research. Large-scale Alignment for chatBots (LAB) is a method for adding new skills to existing foundation models by generating synthetic data for the skills and then using that data to tune the foundation model.

Usage: Supports general purpose tasks, including extraction, summarization, classification, and more.

Cost: Class 1. For pricing details, see Watson Machine Learning plans.

Size: 7 billion parameters

Token limits

  • Context window length (input + output): 32,768
  • Note: The maximum number of new tokens, that is, tokens that are generated by the foundation model, is limited to 8192.

Supported natural languages:

Instruction tuning information: The merlinite-7b foundation model is trained iteratively by using the large-scale alignment for chatbots (LAB) methodology.

Model architecture: Decoder

License: Apache 2.0 license

Learn more

mixtral-8x7b-instruct-v01

The mixtral-8x7b-instruct-v01 foundation model is provided by Mistral AI. The mixtral-8x7b-instruct-v01 foundation model is a pretrained generative sparse mixture-of-experts network that groups the model parameters, and then for each token chooses a subset of groups (referred to as experts) to process the token. As a result, each token has access to 47 billion parameters, but only uses 13 billion active parameters for inferencing, which reduces costs and latency.

Usage: Suitable for many tasks, including classification, summarization, generation, code creation and conversion, and language translation. Due to the model's unusually large context window, use the max tokens parameter to specify a token limit when prompting the model.

Cost: Class 1. For pricing details, see Watson Machine Learning plans.

Size: 46.7 billion parameters

Token limits

  • Context window length (input + output): 32,768
  • Note: The maximum number of new tokens, that is, tokens that are generated by the foundation model, is limited to 16,384.

Supported natural languages: English, French, German, Italian, Spanish

Instruction tuning information: The Mixtral foundation model is pretrained on internet data. The Mixtral 8x7B Instruct foundation model is fine-tuned to follow instructions.

Model architecture: Decoder-only

License: Apache 2.0 license

Learn more
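
Because of the 32,768-token context window, the usage note recommends setting the max tokens parameter explicitly when prompting this model. The sketch below uses the SDK's parameter metanames; the GenTextParamsMetaNames import and the model_id value are assumptions about your SDK version and instance.

```python
# Hedged sketch: capping output length for mixtral-8x7b-instruct-v01.
from ibm_watson_machine_learning.foundation_models import Model
from ibm_watson_machine_learning.metanames import GenTextParamsMetaNames as GenParams

params = {
    GenParams.DECODING_METHOD: "greedy",
    GenParams.MAX_NEW_TOKENS: 500,  # explicit cap; new tokens cannot exceed 16,384
    GenParams.MIN_NEW_TOKENS: 1,
}

model = Model(
    model_id="mistralai/mixtral-8x7b-instruct-v01",  # confirm the identifier in your instance
    credentials={"url": "https://us-south.ml.cloud.ibm.com", "apikey": "YOUR_API_KEY"},
    project_id="YOUR_PROJECT_ID",
    params=params,
)

print(model.generate_text(prompt="List three ways to reduce prompt token usage."))
```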

mixtral-8x7b-instruct-v01-q (Deprecated)

This model is deprecated. For more information, see Foundation model lifecycle.

The mixtral-8x7b-instruct-v01-q model is provided by IBM. The mixtral-8x7b-instruct-v01-q foundation model is a quantized version of the Mixtral 8x7B Instruct foundation model from Mistral AI.

The underlying Mixtral 8x7B foundation model is a sparse mixture-of-experts network that groups the model parameters, and then for each token chooses a subset of groups (referred to as experts) to process the token. As a result, each token has access to 47 billion parameters, but only uses 13 billion active parameters for inferencing, which reduces costs and latency.

Usage: Suitable for many tasks, including classification, summarization, generation, code creation and conversion, and language translation. Due to the model's unusually large context window, use the max tokens parameter to specify a token limit when prompting the model.

Cost: Class 1. For pricing details, see Watson Machine Learning plans.

Try it out: Sample prompts

Size: 8 x 7 billion parameters

Token limits

  • Context window length (input + output): 32,768
  • Note: The maximum number of new tokens, that is, tokens that are generated by the foundation model, is limited to 4096.

Supported natural languages: English, French, German, Italian, Spanish

Instruction tuning information: The Mixtral foundation model is pretrained on internet data. The Mixtral 8x7B Instruct foundation model is fine-tuned to follow instructions.

The IBM-tuned model uses the AutoGPTQ (Post-Training Quantization for Generative Pre-Trained Transformers) method to compress the model weight values from 16-bit floating point data types to 4-bit integer data types during data transfer. The weights decompress at computation time. Compressing the weights to transfer data reduces the GPU memory and GPU compute engine size requirements of the model.

Model architecture: Decoder-only

License: Apache 2.0 license

Learn more

mt0-xxl-13b

The mt0-xxl-13b model is provided by BigScience on Hugging Face. The model is optimized to support language generation and translation tasks with English, languages other than English, and multilingual prompts.

Usage: General use with zero- or few-shot prompts. For translation tasks, include a period to indicate the end of the text that you want translated, or the model might continue the sentence rather than translate it.

Cost: Class 2. For pricing details, see Watson Machine Learning plans.

Try it out

Size: 13 billion parameters

Supported natural languages: Multilingual

Token limits

  • Context window length (input + output): 4096

    Note: Lite plan output is limited to 700 tokens

Instruction tuning information: The model is pretrained on multilingual data in 108 languages and fine-tuned with multilingual data in 46 languages to perform multilingual tasks. BigScience publishes details about its code and data sets.

Model architecture: Encoder-decoder

License: Apache 2.0 license

Learn more
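
As the usage note explains, ending the source text with a period helps the model translate rather than continue the sentence. A tiny illustrative prompt follows; the phrasing is an assumption, not an official sample.

```python
# Hedged sketch: a translation prompt for mt0-xxl-13b. The trailing period
# marks the end of the text to translate.
prompt = "Translate to French: The meeting starts at nine tomorrow morning."
# Without that final period, the model might continue the English sentence
# instead of translating it.
```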

Any deprecated foundation models are highlighted with a warning icon. For more information about deprecation, including foundation model withdrawal dates, see Foundation model lifecycle.

Learn more

Parent topic: Developing generative AI solutions
