Foundation model IDs

When you write code that sends inference requests to a foundation model, you must reference the foundation model by its model ID.

Use the List the available foundation models REST method to get the {model_id} for a foundation model, and then specify the model ID as a string in your code.

For information about how to get model IDs by using the Python library, see Getting information about available foundation models programmatically.
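As a sketch of what working with the REST response looks like, the following Python snippet extracts the model IDs from a list-models response body. The response shape shown here (a `resources` array whose entries carry a `model_id` field) is an assumption made for illustration; verify the exact schema against the API reference before relying on it.

```python
# Hedged sketch: extract model IDs from a "List the available foundation
# models" response. The "resources"/"model_id" structure below is an
# assumed shape, not the documented schema.

sample_response = {
    "resources": [
        {"model_id": "ibm/granite-13b-instruct-v2", "label": "granite-13b-instruct-v2"},
        {"model_id": "google/flan-t5-xxl", "label": "flan-t5-xxl-11b"},
    ]
}

def list_model_ids(response: dict) -> list[str]:
    """Return the model_id string for every model in the response."""
    return [entry["model_id"] for entry in response.get("resources", [])]

model_ids = list_model_ids(sample_response)
print(model_ids)
```

You would then pick one of the returned strings and use it wherever your code expects a `{model_id}` value.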

Foundation model IDs for APIs

The following list shows the values to use in the {model_id} parameter when you reference a foundation model from the API.

  • all-minilm-l12-v2

    sentence-transformers/all-minilm-l12-v2
    
  • allam-1-13b-instruct

    sdaia/allam-1-13b-instruct
    
  • codellama-34b-instruct-hf

    codellama/codellama-34b-instruct-hf
    
  • elyza-japanese-llama-2-7b-instruct

    elyza/elyza-japanese-llama-2-7b-instruct
    
  • flan-t5-xxl-11b

    google/flan-t5-xxl
    
  • flan-ul2-20b

    google/flan-ul2
    
  • granite-8b-japanese

    ibm/granite-8b-japanese
    
  • granite-13b-chat-v2

    ibm/granite-13b-chat-v2
    
  • granite-13b-instruct-v2

    ibm/granite-13b-instruct-v2
    
  • granite-20b-multilingual

    ibm/granite-20b-multilingual
    
  • granite-3b-code-instruct

    ibm/granite-3b-code-instruct
    
  • granite-8b-code-instruct

    ibm/granite-8b-code-instruct
    
  • granite-20b-code-instruct

    ibm/granite-20b-code-instruct
    
  • granite-34b-code-instruct

    ibm/granite-34b-code-instruct
    
  • jais-13b-chat

    core42/jais-13b-chat
    
  • llama-3-2-1b-instruct

    meta-llama/llama-3-2-1b-instruct
    
  • llama-3-2-3b-instruct

    meta-llama/llama-3-2-3b-instruct
    
  • llama-3-2-11b-vision-instruct

    meta-llama/llama-3-2-11b-vision-instruct
    
  • llama-3-2-90b-vision-instruct

    meta-llama/llama-3-2-90b-vision-instruct
    
  • llama-guard-3-11b-vision

    meta-llama/llama-guard-3-11b-vision
    
    
  • llama3-llava-next-8b-hf

    meta-llama/llama3-llava-next-8b-hf
    
  • llama-3-1-8b-instruct

    meta-llama/llama-3-1-8b-instruct
    
  • llama-3-1-70b-instruct

    meta-llama/llama-3-1-70b-instruct
    
  • llama-3-405b-instruct

    meta-llama/llama-3-405b-instruct
    
  • llama-3-8b-instruct

    meta-llama/llama-3-8b-instruct
    
  • llama-3-70b-instruct

    meta-llama/llama-3-70b-instruct
    
  • llama-2-13b-chat

    meta-llama/llama-2-13b-chat
    
  • llama2-13b-dpo-v7

    mncai/llama2-13b-dpo-v7
    
  • mistral-large

    mistralai/mistral-large
    
  • mixtral-8x7b-instruct-v01

    mistralai/mixtral-8x7b-instruct-v01
    
  • mt0-xxl-13b

    bigscience/mt0-xxl
    
  • multilingual-e5-large

    intfloat/multilingual-e5-large
    
  • slate-30m-english-rtrvr

    ibm/slate-30m-english-rtrvr
    
  • slate-30m-english-rtrvr-v2

    ibm/slate-30m-english-rtrvr-v2
    
  • slate-125m-english-rtrvr

    ibm/slate-125m-english-rtrvr
    
  • slate-125m-english-rtrvr-v2

    ibm/slate-125m-english-rtrvr-v2
    
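After you choose an ID from the list above, you pass it as a plain string in the `model_id` field of your request. The following sketch builds a minimal text-generation request body; the exact field names and the project ID are assumptions for illustration, so confirm them against the watsonx.ai API reference.

```python
import json

# Hedged sketch: build a text-generation request body that references a
# foundation model by its model ID. The field names ("model_id", "input",
# "parameters", "project_id") are an assumed request shape, and the
# project ID is a placeholder.

payload = {
    "model_id": "ibm/granite-13b-instruct-v2",  # ID taken from the list above
    "input": "Summarize the following meeting notes:\n...",
    "parameters": {
        "max_new_tokens": 200,
    },
    "project_id": "YOUR_PROJECT_ID",  # placeholder: supply your own project ID
}

body = json.dumps(payload)
print(body)
```

The same `model_id` string works for any model in the list; swapping models is a one-line change to the payload.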

Parent topic: Coding generative AI solutions
