Foundation models built by IBM

In IBM watsonx.ai, you can use IBM foundation models that are built with integrity and designed for business.

The following families of IBM foundation models are available in watsonx.ai:

Granite foundation models

The Granite family of IBM foundation models includes decoder-only models that can efficiently predict and generate language.

The models were built with trusted data that has the following characteristics:

  • Sourced from quality data sets in domains such as finance (SEC Filings), law (Free Law), technology (Stack Exchange), science (arXiv, DeepMind Mathematics), literature (Project Gutenberg (PG-19)), and more.
  • Compliant with rigorous IBM data clearance and governance standards.
  • Scrubbed of hate, abuse, and profanity (HAP), duplicated data, and blocklisted URLs, among other filtering steps.

IBM is committed to building AI that is open, trusted, targeted, and empowering. For more information about contractual protections related to IBM indemnification, see the IBM Client Relationship Agreement and IBM watsonx.ai service description.

The following Granite models are available in watsonx.ai today:

granite-7b-lab
General use model that is built with a novel alignment-tuning method from IBM Research. Large-scale Alignment for chatBots (LAB) is a method for adding new skills to existing foundation models by generating synthetic data for those skills, and then using that data to tune the foundation model.
granite-13b-chat-v2
General use model that is optimized for dialog use cases. This version of the model is able to generate longer, higher-quality responses with a professional tone. The model can recognize mentions of people and can detect tone and sentiment.
granite-13b-instruct-v2
General use model. This version of the model is optimized for classification, extraction, and summarization tasks. The model can recognize mentions of people and can summarize longer inputs.
granite-8b-japanese
General use model that supports the Japanese language. This version of the model is based on the Granite Instruct model and is optimized for classification, extraction, and question-answering tasks in Japanese. You can also use the model for translation between English and Japanese.
granite-20b-multilingual
General use model that supports the English, German, Spanish, French, and Portuguese languages. This version of the model is based on the Granite Instruct model and is optimized for classification, extraction, and question-answering tasks in multiple languages. You can also use the model for translation tasks.
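
As an illustration of how a request to one of these Granite models might be assembled, the following sketch builds the JSON body for a text-generation call. The field names (model_id, input, parameters, project_id) and the model identifier prefix follow commonly documented watsonx.ai usage, but they are assumptions made here for illustration; verify them against the current watsonx.ai API reference before use.

```python
import json


def build_generation_payload(model_id: str, prompt: str, project_id: str) -> str:
    """Assemble a watsonx.ai-style text-generation request body.

    The field names used here are illustrative assumptions; confirm them
    against the official watsonx.ai API reference.
    """
    payload = {
        "model_id": model_id,
        "input": prompt,
        "parameters": {
            "decoding_method": "greedy",  # deterministic decoding
            "max_new_tokens": 200,        # cap on generated tokens
        },
        "project_id": project_id,
    }
    return json.dumps(payload)


# Example: a classification prompt for granite-13b-instruct-v2,
# which is optimized for classification and extraction tasks.
body = build_generation_payload(
    "ibm/granite-13b-instruct-v2",
    "Classify the sentiment of this review as Positive or Negative:\n"
    "The battery life is excellent.\nSentiment:",
    "my-project-id",  # hypothetical project ID
)
```

Because the payload is built separately from the HTTP call, the same helper can be reused for any of the Granite model IDs listed above.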

Prompt samples

To get started with the models, try the prompt samples that are provided in watsonx.ai.

Slate foundation models

The Slate family of IBM foundation models includes encoder-only models that specialize in natural language processing and text embedding tasks.

The following Slate embedding models are available in watsonx.ai today:

slate-125m-english-rtrvr
An embedding model that converts input text into 768-dimension numeric vectors (text embeddings).
slate-30m-english-rtrvr
An embedding model that converts input text into 384-dimension numeric vectors (text embeddings).

For more information about using Slate models to convert sentences and passages into text embeddings, see Text embedding generation.
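
To make the embedding dimensions concrete, the sketch below shows how vectors returned by an embedding model are typically compared for retrieval, using cosine similarity. The vectors here are short made-up stand-ins, not real Slate output; a real slate-125m-english-rtrvr embedding would have 768 dimensions and a slate-30m-english-rtrvr embedding 384.

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Toy 4-dimension stand-ins for real embedding vectors.
query = [0.1, 0.8, 0.3, 0.0]
passage_a = [0.1, 0.7, 0.4, 0.1]  # semantically close to the query
passage_b = [0.9, 0.0, 0.1, 0.2]  # unrelated content

# Rank passages by similarity to the query; higher means more relevant.
scores = {
    "a": cosine_similarity(query, passage_a),
    "b": cosine_similarity(query, passage_b),
}
best = max(scores, key=scores.get)  # → "a"
```

In a retrieval workflow, the same comparison is applied between a query embedding and the embeddings of many stored passages to find the most relevant text.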

IBM Slate models power a set of libraries that you can use for common natural language processing (NLP) tasks, such as classification, entity extraction, sentiment analysis, and more.

For more information about how to use the NLP capabilities of the Slate models, see Watson NLP library.

