In IBM watsonx.ai, you can use IBM foundation models that are built with integrity and designed for business.
Granite foundation models
The Granite family of IBM foundation models includes decoder-only models that can efficiently predict and generate language.
The models were built with trusted data that has the following characteristics:
Sourced from quality data sets in domains such as finance (SEC Filings), law (Free Law), technology (Stack Exchange), science (arXiv, DeepMind Mathematics), literature (Project Gutenberg (PG-19)), and more.
Compliant with rigorous IBM data clearance and governance standards.
Scrubbed of hate, abuse, and profanity, duplicate data, and blocklisted URLs, among other things.
The following sections provide a short description and a few resources for learning about each model. For more information, see Supported foundation models.
granite-13b-chat-v2
General use model that is optimized for dialog use cases. This version of the model can generate longer, higher-quality responses with a professional tone. The model can recognize mentions of people and can detect tone and sentiment.
This foundation model is available for you to deploy on demand on dedicated hardware for the exclusive use of your organization.
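As a rough illustration of prompting this model from watsonx.ai, the following Python sketch uses the ibm-watsonx-ai SDK. The endpoint URL, API key, project ID, and the special-token prompt template are placeholders or assumptions for the example; check the model card for the exact prompt format the model expects.

```python
# A minimal sketch, assuming the ibm-watsonx-ai Python SDK; credentials are placeholders.
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

model = ModelInference(
    model_id="ibm/granite-13b-chat-v2",
    credentials=Credentials(
        url="https://us-south.ml.cloud.ibm.com",  # adjust for your region
        api_key="YOUR_IBM_CLOUD_API_KEY",         # placeholder
    ),
    project_id="YOUR_PROJECT_ID",                 # placeholder
)

# Dialog-style prompt; the special-token template below is an assumption --
# confirm the template on the model card before relying on it.
prompt = (
    "<|system|>\nYou are a helpful, professional assistant.\n"
    "<|user|>\nSummarize the tone of this customer email: "
    "'I have waited two weeks for a reply and I am very frustrated.'\n"
    "<|assistant|>\n"
)

response = model.generate_text(
    prompt=prompt,
    params={"decoding_method": "greedy", "max_new_tokens": 200},
)
print(response)
```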
General use model. This version of the model is optimized for classification, extraction, and summarization tasks. The model can recognize mentions of people and can summarize longer inputs.
General use model that supports the Japanese language. This version of the model is based on the Granite Instruct model and is optimized for classification, extraction, and question-answering tasks in Japanese. You can also use the model for translation between English and Japanese.
General use model that supports the English, German, Spanish, French, and Portuguese languages. This version of the model is based on the Granite Instruct model and is optimized for classification, extraction, and question-answering tasks in multiple languages. You can also use the model for translation tasks.
Instruction fine-tuned models that support code discussion, generation, and conversion. Use these foundation models for programmatic coding tasks. The Granite Code models are fine-tuned on a combination of instruction data to enhance instruction-following capabilities including logical reasoning and problem solving.
granite-3b-code-instruct
granite-8b-code-instruct
granite-20b-code-instruct
granite-34b-code-instruct
The Granite Code foundation models support 116 programming languages.
Instruction-tuned versions of the granite-20b-code-base foundation model are also available and are designed for text-to-SQL generation tasks.
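To give a feel for the programmatic coding use case, here is a minimal sketch that asks a Granite Code model to generate a function through the ibm-watsonx-ai SDK. The credentials, project ID, and the Question/Answer prompt convention are assumptions for the example.

```python
# A minimal sketch, assuming the ibm-watsonx-ai Python SDK; credentials are placeholders.
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

model = ModelInference(
    model_id="ibm/granite-8b-code-instruct",
    credentials=Credentials(
        url="https://us-south.ml.cloud.ibm.com",  # adjust for your region
        api_key="YOUR_IBM_CLOUD_API_KEY",
    ),
    project_id="YOUR_PROJECT_ID",
)

prompt = (
    "Question:\nWrite a Python function that returns the n-th Fibonacci number "
    "iteratively, with a docstring.\n\nAnswer:\n"
)

# Stop once the model starts a new "Question:" block (an assumed convention).
code = model.generate_text(
    prompt=prompt,
    params={"max_new_tokens": 300, "stop_sequences": ["Question:"]},
)
print(code)
```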
Lightweight and open-source third-generation Granite models that are fine-tuned on a combination of permissively licensed open-source and proprietary instruction data. The Granite Instruct language models are designed to excel at instruction-following tasks such as summarization, problem-solving, text translation, reasoning, function-calling, and more.
granite-3-2b-instruct
granite-3-8b-instruct
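As a sketch of how these instruct models can be called for an instruction-following task, the example below uses a chat-style interface with granite-3-8b-instruct. The chat() method, the message format, and the response shape are assumptions based on typical ibm-watsonx-ai SDK usage; fall back to generate_text() with a plain prompt if your SDK version differs.

```python
# A minimal sketch, assuming the ibm-watsonx-ai SDK exposes a chat() method
# for Granite 3 instruct models; credentials and IDs are placeholders.
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

model = ModelInference(
    model_id="ibm/granite-3-8b-instruct",
    credentials=Credentials(
        url="https://us-south.ml.cloud.ibm.com",
        api_key="YOUR_IBM_CLOUD_API_KEY",
    ),
    project_id="YOUR_PROJECT_ID",
)

messages = [
    {"role": "system", "content": "Answer concisely."},
    {"role": "user", "content": "Summarize the key risks of deploying an unmonitored chatbot."},
]

response = model.chat(messages=messages)
# The response shape below (OpenAI-style choices list) is an assumption.
print(response["choices"][0]["message"]["content"])
```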
Granite Guardian models are fine-tuned third-generation Granite Instruct models trained on unique data that comprises human annotations and synthetic data. These foundation models are useful for risk detection use cases that apply across a wide range of enterprise applications.
IBM Granite time series foundation models, also known as Tiny Time Mixers (TTM), are compact, pretrained models from IBM Research for multivariate time series forecasting.
The Granite time series models were trained on almost a billion samples of time series data from various domains, including electricity, traffic, manufacturing, and more. You can apply one of these pretrained models to your target data to get an initial forecast without having to train the model on your data. When given a set of historical, time-stamped observations, the Granite time series foundation models can apply their understanding of dynamic systems to forecast future data values.
The following time series foundation models are available for use in watsonx.ai:
granite-ttm-512-96-r2: Requires at least 512 data points per channel in the request.
granite-ttm-1024-96-r2: Requires at least 1,024 data points per channel in the request.
granite-ttm-1536-96-r2: Requires at least 1,536 data points per channel in the request.
The Granite time series models work best with data points in minute or hour intervals and generate a forecast dataset with up to 96 data points per time series, per target channel.
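The sketch below shows roughly how a forecast request might look with the ibm-watsonx-ai SDK. The TSModelInference class, the forecast() call, and the parameter names (timestamp_column, target_columns) are assumptions about the SDK surface and should be verified against the SDK documentation; the data-length requirement of at least 512 points per channel for granite-ttm-512-96-r2 comes from the list above.

```python
# A minimal sketch, assuming the ibm-watsonx-ai SDK exposes TSModelInference.forecast();
# class, method, and parameter names here are assumptions -- verify against your SDK version.
import pandas as pd
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import TSModelInference

# Build 512 hourly observations for one channel -- the minimum that
# granite-ttm-512-96-r2 requires per channel.
history = pd.DataFrame({
    "timestamp": pd.date_range("2024-01-01", periods=512, freq="h"),
    "energy_kwh": [100.0 + i % 24 for i in range(512)],  # toy seasonal pattern
})

ts_model = TSModelInference(
    model_id="ibm/granite-ttm-512-96-r2",
    credentials=Credentials(
        url="https://us-south.ml.cloud.ibm.com",  # placeholder region endpoint
        api_key="YOUR_IBM_CLOUD_API_KEY",
    ),
    project_id="YOUR_PROJECT_ID",
)

# Request a forecast of up to 96 future points for the target channel.
forecast = ts_model.forecast(
    data=history,
    params={"timestamp_column": "timestamp", "target_columns": ["energy_kwh"]},
)
print(forecast)
```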
For more information about using IBM embedding models to convert sentences and passages into text embeddings, see Text embedding generation.
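As a brief sketch of generating text embeddings with the SDK's Embeddings helper: the Slate model ID, credential values, and the embed_documents() method name are assumptions used to illustrate the flow described in Text embedding generation.

```python
# A minimal sketch, assuming the ibm-watsonx-ai SDK's Embeddings class;
# the model ID and credentials are placeholders/assumptions.
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import Embeddings

embedder = Embeddings(
    model_id="ibm/slate-125m-english-rtrvr",  # assumed Slate embedding model ID
    credentials=Credentials(
        url="https://us-south.ml.cloud.ibm.com",
        api_key="YOUR_IBM_CLOUD_API_KEY",
    ),
    project_id="YOUR_PROJECT_ID",
)

passages = [
    "Granite models are decoder-only foundation models from IBM.",
    "Text embeddings map sentences to dense numeric vectors.",
]

# Each passage becomes a fixed-length vector suitable for semantic search.
vectors = embedder.embed_documents(texts=passages)
print(len(vectors), len(vectors[0]))
```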
Natural Language Processing capabilities
IBM Slate models also power a set of libraries that you can use for common natural language processing (NLP) tasks, such as classification, entity extraction, sentiment analysis, and more.
For more information about how to use the NLP capabilities of the Slate models, see Watson NLP library.
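In a Watson Studio runtime that includes the Watson NLP library, usage typically looks like the sketch below. The stock model name is an assumption; consult the Watson NLP library documentation for the catalog of available pretrained blocks and workflows.

```python
# A minimal sketch, assuming a runtime where the watson_nlp package is preinstalled;
# the stock model ID below is an assumption -- check the library docs for exact names.
import watson_nlp

# Load a pretrained syntax model and run it on a short passage.
syntax_model = watson_nlp.load("syntax_izumo_en_stock")
result = syntax_model.run("IBM Granite models are built for business use cases.")

# Serialize the prediction object for inspection.
print(result.to_dict())
```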