Granite Embedding 107m Multilingual model card
Last updated: Jan 16, 2025
The granite-embedding-107m-multilingual model is a 107M-parameter dense biencoder embedding model from the Granite Embeddings suite that can be used to generate high-quality text embeddings. The model produces embedding vectors of size 384 and is trained on a combination of open-source relevance-pair datasets with permissive, enterprise-friendly licenses and IBM-collected and IBM-generated datasets. It is developed using contrastive fine-tuning, knowledge distillation, and model merging for improved performance.
Supported languages: English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. Users may fine-tune the granite-embedding-107m-multilingual model for languages beyond these 12 languages.
Intended use
The model is designed to produce fixed length vector representations for a given text, which can be used for text similarity, retrieval, and search applications.
Usage with Sentence Transformers
The model is compatible with the SentenceTransformers library and is easy to use:
First, install the sentence transformers library.
pip install sentence_transformers
The model can then be used to encode pairs of text and find the similarity between their representations.
from sentence_transformers import SentenceTransformer, util

model_path = "ibm-granite/granite-embedding-107m-multilingual"

# Load the Sentence Transformer model
model = SentenceTransformer(model_path)

input_queries = [
    ' Who made the song My achy breaky heart? ',
    'summit define'
]

input_passages = [
    "Achy Breaky Heart is a country song written by Don Von Tress. Originally titled Don't Tell My Heart and performed by The Marcy Brothers in 1991. ",
    "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]

# encode queries and passages
query_embeddings = model.encode(input_queries)
passage_embeddings = model.encode(input_passages)

# calculate cosine similarity
print(util.cos_sim(query_embeddings, passage_embeddings))
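The call to util.cos_sim returns a similarity matrix with one row per query and one column per passage; each embedding produced by this model is a 384-dimensional vector.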
Usage with Huggingface Transformers
This is a simple example of how to use the granite-embedding-107m-multilingual model with the Transformers library and PyTorch.
First, install the required libraries.
pip install transformers torch
The model can then be used to encode pairs of text.
import torch
from transformers import AutoModel, AutoTokenizer

model_path = "ibm-granite/granite-embedding-107m-multilingual"

# Load the model and tokenizer
model = AutoModel.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model.eval()

input_queries = [
    ' Who made the song My achy breaky heart? ',
    'summit define'
]

# tokenize inputs
tokenized_queries = tokenizer(input_queries, padding=True, truncation=True, return_tensors='pt')

# encode queries
with torch.no_grad():
    # Queries
    model_output = model(**tokenized_queries)
    # Perform pooling. granite-embedding-107m-multilingual uses CLS Pooling
    query_embeddings = model_output[0][:, 0]

# normalize the embeddings
query_embeddings = torch.nn.functional.normalize(query_embeddings, dim=1)
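Because the query embeddings are L2-normalized, cosine similarity against a set of passage embeddings reduces to a matrix product. The following is a minimal sketch continuing the example above; the passage texts are reused from the Sentence Transformers example and the variable names are illustrative.
# passages to score against the queries (same texts as in the Sentence Transformers example)
input_passages = [
    "Achy Breaky Heart is a country song written by Don Von Tress. Originally titled Don't Tell My Heart and performed by The Marcy Brothers in 1991. ",
    "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]

# tokenize and encode the passages with the same model
tokenized_passages = tokenizer(input_passages, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    passage_output = model(**tokenized_passages)

# CLS pooling and normalization, as for the queries
passage_embeddings = torch.nn.functional.normalize(passage_output[0][:, 0], dim=1)

# cosine similarity: one row per query, one column per passage
scores = query_embeddings @ passage_embeddings.T
print(scores)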
Evaluation
The average performance of the granite-embedding-107m-multilingual model on Multilingual Miracl (across 18 languages), Mintaka Retrieval (across 8 languages), and MTEB Retrieval for English (across 15 tasks), German (across 4 tasks), Spanish (across 2 tasks), French (across 5 tasks), Japanese (across 2 tasks), Arabic (1 task), Korean (1 task), and Chinese (across 8 tasks) is reported below. The granite-embedding-107m-multilingual model is twice as fast as other models with similar embedding dimensions.
Table 1. Benchmark scores for the granite-embedding-107m-multilingual model
Model | granite-embedding-107m-multilingual
Parameters (M) | 107
Embedding Dimension | 384
Miracl (18) | 55.9
Mintaka Retrieval (8) | 22.6
MTEB English (15) | 45.3
MTEB German (4) | 70.3
MTEB Spanish (2) | 48.7
MTEB French (5) | 51.1
MTEB Japanese (2) | 59.0
MTEB Arabic (1) | 63.2
MTEB Korean (1) | 70.5
MTEB Chinese (8) | 40.8
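The MTEB retrieval averages above can be reproduced with the open-source mteb evaluation package. The following is a minimal sketch, assuming mteb is installed; the single task chosen here (NFCorpus) is illustrative rather than the full per-language task lists behind the reported averages.
from mteb import MTEB
from sentence_transformers import SentenceTransformer

# load the embedding model as a Sentence Transformer
model = SentenceTransformer("ibm-granite/granite-embedding-107m-multilingual")

# one English retrieval task for illustration; the reported averages
# cover the complete MTEB retrieval task sets per language
evaluation = MTEB(tasks=["NFCorpus"])
results = evaluation.run(model, output_folder="results/granite-embedding-107m-multilingual")
print(results)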
Model Architecture
The granite-embedding-107m-multilingual model is based on an encoder-only, XLM-RoBERTa-like transformer architecture, trained internally at IBM Research.
Table 2. Granite Embedding model architecture details
Model | granite-embedding-30m-english | granite-embedding-125m-english | granite-embedding-107m-multilingual | granite-embedding-278m-multilingual
Embedding size | 384 | 768 | 384 | 768
Number of layers | 6 | 12 | 6 | 12
Number of attention heads | 12 | 12 | 12 | 12
Intermediate size | 1536 | 3072 | 1536 | 3072
Activation Function | GeLU | GeLU | GeLU | GeLU
Vocabulary Size | 50265 | 50265 | 250002 | 250002
Max. Sequence Length | 512 | 512 | 512 | 512
Number of parameters | 30M | 125M | 107M | 278M
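The per-model values in Table 2 can be read directly from each checkpoint's configuration. The following is a minimal sketch using the Hugging Face AutoConfig API; the printed fields are standard XLM-RoBERTa configuration attributes and are assumed to map onto the rows of Table 2 as commented.
from transformers import AutoConfig

# load the configuration of the 107M multilingual model
config = AutoConfig.from_pretrained("ibm-granite/granite-embedding-107m-multilingual")

# these fields correspond to rows of Table 2
print("Embedding size:           ", config.hidden_size)
print("Number of layers:         ", config.num_hidden_layers)
print("Number of attention heads:", config.num_attention_heads)
print("Intermediate size:        ", config.intermediate_size)
print("Vocabulary size:          ", config.vocab_size)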
Training Data
Overall, the training data consists of four key sources: (1) unsupervised title-body paired data scraped from the web, (2) publicly available paired data with permissive, enterprise-friendly licenses, (3) IBM-internal paired data targeting specific technical domains, and (4) IBM-generated synthetic data. The data is listed below:
Table 3. Training data for the granite-embedding-107m-multilingual model
Dataset | Num. Pairs
Multilingual MC4 | 52,823,484
Multilingual Webhose | 12,369,322
English Wikipedia | 20,745,403
Multilingual Wikimedia | 2,911,090
Miracl Corpus (Title-Body) | 10,120,398
Stack Exchange Duplicate questions (titles) | 304,525
Stack Exchange Duplicate questions (bodies) | 250,519
Machine Translations of Stack Exchange Duplicate questions (titles) | 187,195
Stack Exchange (Title, Answer) pairs | 4,067,139
Stack Exchange (Title, Body) pairs | 23,978,013
Machine Translations of Stack Exchange (Title+Body, Answer) pairs | 1,827,15
SearchQA | 582,261
S2ORC (Title, Abstract) | 41,769,185
WikiAnswers Duplicate question pairs | 77,427,422
CCNews | 614,664
XSum | 226,711
SimpleWiki | 102,225
Machine Translated Cross Lingual Parallel Corpora | 28,376,115
SPECTER citation triplets | 684,100
Machine Translations of SPECTER citation triplets | 4,104,600
Natural Questions (NQ) | 100,231
SQuAD2.0 | 87,599
HotpotQA | 85,000
Fever | 109,810
PubMed | 20,000,000
Multilingual Miracl Triples | 81,409
Multilingual MrTydi Triples | 48,715
Sadeeem Question Answering | 4,037
DBPedia Title-Body Pairs | 4,635,922
Synthetic: English Query-Wikipedia Passage | 1,879,093
Synthetic: English Fact Verification | 9,888
Synthetic: Multilingual Query-Wikipedia Passage | 300,266
Synthetic: Multilingual News Summaries | 37,489
IBM Internal Triples | 40,290
IBM Internal Title-Body Pairs | 1,524,586
Notably, we do not use the popular MS-MARCO retrieval dataset in our training corpus because of its non-commercial license, even though other open-source models train on it for its high quality.
Infrastructure
We train Granite Embedding models using IBM's computing cluster, Cognitive Compute Cluster, which is outfitted with NVIDIA A100 80GB GPUs. This cluster provides a scalable and efficient infrastructure for training our models over multiple GPUs.
Ethical Considerations and Limitations
The English data used to train the base language model was filtered to remove text containing hate, abuse, and profanity.