Using vectorized text with retrieval-augmented generation tasks
Last updated: Nov 27, 2024

Use embedding models to create text embeddings that capture the meaning of a sentence or passage to help with retrieval-augmented generation tasks.

Retrieval-augmented generation (RAG) is a technique in which a foundation model prompt is augmented with knowledge from external sources. You can use text embeddings to find high-quality, relevant information to include with the prompt, which helps the foundation model generate factually grounded answers.
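For example, the following minimal sketch shows how embeddings support this kind of retrieval. It uses the open-source sentence-transformers library as a stand-in for any embedding model; the model name and the sample passages are illustrative assumptions, not part of the product documentation.

```python
# A minimal sketch: embed two passages and a question, then rank the
# passages by cosine similarity to the question.
# Assumes the open-source sentence-transformers package as a stand-in
# for any embedding model; the model name is an illustrative choice.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

passages = [
    "Our support desk is open Monday through Friday, 9am to 5pm.",
    "The product warranty covers manufacturing defects for two years.",
]
question = "When can I reach customer support?"

# Encode the passages and the question with the SAME embedding model.
passage_vectors = model.encode(passages, convert_to_tensor=True)
question_vector = model.encode(question, convert_to_tensor=True)

# Cosine similarity scores; a higher score means the passage is more
# semantically related to the question.
scores = util.cos_sim(question_vector, passage_vectors)[0]
best_passage = passages[int(scores.argmax())]
print(best_passage)  # the support-hours passage should rank highest
```

Passages retrieved this way can then be included with the prompt, as described in the pattern below.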

The following diagram illustrates the retrieval-augmented generation pattern with embedding support.

[Diagram: search results retrieved from a vector store are added to the input for retrieval-augmented generation]

The retrieval-augmented generation pattern with embedding support involves the following steps, which are sketched in code after the list:

  1. Convert your content into text embeddings and store them in a vector data store.
  2. Use the same embedding model to convert the user input into text embeddings.
  3. Run a similarity or semantic search in your knowledge base for content that is related to a user's question.
  4. Pull the most relevant search results into your prompt as context and add an instruction, such as “Answer the following question by using only information from the following passages.”
  5. Send the combined prompt text (instruction + search results + question) to the foundation model.
  6. The foundation model uses contextual information from the prompt to generate a factual answer.
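The following sketch walks through these steps end to end. It uses the open-source chromadb library as the vector store and sentence-transformers for embeddings; both library choices, the model name, and the sample documents are illustrative assumptions (the sample notebook listed below uses watsonx Granite models with Chroma and LangChain instead), and the final model call is left as a hypothetical placeholder.

```python
# A hedged, end-to-end sketch of the numbered steps above, using
# chromadb as the vector store and sentence-transformers for the
# embeddings; model names and documents are illustrative assumptions.
import chromadb
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

# Step 1: convert your content into text embeddings and store them
# in a vector data store.
documents = [
    "watsonx.ai provides foundation models for text generation.",
    "A vector database stores embeddings for similarity search.",
]
client = chromadb.Client()
collection = client.create_collection(name="knowledge_base")
collection.add(
    ids=[str(i) for i in range(len(documents))],
    documents=documents,
    embeddings=embedder.encode(documents).tolist(),
)

# Step 2: convert the user input with the SAME embedding model.
question = "How are embeddings used for similarity search?"
question_embedding = embedder.encode(question).tolist()

# Step 3: run a similarity search in the knowledge base.
results = collection.query(query_embeddings=[question_embedding], n_results=1)
context = "\n".join(results["documents"][0])

# Steps 4-5: pull the most relevant results into the prompt as context,
# add an instruction, and send the combined text to a foundation model.
prompt = (
    "Answer the following question by using only information "
    "from the following passages.\n\n"
    f"Passages:\n{context}\n\nQuestion: {question}\nAnswer:"
)
# send_to_foundation_model(prompt)  # hypothetical call; in step 6 the
# model uses the retrieved context to generate a grounded answer.
print(prompt)
```

Embedding the documents and the question with the same model is what makes step 3 work: the similarity search compares vectors, so both sides must live in the same embedding space.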

Augmenting foundation model input from Prompt Lab

The Prompt Lab has a built-in feature in chat mode that helps you implement a RAG use case. To start, you associate relevant documents with a prompt. The documents that you add are vectorized and stored in a vector database. When a query is submitted to the chat, the database is searched and related results are included in the input that is submitted to the foundation model. For more information, see Grounding foundation model prompts in contextual information.

Sample notebook

The Use watsonx Granite Model Series, Chroma, and LangChain to answer questions (RAG) sample notebook walks you through the steps for enhancing a RAG use case with embeddings.


Parent topic: Retrieval-augmented generation
