Developing generative AI solutions with foundation models
Last updated: Apr 16, 2025
You can develop generative AI solutions with foundation models in IBM watsonx.ai. You can create prompts that generate, classify, summarize, or extract content from your input text. Choose from IBM models or open source models. You can tune foundation models to customize prompt output or to optimize inferencing performance.
Generative AI capabilities
With watsonx.ai, you can create generative AI solutions that include the following capabilities and resources.
Prompting
Build prompts that instruct a foundation model to generate a response. You can chat with documents and other media, include variables for reusing prompts, remove harmful content, and control other prompt and model settings.
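One way to picture reusable prompt variables is with a plain string template. The sketch below uses Python's standard `string.Template`; the variable names (`doc_type`, `length`, `text`) are illustrative only and are not part of the watsonx.ai API.

```python
from string import Template

# A reusable prompt with named variables, similar in spirit to
# watsonx.ai prompt variables (the names here are illustrative).
SUMMARY_PROMPT = Template(
    "Summarize the following ${doc_type} in ${length} sentences:\n\n${text}"
)

def build_prompt(doc_type: str, length: int, text: str) -> str:
    """Fill in the template so the same prompt can be reused across inputs."""
    return SUMMARY_PROMPT.substitute(doc_type=doc_type, length=length, text=text)

prompt = build_prompt("support ticket", 2, "The app crashes on login since v2.1.")
print(prompt)
```

Keeping the instruction fixed and swapping only the variables makes prompt behavior easier to test and compare across inputs.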
Build a RAG pattern to ground the model in facts from your documents. You can customize your RAG pattern to extract text from documents, create vector indexes, and rerank retrieved content. You can also automate the search for an optimized, production-quality RAG pattern based on your data and use case.
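The core retrieve-then-ground flow of a RAG pattern can be sketched in a few lines. The toy index below uses bag-of-words vectors and cosine similarity in place of a real embedding model and vector database, which is what a production pattern would use; the function names are illustrative.

```python
import math
import re
from collections import Counter

# Toy in-memory vector index: bag-of-words vectors with cosine similarity.
# A production RAG pattern would use an embedding model and a vector database;
# this sketch only illustrates the retrieve-then-ground flow.

def vectorize(text: str) -> Counter:
    """Turn text into a sparse word-count vector."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = vectorize(query)
    return sorted(docs, key=lambda d: cosine(q, vectorize(d)), reverse=True)[:k]

def grounded_prompt(query: str, docs: list[str]) -> str:
    """Assemble a prompt that instructs the model to answer from retrieved context."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Invoices are processed within five business days.",
    "The cafeteria opens at 8 a.m. on weekdays.",
    "Expense reports require a manager's approval.",
]
print(grounded_prompt("How long are invoices processed?", docs))
```

Because the model answers from the retrieved context rather than from its training data alone, its responses stay grounded in your documents.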
Foundation models are large AI models that have billions of parameters and are trained on terabytes of data. Foundation models can do various tasks, including text, code, or image generation, classification, conversation, and more. Large language
models are a subset of foundation models that can do tasks that are related to text and code.
Foundation models represent a fundamentally different model architecture and purpose for AI systems. The following diagram illustrates the difference between traditional machine learning AI models and foundation models for generative AI.
As shown in the diagram, traditional AI models specialize in specific tasks. Most traditional AI models are built by using machine learning, which requires a large, structured, well-labeled data set that encompasses the specific task that you want to tackle. Often these data sets must be sourced, curated, and labeled by hand, a job that requires people with domain knowledge and takes time. After it is trained, a traditional AI model can do a single task well. The traditional AI model uses what it learns from patterns in the training data to predict outcomes in unknown data. You can create machine learning models for your specific use cases with tools like AutoAI and Jupyter notebooks, and then deploy them.
In contrast, foundation models are trained on large, diverse, unlabeled data sets and can be used for many different tasks. Foundation models were first used to generate text by predicting the most probable next word in natural language translation tasks. However, model providers have found that, when prompted with the right input, foundation models can do various other tasks well. Instead of creating your own foundation models, you use existing deployed models and engineer prompts to generate the results that you need.
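The idea of predicting the most probable next word can be illustrated with a toy bigram model, which counts which word most often follows another in a corpus. This is a drastic simplification of how foundation models score continuations, offered only as a sketch; the corpus and function names are illustrative.

```python
from collections import Counter, defaultdict

# Toy bigram model: predicts the most probable next word from raw counts,
# a greatly simplified stand-in for how language models score continuations.
corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams: defaultdict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(word: str) -> str:
    """Return the most frequently observed continuation of `word`."""
    return bigrams[word].most_common(1)[0][0]

print(next_word("the"))  # "cat" follows "the" twice; "mat" and "fish" once each
```

Real foundation models replace these raw counts with learned probability distributions over tokens, conditioned on the entire preceding context rather than a single word.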