Find the right foundation model to customize for your task.
Foundation models for prompt tuning
You can prompt tune the following models from the Tuning Studio in watsonx.ai:
- flan-t5-xl-3b
- granite-13b-instruct-v2
The following table shows which foundation models to experiment with in the Prompt Lab before you choose a foundation model to tune.
| Model for prompt engineering | Model for tuning |
|---|---|
| flan-t5-xxl-11b, flan-ul2-20b | flan-t5-xl-3b |
| granite-13b-instruct-v2 | granite-13b-instruct-v2 |
Choosing a foundation model for tuning
To help you choose the best foundation model to tune, follow these steps:
- Consider whether any measures were taken to curate the data that was used to train the foundation model to improve the quality of the foundation model output.
- Review other general considerations for choosing a model. For more information, see Choosing a foundation model.
- Consider the costs that are associated with the foundation model, both at inference time and at tuning time. A smaller model, such as a 3 billion parameter model, costs less to tune and is a good place to start.

  Tuning incurs compute resource consumption costs that are measured in capacity unit hours (CUH). The larger the model, the longer it takes to tune: a foundation model that is four times the size takes four times as long to tune.

  For example, it takes 3 hours and 25 minutes to prompt tune the flan-t5-xl-3b foundation model on a data set with 10,000 examples that is 1.25 MB in size.

  For more information about CUH costs, see watsonx.ai Runtime plans and compute usage.
- Experiment with the models in the Prompt Lab.

  Use the largest version (the version with the most parameters) of the model in the same model family for testing. Testing with a larger, more powerful model helps you establish the best prompt pattern for getting the output that you want. Then, you can tune a smaller version of the same model type to save costs. A prompt-tuned version of a smaller model can generate similar, if not better, results and costs less to inference.

  Craft and try prompts until you find the input pattern that generates the best results from the large foundation model.

  For more information, see Prompt Lab.
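To get a feel for the cost trade-off in step 3, the linear scaling rule described above ("a foundation model that is four times the size takes four times as long to tune") can be sketched as a back-of-the-envelope calculation. This is only an illustration under that stated assumption: the 3 hour 25 minute figure for flan-t5-xl-3b comes from the example in this topic, and the times for other sizes are extrapolations, not measured values.

```python
# Rough tuning-time estimate, assuming tuning time scales linearly with
# model size, as stated in this topic. Baseline: flan-t5-xl-3b takes
# 3 h 25 min on a 10,000-example, 1.25 MB data set. Other values are
# hypothetical extrapolations for illustration only.

BASELINE_PARAMS_B = 3            # flan-t5-xl-3b: 3 billion parameters
BASELINE_MINUTES = 3 * 60 + 25   # 3 hours 25 minutes = 205 minutes

def estimated_tuning_minutes(model_params_b: float) -> float:
    """Extrapolate tuning time linearly from the 3b baseline."""
    return BASELINE_MINUTES * (model_params_b / BASELINE_PARAMS_B)

for size in (3, 11, 20):
    minutes = estimated_tuning_minutes(size)
    print(f"{size}b model: ~{minutes / 60:.1f} hours")
```

Actual CUH consumption also depends on the data set size and the number of tuning epochs, so treat estimates like this only as a starting point when you compare model sizes.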
Parent topic: Tuning Studio