You can prompt-tune foundation models in IBM watsonx.ai programmatically by using the Python library.
To prompt-tune a foundation model, you run an experiment that uses training data that you provide. The experiment is a machine learning process that shows the foundation model the output that you expect it to return for your prompt input. The tuning process is complex and involves a data asset, a training asset, and a deployment asset.
The Python library has methods and helper classes for tuning foundation models. For more information about the library, see Prompt tuning.
To prompt-tune foundation models, use functions from the watsonx.ai Python library in notebooks that run in watsonx.ai.
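For orientation, the following snippet is a minimal sketch of how a prompt-tuning experiment is typically started with the ibm_watsonx_ai library. The credential values, project ID, and data asset ID are placeholders, the tuning parameter values are illustrative only, and the exact class and parameter names can vary between library versions, so check the Prompt tuning reference before you rely on them.

from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.experiment import TuneExperiment
from ibm_watsonx_ai.helpers import DataConnection

# Placeholder credentials and project; replace with your own values
credentials = Credentials(
    url="https://us-south.ml.cloud.ibm.com",
    api_key="{your API key}"
)
experiment = TuneExperiment(credentials, project_id="{your project ID}")

# Define the prompt-tuning experiment; parameter values are illustrative only
prompt_tuner = experiment.prompt_tuner(
    name="sample prompt tuning",
    task_id="classification",
    base_model="ibm/granite-13b-instruct-v2",
    num_epochs=10,
    learning_rate=0.3,
    batch_size=16,
    accumulate_steps=16
)

# Start the experiment with your training data asset and wait for it to finish
tuning_details = prompt_tuner.run(
    training_data_references=[DataConnection(data_asset_id="{your data asset ID}")],
    background_mode=False
)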
Sample notebook
The Use watsonx to tune IBM granite-13b-instruct-v2 model with Car Rental Company customer satisfaction document sample Python notebook contains code for prompt-tuning foundation models in watsonx.ai.
The sample notebook helps you with the two main phases of tuning:
- Finding the optimal tuning parameter values
- Prompting the tuned model to evaluate the quality of tuned model output
The sample notebook is designed to prompt-tune the granite-13b-instruct-v2 model, but you can also use it to tune other foundation models. To do so, replace the base_model references as follows:
base_model='google/flan-t5-xl'
If you change the foundation model, you must also replace the training data. Replace the file path in the Data loading section of the notebook:
import os
import wget

url = "{path to your file}"
filename = "{your file name}"
if not os.path.isfile(filename):
    wget.download(url, out=filename)
You can also use a sample notebook that tunes one of the other foundation models that can be prompt-tuned:
- Tune a model to classify CFPB documents in watsonx
The flan-t5 notebook has steps to tune the foundation model, but does not include a step for hyperparameter optimization.
Using the sample notebook to optimize tuning parameter values
The sample notebook has code that optimizes the learning_rate parameter value. The notebook systematically changes the learning rate and reruns the experiment 10 times so that the loss can be compared across the 10 runs, and then calculates the optimal learning rate value for you. Note that the notebook generates 10 separate experiments; it does not run the same experiment 10 times.
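Conceptually, the selection step comes down to comparing the final loss of each run and keeping the learning rate that produced the lowest loss, as in the following sketch. The results values are hypothetical; how you collect them depends on how the notebook records the loss of each experiment.

# Hypothetical (learning_rate, final_loss) pairs collected from the tuning runs
results = [
    (0.001, 1.9),
    (0.005, 1.4),
    (0.01, 1.2),
    (0.05, 1.3),
    (0.09, 1.8),
]

# The optimal learning rate is the one that produced the lowest loss
best_learning_rate, best_loss = min(results, key=lambda run: run[1])
print(f"Best learning rate: {best_learning_rate} (loss {best_loss})")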
The parameter to optimize is defined in the Search space and optimization section of the notebook.
You can edit or add to the sample notebook to run automated code that optimizes the following parameters in addition to the learning rate:
- accumulate_steps
- batch_size
- num_epochs
To check for optimal values for several parameters at once, you can change the sample notebook to use code like the following example:
import skopt

# Search space for the tuning parameters to optimize
SPACE = [
    skopt.space.Real(0.001, 0.09, name='learning_rate', prior='log-uniform'),
    skopt.space.Integer(1, 50, name='num_epochs', prior='uniform'),
    skopt.space.Integer(1, 16, name='batch_size', prior='uniform')
]
Optimizing several parameters at once can save time because the parameters interact: their values affect one another, and the right balance among them leads to the best results.
The sample notebook uses methods from the scikit-optimize library. For more information, see the scikit-optimize API reference.
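As a sketch of how such a search can be wired up, the following example uses skopt.gp_minimize over the SPACE definition from the previous snippet. The run_tuning_experiment function is a hypothetical helper that would launch one tuning experiment with the given values and return its final loss; it is not part of the watsonx.ai Python library.

import skopt
from skopt.utils import use_named_args

# Search space from the previous snippet
SPACE = [
    skopt.space.Real(0.001, 0.09, name='learning_rate', prior='log-uniform'),
    skopt.space.Integer(1, 50, name='num_epochs', prior='uniform'),
    skopt.space.Integer(1, 16, name='batch_size', prior='uniform')
]

@use_named_args(SPACE)
def objective(learning_rate, num_epochs, batch_size):
    # Hypothetical helper: run one prompt-tuning experiment with these
    # parameter values and return its final training loss
    return run_tuning_experiment(
        learning_rate=learning_rate,
        num_epochs=num_epochs,
        batch_size=batch_size
    )

# Bayesian optimization over the search space; each call runs one experiment
search_result = skopt.gp_minimize(objective, SPACE, n_calls=10, random_state=0)

print("Lowest loss:", search_result.fun)
print("Best parameter values:", search_result.x)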
Using the sample notebook to evaluate the tuned model
The sample notebook has code that deploys the tuned model, inferences the tuned model, and then calculates the accuracy of the tuned model output. It also inferences the underlying foundation model and calculates the accuracy of the base model output, so that you can see a comparison.
If you want to use the sample notebook to tune and assess other models, you can replace the value of the model_id parameter in the following code.
base_model = ModelInference(
    model_id='ibm/granite-13b-instruct-v2',
    params=generate_params,
    api_client=client
)
For example, specify google/flan-t5-xl.
You must also replace the prompt text with a prompt from your own training data set.
response = tuned_model.generate_text(prompt="{your text here}")
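To compare the tuned model with the base model, you can score both sets of generated outputs against the expected labels from your own data, as in the following sketch. The prompts, expected labels, and the exact-match comparison are illustrative only; adapt them to your task, and note that scikit-learn is used here for the accuracy calculation.

from sklearn.metrics import accuracy_score

# Illustrative evaluation set: prompts paired with their expected labels
prompts = ["{prompt 1}", "{prompt 2}", "{prompt 3}"]
expected_labels = ["{label 1}", "{label 2}", "{label 3}"]

def predict(model, prompts):
    # Generate a response for each prompt and strip whitespace for comparison
    return [model.generate_text(prompt=p).strip() for p in prompts]

tuned_predictions = predict(tuned_model, prompts)
base_predictions = predict(base_model, prompts)

print("Tuned model accuracy:", accuracy_score(expected_labels, tuned_predictions))
print("Base model accuracy:", accuracy_score(expected_labels, base_predictions))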
If the accuracy score for your tuned model is low, review some ideas for ways to improve your training data in Addressing data quality problems in tuned model output.
Remember that optimization of the tuning parameters is specific to the model and the training data that you are using. If you change either the model or the training data, reassess the tuning experiment and adjust the tuning parameters again to optimize them for your updated data set.
Parent topic: Python library