Tuning a foundation model in watsonx.ai is an iterative process. You run a tuning experiment and then evaluate the results. If necessary, you change experiment variables and rerun the experiment repeatedly until you are satisfied with the output from the tuned foundation model.
Check your progress after each experiment run. Find and address any limitations in the tuning experiment configuration before you assess your training data for potential problems.
Workflow for improving tuning experiment results
There is no one right set of tuning parameters or training data examples to use. The best tuning parameter settings and data set sizes vary based on your data, the foundation model you use, and the type of task you want the model to do. Follow these steps to save time and stay on track as you experiment.
You can use the Tuning Studio to complete these steps or use sample notebooks to do them programmatically.
1. Before you begin your experimentation, create or preserve a subset of your tuning training data to use as a test data set (see the data-split sketch after these steps).
2. Run a tuning experiment with the default tuning parameters.
3. Check the loss function for the experiment run (see the loss-curve sketch after these steps).
   The tuned model is performing well when the loss function has a downward-sloping curve that levels off near zero.
4. If necessary, adjust the tuning parameter values and rerun the experiment until the loss function levels off near zero (see the parameter-tracking sketch after these steps).
5. Test the quality of the tuned model by submitting prompts from the test data set (see the evaluation sketch after these steps).
6. If necessary, revise or augment the training data.
   When new data is introduced, more tuning parameter optimizations might be possible. Rerun the experiment, and then repeat the steps in this workflow starting from Step 3.
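For Step 1, the following is a minimal sketch of one way to hold out a test set, assuming your training data is a local JSON Lines file of input and output pairs. The file names and the 10% split are illustrative choices, not product requirements.

```python
import json
import random

# Illustrative file names; substitute the paths to your own tuning data.
SOURCE_FILE = "tuning_data.jsonl"
TRAIN_FILE = "tuning_train.jsonl"
TEST_FILE = "tuning_test.jsonl"
TEST_FRACTION = 0.1  # hold out roughly 10% of the examples for testing

with open(SOURCE_FILE, encoding="utf-8") as f:
    examples = [json.loads(line) for line in f if line.strip()]

random.seed(42)  # make the split reproducible
random.shuffle(examples)

split_point = int(len(examples) * TEST_FRACTION)
test_set, train_set = examples[:split_point], examples[split_point:]

for path, rows in ((TRAIN_FILE, train_set), (TEST_FILE, test_set)):
    with open(path, "w", encoding="utf-8") as out:
        for row in rows:
            out.write(json.dumps(row) + "\n")

print(f"{len(train_set)} training examples, {len(test_set)} held-out test examples")
```

Upload only the training file to the tuning experiment and keep the test file aside for Step 5.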
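For Step 3, the Tuning Studio displays the loss chart for the experiment run. If you work programmatically, you can plot the per-epoch loss values yourself; this sketch assumes you have already extracted them into a list, and the numbers shown are placeholders only.

```python
import matplotlib.pyplot as plt

# Placeholder values; replace with the per-epoch loss reported by your experiment run.
epoch_loss = [2.8, 1.9, 1.2, 0.7, 0.4, 0.25, 0.18, 0.15, 0.14, 0.14]

plt.plot(range(1, len(epoch_loss) + 1), epoch_loss, marker="o")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.title("Tuning experiment loss curve")
plt.grid(True)
plt.show()

# A healthy curve slopes downward and levels off near zero. If the curve is still
# falling at the last epoch, or it plateaus well above zero, adjust the tuning
# parameters (Step 4) and rerun the experiment.
```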
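For Step 4, it helps to record which parameter values you tried in each run so that you compare runs deliberately instead of changing several settings at once. The list below is only an illustration of that bookkeeping; the parameter names reflect common tuning knobs such as epochs, learning rate, and batch size, and the numbers are made-up examples, not recommended or default values.

```python
# Illustrative log of experiment runs; none of these values are defaults or recommendations.
experiment_runs = [
    {"run": 1, "num_epochs": 20, "learning_rate": 0.3, "batch_size": 16, "final_loss": 0.42},
    {"run": 2, "num_epochs": 40, "learning_rate": 0.3, "batch_size": 16, "final_loss": 0.21},
    {"run": 3, "num_epochs": 40, "learning_rate": 0.2, "batch_size": 8, "final_loss": 0.15},
]

# Pick the run whose loss curve leveled off closest to zero.
best = min(experiment_runs, key=lambda run: run["final_loss"])
print("Best run so far:", best)
```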
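For Step 5, you can score the tuned model's output against the expected output in your held-out test set. This sketch assumes the test file written in the Step 1 sketch, with input and output fields per example, and a hypothetical generate() placeholder that you replace with the inference call to your tuned model (for example, from a sample notebook). The token-overlap score is a crude stand-in; choose a metric that fits your task.

```python
import json

def generate(prompt: str) -> str:
    """Hypothetical placeholder: call your tuned model's deployment and return its text output."""
    raise NotImplementedError("Replace with your own inference call to the tuned model.")

def token_overlap(expected: str, actual: str) -> float:
    """Crude quality signal: fraction of expected tokens that appear in the model output."""
    expected_tokens = expected.lower().split()
    actual_tokens = set(actual.lower().split())
    if not expected_tokens:
        return 0.0
    return sum(token in actual_tokens for token in expected_tokens) / len(expected_tokens)

# The held-out set written in the Step 1 sketch; each example has "input" and "output" fields.
with open("tuning_test.jsonl", encoding="utf-8") as f:
    test_set = [json.loads(line) for line in f if line.strip()]

scores = [token_overlap(ex["output"], generate(ex["input"])) for ex in test_set]
print(f"Average overlap with expected output: {sum(scores) / len(scores):.2f}")
```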
Parent topic: Tuning Studio