There are two main reasons to tune a foundation model. By tuning a model on many labeled examples, you can improve its performance compared to prompt engineering alone. And by tuning a smaller base model to perform similarly to a bigger model in the same model family, you can reduce costs by deploying the smaller model.
Required services
watsonx.ai Studio
watsonx.ai Runtime
Your basic workflow includes these tasks:
Open a project. Projects are where you can collaborate with others to work with data.
Add your data to the project. You can upload data files, or add data from a remote data source through a connection.
Create a Tuning experiment in the project. The tuning experiment uses the Tuning Studio experiment builder.
Review the results of the experiment and the tuned model. The results include a Loss Function chart and the details of the tuned model.
Deploy and test your tuned model. Test your model in the Prompt Lab.
Read about tuning a foundation model
Prompt tuning adjusts the content of the prompt that is passed to the model. The underlying foundation model and its parameters are not edited; only the prompt input is altered. You use the Tuning Studio to tune a prompt that guides the foundation model to return the output that you want.
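To build intuition for what the tuning experiment trains, the following minimal PyTorch sketch shows the core idea of prompt tuning: a small set of trainable soft-prompt vectors is prepended to the embedded input while the foundation model's weights stay frozen. This is an illustration of the technique, not the Tuning Studio implementation; the model, dimensions, and token count are made up.

```python
# Conceptual sketch of prompt tuning (illustration only, not the Tuning Studio code).
import torch
import torch.nn as nn

class SoftPromptWrapper(nn.Module):
    """Prepends trainable soft-prompt vectors to a frozen model's input embeddings."""

    def __init__(self, frozen_model: nn.Module, embed_dim: int, num_prompt_tokens: int = 20):
        super().__init__()
        self.frozen_model = frozen_model
        for p in self.frozen_model.parameters():
            p.requires_grad = False  # the base model is never updated
        # Only these vectors are trained during the tuning experiment
        self.soft_prompt = nn.Parameter(torch.randn(num_prompt_tokens, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, embed_dim) -- the already-embedded prompt text
        batch = input_embeds.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        return self.frozen_model(torch.cat([prompt, input_embeds], dim=1))

# Toy usage with a hypothetical stand-in "model" that maps 16-dim embeddings to 16-dim outputs
toy_model = nn.Linear(16, 16)
wrapper = SoftPromptWrapper(toy_model, embed_dim=16)
out = wrapper(torch.randn(2, 5, 16))  # output covers 20 prompt tokens + 5 input tokens
```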
Watch this video to see when and why you should tune a foundation model.
This video provides a visual method to learn the concepts and tasks in this documentation.
Watch this video to preview the steps in this tutorial. There might be slight differences in the user interface that is shown in the video.
The video is intended to be a companion to the written tutorial.
Tips for completing this tutorial
Here are some tips for successfully completing this tutorial.
Use the video picture-in-picture
Tip: Start the video, and then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best picture-in-picture experience. Picture-in-picture mode lets you follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.
The following animated image shows how to use the video picture-in-picture and table of contents features:
For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser
windows side-by-side to make it easier to follow along.
Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later.
Task 1: Open a project
To preview this task, watch the video beginning at 00:04.
You need a project to store the tuning experiment. Watch a video to see how to create a sandbox project and associate a service. Then follow the steps to verify that you have an existing project or create a project.
Verify an existing project or create a new project
If you don't see any projects, you can watch this video, and then follow the steps to create a project.
Click Create a sandbox project. When the project is created, you see the sandbox in the Projects section.
Open an existing project or the new sandbox project.
Associate the watsonx.ai Runtime service with the project
You use watsonx.ai Runtime to tune the foundation model, so follow these steps to associate your watsonx.ai Runtime service instance with your project.
In the project, click the Manage tab.
Click the Services & Integrations page.
Check whether this project has an associated watsonx.ai Runtime service. If there is no associated service, then follow these steps:
Click Associate service.
Check the box next to your watsonx.ai Runtime service instance.
Click Associate.
If necessary, click Cancel to return to the Services & Integrations page.
The following image shows the Manage tab with the associated service.
Task 2: Test your base model
To preview this task, watch the video beginning at 00:19.
First, test the base model in the Prompt Lab so that you can later compare its output with the tuned model's output. Follow these steps to test the base model:
Return to the watsonx home screen.
Verify that your sandbox project is selected.
Click the Open Prompt Lab tile.
Select the base model.
Click the model drop-down list, and select View all foundation models.
Select the granite-13b-instruct-v2 model.
Click Select model.
Click the Structured tab.
For the Instruction, type:
Summarize customer complaints
Provide the examples and test input.
Example input and output:
Example input: I forgot in my initial date I was using Capital One and this debt was in their hands and never was done.
Example output: Debt collection, sub-product: credit card debt, issue: took or threatened to take negative or legal action, sub-issue:
Example input: I am a victim of identity theft and this debt does not belong to me. Please see the identity theft report and legal affidavit.
Example output: Debt collection, sub-product: I do not know, issue: attempts to collect debt not owed, sub-issue: debt was a result of identity theft
In the Try text field, copy and paste the following prompt:
After I reviewed my credit report, I am still seeing information that is reporting on my credit file that is not mine. please help me in getting these items removed from my credit file.
Click Generate, and review the results. Note the output for the base model so that you can compare this output to the output from the tuned model.
Click Save work > Save as.
Select Prompt template.
For the name, type Base model prompt.
For the Task, select Summarization.
Select View in project after saving.
Click Save.
Check your progress
The following image shows results in the Prompt Lab.
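For reference, Structured mode combines the Instruction, the examples, and the Try text into a single prompt before sending it to the model. The exact template that Prompt Lab uses is not shown in this tutorial, so the layout in the following Python sketch (newline-separated Input/Output pairs) is an assumption for illustration:

```python
# Illustrative assembly of a structured prompt; the real Prompt Lab template may differ.
instruction = "Summarize customer complaints"
examples = [
    ("I am a victim of identity theft and this debt does not belong to me. "
     "Please see the identity theft report and legal affidavit.",
     "Debt collection, sub-product: I do not know, issue: attempts to collect "
     "debt not owed, sub-issue: debt was a result of identity theft"),
]
try_text = ("After I reviewed my credit report, I am still seeing information that "
            "is reporting on my credit file that is not mine.")

parts = [instruction]
for example_input, example_output in examples:
    parts.append(f"Input: {example_input}\nOutput: {example_output}")
parts.append(f"Input: {try_text}\nOutput:")  # the model completes this final Output
print("\n\n".join(parts))
```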
Task 3: Add your data to the project
To preview this task, watch the video beginning at 01:12.
You need to add the training data to your project. On the Resource hub page, you can find the customer complaints data set. This data set includes fictitious data of typical customer complaints regarding credit reports. Follow these steps
to add the data set from the Resource hub to the project:
Click View project to see the asset in your project.
Check your progress
The following image shows the data asset added to the project. The next step is to create the Tuning experiment.
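Before you configure the experiment, it can help to know what prompt-tuning training data looks like: a set of example input and output pairs. The field names and rows below are illustrative assumptions; the actual customer complaints training data.json file from the Resource hub may differ in detail.

```python
# Hypothetical rows showing the input/output pair shape that prompt tuning trains on.
import json

rows = [
    {
        "input": "I am a victim of identity theft and this debt does not belong to me.",
        "output": "Debt collection, sub-product: I do not know, issue: attempts to "
                  "collect debt not owed, sub-issue: debt was a result of identity theft",
    },
]
print(json.dumps(rows, indent=2))
```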
Task 4: Create a Tuning experiment in the project
To preview this task, watch the video beginning at 01:32.
Now you are ready to create a tuning experiment in your sandbox project that uses the data set you just added to the project. Follow these steps to create a Tuning experiment:
Return to the watsonx home screen.
Verify that your sandbox project is selected.
Click Tune a foundation model with labeled data.
For the name, type:
Summarize customer complaints tuned model
For the description, type:
Tuning Studio experiment to tune a foundation model to handle customer complaints.
Click Create. The Tuning Studio displays.
Check your progress
The following image shows the Tuning experiment open in Tuning Studio. Now you are ready to configure the tuning experiment.
Task 5: Configure the Tuning experiment
To preview this task, watch the video beginning at 01:47.
In the Tuning Studio, you can configure the tuning experiment. Follow these steps to configure the tuning experiment:
For the foundation model to tune, click Select a foundation model.
Select granite-13b-instruct-v2.
Click Select.
Select Text for the method to initialize the prompt. There are two options:
Text: Uses text that you specify.
Random: Uses values that are generated for you as part of the tuning experiment.
For the Text field, type:
Summarize the complaint provided into one sentence.
The following examples show initialization text for each task type:
Classification: Classify whether the sentiment of each comment is Positive or Negative
Generation: Make the case for allowing employees to work from home a few days a week
Summarization: Summarize the main points from a meeting transcript
Select Summarization for the task type that most closely matches what you want the model to do. There are three task types:
Summarization generates text that describes the main ideas that are expressed in a body of text.
Generation generates text such as a promotional email.
Classification predicts categorical labels from features. For example, given a set of customer comments, you might want to label each statement as a question or a problem. When you use the classification task, you need to
list the class labels that you want the model to use. Specify the same labels that are used in your tuning training data.
Select your training data from the project.
Click Select from project.
Click Data asset.
Select the customer complaints training data.json file.
Click Select asset.
Click Start tuning.
Check your progress
The following image shows the configured tuning experiment. Next, you review the results and deploy the tuned model.
Task 6: Deploy your tuned model to a deployment space
To preview this task, watch the video beginning at 03:17.
When the experiment run is complete, you see the tuned model and the Loss function chart. The loss function measures the difference between the predicted and the actual results during each training run. Follow these steps to view the loss function chart and the tuned model:
Review the Loss function chart. A downward sloping curve means that the model is getting better at generating the expected output.
Below the chart, click the Summarize customer complaints tuned model.
Scroll through the model details.
Click Deploy.
For the name, type: Summarize customer complaints tuned model
For the Deployment container, select Deployment space.
For the Target deployment space, select an existing deployment space. If you don't have an existing deployment space, follow these steps:
For the Target deployment space, select Create a new deployment space.
For the deployment space name, type: Foundation models deployment space
Select a storage service from the list.
Select your provisioned watsonx.ai Runtime service from the list.
Click Create.
Click Close.
For the Target deployment space, verify that Foundation models deployment space is selected.
Check the View deployment in deployment space after creating option.
Click Create.
On the Deployments page, click the Summarize customer complaints tuned model deployment to view the details.
Check your progress
The following image shows the deployment in the deployment space. You are now ready to test the deployed model.
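If you export the loss values, you can also plot the curve yourself. The numbers in the following sketch are made up for illustration; only the downward-sloping shape matters, because it indicates that the tuned prompt is converging on the expected output.

```python
# Plot an illustrative loss curve (values are hypothetical, not from the experiment).
import matplotlib.pyplot as plt

epochs = list(range(1, 11))
loss = [2.8, 2.1, 1.7, 1.4, 1.2, 1.05, 0.95, 0.90, 0.87, 0.85]  # made-up values

plt.plot(epochs, loss, marker="o")
plt.xlabel("Epoch")
plt.ylabel("Training loss")
plt.title("A downward-sloping loss curve means the tuning is converging")
plt.show()
```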
Task 7: Test your tuned model
To preview this task, watch the video beginning at 04:04.
You can test your tuned model in the Prompt Lab. Follow these steps to test your tuned model:
From the model deployment page, click Open in prompt lab, and then select your sandbox project. The Prompt Lab displays.
Select your tuned model.
Click the model drop-down list, and select View all foundation models.
Select the Summarize customer complaints tuned model model.
Click Select model.
On the Structured mode page, for the Instruction, type: Summarize customer complaints
On the Structured mode page, provide the examples and test input.
Example input and output:
Example input: I forgot in my initial date I was using Capital One and this debt was in their hands and never was done.
Example output: Debt collection, sub-product: credit card debt, issue: took or threatened to take negative or legal action, sub-issue:
Example input: I am a victim of identity theft and this debt does not belong to me. Please see the identity theft report and legal affidavit.
Example output: Debt collection, sub-product: I do not know, issue: attempts to collect debt not owed, sub-issue: debt was a result of identity theft
In the Try text field, copy and paste the following prompt:
After I reviewed my credit report, I am still seeing information that is reporting on my credit file that is not mine. please help me in getting these items removed from my credit file.
Click Generate, and review the results. Compare the output from the base model with this output from the tuned model.
Check your progress
The following image shows results in the Prompt Lab.
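Besides the Prompt Lab, you can send prompts to the deployed tuned model programmatically. The following sketch assumes the watsonx.ai deployment text-generation REST endpoint; the region, deployment ID, API version date, and token are placeholders that you must replace, and the URL pattern is an assumption to verify against the API reference for your environment.

```python
# Sketch: call the deployed tuned model over REST (placeholders are assumptions).
import requests

REGION = "us-south"                     # assumption: your watsonx.ai region
DEPLOYMENT_ID = "<your-deployment-id>"  # from the deployment details page
API_VERSION = "2023-05-29"              # assumption: a supported version date
TOKEN = "<IAM access token>"            # obtain from IBM Cloud IAM

url = (f"https://{REGION}.ml.cloud.ibm.com/ml/v1/deployments/"
       f"{DEPLOYMENT_ID}/text/generation?version={API_VERSION}")
payload = {
    "input": ("After I reviewed my credit report, I am still seeing information "
              "that is reporting on my credit file that is not mine."),
    "parameters": {"max_new_tokens": 100},
}
response = requests.post(
    url,
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"},
)
print(response.json()["results"][0]["generated_text"])
```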