Quick start: Tune a foundation model
Last updated: Nov 27, 2024

There are two main reasons to tune a foundation model. By tuning a model on many labeled examples, you can enhance its performance compared to prompt engineering alone. By tuning a smaller base model to perform similarly to a larger model in the same model family, you can reduce costs by deploying the smaller model.

Required services
watsonx.ai Studio
watsonx.ai Runtime

Your basic workflow includes these tasks:

  1. Open a project. Projects are where you can collaborate with others to work with data.
  2. Add your data to the project. You can upload data files, or add data from a remote data source through a connection.
  3. Create a Tuning experiment in the project. The tuning experiment uses the Tuning Studio experiment builder.
  4. Review the results of the experiment and the tuned model. The results include a Loss Function chart and the details of the tuned model.
  5. Deploy and test your tuned model. Test your model in the Prompt Lab.

Read about tuning a foundation model

Prompt tuning adjusts the content of the prompt that is passed to the model. The underlying foundation model and its parameters are not edited. Only the prompt input is altered. You tune a model with the Tuning Studio to guide an AI foundation model to return the output you want.
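
Conceptually, prompt tuning learns a small set of "soft prompt" vectors that are prepended to the input embeddings while the foundation model's own weights stay frozen. The following is a minimal sketch of that mechanic only; all names, dimensions, and values are illustrative, not the watsonx.ai implementation:

```python
# Conceptual sketch of prompt tuning: trainable "soft prompt" vectors are
# prepended to the (frozen) input embeddings. Dimensions are illustrative.

NUM_PROMPT_TOKENS = 4   # hypothetical tuned-prompt length
EMBED_DIM = 8           # hypothetical embedding size

# Trainable soft prompt (the only part updated during tuning).
soft_prompt = [[0.01 * (i + j) for j in range(EMBED_DIM)]
               for i in range(NUM_PROMPT_TOKENS)]

# Frozen embeddings for a 3-token user input (never updated by tuning).
input_embeddings = [[0.5] * EMBED_DIM for _ in range(3)]

def build_model_input(soft_prompt, input_embeddings):
    """Prepend the soft prompt vectors to the user's input embeddings."""
    return soft_prompt + input_embeddings

model_input = build_model_input(soft_prompt, input_embeddings)
print(len(model_input))  # 7 rows: 4 soft-prompt vectors + 3 input tokens
```

Because only the soft prompt is trained, the tuned artifact is small and the same frozen base model can serve many tuned prompts.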

Watch this video to see when and why you should tune a foundation model.

This video provides a visual method to learn the concepts and tasks in this documentation.

Read more about Tuning Studio

Watch a video about tuning a foundation model

Watch this video to preview the steps in this tutorial. There might be slight differences in the user interface that is shown in the video. The video is intended to be a companion to the written tutorial.



Try a tutorial to tune a foundation model

In this tutorial, you will complete these tasks:

  • Task 1: Open a project
  • Task 2: Test your base model
  • Task 3: Add your data to the project
  • Task 4: Create a Tuning experiment in the project
  • Task 5: Configure the Tuning experiment
  • Task 6: Deploy your tuned model to a deployment space
  • Task 7: Test your tuned model

Tips for completing this tutorial
Here are some tips for successfully completing this tutorial.

Use the video picture-in-picture

Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. Picture-in-picture mode lets you follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.

The following animated image shows how to use the video picture-in-picture and table of contents features:

How to use picture-in-picture and chapters

Get help in the community

If you need help with this tutorial, you can ask a question or find an answer in the watsonx Community discussion forum.

Set up your browser windows

For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along.

Side-by-side tutorial and UI

Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later.



Task 1: Open a project

To preview this task, watch the video beginning at 00:04.

You need a project to store the tuning experiment. Watch a video to see how to create a sandbox project and associate a service. Then follow the steps to verify that you have an existing project or create a project.

Verify an existing project or create a new project

  1. From the watsonx home screen, scroll to the Projects section. If you see any projects that are listed, then skip to Associate the watsonx.ai Runtime service.

    If you don't see any projects, you can watch this video, and then follow the steps to create a project.


  2. Click Create a sandbox project. When the project is created, you see the sandbox in the Projects section.

  3. Open an existing project or the new sandbox project.

Associate the watsonx.ai Runtime service with the project

You use watsonx.ai Runtime to tune the foundation model, so follow these steps to associate your watsonx.ai Runtime service instance with your project.

  1. In the project, click the Manage tab.

  2. Click the Services & Integrations page.

  3. Check whether this project has an associated watsonx.ai Runtime service. If there is no associated service, then follow these steps:

    1. Click Associate service.

    2. Check the box next to your watsonx.ai Runtime service instance.

    3. Click Associate.

    4. If necessary, click Cancel to return to the Services & Integrations page.

For more information or to watch a video, see Creating a project and Adding associated services to a project.

Check your progress

The following image shows the Manage tab with the associated service. You are now ready to test the base model in the Prompt Lab.

Manage tab in the project




Task 2: Test your base model

To preview this task, watch the video beginning at 00:19.

In this task, you test the base model in the Prompt Lab so that you can later compare its output to the tuned model's output. Follow these steps to test the base model:

  1. Return to the watsonx home screen.

  2. Verify that your sandbox project is selected.

    Select the sandbox project

  3. Click the Open Prompt Lab tile.

  4. Select the base model.

    1. Click the model drop-down list, and select View all foundation models.
    2. Select the granite-13b-instruct-v2 model.
    3. Click Select model.
  5. Click the Structured tab.

  6. For the Instruction, type:

    Summarize customer complaints
    
  7. Provide the examples and test input.

    Example input and output

    Example 1 input: I forgot in my initial date I was using Capital One and this debt was in their hands and never was done.
    Example 1 output: Debt collection, sub-product: credit card debt, issue: took or threatened to take negative or legal action, sub-issue
    Example 2 input: I am a victim of identity theft and this debt does not belong to me. Please see the identity theft report and legal affidavit.
    Example 2 output: Debt collection, sub-product: I do not know, issue: attempts to collect debt not owed, sub-issue: debt was a result of identity theft

  8. In the Try text field, copy and paste the following prompt:

    After I reviewed my credit report, I am still seeing information that is reporting on my credit file that is not mine. please help me in getting these items removed from my credit file.
    
  9. Click Generate, and review the results. Note the output for the base model so that you can compare this output to the output from the tuned model.

  10. Click Save work > Save as.

  11. Select Prompt template.

  12. For the name, type Base model prompt.

  13. For the Task, select Summarization.

  14. Select View in project after saving.

  15. Click Save.
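
The instruction, example pairs, and "Try" text from the steps above are combined into one few-shot prompt before being sent to the model. The following sketch shows that idea only; the exact template watsonx.ai assembles internally may differ:

```python
# Illustrative sketch: assembling a few-shot structured prompt from an
# instruction, worked examples, and a new input. Template is hypothetical.

instruction = "Summarize customer complaints"
examples = [
    ("I am a victim of identity theft and this debt does not belong to me.",
     "Debt collection, issue: attempts to collect debt not owed"),
]
try_text = "I am still seeing information on my credit file that is not mine."

def build_prompt(instruction, examples, try_text):
    """Join the instruction, example pairs, and new input into one prompt."""
    parts = [instruction, ""]
    for example_input, example_output in examples:
        parts += [f"Input: {example_input}", f"Output: {example_output}", ""]
    parts += [f"Input: {try_text}", "Output:"]
    return "\n".join(parts)

prompt = build_prompt(instruction, examples, try_text)
print(prompt.endswith("Output:"))  # True: the model completes the last line
```

The trailing bare "Output:" is what invites the model to generate the summary for the new input.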

Check your progress

The following image shows results in the Prompt Lab.





Task 3: Add your data to the project

To preview this task, watch the video beginning at 01:12.

You need to add the training data to your project. On the Resource hub page, you can find the customer complaints data set. This data set includes fictitious data of typical customer complaints regarding credit reports. Follow these steps to add the data set from the Resource hub to the project:

  1. Access the Customer complaints data set on the Resource hub page.
  2. Click Add to project.
  3. Select your sandbox project.
  4. Click Add.
  5. Click View project to see the asset in your project.
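
Tuning training data is typically a JSON file of input/output text pairs. The following minimal sketch checks for that shape; the field names are an assumption, so verify them against the customer complaints data set you added:

```python
import json

# Hypothetical shape check for prompt-tuning training data: a JSON array of
# objects with "input" and "output" text fields. Field names are assumed.

sample = '''[
  {"input": "complaint text goes here", "output": "summary label goes here"}
]'''

records = json.loads(sample)
valid = all(
    isinstance(rec.get("input"), str) and isinstance(rec.get("output"), str)
    for rec in records
)
print(valid)  # True when every record has string input and output fields
```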

Check your progress

The following image shows the data asset added to the project. The next step is to create the Tuning experiment.





Task 4: Create a Tuning experiment in the project

To preview this task, watch the video beginning at 01:32.

Now you are ready to create a tuning experiment in your sandbox project that uses the data set you just added to the project. Follow these steps to create a Tuning experiment:

  1. Return to the watsonx home screen.

  2. Verify that your sandbox project is selected.

    Select the sandbox project

  3. Click Tune a foundation model with labeled data.

  4. For the name, type:

    Summarize customer complaints tuned model
    
  5. For the description, type:

    Tuning Studio experiment to tune a foundation model to handle customer complaints.
    
  6. Click Create. The Tuning Studio displays.

Check your progress

The following image shows the Tuning experiment open in Tuning Studio. Now you are ready to configure the tuning experiment.





Task 5: Configure the Tuning experiment

To preview this task, watch the video beginning at 01:47.

In the Tuning Studio, you can configure the tuning experiment. The foundation model to tune is completed for you. Follow these steps to configure the tuning experiment:

  1. For the foundation model to tune, click Select a foundation model.

    1. Select granite-13b-instruct-v2.

    2. Click Select.

  2. Select Text for the method to initialize the prompt. There are two options:

    • Text: Uses text that you specify.
    • Random: Uses values that are generated for you as part of the tuning experiment.
  3. For the Text field, type:

    Summarize the complaint provided into one sentence.
    

    The following table shows example text for each task type:

    Example text for each task type

    Classification: Classify whether the sentiment of each comment is Positive or Negative
    Generation: Make the case for allowing employees to work from home a few days a week
    Summarization: Summarize the main points from a meeting transcript

  4. Select Summarization for the task type that most closely matches what you want the model to do. There are three task types:

    • Summarization generates text that describes the main ideas that are expressed in a body of text.
    • Generation generates text such as a promotional email.
    • Classification predicts categorical labels from features. For example, given a set of customer comments, you might want to label each statement as a question or a problem. When you use the classification task, you need to list the class labels that you want the model to use. Specify the same labels that are used in your tuning training data.
  5. Select your training data from the project.

    1. Click Select from project.
    2. Click Data asset.
    3. Select the customer complaints training data.json file.
    4. Click Select asset.
    5. Click Start tuning.
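
The choices made in the Tuning Studio steps above can be summarized as one configuration record. This sketch is illustrative only; the field names are not the exact watsonx.ai API payload:

```python
# Illustrative summary of the tuning experiment configured in this task.
# Field names are hypothetical, not an exact API payload.

tuning_config = {
    "name": "Summarize customer complaints tuned model",
    "base_model": "granite-13b-instruct-v2",
    "task_type": "summarization",
    "prompt_initialization": {
        "method": "text",  # the UI offers "text" or "random"
        "text": "Summarize the complaint provided into one sentence.",
    },
    "training_data": "customer complaints training data.json",
}

def validate(config):
    """Check the two fields the UI requires you to choose explicitly."""
    assert config["prompt_initialization"]["method"] in {"text", "random"}
    assert config["task_type"] in {"summarization", "generation", "classification"}
    return True

print(validate(tuning_config))  # True
```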

Check your progress

The following image shows the configured tuning experiment. Next, you review the results and deploy the tuned model.





Task 6: Deploy your tuned model to a deployment space

To preview this task, watch the video beginning at 03:17.

When the experiment run is complete, you see the tuned model and the Loss function chart. The loss function measures the difference between the predicted and the expected results after each training epoch. Follow these steps to view the loss function chart and the tuned model:

  1. Review the Loss function chart. A downward sloping curve means that the model is getting better at generating the expected output.

    Completed tuning experiment

  2. Below the chart, click the Summarize customer complaints tuned model.

  3. Scroll through the model details.

  4. Click Deploy.

  5. For the name, type: Summarize customer complaints tuned model

  6. For the Deployment container, select Deployment space.

  7. For the Target deployment space, select an existing deployment space. If you don't have an existing deployment space, follow these steps:

    1. For the Target deployment space, select Create a new deployment space.
    2. For the deployment space name, type: Foundation models deployment space
    3. Select a storage service from the list.
    4. Select your provisioned machine learning service from the list.
    5. Click Create.
    6. Click Close.
    7. For the Target deployment space, verify that Foundation models deployment space is selected.
  8. Check the View deployment in deployment space after creating option.

  9. Click Create.

  10. On the Deployments page, click the Summarize customer complaints tuned model deployment to view the details.
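
The downward-sloping curve described in step 1 can also be checked numerically. This minimal sketch shows the idea behind the Loss function chart; the values are illustrative, not taken from a real experiment:

```python
# Illustrative per-epoch training loss, as plotted by the Loss function chart.
# A downward trend means the tuned prompt is improving. Values are made up.

loss_per_epoch = [2.31, 1.84, 1.52, 1.33, 1.21, 1.15]

def is_improving(losses):
    """True when loss at the end of training is lower than at the start."""
    return losses[-1] < losses[0]

print(is_improving(loss_per_epoch))  # True
```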

Check your progress

The following image shows the deployment in the deployment space. You are now ready to test the deployed model.





Task 7: Test your tuned model

To preview this task, watch the video beginning at 04:04.

You can test your tuned model in the Prompt Lab. Follow these steps to test your tuned model:

  1. From the model deployment page, click Open in prompt lab, and then select your sandbox project. The Prompt Lab displays.

  2. Select your tuned model.

    1. Click the model drop-down list, and select View all foundation models.
    2. Select the Summarize customer complaints tuned model.
    3. Click Select model.
  3. On the Structured mode page, type the Instruction: Summarize customer complaints

  4. On the Structured mode page, provide the examples and test input.

    Example input and output

    Example 1 input: I forgot in my initial date I was using Capital One and this debt was in their hands and never was done.
    Example 1 output: Debt collection, sub-product: credit card debt, issue: took or threatened to take negative or legal action, sub-issue
    Example 2 input: I am a victim of identity theft and this debt does not belong to me. Please see the identity theft report and legal affidavit.
    Example 2 output: Debt collection, sub-product: I do not know, issue: attempts to collect debt not owed, sub-issue: debt was a result of identity theft

  5. In the Try text field, copy and paste the following prompt:

    After I reviewed my credit report, I am still seeing information that is reporting on my credit file that is not mine. please help me in getting these items removed from my credit file.
    
  6. Click Generate, and review the results. Compare the output from the base model with this output from the tuned model.

Check your progress

The following image shows results in the Prompt Lab.




Next steps

Try these other tutorials:

Additional resources

Parent topic: Quick start tutorials
