Deploying AI services with templates
Last updated: Dec 18, 2024

You can use pre-defined templates to deploy your AI services in watsonx.ai. AI service templates provide a pre-built foundation for AI applications, enabling developers to focus on the core logic of their application, rather than starting from scratch.

Standardizing AI service deployments with templates

AI service templates are pre-built, reusable, and customizable components that provide a structured approach to deploying and managing generative AI applications. They offer a standardized way to package, deploy, and integrate AI models with other applications and systems, enabling developers to focus on building and training models without worrying about the underlying infrastructure and deployment logistics. By providing a pre-defined structure, configuration, and set of tools, AI service templates simplify the process of deploying AI services, reduce the risk of errors, and improve the overall efficiency and consistency of AI development and deployment.

Components of AI service templates

The components of an AI service template are as follows:

  1. Source directory: The source directory contains the code used by the deployed functions (from the ai_service.py file). Upon deployment, the source directory is packaged and sent to IBM Cloud as a package extension.

  2. Core application logic: The core application logic is contained in the ai_service.py file. This file encompasses the functions to be deployed, including the application's core logic, input schema definition, and authentication code.

  3. Configuration file: The config.toml file stores the deployment metadata and configuration settings for the model.

  4. Tests: The tests/ directory contains the unit tests for the template, including tests for tools and utility functions.

  5. Deployment scripts: The scripts/deploy.py script deploys the template on IBM Cloud. The examples/execute_ai_service_locally.py script runs the AI service locally. The examples/query_existing_deployment.py script sends inference requests to an existing deployment.

  6. Project configuration: The pyproject.toml file manages dependencies and packages for the project.

Deploying AI services with templates

Follow these steps to deploy AI services with templates:

  1. Prepare the template: To prepare the template, you must clone the template repository, install the required dependencies and tools, such as Pipx or Poetry, set up the environment on your local system, and activate the virtual environment. This ensures that the template is properly configured and ready for deployment.

  2. Configure the template: Configure the template by filling in the config.toml file with the necessary credentials and configuration settings. This includes customizing the model with the application logic as needed to suit the specific requirements of the AI service. The configuration file stores deployment metadata and configuration settings for the model, and is used to tweak the model for local runs.

    You can also provide additional key-value data to the app by using Parameter sets or the CUSTOM object in the config.toml file. To learn more about storing and managing parameter sets, see Parameter sets in the watsonx.ai Python client library documentation.

    For handling external credentials, you can use IBM Secrets Manager. Secrets Manager is a service that enables you to securely store and manage sensitive information, such as API keys and passwords. By using Secrets Manager, you can keep your credentials out of your code and configuration files, which helps to improve the security of your application. For more information, see IBM Cloud Secrets Manager API documentation.

  3. Test the template: Before deploying the template, test it to make sure that it works correctly. Run the unit tests to verify that the tools and utility functions behave as expected, and then run the examples/execute_ai_service_locally.py script to start the AI service locally and test it with sample inputs.

  4. Deploy the template: After the template is tested and validated, deploy it by running the scripts/deploy.py script, which automates the deployment process and creates a package extension. To handle external credentials during deployment, create a secret in Secrets Manager that contains the credentials you want to use, then reference that secret during deployment so that your app is deployed with those credentials. The deployment process can take a few minutes to complete, after which you receive a deployment ID.

  5. Inference the template: You can inference the deployed AI service by using the examples/query_existing_deployment.py script. Use this script to test the AI service with sample inputs and verify the output. You can also use the user interface to inference the deployment and interact with the AI service. To provide additional key-value data to the application, use Parameter sets or the CUSTOM object in the config.toml file; these key-value pairs are passed to your application at deployment and can be accessed during inference.
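The unit tests mentioned in step 3 live under tests/ and typically exercise tools and utility functions in isolation. The following is a minimal sketch of such a test; both the tool function and the test are illustrative assumptions, not part of the actual template.

```python
# Hypothetical tool function of the kind an agent template might expose.
def add_numbers(a: float, b: float) -> float:
    """Return the sum of two numbers."""
    return a + b


# Unit test in the plain-assert style that pytest collects from tests/.
def test_add_numbers():
    assert add_numbers(2, 3) == 5
    assert add_numbers(-1.5, 1.5) == 0


test_add_numbers()
print("tool tests passed")
```

Testing tools individually before a local end-to-end run keeps failures easy to localize: a broken tool surfaces here rather than inside the deployed service.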

Sample template

To learn how to deploy AI services by using templates, see the following sample template:

Template: A LangGraph LLM app template with function calling capabilities

The sample template documentation covers the following topics:
  • Prerequisites
  • Cloning and setting up the template locally
  • Modifying and configuring the template
    • Configuration file
    • Providing additional key-value data to the app
    • Handling external credentials
    • LangGraph's graph architecture
    • Core app logic
    • Adding new tools
    • Enhancing tests suite
  • Unit testing the template
  • Executing the app locally
  • Deploying on IBM Cloud
  • Inferencing the deployment

Parent topic: Deploying AI services
