You can deploy Python functions in watsonx.ai Runtime the same way that you deploy models, and your tools and apps can use the watsonx.ai Python client or REST API to send data to deployed functions just as they send data to deployed models. Deploying a Python function lets you hide details (such as credentials), preprocess data before passing it to models, handle errors, and combine calls to multiple models, all within the deployed function instead of in your application.
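The pattern above can be sketched as a deployable function: an outer function whose closure holds the hidden details, returning an inner `score` function that watsonx.ai Runtime calls for each request. The threshold parameter and payload fields below are illustrative assumptions, not values from this article; the `input_data`/`predictions` shape follows the watsonx.ai scoring format.

```python
def deployable_callable():
    # Values defined here stay inside the deployment and are hidden
    # from the calling application (for example, credentials).
    threshold = 0.5  # hypothetical preprocessing parameter

    def score(payload):
        try:
            # Preprocess the incoming data before it would be passed to a model.
            values = payload["input_data"][0]["values"]
            flags = [1 if row[0] > threshold else 0 for row in values]
            return {
                "predictions": [
                    {"fields": ["flag"], "values": [[f] for f in flags]}
                ]
            }
        except Exception as err:
            # Handle errors inside the deployed function rather than in the app.
            return {"predictions": [{"fields": ["error"], "values": [[str(err)]]}]}

    return score
```

Calling the returned scorer locally with `{"input_data": [{"fields": ["x"], "values": [[0.9], [0.1]]}]}` is a quick way to verify the function before storing and deploying it.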
Sample notebooks for creating and deploying Python functions
For examples of how to create and deploy Python functions by using the watsonx.ai Python client library, refer to these sample notebooks:
- Set up an AI definition
- Prepare the data
- Create a Keras model by using TensorFlow
- Deploy and score the model
- Define, store, and deploy a Python function
When you deploy a function from a deployment space or programmatically, a single copy of the function is deployed by default. To increase scalability, you can increase the number of replicas by editing the configuration of the deployment. More
replicas allow for a larger volume of scoring requests.
The following example uses the Python client API to set the number of replicas to 3.
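A minimal sketch of that update, assuming the `ibm_watsonx_ai` Python client; the deployment ID, space ID, and hardware specification name are placeholders, not values from this article. The helper builds the change payload, and the commented lines show how it would be applied with an authenticated client.

```python
def replica_update_payload(num_replicas, hardware_spec_name="S"):
    # "hardware_spec" is the key that
    # client.deployments.ConfigurationMetaNames.HARDWARE_SPEC resolves to.
    return {
        "hardware_spec": {
            "name": hardware_spec_name,  # placeholder hardware spec name
            "num_nodes": num_replicas,   # number of replicas to run
        }
    }

# With an authenticated client (credentials and IDs omitted), the update
# for a deployed function would look like:
#
#   from ibm_watsonx_ai import APIClient
#   client = APIClient(credentials, space_id=space_id)
#   client.deployments.update(
#       deployment_id,
#       changes={
#           client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: {
#               "name": "S",
#               "num_nodes": 3,
#           }
#       },
#   )

payload = replica_update_payload(3)
```

After the update, scoring requests are load-balanced across the replicas, so the deployment can serve a larger request volume.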