You can deploy Python functions in watsonx.ai Runtime the same way that you deploy models. Your tools and apps can use the watsonx.ai Python client or REST API to send data to deployed functions the same way that they send data to deployed models. Deploying a Python function lets you hide details (such as credentials), preprocess data before passing it to models, handle errors, and include calls to multiple models, all within the deployed function instead of in your application.
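As a minimal sketch of this pattern, a deployable Python function is written as an outer function that returns a nested `score` function; the outer function's closure can hold details (such as credentials or configuration) that stay hidden from the caller, and `score` handles each scoring request. The function name, the `hidden_config` value, and the doubling step below are illustrative assumptions, not part of the watsonx.ai API:

```python
# Illustrative sketch of a deployable Python function for watsonx.ai.
# The outer function can capture configuration (for example, credentials)
# in its closure; the nested score() function handles each scoring request.
def my_deployable_function():
    # Values captured here stay hidden from the calling application.
    hidden_config = {"threshold": 0.5}  # hypothetical example detail

    def score(payload):
        # payload follows the watsonx.ai scoring payload format:
        # {"input_data": [{"fields": [...], "values": [[...], ...]}]}
        values = payload["input_data"][0]["values"]
        # Hypothetical preprocessing/"model" step: double each input value.
        results = [[v * 2 for v in row] for row in values]
        return {"predictions": [{"fields": ["doubled"], "values": results}]}

    return score
```

Before storing and deploying the function with the Python client, you can test it locally by calling `my_deployable_function()` to get the `score` function and passing it a sample payload.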
Sample notebooks for creating and deploying Python functions
For examples of how to create and deploy Python functions by using the watsonx.ai Python client library, refer to these sample notebooks:
- Set up an AI definition
- Prepare the data
- Create a Keras model by using TensorFlow
- Deploy and score the model
- Define, store, and deploy a Python function
When you deploy a function from a deployment space or programmatically, a single copy of the function is deployed by default. To increase scalability, you can increase the number of replicas by editing the configuration of the deployment. More
replicas allow for a larger volume of scoring requests.
The following example uses the Python client API to set the number of replicas to 3.
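A minimal sketch, assuming the `ibm-watsonx-ai` Python client: the replica count corresponds to `num_nodes` in the deployment's hardware specification, which you patch with `client.deployments.update`. The hardware specification name `"S"`, the `build_scaling_patch` helper, and the `deployment_id` placeholder are illustrative assumptions:

```python
# Sketch: scaling a watsonx.ai deployment by patching its hardware spec.
# Assumes an authenticated ibm-watsonx-ai APIClient is available as `client`
# and `deployment_id` identifies an existing online deployment.

def build_scaling_patch(num_replicas, hardware_spec_name="S"):
    """Return a metadata patch that sets the number of replicas (nodes)."""
    return {
        # Key corresponds to client.deployments.ConfigurationMetaNames.HARDWARE_SPEC
        "hardware_spec": {
            "name": hardware_spec_name,
            "num_nodes": num_replicas,
        }
    }

# Applying the patch requires a configured client:
# change_meta = build_scaling_patch(3)
# client.deployments.update(deployment_id, changes=change_meta)
```

After the update completes, scoring requests are load-balanced across the replicas of the deployed function.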