Last updated: Nov 27, 2024
Follow these rules when you are specifying input details for batch deployments of Python functions.
Data type summary table:
Data | Description
---|---
Type | inline
File formats | N/A
You can deploy Python functions in watsonx.ai Runtime the same way that you can deploy models. Your tools and apps can use the watsonx.ai Python client or REST API to send data to your deployed functions in the same way that they send data to deployed models. Deploying functions gives you the ability to:
- Hide details (such as credentials)
- Preprocess data before you pass it to models
- Handle errors
- Include calls to multiple models

All of these actions take place within the deployed function, instead of in your application.
Data sources
If you are specifying input/output data references programmatically:
- Data source reference `type` depends on the asset type. Refer to the Data source reference types section in Adding data assets to a deployment space.
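For illustration, a batch deployment job payload with input and output data references might be shaped roughly as follows. The exact fields depend on the asset type as described in the linked topic; the identifiers, bucket, and file names below are placeholder assumptions:

```python
# Illustrative shape (assumption) of a batch job payload that uses
# data references to a Cloud Object Storage connection.
job_payload = {
    "deployment": {"id": "<deployment-id>"},  # placeholder
    "scoring": {
        "input_data_references": [
            {
                "type": "connection_asset",  # reference type varies by asset
                "connection": {"id": "<connection-id>"},
                "location": {"bucket": "<bucket>", "file_name": "input.csv"},
            }
        ],
        "output_data_reference": {
            "type": "connection_asset",
            "connection": {"id": "<connection-id>"},
            "location": {"bucket": "<bucket>", "file_name": "output.csv"},
        },
    },
}
```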
Notes:
- For connections of type Cloud Object Storage or Cloud Object Storage (infrastructure), you must configure Access key and Secret key, also known as HMAC credentials.
- The environment variables parameter of deployment jobs is not applicable.
- Make sure that the output is structured to match the output schema that is described in Execute a synchronous deployment prediction.
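A minimal structural check can help catch mismatches before you run a job. The helper below is an illustrative sketch, not the official validator; it only verifies the `predictions` / `fields` / `values` shape described above:

```python
def matches_prediction_schema(output):
    """Minimal structural check (illustrative, not an official validator)."""
    preds = output.get("predictions")
    if not isinstance(preds, list):
        return False
    # Each prediction entry should be a dict carrying "fields" and "values".
    return all(
        isinstance(p, dict) and "fields" in p and "values" in p for p in preds
    )

# Example of an output shaped to match the prediction schema:
valid_output = {
    "predictions": [
        {"fields": ["label", "probability"],
         "values": [["cat", 0.92], ["dog", 0.81]]}
    ]
}
```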
Parent topic: Batch deployment input details by framework