For SPSS deployments, these data sources are not compliant with Federal Information Processing Standard (FIPS):
Cloud Object Storage
Cloud Object Storage (infrastructure)
Storage volumes
Table names that are provided in input and output data references are ignored. Table names that are referred to in the SPSS model are used during the batch deployment.
Use SQL Pushback to generate SQL statements for IBM SPSS Modeler operations that can be "pushed back" to the database and run there to improve performance. SQL Pushback is supported only by:
Db2
SQL Server
Netezza Performance Server
Using connected data for a batch deployment
An SPSS Modeler flow can have a number of import and export nodes for data. If the nodes use database connections, they must be configured with the table names in the data sources and targets. These table names are used later for batch jobs.
Use Data Asset nodes for importing data and Data Asset Export nodes for exporting data. When you configure the nodes, choose the table name from Connections; don't choose a data asset in your project. Set the nodes and table names before you save and deploy the model to watsonx.ai Runtime.
When you deploy the model to a deployment space, check that the nodes connect to a supported database in the deployment space. In a batch deployment of the model, the connection details are taken from the input and output data references, but the input and output table names are taken from the SPSS Modeler model. Any table names that are provided in the connected data references are ignored.
For batch deployment of an SPSS model that uses a Cloud Object Storage connection, make sure that the SPSS model has a single input and output data asset node.
Supported combinations of input and output sources
You must specify compatible data sources and targets for the batch job input and the output. If you specify incompatible data sources and targets, you get an error when you try to run the batch job.
These combinations are supported for batch jobs:
SPSS model input/output | Batch deployment job input | Batch deployment job output
File | Local, managed, or referenced data asset or connection asset (file) | Remote data asset or connection asset (file) or name
Database | Remote data asset or connection asset (database) | Remote data asset or connection asset (database)
Specifying multiple inputs
If you are specifying multiple inputs for an SPSS model deployment that has no schema, specify an ID for each element in input_data_references. For example, when you create the job, you might provide three input entries with the IDs sample_db2_conn, sample_teradata_conn, and sample_googlequery_conn, and select the required connected data for each input.
SPSS jobs support multiple data source inputs and a single output. If the schema was not in the model metadata when you saved the model, you must enter each id manually and select a data asset for each connection. If the schema is provided in the model metadata, the id names are populated automatically from the metadata, and you select the data asset for the corresponding ids in watsonx.ai Studio. For more information, see Using multiple data sources for an SPSS job.
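As a sketch of what the multiple-input case can look like, the input_data_references list pairs each connection ID (matching the connection names used in the SPSS model's data nodes) with its connected data asset. The asset hrefs below are hypothetical placeholders, not real values:

```python
# Hedged sketch: input_data_references for an SPSS job with three inputs.
# Each "id" is the connection name used in the SPSS model's data nodes;
# the hrefs are hypothetical placeholders for real data asset references.
input_data_references = [
    {
        "id": "sample_db2_conn",
        "type": "data_asset",
        "location": {"href": "/v2/assets/<db2-asset-id>?space_id=<space-id>"},
    },
    {
        "id": "sample_teradata_conn",
        "type": "data_asset",
        "location": {"href": "/v2/assets/<teradata-asset-id>?space_id=<space-id>"},
    },
    {
        "id": "sample_googlequery_conn",
        "type": "data_asset",
        "location": {"href": "/v2/assets/<bigquery-asset-id>?space_id=<space-id>"},
    },
]
```

Each entry's id must match a connection name in the model; the job then resolves the table name for that connection from the SPSS model itself, as described above.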
To create a local or managed asset as an output data reference, specify the name field in output_data_reference so that a data asset is created with that name. You cannot specify an href that refers to an existing local data asset.
Note:
Connected data assets that refer to supported databases can be created in the output_data_references only when the input_data_references also refers to one of these sources.
If you are creating a job by using the Python client, provide the connection name that is referenced in the data nodes of the SPSS model in the id field, and the data asset href in location.href, for the input and output data references of the deployment job payload. For example, you can construct the job payload like this:
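A minimal sketch of such a payload follows the pattern described above: each id is the connection name from the SPSS model's data nodes, location.href points at the data asset, and the output reference uses a name so that a new data asset is created. The deployment ID, asset hrefs, and output name below are hypothetical placeholders:

```python
# Hedged sketch of a deployment job payload. All IDs, hrefs, and the
# output name are hypothetical placeholders; substitute your own values.
job_payload = {
    "deployment": {"id": "<deployment-id>"},
    "scoring": {
        "input_data_references": [
            {
                # Connection name referenced in the SPSS model's data node.
                "id": "sample_db2_conn",
                "type": "data_asset",
                "location": {"href": "/v2/assets/<input-asset-id>?space_id=<space-id>"},
            }
        ],
        "output_data_reference": {
            "id": "sample_db2_conn",
            "type": "data_asset",
            # A name (rather than an href) creates a new data asset.
            "location": {"name": "scoring_output.csv"},
        },
    },
}
```

The payload is then passed to the Python client's job-creation call for the deployment.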