Use this information to resolve questions or issues that you have with SPSS Modeler.
- Running multiple flows
- Execution interrupted
- File does not exist
- Cannot export data to SPSS Statistics .sav file
- Unnamed fields in migrated streams
- KDE nodes with unsupported Python version
- Differences in how lines without line separators are handled
- Values for Predictor Importance can vary between SPSS Modeler flows and SPSS Modeler desktop streams
- It's hard to tell the difference between models generated from Text Analytics
- Internal error occurred: SCAPI error: The value on row 1,029 is not a valid string
Running multiple flows
Avoid running multiple flows that use the same username under one project at the same time. If you must run multiple flows, be sure that the memory limit (8 GiB by default) is not exceeded. If too many flows are running at the same time under the same username and project, SPSS Modeler might run out of memory and return an error message, such as Execution was interrupted.
If you get this error message, complete the following steps:
- Wait for the completion of one or more flow runs.
- Close any of your browser tabs that contain successfully completed flow runs.
- Wait for 15 minutes.
- If you use caching in your flow, flush the cache.
- Click Run on the interrupted flow that returned an error.
Execution interrupted
If an SPSS Modeler flow becomes unresponsive or gives an error message such as Execution was interrupted, you can try restarting the session. While in the SPSS Modeler flow, complete the following steps:
- Click Flow information.
- Click Restart session.
File does not exist
If you renamed a file or moved a file from the project to a folder, you might receive this error message when you run an SPSS Modeler flow:
WDP Connector Error: CDICO2015E: The filepath/content.csv file does not exist or you do not have sufficient permissions.
This error occurs because the flow has not updated the name or location of the file. To fix the error, you can restart the session to update the SPSS Modeler flow:
- Click Flow information.
- Click Restart session.
You can also restart the runtime to fix the issue.
Cannot export data to SPSS Statistics .sav file
You tried to use a Data Asset Export node to export data to an SPSS Statistics .sav file, but the file was not created. You also received this error message:
WDP Connector Error: CDICO9999E: Internal error occurred: IO error: Invalid variable name error: Invalid character found in field name 'AGE YOUN'. Field names can only include any letter, any digit or the symbols @, #, ., _, or $ for export.
Check whether any field names contain spaces. The .sav file format does not support spaces in field names.
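As a quick check before you export, you can validate field names against the characters that the error message lists as valid. The following sketch is illustrative only (plain Python with hypothetical field names), not part of SPSS Modeler:
import re
# Illustrative sketch only. Valid characters per the error message:
# letters, digits, and the symbols @, #, ., _, $
valid_sav_name = re.compile(r"^[A-Za-z0-9@#._$]+$")
for name in ["AGE YOUN", "AGE_YOUNG", "income$"]: # hypothetical field names
    if not valid_sav_name.match(name):
        # Replacing spaces with underscores is one possible fix
        print("Rename before export:", name, "->", name.replace(" ", "_"))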
Unnamed fields in migrated streams
By default, unnamed data fields in SPSS Modeler desktop are named field1, field2, field3, and so on. In SPSS Modeler in watsonx.ai Studio, unnamed data fields are named COLUMN1, COLUMN2, COLUMN3, and so on. If you create a flow from a stream file (.str) that was created in SPSS Modeler desktop and the stream contains such fields, the output is different. To map the field names back, you can run a script such as the following once after importing the stream:
# TO DO: run this script once after importing the stream into CP4D
import modeler.api
stream = modeler.script.stream()
# Map "COLUMN" names back to "field" names for data sources without
# field names (for example, a CSV file without headers)
source_node = stream.findByID("...") # TO DO: provide ID of existing source node (CSV file without headers)
filter_node = stream.findByID("...") # TO DO: provide ID of existing filter node (where field names are provided)
# Create a new Filter node between the source node and the existing Filter node
new_node = stream.create("filter", "new node")
stream.linkBetween(new_node, source_node, filter_node)
# Rename fields from "COLUMN1" to "field1", and so on
for number in range(1, 1000): # change the maximum value if necessary
    old_name = 'COLUMN' + str(number)
    new_name = 'field' + str(number)
    new_node.setKeyedPropertyValue("new_name", old_name, new_name)
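This script inserts a new Filter node between the source node and the existing Filter node, renaming COLUMN1 through COLUMN999 back to field1 through field999. If your data contains more unnamed fields, increase the upper bound of the range.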
KDE nodes with unsupported Python version
If you run a flow that contains an old KDE node, you might receive an error that says the model uses a Python package that is no longer supported. In that case, remove the old KDE node and add a new one.
Differences in how lines without line separators are handled
If a line of a data record does not have a line separator, that line is discarded in watsonx.ai Studio.
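As an illustration only, not a description of SPSS Modeler internals, the following sketch shows how a reader that keeps only separator-terminated lines drops a final record that lacks a trailing newline:
# Illustrative sketch only; not SPSS Modeler internals
raw = "id,score\n1,80\n2,95" # the final record has no line separator
lines = raw.splitlines(keepends=True)
kept = [line for line in lines if line.endswith("\n")] # "2,95" is dropped
print(kept) # ['id,score\n', '1,80\n']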
Values for Predictor Importance can vary between SPSS Modeler flows and SPSS Modeler desktop streams
To avoid inconsistent results on different platforms, a new random sampling method is used to compute Predictor Importance in SPSS Modeler on watsonx.ai Studio. As a result, Predictor Importance results can vary from the original Predictor Importance results in SPSS Modeler desktop if the data is not uniformly distributed. Random sampling is triggered when the number of records exceeds 200. SPSS Modeler desktop will be upgraded in a future version to match the results in SPSS Modeler on watsonx.ai Studio.
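To see why sampling can produce different results on non-uniform data, consider this sketch. It is illustrative only (plain Python, not the SPSS Modeler algorithm): two runs that draw different random samples of 200 records from the same skewed column produce different summary values.
import random
# Illustrative sketch only; not the SPSS Modeler algorithm
data = [0] * 900 + [100] * 100 # a non-uniformly distributed column
for seed in (1, 2): # different platforms may draw different samples
    random.seed(seed)
    sample = random.sample(data, 200) # sampling is triggered above 200 records
    print(sum(sample) / len(sample)) # estimates differ between runs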
It's hard to tell the difference between models generated from Text Analytics
In the Text Analytics Workbench, each time that you click Generate new model, a new model nugget is created in your flow. If you generate multiple models, they all have the same name, so it can be difficult to differentiate them. One recommendation is to use annotations to help identify them (double-click a model nugget to open its properties, then go to Annotations).
Internal error occurred: SCAPI error: The value on row 1,029 is not a valid string
For example, while SPSS Modeler is reading a data set in the Data Asset node, the following error occurs:
Internal error occurred: SCAPI error: The value on row 1,029 is not a valid string of the Bit data type for the SecurityDelay column.
This behavior is expected. For most flat files, SPSS Modeler reads the first 1,000 records to infer the data type. In this case, the first 1,000 rows contained only 0s and 1s, so SPSS Modeler inferred that the column contained binary values (0 or 1). The value at row 1,029 was 3, which is not a binary value, so reading it caused an error.
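The following sketch is illustrative only (plain Python, not SPSS Modeler internals). It shows how inferring a type from only the first 1,000 rows can misclassify a column whose first out-of-range value appears later:
# Illustrative sketch only; not SPSS Modeler internals
def infer_is_binary(values, sample_size=1000):
    # Infer "binary" if every sampled value is 0 or 1
    return all(v in ("0", "1") for v in values[:sample_size])
rows = ["0"] * 700 + ["1"] * 328 + ["3"] # "3" first appears at row 1,029
print(infer_is_binary(rows)) # True: the column is typed as binary
print(rows[1028]) # "3" later fails validation against the inferred type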
Suggested workarounds:
- Adjust the Infer record count parameter to include more data, for example 2,000 rows or more.
- If this problem comes from an error in the data, update any value in the first 1,000 rows that is causing the error.