Avoid running multiple flows that use the same username under one project at the same time. If you must run multiple flows, make sure that the memory limit (8 GiB, by default) is not exceeded. If too many flows run at the same time under the same username and project, SPSS Modeler might run out of memory and return an error message such as Execution was interrupted.
If you get this error message, complete the following steps:
1. Wait for one or more flow runs to complete.
2. Close any of your browser tabs that contain successfully completed flow runs.
3. Wait for 15 minutes.
4. If you use caching in your flow, flush the cache.
5. Click Run on the interrupted flow that returned an error.
Execution interrupted
If an SPSS Modeler flow becomes unresponsive or gives an error message such as Execution was interrupted, you can try restarting the session. While in the SPSS Modeler flow, complete the following steps:
1. Click Flow information.
2. Click Restart session.
File does not exist
If you renamed a file or moved a file from the project to a folder, you might receive this error message when you run an SPSS Modeler flow:
WDP Connector Error: CDICO2015E: The filepath/content.csv file does not exist or you do not have sufficient permissions.
This error occurs because the flow has not updated the name or location of the file. To fix the error, you can restart the session to update the SPSS Modeler flow:
1. Click Flow information.
2. Click Restart session.
You can also restart the runtime to fix the issue.
Cannot export data to SPSS Statistics .sav file
You tried to use a Data Asset Export node to export data to an SPSS Statistics .sav file, but the file was not created. You also received this error message:
WDP Connector Error: CDICO9999E: Internal error occurred: IO error: Invalid variable name error: Invalid character found in field name 'AGE YOUN'. Field names can only include any letter, any digit, or the symbols @, #, ., _, or $ for export.
Check whether any field names contain spaces. The .sav file
format does not support spaces in field names.
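If you prefer to check or fix the names programmatically before export, the naming rule from the error message can be sketched in plain Python. This is a minimal sketch, not an SPSS Modeler API; sanitize_sav_name is a hypothetical helper, and in a flow you would apply the resulting renames with a Filter node.

```python
import re

# Characters allowed in .sav field names, per the error message:
# any letter, any digit, or the symbols @, #, ., _, $
INVALID_CHARS = re.compile(r"[^A-Za-z0-9@#._$]")

def sanitize_sav_name(name: str) -> str:
    """Replace characters that the .sav format rejects (such as spaces)
    with underscores."""
    return INVALID_CHARS.sub("_", name)

print(sanitize_sav_name("AGE YOUN"))  # AGE_YOUN
```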
Unnamed fields in migrated streams
By default, unnamed data fields in SPSS Modeler desktop are named field1, field2, field3, and so on. In SPSS Modeler in watsonx.ai, unnamed data fields are named COLUMN1, COLUMN2, COLUMN3, and so on. If you create a flow from a stream file (.str) that was created in SPSS Modeler desktop and the stream contains such fields, the output differs because the generated field names no longer match.
As a workaround, you can add a script such as the following to the flow that you created from the imported stream:
# TO DO: run this script once after importing the stream into CP4D
import modeler.api
stream = modeler.script.stream()
# map "COLUMN" to "field" for data sources without field names (csv without headers)
source_node = stream.findByID("...")  # TO DO: provide ID of existing source node (csv file without headers)
filter_node = stream.findByID("...")  # TO DO: provide ID of existing filter node (where field names are provided)
new_node = stream.create("filter", 'new node')  # creates a new filter node between source and filter
stream.linkBetween(new_node, source_node, filter_node)
# change field names from "COLUMN1" to "field1" and so on
for number in range(1, 1000):  # change max value if necessary
    old_name = 'COLUMN' + str(number)
    new_name = 'field' + str(number)
    new_node.setKeyedPropertyValue("new_name", old_name, new_name)
KDE nodes with unsupported Python version
If you run a flow that contains an old KDE node, you might receive an error stating that the model uses a Python package that is no longer supported. In that case, remove the old KDE node and add a new one.
Differences in how lines without separators are handled
If a line of a data record does not have a separator, that line is discarded in
watsonx.ai.
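The described behavior can be illustrated with a minimal sketch. This is not SPSS Modeler code; read_records is a hypothetical helper that mimics discarding a trailing record that has no line separator.

```python
def read_records(text, sep="\n"):
    """Keep only records terminated by the separator. A trailing
    fragment that has no separator is discarded, mimicking the
    behavior described above."""
    records = text.split(sep)
    # If the text does not end with the separator, the last element
    # is an unterminated fragment, so drop it.
    if not text.endswith(sep):
        records = records[:-1]
    return [r for r in records if r]

print(read_records("a,1\nb,2\nc,3"))  # ['a,1', 'b,2'] -- 'c,3' is discarded
```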
Values for Predictor Importance can vary between SPSS Modeler flows and SPSS Modeler desktop streams
To avoid inconsistent results on different platforms, a new random sampling method is used to compute Predictor Importance in SPSS Modeler on watsonx.ai. As a result, Predictor Importance results can vary from the original Predictor Importance results in SPSS Modeler desktop if the data is not uniformly distributed. Random sampling is triggered when the number of records exceeds 200. SPSS Modeler desktop will be upgraded in a future version to match the results in SPSS Modeler on watsonx.ai.
It's hard to tell the difference between models generated from Text Analytics
In the Text Analytics Workbench, each time that you click Generate new
model, a new model nugget is created in your flow. If you generate
multiple models, they all have the same name, so it can be difficult to
differentiate them. One recommendation is to use annotations to help identify them
(double-click a model nugget to open its properties, then go to
Annotations).
Internal error occurred: SCAPI error: The value on row 1,029 is not a valid string
For example, while SPSS Modeler is reading a data set in the Data Asset node, the following error occurs:
Internal error occurred: SCAPI error: The value on row 1,029 is not a valid string of the Bit data type for the SecurityDelay column.
This behavior is expected. For most flat files, SPSS Modeler reads the first 1,000 records to infer the data type. In this case, the first 1,000 rows contained only 0s and 1s, so SPSS Modeler inferred that the column contained binary values (0 or 1). The value at row 1,029 was 3, which caused an error because 3 is not a binary value.
Suggested workarounds:
- Adjust the Infer record count parameter to include more data, for example 2,000 rows or more.
- If this problem comes from an error in the data, update any value in the first 1,000 rows that is causing the error.
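The inference pitfall can be sketched in a few lines of plain Python. This is a hypothetical stand-in for the connector's type inference, not SPSS Modeler code; infer_is_binary looks only at a fixed-size sample, just as the Data Asset node looks only at the first 1,000 records.

```python
def infer_is_binary(values, sample_size=1000):
    """Mimic type inference that inspects only the first sample_size
    values: returns True if every sampled value is '0' or '1'."""
    sample = values[:sample_size]
    return all(v in ("0", "1") for v in sample)

# 1,028 binary values, then a 3 at row 1,029
column = ["0", "1"] * 514 + ["3"]
print(infer_is_binary(column))        # True: the sample saw only 0s and 1s
print(infer_is_binary(column, 2000))  # False: a larger sample catches the 3
```

Raising the sample size, as in the first workaround, changes the inferred type and avoids the error.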