The following limitations and known issues apply to watsonx.
- Regional limitations
- Notebooks
- Data Refinery
- Visualizations
- watsonx.ai Runtime
- SPSS Modeler
- Connections
- Orchestration Pipelines
- watsonx.governance
watsonx.ai Studio
You might encounter some of these issues when getting started with and using notebooks:
Failure to export a notebook to HTML in the Jupyter Notebook editor
When you are working with a Jupyter Notebook created in a tool other than watsonx.ai Studio, you might not be able to export the notebook to HTML. This issue occurs when the cell output is exposed.
Workaround
1. In the Jupyter Notebook UI, go to Edit and click Edit Notebook Metadata.
2. Remove the following metadata:
   "widgets": { "state": {}, "version": "1.1.2" }
3. Click Edit.
4. Save the notebook.
Manual installation of some TensorFlow libraries is not supported
Some TensorFlow libraries are preinstalled, but if you try to install additional TensorFlow libraries yourself, you get an error.
Connection to notebook kernel is taking longer than expected after running a code cell
If you try to reconnect to the kernel and immediately run a code cell (or if the kernel reconnection happened during code execution), the notebook doesn't reconnect to the kernel and no output is displayed for the code cell. You need to manually reconnect to the kernel by clicking Kernel > Reconnect. When the kernel is ready, you can try running the code cell again.
Using the predefined sqlContext object in multiple notebooks causes an error
You might receive an Apache Spark error if you use the predefined sqlContext object in multiple notebooks. Create a new sqlContext object for each notebook. See this Stack Overflow explanation.
Connection failed message
If your kernel stops, your notebook is no longer automatically saved. To save it, click File > Save manually; a Notebook saved message appears in the kernel information area, before the Spark version. If you get a message that the kernel failed, click Kernel > Reconnect to reconnect your notebook to the kernel. If nothing restarts the kernel and you can't save the notebook, you can download it to save your changes by clicking File > Download as > Notebook (.ipynb). Then create a new notebook based on your downloaded notebook file.
Hyperlinks to notebook sections don't work in preview mode
If your notebook contains sections that you link to, for example from an introductory section at the top of the notebook, the links to these sections do not work if the notebook was opened in view-only mode in Firefox. However, if you open the notebook in edit mode, these links work.
Can't connect to notebook kernel
If you try to run a notebook and you see the message Connecting to Kernel, followed by Connection failed. Reconnecting, and finally by a connection failed error message, the reason might be that your firewall is blocking the notebook from running.
If watsonx.ai Studio is installed behind a firewall, you must add the WebSocket connection wss://dataplatform.cloud.ibm.com to the firewall settings. Enabling this WebSocket connection is required when you're using notebooks and RStudio.
Files that are uploaded through the watsonx.ai Studio UI are not validated or scanned for potentially malicious content
Files that you upload through the watsonx.ai Studio UI are not validated or scanned for potentially malicious content. It is strongly recommended that you run security software, such as an anti-virus application, on all files before you upload them to ensure the security of your content.
Data Refinery known issues
Target table loss and job failure when you use the Update option in a Data Refinery flow
Using the Update option for the Write mode target property for relational data sources (for example, Db2) replaces the original target table, and the Data Refinery job might fail.
Workaround: Use the Merge option as the Write mode and Append as the Table action.
Data Refinery limitations
Data column headers cannot contain special characters
Data with column headers that contain special characters might cause Data Refinery jobs to fail, and give the error Supplied values don't match positional vars to interpolate.
Workaround: Remove the special characters from the column headers.
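If you have many files to clean, the header fix can be scripted before you load the data. A minimal sketch, assuming plain-string headers and that underscores are an acceptable replacement; `sanitize_headers` is a hypothetical helper, not a Data Refinery API:

```python
import re

def sanitize_headers(headers):
    """Replace characters outside [A-Za-z0-9_] with underscores so that
    Data Refinery jobs don't fail on special characters in column names."""
    cleaned = []
    for name in headers:
        safe = re.sub(r"[^0-9A-Za-z_]", "_", name)
        # Collapse runs of underscores left by consecutive special characters.
        safe = re.sub(r"_+", "_", safe).strip("_")
        cleaned.append(safe or "column")  # fall back if nothing survives
    return cleaned
```

For example, `sanitize_headers(["total ($)", "a b"])` yields `["total", "a_b"]`.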
Data Refinery does not support the Satellite Connector
You cannot use a Satellite Connector to connect to a database with Data Refinery.
Error opening a Data Refinery flow with connection with personal credentials
When you open a Data Refinery flow that uses a data asset that is based on a connection with personal credentials, you might see an error.
Workaround: To open a Data Refinery flow that has assets which use connections with personal credentials, you must unlock the connection. You can unlock the connection either by editing the connection and entering your personal credentials, or by previewing the asset in the project, where you are prompted to enter your personal credentials. When you have unlocked the connection, you can then open the Data Refinery flow.
Visualizations issues
You might encounter some of these issues when working with the Visualization tab in a Data asset in a project.
The column-level profile information for a connected data asset with a column of type DATE does not show rows
In the column-level profile information for a connected data asset with a column of type DATE, no rows are displayed when you click show rows on the Data Classes, Format, or Types tabs.
watsonx.ai Runtime issues
You might encounter some of these issues when working with watsonx.ai Runtime.
Region requirements
You can only associate a watsonx.ai Runtime service instance with your project when the watsonx.ai Runtime service instance and the watsonx.ai Studio instance are located in the same region.
Accessing links if you create a service instance while associating a service with a project
While you are associating a watsonx.ai Runtime service with a project, you have the option of creating a new service instance. If you choose to create a new service instance, the links on the service page might not work. To access the service terms, APIs, and documentation, right-click the links to open them in new windows.
Federated Learning assets cannot be searched in All assets, search results, or filter results in the new projects UI
You cannot search Federated Learning assets from the All assets view, the search results, or the filter results of your project.
Workaround: Click the Federated Learning asset to open the tool.
Deployment issues
- A deployment that is inactive (no scores) for a set time (24 hours for the free plan or 120 hours for a paid plan) is automatically hibernated. When a new scoring request is submitted, the deployment is reactivated and the score request is served. Expect a brief delay of 1 to 60 seconds for the first score request after activation, depending on the model framework.
- For some frameworks, such as SPSS Modeler, the first score request for a deployed model after hibernation might result in a 504 error. If this error happens, submit the request again; subsequent requests succeed.
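The resubmit-on-504 advice above can be wrapped in a small client-side retry. The sketch below assumes your scoring call raises an exception whose message contains "504" on a gateway timeout; `score_fn` is a stand-in for whatever scoring client call you actually use:

```python
import time

def score_with_retry(score_fn, payload, retries=3, delay=5):
    """Retry a scoring call so that the first request after deployment
    hibernation can fail with a 504 and still succeed on resubmission.
    `score_fn` is a hypothetical stand-in for your deployment's scoring
    call; it is assumed to raise an exception containing '504' on a
    gateway timeout."""
    for attempt in range(retries):
        try:
            return score_fn(payload)
        except Exception as exc:
            # Only retry gateway timeouts, and give up on the last attempt.
            if "504" not in str(exc) or attempt == retries - 1:
                raise
            time.sleep(delay)  # give the hibernated deployment time to wake
```

Keep the delay modest: the documentation above indicates the first request after reactivation can take 1 to 60 seconds depending on the framework.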
watsonx.ai Runtime limitations
AutoAI known limitations
- Currently, AutoAI experiments do not support double-byte character sets. AutoAI supports only CSV files with ASCII characters. Users must convert any non-ASCII characters in the file name or content, and provide input data as a CSV as defined in this CSV standard.
- To interact programmatically with an AutoAI model, use the REST API instead of the Python client. The APIs for the Python client that are required to support AutoAI are not generally available at this time.
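Converting non-ASCII content before submitting it to AutoAI can be sketched as follows. This assumes a best-effort, accent-stripping conversion is acceptable; characters with no ASCII equivalent are simply dropped, and both helper names are hypothetical:

```python
import unicodedata

def to_ascii(text):
    """Best-effort conversion of a string to plain ASCII by decomposing
    accented characters (NFKD) and dropping anything with no ASCII
    equivalent. Lossy by design."""
    normalized = unicodedata.normalize("NFKD", text)
    return normalized.encode("ascii", "ignore").decode("ascii")

def has_non_ascii(text):
    """Check whether a file name or cell value needs conversion."""
    return any(ord(ch) > 127 for ch in text)
```

For example, `to_ascii("café")` returns `"cafe"`. Apply this to file names and to each field of the CSV before uploading.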
Data module not found in IBM Federated Learning
The data handler for IBM Federated Learning is trying to extract a data module from the FL library but is unable to find it. You might see the following error message:
ModuleNotFoundError: No module named 'ibmfl.util.datasets'
The issue might result from using an outdated DataHandler. Review and update your DataHandler to conform to the latest spec. See the link to the most recent MNIST data handler, or ensure that your sample versions are up to date.
SPSS Modeler issues
You might encounter some of these issues when working in SPSS Modeler.
SPSS Modeler runtime restrictions
watsonx.ai Studio does not include SPSS functionality in Peru, Ecuador, Colombia, and Venezuela.
Timestamp data measured in microseconds
If you have timestamp data that is measured in microseconds, you can use the more precise data in your flow. However, you can import data that is measured in microseconds only from connectors that support SQL pushback. For more information about which connectors support SQL pushback, see Supported data sources for SPSS Modeler.
SPSS Modeler limitations
Languages supported by Text Analytics
The Text Analytics feature in SPSS Modeler supports the following languages:
- Dutch
- English
- French
- German
- Italian
- Japanese
- Portuguese
- Spanish
SPSS Modeler doesn't support Satellite Connector
You cannot use a Satellite Connector to connect to a database with SPSS Modeler.
Merge node and Unicode characters
The Merge node treats certain similar-looking Japanese characters as the same character.
Connection issues
You might encounter this issue when working with connections.
Apache Impala connection does not work with LDAP authentication
If you create a connection to an Apache Impala data source and the Apache Impala server is set up for LDAP authentication, the username and password authentication method in IBM watsonx will not work.
Workaround: Disable the Enable LDAP Authentication option on the Impala server. See Configuring LDAP Authentication in the Cloudera documentation.
Orchestration Pipelines known issues
These issues pertain to Orchestration Pipelines.
The asset browser does not always show the total count for each asset type
When you select an asset from the asset browser, such as choosing a source for a Copy node, some asset types show the total number of available assets of that type, but notebooks do not.
Cannot delete pipeline versions
Currently, you cannot delete saved versions of pipelines that you no longer need. All versions are deleted when the pipeline is deleted.
Deleting an AutoAI experiment fails under some conditions
Using a Delete AutoAI experiment node to delete an AutoAI experiment that was created from the Projects UI does not delete the AutoAI asset. However, the rest of the flow can complete successfully.
Cache appears enabled but is not enabled
If the Copy assets Pipelines node's Copy mode is set to Overwrite, the cache is displayed as enabled but remains disabled.
Pipelines cannot save some SQL statements
Pipelines cannot save when SQL statements with parentheses are passed in a script or user variable.
To resolve this issue, replace all instances of parentheses with their respective ASCII codes (( with #40 and ) with #41) and restore them when you set the value as a user variable.
For example, the statement select CAST(col1 as VARCHAR(30)) from dbo.table in a Run Bash script node causes an error. Instead, use the statement select CAST#40col1 as VARCHAR#4030#41#41 from dbo.table and replace the instances when setting it as a user variable.
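The substitution can be applied mechanically. A minimal sketch of hypothetical encode/decode helpers for the #40/#41 convention described above:

```python
def encode_parens(sql):
    """Encode parentheses with their ASCII codes (#40 and #41) so the
    SQL statement can be saved in a pipeline script or user variable."""
    return sql.replace("(", "#40").replace(")", "#41")

def decode_parens(sql):
    """Restore the parentheses when the value is set as a user variable."""
    return sql.replace("#40", "(").replace("#41", ")")
```

Note that decoding assumes the original statement does not itself contain the literal sequences #40 or #41.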
Orchestration Pipelines abort when limit for annotations is reached
Pipeline expressions require annotations, which are subject to the Kubernetes limit on annotations. If you reach this limit, your pipeline aborts without displaying logs.
Orchestration Pipelines limitations
These limitations apply to Orchestration Pipelines.
- Single pipeline limits
- Input and output size limits
- Batch input limited to data assets
- Bash scripts throw errors with curl commands
Single pipeline limits
These limitations apply to a single pipeline, regardless of configuration.
- Any single pipeline cannot contain more than 120 standard nodes
- Any pipeline with a loop cannot contain more than 600 nodes across all iterations (for example, 60 iterations of 10 nodes each)
Input and output size limits
Input and output values, which include pipeline parameters, user variables, and generic node inputs and outputs, cannot exceed 10 KB of data.
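You can check a value against the 10 KB limit before passing it to a pipeline. A minimal sketch, assuming JSON serialization roughly approximates how the value is stored; `fits_pipeline_limit` is a hypothetical helper, not a Pipelines API:

```python
import json

MAX_BYTES = 10 * 1024  # 10 KB limit on pipeline inputs and outputs

def fits_pipeline_limit(value):
    """Check whether a parameter, user variable, or node input/output
    value stays under the 10 KB serialized-size limit."""
    size = len(json.dumps(value).encode("utf-8"))
    return size <= MAX_BYTES
```

If a value exceeds the limit, pass a reference (such as a data asset) instead of the value itself.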
Batch input limited to data assets
Currently, input for batch deployment jobs is limited to data assets. This means that certain types of deployments, which require JSON input or multiple files as input, are not supported. For example, SPSS models and Decision Optimization solutions that require multiple files as input are not supported.
Bash scripts throw errors with curl commands
The Bash scripts in your pipelines might cause errors if you implement curl commands in them. To prevent this issue, set your curl commands as parameters. To save a pipeline that causes an error when saving, try exporting the .isx file and importing it into a new project.
Issues with Cloud Object Storage
These issues apply to working with Cloud Object Storage.
Issues with Cloud Object Storage when Key Protect is enabled
Key Protect with Cloud Object Storage is not supported for working with watsonx.ai Runtime assets. If you are using Key Protect, you might encounter these issues when you are working with assets in watsonx.ai Studio.
- Training or saving these watsonx.ai Runtime assets might fail:
  - AutoAI
  - Federated Learning
  - Pipelines
- You might be unable to save an SPSS model or a notebook model to a project.
Issues with watsonx.governance
Integration limitation with OpenPages
When AI Factsheets is integrated with OpenPages, the fields created in the field groups MRG-UserFacts-Model or MRG-UserFact-Model and MRG-UserFacts-ModelEntry or MRG-UserFact-ModelUseCase are synced to the modelfacts_user_op and model_entry_user_op asset type definitions. However, when the fields are created from the OpenPages application, avoid specifying the fields as required, and do not specify a range of values. If you mark them as required or assign a range of values, the sync fails.
Delay showing prompt template deployment data in a factsheet
When a deployment is created for a prompt template, the facts for the deployment are not added to the factsheet immediately. You must first evaluate the deployment or view the lifecycle tracking page to add the facts to the factsheet.
Display issues for existing Factsheet users
If you previously used factsheets with IBM Knowledge Catalog and you create a new AI use case in watsonx.governance, you might see some display issues, such as duplicate Risk level fields in the General information and Details section of the AI use case interface.
To resolve display problems, update the model_entry_user asset type definition. For details on updating a use case programmatically, see Customizing details for a use case or factsheet.
Redundant attachment links in factsheet
A factsheet tracks all of the events for an asset over all phases of the lifecycle. Attachments show up in each stage, creating some redundancy in the factsheet.
Attachments for prompt templates are not saved on import or export
If your AI use case contains attachments for a prompt template, the attachments are not preserved when the prompt template asset is exported from a project or imported into a project or space. You must reattach any files after the import operation.
Unable to open OpenPages after you enable the integration with Governance console
When you enable the integration with Governance console, you get the following error message:
Error occurred while setting OpenPages.
Operation failed due to an unexpected error.
Information is not synced to the Governance console.
This issue occurs when you enable the integration with Governance console and you have foundation models that support the Translation task. The Translation task is missing from the following fields in Governance console:
- MRG-Model:Approved Tasks
- MRG-Model:Supported Tasks
- MRG-Model:Task Type
To resolve this issue, do the following steps to update the fields:
1. Log in to Governance console as an administrator.
2. Enable System Admin Mode.
3. Click the Administration menu and select Solution Configuration > Object Types.
4. Click Model.
5. Click Fields, and then find the MRG-Model field group.
6. Click the Approved Tasks field.
7. Under Enumerated string values, click New Value.
8. Type Translation for both the name and label, and then click Create.
9. Click Done.
10. Repeat steps 6-9 for the Supported Tasks and Task Type fields.
11. Disable System Admin Mode.
Error setting up watsonx.governance and default inventory
A service ID for the watsonx.governance service is required for setting up watsonx.governance and the default inventory. If the ID is missing at initial setup, you are prompted to provision the service to create the ID. If the ID is deleted, you see the following error message: Something went wrong during setup: The tenant service ID required for setup is missing and might have been deleted.
To resolve the issue, make sure that the watsonx.governance service ID exists. If it was deleted, provision the watsonx.governance service again to re-create the ID.
Issues with watsonx.governance on AWS
watsonx.governance on AWS sends email notifications to Governance console users for application events, such as workflow assignments. The From address is always set to [email protected].