What's new

Check back each week to learn about new features and updates for IBM watsonx.ai.

Tip: Occasionally, you must take a specific action after an update. To see all required actions, search this page for “Action required”.

Week ending 10 November 2023

A smaller version of the Llama-2 Chat model is available

9 Nov 2023

You can now choose between using the 13b or 70b versions of the Llama-2 Chat model. Consider these factors when you make your choice:

  • Cost
  • Performance

The 13b version is a Class 2 model, which means it is cheaper to use than the 70b version. To compare benchmarks and other factors, such as carbon emissions for each model size, see the Model card.

Use prompt variables to build reusable prompts

Add flexibility to your prompts with prompt variables. Prompt variables function as placeholders in the static text of your prompt input that you can replace with text dynamically at inference time. You can save prompt variable names and default values in a prompt template asset to reuse yourself or share with collaborators in your project. For more information, see Building reusable prompts.
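The idea behind prompt variables can be sketched in plain Python. This is only an illustration of placeholders with saved defaults, not the watsonx.ai API; the template and variable names are hypothetical.

```python
# Illustrative sketch of prompt variables (not the watsonx.ai API):
# placeholders in static prompt text are filled in at inference time.
PROMPT_TEMPLATE = (
    "Summarize the following {document_type} in {tone} language:\n\n"
    "{document_text}"
)

# Default values, analogous to those saved in a prompt template asset.
DEFAULTS = {"document_type": "email", "tone": "plain"}

def render_prompt(template: str, **variables: str) -> str:
    """Fill prompt variables, falling back to saved defaults."""
    values = {**DEFAULTS, **variables}
    return template.format(**values)

prompt = render_prompt(PROMPT_TEMPLATE, document_text="Hi team, ...")
```

A collaborator reusing the template only needs to supply values for the variables they want to override; the saved defaults cover the rest.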

Announcing support for Python 3.10 and R 4.2 frameworks and software specifications on Runtime 23.1

9 Nov 2023

You can now use IBM Runtime 23.1, which includes the latest data science frameworks based on Python 3.10 and R 4.2, to run Watson Studio Jupyter notebooks and R scripts, train models, and run Watson Machine Learning deployments. Update your assets and deployments to use IBM Runtime 23.1 frameworks and software specifications.

Use Apache Spark 3.4 to run notebooks and scripts

Spark 3.4 with Python 3.10 and R 4.2 is now supported as a runtime for notebooks and RStudio scripts in projects. For details on available notebook environments, see Compute resource options for the notebook editor in projects and Compute resource options for RStudio in projects.

Week ending 27 October 2023

Use a Satellite Connector to connect to an on-prem database

26 Oct 2023

Use the new Satellite Connector to connect to a database that is not accessible over the internet (for example, behind a firewall). Satellite Connector uses lightweight Docker-based communication to create secure and auditable connections from your on-prem environment back to IBM Cloud. For instructions, see Connecting to data behind a firewall.

Secure Gateway is deprecated

26 Oct 2023

IBM Cloud announced the deprecation of Secure Gateway. For information, see the Overview and timeline.

If you currently have connections that are set up with Secure Gateway, plan to use an alternative communication method. In IBM watsonx, you can use the Satellite Connector as a replacement for Secure Gateway. See Connecting to data behind a firewall.

Week ending 20 October 2023

Maximum token sizes increased

16 Oct 2023

Limits that were previously applied to the maximum number of tokens allowed in the output from foundation models are removed from paid plans. You can use larger maximum token values during prompt engineering from both the Prompt Lab and the Python library. The exact number of tokens allowed differs by model. For more information about token limits for paid and Lite plans, see Supported foundation models.
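Conceptually, the output limit is just a generation parameter that is capped by each model's own maximum. The sketch below is a plain-Python illustration; `max_new_tokens` is used here as an assumed parameter name, so check the Python library documentation for the exact names your plan and model support.

```python
# Illustrative sketch: the output token limit as a generation parameter.
# "max_new_tokens" is an assumed name for illustration only.
def build_generation_params(max_new_tokens: int, model_max: int) -> dict:
    """Clamp the requested output length to the model's own maximum."""
    if max_new_tokens < 1:
        raise ValueError("max_new_tokens must be positive")
    return {"max_new_tokens": min(max_new_tokens, model_max)}

# Request 4096 output tokens from a model whose ceiling is 8192.
params = build_generation_params(max_new_tokens=4096, model_max=8192)
```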

Week ending 13 October 2023

New notebooks in Samples

12 Oct 2023

Two new notebooks are available that use a vector database from Elasticsearch in the retrieval phase of the retrieval-augmented generation pattern. The notebooks demonstrate how to find matches based on the semantic similarity between the indexed documents and the query text that is submitted from a user.
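The retrieval step those notebooks perform can be reduced to ranking indexed documents by vector similarity. The following is a toy sketch of that idea in plain Python; the real notebooks use an Elasticsearch vector database and a trained embedding model, and the vectors and document names here are hypothetical.

```python
# Toy sketch of semantic retrieval: rank indexed documents by cosine
# similarity between embedding vectors (hypothetical 3-dimensional values).
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# A tiny stand-in for an Elasticsearch vector index.
index = {
    "doc-1": [0.9, 0.1, 0.0],
    "doc-2": [0.1, 0.8, 0.3],
}

def retrieve(query_vector, index, k=1):
    """Return the k document IDs most similar to the query vector."""
    ranked = sorted(index, key=lambda d: cosine_similarity(query_vector, index[d]), reverse=True)
    return ranked[:k]

best = retrieve([1.0, 0.0, 0.1], index)
```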

Intermediate solutions in Decision Optimization

12 Oct 2023

You can now choose to see a sample of intermediate solutions while a Decision Optimization experiment is running. This can be useful for debugging or for seeing how the solver is progressing. For large models that take longer to solve, intermediate solutions let you quickly identify potential problems with the solve without waiting for it to complete. You can configure the Intermediate solution delivery parameter in the run configuration and select a frequency for these solutions. For more information, see Run models and Run configuration parameters.

New Decision Optimization saved model dialog

When you save a model for deployment from the Decision Optimization user interface, you can now review the input and output schema and more easily select the tables that you want to include. You can also add, modify, or delete run configuration parameters, and review the environment and the model files that are used. All of these items are displayed in the same Save as model for deployment dialog. For more information, see Deploying a Decision Optimization model by using the user interface.

Week ending 6 October 2023

Additional foundation models in Frankfurt

5 Oct 2023

All foundation models that are available in the Dallas data center are now also available in the Frankfurt data center. The watsonx.ai Prompt Lab and foundation model inferencing are now supported in the Frankfurt region for these models:

  • granite-13b-chat-v1
  • granite-13b-instruct-v1
  • llama-2-70b-chat
  • gpt-neox-20b
  • mt0-xxl-13b
  • starcoder-15.5b

For more information on these models, see Supported foundation models available with watsonx.ai.

For pricing details, see Watson Machine Learning plans.

Control the placement of a new column in the Concatenate operation (Data Refinery)

6 Oct 2023

You now have two options to specify the position of the new column that results from the Concatenate operation: as the right-most column in the data set, or next to the original column.


Previously, the new column was placed at the beginning of the data set.

Action required: Edit the Concatenate operation in any of your existing Data Refinery flows to specify the new column position. Otherwise, the flow might fail.

For information about Data Refinery operations, see GUI operations in Data Refinery.
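The two placement options can be illustrated with a small plain-Python sketch of column ordering. This is not Data Refinery itself; the column and option names are hypothetical.

```python
# Illustrative sketch of the two placement options for the new column
# produced by a Concatenate operation (not Data Refinery itself).
def concatenate_column(columns, new_name, source_name, position="right"):
    """Return the column order with the new column appended at the
    right-most position or inserted next to its source column."""
    result = list(columns)
    if position == "right":
        result.append(new_name)
    elif position == "next":
        result.insert(result.index(source_name) + 1, new_name)
    else:
        raise ValueError("position must be 'right' or 'next'")
    return result

cols = ["first", "last", "city"]
```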

Week ending 29 September 2023

IBM Granite foundation models for natural language generation

28 Sept 2023

The first two models from the Granite family of IBM foundation models are now available in the Dallas region:

  • granite-13b-chat-v1: General use model that is optimized for dialogue use cases
  • granite-13b-instruct-v1: General use model that is optimized for question answering

Both models are 13B-parameter decoder models that can efficiently predict and generate language in English. Like all models in the Granite family, they are designed for business. Granite models are pretrained on multiple terabytes of data from both general-language sources, such as the public internet, and industry-specific data sources from the academic, scientific, legal, and financial fields.

Try them out today in the Prompt Lab or run a sample notebook that uses the granite-13b-instruct-v1 model for sentiment analysis.

Read the Building AI for business: IBM’s Granite foundation models blog post to learn more.

Week ending 22 September 2023

Decision Optimization Java models

20 Sept 2023

Decision Optimization Java models can now be deployed in Watson Machine Learning. By using the Java worker API, you can create optimization models with OPL, CPLEX, and CP Optimizer Java APIs. You can now easily create your models locally, package them and deploy them on Watson Machine Learning by using the boilerplate that is provided in the public Java worker GitHub. For more information, see Deploying Java models for Decision Optimization.

New notebooks in Samples

21 Sept 2023

New notebooks are available in Samples.

Week ending 15 September 2023

Prompt engineering and synthetic data quick start tutorials

14 Sept 2023

Try the new tutorials to help you learn how to:

  • Prompt foundation models: There are usually multiple ways to prompt a foundation model for a successful result. In the Prompt Lab, you can experiment with prompting different foundation models, explore sample prompts, and save and share your best prompts. One way to improve the accuracy of generated output is to provide the needed facts as context in your prompt text by using the retrieval-augmented generation pattern.
  • Generate synthetic data: You can generate synthetic tabular data in watsonx.ai. The benefit of synthetic data is that you can procure the data on demand, customize it to fit your use case, and produce it in large quantities.
New tutorials:

  • Prompt a foundation model using Prompt Lab: Experiment with prompting different foundation models, explore sample prompts, and save and share your best prompts. Expertise: prompt a model by using the Prompt Lab, without coding.
  • Prompt a foundation model with the retrieval-augmented generation pattern: Prompt a foundation model by leveraging information in a knowledge base. Expertise: use the retrieval-augmented generation pattern in a Jupyter notebook that uses Python code.
  • Generate synthetic tabular data: Generate synthetic tabular data by using a graphical flow editor. Expertise: select operations to generate data.

Watsonx.ai Community

14 Sept 2023

You can now join the watsonx.ai Community for AI architects and builders to learn, share ideas, and connect with others.

Week ending 8 September 2023

Generate synthetic tabular data with Synthetic Data Generator

7 Sept 2023

Now available in the Dallas and Frankfurt regions, Synthetic Data Generator is a new graphical editor tool on watsonx.ai that you can use to generate tabular data to use for training models. Using visual flows and a statistical model, you can create synthetic data based on your existing data or a custom data schema. You can choose to mask your original data and export your synthetic data to a database or as a file.

To get started, see Synthetic data.
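The statistical-model idea behind synthetic tabular data can be sketched in a few lines of plain Python: fit simple per-column statistics on real data, then sample new rows from the fitted distribution. Synthetic Data Generator itself is a graphical flow editor with masking and export options; the data and the normal-distribution fit below are only an illustration of the concept.

```python
# Toy sketch of generating synthetic values from a statistical model
# fitted to existing data (hypothetical "age" column).
import random
import statistics

real_ages = [34, 41, 29, 52, 47, 38]

def fit_and_sample(values, n, seed=0):
    """Sample n synthetic values from a normal fit to the originals."""
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)
    rng = random.Random(seed)  # seeded for reproducibility
    return [rng.gauss(mu, sigma) for _ in range(n)]

synthetic_ages = fit_and_sample(real_ages, n=100)
```

Because the synthetic values are sampled from the fitted distribution rather than copied, you can produce as many rows as you need without exposing the original records.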

Llama-2 Foundation Model for natural language generation and chat

7 Sept 2023

The Llama-2 Foundation Model from Meta is now available in the Dallas region. Llama-2 Chat model is an auto-regressive language model that uses an optimized transformer architecture. The model is pretrained with publicly available online data, and then fine-tuned using reinforcement learning from human feedback. The model is intended for commercial and research use in English-language assistant-like chat scenarios.

LangChain extension for the foundation models Python library

7 Sept 2023

You can now use the LangChain framework with foundation models in watsonx.ai with the new LangChain extension for the foundation models Python library.

This sample notebook demonstrates how to use the new extension: Sample notebook
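Conceptually, the extension lets you compose a prompt template and a watsonx.ai model into a LangChain-style chain. The sketch below imitates that composition in plain Python with a stubbed model call; it is not the extension's actual API, and the stub function is hypothetical.

```python
# Conceptual sketch of a prompt-template -> model chain, with the
# watsonx.ai inference call replaced by a hypothetical stub.
def fake_watsonx_model(prompt: str) -> str:
    # Stand-in for an inference call to a watsonx.ai foundation model.
    return f"[model response to {len(prompt)} prompt characters]"

def make_chain(template, model):
    """Return a callable that formats the template, then calls the model."""
    def chain(**variables):
        return model(template.format(**variables))
    return chain

chain = make_chain("Translate to French: {text}", fake_watsonx_model)
result = chain(text="Hello")
```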

Introductory sample for the retrieval-augmented generation pattern

7 Sept 2023

Retrieval-augmented generation is a simple, powerful technique for leveraging a knowledge base to get factually accurate output from foundation models.

See: Introduction to retrieval-augmented generation
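At its core, the pattern injects retrieved passages into the prompt as context before the model is called. The sketch below shows only that prompt-construction step in plain Python; the retrieval step and the model call are out of scope, and the prompt wording is illustrative.

```python
# Sketch of the generation side of retrieval-augmented generation:
# retrieved passages become context in the prompt sent to the model.
def build_rag_prompt(question, passages):
    """Combine retrieved passages and a question into a grounded prompt."""
    context = "\n\n".join(passages)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

passages = ["Watsonx.ai provides a Prompt Lab for experimenting with prompts."]
prompt = build_rag_prompt("What is the Prompt Lab for?", passages)
```

Because the model is instructed to answer from the supplied context, its output stays grounded in the knowledge base rather than in whatever it memorized during pretraining.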

Week ending 1 September 2023

Deprecation of comments in notebooks

31 Aug 2023

As of today, you can no longer add comments to a notebook from the notebook action bar. Any existing comments were removed.


StarCoder Foundation Model for code generation and code translation

31 Aug 2023

The StarCoder model from Hugging Face is now available in the Dallas region. Use StarCoder to create prompts for generating code or for transforming code from one programming language to another. One sample prompt demonstrates how to use StarCoder to generate Python code from a set of instructions. A second sample prompt demonstrates how to use StarCoder to transform code written in C++ to Python code.
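The two use cases correspond to two prompt shapes. The snippets below are hypothetical prompt templates written in plain Python, assumed here for illustration; they are not the sample prompts shipped with the product.

```python
# Hypothetical prompt shapes for the two StarCoder use cases:
# code generation from instructions, and code translation.
def generation_prompt(instructions: str) -> str:
    """Prompt asking the model to generate Python from instructions."""
    return f"# Instructions:\n# {instructions}\n# Python solution:\n"

def translation_prompt(source_code: str) -> str:
    """Prompt asking the model to translate C++ code to Python."""
    return (
        "Translate the following C++ code to Python.\n\n"
        f"C++:\n{source_code}\n\nPython:\n"
    )

p = generation_prompt("Return the sum of a list of integers")
```

Ending each prompt where the generated code should begin gives a completion-style model like StarCoder a clear continuation point.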

IBM watsonx.ai is available in the Frankfurt region

31 Aug 2023

Watsonx.ai is now generally available in the Frankfurt data center and can be selected as the preferred region when you sign up. The Prompt Lab and foundation model inferencing are supported in the Frankfurt region for these models:

Week ending 25 August 2023

Additional cache enhancements available for Watson Pipelines

21 August 2023

More options are available for customizing your pipeline flow settings. You can now exercise greater control over when the cache is used for pipeline runs. For details, see Managing default settings.

Week ending 18 August 2023

Plan name updates for Watson Machine Learning service

18 August 2023

Starting immediately, plan names are updated for the IBM Watson Machine Learning service, as follows:

  • The v2 Standard plan is now the Essentials plan. The plan is designed to give your organization the resources required to get started working with foundation models and machine learning assets.

  • The v2 Professional plan is now the Standard plan. This plan provides resources designed to support most organizations through asset creation to productive use.

Changes to the plan names do not change your terms of service. That is, if you are registered to use the v2 Standard plan, it will now be named Essentials, but all of the plan details will remain the same. Similarly, if you are registered to use the v2 Professional plan, there are no changes other than the plan name change to Standard.

For details on what is included with each plan, see Watson Machine Learning plans. For pricing information, find your plan on the Watson Machine Learning plan page in the IBM Cloud catalog.

Week ending 11 August 2023

Deprecation of comments in notebooks

7 August 2023

On 31 August 2023, you will no longer be able to add comments to a notebook from the notebook action bar. Any existing comments that were added that way will be removed.


Week ending 4 August 2023

Increased token limit for Lite plan

4 August 2023

If you are using the Lite plan to test foundation models, the token limit for prompt input and output is now increased from 25,000 to 50,000 per account per month. This gives you more flexibility for exploring foundation models and experimenting with prompts.

Custom text analytics template (SPSS Modeler)

4 August 2023

For SPSS Modeler, you can now upload a custom text analytics template to a project. This provides you with more flexibility to capture and extract key concepts in a way that is unique to your context.

Week ending 28 July 2023

Foundation models Python library available

27 July 2023

You can now prompt foundation models in watsonx.ai programmatically using a Python library.

See: Foundation models Python library

Week ending 14 July 2023

Control AI guardrails

14 July 2023

You can now control whether AI guardrails are on or off in the Prompt Lab. AI guardrails remove potentially harmful text from both the input and output fields. Harmful text can include hate speech, abuse, and profanity. To prevent the removal of potentially harmful text, set the AI guardrails switch to off. See Hate speech, abuse, and profanity.
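The effect of the toggle can be sketched as a simple filter that is bypassed when guardrails are off. This plain-Python sketch is only conceptual: the real AI guardrails use classifiers to detect hate speech, abuse, and profanity, not a word list, and the flagged term below is a placeholder.

```python
# Simplified sketch of a guardrails toggle: when on, flagged terms are
# removed from text; when off, text passes through unchanged.
# (The real feature uses classifiers, not a word list.)
FLAGGED = {"badword"}  # placeholder for detected harmful content

def apply_guardrails(text, guardrails_on=True):
    """Filter flagged words from text unless guardrails are switched off."""
    if not guardrails_on:
        return text
    return " ".join(w for w in text.split() if w.lower() not in FLAGGED)

clean = apply_guardrails("this badword is removed")
```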


Microsoft Azure SQL Database connection supports Azure Active Directory authentication (Azure AD)

14 July 2023

You can now select Active Directory for the Microsoft Azure SQL Database connection. Active Directory authentication is an alternative to SQL Server authentication. With this enhancement, administrators can centrally manage user permissions to Azure. For more information, see Microsoft Azure SQL Database connection.

Week ending 7 July 2023

Welcome to IBM watsonx.ai!

7 July 2023

IBM watsonx.ai delivers all the tools that you need to work with machine learning and foundation models.

Get started:

Try generative AI search and answer in this documentation

7 July 2023

You can see generative AI in action by trying the new generative AI search and answer option in the watsonx.ai documentation. The answers are generated by a large language model running in watsonx.ai and based on the documentation content. This feature is only available when you are viewing the documentation while logged in to watsonx.ai.

Enter a question in the documentation search field and click the Try generative AI search and answer icon. The Generative AI search and answer pane opens and answers your question.
