IBM watsonx.ai use case
Last updated: Nov 21, 2024

To transform your business processes with AI-driven solutions, your enterprise needs to integrate both machine learning and generative AI into your operational framework. watsonx.ai provides the processes and technologies to enable your enterprise to develop and deploy machine learning models and generative AI solutions.

Challenges

You can solve the following challenges for your enterprise by using watsonx.ai:

Accessing high-quality data
Organizations need to provide easy access to high-quality data for data science teams who use the data to build machine learning models.
Operationalizing machine learning model building and deploying
Organizations need to implement repeatable processes to quickly and efficiently build and deploy machine learning models to production environments.
Finding answers with foundation models
Organizations need to extract information from unstructured data in documents.

Example: Golden Bank's challenges

Follow the story of Golden Bank as it uses watsonx.ai to implement a process to evaluate fairness in approving mortgage applicants. The team needs to:

  • Prepare the data to ensure that it is in the correct format for training the model.
  • Build and deploy a machine learning model to evaluate the fairness of mortgage approval predictions.
  • Find similar promotions offered by competitors.
  • Construct prompt templates to perform generation and question-answering tasks.
  • Tune the foundation model with retraining data to ensure the best performance and cost-effectiveness.
  • Create a pipeline to simplify the retraining process.

Process

To leverage watsonx.ai for your enterprise, your organization can follow this process:

  1. Prepare the data
  2. Build and train models
  3. Deploy models
  4. Prompt a foundation model
  5. Tune a foundation model
  6. Automate the AI lifecycle

The watsonx.ai component provides the tools and processes that your organization needs to implement an AI solution.

Image showing the flow of the watsonx.ai use case

1. Prepare the data

Data scientists prepare their own data sets and add those data assets to a project, where the team collaborates to prepare, analyze, and model the data.

Data Refinery
  What you can do: Access and refine data from diverse data source connections. Materialize the resulting data sets as snapshots in time that might combine, join, or filter data for other data scientists to analyze and explore.
  Best to use when: You need to visualize the data when you want to shape or cleanse it, or you want to simplify the process of preparing large amounts of raw data for analysis.

Synthetic Data Generator
  What you can do: Generate synthetic tabular data based on production data or a custom data schema by using visual flows and modeling algorithms.
  Best to use when: You want to mask or mimic production data, or you want to generate synthetic data from a custom data schema.

Example: Golden Bank's data preparation

In their project, the data scientists refine the data to prepare it for training a machine learning model by ensuring that the data is in the correct format. The machine learning engineers use the structured and sequential training data in the AutoAI experiment that builds the model pipelines.
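
Data Refinery is a graphical tool, but the same kind of preparation can also be scripted in a notebook. The following sketch uses pandas; the file name and column names are hypothetical stand-ins for Golden Bank's mortgage data.

```python
# Illustrative only: scripted data preparation with pandas.
# The file and column names below are hypothetical.
import pandas as pd

df = pd.read_csv("mortgage_applications.csv")

# Drop rows that are missing values in the columns used for training.
df = df.dropna(subset=["income", "loan_amount", "credit_score"])

# Normalize a categorical column to a consistent format.
df["state"] = df["state"].str.strip().str.upper()

# Cast the label column to the integer type that training tools expect.
df["approved"] = df["approved"].astype(int)

# Save the prepared snapshot for the AutoAI experiment.
df.to_csv("mortgage_applications_prepared.csv", index=False)
```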


2. Build and train machine learning models

To get predictive insights based on your data, data scientists, business analysts, and machine learning engineers can build and train machine learning models. Data scientists use watsonx.ai tools to build the AI models, ensuring that the right algorithms and optimizations are used to make predictions that help to solve business problems.

AutoAI
  What you can do: Automatically select algorithms, engineer features, generate pipeline candidates, and train machine learning model pipeline candidates. Then, evaluate the ranked pipelines and save the best as machine learning models. Deploy the trained machine learning models to a space, or export the model training pipeline that you like from AutoAI into a notebook to refine it.
  Best to use when: You want an advanced and automated way to build a good set of training pipelines and machine learning models quickly, and you want to be able to export the generated pipelines to refine them.

Notebooks and scripts
  What you can do: Write your own feature engineering, model training, and evaluation code in Python or R. Use training data sets that are available in the project, or connections to data sources such as databases, data lakes, or object storage. Code with your favorite open source frameworks and libraries.
  Best to use when: You want to use Python or R coding skills to have full control over the code that is used to create, train, and evaluate the machine learning models.

SPSS Modeler flows
  What you can do: Create your own machine learning model training, evaluation, and scoring flows. Use training data sets that are available in the project, or connections to data sources such as databases, data lakes, or object storage.
  Best to use when: You want a simple way to explore data and define machine learning model training, evaluation, and scoring flows.

RStudio
  What you can do: Analyze data and build and test machine learning models by working with R in RStudio.
  Best to use when: You want to use a development environment to work in R.

Decision Optimization
  What you can do: Prepare data, import models, solve problems and compare scenarios, visualize data, find solutions, produce reports, and deploy machine learning models.
  Best to use when: You need to evaluate millions of possibilities to find the best solution to a prescriptive analytics problem.

Federated learning
  What you can do: Train a common machine learning model that uses distributed data.
  Best to use when: You need to train a machine learning model without moving, combining, or sharing data that is distributed across multiple locations.

Example: Golden Bank's machine learning model building and training

Data scientists at Golden Bank use AutoAI to build a machine learning model that predicts whether a customer will purchase a bank product based on a promotion.
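
For teams that prefer code over the AutoAI user interface, the experiment can also be run programmatically. The sketch below assumes the AutoAI experiment interface from the ibm-watsonx-ai Python SDK; the endpoint, credentials, project ID, experiment name, and label column are placeholders, and parameter names can vary between SDK versions.

```python
# A sketch of an AutoAI experiment run through the ibm-watsonx-ai SDK.
# All IDs and names are placeholders.
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.experiment import AutoAI

credentials = Credentials(
    url="https://us-south.ml.cloud.ibm.com",
    api_key="YOUR_API_KEY",
)
experiment = AutoAI(credentials, project_id="YOUR_PROJECT_ID")

# Configure a binary classification experiment on the prepared data.
optimizer = experiment.optimizer(
    name="Promotion response - AutoAI",
    prediction_type=AutoAI.PredictionType.BINARY,
    prediction_column="purchased",  # hypothetical label column
)

# Train the candidate pipelines, then inspect the leaderboard.
optimizer.fit(training_data_references=[...])  # supply DataConnection objects
print(optimizer.summary())
best_pipeline = optimizer.get_pipeline()  # highest-ranked pipeline by default
```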


3. Deploy models

When operations team members deploy your AI models, the models become available for applications to use for scoring and predictions that help drive actions.

Spaces user interface
  What you can do: Use the Spaces UI to deploy models and other assets from projects to spaces.
  Best to use when: You want to deploy models and view deployment information in a collaborative workspace.

Example: Golden Bank's model deployment

The operations team members at Golden Bank promote the predictive model from the project to a deployment space and then create an online model deployment. Next, they test the deployed model by inputting test data to predict if a customer will purchase a bank product based on the promotion.
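
Deployments can also be created and tested programmatically. This sketch assumes the ibm-watsonx-ai APIClient; the space ID, model ID, and payload fields are placeholders.

```python
# A sketch of creating and scoring an online deployment with the
# ibm-watsonx-ai APIClient. IDs and payload fields are placeholders.
from ibm_watsonx_ai import APIClient, Credentials

credentials = Credentials(
    url="https://us-south.ml.cloud.ibm.com",
    api_key="YOUR_API_KEY",
)
client = APIClient(credentials)
client.set.default_space("YOUR_SPACE_ID")

# Create an online deployment for a model already promoted to the space.
deployment = client.deployments.create(
    "YOUR_MODEL_ID",
    meta_props={
        client.deployments.ConfigurationMetaNames.NAME: "Promotion response - online",
        client.deployments.ConfigurationMetaNames.ONLINE: {},
    },
)
deployment_id = client.deployments.get_id(deployment)

# Score the deployment with a small test payload.
payload = {
    "input_data": [
        {"fields": ["age", "income", "promotion_code"],  # hypothetical fields
         "values": [[42, 75000, "SPRING24"]]}
    ]
}
print(client.deployments.score(deployment_id, payload))
```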


4. Prompt a foundation model

Your team can write code in a Jupyter notebook or use the Prompt Lab to develop prompts with a foundation model.

Prompt Lab
  What you can do: Experiment with prompting different foundation models. Select different foundation models to prompt. Save and share the most effective prompts.
  Best to use when: You want an easy user interface to explore and test different prompts, and you want to save a prompt template or prompt session as a project asset or export it as a Jupyter notebook for further analysis.

Notebooks and scripts
  What you can do: Prompt foundation models programmatically by using the Python library. The code can transform data, adjust foundation model parameters, prompt foundation models, and generate factually accurate output by applying the retrieval-augmented generation pattern. Use training data sets that are available in the project, or connections to data sources such as databases, data lakes, or object storage. Use your favorite open source frameworks and libraries.
  Best to use when: You want to use Python or R coding skills to have full control over the code that is used to perform and evaluate prompt engineering.
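
As a minimal example of the programmatic route, the following sketch prompts a foundation model with the ibm-watsonx-ai SDK. The model ID, credentials, and generation parameters are placeholders.

```python
# A sketch of prompting a foundation model with the ibm-watsonx-ai SDK.
# The model ID and credentials are placeholders.
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

credentials = Credentials(
    url="https://us-south.ml.cloud.ibm.com",
    api_key="YOUR_API_KEY",
)

model = ModelInference(
    model_id="ibm/granite-13b-instruct-v2",  # placeholder model ID
    credentials=credentials,
    project_id="YOUR_PROJECT_ID",
    params={"decoding_method": "greedy", "max_new_tokens": 200},
)

prompt = (
    "Write a short marketing email that highlights this bank promotion:\n"
    "No fees on balance transfers for the first six months."
)
print(model.generate_text(prompt))
```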

Example: Golden Bank's prompt engineering

Golden Bank's data science and prompt engineering teams work together to gather relevant documents from various online sources that highlight available promotions from its competitors. They feed their promotion data into a Jupyter notebook that automates sourcing online news articles. The notebook uses LangChain to chunk the text into smaller extracts that are suitable for including in prompts, which ensures that prompts do not exceed the model's context window limit.
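
A sketch of that chunking step, assuming LangChain's RecursiveCharacterTextSplitter; the source file name and chunk sizes are illustrative.

```python
# A sketch of chunking article text for prompts with LangChain.
# File name and chunk sizes are illustrative.
from langchain.text_splitter import RecursiveCharacterTextSplitter

with open("competitor_promotions.txt") as f:  # placeholder source document
    article_text = f.read()

splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,    # characters per chunk
    chunk_overlap=100,  # overlap so sentences are not cut off mid-thought
)
chunks = splitter.split_text(article_text)

# Each chunk is now small enough to fit in a prompt's context window.
print(f"{len(chunks)} chunks; first 200 characters:\n{chunks[0][:200]}")
```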

Then the team uses Prompt Lab to create a prompt template for generation and a prompt template for question answering. For the generation task, the goal is to generate email content that highlights the bank's promotions. For the question-answering task, the input and output vary with each question and answer, so the team feeds the promotion text into the instructions.

5. Tune a foundation model

Your team can write code in a Jupyter notebook or use the Tuning Studio to tune a foundation model. You might want to tune a foundation model to reduce costs or improve the model's performance.

Tuning Studio
  What you can do: Tune a foundation model to reduce costs or improve performance.
  Best to use when: You want an easy user interface to create a tuned foundation model.

Notebooks and scripts
  What you can do: Tune foundation models programmatically by using the Python library. The code can trigger the prompt tuning process, deploy a prompt-tuned model, and run inference against a prompt-tuned model. Use training data sets that are available in the project, or connections to data sources such as databases, data lakes, or object storage. Use your favorite open source frameworks and libraries.
  Best to use when: You want to use Python or R coding skills to have full control over the code that is used to transform data, and then tune and evaluate models.

Example: Golden Bank's prompt tuning

Golden Bank's prompt engineering team prompt-tunes the foundation model by using articles that contain additional promotions. The team produces a smaller, more cost-effective foundation model with the same performance level as the original foundation model that they selected for inferencing.
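
The tuning run can also be started from a notebook. The sketch below assumes the TuneExperiment interface from the ibm-watsonx-ai Python SDK; the experiment name, base model, task, and data references are placeholders, and exact names may differ between SDK versions.

```python
# A sketch of starting a prompt-tuning run with the ibm-watsonx-ai SDK.
# Assumes the TuneExperiment interface; all names and IDs are placeholders.
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.experiment import TuneExperiment

credentials = Credentials(
    url="https://us-south.ml.cloud.ibm.com",
    api_key="YOUR_API_KEY",
)
experiment = TuneExperiment(credentials, project_id="YOUR_PROJECT_ID")

prompt_tuner = experiment.prompt_tuner(
    name="Promotions - prompt tuning",    # placeholder experiment name
    task_id=experiment.Tasks.GENERATION,  # tuning objective
    base_model="google/flan-t5-xl",       # placeholder base model
)

# Start the tuning run against labeled training data in the project.
prompt_tuner.run(
    training_data_references=[...],  # supply DataConnection objects
    background_mode=False,           # wait for the run to finish
)
print(prompt_tuner.summary())
```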


6. Automate the ML lifecycle

Your team can automate and simplify the MLOps and AI lifecycle with Orchestration Pipelines.

Orchestration Pipelines
  What you can do: Create repeatable and scheduled flows that automate notebook, Data Refinery, and machine learning pipelines, from data ingestion to model training, testing, and deployment.
  Best to use when: You want to automate some or all of the steps in an MLOps flow.

Example: Golden Bank's automated ML lifecycle

The data scientists at Golden Bank can use pipelines to automate their complete ML lifecycle and simplify the machine learning model retraining process.
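
Orchestration Pipelines itself is configured in a graphical editor rather than in code. Purely as a generic illustration of what the automated flow chains together, a retraining pipeline has roughly this shape in plain Python; every function body here is a hypothetical placeholder.

```python
# Generic illustration only: the stages a retraining pipeline automates.
# Each function body is a placeholder for the real step.
def ingest_data() -> str:
    """Pull fresh data and return its location."""
    return "mortgage_applications_prepared.csv"  # placeholder path

def train_model(data_path: str) -> str:
    """Retrain the model on the new data and return a model ID."""
    return "model-v2"  # placeholder ID

def evaluate_model(model_id: str) -> bool:
    """Check quality metrics before promoting the model."""
    return True  # placeholder check

def deploy_model(model_id: str) -> None:
    """Promote the validated model to the deployment space."""
    print(f"deployed {model_id}")

# Run the stages in order; a scheduler can invoke this on a cadence.
if __name__ == "__main__":
    data_path = ingest_data()
    model_id = train_model(data_path)
    if evaluate_model(model_id):
        deploy_model(model_id)
```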


Tutorials for watsonx.ai

Refine and visualize data with Data Refinery
  Description: Prepare and visualize tabular data with a graphical flow editor.
  Expertise: Select operations to manipulate data.

Generate synthetic tabular data
  Description: Generate synthetic tabular data by using a graphical flow editor.
  Expertise: Select operations to generate data.

Analyze data in a Jupyter notebook
  Description: Load data, run, and share a notebook.
  Expertise: Understand generated Python code.

Build and deploy a machine learning model with AutoAI
  Description: Automatically build model candidates with the AutoAI tool.
  Expertise: Build, deploy, and test a model without coding.

Build and deploy a machine learning model in a notebook
  Description: Build a model by updating and running a notebook that uses Python code and the watsonx.ai Runtime APIs.
  Expertise: Build, deploy, and test a scikit-learn model that uses Python code.

Build and deploy a machine learning model with SPSS Modeler
  Description: Build a C5.0 model that uses the SPSS Modeler tool.
  Expertise: Drop data and operation nodes on a canvas and select properties.

Build and deploy a Decision Optimization model
  Description: Automatically build scenarios with the Modeling Assistant.
  Expertise: Solve and explore scenarios, then deploy and test a model without coding.

Prompt a foundation model using Prompt Lab
  Description: Experiment with prompting different foundation models, explore sample prompts, and save and share your best prompts.
  Expertise: Prompt a model using Prompt Lab without coding.

Prompt a foundation model with the retrieval-augmented generation pattern
  Description: Prompt a foundation model by leveraging information in a knowledge base.
  Expertise: Use the retrieval-augmented generation pattern in a Jupyter notebook that uses Python code.

Tune a foundation model
  Description: Tune a foundation model to enhance model performance.
  Expertise: Use the Tuning Studio to tune a model without coding.

Automate the lifecycle for a model with pipelines
  Description: Create and run a pipeline to automate building and deploying a machine learning model.
  Expertise: Drop operation nodes on a canvas and select properties.


Parent topic: Use cases
