To transform your business processes with AI-driven solutions, your enterprise needs to integrate both machine learning and generative AI into your operational framework. watsonx.ai provides the processes and technologies to enable your enterprise
to develop and deploy machine learning models and generative AI solutions.
Challenges
You can solve the following challenges for your enterprise by using watsonx.ai:
Accessing high-quality data
Organizations need to provide easy access to high-quality data for data science teams who use the data to build machine learning models.
Operationalizing machine learning model building and deploying
Organizations need to implement repeatable processes to quickly and efficiently build and deploy machine learning models to production environments.
Finding answers with foundation models
Organizations need to get information from unstructured data in documents.
Example: Golden Bank's challenges
Follow the story of Golden Bank as it uses watsonx.ai to implement a process to evaluate fairness in approving mortgage applicants. The team needs to:
Prepare the data to ensure that it is in the correct format for training the model.
Build and deploy a machine learning model to evaluate the fairness of mortgage approval predictions.
Find similar promotions offered by their competitors.
Construct prompt templates to perform generation and question-answering tasks.
Tune the foundation model with retraining data to ensure the best performance and cost effectiveness.
Create a pipeline to simplify the retraining process.
Process
To leverage watsonx.ai for your enterprise, your organization can follow this process:
The watsonx.ai component provides the tools and processes that your organization needs to implement an AI solution.
1. Prepare the data
Data scientists can prepare their own data sets and add those data assets to a project, where the team collaborates to prepare, analyze, and model the data.
Generate synthetic tabular data based on production data or a custom data schema by using visual flows and modeling algorithms.
You want to mask or mimic production data or you want to generate synthetic data from a custom data schema.
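The Synthetic Data Generator and Data Refinery are visual tools, but a simple masking step can also be scripted. The following minimal sketch assumes a hypothetical mortgage_applications.csv extract with illustrative customer_id, name, and email columns; it hashes the direct identifier and drops personal fields before the data is shared for model training:

```python
import hashlib

import pandas as pd

# Hypothetical production extract; the file and column names are illustrative only.
df = pd.read_csv("mortgage_applications.csv")

# Replace the direct identifier with a one-way hash so rows stay joinable
# across data sets without exposing the real customer ID.
df["customer_id"] = df["customer_id"].map(
    lambda value: hashlib.sha256(str(value).encode("utf-8")).hexdigest()[:12]
)

# Drop columns that carry personal data and are not needed for training.
df = df.drop(columns=["name", "email"])

df.to_csv("mortgage_applications_masked.csv", index=False)
```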
Example: Golden Bank's data preparation
In their project, the data scientists refine the data to prepare it for training a machine learning model by ensuring that the data is in the correct format. The machine learning engineers use the structured and sequential training data
in the AutoAI experiment that builds the model pipelines.
2. Build and train machine learning models
To get predictive insights based on your data, data scientists, business analysts, and machine learning engineers can build and train machine learning models. Data scientists use watsonx.ai tools to build the AI models, ensuring that the right
algorithms and optimizations are used to make predictions that help to solve business problems.
Use notebooks and scripts to write your own feature engineering, model training, and evaluation code in Python or R. Use training data sets that are available in the project, or connections to data sources such as databases, data lakes, or object storage. A minimal training sketch follows this list of options.
Code with your favorite open source frameworks and libraries.
You want to use Python or R coding skills to have full control over the code that is used to create, train, and evaluate the machine learning models.
Use SPSS Modeler flows to create your own machine learning model training, evaluation, and scoring flows. Use training data sets that are available in the project, or connections to data sources such as databases, data lakes, or object
storage.
You want a simple way to explore data and define machine learning model training, evaluation, and scoring flows.
Train a common machine learning model that uses distributed data.
You need to train a machine learning model without moving, combining, or sharing data that is distributed across multiple locations.
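For the notebook option, the following is a minimal sketch of the build-train-evaluate loop, assuming a hypothetical promotion_history.csv data set with illustrative column names and a binary purchased target. It uses scikit-learn, one of the open source libraries available in watsonx.ai notebooks:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical training set; the target marks whether a promotion led to a purchase.
df = pd.read_csv("promotion_history.csv")
X = df.drop(columns=["purchased"])
y = df["purchased"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Feature engineering: scale numeric columns, one-hot encode categorical ones.
preprocess = ColumnTransformer(
    [
        ("numeric", StandardScaler(), ["age", "balance"]),
        ("categorical", OneHotEncoder(handle_unknown="ignore"), ["product_type"]),
    ]
)

model = Pipeline(
    [("preprocess", preprocess), ("classifier", LogisticRegression(max_iter=1000))]
)
model.fit(X_train, y_train)

# Evaluate on held-out data before promoting the model to a deployment space.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"ROC AUC: {auc:.3f}")
```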
Example: Golden Bank's machine learning model building and training
Data scientists at Golden Bank use AutoAI to build a machine learning model that predicts whether a customer will purchase a bank product based on a promotion.
3. Deploy models
When operations team members deploy your AI models, the machine learning models become available for applications to use for scoring and predictions to help drive actions.
Use the Spaces UI to deploy models and other assets from projects to spaces.
You want to deploy models and view deployment information in a collaborative workspace.
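The Spaces UI flow also has a programmatic counterpart. The following sketch, based on the ibm-watsonx-ai Python client, stores a trained model in a deployment space and creates an online deployment; the API key, space ID, model type, and software specification name are placeholders that vary by account and runtime version:

```python
from ibm_watsonx_ai import APIClient, Credentials

# Placeholder credentials and IDs; substitute your own values.
credentials = Credentials(url="https://us-south.ml.cloud.ibm.com", api_key="<API_KEY>")
client = APIClient(credentials)
client.set.default_space("<SPACE_ID>")

# Store the trained model in the space. The model type and software
# specification must match the framework and runtime that you used.
model_details = client.repository.store_model(
    model=model,  # for example, the scikit-learn pipeline trained earlier
    meta_props={
        client.repository.ModelMetaNames.NAME: "promotion-purchase-model",
        client.repository.ModelMetaNames.TYPE: "scikit-learn_1.1",
        client.repository.ModelMetaNames.SOFTWARE_SPEC_ID:
            client.software_specifications.get_id_by_name("runtime-23.1-py3.10"),
    },
)
model_id = client.repository.get_model_id(model_details)

# Create an online deployment so applications can request scores.
deployment = client.deployments.create(
    model_id,
    meta_props={
        client.deployments.ConfigurationMetaNames.NAME: "promotion-purchase-online",
        client.deployments.ConfigurationMetaNames.ONLINE: {},
    },
)
deployment_id = client.deployments.get_id(deployment)

# Score a test row against the online endpoint; fields and values are illustrative.
payload = {
    "input_data": [
        {"fields": ["age", "balance", "product_type"], "values": [[42, 1350.0, "savings"]]}
    ]
}
print(client.deployments.score(deployment_id, payload))
```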
Example: Golden Bank's model deployment
The operations team members at Golden Bank promote the predictive model from the project to a deployment space and then create an online model deployment. Next, they test the deployed model by inputting test data to predict if a customer
will purchase a bank product based on the promotion.
4. Prompt a foundation model
Your team can write code in a Jupyter notebook or use the Prompt Lab to develop prompts with a foundation model.
Use the Prompt Lab to experiment with prompting different foundation models. Select different foundation models to prompt. Save and share the most effective prompts.
You want an easy user interface to explore and test different prompts.
You want to save a prompt template or prompt session as a project asset, or export it as a Jupyter notebook to perform further analysis.
Use notebooks and scripts to prompt foundation models programmatically by using the Python Library. The code can transform data, adjust foundation model parameters, prompt foundation models, and generate factually accurate output by
applying the retrieval-augmented generation pattern. Use training data sets that are available in the project, or connections to data sources such as databases, data lakes, or object storage.
Use your favorite open source frameworks and libraries.
You want to use Python or R coding skills to have full control over the code that is used to perform and evaluate prompt engineering.
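As a minimal sketch of programmatic prompting with the watsonx.ai Python library, the following example sends a single generation prompt to a foundation model; the model ID, credentials, project ID, and parameter values are placeholders:

```python
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference
from ibm_watsonx_ai.metanames import GenTextParamsMetaNames as GenParams

# Placeholder credentials, project ID, and model ID; substitute your own values.
model = ModelInference(
    model_id="ibm/granite-13b-instruct-v2",
    credentials=Credentials(url="https://us-south.ml.cloud.ibm.com", api_key="<API_KEY>"),
    project_id="<PROJECT_ID>",
    params={
        GenParams.DECODING_METHOD: "greedy",
        GenParams.MAX_NEW_TOKENS: 300,
    },
)

prompt = (
    "Write a short marketing email that highlights the following bank promotion:\n\n"
    "New savings accounts opened this month earn a 0.5% bonus rate for one year."
)
print(model.generate_text(prompt=prompt))
```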
Example: Golden Bank's prompt engineering
Golden Bank's data scientist and prompt engineering teams work together to gather relevant documents from various online sources that highlight available promotions by its competitors. They feed their promotion data into a Jupyter notebook to automate sourcing online news articles. The notebook uses LangChain to chunk the text into smaller extracts that are suitable for including in prompts, which ensures that the prompts do not exceed the model's context window limit.
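A chunking step like the one described above might look like the following sketch, which uses LangChain's RecursiveCharacterTextSplitter; the file name and chunk sizes are illustrative:

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Split a long article into overlapping chunks that fit inside a prompt.
# chunk_size and chunk_overlap are measured in characters; values are illustrative.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)

with open("competitor_promotions.txt", encoding="utf-8") as f:
    article_text = f.read()

chunks = splitter.split_text(article_text)
print(f"{len(chunks)} chunks ready for prompting")
```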
Then the team uses the Prompt Lab to create a prompt template for generation and a prompt template for question-answering. For the generation task, the goal is to generate email content that highlights the bank promotions. For the question-answering task, the input and output vary with each question and answer, so the team includes the promotion text in the instructions.
5. Tune a foundation model
Your team can write code in a Jupyter notebook or use the Tuning Studio to tune a foundation model. You might want to tune a foundation model to reduce costs or improve the model's performance.
Use notebooks and scripts to tune foundation models programmatically by using the Python Library. The code can trigger the prompt tuning process, deploy a prompt-tuned model, and inference a prompt-tuned model. Use training data sets
that are available in the project, or connections to data sources such as databases, data lakes, or object storage.
Use your favorite open source frameworks and libraries.
You want to use Python or R coding skills to have full control over the code that is used to transform data, and then tune and evaluate models.
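As a sketch of programmatic prompt tuning with the watsonx.ai Python library, the following example configures and runs a tuning experiment; the experiment name, base model, task, and data asset ID are illustrative placeholders, and the exact parameters vary by release:

```python
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.experiment import TuneExperiment
from ibm_watsonx_ai.helpers import DataConnection

credentials = Credentials(url="https://us-south.ml.cloud.ibm.com", api_key="<API_KEY>")
experiment = TuneExperiment(credentials, project_id="<PROJECT_ID>")

# Configure a prompt tuner against a base model; names and values are illustrative.
prompt_tuner = experiment.prompt_tuner(
    name="promotion-email-tuning",
    task_id=experiment.Tasks.GENERATION,
    base_model="google/flan-t5-xl",
    num_epochs=10,
)

# training_data_references points at labeled examples stored as a project data asset.
tuning_details = prompt_tuner.run(
    training_data_references=[DataConnection(data_asset_id="<DATA_ASSET_ID>")],
    background_mode=False,  # block until the tuning run completes
)
print(prompt_tuner.get_run_status())
```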
Example: Golden Bank's prompt tuning
Golden Bank's prompt engineering team prompt-tunes the foundation model by using articles that contain additional promotions. The team produces a smaller, more cost-effective foundation model with the same level of performance as the original foundation model that they selected for inferencing.
6. Automate the ML lifecycle
Your team can automate and simplify the MLOps and AI lifecycle with Orchestration Pipelines.
Use pipelines to create repeatable and scheduled flows that automate notebook, Data Refinery, and machine learning pipelines, from data ingestion to model training, testing, and deployment.
You want to automate some or all of the steps in an MLOps flow.
Example: Golden Bank's automated ML lifecycle
The data scientists at Golden Bank can use pipelines to automate their complete ML lifecycle and simplify the machine learning model retraining process.