Quick start: Try the watsonx.ai end-to-end use case


This tutorial focuses on a sample use case in the finance industry. Golden Bank needs to perform a stock anomalies analysis to boost productivity and increase the accuracy of a stock analyst's work in investment banking.

Required services
watsonx.ai
Watson Machine Learning

Scenario: Stock anomaly analysis process

To accomplish this goal, the typical process might be as follows:

  1. An investment banker or manager asks the stock analyst to research a company’s stock.
  2. The stock analyst downloads the company’s stock data.
  3. They search through the stock data manually to find anomalies in how the stock price performed.
  4. They explain the anomalies by manually searching the web for relevant news articles around the specific dates.
  5. The stock analyst summarizes the reasoning behind the anomalies using the news articles.
  6. They do follow-up research about specific pieces of information and dates.
  7. They send the report to the investment banker, who does further analysis to make an investment decision.

Basic task workflow using watsonx.ai

Watsonx.ai can help accomplish each phase of this process. Your basic workflow includes these tasks:

  1. Open a project. Projects are where you can collaborate with others to work with data.
  2. Add your data to the project. You can add CSV files or data from a remote data source through a connection.
  3. Train a model. You can use a variety of tools, such as AutoAI, SPSS Modeler, or Jupyter notebooks, to train the model.
  4. Deploy and test your model.
  5. Transform the data.
  6. Prompt a foundation model.
  7. Tune the foundation model.

Read about watsonx.ai

To transform your business processes with AI-driven solutions, your enterprise needs to integrate both machine learning and generative AI into your operational framework. Watsonx.ai provides the processes and technologies to enable your enterprise to develop and deploy machine learning models and generative AI solutions.

Learn more about watsonx.ai

Read more about watsonx.ai use cases

Watch a video about watsonx.ai

Watch this video to preview the steps in this tutorial. There might be slight differences in the user interface shown in the video. The video is intended to be a companion to the written tutorial.

This video provides a visual method to learn the concepts and tasks in this documentation.


Try a tutorial for watsonx.ai

In this tutorial, you will complete these tasks:

  • Task 1: Create the sample project
  • Task 2: Visualize the data
  • Task 3: Train the model
  • Task 4: Deploy the model
  • Task 5: Gather relevant news articles
  • Task 6: Prompt the foundation model

Tips for completing this tutorial

Here are some tips for successfully completing this tutorial.

Use the video picture-in-picture

Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.

The following animated image shows how to use the video picture-in-picture and table of contents features:

How to use picture-in-picture and chapters

Get help in the community

If you need help with this tutorial, you can ask a question or find an answer in the watsonx Community discussion forum.

Set up your browser windows

For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along.

Side-by-side tutorial and UI

Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later.



Task 1: Create the sample project

To preview this task, watch the video beginning at 00:58.

This tutorial uses a sample project that contains the data sets, notebook, and prompt templates to perform the analysis. Follow these steps to create a project based on a sample:

  1. Access the Stock anomalies analysis project in the Resource hub.

    1. Click Create project.

    2. Accept the default values for the project name, and click Create.

    3. Click View new project when the project is successfully created.

  2. Associate a Watson Machine Learning service with the project:

    1. When the project opens, click the Manage tab, and select the Services and integrations page.

    2. On the IBM services tab, click Associate.

    3. Select your Watson Machine Learning instance. If you don't have a Watson Machine Learning service instance provisioned yet, follow these steps:

      1. Click New service.

      2. Select Watson Machine Learning.

      3. Click Create.

      4. Select the new service instance from the list.

    4. Click Associate service.

    5. If necessary, click Cancel to return to the Services & Integrations page.

  3. Click the Assets tab in the project to see the sample assets.

For more information or to watch a video, see Creating a project.

For more information on associated services, see Adding associated services.

Check your progress

The following image shows the project Assets tab. You are now ready to visualize the training data.

The project Assets tab




Task 2: Visualize the data

To preview this task, watch the video beginning at 01:27.

The three data sets in the sample project contain synthetic data generated using public stock data from the Yahoo! Finance website as a basis. The training data for a time series anomaly prediction model must be structured and sequential. In this case, the synthetic data is structured and sequential. Follow these steps to view the data assets in the sample project:

  1. Open the historical_data.csv data set. This data set contains historical stock price performance from May 2012 to May 2016.
  2. Return to the project's Assets tab, and open the test_data.csv data set. This data set contains historical stock price performance in Q1 2023.
  3. Return to the project's Assets tab, and open the training_data.csv data set. This data set contains historical stock price performance in 2023.
  4. Click the Visualization tab.
    1. Select the Date column, and then click Visualize data. The first suggested chart type, a histogram, displays.
    2. Select the Line chart type.
      1. For the X-axis, select the Date column.
      2. For the Y-axis, select the Adj Close column. This shows the adjusted closing price by date. The target column for anomaly analysis is the adjusted closing price.
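The same line chart can also be reproduced in a notebook. The following is a minimal pandas sketch; the Date and Adj Close column names come from the tutorial, while the sample rows are hypothetical stand-ins for the contents of training_data.csv:

```python
import io
import pandas as pd

# Hypothetical sample rows mirroring the structure of training_data.csv.
csv_text = """Date,Adj Close
2023-01-03,125.07
2023-01-04,126.36
2023-01-05,125.02
"""

df = pd.read_csv(io.StringIO(csv_text), parse_dates=["Date"])

# The Visualization tab's line chart puts Date on the X-axis and the
# adjusted closing price on the Y-axis; the same series in a notebook:
series = df.set_index("Date")["Adj Close"]
print(series)
# df.plot(x="Date", y="Adj Close") would draw the chart (needs matplotlib)
```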

Check your progress

The following image shows a visualization of the training_data.csv file. Now you are ready to build the model using this training data.

Visualization of the training_data.csv file




Task 3: Train the model

To preview this task, watch the video beginning at 02:13.

You can use a variety of tools, such as AutoAI, SPSS Modeler, or Jupyter notebooks, to train the model. In this tutorial, you will train the time series anomaly prediction model with AutoAI. Follow these steps to create the AutoAI experiment:

  1. Return to the project's Assets tab, and then click New asset > Build machine learning models automatically.

  2. On the Build machine learning models automatically page, type the name:

    Stock anomaly experiment

    1. Confirm that the Machine Learning service instance that you associated with your project is selected in the Watson Machine Learning Service Instance field.

  3. Click Create.

  4. On the Add data source page, add the training data:

    1. Click Select data from project.

    2. Select Data asset > training_data.csv, and click Select asset.

  5. Set the time series analysis settings:

    1. Select Yes if you are asked to create a time series experiment.

    2. Select Anomaly prediction.

  6. Select Adj Close for the Feature columns.

  7. Click Run experiment. As the model trains, you see an infographic that shows the process of building the pipelines.
    Build model pipelines

    For a list of algorithms, or estimators, available with each machine learning technique in AutoAI, see: AutoAI implementation detail.

  8. After the experiment run is complete, you can view and compare the ranked pipelines in a leaderboard.

    Pipeline leaderboard

  9. You can click Pipeline comparison to see how they differ.

    Pipeline comparison metric chart

  10. Click the highest ranked pipeline to see the pipeline details.

  11. Review the Model evaluation page to see the detailed evaluation metrics about the model performance.

    The AutoAI tool considers a wide range of criteria to spot anomalies. In the table, you can see the evaluation based on different metrics, such as Average precision and Area under ROC, for each of the anomaly types.

    Anomaly types

    | Anomaly type | Description |
    |--------------|-------------|
    | Trend anomaly | A segment of a time series that has a trend change compared to the time series before the segment. |
    | Variance anomaly | A segment of a time series in which the variance of the time series is changed. |
    | Localized extreme anomaly | An unusual data point in a time series that deviates significantly from the data points around it. |
    | Level shift anomaly | A segment in which the mean value of a time series is changed. |
  12. Save the model.

    1. Click Save as.

    2. Select Model.

    3. For the model name, type:

      Anomaly Prediction Model
      
    4. Click Create. This saves the pipeline as a model in your project.

  13. When the model is saved, click the View in project link in the notification to view the model in your project. Alternatively, you can navigate to the Assets tab in the project, and click the model name in the Models section.
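AutoAI's estimators weigh many criteria when they score anomalies, but the "localized extreme anomaly" type from the table is easy to illustrate. The following is a simplified sketch (not AutoAI's algorithm) that flags points deviating sharply from their local neighborhood using a z-score:

```python
import statistics

def localized_extremes(values, window=5, threshold=3.0):
    """Flag points that deviate sharply from their neighbors, i.e. a
    simplified take on the 'localized extreme anomaly' type."""
    flags = []
    for i, v in enumerate(values):
        lo, hi = max(0, i - window), min(len(values), i + window + 1)
        neighbors = values[lo:i] + values[i + 1:hi]   # exclude the point itself
        mean = statistics.mean(neighbors)
        sd = statistics.pstdev(neighbors)
        # A point is an anomaly if it sits more than `threshold`
        # standard deviations from its local mean.
        flags.append(sd > 0 and abs(v - mean) / sd > threshold)
    return flags

prices = [100.0, 101.0, 100.5, 160.0, 101.5, 100.8, 101.2]
flags = localized_extremes(prices)   # only the 160.0 spike is flagged
```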

Check your progress

The following image shows the model.





Task 4: Deploy the model

To preview this task, watch the video beginning at 03:40.

The next task is to promote the test data and the model to a deployment space, and then create an online deployment.

Task 4a: Promote the test data to the deployment space

The sample project includes test data. You promote that test data to a deployment space, so you can use the test data to test the deployed model. Follow these steps to promote the test data to a deployment space:

  1. Return to the project's Assets tab.

  2. Click the Overflow menu for the test_data.csv data asset, and choose Promote to space.

  3. Choose an existing deployment space. If you don't have a deployment space:

    1. Click Create a new deployment space.

    2. For the name, type:

      Anomaly Prediction Space
      
    3. Select a storage service.

    4. Select a machine learning service.

    5. Click Create.

    6. Close the notification when the space is ready.

  4. Select your new deployment space from the list.

  5. Click Promote.

Check your progress

The following image shows the Promote to space page.


Task 4b: Promote the model to a deployment space

Before you can deploy the model, you need to promote the model to a deployment space. Follow these steps to promote the model to a deployment space:

  1. From the Assets tab, click the Overflow menu for the Anomaly Prediction Model model, and choose Promote to space.

  2. Select the same deployment space from the list.

  3. Select the Go to the model in the space after promoting it option.

  4. Click Promote.

Note: If you didn't select the option to go to the model in the space after promoting it, you can use the navigation menu to navigate to Deployments to select your deployment space and model.

Check your progress

The following image shows the model in the deployment space.


Task 4c: Create and test a model deployment

Now that the model is in the deployment space, follow these steps to create the model deployment:

  1. With the model open, click New deployment.

    1. Select Online as the Deployment type.

    2. For the deployment name, type:

      Anomaly Prediction Model Deployment
      
    3. Click Create.

  2. When the deployment is complete, click the deployment name to view the deployment details page.

  3. Review the scoring endpoint, which you can use to access this model programmatically in your applications.

  4. Test the model.

    1. Click the Test tab.

    2. To locate the test data, click Search in space.

    3. Select Data asset > test_data.csv.

    4. Click Confirm.

    5. Click Predict, and review the predictions for the 62 records in the test data.
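The scoring endpoint from step 3 can also be called from your own code. The following is a sketch only: the region URL, deployment ID, and IAM token are placeholders, and the exact payload fields should be checked against the code snippets shown on the deployment details page.

```python
import json

# Placeholder endpoint; copy the real one from the deployment details page.
endpoint = (
    "https://us-south.ml.cloud.ibm.com/ml/v4/deployments/"
    "<deployment-id>/predictions?version=2021-05-01"
)

# Online-scoring payloads use fields/values rows, matching test_data.csv.
payload = {
    "input_data": [{
        "fields": ["Date", "Adj Close"],
        "values": [
            ["2023-01-03", 125.07],   # one row per record to score
            ["2023-01-04", 126.36],
        ],
    }]
}

headers = {
    "Authorization": "Bearer <IAM-access-token>",  # placeholder token
    "Content-Type": "application/json",
}

body = json.dumps(payload)
# A real call would be: requests.post(endpoint, headers=headers, data=body)
```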

Check your progress

The following image shows the test results from the deployed model.





Task 5: Gather relevant news articles

To preview this task, watch the video beginning at 05:07.

Although the Prompt Lab can work with structured and unstructured text, it is essential to ensure that you input the right data that the model can process. In this use case, you need to process news article text based on the anomaly dates you obtained from the anomaly prediction. You can integrate an external news API to extract the relevant news during those dates to simplify the data-gathering process. You can do this in a Jupyter notebook with Python code.

Since the foundation models have a limit on the number of tokens they can process in a single prompt (known as the context window), data may need to be chunked or summarized to fit within this limit. This step ensures that the input data is in a format that the foundation model can effectively process without losing essential information.
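The sample notebook uses LangChain's text splitter for this. As a rough illustration of the idea only, a naive splitter that greedily packs sentences into chunks under a character budget (character count standing in for tokens) might look like:

```python
def chunk_text(text, max_chars=200):
    """Greedily pack sentences into chunks no longer than max_chars,
    a character-based stand-in for a model's context window limit."""
    sentences = [s.strip() + "." for s in text.split(".") if s.strip()]
    chunks, current = [], ""
    for sentence in sentences:
        # Start a new chunk when adding the sentence would exceed the budget.
        if current and len(current) + 1 + len(sentence) > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks

article = "Shares fell sharply. Analysts cited weak guidance. " * 10
chunks = chunk_text(article, max_chars=120)
```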

Follow these steps to run the notebook:

  1. From the navigation menu, choose Projects > View all projects.
  2. Open the Stock anomalies analysis project.
  3. Click the Assets tab.
  4. Click the Overflow menu for the Extract and Chunk Text from News Articles notebook, and choose Edit.
  5. Complete the Setup section.
    1. Run the first cell to import the libraries.
    2. Obtain the necessary API keys:
      1. Follow the link to create an account and API key at TheNewsAPI.
      2. Paste the API key in the thenewsapi_key variable.
      3. Follow the link to create an account and API key at ArticlExtractor.
      4. Paste the API key in the extract_key variable.
    3. Run the cell to set the two API key variables.
  6. Run the cells in the Define the function to get news article URLs section.
    • The first cell defines a function to get data from TheNewsAPI's Top Stories endpoint and sets up parameters to ensure that you get relevant news.
    • The second cell defines a function to get only a list of URLs based on the response.
  7. Run the cells in the Define the function to extract news text section.
    • The first cell defines a function to extract news text from a specific news URL using the ArticlExtractor API.
    • The second cell defines a function to combine news text from all of the article URLs obtained from TheNewsAPI.
  8. Run the cell in the Define the function to chunk news text section. To ensure that the foundation model can take in the information from the text, you need to make sure the text doesn't exceed the model's context window token limit. In this example, you define a function that uses LangChain to split the text into chunks while taking into account the context of the news text.
  9. Run the cell in the Execute the functions section. In the response, you can see that the final output of data is ready to be fed into the Prompt Lab. LangChain’s text splitter splits the long text up into semantically meaningful chunks, or sentences, and combines them again as a whole text to be processed. You can adjust the maximum size of the chunks.
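The notebook's TheNewsAPI request can be sketched as follows. The endpoint path and parameter names here are assumptions based on TheNewsAPI's documented top-stories endpoint, so verify them against the current API reference (and the notebook itself) before use:

```python
from urllib.parse import urlencode

def build_news_query(api_token, company, anomaly_date, limit=3):
    """Build the request URL for fetching top stories around an anomaly
    date flagged by the model. Endpoint and parameter names are
    assumptions; check TheNewsAPI's current documentation."""
    base = "https://api.thenewsapi.com/v1/news/top"
    params = {
        "api_token": api_token,
        "search": company,              # the company the analyst researches
        "published_on": anomaly_date,   # e.g. a date flagged as anomalous
        "language": "en",
        "limit": limit,
    }
    return f"{base}?{urlencode(params)}"

url = build_news_query("<your-api-token>", "Example Corp", "2023-02-02")
# A real call would be: requests.get(url).json()
```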

Check your progress

The following image shows the completed notebook. You now have the chunked text to use to prompt the foundation model.

The completed notebook




Task 6: Prompt the foundation model

To preview this task, watch the video beginning at 07:17.

Now that you have the relevant news article appropriately chunked, you can construct your own prompt templates in the Prompt Lab, or you can use the sample prompt templates in the sample project. The sample project includes sample prompt templates for summarization and question answering tasks. Follow these steps to prompt the foundation model in the Prompt Lab.

Summarization task

  1. Return to the project's Assets tab.

  2. Click the Summarize News Articles prompt template. This opens the prompt template in the Prompt Lab.

  3. Click Edit to open the prompt template in edit mode.

    For the summarization task, the input example is the chunked news article text, and the output example is the kind of notes that a stock analyst usually writes manually to explain anomalies. This ensures that the generated output is similar to what the stock analyst might write themselves.

  4. Click Generate to see the summary results.

  5. Experiment with different input and output text from the chunked news article in the notebook.

Question answering task

  1. Click Saved prompts to see the saved prompts from your project.

  2. Click the Question Answer News Articles prompt template from the list of saved prompts.

  3. Click Edit to open the prompt template in edit mode.

    For the question-answering task, you use questions as the input example, and answers in the level of detail required and preferred format as the output example.

  4. Click Generate to see the generated answers.

  5. Experiment with different input and output text.

Adjust the model parameters

In the Prompt Lab, you can adjust the decoding settings to optimize the model's output for the specific task:

  • Decoding
    • Greedy: always select words with the highest probability
    • Sampling: customize the variability of word selection
  • Repetition Penalty: how much repetition is allowed
  • Stopping Criteria: one or more strings that will cause the text generation to stop if produced

This flexibility allows for a high degree of customization, ensuring that the model operates with parameters best suited to the task's requirements and constraints.

In the Prompt Lab, you can set token limitations to ensure that the tasks remain within the operational scope of the model. This setting helps balance the response's comprehensiveness with the technical limitations of the model, ensuring efficient and effective processing of tasks.
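Programmatically, the same Prompt Lab settings map onto a generation-parameters object. The parameter names below follow watsonx.ai's text-generation API, but treat them as an assumption and confirm against the current API reference:

```python
# Generation parameters mirroring the Prompt Lab settings above.
# Names are assumed from watsonx.ai's text-generation API; verify
# against the current API reference before use.
generation_params = {
    "decoding_method": "sample",   # or "greedy" for highest-probability words
    "temperature": 0.7,            # sampling variability
    "top_k": 50,                   # sample from the 50 most likely tokens
    "top_p": 0.9,                  # nucleus sampling cutoff
    "repetition_penalty": 1.1,     # values above 1 discourage repetition
    "stop_sequences": ["\n\n"],    # stop generating at a blank line
    "max_new_tokens": 200,         # token limit for the response
}
print(generation_params)
```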

Check your progress

The following image shows the Prompt Lab.

The Prompt Lab



Next steps

Experiment with prompt notebooks

From the Prompt Lab, you can save your work in notebook format:

  1. Load a saved prompt template.
  2. Click Save work > Save as.
  3. Select Notebook.
  4. Type a name.
  5. Click Save, and then explore the prompt notebook.
  6. Repeat these steps for the other prompt template.

Tune a foundation model

You might want to tune the foundation model to enhance the model's performance compared to prompt engineering alone, or to reduce costs by deploying a smaller model that performs similarly to a bigger model. See the Tune a foundation model tutorial.

Additional resources

Parent topic: Quick start tutorials
