AutoAI Overview

The AutoAI graphical tool in Watson Studio analyzes your data and discovers data transformations, algorithms, and parameter settings that work best for your predictive modeling problem. AutoAI displays the results as model candidate pipelines ranked on a leaderboard for you to choose from.

Required service: Watson Machine Learning, Watson Studio

Data format: Tabular (CSV files with a comma (,) delimiter) for all types of AutoAI experiments, or connected data from IBM Cloud Object Storage

Data size: Up to 1 GB for a single data source, or up to 20 GB for joined data sources. For details, refer to AutoAI data use.

AutoAI data use

These limits are based on the default compute configuration of 8 CPUs and 32 GB of memory.

AutoAI experiments with a single data source:

  • You can upload a file of up to 1 GB for AutoAI experiments.
  • If you connect to a data source that exceeds 1 GB, such as a database table, only the first 1 GB of records is used.

AutoAI experiments with joined data sources:

  • You can upload files with a combined size of up to 20 GB.
  • You can use up to 20 files, with each file smaller than 4 GB and a combined maximum of 20 GB (see the sketch after this list).
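These limits are easy to check locally before you start an experiment. The following is a minimal sketch, not part of AutoAI, with placeholder file names, that verifies a set of files against the joined-data limits listed above.

```python
import os

# Limits for AutoAI experiments with joined data sources (see the list above).
MAX_FILES = 20
MAX_FILE_BYTES = 4 * 1024**3        # each file must be smaller than 4 GB
MAX_COMBINED_BYTES = 20 * 1024**3   # combined maximum of 20 GB

def within_joined_data_limits(paths):
    """Return True if the given files fit within the joined-data limits."""
    if len(paths) > MAX_FILES:
        return False
    sizes = [os.path.getsize(p) for p in paths]
    if any(size >= MAX_FILE_BYTES for size in sizes):
        return False
    return sum(sizes) <= MAX_COMBINED_BYTES

# Hypothetical file names:
# print(within_joined_data_limits(["orders.csv", "customers.csv", "products.csv"]))
```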

AutoAI time series experiments:

  • If the data source contains a timestamp column, the data must be sampled at a uniform frequency. That is, the difference between the timestamps of adjacent rows is the same; for example, data can be in increments of one minute, one hour, or one day. The specified timestamp is used to determine the lookback window to improve the model accuracy. Note: If the file size is larger than 1 GB, sort the data in descending order by the timestamp; only the first 1 GB is used to train the experiment.
  • If the data source does not contain a timestamp column, make sure the data is sampled at regular intervals and sorted in ascending order according to the date/time at which it was sampled. That is, the value in the first row is the oldest, and the value in the last row is the most recent. Note: If the file size is larger than 1 GB, truncate the file so that it is smaller than 1 GB. A sketch for checking these requirements follows this list.
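The uniform-frequency and sort-order requirements can be verified with pandas before you upload the data. This is a minimal sketch, assuming a CSV file with a timestamp column named ts (a placeholder name); it is not part of AutoAI.

```python
import pandas as pd

# Hypothetical file and column names; substitute your own.
df = pd.read_csv("sales.csv", parse_dates=["ts"])

# Sort ascending by timestamp (oldest first), as required when the data
# has no configured timestamp column.
df = df.sort_values("ts").reset_index(drop=True)

# Check that the data is sampled at a uniform frequency:
# the difference between adjacent timestamps must be constant.
deltas = df["ts"].diff().dropna()
if deltas.nunique() == 1:
    print(f"Uniform sampling interval: {deltas.iloc[0]}")
else:
    print("Non-uniform sampling detected; resample or clean the data first.")
```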

For more information on choosing the right tool for your data and use case, refer to Choosing a tool.

Data operations in AutoAI

When you load data to train an AutoAI experiment, you can load a single data file, or you can join multiple data files that share common keys into a single training data set. For details on the data size limits for single and joined data sources, refer to AutoAI data use.

For data gathered over a specified date/time range (such as stock prices or temperatures), you can create a time series experiment to predict future activity.

AutoAI process

Using AutoAI, you can build and deploy a machine learning model with sophisticated training features and no coding. The tool does most of the work for you.

To view the code that created a particular experiment, or interact with the experiment programmatically, you can save an experiment as a notebook.

The AutoAI process takes data from a structured file, prepares the data, selects the model type, and generates and ranks pipelines so you can save and deploy a model.

AutoAI automatically runs the following tasks to build and evaluate candidate model pipelines:

  • Data pre-processing
  • Automated model selection
  • Automated feature engineering
  • Hyperparameter optimization

Understanding the AutoAI process

For additional detail on each of these phases, including links to associated research papers and descriptions of the algorithms applied to create the model pipelines, see AutoAI implementation details.

Data pre-processing

Most data sets contain different data formats and missing values, but standard machine learning algorithms work with numbers and no missing values. AutoAI applies various algorithms, or estimators, to analyze, clean, and prepare your raw data for machine learning. It automatically detects and categorizes features based on data type, such as categorical or numerical. Depending on the categorization, it uses hyper-parameter optimization to determine the best combination of strategies for missing value imputation, feature encoding, and feature scaling for your data.
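The pre-processing that AutoAI applies is automated and internal, but the general pattern it covers can be sketched with scikit-learn: impute missing values, encode categorical features, and scale numeric features. The column names below are placeholders.

```python
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical column split; AutoAI detects and categorizes features automatically.
numeric_cols = ["age", "income"]
categorical_cols = ["country", "product"]

preprocess = ColumnTransformer([
    ("num", Pipeline([
        ("impute", SimpleImputer(strategy="median")),        # missing-value imputation
        ("scale", StandardScaler()),                          # feature scaling
    ]), numeric_cols),
    ("cat", Pipeline([
        ("impute", SimpleImputer(strategy="most_frequent")),
        ("encode", OneHotEncoder(handle_unknown="ignore")),   # feature encoding
    ]), categorical_cols),
])

# X_prepared = preprocess.fit_transform(X)  # X is a DataFrame with these columns
```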

Automated model selection

The next step is automated model selection to find the algorithm that best matches your data. AutoAI uses a novel approach that tests and ranks candidate algorithms against small subsets of the data, gradually increasing the size of the subset for the most promising algorithms to arrive at the best match. This approach saves time without sacrificing performance, and it makes it possible to rank many candidate algorithms and select the best match for the data.
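AutoAI's exact allocation strategy is described in the implementation details; the sketch below is only a rough illustration of the general idea of evaluating candidates on growing data subsets and keeping the leaders, using synthetic data and a small, arbitrary set of scikit-learn models.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

# Start with every candidate on a small subset, then keep only the
# top half of candidates for each progressively larger subset.
candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(random_state=0),
    "forest": RandomForestClassifier(n_estimators=50, random_state=0),
}

for fraction in (0.1, 0.3, 1.0):
    n = int(len(X) * fraction)
    scores = {
        name: cross_val_score(model, X[:n], y[:n], cv=3).mean()
        for name, model in candidates.items()
    }
    ranked = sorted(scores, key=scores.get, reverse=True)
    print(f"subset={fraction:.0%}", {name: round(scores[name], 3) for name in ranked})
    keep = max(1, len(ranked) // 2)
    candidates = {name: candidates[name] for name in ranked[:keep]}

print("Best match:", next(iter(candidates)))
```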

For information on reviewing the automatically generated pipelines and selecting the best model, refer to Selecting an AutoAI model.

Automated feature engineering

Feature engineering attempts to transform the raw data into the combination of features that best represents the problem to achieve the most accurate prediction. AutoAI uses a unique approach that explores various feature construction choices in a structured, non-exhaustive manner, while progressively maximizing model accuracy using reinforcement learning. This results in an optimized sequence of transformations for the data that best match the algorithms of the model selection step.
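AutoAI's reinforcement-learning-based search is not reproduced here; the following sketch only illustrates the underlying idea of trying candidate feature transformations and keeping those that improve a cross-validated score. The data and the menu of transformations are placeholders.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=1000, n_features=5, noise=10.0, random_state=0)

# A tiny, fixed menu of column-wise transformations to try;
# AutoAI searches a much larger space in a structured, non-exhaustive way.
transforms = {
    "square": lambda col: col ** 2,
    "abs": np.abs,
    "tanh": np.tanh,
}

model = Ridge()
best_score = cross_val_score(model, X, y, cv=3).mean()
X_best = X

for name, fn in transforms.items():
    for col in range(X.shape[1]):
        candidate = np.column_stack([X_best, fn(X_best[:, col])])
        score = cross_val_score(model, candidate, y, cv=3).mean()
        if score > best_score:          # keep a transformation only if it helps
            best_score, X_best = score, candidate
            print(f"kept {name}(column {col}), score={score:.3f}")
```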

For more information on AutoAI features, refer to AutoAI feature comparison.

Hyperparameter optimization

Finally, a hyperparameter optimization step refines the best-performing model pipelines. AutoAI uses a novel hyperparameter optimization algorithm that is tailored to costly function evaluations, such as model training and scoring, which are typical in machine learning. This approach enables fast convergence to a good solution despite the long evaluation time of each iteration.
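AutoAI's own optimizer is not exposed as code; as a stand-in, this sketch uses scikit-learn's RandomizedSearchCV purely to illustrate what the refinement step does: search the hyperparameter space of one top-ranked model while keeping the number of costly train-and-score evaluations small. The model and search space are placeholders.

```python
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Hypothetical search space for one top-ranked candidate model.
search = RandomizedSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_distributions={
        "learning_rate": loguniform(1e-3, 3e-1),
        "n_estimators": [50, 100, 200],
        "max_depth": [2, 3, 4],
    },
    n_iter=20,           # keep the number of costly train/score evaluations small
    cv=3,
    random_state=0,
)
search.fit(X, y)
print("Best params:", search.best_params_)
print("Best CV score:", round(search.best_score_, 3))
```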

Next steps

Follow the steps in the topic Creating an AutoAI experiment from sample data to build and deploy a sample model, or use your own data to build an AutoAI model.

Watch a video and take a tutorial. See Quick start: Build and deploy a machine learning model with AutoAI.


Parent topic: Analyzing data and building models