Feature Selection Overview
To narrow down the choices, the Feature Selection algorithm can be used to identify the fields that are most important for a given analysis.
Feature selection consists of three steps:
- Screening. Removes unimportant and problematic inputs and records (or cases), such as input fields with too many missing values, or with too much or too little variation to be useful.
- Ranking. Sorts remaining inputs and assigns ranks based on importance.
- Selecting. Identifies the subset of features to use in subsequent models—for example, by preserving only the most important inputs and filtering or excluding all others.
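The three steps above can be sketched in a few lines of code. The following is a minimal, illustrative example, not the product's actual algorithm: the missing-value and variance thresholds, and the use of absolute correlation with the target as the importance score, are simplifying assumptions chosen to keep the sketch self-contained.

```python
import numpy as np

def select_features(X, y, max_missing=0.5, min_variance=1e-6, top_k=2):
    """Illustrative screen -> rank -> select pipeline.

    X is a 2-D float array with NaN marking missing values; y is a
    numeric target. Thresholds and the correlation-based score are
    illustrative choices.
    """
    n_rows, n_cols = X.shape

    # Screening: drop fields with too many missing values
    # or too little variation to be useful.
    kept = []
    for j in range(n_cols):
        col = X[:, j]
        if np.isnan(col).mean() <= max_missing and np.nanvar(col) > min_variance:
            kept.append(j)

    # Ranking: score each surviving field by the absolute
    # correlation of its observed values with the target.
    scores = {}
    for j in kept:
        col = X[:, j]
        mask = ~np.isnan(col)
        scores[j] = abs(np.corrcoef(col[mask], y[mask])[0, 1])

    # Selecting: keep only the top_k most important fields,
    # filtering out all others.
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:top_k]

# Usage: column 0 tracks the target, column 1 is constant (screened
# out), column 2 is weakly related noise.
X = np.array([[1.0, 5.0, 0.2],
              [2.0, 5.0, 0.9],
              [3.0, 5.0, 0.1],
              [4.0, 5.0, 0.7]])
y = np.array([1.0, 2.0, 3.0, 4.0])
print(select_features(X, y, top_k=1))
```

A real implementation would use a statistically grounded importance measure (for example, a chi-square or F-statistic appropriate to the field and target types), but the screen/rank/select structure is the same.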
In an age where many organizations are overloaded with too much data, the benefits of feature selection in simplifying and speeding the modeling process can be substantial. By focusing attention quickly on the fields that matter most, you can reduce the amount of computation required, more easily locate small but important relationships that might otherwise be overlooked, and, ultimately, obtain simpler, more accurate, and more easily explainable models. By reducing the number of fields used in the model, you may find that you can reduce scoring times as well as the amount of data collected in future iterations.