Neural Net Overview
A neural network can approximate a wide range of predictive models with minimal demands on model structure and assumptions. The trade-off for this flexibility is that the neural network is not easily interpretable. If you are trying to explain an underlying process that produces the relationships between the target and predictors, it would be better to use a more traditional statistical model. However, if model interpretability is not important, you can often obtain good predictions using a neural network.
A neural network is a simplified model of the way the human brain processes information. It works by simulating a large number of interconnected processing units that resemble abstract versions of neurons.
The processing units are arranged in layers. There are typically three parts in a neural network:
- An input layer, with units representing the input fields.
- One or more hidden layers.
- An output layer, with a unit or units representing the target field(s).
The units are connected with varying connection strengths (or weights). Input data are presented to the first layer, and values are propagated from each neuron to every neuron in the next layer. Eventually, a result is delivered from the output layer.
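The propagation described above can be sketched in a few lines. This is a minimal illustration, not any particular product's implementation: the network shape (2 inputs, 3 hidden units, 1 output), the hand-picked weights, and the sigmoid activation are all assumptions chosen for demonstration.

```python
# Sketch of forward propagation through a tiny 2-3-1 network.
# Weights are illustrative, hand-picked values, not a trained model.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_w, output_w):
    """Propagate inputs through one hidden layer to a single output unit."""
    # Each hidden unit computes a weighted sum of all inputs
    # (the last weight in each row acts as a bias term).
    hidden = [sigmoid(sum(w * x for w, x in zip(ws[:-1], inputs)) + ws[-1])
              for ws in hidden_w]
    # The output unit computes a weighted sum of all hidden activations.
    return sigmoid(sum(w * h for w, h in zip(output_w[:-1], hidden)) + output_w[-1])

hidden_w = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.4], [-0.6, 0.2, 0.0]]  # 3 hidden units
output_w = [1.0, -1.0, 0.5, 0.2]                                   # 1 output unit
y = forward([0.7, 0.1], hidden_w, output_w)
print(round(y, 3))  # a value in (0, 1), thanks to the sigmoid output
```

Every value in the input layer contributes to every hidden unit, and every hidden activation contributes to the output, which is exactly the fully connected structure described above.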
The network learns by examining individual records, generating a prediction for each record, and making adjustments to the weights whenever it makes an incorrect prediction. This process is repeated many times, and the network continues to improve its predictions until one or more of the stopping criteria have been met.
Initially, all weights are random, and the answers that come out of the net are probably nonsensical. The network learns through training. Examples for which the output is known are repeatedly presented to the network, and the answers it gives are compared to the known outcomes. Information from this comparison is passed back through the network, gradually changing the weights. As training progresses, the network becomes increasingly accurate in replicating the known outcomes. Once trained, the network can be applied to future cases where the outcome is unknown.
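The training loop just described can be demonstrated on a deliberately tiny problem. As an illustrative assumption, a single sigmoid unit (no hidden layer) learns the logical OR function by gradient descent on squared error; the learning rate and epoch count are arbitrary choices that work for this toy case.

```python
# Toy training loop: start from random weights, compare predictions with
# known outcomes, and nudge the weights after every example.
import math
import random

random.seed(0)
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # logical OR
w = [random.uniform(-1, 1) for _ in range(3)]  # two input weights + a bias

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + w[2]
    return 1.0 / (1.0 + math.exp(-z))

lr = 0.5
for epoch in range(2000):           # repeat many times, as described above
    for x, target in data:
        p = predict(x)
        err = target - p            # compare prediction with known outcome
        # Gradient of squared error through the sigmoid (derivative p*(1-p));
        # weights move in the direction that reduces the error.
        grad = err * p * (1 - p)
        w[0] += lr * grad * x[0]
        w[1] += lr * grad * x[1]
        w[2] += lr * grad

print([round(predict(x)) for x, _ in data])  # → [0, 1, 1, 1]
```

After training, the rounded predictions reproduce the known outcomes, and `predict` can now be applied to inputs whose outcome is unknown.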
In addition to fitting a single neural network model, the Neural Net node offers a boosting option to enhance model accuracy and a bagging option to enhance model stability:
Boosting produces a succession of “component models,” each of which is built on the entire dataset. Before each successive component model is built, the records are weighted based on the previous component model’s residuals: cases with large residuals receive relatively higher analysis weights so that the next component model focuses on predicting those records well. Together these component models form an ensemble model. The ensemble model scores new records using the weighted median of the component model predictions for regression targets, and a weighted majority vote for categorical targets.
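The residual-based reweighting can be sketched as follows. As a simplifying assumption, each component model is a one-split regression “stump” rather than a neural network, the reweighting rule is an illustrative choice, and the final median is unweighted (the real ensemble uses a weighted median).

```python
# Sketch of residual-based boosting for a regression target.
import numpy as np

def fit_stump(x, y, w):
    """Fit a weighted one-split stump: a weighted mean on each side of a threshold."""
    best = None
    for t in np.unique(x):
        left, right = x <= t, x > t
        if w[left].sum() == 0 or w[right].sum() == 0:
            continue
        pl = np.average(y[left], weights=w[left])
        pr = np.average(y[right], weights=w[right])
        err = np.sum(w * (y - np.where(left, pl, pr)) ** 2)
        if best is None or err < best[0]:
            best = (err, t, pl, pr)
    _, t, pl, pr = best
    return lambda q: np.where(q <= t, pl, pr)

x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x)               # toy regression target

w = np.ones_like(x)                      # equal weights for the first component
models = []
for _ in range(10):
    m = fit_stump(x, y, w)               # each component sees the entire dataset
    models.append(m)
    resid = np.abs(y - m(x))
    # Cases with large residuals get relatively higher analysis weights,
    # so the next component model focuses on predicting them well.
    w = 1.0 + resid / (resid.mean() + 1e-12)

# Score by combining the component predictions (median across components).
pred = np.median([m(x) for m in models], axis=0)
mse = float(np.mean((y - pred) ** 2))
print(mse < float(np.var(y)))  # the ensemble should beat a constant predictor
```

The key point is that every component model is trained on all records, but each sees a different weighting of them, driven by where the previous model erred.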
Bagging (bootstrap aggregation) produces replicates of the training dataset by sampling with replacement from the original dataset, creating bootstrap samples of equal size to the original dataset. A “component model” is then built on each replicate. Together these component models form an ensemble model. For regression targets, the ensemble model scores new records using either the mean or the median of the predictions from the component models. Predicted values for categorical targets can be combined using voting, highest probability, or highest mean probability. Voting selects the category that has the highest probability most often across the base models. Highest probability selects the category that achieves the single highest probability across all base models. Highest mean probability selects the category with the highest value when the category probabilities are averaged across base models.
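The bootstrap-and-vote procedure for a categorical target can be sketched like this. As an illustrative assumption, a 1-nearest-neighbour classifier stands in for each component neural network, and the tiny one-feature dataset is invented for the example.

```python
# Sketch of bagging (bootstrap aggregation) with voting on a categorical target.
import random
from collections import Counter

random.seed(1)
data = [(0.1, "a"), (0.2, "a"), (0.3, "a"), (0.8, "b"), (0.9, "b"), (1.0, "b")]

def fit_1nn(sample):
    """Stand-in component model: predict the class of the nearest training point."""
    def predict(x):
        return min(sample, key=lambda p: abs(p[0] - x))[1]
    return predict

# Bootstrap: sample with replacement, each replicate the same size as the
# original dataset, and build one component model per replicate.
models = []
for _ in range(25):
    replicate = [random.choice(data) for _ in data]
    models.append(fit_1nn(replicate))

def ensemble_predict(x):
    # Voting: the category predicted most often across the component models.
    votes = Counter(m(x) for m in models)
    return votes.most_common(1)[0][0]

print(ensemble_predict(0.15), ensemble_predict(0.85))  # → a b
```

Because each replicate omits some records and repeats others, the component models differ slightly; averaging or voting over them is what gives the ensemble its stability.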
Like your model? Why not deploy it? For more information, see Deploy a model.