Bayesian Network Overview

Bayesian networks are probabilistic graphical models consisting of nodes (typically representing features or variables) and edges: directed arrows connecting nodes. The arrows specify the conditional dependencies posited in the model. The source node of an arrow is known as the parent and the destination node as the child. The structures in Bayesian networks are directed acyclic graphs (DAGs): directed graphs containing no cycles, so that following the arrows from any node can never lead back to that same node.
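The acyclicity requirement can be checked mechanically. The sketch below, with illustrative node names (Rain, Sprinkler, WetGrass are not from any real model), represents a network structure as adjacency lists and uses a depth-first search to detect cycles:

```python
def is_acyclic(graph):
    """Return True if the directed graph (dict: node -> list of children) has no cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / in progress / done
    color = {n: WHITE for n in graph}

    def dfs(n):
        color[n] = GRAY
        for child in graph.get(n, []):
            c = color.get(child, WHITE)
            if c == GRAY:                 # back edge: following arrows returned to an ancestor
                return False
            if c == WHITE and not dfs(child):
                return False
        color[n] = BLACK
        return True

    return all(dfs(n) for n in graph if color[n] == WHITE)

# A valid DAG: arrows Rain -> WetGrass and Sprinkler -> WetGrass.
dag = {"Rain": ["WetGrass"], "Sprinkler": ["WetGrass"], "WetGrass": []}
# Not a DAG: A -> B -> A forms a cycle.
cyclic = {"A": ["B"], "B": ["A"]}
```

Any candidate structure failing this check cannot serve as a Bayesian network.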

Nodes that are not connected by arrows are presumed conditionally independent, given the connections in the network. A full illustration of a Bayesian network consists of a graphical representation and a table of conditional probabilities for each node, given the values of any parent nodes in the network.
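These conditional probability tables are what give the network its numerical content: the joint probability over all nodes factorizes into the product of each node's table entry given its parents. A minimal sketch for a two-node network Rain -> WetGrass (all names and probabilities here are made up for illustration):

```python
# P(Rain): a root node has no parents, so its table is unconditional.
p_rain = {"yes": 0.2, "no": 0.8}

# P(WetGrass | Rain): one row per combination of parent values.
p_wet_given_rain = {
    ("yes",): {"yes": 0.9, "no": 0.1},
    ("no",):  {"yes": 0.1, "no": 0.9},
}

def joint(rain, wet):
    """Joint probability as the product of local conditionals."""
    return p_rain[rain] * p_wet_given_rain[(rain,)][wet]
```

Summing `joint` over every combination of values recovers 1, confirming the tables define a valid distribution.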

The Bayes Net node fits two types of Bayesian networks: Tree-Augmented Naïve Bayes (TAN) networks and Markov Blanket networks. Both types of networks are used in Bayes Net for analyzing relationships between categorical or discrete fields or features, including a chosen target feature, and are thus classification models. Scale features may also be specified; they are discretized by dividing the observed range of the feature into five equal-width bins (fewer than five categories may result when one or more of the bins are unpopulated in the training sample).
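The equal-width binning described above can be sketched in a few lines. This is an illustrative implementation of the general technique, not the node's actual code:

```python
def discretize(values, n_bins=5):
    """Map each value to an equal-width bin index in [0, n_bins - 1]."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0   # guard against constant features
    # The maximum value would land in bin n_bins, so clamp it to the top bin.
    return [min(int((v - lo) / width), n_bins - 1) for v in values]

# Skewed data can leave interior bins empty, yielding fewer than five
# observed categories, as noted above:
bins = discretize([0.0, 0.1, 0.2, 10.0])   # bins 1-3 are unpopulated
```

The bin index then serves as the discrete category for the feature in the network.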

A Naïve Bayes network structure is particularly simple, featuring only arrows from the target node to each predictor node, meaning that all predictors are assumed independent given the value of the target. This structure is almost always drastically oversimplified, but classification models employing it tend to perform quite well, often comparing favorably with much more complicated models, and are popular because they attain their comparatively good results much more quickly. Tree-Augmented Naïve Bayes (TAN) models improve on the results of Naïve Bayes models while remaining relatively simple and fast to train: each predictor node is allowed at most one arrow from another predictor node (in other words, each predictor node can have a second parent in addition to the target). A given predictor node can still have arrows pointing to multiple other predictor nodes.
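The Naïve Bayes independence assumption makes scoring a simple product. A minimal sketch with made-up classes, features, and probabilities (spam/ham, "offer"/"link" are purely illustrative):

```python
from math import prod

prior = {"spam": 0.3, "ham": 0.7}          # P(target)
likelihood = {                              # P(feature | target), one table per class
    "spam": {"offer": 0.8, "link": 0.6},
    "ham":  {"offer": 0.2, "link": 0.3},
}

def score(features):
    """Unnormalized posterior: P(class) * product of P(feature | class)."""
    return {c: prior[c] * prod(likelihood[c][f] for f in features)
            for c in prior}

def classify(features):
    s = score(features)
    return max(s, key=s.get)
```

A TAN model would replace each `P(feature | class)` factor with `P(feature | class, extra_parent)` for predictors that have a second parent; the scoring product otherwise has the same shape.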

A Markov Blanket Bayesian network structure for a given target node consists of all predictor nodes that are children of the target node (have arrows from the target to the predictor nodes) and all predictor nodes that are parents of the target node or of any of its child nodes. The Markov Blanket structure essentially identifies all predictors in the network that are needed to predict the target node. This usually produces more accurate results than the simpler Naïve Bayes or TAN structures, but can take considerably longer to train with large feature sets. Feature selection is often used in conjunction with this method to reduce the problem to a manageable size.
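Given a known structure, the Markov blanket of a target can be read off directly: its parents, its children, and the other parents of those children. A sketch over a DAG given as parent lists (node names are illustrative, not from any real model):

```python
def markov_blanket(parents, target):
    """Markov blanket of `target` in a DAG given as {node: [its parents]}."""
    # Children: nodes that list the target among their parents.
    children = [n for n, ps in parents.items() if target in ps]
    # Co-parents: other parents of those children.
    co_parents = {p for c in children for p in parents[c] if p != target}
    return set(parents.get(target, [])) | set(children) | co_parents

# Example: A -> T, T -> C, B -> C. Blanket of T is {A (parent), C (child),
# B (other parent of child C)}.
blanket = markov_blanket({"A": [], "B": [], "T": ["A"], "C": ["T", "B"]}, "T")
```

Conditioning on these nodes renders the target independent of every other node in the network, which is why the blanket contains exactly the predictors needed for prediction.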

Next steps