Decision Tree

Decision trees partition the feature space into a set of axis-aligned hyperrectangles and then fit a simple model in each of them. The simple model can be a constant prediction model that ignores all predictors and predicts the majority (most frequent) class for classification, or the mean of the dependent variable for regression; this model is also known as 0-R or the constant classifier.
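As a minimal sketch of such a constant (0-R) leaf model for classification, one could write the following in plain Python; the class name `MajorityLeaf` is hypothetical and not part of any library API:

```python
from collections import Counter

class MajorityLeaf:
    """0-R (constant) classifier: ignores all features, predicts the majority class."""

    def fit(self, y):
        # Store the most frequent class label seen in the training targets
        self.prediction_ = Counter(y).most_common(1)[0][0]
        return self

    def predict(self, X):
        # Return the same constant prediction for every example
        return [self.prediction_ for _ in X]

# Example: the majority class of the training labels is 'spam'
leaf = MajorityLeaf().fit(["spam", "spam", "ham"])
print(leaf.predict([[0.1], [0.9]]))  # ['spam', 'spam']
```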

Decision tree induction forms a tree-like graph structure as shown in the figure below, where:

  • Each internal (non-leaf) node denotes a test on one or more features
  • Each branch descending from a node corresponds to one outcome of the test
  • Each external node (leaf) holds the simple model described above


Figure: Decision tree structure

The test is a rule for partitioning the feature space and depends on feature values. Each outcome of the test corresponds to one of the descending branches and to the hyperrectangle associated with that branch. If the test is a Boolean expression (for example, f < c or f = c, where f is a feature and c is a constant fitted during decision tree induction), the induced decision tree is binary: each non-leaf node has exactly two branches ('true' and 'false') according to the result of the Boolean expression.
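A binary tree induced from Boolean tests of the form f < c can be represented with a node structure like the one below. This is an illustrative sketch, not the library's internal representation; all names are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    # Internal node: the test is "x[feature] < threshold"
    feature: Optional[int] = None
    threshold: Optional[float] = None
    true_branch: Optional["Node"] = None    # followed when the test is true
    false_branch: Optional["Node"] = None   # followed when the test is false
    # Leaf node: the constant prediction of the simple model
    prediction: Optional[object] = None

    def is_leaf(self) -> bool:
        return self.prediction is not None
```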

Prediction starts at the root node of the tree: the test specified by the root is applied to the given example's features, and the branch corresponding to the outcome of the test is followed. This process is then repeated for the subtree rooted at the next node, until a leaf is reached. The final result is the prediction of the simple model at that leaf.
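The traversal described above can be sketched as follows, reusing the hypothetical `Node` structure from the previous example:

```python
def predict_one(node: Node, x) -> object:
    """Route a single example x from the root down to a leaf."""
    while not node.is_leaf():
        # Evaluate the Boolean test stored at the current internal node
        if x[node.feature] < node.threshold:
            node = node.true_branch
        else:
            node = node.false_branch
    # The leaf's simple model supplies the final prediction
    return node.prediction

# Example: a depth-1 tree splitting on feature 0 at threshold 2.5
root = Node(feature=0, threshold=2.5,
            true_branch=Node(prediction="A"),
            false_branch=Node(prediction="B"))
print(predict_one(root, [1.0]))  # 'A' (1.0 < 2.5 is true)
print(predict_one(root, [3.0]))  # 'B'
```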

Decision trees are often used in popular ensembles like Boosting (for more details, see Boosting), Bagging and Decision Forest (for more details about Bagging and Decision Forest, see Classification and Regression > Decision Forest).
