Developer Guide

Details

Given:
  • $n$ feature vectors $x_1 = (x_{11}, \ldots, x_{1p}), \ldots, x_n = (x_{n1}, \ldots, x_{np})$ of size $p$
  • their non-negative sample weights $w = (w_1, \ldots, w_n)$
  • the vector of class labels $y = (y_1, \ldots, y_n)$ that describes the class to which the feature vector $x_i$ belongs, where $y_i \in \{0, 1, \ldots, C-1\}$ and $C$ is the number of classes.
The problem is to build a decision tree classifier.
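
For orientation, the sketch below shows one way such a training set could be laid out in memory. The struct and field names are illustrative assumptions only and are not part of any library API.

    #include <cstddef>
    #include <vector>

    // Illustrative container for the training set described above (not a library type):
    // n feature vectors of size p stored row-major, one non-negative weight per
    // observation, and class labels in {0, ..., C-1}.
    struct ClassificationTrainingSet {
        std::size_t n = 0;              // number of observations
        std::size_t p = 0;              // number of features
        std::size_t classCount = 0;     // C, the number of classes
        std::vector<double> features;   // size n * p, features[i * p + j] = x_ij
        std::vector<double> weights;    // size n, w_i >= 0
        std::vector<int> labels;        // size n, y_i in {0, ..., C-1}
    };

    int main() {
        ClassificationTrainingSet data;
        data.n = 3; data.p = 2; data.classCount = 2;
        data.features = {0.1, 1.0,   0.7, 0.2,   0.4, 0.9};  // 3 x 2, row-major
        data.weights  = {1.0, 0.5, 2.0};                     // non-negative sample weights
        data.labels   = {0, 1, 1};                           // labels in {0, 1}
        return 0;
    }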

Split Criteria

The library provides the decision tree classification algorithm based on the split criteria Gini index [Breiman84] and Information gain [Quinlan86], [Mitchell97].
If sample weights are provided as input, the library uses a weighted version of the algorithm: weighted Gini index or weighted information gain, depending on the chosen split criterion.
  1. Gini index

     $I_{Gini}(D) = 1 - \sum_{i=0}^{C-1} p_i^2$

     where
     • $D$ is a set of observations that reach the node
     • $p_i$ is specified in the table below:

       Without sample weights: $p_i$ is the observed fraction of observations that belong to class $i$ in $D$

       With sample weights: $p_i$ is the observed weighted fraction of observations that belong to class $i$ in $D$:
       $p_i = \frac{\sum_{d \in D : y_d = i} w_d}{W(D)}$

     To find the best test using the Gini index, each possible test $\tau$ is examined using

     $\Delta I_{Gini}(D, \tau) = I_{Gini}(D) - \sum_{v \in O(\tau)} \frac{W(D_v)}{W(D)} I_{Gini}(D_v)$

     where
     • $O(\tau)$ is the set of all possible outcomes of test $\tau$
     • $D_v$ is the subset of $D$ for which the outcome of $\tau$ is $v$, for example, $D_v = \{d \in D : \tau(d) = v\}$
     • the operator $W(\cdot)$ for an arbitrary set of observations $S$ is defined in the table below:

       Without sample weights: $W(S) = \sum_{s \in S} 1$, which is equivalent to the number of elements in $S$

       With sample weights: $W(S) = \sum_{s \in S} w_s$, the sum of the weights of all observations in $S$

     The test to be used in the node is selected as $\tau^{*} = \operatorname{argmax}_{\tau} \Delta I_{Gini}(D, \tau)$. For a binary decision tree with 'true' and 'false' branches,

     $\Delta I_{Gini}(D, \tau) = I_{Gini}(D) - \frac{W(D_{\mathrm{true}})}{W(D)} I_{Gini}(D_{\mathrm{true}}) - \frac{W(D_{\mathrm{false}})}{W(D)} I_{Gini}(D_{\mathrm{false}})$
  2. Information gain

     $\mathrm{InfoGain}(D, \tau) = f(D) - \sum_{v \in O(\tau)} \frac{W(D_v)}{W(D)} f(D_v)$

     where
     • $O(\tau)$, $D$, and $D_v$ are defined above
     • $f(S) = -\sum_{i=0}^{C-1} p_i \log_{2} p_i$, with $p_i$ defined above in Gini index

     Similarly to Gini index, the test to be used in the node is selected as $\tau^{*} = \operatorname{argmax}_{\tau} \mathrm{InfoGain}(D, \tau)$. For a binary decision tree with 'true' and 'false' branches,

     $\mathrm{InfoGain}(D, \tau) = f(D) - \frac{W(D_{\mathrm{true}})}{W(D)} f(D_{\mathrm{true}}) - \frac{W(D_{\mathrm{false}})}{W(D)} f(D_{\mathrm{false}})$

     A computational sketch of both split criteria follows this list.
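
Below is a minimal computational sketch of the two split criteria for a single binary test with 'true' and 'false' outcomes: it computes the weighted class fractions $p_i$, the weighted Gini index and entropy, and evaluates $\Delta I_{Gini}(D, \tau)$ and $\mathrm{InfoGain}(D, \tau)$. All function names and the hard-coded example data are illustrative assumptions, not part of the library API; during training such values would be compared over all candidate tests and the maximizing test selected for the node.

    #include <cmath>
    #include <cstddef>
    #include <iostream>
    #include <vector>

    // W(S): sum of sample weights (reduces to |S| when all weights equal 1).
    static double weightSum(const std::vector<double>& w) {
        double s = 0.0;
        for (double wi : w) s += wi;
        return s;
    }

    // Weighted class fractions p_i for a set of observations S.
    static std::vector<double> classFractions(const std::vector<int>& y,
                                              const std::vector<double>& w,
                                              std::size_t classCount) {
        std::vector<double> p(classCount, 0.0);
        for (std::size_t i = 0; i < y.size(); ++i) p[y[i]] += w[i];
        const double total = weightSum(w);
        for (double& pi : p) pi = (total > 0.0) ? pi / total : 0.0;
        return p;
    }

    // I_Gini(S) = 1 - sum_i p_i^2
    static double giniIndex(const std::vector<int>& y, const std::vector<double>& w,
                            std::size_t classCount) {
        double g = 1.0;
        for (double pi : classFractions(y, w, classCount)) g -= pi * pi;
        return g;
    }

    // f(S) = -sum_i p_i * log2(p_i)
    static double entropy(const std::vector<int>& y, const std::vector<double>& w,
                          std::size_t classCount) {
        double h = 0.0;
        for (double pi : classFractions(y, w, classCount))
            if (pi > 0.0) h -= pi * std::log2(pi);
        return h;
    }

    int main() {
        // Illustrative node D with C = 2 classes, already split by some test tau
        // into the 'true' branch D_true and the 'false' branch D_false.
        std::vector<int>    yTrue  = {0, 0, 0, 1};
        std::vector<double> wTrue  = {1.0, 1.0, 2.0, 1.0};
        std::vector<int>    yFalse = {1, 1, 0};
        std::vector<double> wFalse = {1.0, 2.0, 1.0};

        std::vector<int>    y = yTrue; y.insert(y.end(), yFalse.begin(), yFalse.end());
        std::vector<double> w = wTrue; w.insert(w.end(), wFalse.begin(), wFalse.end());
        const std::size_t C = 2;

        const double wD        = weightSum(w);
        const double fracTrue  = weightSum(wTrue)  / wD;   // W(D_true)  / W(D)
        const double fracFalse = weightSum(wFalse) / wD;   // W(D_false) / W(D)

        // Delta I_Gini(D, tau) and InfoGain(D, tau) for the binary test tau.
        const double deltaGini = giniIndex(y, w, C)
            - fracTrue  * giniIndex(yTrue,  wTrue,  C)
            - fracFalse * giniIndex(yFalse, wFalse, C);
        const double infoGain = entropy(y, w, C)
            - fracTrue  * entropy(yTrue,  wTrue,  C)
            - fracFalse * entropy(yFalse, wFalse, C);

        std::cout << "Delta I_Gini = " << deltaGini << "\n"
                  << "InfoGain     = " << infoGain  << "\n";
        return 0;
    }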

Training Stage

The classification decision tree follows the algorithmic framework of decision tree training described in Classification and Regression > Decision tree > Training stage.

Prediction Stage

The classification decision tree follows the algorithmic framework of decision tree prediction described in Classification and Regression > Decision tree > Prediction stage.
Given the decision tree and vectors $x_1, \ldots, x_r$, the problem is to calculate the responses for those vectors.
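
A minimal sketch of this prediction step is shown below, assuming a simple array-of-nodes tree encoding; the Node layout, field names, and predict function are illustrative assumptions rather than the library's internal representation. Each vector is routed from the root by comparing one of its features against the node's split value until a leaf is reached, and the leaf's class label is returned as the response.

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Illustrative binary decision tree node (not the library's internal layout).
    struct Node {
        int    featureIndex;   // feature tested in this node; -1 marks a leaf
        double splitValue;     // go to the left child if x[featureIndex] <= splitValue
        int    left;           // index of the left child in the node array
        int    right;          // index of the right child in the node array
        int    label;          // predicted class for a leaf node
    };

    // Compute the response for each of the r feature vectors (stored row-major).
    std::vector<int> predict(const std::vector<Node>& tree,
                             const std::vector<double>& x,
                             std::size_t r, std::size_t p) {
        std::vector<int> responses(r);
        for (std::size_t i = 0; i < r; ++i) {
            int node = 0;  // start at the root
            while (tree[node].featureIndex >= 0) {
                const double value = x[i * p + tree[node].featureIndex];
                node = (value <= tree[node].splitValue) ? tree[node].left
                                                        : tree[node].right;
            }
            responses[i] = tree[node].label;
        }
        return responses;
    }

    int main() {
        // A stump: the root tests feature 0 against 0.5; leaves predict class 0 or 1.
        std::vector<Node> tree = {
            {0, 0.5, 1, 2, 0},       // root
            {-1, 0.0, -1, -1, 0},    // left leaf: class 0
            {-1, 0.0, -1, -1, 1},    // right leaf: class 1
        };
        std::vector<double> x = {0.2, 0.9};                  // r = 2 vectors, p = 1
        for (int c : predict(tree, x, 2, 1)) std::cout << c << "\n";  // prints 0, then 1
        return 0;
    }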
