Developer Guide

Decision Forest

The library provides decision forest classification and regression algorithms based on an ensemble of tree-structured classifiers (decision trees) built using the general technique of bootstrap aggregation (bagging) and random choice of features.
A decision tree is a binary tree graph. Its internal (split) nodes represent a decision function used to select the next (child) node at the prediction stage. Its leaf (terminal) nodes represent the corresponding response values, which are the result of the prediction from the tree. For more details, see Classification and Regression > Decision Tree, [Breiman84] and [Breiman2001].
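To make the tree structure concrete, here is a minimal sketch in Python (not this library's API; Node and predict are illustrative names only): internal nodes hold a decision function defined by a feature index and threshold, and leaf nodes hold the response value returned by prediction.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    feature: Optional[int] = None     # split feature index (internal nodes only)
    threshold: float = 0.0            # split threshold (internal nodes only)
    left: Optional["Node"] = None     # child followed when x[feature] < threshold
    right: Optional["Node"] = None    # child followed otherwise
    response: Optional[float] = None  # predicted value (leaf nodes only)

def predict(node: Node, x) -> float:
    """Follow the decision functions from the root down to a leaf."""
    while node.response is None:
        node = node.left if x[node.feature] < node.threshold else node.right
    return node.response

# Example: a depth-1 tree that splits on feature 0 at threshold 0.5.
tree = Node(feature=0, threshold=0.5,
            left=Node(response=0.0), right=Node(response=1.0))
print(predict(tree, [0.7]))  # -> 1.0
```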
For more information on the concepts behind the algorithm, see the Details section.
For more information on the algorithm's parameters for a specific computation mode and examples of its usage, see the Batch Processing, Online Processing, and Distributed Processing sections.
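As a hedged end-to-end illustration of bootstrap aggregation combined with random feature selection, the sketch below uses scikit-learn's RandomForestClassifier rather than this library's interface; the synthetic dataset and parameter values are arbitrary and serve only to show how the number of trees, bootstrap sampling, and per-split feature subsetting map onto the concepts described above.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic classification data for demonstration purposes.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(
    n_estimators=100,      # number of decision trees in the ensemble
    bootstrap=True,        # each tree is trained on a bootstrap sample (bagging)
    max_features="sqrt",   # random subset of features considered at each split
    random_state=0,
)
forest.fit(X_train, y_train)
print("accuracy:", forest.score(X_test, y_test))
```

At prediction time, each tree votes (classification) or contributes its response (regression), and the forest aggregates these results; averaging over many trees trained on different bootstrap samples and feature subsets is what reduces the variance of a single decision tree.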
