Formally, you can use any classifier as a weak classifier in boosting algorithms. However, be aware of the following tradeoffs:
- Choose a weak learner with low bias, but avoid overfitting.
- Ensure that the training time of a weak learner is reasonably low. Because boosting trains an ensemble of weak learners, the overall training time may be hundreds or thousands of times greater than that of a single weak learner.
- Ensure that the prediction time of a weak learner is low. Prediction with a boosted ensemble is much slower than prediction with a single weak learner.
In most cases, to get the best performance from a boosting algorithm, use the input and output numeric table layout preferred by the underlying weak learner.
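The tradeoffs above can be sketched with a small example. This is a minimal illustration using scikit-learn's `AdaBoostClassifier` with a depth-1 decision tree ("stump") as the weak learner; it is not the API documented here, and the dataset is synthetic, but it shows the typical choice of a fast, low-cost weak learner whose per-model training and prediction times are multiplied by the ensemble size.

```python
# A hedged sketch of boosting with a simple weak learner, illustrated
# with scikit-learn (an assumption; not the library documented here).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary classification data for illustration only.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A depth-1 tree is a classic weak learner: cheap to train and to
# evaluate, with higher bias than a deep tree.
stump = DecisionTreeClassifier(max_depth=1)

# The ensemble trains n_estimators weak learners, so overall training
# and prediction cost scale roughly with the ensemble size.
model = AdaBoostClassifier(stump, n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))
```

Raising `max_depth` lowers the bias of each weak learner but increases both training time and the risk of overfitting, which is the tradeoff described above.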
Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.
Notice revision #20110804