Given n feature vectors X = { x1 = (x11, ..., x1p), ..., xn = (xn1, ..., xnp) }, each of dimension p, and a vector of dependent variables y = (y1, ..., yn), the problem is to build a gradient boosted trees regression model that minimizes a loss function based on the predicted and true values.

Training Stage

Gradient boosted trees regression follows the algorithmic framework of gradient boosted trees training with the squared loss function: L(y, f) = (y - f)^2 / 2. At each boosting stage, a new tree is fit to the negative gradient of this loss, which for squared loss is simply the residual y - f.
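The training stage can be sketched in Python. This is a minimal illustration only, not the library's implementation: it assumes depth-1 regression trees (stumps) as weak learners, and the helper names `fit_stump`, `stump_predict`, and `gbt_fit` are hypothetical.

```python
# Minimal gradient-boosted-trees regression sketch (assumptions: stump weak
# learners, fixed learning rate; not the actual library implementation).
import numpy as np

def fit_stump(X, r):
    """Fit a depth-1 regression tree (stump) to residuals r by exhaustive
    search over (feature, threshold) splits, minimizing squared error."""
    n, p = X.shape
    best = (np.inf, 0, 0.0, r.mean(), r.mean())
    for j in range(p):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            if left.all() or not left.any():
                continue  # split must separate the data
            lv, rv = r[left].mean(), r[~left].mean()
            sse = ((r[left] - lv) ** 2).sum() + ((r[~left] - rv) ** 2).sum()
            if sse < best[0]:
                best = (sse, j, t, lv, rv)
    return best[1:]  # (feature, threshold, left_value, right_value)

def stump_predict(stump, X):
    j, t, lv, rv = stump
    return np.where(X[:, j] <= t, lv, rv)

def gbt_fit(X, y, n_trees=50, lr=0.1):
    """Boosting loop: each stage fits a stump to the negative gradient of
    the squared loss L(y, f) = (y - f)^2 / 2, i.e. the residual y - f."""
    f0 = y.mean()                     # initial constant prediction
    F = np.full_like(y, f0, dtype=float)
    trees = []
    for _ in range(n_trees):
        residual = y - F              # negative gradient of squared loss
        stump = fit_stump(X, residual)
        F += lr * stump_predict(stump, X)
        trees.append(stump)
    return f0, trees
```

With each iteration the training residuals shrink, since every stump removes part of the remaining error.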

Prediction Stage

Given the gradient boosted trees regression model and vectors x1, ..., xr, the problem is to calculate responses for those vectors. For each given feature vector xi, the algorithm traverses each tree in the ensemble to the leaf node that corresponds to xi; the leaf's value is that tree's response. The algorithm result is the sum of the responses of all the trees.
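The sum-of-tree-responses prediction can be sketched as follows. The model representation here is hypothetical (a base value, a learning rate, and a list of depth-1 trees encoded as (feature, threshold, left_value, right_value) tuples); it is an illustration of the prediction rule, not the library's internal format.

```python
# Prediction sketch: the response is the base value plus the sum of each
# tree's leaf response (assumed stump encoding; not the library's format).
import numpy as np

def gbt_predict(model, X):
    base, lr, stumps = model
    pred = np.full(X.shape[0], base, dtype=float)
    for j, t, lv, rv in stumps:
        # each stump routes a row to its left or right leaf value
        pred += lr * np.where(X[:, j] <= t, lv, rv)
    return pred

# Example: a hypothetical ensemble of two stumps on a 1-feature dataset.
model = (1.0, 0.5, [(0, 2.0, -1.0, 1.0), (0, 4.0, -0.5, 0.5)])
X = np.array([[1.0], [3.0], [5.0]])
# x=1: 1.0 + 0.5*(-1.0) + 0.5*(-0.5) = 0.25
# x=3: 1.0 + 0.5*( 1.0) + 0.5*(-0.5) = 1.25
# x=5: 1.0 + 0.5*( 1.0) + 0.5*( 0.5) = 1.75
```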
