Python* API Reference for Intel® Data Analytics Acceleration Library 2020 Update 1
Model of the classifier trained by the adaboost.training.Batch algorithm. More...
Public Member Functions

  def serializationTag
  def getSerializationTag
  def downCast
  def __init__
  def getNumberOfWeakLearners
  def getWeakLearnerModel
  def addWeakLearnerModel
  def clearWeakLearnerModels
  def getNumberOfFeatures
  def getAlpha

Inherited Member Functions

  def getNFeatures
  def getNumberOfFeatures
  def setNFeatures

  def __init__
  def getSerializationTag

  def serialize
  def deserialize
  def getSerializationTag

  def __init__
def __init__(self, nFeatures=0)

Empty constructor for deserialization. Use Model.create instead.
def addWeakLearnerModel(self, model)

Adds a weak learner model to the AdaBoost model.

  model | Weak learner model to add to the collection
def clearWeakLearnerModels(self)

Clears the collection of weak learners.
def downCast(r)

downCast(daal.services.SharedPtr< daal.algorithms.classifier.Model > const & r) -> daal.services.SharedPtr< daal.algorithms.adaboost.Model >
def getAlpha(self)

Returns a pointer to the array of weights of weak learners constructed during training of the AdaBoost algorithm. The size of the array equals the number of weak learners.
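For intuition about what these weights represent: in the classical AdaBoost scheme, each weak learner's weight grows as its weighted training error shrinks. The sketch below illustrates that standard formula; it is a general AdaBoost illustration, not code taken from this library, and the function name `adaboost_alpha` is hypothetical.

```python
import math

def adaboost_alpha(weighted_error):
    """Classical AdaBoost weight for a weak learner with the given
    weighted training error (assumes 0 < weighted_error < 1)."""
    return 0.5 * math.log((1.0 - weighted_error) / weighted_error)

# A more accurate weak learner (lower weighted error) receives a
# larger weight in the final ensemble.
print(adaboost_alpha(0.1) > adaboost_alpha(0.4))  # True
```

An array of such per-learner weights, one entry per weak learner, is what getAlpha exposes for a trained model.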
def getNumberOfFeatures(self)

Retrieves the number of features in the dataset that was used at the training stage.
def getNumberOfWeakLearners(self)

Returns the number of weak learners constructed during training of the AdaBoost algorithm.
def getSerializationTag(self)

getSerializationTag(interface2_Model self) -> int
def getWeakLearnerModel(self, idx)

Returns the weak learner model constructed during training of the AdaBoost algorithm.

  idx | Index of the model in the collection
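Taken together, addWeakLearnerModel, getNumberOfWeakLearners, getWeakLearnerModel, and clearWeakLearnerModels form a simple add/count/get/clear collection interface. The stand-in class below is a hypothetical mock (not the DAAL implementation) that mirrors that pattern to show the intended call sequence.

```python
class WeakLearnerCollection:
    """Hypothetical stand-in mirroring the add/count/get/clear pattern
    of the AdaBoost Model's weak-learner methods."""

    def __init__(self):
        self._models = []

    def addWeakLearnerModel(self, model):
        # Add a weak learner model to the collection.
        self._models.append(model)

    def getNumberOfWeakLearners(self):
        # Number of weak learners stored so far.
        return len(self._models)

    def getWeakLearnerModel(self, idx):
        # Return the model stored at position idx.
        return self._models[idx]

    def clearWeakLearnerModels(self):
        # Drop all stored weak learners.
        self._models.clear()

c = WeakLearnerCollection()
c.addWeakLearnerModel("stump-1")
c.addWeakLearnerModel("stump-2")
print(c.getNumberOfWeakLearners())  # 2
print(c.getWeakLearnerModel(0))     # stump-1
c.clearWeakLearnerModels()
print(c.getNumberOfWeakLearners())  # 0
```

With a real trained model, the same pattern applies: loop from 0 to getNumberOfWeakLearners() - 1 and fetch each learner with getWeakLearnerModel(idx).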
def serializationTag()

serializationTag() -> int