
Comparative study of ensemble methods for classification
Application of AdaBoost and Random Forest to binary and multi-class datasets
Ensemble methods are based on the idea of combining the predictions of several classifiers to achieve better generalization and to compensate for the possible defects of individual predictors. We distinguish two families of methods: parallel methods (bagging, random forests), in which several predictions are averaged in the hope of a better result thanks to the reduced variance of the averaged estimator, and sequential methods (boosting), in which the parameters are adapted iteratively to produce a better mixture. In this work we argue that when the members of an ensemble make different errors, the number of misclassified examples can be reduced compared to a single predictor. The resulting performance is compared using criteria such as classification rate, sensitivity, specificity, and recall.
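The claim that diversity of errors helps can be illustrated with a simple majority-vote calculation. This is a sketch under the idealised assumption that the members' errors are independent and equally likely; it is not taken from the book's experiments:

```python
from math import comb

def majority_vote_accuracy(p: float, n: int) -> float:
    """Probability that a majority of n independent classifiers,
    each correct with probability p, votes for the right class (n odd)."""
    k_min = n // 2 + 1  # number of correct votes needed for a correct majority
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(k_min, n + 1))

# A single classifier that is right 70% of the time, versus an
# ensemble of 11 such classifiers whose errors are independent.
single = 0.70
ensemble = majority_vote_accuracy(single, 11)
print(f"single: {single:.3f}, ensemble of 11: {ensemble:.3f}")
```

With perfectly correlated errors the ensemble would gain nothing; the computation above is the best case, which is why both bagging and boosting try to make their members disagree (random subsampling and feature selection in one case, reweighting of hard examples in the other).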
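The evaluation criteria listed above all derive from the confusion matrix of a binary classifier. A minimal sketch with hypothetical counts (note that sensitivity and recall coincide for the positive class):

```python
def confusion_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard binary-classification metrics from confusion-matrix counts."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),  # classification rate
        "sensitivity": tp / (tp + fn),                   # a.k.a. recall
        "specificity": tn / (tn + fp),
        "precision":   tp / (tp + fp),
    }

# Hypothetical counts for one classifier on a 100-example test set.
m = confusion_metrics(tp=40, fp=10, tn=45, fn=5)
print(m)  # accuracy 0.85, sensitivity ~0.889, specificity ~0.818
```

For multi-class problems these quantities are typically computed per class (one-vs-rest) and then macro- or micro-averaged.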