Bagging
Episode Synopsis
Bagging (bootstrap aggregating) is an ensemble meta-algorithm. We take some number of estimators (usually a few dozen) and train each one on a random subset of the training data, drawn with replacement (a bootstrap sample). We then average the predictions of the individual estimators to produce the final prediction. This reduces the variance of your predictions (indeed, that is the core purpose of bagging), though it may come at the cost of a slight increase in bias.
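To make the bootstrap-and-average idea concrete, here is a minimal sketch in Python, assuming NumPy and scikit-learn are available; the synthetic data, the choice of decision trees as base estimators, and the parameter values are illustrative, not taken from the episode.

```python
# A minimal sketch of bagging for regression (illustrative data and parameters).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: a noisy sine curve.
X = rng.uniform(0, 6, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=200)

n_estimators = 50  # "usually a few dozen"
estimators = []
for _ in range(n_estimators):
    # Bootstrap sample: draw n indices with replacement.
    idx = rng.integers(0, len(X), size=len(X))
    tree = DecisionTreeRegressor()  # a high-variance base estimator
    tree.fit(X[idx], y[idx])
    estimators.append(tree)

# Aggregate: average the individual estimators' predictions.
X_test = np.linspace(0, 6, 10).reshape(-1, 1)
y_pred = np.mean([est.predict(X_test) for est in estimators], axis=0)
print(y_pred)
```

Each unpruned tree overfits its own bootstrap sample, but because the samples differ, their errors are partly uncorrelated, and averaging cancels much of that noise out.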
For a more academic basis, see slide #13 of this lecture by Joëlle Pineau at McGill University.