K-Fold Cross Validation
Episode Synopsis
K-fold cross validation is the practice of splitting a data set into k equally sized pieces, or folds, training the model on k - 1 of the folds, and validating it on the remaining fold, rotating so that each fold serves as the validation set exactly once. This is generally considered a best practice, or at least good practice, in machine learning, because averaging performance across all k folds gives a more reliable estimate of how your model will generalize than a single train/validation split.
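As a concrete illustration (not from the episode itself), here is a minimal sketch of 5-fold cross validation using scikit-learn's KFold; the synthetic dataset and logistic regression model are assumptions chosen purely for demonstration:

```python
# A minimal sketch of k-fold cross validation with scikit-learn.
# The dataset (synthetic classification data) and model (logistic
# regression) are illustrative assumptions, not from the episode.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

kf = KFold(n_splits=5, shuffle=True, random_state=0)
scores = []
for train_idx, val_idx in kf.split(X):
    # Train on k - 1 folds, validate on the held-out fold.
    model = LogisticRegression(max_iter=1000)
    model.fit(X[train_idx], y[train_idx])
    scores.append(accuracy_score(y[val_idx], model.predict(X[val_idx])))

# The mean over folds estimates generalization performance more
# reliably than any single train/validation split would.
print(f"Fold accuracies: {scores}")
print(f"Mean accuracy: {sum(scores) / len(scores):.3f}")
```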
Machine Learning Mastery has a great post on the topic.
More episodes of the podcast Machine Learning Bytes:
Stratified Sampling (30/07/2019)
Boosting (26/07/2019)
Bagging (24/07/2019)
Empirical Risk Minimization (20/07/2019)