1.11. Ensembles: Gradient boosting, random forests, bagging, voting, stacking

Ensemble methods combine the predictions of several base estimators built with a given learning algorithm in order to improve generalizability / robustness over a single estimator. Two very famous examples of ensemble methods are gradient-boosted trees and random forests. More generally, ensemble models can be applied to base learners beyond trees, in averaging methods such as bagging and voting, in model stacking, or in boosting.
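As a quick illustrative sketch (not one of the guide's own examples), the snippet below fits two of the ensembles named in the title, a random forest and a histogram-based gradient-boosted model, on a synthetic dataset; the dataset and hyperparameter values are arbitrary placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic toy dataset, for illustration only.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two ensembles from the list above: a random forest (averaging of
# randomized trees) and a gradient-boosted tree model (sequential boosting).
models = {
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "gradient boosting": HistGradientBoostingClassifier(random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")
```

Both estimators expose the usual `fit` / `predict` / `score` interface, so they can be swapped for any other ensemble (bagging, voting, stacking) without changing the surrounding code.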