Overfitting is a problem with sophisticated non-linear learning algorithms like gradient boosting. In this post you will discover how you can use early stopping to limit overfitting with XGBoost in Python. After reading this post, you will know:

- About early stopping as an approach to reducing overfitting of training data.
- How to monitor the performance of an XGBoost model during training and plot the learning curve.
![Avoid Overfitting By Early Stopping With XGBoost In Python - MachineLearningMastery.com](https://machinelearningmastery.com/wp-content/uploads/2016/07/XGBoost-Learning-Curve-Log-Loss.png)