Introduction
For much of this course, you have made predictions with the random forest method, which achieves better performance than a single decision tree simply by averaging the predictions of many decision trees.
We refer to the random forest method as an "ensemble method". By definition, ensemble methods combine the predictions of several models (several trees, in the case of random forests).
Next, we'll learn about another ensemble method called gradient boosting.
Gradient Boosting
Gradient boosting is a method that goes through cycles to iteratively add models into an ensemble.
It begins by initializing the ensemble with a single model, whose predictions can be pretty naive. (Even if its predictions are wildly inaccurate, subsequent additions to the ensemble will address those errors.)
Then, we start the cycle:
- First, we use the current ensemble to generate predictions for each observation in the dataset. To make a prediction, we add the predictions from all models in the ensemble.
- These predictions are used to calculate a loss function (like mean squared error, for instance).
- Then, we use the loss function to fit a new model that will be added to the ensemble. Specifically, we determine model parameters so that adding this new model to the ensemble will reduce the loss. (Side note: The "gradient" in "gradient boosting" refers to the fact that we'll use gradient descent on the loss function to determine the parameters in this new model.)
- Finally, we add the new model to the ensemble, and ...
- ... repeat!
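To make the cycle concrete, here is a minimal sketch of gradient boosting for squared-error loss, where the negative gradient of the loss is simply the residual (the difference between the target and the current ensemble prediction). The function names (toy_gradient_boost, toy_predict), the shallow-tree base learner, and the learning rate are illustrative assumptions, not part of any library.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def toy_gradient_boost(X, y, n_rounds=100, learning_rate=0.1):
    # Initialize the ensemble with a naive model: predict the target mean everywhere
    base_prediction = y.mean()
    prediction = np.full(len(y), base_prediction)
    trees = []
    for _ in range(n_rounds):
        # For squared-error loss, the negative gradient is just the residual
        residuals = y - prediction
        # Fit a new (shallow) tree to the residuals
        tree = DecisionTreeRegressor(max_depth=3)
        tree.fit(X, residuals)
        # Add the new model's (scaled) predictions to the ensemble
        prediction += learning_rate * tree.predict(X)
        trees.append(tree)
    return base_prediction, trees

def toy_predict(X, base_prediction, trees, learning_rate=0.1):
    # To make a prediction, sum the contributions from every model in the ensemble
    prediction = np.full(X.shape[0], base_prediction)
    for tree in trees:
        prediction += learning_rate * tree.predict(X)
    return prediction
Libraries like XGBoost implement this same idea far more efficiently and for arbitrary differentiable loss functions, which is why we use them in practice.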
Example
We begin by loading the training and validation data in X_train, X_valid, y_train, and y_valid.
import pandas as pd
from sklearn.model_selection import train_test_split
# Read the data
data = pd.read_csv('melb_data.csv')
# Select subset of predictors
cols_to_use = ['Rooms', 'Distance', 'Landsize', 'BuildingArea', 'YearBuilt']
X = data[cols_to_use]
# Select target
y = data.Price
# Separate data into training and validation sets
X_train, X_valid, y_train, y_valid = train_test_split(X, y)
In this example, you'll work with the XGBoost library. XGBoost stands for extreme gradient boosting, which is an implementation of gradient boosting with several additional features focused on performance and speed. (Scikit-learn has another version of gradient boosting, but XGBoost has some technical advantages.)
In the next code cell, we import the scikit-learn API for XGBoost. This allows us to build and fit a model just as we would in scikit-learn. As you'll see in the output, the XGBRegressor class has many tunable parameters -- you'll learn about those soon!
from xgboost import XGBRegressor
my_model = XGBRegressor()
my_model.fit(X_train, y_train)
We also make predictions and evaluate the model.
from sklearn.metrics import mean_absolute_error
predictions = my_model.predict(X_valid)
print("Mean Absolute Error: " + str(mean_absolute_error(predictions, y_valid)))
Parameter Tuning
XGBoost has a few parameters that can dramatically affect accuracy and training speed. The first parameters you should understand are:
n_estimators
n_estimators specifies how many times to go through the modeling cycle described above. It is equal to the number of models that we include in the ensemble.
- Too low a value causes underfitting, which leads to inaccurate predictions on both training data and test data.
- Too high a value causes overfitting, which causes accurate predictions on training data, but inaccurate predictions on test data (which is what we care about).
Typical values range from 100 to 1000, though this depends a lot on the learning_rate parameter.
Here is the code to set the number of models in the ensemble:
my_model = XGBRegressor(n_estimators=500)
my_model.fit(X_train, y_train)
early_stopping_rounds
early_stopping_rounds offers a way to automatically find the ideal value for n_estimators. Early stopping causes the model to stop iterating when the validation score stops improving, even if we aren't at the hard stop for n_estimators. It's smart to set a high value for n_estimators and then use early_stopping_rounds to find the optimal time to stop iterating.
Since random chance sometimes causes a single round where validation scores don't improve, you need to specify a number for how many rounds of straight deterioration to allow before stopping. Setting early_stopping_rounds=5 is a reasonable choice. In this case, we stop after 5 straight rounds of deteriorating validation scores.
When using early_stopping_rounds, you also need to set aside some data for calculating the validation scores - this is done by setting the eval_set parameter.
We can modify the example above to include early stopping:
my_model = XGBRegressor(n_estimators=500)
my_model.fit(X_train, y_train,
             early_stopping_rounds=5,
             eval_set=[(X_valid, y_valid)],
             verbose=False)
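Note that the exact call signature depends on your XGBoost version: in newer releases (roughly 2.0 and later), early_stopping_rounds is passed to the XGBRegressor constructor rather than to fit(). A sketch of that variant, assuming a recent version of the library:
# Newer XGBoost versions: early_stopping_rounds goes in the constructor
my_model = XGBRegressor(n_estimators=500, early_stopping_rounds=5)
my_model.fit(X_train, y_train,
             eval_set=[(X_valid, y_valid)],
             verbose=False)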
If you later want to fit a model with all of your data, set n_estimators to whatever value you found to be optimal when run with early stopping.
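As a sketch of that workflow: after fitting with early stopping, the estimator exposes a best_iteration attribute (available on the scikit-learn wrapper in recent XGBoost versions), and combining the training and validation sets with pd.concat is an illustrative choice here, not a requirement.
# best_iteration is the 0-based index of the best boosting round,
# so the model effectively used best_iteration + 1 trees
best_n = my_model.best_iteration + 1

# Refit on all of the data with that many estimators
final_model = XGBRegressor(n_estimators=best_n)
final_model.fit(pd.concat([X_train, X_valid]), pd.concat([y_train, y_valid]))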
Conclusion
XGBoost is a leading software library for working with standard tabular data (the type of data you store in Pandas DataFrames, as opposed to more exotic types of data like images and videos). With careful parameter tuning, you can train highly accurate models.