This module introduces resampling procedures (k-fold cross-validation), evaluation metrics, and hyperparameter tuning for assessing and improving how well our models generalize.
But first, let's review a few things that we learned in the previous modules.
# packages used
import pandas as pd
from sklearn.model_selection import train_test_split
# import data
adult_census = pd.read_csv('../data/adult-census.csv')
# separate feature & target data
target = adult_census['class']
features = adult_census.drop(columns='class')
# drop the duplicated column `"education-num"`
features = features.drop(columns='education-num')
# split into train & test sets
X_train, X_test, y_train, y_test = train_test_split(
features, target, random_state=123
)
# packages used
from sklearn.compose import make_column_selector as selector
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
# create selector object based on data type
numerical_columns_selector = selector(dtype_exclude=object)
categorical_columns_selector = selector(dtype_include=object)
# get columns of interest
numerical_columns = numerical_columns_selector(features)
categorical_columns = categorical_columns_selector(features)
# preprocessors to handle numeric and categorical features
numerical_preprocessor = StandardScaler()
categorical_preprocessor = OneHotEncoder(handle_unknown="ignore")
# transformer to associate each of these preprocessors with their
# respective columns
preprocessor = ColumnTransformer([
('one-hot-encoder', categorical_preprocessor, categorical_columns),
('standard_scaler', numerical_preprocessor, numerical_columns)
])
# packages used
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
# Pipeline object to chain together modeling processes
model = make_pipeline(preprocessor, LogisticRegression(max_iter=500))
model
# fit our model
_ = model.fit(X_train, y_train)
# score on test set
model.score(X_test, y_test)
0.8503808041929408
In our "03-first_model.ipynb" notebook we split our data into training and testing sets and we assessed the performance of our model on the test set. Unfortunately, there are a few pitfalls to this approach:
Resampling methods provide an alternative approach by allowing us to repeatedly fit a model of interest to parts of the training data and test its performance on other parts of the training data.
The two most commonly used resampling methods are k-fold cross-validation and bootstrap sampling. This module focuses on k-fold cross-validation.
Cross-validation consists of repeating the procedure such that the training and testing sets are different each time. Generalization performance metrics are collected for each repetition and then aggregated. As a result we can get an estimate of the variability of the model’s generalization performance.
k-fold cross-validation (aka k-fold CV) is a resampling method that randomly divides the training data into k groups (aka folds) of approximately equal size.
The model is fit on $k-1$ folds and then the remaining fold is used to compute model performance. This procedure is repeated k times; each time, a different fold is treated as the validation set.
This process results in k estimates of the generalization error (say $\epsilon_1, \epsilon_2, \dots, \epsilon_k$). Thus, the k-fold CV estimate is computed by averaging the k test errors, providing us with an approximation of the error we might expect on unseen data.
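To make the procedure concrete, here is a minimal sketch of the k-fold loop written by hand with KFold, reusing X_train, y_train, and the preprocessor defined above (cross_validate, introduced next, wraps this logic for us):
import numpy as np
from sklearn.model_selection import KFold
# manually perform 5-fold cross-validation
kf = KFold(n_splits=5, shuffle=True, random_state=123)
fold_scores = []
for train_idx, val_idx in kf.split(X_train):
    # fit on k-1 folds...
    fold_model = make_pipeline(preprocessor, LogisticRegression(max_iter=500))
    fold_model.fit(X_train.iloc[train_idx], y_train.iloc[train_idx])
    # ...and score on the remaining (validation) fold
    fold_scores.append(fold_model.score(X_train.iloc[val_idx], y_train.iloc[val_idx]))
# the k-fold CV estimate is the average of the k fold scores
np.mean(fold_scores)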
In scikit-learn, the cross_validate function performs cross-validation; we pass it the model, the data, and the target. Since several cross-validation strategies exist, cross_validate also takes a cv parameter that defines the splitting strategy.
Tip
In practice, one typically uses k=5 or k=10. There is no formal rule for choosing k; however, as k gets larger, the difference between the estimated performance and the true performance we would see on the test set tends to decrease.
%%time
from sklearn.model_selection import cross_validate
cv_result = cross_validate(model, X_train, y_train, cv=5)
cv_result
CPU times: user 1.71 s, sys: 66.7 ms, total: 1.77 s Wall time: 1.78 s
{'fit_time': array([0.32632685, 0.33183312, 0.33147502, 0.34435606, 0.32457304]), 'score_time': array([0.01660299, 0.01529479, 0.016078 , 0.01629019, 0.015697 ]), 'test_score': array([0.85191757, 0.84548185, 0.85790336, 0.85094185, 0.85558286])}
The output of cross_validate is a Python dictionary, which by default contains three entries:
- fit_time: the time to train the model on the training data for each fold,
- score_time: the time to predict with the model on the testing data for each fold, and
- test_score: the default score on the testing data for each fold.
scores = cv_result["test_score"]
print("The mean cross-validation accuracy is: "
f"{scores.mean():.3f} +/- {scores.std():.3f}")
The mean cross-validation accuracy is: 0.852 +/- 0.004
Your Turn
Using KNeighborsClassifier(), run a 5-fold cross-validation procedure and compare the mean accuracy and its standard deviation to the logistic regression results above. Note: Don't forget to create a new model pipeline object.
Evaluation metrics allow us to measure the predictive accuracy of our model – the difference between the predicted value ($\hat{y}_i$) and the actual value ($y_i$).
We often refer to evaluation metrics as loss functions: $f(y_{i}, \hat{y}_i)$
Scikit-Learn provides multiple ways to compute evaluation metrics and refers to this concept as scoring.
Every estimator (regression/classification model) has a default scoring method.
Most classifiers return the mean accuracy of the model on the supplied $X$ and $y$:
# toy data
from sklearn.datasets import load_breast_cancer
X_cancer, y_cancer = load_breast_cancer(return_X_y=True)
# fit model
clf = LogisticRegression(solver='liblinear').fit(X_cancer, y_cancer)
# score
clf.score(X_cancer, y_cancer)
0.9595782073813708
While most regressors return the $R^2$ metric:
# toy data
from sklearn.datasets import fetch_california_housing
X_cali, y_cali = fetch_california_housing(return_X_y=True)
# fit model
from sklearn.linear_model import LinearRegression
reg = LinearRegression().fit(X_cali, y_cali)
# score
reg.score(X_cali, y_cali)
0.606232685199805
However, these default evaluation metrics are often not the metrics most suitable to the business problem.
There are many loss functions to choose from, each with unique characteristics that can be beneficial for certain problems.
Scikit-Learn provides many scoring functions to choose from.
from sklearn import metrics
The functions take the actual y values and the predicted y values: $f(y_{i}, \hat{y}_i)$
Example regression metrics:
y_pred = reg.predict(X_cali)
# Mean squared error
metrics.mean_squared_error(y_cali, y_pred)
0.5243209861846072
# Mean absolute percentage error
metrics.mean_absolute_percentage_error(y_cali, y_pred)
0.31715404597233343
Example classification metrics:
y_pred = clf.predict(X_cancer)
# Area under the curve
metrics.roc_auc_score(y_cancer, y_pred)
0.9543760900586651
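One caveat: roc_auc_score is usually computed from predicted probabilities (or decision scores) rather than hard class labels; a small sketch of that usage with the classifier above:
# use the predicted probability of the positive class instead of hard labels
y_prob = clf.predict_proba(X_cancer)[:, 1]
metrics.roc_auc_score(y_cancer, y_prob)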
# F1 score
metrics.f1_score(y_cancer, y_pred)
0.968011126564673
# multiple metrics at once!
print(metrics.classification_report(y_cancer, y_pred))
              precision    recall  f1-score   support

           0       0.96      0.93      0.95       212
           1       0.96      0.97      0.97       357

    accuracy                           0.96       569
   macro avg       0.96      0.95      0.96       569
weighted avg       0.96      0.96      0.96       569
And since we prefer to use cross-validation procedures, scikit-learn has incorporated a scoring parameter. Most evaluation metrics have a predefined text string that can be supplied as the scoring argument.
# say we wanted to use AUC as our loss function while using 5-fold validation
cross_validate(model, X_train, y_train, cv=5, scoring='roc_auc')
{'fit_time': array([0.33399296, 0.33877516, 0.33839607, 0.32831979, 0.32219696]), 'score_time': array([0.01821494, 0.0176549 , 0.01770401, 0.01844382, 0.01791096]), 'test_score': array([0.90485391, 0.90327043, 0.91316917, 0.90553718, 0.90816423])}
Note
The unified scoring API in scikit-learn always maximizes the score, so metrics which need to be minimized are negated in order for the unified scoring API to work correctly. Consequently, some metrics such as mean_squared_error() will use a predefined text string starting with neg_ (i.e. 'neg_mean_squared_error').
# applying mean squared error with k-fold cross validation
cross_validate(
reg, X_cali, y_cali, cv=5, scoring='neg_root_mean_squared_error'
)
{'fit_time': array([0.01037192, 0.00827122, 0.00409293, 0.00524306, 0.00397301]), 'score_time': array([0.00218201, 0.00053596, 0.00089598, 0.00042582, 0.00049281]), 'test_score': array([-0.69631786, -0.78898504, -0.80387217, -0.73702076, -0.70333835])}
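Flipping the sign of these negated scores recovers the RMSE values on their natural scale; a small sketch reusing the same call:
# negate the returned scores to get positive RMSE values per fold
cv_rmse = cross_validate(
    reg, X_cali, y_cali, cv=5, scoring='neg_root_mean_squared_error'
)
rmse_scores = -cv_rmse['test_score']
print(f"RMSE: {rmse_scores.mean():.3f} +/- {rmse_scores.std():.3f}")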
You can also supply more than one metric, or even define your own custom metric.
# example of supplying more than one metric
scoring_metrics = ['accuracy', 'roc_auc']
cross_validate(model, X_train, y_train, cv=5, scoring=scoring_metrics)
{'fit_time': array([0.3343451 , 0.35279107, 0.342453 , 0.36462188, 0.32294607]), 'score_time': array([0.03534794, 0.03328896, 0.03542399, 0.03617978, 0.03345275]), 'test_accuracy': array([0.85191757, 0.84548185, 0.85790336, 0.85094185, 0.85558286]), 'test_roc_auc': array([0.90485391, 0.90327043, 0.91316917, 0.90553718, 0.90816423])}
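For a fully custom metric, sklearn.metrics.make_scorer wraps any function of the true and predicted values into a scorer. Here is a minimal sketch using a hypothetical misclassification-rate function (not part of scikit-learn):
from sklearn.metrics import make_scorer
def error_rate(y_true, y_pred):
    # fraction of misclassified observations (a metric we want to minimize)
    return (y_true != y_pred).mean()
# greater_is_better=False tells scikit-learn to negate the score internally,
# so the reported test_score values will be negative
error_scorer = make_scorer(error_rate, greater_is_better=False)
cross_validate(model, X_train, y_train, cv=5, scoring=error_scorer)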
Your Turn
Using the KNeighborsClassifier() from the previous Your Turn exercises, perform a 5 fold cross validation and compute the accuracy and ROC AUC.
Given two different models (blue line) fit to the same data (gray dots), which model do you prefer?
[Figure: two panels, A and B, each showing a different model (blue line) fit to the same data (gray dots)]
Prediction errors can be decomposed into two main subcomponents we care about: error due to bias and error due to variance.
Error due to bias is the difference between the expected (or average) prediction of our model and the correct value which we are trying to predict.
It measures how far off in general a model’s predictions are from the correct value, which provides a sense of how well a model can conform to the underlying structure of the data.
High bias models (e.g., generalized linear models) are rarely affected by the noise introduced by new unseen data.
Error due to variance is the variability of a model prediction for a given data point.
Many models (e.g., k-nearest neighbors, decision trees, gradient boosting machines) are very adaptable and offer extreme flexibility in the patterns they can fit. However, these models run the risk of overfitting to the training data.
Although you may achieve very good performance on your training data, the model will not automatically generalize well to unseen data.
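As a quick illustration of high variance, we can compare training-fold and validation-fold accuracy for a very flexible model. This is only a sketch reusing the preprocessor and training data from above; a 1-nearest-neighbor model can nearly memorize the folds it is trained on, so a large gap between the two scores signals overfitting:
from sklearn.neighbors import KNeighborsClassifier
# a very flexible model: 1-nearest neighbor
flexible_model = make_pipeline(preprocessor, KNeighborsClassifier(n_neighbors=1))
cv_flex = cross_validate(
    flexible_model, X_train, y_train, cv=5, return_train_score=True
)
print(f"train accuracy: {cv_flex['train_score'].mean():.3f}")
print(f"CV accuracy:    {cv_flex['test_score'].mean():.3f}")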
Hyperparameters (aka tuning parameters) are the "knobs to twiddle" to control the complexity of machine learning algorithms and, therefore, the bias-variance trade-off.
Some models have very few hyperparameters. For example, in a k-nearest neighbors (KNN) model, k (the number of neighbors) is the primary hyperparameter. Other models, such as gradient boosted machines (GBMs) and deep learning models, can have many.
Hyperparameter tuning is the process of screening hyperparameter values (or combinations of hyperparameter values) to find a model that balances bias & variance so that the model generalizes well to unseen data.
%%time
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
# set hyperparameter in KNN model
model = KNeighborsClassifier(n_neighbors=10)
# create preprocessor & modeling pipeline
pipeline = make_pipeline(preprocessor, model)
# 5-fold cross validation using AUC error metric
results = cross_val_score(pipeline, X_train, y_train, cv=5, scoring='roc_auc')
f'KNN model with 10 neighbors: AUC = {np.mean(results):.3f}'
CPU times: user 1min 45s, sys: 1.21 s, total: 1min 46s Wall time: 21.6 s
'KNN model with 10 neighbors: AUC = 0.883'
But what if we wanted to assess and compare n_neighbors = 5, 10, 15, 20, ... ?
For this we could use a full Cartesian grid search with Scikit-Learn's GridSearchCV():
%%time
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
# basic model object
knn = KNeighborsClassifier()
# Create grid of hyperparameter values
hyper_grid = {'knn__n_neighbors': [5, 10, 15, 20]}
# create preprocessor & modeling pipeline
pipeline = Pipeline([('prep', preprocessor), ('knn', knn)])
# Tune a knn model using grid search
grid_search = GridSearchCV(pipeline, hyper_grid, cv=5, scoring='roc_auc', n_jobs=-1)
results = grid_search.fit(X_train, y_train)
# Best model's cross validated AUC
abs(results.best_score_)
CPU times: user 256 ms, sys: 42.5 ms, total: 298 ms Wall time: 1min 14s
0.8937157593356954
Tip
We use Pipeline rather than make_pipeline above because it lets us name each step in the pipeline; those names are what we use to target hyperparameters of distinct steps (e.g., knn__n_neighbors).
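The step name and the hyperparameter name are joined by a double underscore; a quick sketch to list the tunable names exposed by the 'knn' step of the pipeline defined above:
# hyperparameters of the 'knn' step are addressed as 'knn__<parameter>'
[name for name in pipeline.get_params() if name.startswith('knn__')]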
results.best_params_
{'knn__n_neighbors': 20}
However, a Cartesian grid-search approach has limitations: the number of candidate combinations grows exponentially with the number of hyperparameters, and only the exact values we specify are ever evaluated.
Note
Random search over hyperparameter distributions has been shown to perform as well as, if not better than, a standard grid search.
For example, say we want to train a random forest classifier. Random forests are very flexible algorithms and can have several hyperparameters.
from sklearn.ensemble import RandomForestClassifier
# basic model object
rf = RandomForestClassifier(random_state=123)
# create preprocessor & modeling pipeline
pipeline = Pipeline([('prep', preprocessor), ('rf', rf)])
For this particular random forest algorithm we'll assess the following hyperparameters. Don't worry if you are not familiar with what these do.
- n_estimators: number of trees in the forest,
- max_features: number of features to consider when looking for the best split,
- max_depth: maximum depth of each tree built,
- min_samples_leaf: minimum number of samples required in a leaf node,
- max_samples: number of samples to draw from our training data to train each tree.

A standard grid search would be very computationally intense.
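For instance, a hypothetical grid of 10 candidate values for each of these 5 hyperparameters would contain $10^5 = 100{,}000$ combinations; with 5-fold cross-validation that means $100{,}000 \times 5 = 500{,}000$ model fits.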
Instead, we'll use a randomized search with RandomizedSearchCV, which samples candidate values from distributions we specify.
To build our search space, we need to specify a distribution for each hyperparameter.
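The loguniform distribution below comes from scipy.stats; loguniform_int is not part of scikit-learn or scipy, so here is a minimal sketch of an integer-valued wrapper (this implementation is an assumption for illustration):
from scipy.stats import loguniform
class loguniform_int:
    """Integer-valued log-uniform distribution (assumed helper)."""
    def __init__(self, a, b):
        self._distribution = loguniform(a, b)
    def rvs(self, *args, **kwargs):
        # sample from the log-uniform distribution and cast to integers
        return self._distribution.rvs(*args, **kwargs).astype(int)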
# specify hyperparameter distributions to randomly sample from
param_distributions = {
'rf__n_estimators': loguniform_int(50, 1000),
'rf__max_features': loguniform(.1, .5),
'rf__max_depth': loguniform_int(4, 20),
'rf__min_samples_leaf': loguniform_int(1, 100),
'rf__max_samples': loguniform(.5, 1),
}
Now, we can define the randomized search using the different distributions.
Executing 10 iterations of 5-fold cross-validation for random parametrizations of this model on this dataset can take from 10 seconds to several minutes, depending on the speed of the host computer and the number of available processors.
%%time
from sklearn.model_selection import RandomizedSearchCV
# perform 10 random iterations
random_search = RandomizedSearchCV(
pipeline,
param_distributions=param_distributions,
n_iter=10,
cv=5,
scoring='roc_auc',
verbose=1,
n_jobs=-1,
)
results = random_search.fit(X_train, y_train)
Fitting 5 folds for each of 10 candidates, totalling 50 fits CPU times: user 50.9 s, sys: 790 ms, total: 51.7 s Wall time: 2min 55s
results.best_score_
0.9159613847988037
results.best_params_
{'rf__max_depth': 14, 'rf__max_features': 0.4233891550145859, 'rf__max_samples': 0.8068442678419226, 'rf__min_samples_leaf': 12, 'rf__n_estimators': 939}
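With refit=True (the default), the search retrains the best pipeline on the full training set, so we can finish by scoring it on the held-out test set; a short sketch:
# accuracy of the tuned random forest pipeline on the held-out test set
best_model = results.best_estimator_
best_model.score(X_test, y_test)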
This module discussed:
- Using cross_validate to perform k-fold cross-validation procedures.
- Different ways to compute evaluation metrics (scoring):
  a. Default scoring method - estimator.score()
  b. Individual scoring functions - from sklearn import metrics
  c. Scoring parameters - cross_validate(..., scoring='roc_auc')
- Using GridSearchCV and RandomizedSearchCV to perform hyperparameter tuning.