- Overview
- Statistics and Machine Learning
- The Impact of Big Data
- Supervised and Unsupervised Learning
- Linear Models and Linear Regression
- Generalized Linear Models
- Generalized Additive Models
- Logistic Regression
- Enhanced Regression
- Survival Analysis
- Decision Tree Learning
- Bayesian Methods
- Neural Networks and Deep Learning
- Support Vector Machines
- Ensemble Learning
- Automated Learning
- Summary
Enhanced Regression
As the volume of data grows, analysts struggle to work with data sets containing large numbers of potential predictors. Expanding the number of candidate predictors poses technical issues for analytic algorithms, increases the demands on computing resources, and creates potential methodological problems for the analyst. Analysts also consider a number of mathematical transforms of the predictors as well as interaction effects among predictors; consequently, the number of measures considered for the predictive model expands exponentially as the number of raw candidate measures increases. With 100 raw predictors, for example, there are 4,950 possible pairwise interactions alone, before any transforms are counted.
Two methods are widely used to address this problem: stepwise regression and regularization. We discuss each in turn.
Stepwise Regression
Stepwise regression is a hybrid method that combines statistical modeling with machine learning techniques. Recall from the earlier discussion of linear regression that the analyst specifies a model, estimates it, inspects the significance tests for the coefficients, and respecifies the model to remove nonsignificant predictors. This process works reasonably well with a limited number of possible predictors but takes a considerable amount of time when there are many.
Stepwise regression methods streamline the model-building task by automating the process. Three approaches to automation are used widely:
- Forward selection—The algorithm begins with an (optional) intercept-only model and progressively adds candidate predictors until it reaches a stopping point.
- Backward selection—The algorithm begins with a model that includes all candidate predictors and progressively eliminates them from the model until it reaches a stopping point.
- Bidirectional stepwise—The algorithm proceeds similarly to forward selection, but at each step it can either add or drop candidate predictors until it reaches a stopping point.
Stepwise algorithms evaluate candidate predictors by comparing two versions of the model: one that includes the predictor and one that does not. The algorithm performs a statistical test to select one of the two candidate models; in most software implementations, the user can choose the test criterion. The three most widely used criteria are the F-test, Akaike's information criterion (AIC), and the Bayesian information criterion (BIC).
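To illustrate, here is a minimal sketch of forward selection with AIC as the stopping criterion, written in Python with the statsmodels library (an assumed toolkit; the data set and column names are hypothetical). Backward and bidirectional variants follow the same pattern, dropping or swapping predictors at each step.

```python
# A minimal sketch of forward stepwise selection using AIC as the criterion.
# Assumes a pandas DataFrame with one numeric response column and numeric
# candidate predictors; the data and column names below are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm


def forward_select(df, response):
    """Greedily add whichever candidate predictor lowers AIC the most."""
    y = df[response]
    candidates = [c for c in df.columns if c != response]
    selected = []
    # Start from the intercept-only model.
    best_aic = sm.OLS(y, np.ones(len(y))).fit().aic

    while candidates:
        # Refit the model once for each remaining candidate.
        scores = [(sm.OLS(y, sm.add_constant(df[selected + [c]])).fit().aic, c)
                  for c in candidates]
        aic, best = min(scores)
        if aic >= best_aic:      # stopping point: no candidate improves AIC
            break
        best_aic = aic
        selected.append(best)
        candidates.remove(best)
    return selected


# Hypothetical example: 200 rows, 10 candidate predictors, 3 of them relevant.
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.normal(size=(200, 10)),
                    columns=[f"x{i}" for i in range(10)])
data["y"] = 2 * data["x0"] - 3 * data["x4"] + data["x7"] + rng.normal(size=200)
print(forward_select(data, "y"))  # typically selects x0, x4, and x7
```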
Although stepwise regression is efficient and effective for predictive modeling, the method is less useful for analysis of variance, in which there is a premium on analytic rigor and statistical validity. Stepwise regression is also subject to overfitting, in which the model produced does not generalize well from the training data to production data (for more on overfitting, see the next section). For these reasons, many analysts use stepwise regression primarily as an exploratory tool to narrow the set of possible predictors.
Stepwise regression methods work with any underlying form of regression; the most popular are stepwise linear and stepwise logistic regression.
Regularization
Overfitting or overlearning is a condition in which the accuracy of a model is much higher on its training data set than on an independent data set. In short, the model does not generalize well because the algorithm that produced it learned random features of the training data. This is a serious problem for analysts because the ultimate test of a model is how it performs in production, not how well it performs in the lab.
As a rule, overfitting is a larger problem for machine learning than for statistics because statistical models are grounded in known statistical distributions. However, as the complexity of a model increases and additional predictors are added, even statistical models can suffer from overfitting.
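The sketch below makes the train-to-test gap concrete: it fits a modest model and a deliberately over-complex model to the same noisy data and compares their fit on the training sample with their fit on a held-out sample (scikit-learn is assumed here purely for illustration).

```python
# A small illustration of overfitting: a high-degree polynomial fits the
# training data almost perfectly but generalizes poorly to held-out data.
# scikit-learn is assumed here; the chapter does not prescribe a toolkit.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(x).ravel() + rng.normal(scale=0.3, size=60)  # noisy nonlinear signal

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.5,
                                                    random_state=0)

for degree in (3, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x_train, y_train)
    print(degree,
          round(model.score(x_train, y_train), 3),  # R^2 on training data
          round(model.score(x_test, y_test), 3))    # R^2 on held-out data
# The degree-15 model typically scores near 1.0 on the training data but
# much lower on the held-out data: it has learned the noise.
```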
There are several techniques to prevent overfitting, including validation of the model on an independent sample, n-fold cross-validation, and regularization. We cover the first two under machine learning; in this section, we discuss regularization.
Regularization methods limit complexity by adding a penalty to the model-fitting criterion based on the size of the coefficients; a candidate predictor enters the model only if it improves the fit enough to overcome the penalty. There are several specific methods for regularization; the most widely used are ridge regression (also called Tikhonov regularization or constrained linear inversion) and LASSO regression (least absolute shrinkage and selection operator). The Elastic Net method combines the ridge and LASSO penalties.
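As a rough illustration of how these penalties behave, the sketch below fits ridge, LASSO, and Elastic Net models to the same simulated data with scikit-learn (an assumed toolkit; the penalty strengths shown are arbitrary). LASSO and Elastic Net drive the coefficients of irrelevant predictors exactly to zero, while ridge only shrinks them.

```python
# A rough sketch of ridge, LASSO, and Elastic Net fits using scikit-learn
# (an assumed toolkit; the penalty strengths below are arbitrary).
# Ridge adds an L2 penalty (sum of squared coefficients), LASSO an L1 penalty
# (sum of absolute coefficients), and Elastic Net a weighted mix of the two.
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))      # 20 candidate predictors
beta = np.zeros(20)
beta[:3] = [4.0, -2.0, 3.0]         # only the first three actually matter
y = X @ beta + rng.normal(size=200)

models = {
    "ridge": Ridge(alpha=1.0),
    "LASSO": Lasso(alpha=0.1),
    "elastic net": ElasticNet(alpha=0.1, l1_ratio=0.5),
}
for name, model in models.items():
    model.fit(X, y)
    nonzero = int(np.sum(np.abs(model.coef_) > 1e-6))
    print(f"{name}: {nonzero} nonzero coefficients")
# Ridge typically retains all 20 coefficients (shrunk toward zero), while
# LASSO and Elastic Net set most of the irrelevant ones exactly to zero.
```

In practice, the penalty strength is not fixed in advance but chosen by cross-validation.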
Higher-end statistical software generally includes ridge and LASSO regularization, as does open source R. For Elastic Net, MathWorks offers a commercial implementation, and in open source R the popular glmnet package supports the technique.