How Excel's LINEST() Handles Collinearity
It's not unusual—in fact, it's the normal state of affairs—for the predictor variables in a multiple regression equation to be correlated with one another. Suppose that you were investigating the relationship between income as an outcome variable, and age and years of education as predictor variables.
You expect age to be positively correlated with years of education. You don't expect a perfect correlation of 1.0 between the two variables, but you're not at all surprised to find a moderately strong correlation, something along the lines of 0.7.
Multiple regression analysis in general (and Excel's LINEST() function in particular) is perfectly capable of dealing with correlated predictor variables (what Excel terms the x-values, as distinct from the outcome variable's y-values).
In fact, that's one of the purposes of multiple regression analysis: to determine the amount of variability in the outcome variable that's uniquely attributable to each predictor variable. And to determine that unique portion of the variance, you have to be able to untangle the relationships among the predictor variables. Among the methods of doing so are the models comparison approach and a technique that works beautifully with Excel's relative and absolute addressing, both detailed in Chapter 12 of Statistical Analysis: Microsoft Excel 2010 (Que, 2011).
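To make the models comparison approach concrete, here is a minimal sketch in Python. The simulated age, education, and income figures are my own invention, chosen only so that the two predictors correlate at roughly 0.7 as in the example above; the idea is that a predictor's unique contribution is the drop in R² when it's removed from the full model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Simulated data: education correlates with age at roughly 0.7.
age = rng.normal(40, 10, n)
education = 14 + 0.3 * (age - 40) + rng.normal(0, 3, n)
income = 1000 * age + 2500 * education + rng.normal(0, 8000, n)

def r_squared(X, y):
    """R-squared for a least-squares fit of y on X, with a constant."""
    X = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_full = r_squared(np.column_stack([age, education]), income)
r2_without_age = r_squared(education, income)

# The drop in R-squared is the variance uniquely attributable to age.
print(f"Unique contribution of age: {r2_full - r2_without_age:.3f}")
```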
Troublesome Collinearity
But there's a problem when one of the predictor variables is completely dependent on one or more of the other predictors. In that case, traditional approaches to generating the multiple regression equation (and goodness-of-fit statistics such as R²) produce results that are uninterpretable or simply wrong. See Figure 1 for an example.
Figure 1 In Excel 2002, LINEST() returns a zero for each of the regression coefficient standard errors.
The particular result shown in Figure 1 is due to Excel 2002 (and earlier versions), and to the particular set of inputs. The input values form a data matrix, denoted here as X, from which the sum of squares and cross products (SSCP) matrix X'X is computed. That matrix product X'X can be inverted, but the inverse has negative values on the main diagonal and therefore yields negative standard errors. Excel 2002 evidently converts negative standard errors to zeros.
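To show what those traditional calculations look like, here is a sketch of the normal-equations arithmetic in Python. The numbers are made up and deliberately well-behaved, not the Figure 1 inputs; the point is that both the coefficients and their standard errors depend on the inverse of X'X, so a defective inverse poisons the standard errors directly.

```python
import numpy as np

# Hypothetical data: a column of 1's for the constant, then two predictors.
X = np.array([[1.0, 2.0, 3.0],
              [1.0, 3.0, 5.0],
              [1.0, 5.0, 4.0],
              [1.0, 7.0, 9.0],
              [1.0, 8.0, 7.0]])
y = np.array([4.0, 7.0, 8.0, 13.0, 14.0])

xtx_inv = np.linalg.inv(X.T @ X)             # inverse of the SSCP matrix X'X
coef = xtx_inv @ X.T @ y                     # regression coefficients

resid = y - X @ coef
mse = resid @ resid / (len(y) - X.shape[1])  # residual mean square
se = np.sqrt(mse * np.diag(xtx_inv))         # standard errors, from the diagonal

print(coef)
print(se)
```

With inputs like those in Figure 1, the diagonal of the inverse goes negative, and the square-root step above has nothing sensible to work with.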
Figure 2 depicts another, related problem with the Excel 2002 version of LINEST().
Figure 2 In Excel 2002, LINEST() returns nothing but #NUM! error values for this set of inputs.
In Figure 2, the problem is that the collinearity causes the X'X product matrix to have no inverse (it has a determinant of zero), and therefore none of the regression statistics can be calculated using traditional approaches.
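A couple of lines of NumPy, with made-up values, illustrate the dead end. When one predictor is an exact multiple of another (here a hypothetical X2 equal to twice X1), the SSCP matrix is singular, its determinant is zero, and inversion fails outright:

```python
import numpy as np

x1 = np.array([1.0, 3.0, 5.0, 7.0, 9.0])
X = np.column_stack([np.ones(5), x1, 2 * x1])  # constant, X1, X2 = 2 * X1

sscp = X.T @ X
print(np.linalg.det(sscp))   # 0.0, up to floating-point noise

try:
    np.linalg.inv(sscp)
except np.linalg.LinAlgError as err:
    print("Inversion fails:", err)   # "Singular matrix"
```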
QR Decomposition
The "traditional approaches" I mention in the prior paragraph have to do with fairly straightforward techniques of matrix algebra: matrix transposition, multiplication, and inversion (although no matrix inversion process should be termed "straightforward" if more than three variables are involved).
In Excel 2003 through 2010, Microsoft employs a different approach to solving the multiple regression problem: QR decomposition. This process has two advantages:
- Unlike matrix inversion, QR decomposition is not stumped by serious collinearity. The multiple regression calculations can be completed and an alternative result provided, one that omits the linear dependency in the predictor variables.
- It does not rely on matrix multiplication and inversion of the raw values, which are thought to cause numeric overflows in many computer systems and consequent inaccuracies in the results. QR decomposition does involve matrix manipulation, but the input values are adjusted beforehand to reduce the likelihood of such overflows.
Because this is intended to be a short paper, I will not get into the particulars of QR decomposition here, except to note that it usually involves replacing the observed X values with either zeros or sums of squares. Matrix operations are still involved, but there is much less opportunity for them to cause numeric overflows. The benefits therefore include more precise results, and intermediate calculations that are not derailed by negative sums of squares or by determinants that equal zero.
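For readers who want to see the idea in action, here is a minimal sketch using a rank-revealing (column-pivoted) QR factorization in Python with NumPy and SciPy. This is an illustration of the general technique, not Microsoft's actual implementation; the data are invented, with X1 equal to X2 plus 1, mirroring the dependency in Figures 3 and 4.

```python
import numpy as np
from scipy.linalg import qr

x2 = np.array([2.0, 4.0, 5.0, 7.0, 9.0])
x1 = x2 + 1.0                     # perfectly dependent: X1 = X2 + 1
y = np.array([1.0, 3.0, 2.0, 6.0, 5.0])

X = np.column_stack([np.ones(5), x1, x2])   # constant, X1, X2

# Column-pivoted QR: the dependent column shows up as a (near-)zero
# diagonal entry in R, so the numerical rank can be read off directly.
Q, R, piv = qr(X, pivoting=True)
tol = abs(R[0, 0]) * max(X.shape) * np.finfo(float).eps
rank = int(np.sum(np.abs(np.diag(R)) > tol))
print("rank:", rank)              # 2, not 3

# Solve using only the independent columns; the dependent column's
# coefficient is set to 0.0, which drops it from the equation.
coef = np.zeros(X.shape[1])
coef[piv[:rank]] = np.linalg.solve(R[:rank, :rank], (Q.T @ y)[:rank])
print("coefficients:", coef)
```

Which column gets the zero coefficient depends on the pivoting order; LINEST() makes the analogous choice when it zeros out X(1) in Figures 3 and 4.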
Figures 3 and 4 repeat the data sets used in Figures 1 and 2, with the LINEST() results that are returned in Excel 2003 through Excel 2010.
Figure 3 The LINEST() regression equation returns non-zero standard errors—with one exception.
Figure 4 LINEST() returns numeric results rather than a matrix of error values.
Notice in both Figure 3 and Figure 4 that one of the variables has a zero value both for its regression coefficient (cell B9 in both figures) and for its standard error (cell B10 in both figures). This is Excel's way of communicating that, in both cases, it regards the X(1) variable as contributing no unique information to the estimation of Y.
Therefore, LINEST() assigns X(1) a regression coefficient of 0.0, which is tantamount to removing X(1) from the regression equation:
Ŷ = -7.586 + 0.0 * X(1) + 1.480 * X(2)
When you multiply X(1) by zero for all records, X(1) drops out of the equation. If X(1) is completely dependent on X(2), or vice versa, then the information in one of the variables is completely redundant, and one of them should be omitted from the equation.
The variables X(1) and X(2) are perfectly dependent on one another: X(2) is just X(1) minus 1, or, if you prefer, X(1) is just X(2) plus 1. Therefore, X(1) cannot provide any information about Y once the information in Y attributable to X(2) has been accounted for.
Notice, by the way, that the omission of one of the X variables is reflected in the degrees of freedom (df) for the residual, in cell B12 in both Figure 3 and Figure 4. The residual df is the number of cases less the number of fitted terms: the predictor variables plus the constant. There are five cases, one each in rows 2 through 6. After one of the dependent X variables is omitted, one X variable is left on the worksheet. Because the third argument to LINEST() has been omitted (the same as setting it to TRUE), Excel automatically supplies a column of 1's to represent the constant. So 5 cases, less the X variable remaining on the worksheet, less the unseen column of 1's representing the constant, leaves 3 degrees of freedom, as reported by LINEST().
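That arithmetic is equivalent to subtracting the rank of the design matrix (the constant's column of 1's plus the X variables) from the number of cases, which a couple of lines of NumPy can confirm on invented data with the same X1 = X2 + 1 dependency:

```python
import numpy as np

x2 = np.array([2.0, 4.0, 5.0, 7.0, 9.0])          # hypothetical values
X = np.column_stack([np.ones(5), x2 + 1.0, x2])   # constant, X1 = X2 + 1, X2

df_resid = len(X) - np.linalg.matrix_rank(X)
print(df_resid)   # 5 cases - rank 2 = 3, matching LINEST()
```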
A Difficult Diagnosis
The dependency in the X variables need not be restricted to two of the variables, such as the case in which variable X(2) is the result of multiplying variable X(1) by a constant. In that sort of situation, a simple correlation analysis reveals the dependency. See Figure 5.
Figure 5 The dependency is clear from the correlation matrix in B9:D11, particularly cell B10, but not from B23:D25.
In Figure 5, the correlation between B2:B6 and C2:C6 is both perfect and obvious from the correlation matrix in B9:D11. X(2) is simply twice X(1).
But there is no zero-order correlation of 1.0 in the data shown in B16:D20, and none in the matrix shown in B23:D25. Here, X(3) is the sum of X(1) and X(2). There is no perfect correlation between any pair of the individual variables, but there is a perfect linear dependency between X(3) and the combination of X(1) and X(2), as is shown in cells G23 and G25. To detect that dependency without running LINEST(), you would have to check whether the determinant of the SSCP matrix is zero.
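Here is a short sketch of that diagnosis on invented data with the same structure, X3 = X1 + X2. No pairwise correlation comes anywhere near 1.0, yet the determinant of the correlation matrix (the same verdict follows from the SSCP matrix) is zero:

```python
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.normal(size=50)
x2 = rng.normal(size=50)
x3 = x1 + x2                       # dependent on X1 and X2 jointly

data = np.column_stack([x1, x2, x3])
corr = np.corrcoef(data, rowvar=False)

print(np.round(corr, 3))           # every off-diagonal entry is below 1.0
print(np.linalg.det(corr))         # effectively 0.0: the matrix is singular
```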
No Warning
This is all sensible, and it's the approach taken by major statistical applications such as SAS, SPSS, and R.
However, those packages go a step further and alert the user with a message to the effect that there is complete linear dependency in the underlying data, and that one or more variables have been removed from the equation. This is considerate. Excel provides the user with no warning along those lines, apart from zero-valued regression coefficients and standard errors.
Without knowledge of what Excel might do if it encounters this sort of linear dependency, the user might not understand the reason that one of the variables' regression coefficients is 0.0, that its standard error is given as 0.0, and that the df for the residual has in consequence been increased by 1.
Of course, LINEST() is a worksheet function and as such is expected to return results, not warnings in the form of text messages. However, it would be consistent with the behavior of other Excel worksheet functions if LINEST() were to return a value such as #NUM! or #N/A when QR decomposition reveals linear dependency among the X variables.
Furthermore, TREND() uses the same approach to calculating the regression equation as LINEST() does. But nowhere in the TREND() results is it apparent that a variable has been omitted from the regression equation. Granted, a user should always arrange for and examine the results returned by LINEST() before uncritically accepting the results of TREND(). Nevertheless, TREND() is accompanied by no warning at all that something unexpected might have occurred.