- Regression a la Bayes
- Sample Regression Analysis
- Matrix Algebra Methods
- Understanding quap
- Continuing the Code
- A Full Example
- Designing the Multiple Regression
- Arranging a Bayesian Multiple Regression
- Summary
Arranging a Bayesian Multiple Regression
Earlier in this chapter I described how to provide arguments to a quap function that support a single-predictor regression. I’ll review it briefly here. You supply these arguments:
A variable that represents the outcome for each case, such as a car’s MPG, usually referred to by the name of the outcome variable. For example:
MPG ~ dnorm( mu , sigma )
specifies that MPG is normally distributed (dnorm) with a mean of mu and a standard deviation of sigma. The outcome variable is usually supplied in a data frame along with the predictor (see below).
A parameter, often but not necessarily termed mu, that represents the result of the regression equation. For example:
mu <- alpha + beta * predictor
Parameters, usually but not necessarily named alpha and beta, that represent the constant (or the intercept) and the coefficient (or the slope) in the regression equation.
A parameter, often but not necessarily termed sigma, that represents the standard deviation of the outcome variable around mu. It determines the spread of the outcome variable’s distribution along its x-axis.
A data frame that contains, at a minimum, the values for the outcome variable (in this example, MPG) and for a predictor variable such as Speed. The data frame might be named CarData.
Here’s how the quap function might appear for an analysis of MPG given a single predictor variable, Speed:
CarQuap <- quap(
    alist(
        MPG ~ dnorm( mu , sigma ) ,
        mu <- alpha + beta * Speed ,
        alpha ~ dnorm( 0 , 1 ) ,
        beta ~ dnorm( 0 , 1 ) ,
        sigma ~ dexp( 1 )
    ) ,
    data = CarData )
A few comments about the arguments to the quap function:
As I mentioned earlier in this chapter, it’s usually a good idea to standardize the values that you supply for the outcome variable and the predictor variable(s) before passing them along to quap. Doing so minimizes the effects that numeric overflow can have on the results of the analysis. You can use the rethinking package’s standardize function to handle this for you, or you can subtract a variable’s mean from each actual value and divide each result by the variable’s standard deviation. (The results are often termed z-scores; a short sketch of that hand calculation follows these comments.)
One result of this standardization is that the z-scores will have a mean of 0 and a standard deviation of 1. It often works out well, especially if you have standardized the predictors and the outcome variable, to use 0 and 1 as the mean and sigma of the dnorm arguments that describe the distributions of alpha and beta.
Notice the use of the tilde instead of an assignment operator in several lines of the quap code. The tilde indicates that a variable or parameter follows the named distribution, in this case a normal density, rather than being computed from other quantities (as mu is, via <-).
In this example, sigma is specified as sigma ~ dexp(1). The dexp function returns the density of the exponential distribution, a member of the exponential family, the broader class of distributions that also includes the Gaussian (normal), the Gamma, the Poisson, and the Binomial.
The exponential distribution has one parameter, the rate (or lambda); by contrast, the Gaussian distribution has two: the mean and the standard deviation. In base R syntax, dexp(x, rate) returns the density at the quantile x, and the rate defaults to 1; in a quap prior such as sigma ~ dexp(1), the 1 supplies the rate. Among other reasons, the exponential distribution is handy for specifying sigma because it is defined only for positive values, and a standard deviation is, by definition, a positive quantity.
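Here is a minimal sketch of standardizing by hand, assuming the CarData frame described above; the name z is mine:
# Standardize by hand: subtract the mean, then divide by the standard deviation.
z <- ( CarData$MPG - mean( CarData$MPG ) ) / sd( CarData$MPG )
mean( z )   # approximately 0
sd( z )     # exactly 1
# The rethinking package's standardize function should return the same values:
# standardize( CarData$MPG )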
That’s all you need for a simple regression of one outcome variable on one predictor. To add a predictor and analyze the simultaneous effect of two predictors on one outcome variable, you need to make four changes to the single-predictor analysis:
The additional predictor named Weight should be added to the input data frame named CarData above.
The additional regression coefficient, for Weight, must be specified by the addition of this line of code:
Weight_beta ~ dnorm ( 0, 1 )
In addition, for clarity it makes sense to edit the existing specification for the Speed coefficient to this:
Speed_beta ~ dnorm ( 0, 1 )
The Weight predictor and its coefficient should be added to the regression equation. In the single-variable example, that equation looks like this:
mu <- alpha + beta * Speed
In the two-variable example the equation looks like this:
mu <- alpha + ( Speed_beta * Speed ) + ( Weight_beta * Weight )
The full code example might look like this:
library(rethinking)
setwd("C:/Users/conra/Documents/Pearson Bayes/Drafts/Ch 6")
CarDataFrame <- read.csv("Cars.csv")
# You may need to adjust the path to the .csv file on your computer.
# The three variables are named Spd, Wt and Mileage in the csv file.
# They are saved as newly standardized data, with new names (Speed,
# Weight, and MPG), in the same steps that standardize them.
CarDataFrame$Speed <- standardize( CarDataFrame$Spd )
CarDataFrame$Weight <- standardize( CarDataFrame$Wt )
CarDataFrame$MPG <- standardize( CarDataFrame$Mileage )

regmodel <- quap(
    alist(
        MPG ~ dnorm( mu , sigma ) ,
        mu <- a + ( Speed_beta * Speed ) + ( Weight_beta * Weight ) ,
        a ~ dnorm( 0 , 1 ) ,
        Weight_beta ~ dnorm( 0 , 1 ) ,
        Speed_beta ~ dnorm( 0 , 1 ) ,
        sigma ~ dexp( 1 )
    ) ,
    data = CarDataFrame )
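As an optional sanity check (a sketch using base R's sapply), you can confirm that each standardized column now has a mean of approximately 0 and a standard deviation of 1:
# The means of the standardized columns should be (approximately) 0...
round( sapply( CarDataFrame[ , c("Speed", "Weight", "MPG") ], mean ), 6 )
# ...and their standard deviations should be 1.
sapply( CarDataFrame[ , c("Speed", "Weight", "MPG") ], sd )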
You can get a smattering of summary information using the rethinking library’s precis function. Simply supply it with the name of the quap model you just created, and specify the number of significant figures if you wish:
precis(regmodel, digits=6)
Here’s what precis returns:
             mean       sd        5.50%      94.50%
a            -1.1E-05   0.131111  -0.20955    0.20953
Weight_beta  -0.30059   0.137421  -0.52021   -0.08096
Speed_beta   -0.01806   0.137417  -0.23768    0.201556
sigma         0.93517   0.092242   0.78775    1.08259
(The 5.50% and 94.50% limits mark an 89% interval; they are how the developer of the rethinking package chooses to protest conventional and arbitrary criteria such as 5% significance levels and 95% confidence intervals.)
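If you prefer more conventional limits, precis takes a prob argument that sets the width of the reported interval. The call below is only a sketch, and assumes prob behaves as documented in the rethinking package:
# Request a 95% interval (2.5% and 97.5% limits) instead of the default 89% interval.
precis( regmodel , prob = 0.95 , digits = 6 )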
To check your work, consider running a traditional regression routine on the data that this section has analyzed. One convenient way, with continuous predictors and a continuous outcome as here, is R’s built-in lm function. If you do so after running your Bayesian analysis, you can take advantage of the data frame you just created. For example, you can get quite a bit of summary information from these two statements, which return the results shown in Figure 6.4:
Car_lm <- lm( CarDataFrame$MPG ~ CarDataFrame$Speed + CarDataFrame$Weight )
summary( Car_lm )
Figure 6.4 The lm function performs a traditional multiple regression analysis.
Notice first that the intercept and coefficients returned by lm are close to the a (alpha) and the Speed and Weight betas returned by quap and precis, but they do not duplicate them precisely. That is largely because lm computes the ordinary least-squares solution, the coefficients that maximize R2 (equivalently, minimize the squared residuals), whereas quap maximizes the posterior, in which the dnorm(0, 1) priors pull the estimates slightly toward zero.
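One way to watch the priors at work is to refit the same model with much flatter priors; the posterior means should then land closer to lm’s estimates. The following is only a sketch, the model name flatmodel is mine, and the exact numbers depend on your data:
# Same model as regmodel, but with much wider (flatter) priors on the
# intercept and the two coefficients.
flatmodel <- quap(
    alist(
        MPG ~ dnorm( mu , sigma ) ,
        mu <- a + ( Speed_beta * Speed ) + ( Weight_beta * Weight ) ,
        a ~ dnorm( 0 , 10 ) ,
        Weight_beta ~ dnorm( 0 , 10 ) ,
        Speed_beta ~ dnorm( 0 , 10 ) ,
        sigma ~ dexp( 1 )
    ) ,
    data = CarDataFrame )
# The means reported here should sit closer to lm's coefficients.
precis( flatmodel , digits = 6 )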
Furthermore, lm’s summary displays only a few significant figures by default, whereas you can choose the number of digits that precis reports via its digits argument. You might want to compare as many as, say, eight digits in the regression coefficients. One way to do so is via the options function. For example, these statements:
options( digits = 4 )
coef( Car_lm )
return these results:
        (Intercept)  CarDataFrame$Speed  CarDataFrame$Weight
         -3.036e-16          -2.005e-02           -3.066e-01
but these functions:
options( digits = 6 )
coef( Car_lm )
return these results:
        (Intercept)  CarDataFrame$Speed  CarDataFrame$Weight
       -3.03642e-16        -2.00472e-02         -3.06564e-01
(In the latter two examples I’ve used the coef function instead of the summary function to save space by showing only the coefficients.)
There are lots of ways to specify numeric formats in R. The options statement, just discussed, belongs to R’s base functions, whereas the digits argument used earlier belongs to the precis function (among many other functions that accept one). This situation tends to make matters more confusing rather than less.
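For instance, base R’s signif and format functions will reformat the lm coefficients without touching the global digits option; this is just a sketch using the Car_lm object created above:
# Round the coefficients to eight significant figures...
signif( coef( Car_lm ), 8 )
# ...or return them as character strings formatted with eight significant digits.
format( coef( Car_lm ), digits = 8 )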