- Dummy Coding
- Effect Coding
- Orthogonal Coding
- Factorial Analysis
- Statistical Power, Type I and Type II Errors
- Coping with Unequal Cell Sizes
Statistical Power, Type I and Type II Errors
In previous chapters I have mentioned a topic termed statistical power from time to time. Because it is a major reason to carry out factorial analyses as discussed in this chapter, and to carry out the analysis of covariance as discussed in Chapter 8, it’s important to develop a more thorough understanding of what statistical power is and how to quantify it.
On a purely conceptual level, statistical power refers to a statistical test’s ability to identify the difference between two or more group means as genuine, when in fact the difference is genuine at the population level. You might think of statistical power as the sensitivity of a test to the difference between groups.
Suppose you’re responsible for bringing a collection of websites to the attention of consumers who are shopping online. Your goal is to increase the number of hits that your websites experience; any resulting revenue and profit are up to the people who choose which products to market and how much to charge for them.
You arrange with the owner of a popular web-search site to display links to 16 of your sites, randomly selected from among those that your company controls. The remaining 16 sites, also randomly selected, will get no special promotion for a month.
Your intent is to compare the average number of hourly hits for the sites whose links get prominent display with the average number of hourly hits for the remaining sites. You decide to make a directional hypothesis at the 0.05 alpha level: Only if the specially promoted sites have a higher average number of hits, and only if the difference between the two groups of sites is so large that it could come about by chance only once in 20 replications of this trial, will you reject the hypothesis that the added promotion makes no difference to the hourly average number of hits.
Your data come in a month later, and you find that your control group—the sites that received no special promotion—averages 45 hits per hour, while the specially promoted sites average 55 hits per hour. The standard error of the mean is 5. Figure 7.23 displays the situation graphically.
Figure 7.23 Both power and alpha can be thought of as probabilities and depicted as areas under a curve.
Assume that two populations exist: The first consists of websites like yours that get no special promotion. The second consists of websites that are promoted via links on another popular site, but that are otherwise equivalent to the first population. If you repeated your month-long study hundreds or perhaps thousands of times, you might get two distributions that look like the two curves in Figure 7.23.
The curve on the left represents the population of websites that get no special promotion. Over the course of a month, some of those sites—a very few—get as few as 25 hits per hour, and an equally small number get as many as 65 hits per hour. The great majority of those sites average 45 hits per hour: the mode, mean, and median of the curve on the left.
The curve on the right represents the specially promoted websites. They tend to get about 10 hits more per hour than the sites represented by the curve on the left. Their overall average is 55 hits per hour.
Now, most of this information is hidden from you. You don’t have access to information about the full populations, just the results of the two samples you took—but that’s enough. Suppose that at the end of the month the two populations have the same mean, as would be the case if the extra promotion had no effect on the average hourly hits.
In that case, the difference in the average hit rate returned by your 16 experimental sites would have been due to nothing more than sampling error. That average of 55 hourly hits is among the averages in the right-hand tail of the curve on the left: the portion of the curve designated as alpha, shown in the chart in Figure 7.23 in a darker shade than the rest of the curve on the left.
Calculating Statistical Power
The boundary between alpha and the rest of the curve on the left is the critical value established by alpha. When you adopted 5% as your alpha level, with a directional hypothesis, you committed all of that 5% to the right-hand tail of the curve. The critical value cuts off that 5%, and you can find that critical value using Excel’s T.INV() function:
=T.INV(0.95,30)
That is, what is the value in the t distribution with 30 degrees of freedom that separates the lowest 95% of the values in the distribution from the top 5%? The result is 1.7. If you go up from the mean of the distribution by 1.7 standard errors, you account for the lowest 95% of the distribution. In this case the standard error is 5 (you learned that when you got the data on mean hourly hits), and 5 times 1.7 is 8.5. Add that to the mean of the curve on the left, and you get a critical value of 53.5—rounded to 54 in the discussion that follows.
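If you prefer to let Excel do that arithmetic in a single step, a formula along these lines—using the control group mean of 45, the standard error of 5, and the 30 degrees of freedom from the example—returns the critical value directly:
=45+5*T.INV(0.95,30)
The result is about 53.5, matching the hand calculation.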
In sum: The value of alpha is entirely under your control—it’s your decision rule. You have made a directional hypothesis and you have set alpha to 0.05. Therefore, you have decided to reject the null hypothesis of no difference between the groups at the population level if, and only if, the experimental group’s sample mean turns out to be at least 1.7 standard errors above the control group’s mean.
Sometimes, the experimental group’s mean will come from that right-hand tail of the left curve’s distribution, just because of sampling error. Because the experimental group’s mean, in that case, is at least 1.7 standard errors above the control group’s mean, you’ll reject the null hypothesis even though both populations have the same mean. That’s Type I error, the probability of incorrectly rejecting a true null hypothesis.
Now suppose that in reality the populations are distributed as shown in Figure 7.23. If the sample experimental group’s mean exceeds the critical value of 54—which is about 1.7 standard errors above the control group mean—then you’ll correctly reject the null hypothesis of no difference at the population level.
Focus on the right curve in Figure 7.23. The area to the right of the critical value in that curve is the statistical power of your t-test. It is the probability that the experimental group mean comes from the curve on the right, in a reality where the two groups are distributed as shown at the population level.
Quantifying that probability is easy enough. Just take the difference between the critical value and the experimental group mean and divide by the standard error of 5:
=(54-55)/5
to get −0.2. That’s a t-value. Evaluate it using the T.DIST() function:
=T.DIST(-0.2,15,TRUE)
using 15 as the degrees of freedom, because at this point we’re working solely with the experimental group of 16 websites. The result is 0.422. That is, 42.2% of the area beneath the curve that represents the experimental group lies below the critical value of 54. Therefore 57.8% of the area under the curve lies to the right of the critical value, and the statistical power of the t-test is 57.8%. See Figure 7.24.
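If you want Excel to return the power figure directly, one way is to combine those steps in a single formula—here using the critical value of 54, the experimental group mean of 55, the standard error of 5, and 15 degrees of freedom, just as above:
=1-T.DIST((54-55)/5,15,TRUE)
The result is about 0.578, or 57.8%, the same figure reached step by step above.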
Figure 7.24 Type I error and alpha have counterparts in Type II error and beta.
In Figure 7.24 you can see the area that corresponds to statistical power in the curve on the right, to the right of the critical value. The remaining area under that curve is usually termed beta. It is alpha’s counterpart.
If you incorrectly reject a true null hypothesis (for example, by deciding that two population means differ when in fact they don’t), that’s a Type I error and it has a probability of alpha. You decide the value of alpha, and your decision is typically based on the cost of making a Type I error, in the context of the benefits of correctly rejecting a false null hypothesis.
If you incorrectly reject a true alternative hypothesis (for example, by deciding that two population means are identical when in fact they differ), that’s a Type II error and it has a probability of beta. The value of beta is not directly in your control. However, you can influence it, along with the statistical power of your test, as discussed in the next section.
Increasing Statistical Power
One excellent time to perform a power analysis is right after concluding a pilot study. At that point you often have the basic numbers on hand to calculate the power of a planned full study, and you’re still in a position to make changes to the experimental design if the power analysis warrants them. Although a comparison of costs and benefits does not always argue for an increase in statistical power, it can warn you against pointless use of costly resources.
For example, if you can’t get the estimated statistical power above 50%, you might decide that the study just isn’t feasible—your odds of getting a reliable treatment effect are too low. Or it might turn out that increasing the sample size by 50% will result in an increase of only 5% in statistical power, so you’re not getting enough bang for your buck.
You have available several methods of increasing statistical power. Some are purely theoretical, and have little chance of helping in real-world conditions. Others can make good sense.
One way is to reduce the size of the denominator of the test statistic. That denominator is typically a measure of the variability in the individual measures: a t-test, for example, might use either the standard error of the mean or the standard error of the difference between two means as the denominator of the t-ratio. An F-test uses the mean square residual (depending on the context, also known as mean square within or mean square error) as the denominator of the F-ratio.
When the denominator of a ratio decreases, the ratio itself increases. Other things equal, a larger t-ratio is more likely to be significant in a statistical sense than is a smaller t-ratio. One way to decrease the standard error or the mean square residual is to increase the sample size. Recall that the standard error of the mean divides the standard deviation by the square root of the sample size, and the mean square residual is the result of dividing the residual sum of squares by the residual degrees of freedom. In either case, increasing the sample size decreases the size of the t-ratio’s or the F-ratio’s denominator, which in turn increases the t-ratio or the F-ratio—improving the statistical power.
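To see that effect with the example’s numbers, suppose—purely as an illustrative assumption—that the standard deviation of hourly hits across individual sites is 20, which is consistent with a standard error of the mean of 5 when each group contains 16 sites. Quadrupling each group to 64 sites cuts the standard error in half:
=20/SQRT(16)
returns 5, while
=20/SQRT(64)
returns 2.5. With the same 10-hit difference between group means, the smaller denominator doubles the size of the t-ratio.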
Another method of decreasing the size of the denominator is directly pertinent to factorial analysis, discussed in this chapter, and the analysis of covariance, discussed in Chapter 8. Both techniques add one or more predictors to the analysis: predictors that might have a substantial effect on the outcome variable. In that case, some of the variability in the individual measures can be attributed to the added factor or covariate and in that way kept out of the ratio’s denominator.
So, adding a factor or covariate to the analysis might result in moving some of the variation out of the t-test’s or the F-test’s denominator and into the regression sum of squares (or the sum of squares between), thus increasing the size of the ratio and therefore its statistical power. Furthermore, and perhaps more importantly, adding the factor or the covariate could better illuminate the outcome of the study, particularly if two or more of the factors turn out to be involved in significant interactions.
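As a purely hypothetical illustration of that arithmetic: suppose an analysis has a residual sum of squares of 1,200 on 30 degrees of freedom, and suppose an added covariate accounts for 400 of that sum of squares at the cost of one degree of freedom. The mean square residual drops from 40 to about 27.6:
=1200/30
returns 40, and
=(1200-400)/(30-1)
returns about 27.6. The F-ratio’s denominator shrinks by roughly a third, and the F-ratio grows accordingly.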
You should also bear in mind three other ways to increase statistical power (none of them directly related to the topics discussed in this chapter or in Chapter 8). One is to increase the treatment effect—the numerator of the t-ratio or the F-ratio, rather than its denominator. If you can increase the size of the treatment effect without also increasing the individual variation, your statistical test will be more powerful.
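In terms of the current example: with a standard error of 5, the observed 10-hit difference yields a t-ratio of 2.0. If the promotion could be strengthened so that it produced a hypothetical 15-hit difference—without changing the variability among sites—the t-ratio would rise to 3.0:
=(55-45)/5
returns 2, and
=(60-45)/5
returns 3.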
Consider making directional hypotheses (“one-tailed tests”) instead of nondirectional hypotheses (“two-tailed tests”). One-tailed tests put all of alpha into one tail of the distribution. That moves the critical value toward the distribution’s mean value. The closer the critical value is to the mean, the more likely you are to obtain an experimental result that exceeds the critical value—again, increasing the statistical power.
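You can see the difference in Excel. With 30 degrees of freedom and alpha at 0.05, the one-tailed critical t-value is about 1.70, but the two-tailed critical t-value is about 2.04:
=T.INV(0.95,30)
returns approximately 1.70, while
=T.INV.2T(0.05,30)
returns approximately 2.04. The one-tailed critical value sits closer to the mean of the distribution, so a given experimental result is more likely to exceed it.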
A related technique is to relax alpha. Notice in Figure 7.24 that if you increase (or relax) alpha from 0.05 to, say, 0.10, one result takes place in the distribution on the right: the area representing statistical power increases as the critical value moves toward the mean of the curve on the left. By increasing the likelihood of making a Type I error, you reduce the likelihood of making a Type II error.
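Again in terms of the example: relaxing alpha from 0.05 to 0.10 replaces T.INV(0.95,30) with
=T.INV(0.90,30)
which returns about 1.31 instead of 1.70. The critical value drops from about 54 to roughly 51.6 (45 plus 5 times 1.31), so more of the right-hand curve’s area lies beyond it and the power of the test increases—at the price of doubling the chance of a Type I error.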